Call-outs that track moving objects

Hi everyone,

I’ve been playing around with Blender, and have made this video:

It’s a video that displays the relative size of the Sun, the orbits of the planets, and Betelgeuse.

Now I would like to add labels or call-outs to the orbits of the planets. Something simple, just a line coming from a point on the orbit, which goes up and then to the right, with the name of the planet above it, or something like that. The labels would track the moving orbits, but would stay the same size throughout.

What would be the best way to achieve this? Can Blender do this? Possibly in the compositor? Is there a trick I can use to have the labels be rendered as part of the scene (I don’t see how I could get them to stay the same size though)? Or is it possible to somehow export the locations of objects from the scene so that the compositor can paint labels there, or perhaps an external program?

Many thanks in advance if you can help me out with this!

Kind regards,
Pepijn Schmitz

Yep, Blender can of course do this and much, much more should you need to.
Just create the labels as text objects in the 3D view and assign them to different render layers. You can then use the compositor to manipulate them any way you want.

That would make the labels 3D objects, part of the scene. They would deform and scale and rotate as the camera moves and zooms out. That’s not what I’m looking for.

I mean text labels that move with the scene, but that are otherwise not part of it, that stay exactly the same size, aspect ratio, orientation, colour, brightness, etc., irrespective of the position relative to the camera of the object they’re associated with. Basically, a bitmap or static piece of text, but which moves with an object in the scene. I’ll try to find an example of the kind of thing I mean.

Is that possible with Blender? What is the best technique for doing this?

There are many ways to do this. One is to make the camera the parent of the text objects; the text would then maintain its relationship to the camera no matter how the camera moves. To keep the text unaffected by the lighting, you could use compositing, or you could put it on a separate object layer from the lights in your scene and make the lights affect only the planet/stars layer (the “This layer only” option in the lamps panel). For the callout lines, I guess you could use hooks at the ends of curves, or vertex parenting, to keep one end always at the planet.
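The reason camera-parenting keeps things fixed on screen can be checked with a bit of pinhole-camera arithmetic. This is just a plain-Python sketch (no bpy; the focal length, sensor width and all coordinates are made-up numbers):

```python
import math

def project(p_cam, focal=35.0, sensor=32.0):
    """Pinhole projection of a camera-space point (z < 0 is in front
    of the lens) to normalized screen coordinates."""
    x, y, z = p_cam
    return (focal * x / (-z * sensor), focal * y / (-z * sensor))

def cam_to_world(p_cam, cam_loc, yaw):
    """Where a camera-space point sits in the world, for a camera at
    cam_loc rotated yaw radians about the vertical axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    x, y, z = p_cam
    return (c * x + s * z + cam_loc[0],
            y + cam_loc[1],
            -s * x + c * z + cam_loc[2])

# A label parented to the camera has fixed *camera-space* coordinates,
# e.g. two units in front of the lens, slightly up and to the right:
label_cam = (0.3, 0.2, -2.0)

# Two very different camera poses move the label through the world...
pose_a = cam_to_world(label_cam, (0.0, 0.0, 0.0), 0.0)
pose_b = cam_to_world(label_cam, (50.0, 10.0, -30.0), 1.2)

# ...but the projection only ever sees label_cam, so the label's
# on-screen position and size never change:
screen = project(label_cam)
```

So the label does move with the camera through the world, yet renders at the same spot and size every frame. That is exactly why it cannot follow a planet on its own; something else has to tie it to the orbit.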

But that would make the text completely static, wouldn’t it? It wouldn’t be able to move with the objects in the scene? Or am I misunderstanding you?

What I want to achieve is that labels appear that identify the orbits of the planets. These labels should move with the orbits, but otherwise be completely unaffected in appearance by the moving camera. I’ve made two example images to show the kind of thing I mean:

Can this be done with Blender? Or Blender in combination with some other tool (without having to manually track the positions of the orbits)? I could imagine putting an Empty in the scene at the place where I want the label to point, but I have no idea where to go from there.

Here is my .blend file, in case anyone’s interested.


Betelgeuse.blend (344 KB)

You may just have to parent them to the camera and animate them along local co-ordinates manually.

EDIT: or you could add an image in nodes, layer it, and translate it according to the points?

Sure, I could manage this in some manual way, of course. But that would be a lot of work, and it just feels to me like there should be a way to do this automatically, either in Blender itself or by combining Blender with some other tool.

Add one empty that follows the orbit where you want the label.

Add another empty at the camera, parent it to the camera, and add a constraint that tracks the first empty.

Add a plane with your label. Make sure the plane’s origin is at the ‘connection point’ with the orbit, and place it facing the camera at any distance you like, but so that from the camera view it appears to be at the first empty.

Parent the plane to the second empty.

Now the plane should always face the camera, always at the same distance, but always appearing to follow your planet (or whatever your first empty does).
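The geometry behind this trick can be sketched in a few lines of plain Python (no bpy; the 5-unit distance and the planet positions are arbitrary):

```python
import math

def label_position(planet_world, cam_world, distance=5.0):
    """Place the label along the camera->planet ray, always `distance`
    units from the camera (what the tracking empty achieves)."""
    delta = tuple(p - c for p, c in zip(planet_world, cam_world))
    length = math.sqrt(sum(d * d for d in delta))
    return tuple(c + distance * d / length
                 for c, d in zip(cam_world, delta))

cam = (0.0, 0.0, 0.0)
near = label_position((10.0, 0.0, -10.0), cam)    # planet ~14 units away
far  = label_position((0.0, 0.0, -1000.0), cam)   # planet 1000 units away

# Both labels sit exactly 5 units from the camera, so they render at
# the same on-screen size, while each one still lines up with its
# planet as seen through the lens.
```

Because the plane stays at a fixed distance, its apparent size never changes, no matter how far away the planet it points at happens to be.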


LaH, Cool solution!

Thank you very much, LaH! That does indeed get me almost the whole way there. The label nicely tracks the location of the planet, while staying the same size.

There’s just one thing left, and that is the rotation. Because the label is parented to the second empty, and the second empty is rotating relative to the camera, the label rotates relative to the camera as well. So while it starts out perpendicular to the camera, as the camera moves the label rotates out of alignment.

That is, if I understood and implemented everything correctly. Would you mind taking a look? I will attach the test .blend file I made to try this out. Many thanks for any time you are able to put into helping me out! Sorry to keep on about this, but I’m pretty new to animation with Blender and I’m having an especially hard time understanding all this parenting and local/global rotation, etc. business…

Is there a way to have the location of the label be determined by the second empty, but the rotation be such that it is always perpendicular to the camera?



labeltest.blend (153 KB)

Aha! I think I figured it out: I added a “Copy Rotation” constraint to the label, which copies the rotation of the camera. That overrides the rotation it’s getting from its parent, while still allowing the parent to set the location, and it makes sure that when the camera rotates, the label rotates the same amount so that the text stays perpendicular to the camera.

Now I just have one final wrinkle to work out: in my original animation I don’t just move and rotate the camera, I also zoom out, which would again affect the scale of the labels on screen. I guess I will have to animate the scale of the labels exactly inverse to the lens angle of the camera, or does anyone know a better way?

I must say this is a lot of work. I think it would be nice if Blender had a feature where you could export the 2D on-screen coordinates of specific objects or vertices to the compositor, so you could then draw a label (or something else; I’ve seen people wanting this functionality for drawing lens flares) there in the compositor.


labeltest.blend (202 KB)

The only alternative to your solution I can think of is not to zoom, but to move the camera in and out.

But that’s not the same thing and might not do the trick for the rest of the animation.

@LaH: the problem is that if I don’t zoom out and just move the camera away, the image becomes too flat. Betelgeuse is so huge I’d have to move an enormous distance away from it, and there would be no perspective distortion of the planetary orbits any more (which I want in there because I like the look). And I have to zoom in in the beginning because the Sun is so tiny compared to Betelgeuse.

(I even have to fake it to have the Sun visible at all at the end. It’s scaled up ten times there, just so you can see it… ;-))

Anyway, I tried scaling to compensate for the zooming, but that doesn’t work unfortunately, because the relation between the focal length of the lens (which is the only thing you can keyframe) and the apparent size of objects on the screen is not linear. By fiddling with the IPO curve for a couple of hours I could probably approximate it, but that’s much more work than I’m willing to put into it, and I wouldn’t get it perfect anyway…

So the conclusion seems to be that this is possible as long as you don’t zoom in or out. But it’s still a very elaborate process, and it would be nice if Blender offered a more direct path to achieve this effect. Time to put in a feature request I guess… :slight_smile:


Yeah, that’s what I tried to say in my second paragraph. Sorry it doesn’t work out.

EDIT: doing this as a manual animation will surely be a mess, even if you only need to do it for one label and can then copy the scale to the others.

It must be possible to find a formula for the scale/zoom calculation, so a pyconstraint would be a solution, but that’s not enabled yet for 2.5, at least not in my build. Maybe drivers are a way to work around that; I have not yet found time to play with drivers.

Again, sorry, and I hope someone else can come to the rescue.

This animation is a really cool project, BTW.

As a side note, zooming is a crop function, so perspective doesn’t actually change; rather, the field of view changes. Flying the camera out, on the other hand, would result in perspective changes, as objects that are near look larger until they are moved further away.

Yeah, I know. I’m only zooming in at the beginning because the Sun is so tiny I can’t get near enough with the camera to have it fill the screen (it gets clipped by the near limit of the camera), not to get any perspective changes.

I guess I could probably scale the whole scene up so I didn’t have to zoom in to be able to have the Sun fill the screen. But I don’t really feel like doing all the manual labour to make all these labels… I think I’ll wait to see if someone’s interested in my feature request first. :wink:

Can you set up another layer with another camera looking at labels that don’t change size but just have relatively linked movements? Hmm.

@3pointEdit: I don’t think that gives us anything useful, because then we have to scale the movements and are trapped in the exact same problem.

I think we need Python. Without pyconstraints it’s drivers, and if drivers don’t work, accessing the IPO curves and creating the scale curve for the labels beforehand.

I know nothing about drivers, nothing about accessing IPO curves, and nothing about the math needed. But it’s all of a kind of general usefulness.

We should have a repo somewhere for Blender Python code snippets… It would be useful both for the BGE and Blender.

I think this problem is solvable, but I don’t have the time to do it. If I knew all this stuff beforehand it would be a quick fix.

If someone knows the math needed (focal length of the lens → scale), or how to use drivers or precalculate IPO curves, leave a note!
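For what it’s worth, under a simple pinhole model the on-screen size of an object at a fixed distance from the camera grows linearly with focal length, so the compensating scale is just a reciprocal, base_lens / lens (which is why a linearly keyed scale curve won’t match). A driver-style function in plain Python (the function name and the 35 mm default are my own):

```python
def label_scale(lens_mm, base_lens_mm=35.0, base_scale=1.0):
    """Scale factor that cancels a zoom: on-screen size at a fixed
    distance is proportional to focal length, so dividing by the
    current lens value keeps the label's apparent size constant."""
    return base_scale * base_lens_mm / lens_mm

# Zooming from 35 mm to 70 mm doubles everything on screen, so the
# label has to shrink to half scale; back at 35 mm it is 1.0 again.
```

In 2.5 a driver on the label’s scale channels could evaluate an expression along these lines (something like `35.0 / lens`, with `lens` wired to the camera’s focal length), though the exact driver setup may differ between builds.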

I don’t think you need to mess around with extremely complicated things like zoom factor compensation. It seems to me that the basic issue is determining how the 3D motions, represented by certain pixels along the orbital paths, are translated into a 2D path that your labels must follow to stay tied to the moving image. While I don’t think this can be highly automated – the 3D->2D translation, e.g. world coord’s to screen coord’s, would be very scene and image-specific – there may be some ways to speed up a manual animation process.
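That 3D→2D step can at least be sketched in plain Python, for the simplest possible case of an axis-aligned camera looking straight down −Z (a real exporter would first transform the point by the inverse of the camera’s full world matrix, and sensor-fit conventions vary; the resolution and lens numbers are made up):

```python
def world_to_screen(point, cam_loc, focal_mm=35.0, sensor_mm=32.0,
                    res_x=1920, res_y=1080):
    """Project a world-space point to pixel coordinates for an
    unrotated camera at cam_loc looking down -Z. Assumes the sensor
    width maps to the horizontal resolution."""
    x = point[0] - cam_loc[0]
    y = point[1] - cam_loc[1]
    z = point[2] - cam_loc[2]
    if z >= 0:
        return None  # at or behind the camera plane
    ndc_x = focal_mm * x / (-z * sensor_mm)  # 0.0 = screen centre
    ndc_y = focal_mm * y / (-z * sensor_mm)
    return (res_x * (0.5 + ndc_x),
            res_y * 0.5 + ndc_y * res_x)     # horizontal sensor fit
```

A point straight ahead of the camera lands dead centre of the frame; anything at or behind the camera plane has no screen position at all. The scene-specific part is feeding in the animated camera transform for every frame.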

The first step I’d take would be to make a reference pass rendering of the full animation, in which only the orbits are rendered. On the orbits are placed reference markers – could be a small object, or if you can do OpenGL renderings through the camera, something like an Empty. These markers show the labels’ “attachment” points. You should be able to get them accurate down to a single pixel along the orbit path.

In your main animation file, you can create a new Scene (let’s call it “Ortho Labels”) that can have an entirely different camera to render from. Set this camera for Orthographic rather than Perspective. Load the reference pass frames up as a background image/movie and set it for auto frame advance.

Now you can start animating your labels, be they text objects or textures on small billboard planes, by “tracing” (i.e., rotoscoping) the reference points as they move across the BG image. Since you’re working in ortho view, the 2D BG image is matched by the 2D camera view – no perspective issues, no change in label image size. How many keyframes you need will be determined by how smoothly you want the labels to track the orbits.

For maximum accuracy and smoothness, you might try using Emptys to actually track the reference animation, since they are easy to align, then have your labels track the empties via parenting or a constraint, whichever gives you more flexibility. You should be able to get a 1/2-way decent preview of their motion relative to the ref sequence by using ALT+A. If you start by keyframing the first & last locations of the ref points, then move to the middle, then do the middle of those, and so on, splitting the keyed motion up in equal segments as you go, you can perhaps be more efficient than keying every frame right from the beginning.
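The first-and-last-then-middles keying order described above can even be generated mechanically. A plain-Python sketch (purely illustrative; the function name is mine):

```python
def subdivision_order(first, last):
    """Order in which to key frames: endpoints first, then the middle
    of each remaining span, splitting the motion into ever-finer
    equal segments."""
    order = [first, last]
    segments = [(first, last)]
    while segments:
        finer = []
        for a, b in segments:
            if b - a > 1:
                mid = (a + b) // 2
                order.append(mid)
                finer.extend([(a, mid), (mid, b)])
        segments = finer
    return order

# For a span of frames 0..8 you would key 0 and 8, then 4, then 2
# and 6, then the rest, stopping as soon as the previewed motion
# looks smooth enough.
```

The point of working in this order is that you can stop refining early wherever the interpolated motion already matches the reference pass.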

This probably sounds like a lot of grunt work and it is. That’s the nature of roto work. But it is probably faster than writing, troubleshooting and debugging a custom BPython solution. Great art = 10% inspiration and 90% perspiration, or so I’ve heard. In my experience, that’s an optimistic ratio.

In your main scene you can set up the Compositor to use the Render Layers (if any) of your 3D animation, then another Render Layer from the Ortho Labels scene supered over that, using whichever blending method works best for you. Alpha Over, Screen, Lighten, Add, are all options depending on the nature of the labels and what you want them to look like.

Since you’re working in a completely different Scene, I think you can even set the render size (for Ortho Labels) to 2X your main movie size. The BG image will scale accordingly as long as it’s the same image aspect ratio. Then in the Compositor work, use a Scale node to bring the labels back to the scale needed to match the main animation (50%). This could maybe help smooth out the motion some if necessary. But it might also adversely affect the labels’ image quality. Test first!