camera rig - simulating nodal offset

I’m building an accurate camera simulator for a project.

I need advice on the best way to simulate nodal offsets.

very simply:

For each camera focal length value, the camera rig needs to move forward and back along a single axis.

The graph shows the relationship between the focal length and the offset values for a camera/lens setup.


What would be the best approach to take to building this type of behavior?

what is the best way to define the nodal offset curve?

is this a rig? constraint? animation? something else?

Any ideas would be appreciated, thanks!

I seriously doubt Blender's camera has a nodal point other than the camera's position, and if it does, it's probably not at the same place as your graph. But that's probably not important for you?

Just out of curiosity: what does your graph show? Every lens has a different nodal point, so is this for a specific lens?

Do you have the formula to get from focal length to offset? If so, I would try adding a driver to the camera with a scripted expression where you can put in your formula.
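The scripted-expression idea can be sketched outside Blender like this. The coefficients `A` and `B` are made-up placeholders, not real lens data; you'd fit them to your measured curve:

```python
# Hypothetical linear mapping from focal length (mm) to nodal offset
# (Blender units). A and B are invented example values, not real lens data.
A = 0.002   # offset change per mm of focal length (placeholder)
B = -0.05   # offset at 0 mm focal length (placeholder)

def nodal_offset(focal_length):
    """Return the camera's forward offset for a given focal length."""
    return A * focal_length + B
```

In Blender, the same formula could go straight into the driver's scripted expression (e.g. `0.002 * lens + -0.05`), with a driver variable (here assumed to be named `lens`) pointing at the camera's focal length.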

If you don’t have the formula, well, I’m not sure about this…

good luck :wink:

Heh, some Alfred Hitchcock ‘Psycho’ zoom action, huh?

I think I’ve seen some formulas on the interwebs you could use to convert from focal length to Blender units without too much trouble or trial and error.

I don’t think the nodal point has much to do with Alfred’s “Vertigo” effect, which you’re probably referring to :wink:

bashi’s suggestion of a driver is a good idea; however, I would link the lens to the Z-position of the camera, not the other way around. As far as a formula goes, you could just create a fake-user f-curve that looks like the graph you posted up there. Then make the scripted expression sample the f-curve, based on the camera’s Z-position, to derive a new value for the lens. I’m not sure which value you need to drive; are you thinking focal length? Then when you move the camera, the lens will change automatically.
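The curve-sampling idea can be sketched in plain Python. The keyframe points below are invented stand-ins for the posted graph; inside Blender you would call `fcurve.evaluate(x)` on the fake-user f-curve instead of rolling your own interpolation:

```python
# Piecewise-linear curve evaluation, mimicking what sampling an f-curve
# with linear keys does. The (focal_length, offset) pairs are placeholders.
KEYS = [(18.0, 0.00), (35.0, 0.03), (50.0, 0.05), (85.0, 0.08)]

def sample_curve(x, keys=KEYS):
    """Evaluate the curve at x, clamping outside the keyed range
    (constant extrapolation, like an f-curve's default)."""
    if x <= keys[0][0]:
        return keys[0][1]
    if x >= keys[-1][0]:
        return keys[-1][1]
    for (x0, y0), (x1, y1) in zip(keys, keys[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
```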

Hi everyone, thanks for the responses.

Atom - your description is exactly what I would like to do, but in my case the focal length should drive the offset amount. The offset curve should be something that I can adjust and tweak by panning the camera, based on this method:

Looking into drivers - I have a simple file where the focal length drives the translation of a cube. I can see that adding a modifier to the driver gets me close to what I need… if there were a way to add an f-curve as a modifier to the driver, I would have it exactly…
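For what it's worth, a driver's Generator modifier is just a polynomial evaluated at the driver value, which is why it only gets you close to an arbitrary curve. A sketch of what it computes (the coefficients are invented examples, not from your setup):

```python
# A Generator modifier evaluates y = c0 + c1*x + c2*x**2 + ... at the
# driver value x. The coefficients below are made-up examples; with only
# two of them you get a straight line, not an arbitrary offset curve.
def generator(x, coeffs=(-0.05, 0.002)):
    """Evaluate a polynomial (Generator-style) modifier at driver value x."""
    return sum(c * x ** i for i, c in enumerate(coeffs))
```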

any suggestions?

Here is something like that. You can shape the f-curve of the focal length in the graph editor.


26_camera_y_driven_by_focal_length.blend (411 KB)

Hmm, I wonder how the tracker accounts for nodal offset? I thought it would only be an issue for tripod-type shots; however, there is a difference between DSLR-type cameras and ENG or video-type cameras. In the latter, the nodal point is quite a way forward of the handheld position.

Thanks Atom!

If I understand your example file, there still needs to be another curve that represents how the nodal offset (Y Location) behaves for each focal length value. I think it’s all about the driver modifiers…

:frowning: BUT digging through the docs I find this:


In this example, we are going to control the size of the well-known monkey head (Suzanne) with the Y-location of the Empty driver. So, we Add Driver to the three ScaleX, ScaleY and ScaleZ channel of the Suzanne object (as usual, if there is no curve yet, it is automatically created). Note that for now, there is no curve, so Blender applies a one-to-one mapping, as if there was virtual unitary gradient linear curves (materialized as yellow dashed lines in the pictures below). This also illustrates that you can use the same driver property (here, the Y location) for several different drivers…

So I guess drivers are indeed the right approach to solving this problem, BUT unless these docs are out of date, I believe some other approach is needed.

Any ideas? … Thanks Again!