Hi guys

Well… I have a question about the camera in Blender (I’m talking about the 2.5x series).

Right now, with the camera in Blender we position the camera, adjust the lens and take a picture… but the thing is that sometimes I need to keep the camera in a certain position and take a picture that covers a bigger area than the camera allows.

Right now, when I have to do so, I take 4 renders of the image with different shift values and then join the pictures in Photoshop (sorry, GIMP fans), but it is time consuming…

So, is there a way to make the area covered by the camera bigger (I’m not talking about resolution; you can add more resolution, but the image won’t cover a bigger area), or a script that allows me to do so?

Thanks in advance for your replies.

To increase the field of view of a camera, you have to change the lens/scale value.
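For reference, the lens value and the angle of view are tied together by the standard pinhole-camera formula. A quick sketch of that relationship (plain Python, not a bpy script; the 32 mm sensor width is just an illustrative default):

```python
import math

def angle_of_view(focal_length_mm, sensor_width_mm=32.0):
    """Horizontal angle of view (degrees) of a pinhole camera."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# A shorter lens sees a wider angle:
wide   = angle_of_view(16.0)  # 90 degrees with a 32 mm sensor
normal = angle_of_view(35.0)  # narrower than the 16 mm lens
```

This is why lowering the lens value widens what the camera covers: the angle grows as the focal length shrinks, and nothing about the camera's position changes.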

Changing the lens value distorts the image. This is normal, but undesirable (for the work I usually need to do). You can’t scale the camera; it will not affect the view/render area.

Right now we can do this:

But i need this:

That’s why right now I need 4 renders with shift to do what I need. So I’m looking for alternatives…
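The 4-render shift trick works because camera shift is measured in frame widths: a shift of ±0.5 offsets the view by exactly half a frame, so four renders tile a frame twice as wide and twice as tall with no gaps or overlap. A sketch of the tile bookkeeping (pure Python, assuming a square frame; not an actual bpy script):

```python
# Each render keeps the same lens; only the shift values change.
# A shift of 0.5 offsets the view by half a frame, so the four
# tiles below abut exactly and cover a 2x-wide, 2x-tall frame.
TILE_SHIFTS = [(-0.5,  0.5), (0.5,  0.5),   # top-left, top-right
               (-0.5, -0.5), (0.5, -0.5)]  # bottom-left, bottom-right

def tile_bounds(shift_x, shift_y):
    """Normalized frame coordinates covered by one shifted render."""
    return (shift_x - 0.5, shift_x + 0.5, shift_y - 0.5, shift_y + 0.5)

# Together the four tiles span x in [-1, 1] and y in [-1, 1]:
bounds = [tile_bounds(sx, sy) for sx, sy in TILE_SHIFTS]
```

Because shift is a pure offset of the film plane rather than a rotation, the four tiles share one consistent perspective and join seamlessly, which is exactly why stitching them works.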

Obtaining larger area coverage from a given (perspective) camera position means having a wider angle of view. That is exactly what changing the lens value does in Blender (in fact, you can opt to have the values shown in degrees instead of as a notional focal length). Any apparent ‘distortion’ is simply an accurate representation of the scene as viewed from that spot. The only other consistent way to capture a wider area is to move your camera back.

For example, here’s a shot of the eccentric mechanism on a model beam engine:

To capture the whole thing, we can reduce the focal length:

(oops, you weren’t supposed to see the top of the walls :slight_smile: ) or we can move back

The stronger perspective effect in the shot with reduced focal length may make it seem ‘distorted’, but in fact the central area of it is identical to the original close-up shot:
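That claim can be checked with pinhole arithmetic: a point at lateral offset x and depth z projects to u = f·x/z, so halving f scales every projected coordinate by the same factor, and enlarging the wide shot by that factor recovers the original close-up exactly in the shared central area. A minimal sketch (the sample points are arbitrary):

```python
def project(x, z, focal):
    """Pinhole projection of a point at lateral offset x, depth z."""
    return focal * x / z

points = [(0.1, 2.0), (-0.3, 2.0), (0.2, 5.0)]  # illustrative (x, z) pairs

f_long, f_short = 50.0, 25.0
scale = f_long / f_short  # 2x enlargement of the wide shot

for x, z in points:
    close_up = project(x, z, f_long)
    wide     = project(x, z, f_short)
    # The enlarged wide shot matches the close-up in the central area:
    assert abs(wide * scale - close_up) < 1e-12
```

The ‘extra’ perspective in the wide shot all lives in the newly visible margins; the overlapping region is geometrically identical.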


By the way, ‘Scale’ in this context is the equivalent to focal length if you are using an orthographic camera. As you say, scaling the camera Object doesn’t affect the render; it only changes the representation of the camera in the Blender view.
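For completeness, with an orthographic camera the ‘Scale’ value directly sets how many scene units span the frame, and depth drops out of the projection entirely. A toy illustration (pure math, not bpy):

```python
def ortho_project(x, z, ortho_scale):
    """Orthographic projection: depth z has no effect on size;
    ortho_scale is the width of the view in scene units."""
    return x / ortho_scale

# Depth does not matter in an orthographic view:
near = ortho_project(1.0, 2.0, 4.0)
far  = ortho_project(1.0, 50.0, 4.0)

# Doubling the scale halves every projected coordinate,
# i.e. the camera 'sees' twice as much of the scene:
zoomed_out = ortho_project(1.0, 2.0, 8.0)
```

So for orthographic cameras, ‘widening the coverage’ is literally just increasing Scale, with no perspective distortion at all.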

Best wishes,

Thanks for your replies.

But yeah, a wider angle of view can solve the problem… when the viewpoint is at the center of the image. Problems arise (not that visible) when the point of view is not at the center, which, in the end, is what is needed (not to mention the fact that the render will lose some detail if rendered at the needed resolution, so double resolution was needed in the end).

In the end it was faster to process 4 independent shifted renders at 720x520 than to do one render at 5760x4160 and crop the result…

Sometimes you win, sometimes not. :slight_smile:

Thanks again.

There are ways to define panoramic renders. Also, the angle of view of the camera can be anything you want. You can use an orthographic projection on your camera as well.

Certainly, the “stitch them together” approach that you used is valid. I find it helpful to enter numbers explicitly (Ctrl+N) for complete precision, so that everything matches lock-step as only a 3D renderer can do.