Cameras as eyes

When using a camera to take a shot, it acts like, well, a camera.
Meaning, you can’t see what is immediately around the camera. The shot starts from what would be a few feet away. So, if you try to take a close-quarters shot from a person’s view, you have to actually move the camera back.

This is fine, unless moving back puts you behind or, even worse, inside an object. You can then move those objects onto another layer, and that works, unless they would normally be casting a shadow into the scene, or you removed a wall that was blocking light, which will now shine into your scene and ruin the shot.

Adjusting the lens on the camera is neat, but it distorts everything. So, if I lower the lens to something near 0, I get the stuff around the camera, but the whole shot is now stretched and distorted, like looking down a tunnel.

Is there any way to get a shot that would be from the way an eye would see things… i.e., from the point where you stand? Or am I doomed to always shifting objects around to set up a shot?

Thanks
Abavagada

Have you tried adjusting the clipping? ClipSta = Clipping Start; anything closer than that isn’t rendered.
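
If you’d rather script it, here’s a minimal sketch (property names are from the current bpy Python API, not the old buttons window):

```python
import bpy

# Assumes the scene has an active camera.
cam = bpy.context.scene.camera.data

# Pull the near clipping plane in so geometry right in front
# of the camera still gets rendered (the default is 0.1 units).
cam.clip_start = 0.001
```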

Hope this helps… may not though, I’m still a noob.

Imp

I’m not sure I clearly understand what you mean… :-?

Here’s what I think you are asking: is there a way to move the camera nearer to or away from an object without having the camera render it?

Well, there are two simple solutions:

1- adjust the Clip Sta and Clip End of the camera

2- move objects to different layers and turn the layers on and off (see the sketch below)
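
For what it’s worth, solution 2 can be scripted too. A rough sketch with today’s bpy Python API - note that modern Blender uses render-visibility flags and collections rather than the old layer buttons, and the object name “Wall” is just a placeholder:

```python
import bpy

# Hide a too-close object from the render only; it stays visible
# in the viewport, so the scene is easy to restore afterwards.
wall = bpy.data.objects["Wall"]  # hypothetical object name
wall.hide_render = True
```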

I hope that was helpful

Martin

OK… let me be more blunt.

I know how to move things onto different layers and turn them off to take shots. But that has to be done because you have to move the camera back to get a shot.

Take a real camera in your hand. Look at your room through it, noticing what is the closest thing in the image.

Now, remove the camera and look at the same view, and notice what is the closest thing you see.

I want a way to render from this eye viewpoint, not a camera point of view, which starts farther ahead of the camera. That way, I don’t need to worry about moving the camera back and moving objects to layers.

Now, I know that logically, this is possible, since you can take a picture from anyplace. But is Blender capable of doing this currently?

Clearer?

Not really. Are you saying things close to the camera don’t render? Try lowering the ClipSta. Beyond that… I’m not sure what you’re talking about.

Imp

A camera image starts some distance from the eye that is looking through it.

Anyone who has ever used a camera in real life has experienced this.
That is why they need to keep backing up to get the shot they saw with their eyes.

I just want to be able to get a shot as if it were from the eye, not from a camera.

And without moving a camera back, or getting distortion.

Ok… 3 ways I know to do this.

  1. Change the lens, but I don’t think you like that, because of the distortion.
  2. Move the camera back; not sure why, but you don’t want to, OK.
  3. Make the camera bigger… try that? Just select it and resize. You’ll probably need to move it back a little after doing that, but give it a try.

Imp

I think changing the lens value is exactly what he is looking for actually.

It’ll turn the camera into an incredible fisheye, but that actually resembles the human eye more than the default setting.

Just lower the lens value and see how you like it :wink:
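
In scripting terms, that’s one line - a sketch against the current bpy API, with 15mm as just an example value:

```python
import bpy

cam = bpy.context.scene.camera.data

# Lower the focal length for a wider, more "eye-like" field of
# view; very small values get strongly fisheye-ish.
cam.lens = 15.0  # millimetres; the default is 50
```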

btw: there is no such thing as a shot that is like the way your eyes view things… basically because you have 2 eyes. And as long as your two eyes are looking at a single shot… it’ll never appear 100% real.

I don’t want to move it back for a very basic reason… your real vision doesn’t need that.

Imagine this scene.

You are in a building. You open a door, put your head into the room, and look.

Now, do this with a camera in front of your eye. The first thing you see is now at least a foot in front of your eye, which is not how it should appear if you were casually looking in. To get the proper view, you would have to move your camera backwards, which means moving your head backwards.

CRACK. Good job. You have just smacked your head into a wall.

Now, in Blender, we can compensate by moving that nasty wall onto another layer, then moving the camera back. But now, whatever light was in the room behind that wall is flooding into the room, casting shadows into your image that shouldn’t be there, or perhaps letting in more light than you wanted.

So, now you have to go over and “turn off” any lamps that might be affecting the image without the wall.

Seems like a lot of work for a simple view from where the eye would be, rather than through a camera.

uhm… don’t forget you have 1 camera but 2 eyes. That already knocks down your field of vision. Secondly, what you see with your eye is distorted - the only things that are sharp and not distorted are what you are focusing on. The rest is distorted, but our brain has learnt to compensate for such distortion, so we don’t pay attention to it. Your camera and eye therefore work the same - what differs is that once a picture is taken (and developed), the eye can focus on the distortion at the periphery of the image because it has been ‘frozen’. In real life your eyes are constantly darting from one object to another to make sense of things.

The other difference is the focal length - fixed in your eyes but not in your camera (depends on the type of lens/camera!). I’m not sure if I remember correctly, but I think the human eye’s focal length is about 50mm.
How close an object appears depends on the focal length. If you set your camera to the same focal length as your eye, then things will appear the same distance away.
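
To make that concrete, the relationship between focal length and field of view is plain trigonometry. A small sketch (the 36mm film width matches Blender’s default camera; this assumes a simple pinhole model):

```python
import math

def horizontal_fov(focal_mm, film_mm=36.0):
    """Horizontal field of view of a pinhole camera with flat film."""
    return math.degrees(2.0 * math.atan(film_mm / (2.0 * focal_mm)))

print(horizontal_fov(50.0))  # ~39.6 degrees at the "eye-like" 50mm
print(horizontal_fov(15.0))  # ~100.4 degrees at a wide 15mm
```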

I think the problem you are referring to is the width of the image, i.e. your 1 camera versus your 2 eyes, which give you a wider field of vision - which is why you step back when using a camera: you are only using one ‘eye’.

Solution:

  1. Render a panorama! (see the sketch below)
  2. Lower the lens setting and widen the image. Distortion will occur at the edges, but this is happening in your eyes too.
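
For option 1, the camera can be switched to panoramic from Python as well - a sketch against the current bpy API; which panorama sub-types are available depends on your Blender version and render engine:

```python
import bpy

cam = bpy.context.scene.camera.data

# Switch from the default perspective projection to panoramic.
cam.type = 'PANO'
```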

ilac, where did you get a varying focal length lens for your camera?

I believe it’s your eye that has the ability to vary its focal length, whereas a camera moves the lens in order to accomplish the same thing.

Going along with this discussion, in Blender is it the front part of the camera or the point at the back of the camera that we get the rendered image from?

The pointy part of the camera is its back.

Martin

I think I know what he’s asking…

What he wants is a wide angle lens that doesn’t distort at the edges. This is a problem because

  1. A camera projects its image onto a plane. The edges of the ‘film’ are farther from the lens.
  2. The eye projects its image onto a sphere. All points on the ‘film’ are equidistant from the lens.

That’s why you don’t see fisheye distortion, even though your eye has a much wider angle of view than a camera.
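
In other words, flat film records an object at angle theta at image radius f*tan(theta), while a spherical ‘film’ like the retina records it at roughly f*theta. A quick numerical comparison (plain Python; the equidistant model is a simplified stand-in for the eye):

```python
import math

f = 50.0  # focal length in mm

for deg in (10, 30, 50, 70):
    theta = math.radians(deg)
    flat = f * math.tan(theta)  # rectilinear (flat film): blows up fast
    curved = f * theta          # equidistant (spherical film): stays linear
    print(f"{deg:>3} deg   flat={flat:7.1f}mm   curved={curved:5.1f}mm")
```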

I know of no way in Blender to do this directly.

You might be able to do this in two passes by rendering with a wide-angle camera, then rendering again with the first image mapped to the inside surface of a sphere (or would it be the outside???).
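
If anyone wants to try that, here is rough scaffolding for the two passes - a sketch against the current bpy API; the file path is a placeholder, and working out the sphere mapping that actually cancels the distortion is the open problem:

```python
import bpy

scene = bpy.context.scene

# Pass 1: render with a wide-angle camera and save the image.
scene.camera.data.lens = 10.0
scene.render.filepath = "/tmp/wide_pass.png"  # placeholder path
bpy.ops.render.render(write_still=True)

# Pass 2 setup: map that image onto a sphere around the camera,
# then render again with a narrower lens.
bpy.ops.mesh.primitive_uv_sphere_add(radius=5.0)
sphere = bpy.context.active_object

mat = bpy.data.materials.new("WidePass")
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("/tmp/wide_pass.png")
bsdf = mat.node_tree.nodes["Principled BSDF"]
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
sphere.data.materials.append(mat)
```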

Wide viewing angle, distortion-free images, flat film. Pick any two. Unfortunately, every 3D package I’ve ever seen has already picked ‘flat film’ for you.

Bob

Both. The image is formed on the front part (viewing plane), but rendered with an ‘eye vector’ that passes from the back point of the camera through the appropriate pixel on the viewing plane.
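
As a sketch of that idea, an eye vector for a simple pinhole model can be computed like this (plain Python; the pixel-to-plane mapping here is a simplified stand-in, not Blender’s actual internals):

```python
import math

def eye_vector(px, py, width, height, focal, film=36.0):
    """Direction from the camera's back point (the 'eye') through
    a pixel on the viewing plane, for a pinhole camera model."""
    # Map pixel coordinates onto the film, in millimetres.
    x = (px / width - 0.5) * film
    y = (py / height - 0.5) * film * (height / width)
    z = -focal  # the viewing plane sits 'focal' mm in front of the eye
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

print(eye_vector(320, 240, 640, 480, 50.0))  # straight ahead: (0, 0, -1)
print(eye_vector(0, 0, 640, 480, 50.0))      # toward a corner of the frame
```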

Bob