Stereoscopy vs. Depth of Field

In my renders, I’m often obsessed with using as many features as possible for maximum effect and quality. One thing I try not to omit is Depth of Field, which blurs the areas that are out of focus relative to where the character is looking. I also want to make stereoscopic versions of my renders, so people with red-cyan glasses or the Oculus Rift (when it gets here) can enjoy a version with real perception of depth. But then I realized there is a problem: how do you make both work together?!

The problem in a nutshell is this: when you render a stereoscopic image, you give the viewer the ability to focus their eyes on any spot. Meanwhile, DoF blurs the image based on where the 3D camera is focused, which implies the depth you’re supposedly looking at. If you’re using just one or the other, everything’s fine… but when using both, a problem emerges: DoF already dictates where you’re focusing, while stereoscopy lets you focus anywhere, and of course the blur can’t magically follow your eyes, since it’s baked into the image.

Initially, my solution was to enable Depth of Field in the non-stereoscopic version only. But out of curiosity, I did a few renders with both DoF and anaglyph stereoscopy… and surprisingly it made sense to my eyes, although my brain would still wonder “why does this stay blurred when I focus on it?”. I only tried this with static renders, however; in an animation the focal point changes as the camera pulls focus between closer and farther objects, so the viewer would perceive their eyes shifting focus outside of their control.

So what is the best choice? Disable DoF in the stereoscopic render, or maybe there’s a way to make both work acceptably together?

Whether you use parallel or toe-in cameras, you still need to set convergence. Once you choose a distance to converge at, shallow DoF shouldn’t be much of a problem, though it might show up as a fringe at the boundary of the foreground object.
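For what it’s worth, with a parallel rig the convergence is set afterwards by shifting the two images horizontally, and the resulting on-screen parallax follows directly from the rig geometry. A minimal sketch of that relationship, where the function name and all default values (interaxial, focal length, sensor width) are illustrative assumptions, not numbers from this thread:

```python
def screen_disparity_px(depth_m, convergence_m, interaxial_m=0.065,
                        focal_mm=35.0, sensor_mm=36.0, res_x=1920):
    """Horizontal parallax, in pixels, of a point at depth `depth_m`
    for a parallel rig converged at `convergence_m` by horizontal
    image translation.  Positive = behind the screen (uncrossed),
    negative = in front of it (crossed)."""
    # Disparity on the sensor (mm): f * b * (1/Zc - 1/Z)
    sensor_disp_mm = focal_mm * interaxial_m * (1.0 / convergence_m - 1.0 / depth_m)
    return sensor_disp_mm / sensor_mm * res_x

print(screen_disparity_px(3.0, 3.0))   # object at the convergence distance: 0 px
print(screen_disparity_px(1.0, 3.0))   # near object: crossed (negative)
print(screen_disparity_px(1e9, 3.0))   # distant background: uncrossed (positive)
```

Anything at the convergence distance lands on the screen plane, which is exactly why converging on the subject can substitute for shallow DoF as an attention cue.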

On the original question: I noticed the same issue in the movie Avatar, and was annoyed that I couldn’t look around a shot because it was blurred out, forcing me to stay at the convergence point.

Don’t. Personally, I feel that shallow depth of field doesn’t belong in stereoscopic films. After all, it’s a device to draw the user’s attention to a particular place, but you can use convergence to do this too! Simply converge the camera on the subject, and there’s no need for shallow DOF anymore.

You should also avoid it because it creates a situation where the viewer may want to focus their eyes on the background, but no matter how hard they try, they can’t un-blur it, and this can be a source of viewer fatigue.

Also, to the guy above me: I don’t know what you think a parallel vs toe-in setup means, but it all has to do with convergence – they’re one and the same. How “toed in” your cameras are determines the convergence.

I’ve just finished up work on a 3D feature film, and I gotta say… avoid 3D unless you really, really need it. It’s an amazing amount of extra work and headache for so little extra reward. Not only do you have to keep track of convergence and vertical alignment (this was for CG effects in a live-action film, so I dealt with that aspect of it, too), but you also have to be very conscious of what happens to that convergence when it’s blown up to a 50-60 foot theater screen. What looks good on your computer monitor might actually make people’s eyes toe out on the big screen! The rule of thumb we figured out was no more than 28 pixels of disparity in the background (foreground, do whatever you want) when working in 1080p. But if you’re only interested in seeing it on a small computer screen – go nuts! Have fun!
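The screen-size math behind that is easy to sketch: an uncrossed (background) disparity forces the eyes to diverge once its physical width on screen exceeds the viewer’s interocular distance. A strict zero-divergence limit comes out much tighter than 28 px on a theater screen, so a rule like the one above presumably relies on audiences tolerating a small amount of divergence. The 6.5 cm eye separation and the screen widths below are assumptions for illustration:

```python
def max_background_disparity_px(screen_width_m, res_x=1920,
                                eye_separation_m=0.065):
    """Largest uncrossed (background) disparity, in pixels, before the
    on-screen separation exceeds the viewer's interocular distance and
    the eyes are forced to toe out."""
    return eye_separation_m / screen_width_m * res_x

print(max_background_disparity_px(0.6))   # 0.6 m desktop monitor: ~208 px
print(max_background_disparity_px(15.0))  # ~50 ft theater screen: ~8.3 px
```

The same 28 px that is harmless on a desktop monitor is roughly 22 cm of physical separation on a 15 m screen, which is the toe-out failure mode described above.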

Human eyes do defocus as well as converge. Shooting parallel avoids keystone distortion and lets you set convergence after the render, but some people do bake convergence in by shooting toe-in.

Generally, DoF and stereo images don’t mix well. Several things make stereoscopic films work better: deep depth of field, slower cuts (i.e. longer shots), not letting things that stick out in front of the convergence point get cut off by the frame boundaries, using more normal lenses, and many other principles.

You really should decide whether your film will be stereoscopic or not. The best stereo films, like Avatar and Gravity, were designed for stereo from the beginning. Some films add it as a post process, an afterthought, but IMO it’s not worth it then. It becomes more of a gimmick.

Think of stereoscopy just as you would something like the aspect ratio of your project. What is the right fit in this case? And just like an aspect ratio, it’s not something you should decide after the fact.

In the next generations of HMDs you will be able to focus on different planes in the scene, like in the real world. Nvidia has shown first prototypes of this, called “Near-Eye Light Field Displays”, built with microlenses.

Papers:
https://research.nvidia.com/publication/near-eye-light-field-displays

Different approach with two LCDs:

I think DoF in stereo images is not a bad thing. Converging your eyes on a point that is not on the focus plane is also a source of discomfort. If everything is sharp, the viewer has a stronger urge to look at areas deep in positive or negative space, but they still have to focus on the screen plane because that’s where the actual image is. Having those areas blurred by DoF gently forces you to look where it is more comfortable to look.

Thanks for all the feedback! I read about light fields in the past… a special screen that understands stereoscopy and sends light to your eyes in such a way that you see blur based on where you’re focusing (like in real life). In any case, it’s further off than basic stereoscopic displays (like the Oculus Rift, which we’re still waiting for), and likely not something Blender supports yet, if it ever needs to.

The reason I want DOF isn’t really to tell the viewer where to focus: It’s only because it looks good! Getting rid of it would be like getting rid of bloom, refractions, motion blur, or any other good looking graphical effect. Unlike other effects unfortunately, DOF can’t be recklessly slapped in… it’s supposed to make sense based on perspective, and if something else (like stereoscopy) hints that your focal point can differ, you do at least in theory get a conflict.

IMHO, “stereoscopy” is mutually exclusive with “depth of field” for the following reason:

When I look at a “3-dimensional” image, through the proper equipment, you want me to believe that I am looking at an actual 3D world. Therefore, I expect to see things as I do in the real world: “no matter how close to me it is, or how far away, it is always in focus.” Why? Because, in the real world, the lens in my eyeball instantly adjusts to put the object into focus as my brain scans the entire visual field to construct “what I see” within my brain’s visual cortex.

“Depth of field” is an artifact of the reality that a photograph is being captured by a single, fixed-focus lens.
All of which is “fine, acceptable, accepted, and expected” for a flat image on a screen or on a page. But this is not “what I see,” because it is not “how I see it.”

When I put my finger in front of me the background does go out of focus, I’m just not aware of it due to my attention being drawn to the subject.

@sundialsvc4: Yep… that’s exactly the problem. I’ll probably disable DoF in the stereoscopic and 360° versions, but leave it on in the normal version only… this totally feels like the best approach overall.

Ridley Scott has a habit of using DoF even in 3D, but only in specific types of shots, like ‘over the shoulder’ dialog ones. In those cases your attention is solidly on the speaker, so the out-of-focus head floating in front of the screen doesn’t really distract (unless you look down to where that head meets the edge of the screen – that can be odd looking). In wide shots he usually uses deep focus so you can pick where you want to look. Best of both worlds, in my opinion.