Respect The Photographers

This rant is about how the term “depth of field” seems to have been adopted by CG artists with exactly the opposite of its original meaning in photography.

In photography, the term refers to the depth of the zone around the focal plane within which things look in focus. Because of the physics of the process, increasing the depth of field requires a narrower aperture (or a shorter lens focal length), which reduces the amount of light hitting the sensor (or film, if you’re of the Old Skool), which in turn requires either a longer exposure or greater sensitivity, or both, to make up for it. Longer exposures are more prone to motion blur, while more sensitive films tend to produce grainier images. And cranking up the signal gain on a digital sensor introduces more noise, which produces its own kind of “grain” effect.
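The aperture tradeoff above can be sketched with the standard thin-lens depth-of-field formulas. This is a minimal illustration, not from the original post; the function name and the 0.03 mm circle-of-confusion value are my own illustrative choices:

```python
def dof_limits(focal_mm, f_stop, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness (thin-lens approximation).

    coc_mm is the largest blur circle still judged "sharp"; 0.03 mm is a
    common rule-of-thumb value for full-frame sensors.
    """
    # Hyperfocal distance: focusing here makes everything from h/2 to
    # infinity acceptably sharp.
    h = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far

# Stopping down from f/2 to f/8 widens the in-focus zone
# (50 mm lens, subject at 3 m; all distances in millimetres):
near2, far2 = dof_limits(50, 2, 3000)
near8, far8 = dof_limits(50, 8, 3000)
assert (far8 - near8) > (far2 - near2)
```

This is why the smaller aperture costs you light: you trade exposure for a deeper in-focus zone.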

In CG, on the other hand, it was easy from the early days to render everything with infinite depth of field. But as the technology advanced, and we had computing power to burn, it became more fashionable to emulate the imperfections of the photographic process, including things like bokeh, vignetting, film grain, motion blur, and, yes, less-than-infinite depth of field.

So really, when CG artists (mis)use the term “depth of field”, they are actually referring to shallowness of field. They are subtracting from the depth of field, not adding to it.

The reason I think it is important to remain consistent with the original photographic meaning is that there is so much more CG artists can learn from photography, in terms of composition, exposure, colour etc. In fact, I think that doing photography can help improve your CG skills.

So let’s have some respect for the other technology-intensive imaging art that came before CG, OK?


This would just lead to confusion; I see no point in doing that. Not every CG artist knows the technical stuff behind the software.
And since the goal is to fake DoF, it should be named DoF.

The misuse of the term is what is causing the confusion.

No, the goal is the opposite of DoF, as I pointed out. See what I mean by “confusion”?


Depth of field is simply the depth of the region that is in focus, i.e. the distance between the closest point that is in focus and the point furthest away, still in focus. The definition is the same for CG and cameras.

By your definition you should only be allowed to talk about depth of field when making the aperture smaller, i.e. increasing the depth of field. When increasing the aperture, the depth of field gets smaller, just like when changing it from infinity to something smaller in CG.


I agree that it’s important to know this - it gives greater understanding of physical processes, and what makes a picture. Thanks !

In CG terms, “depth of field” is mimicked by intentionally blurring things outside a certain in-focus range, increasing the blur with distance. It’s used because people expect it to be there. The same thing happens with lens flare – the lighting effect that occurs when sunlight bounces through the glass elements of a lens – a somewhat hackneyed effect that’s usually accompanied by desert scenes and sound effects of cicadas. :slight_smile: Real photographers put hoods on their lenses specifically to prevent this, but the “visual trope” continues to be used nonetheless.
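That “blur increasing with distance from the in-focus range” can be sketched as the circle-of-confusion radius a post-process DoF pass might compute per pixel. A hedged sketch using the thin-lens model; the function name and the numbers are illustrative, not any renderer’s actual implementation:

```python
def blur_radius(depth, focus_dist, aperture_mm, focal_mm):
    """Circle-of-confusion radius for a point at `depth` when the camera
    is focused at `focus_dist` (thin-lens model, consistent units).

    Zero at the focus distance, growing as the point moves away from it
    in either direction -- exactly the depth-dependent blur described above.
    """
    return abs(aperture_mm * focal_mm * (depth - focus_dist)
               / (depth * (focus_dist - focal_mm)))

# 50 mm lens at f/2 (25 mm aperture), focused at 3 m:
assert blur_radius(3000, 3000, 25, 50) == 0          # in focus: sharp
assert blur_radius(6000, 3000, 25, 50) > \
       blur_radius(4000, 3000, 25, 50)               # farther = blurrier
```

A compositor or post-process pass would then scale each pixel’s blur kernel by this radius, read from the depth buffer.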

Control of focus remains a powerful dramatic tool – consider the “focus pull.” Cinematography principles apply to CG movies just as they do to the ones made using a camera.


Then why do people say they are “adding” depth of field when they make things blurred instead of in focus?

You see, that’s the wrong way round. Depth of field is what you have when things are in focus.

The fact that people are inconsistent does not change the definition.

My guess is that people use the word ‘add’ because they are ‘adding’ it as an effect. It is an extra thing that is considered when rendering.

I would hardly consider it disrespectful to use the term.


There is an add-on called Real Camera which I think is an example of how you might want Blender to handle depth of field / focus.

https://3d-wolf.com/products/camera.html


Yes, you are photographically correct – “lens-wise” – when you say that depth of field refers to the Z-dimensions of a lens’s focus field. But CG does not have to be concerned (at least, not in the same way …) about “optics.”

(full disclosure: yes, I have an “old-timey” 4x5 bellows camera and still use it regularly with a collection of antique lenses … developing the black-and-white film in my own darkroom.) :smiley:

The digital computer knows nothing about optics, therefore nothing about things being blurry, unless you cause things to be blurry. Which you do in order to mimic the corresponding optical effect.


From a cinematographic standpoint, focus – and therefore depth of field – remains one of the most important dramatic tools, because it provides a visually familiar and very powerful way to control exactly where the audience’s attention should be placed in the Z-dimension. If the frame is full of details and all of those details are simultaneously in focus, not only does it “not look like a photograph anymore,” but your eye doesn’t know where it is supposed to look!

(And then, there’s the “focus pull” … which turns optics into a hugely dramatic surprise when well-executed. Just ask Alfred Hitchcock …)

The “optical effect” in question being lack of depth-of-field, not its presence.

Remember, the optical effects we spend so much time mimicking nowadays are all due to imperfections and limitations of the photographic process. In photography, it was a great step forward to eliminate them (or at least minimize them), but in CG we pride ourselves on making it look bad.

In CGI, depth-of-field is often used to force the attention of the viewer to a certain subject or point in the scene. In other words, the focus is clear while an otherwise detailed background is blurred. This also occurs a lot in TV shows and movies.

I don’t use it so much in my work out of preference, but many others use it all the time.

Orson Welles’ Citizen Kane was, among other things, a pioneering use of depth of field and the hyperfocal distance. Go see how it was used there.

You won’t get very far in this field (unless you’re just a pure modeller) if you don’t eventually pick up a few photography skills, so you’re preaching to the choir here.

“Well-intended ‘correction’ notwithstanding,” what I originally said is what I meant. If you have a lens, you have depth of field. And if you don’t have a lens, you don’t. But since audiences expect these photographic-based artifacts to be present, because they always are present in anything made with a lens, we have to mimic them in our CG. In this case, to define a Z-depth range in which the subject is sharp, and outside of which the subject is – because there is no lens – intentionally and correctly blurred, as though a lens was there. (Our Blender cameras also have “zoom” and “wide-angle” settings, and lots of other goodies … “Bokeh,” anyone?)

But there are other things related to physical photography – and to the various media in which we express our pure-digital works (print, video, film) – which must also be kept in mind, such as the actual range of so-called “f-stops” that these various media are physically capable of reproducing, and how all of them actually respond to various levels of light. There’s actually a definite depth of technical understanding that you need to have in order to produce work that “looks good.” Film photographers, and printers, have been doing these things for years, and the principles still hold.


On the contrary, it is quite possible to get depth of field without a lens. Consider a pinhole camera, which gives you effectively infinite depth of field, with no lens at all.

rEsPecT tHe pHotographERs!

This gatekeeping is funny and ridiculously hostile. “Chefs, respect McDonald’s workers when you make burgers! You are not ADDING pickles, you put the burger around the pickles! VERY important!”

As you know, in the pinhole camera, the pinhole functions as a lens, by restricting the range of light beams that can hit the film. CG doesn’t have a pinhole either, unless it wants one.

Fundamentally, CG is simply algorithms, unconstrained by physics of any kind except to the extent that we want them to be. We could, if we wanted to, simply trace a straight line (in a chosen “light direction”) directly from the image plane to each object and call it good. That would indeed be the simplest approach, and it would of course be mathematically correct. But it would not be what the human eye is conditioned to see. (After all, the human eye also “has a lens!”)
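That “trace a straight line from the image plane to each object” idea is exactly an ideal pinhole projection, which takes only a few lines to sketch (illustrative code, not from the original post; the function name is my own):

```python
def project(point, focal=1.0):
    """Project a 3-D point straight onto the image plane through a
    single point at the origin (an ideal pinhole): no lens, so every
    depth lands as a perfectly sharp point -- infinite depth of field."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

# A near point and a far point on the same ray map to the same sharp
# image position; nothing is ever "out of focus" in this model:
near = project((1.0, 2.0, 2.0))    # -> (0.5, 1.0)
far = project((10.0, 20.0, 20.0))  # -> (0.5, 1.0)
assert near == far
```

Blur only appears once you model a finite aperture, i.e. average rays through many points of a lens instead of through one pinhole.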

I don’t find anything “hostile” at all. Just the usual digital water cooler and a few people standing around it, talking. But your comment does remind me why I do not eat at McDonald’s! :smiley: