Raytraced DOF

I'm trying to use Depth of Field for the first time. It seems to work OK, but the reflections don't seem to be affected. If it's raytraced (not just based on the z-depth of the image), why would the reflections not be blurry?

You’ll have to be a bit more specific - how did you do the DOF?
If you used the camera-with-jitter-IPO-or-circle-path-tracked-to-empty-plus-mblur trick then reflections should be blurred correctly.
If however you used the zblur sequence plugin then you’re right, they won’t be.

This is because zblur uses the z-buffer of the rendered image to tell how far away things are, and blurs them accordingly. However, the z-buffer doesn't include the distance to reflected objects, only the distance to the reflective surface itself (it could only store one or the other, and storing the depth to the mirror is easier, and makes more sense anyway).
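Here's a toy sketch of that idea - a made-up blur model, not the plugin's actual code - showing why a reflection inherits the mirror's depth and so stays sharp:

```python
# Hypothetical sketch of z-depth blur: radius grows with distance from
# the focal plane. The formula and strength factor are assumptions.
import numpy as np

def zdepth_blur_radius(zbuf, focal_dist, strength=2.0):
    """Per-pixel blur radius computed from the z-buffer alone."""
    return strength * np.abs(zbuf - focal_dist) / focal_dist

# A mirror pixel stores the depth of the mirror SURFACE, not of the
# object seen in it, so an in-focus mirror keeps its reflections sharp.
zbuf = np.array([2.0,    # in-focus object                    -> radius 0
                 10.0,   # distant object                     -> blurred
                 2.0])   # mirror showing that distant object -> radius 0!
print(zdepth_blur_radius(zbuf, focal_dist=2.0))  # [0. 8. 0.]
```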

[edit] I just saw the “raytraced” part:

Blender's internal renderer is a scanline renderer, not a raytracer. It only starts tracing rays once it hits a material that needs it. As a result it cannot do “true” focal blur or motion blur; they have to be faked.
Blender's renderer is unlikely to become a full raytracer any time soon, although more and more raytracing elements are being hybridized into it. That's what yafray is for :wink:

“true”? So that would require tracing oversampled rays based on lens type and stuff?

I guess you could get that by averaging a bunch of samples [accounting for different lens types in how you blend the samples], but a raytracer may be faster at doing that.

All Blender: [240 or 256; I forget] images combined.
Render time: 20 mins or so [I don't remember which computer; gotta try again I guess]
http://home.earthlink.net/~nwinters99/BlenderStuff/megaDof5.jpeg

—warning, long technical post ahead. Those faint of heart please skip ahead—

Yes, “true” raytraced mblur and DOF are done by oversampling. In a full raytracer, standard AA is done by jittering the direction the rays are shot in; DOF is done by jittering the position the ray is shot from (the larger the lens, the more jitter); and mblur is done by jittering the time the ray is shot.
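For the curious, here's a minimal sketch of that DOF jitter under a standard thin-lens model (my illustration, not Blender's or yafray's code; the camera setup is an assumption):

```python
# Thin-lens DOF sketch: jitter the ray ORIGIN across the lens disc and
# aim every sample at the same point on the focal plane. Assumes a
# camera at the origin looking down +z; all names are made up.
import random

def dof_ray(pixel_dir, focal_dist, lens_radius):
    """Return (origin, direction) for one jittered DOF sample."""
    # The point on the focal plane this pixel looks at stays fixed.
    focus = tuple(c * focal_dist for c in pixel_dir)
    # Pick a random point on the lens disc (bigger lens = more jitter).
    while True:
        lx, ly = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        if lx * lx + ly * ly <= 1.0:
            break
    origin = (lx * lens_radius, ly * lens_radius, 0.0)
    # Shoot from the jittered origin toward the fixed focus point, so
    # geometry ON the focal plane stays sharp and everything else smears.
    direction = tuple(f - o for f, o in zip(focus, origin))
    return origin, direction
```

Average many such samples per pixel and you get the focal blur; jitter the ray's time the same way and you get mblur.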

A scanline renderer, however, starts by generating a z-buffer and storing what goes where. AA can be simulated by generating several of these buffers, rounded differently at the edges (which is why Blender's OSA only used to antialias the edges of objects), but neither DOF nor mblur can be done that way.

The closest it can get to either is to render several entire frames, each at a jittered time (or camera position), and average them, which is how Blender's mblur works. But this gives a very obviously sampled appearance, because each whole frame is rendered at the same time (or from the same place, for DOF), whereas in the raytraced version the jittered times (or positions) are different for each pixel. It's like the difference between soft shadows made by dupliverting a lamp onto a grid, where a low number of samples gives distinct overlapping shadows, and AO, where a low number of samples is just noisy.
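And here's the whole-frame version, sketched in the same style (render_frame is a hypothetical stand-in for a full render at one jittered time or position):

```python
# Sketch of the "render N whole frames and average" approach (how
# Blender's mblur and the frame-stacking DOF trick work). render_frame
# is hypothetical; a real version would be a full render pass.
import random
import numpy as np

def render_frame(jitter):
    # Every pixel in a given frame shares the SAME jitter value, which
    # is why low sample counts show distinct ghost copies, not noise.
    h, w = 4, 4
    return np.full((h, w), 0.5 + 0.1 * jitter, dtype=np.float64)

def stacked_blur(n_samples=16):
    acc = sum(render_frame(random.uniform(-1.0, 1.0)) for _ in range(n_samples))
    return acc / n_samples
```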

BTW: how long did it take you to set up that DOF image? Not the render time, but how long it took to set up all the images and mix them together…

Blender 2.34 introduced oversampling over entire materials, which is useful when you have procedural textures [because they aren't mip-mapped] and at other times when details are smaller than a pixel [say, when you're using raytraced reflection and a bump map].
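Something like this, roughly - a made-up high-frequency procedural, sampled several times inside one pixel instead of once at its centre:

```python
# Why full-material oversampling helps procedurals: they aren't
# mip-mapped, so detail smaller than a pixel aliases unless you average
# several jittered sub-pixel samples. The texture function is invented.
import math
import random

def procedural(u, v):
    # Detail far smaller than a pixel at typical resolutions.
    return 0.5 + 0.5 * math.sin(500.0 * u) * math.sin(500.0 * v)

def shade_pixel(u, v, pixel_size, samples=8):
    total = 0.0
    for _ in range(samples):
        du = (random.random() - 0.5) * pixel_size
        dv = (random.random() - 0.5) * pixel_size
        total += procedural(u + du, v + dv)
    return total / samples  # averaged -> far less shimmer/aliasing
```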

The entire setup took well under a day; the combination of images is done entirely in Blender's sequence editor.

Which is why, on the rook that is in focus, the image looks palettized, with the colors poorly sampled and no (or bad) dithering [loss of precision when blending images].

The sequence editor adds [15 initially; I mis-counted] 16 frames together, each after being multiplied by some value [iirc 1/16]. Each of those frames is motion blurred with 16 samples.
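That multiply-by-1/16-then-add mixing is also where the banding comes from; a quick sketch of the precision loss (values made up):

```python
# Mixing 16 frames at 1/16 weight in 8-bit quantizes each term to about
# 4 bits before summing - hence the palettized look. Accumulating in
# float and quantizing once at the end keeps the full precision.
import numpy as np

frames = [np.full((2, 2), 130, dtype=np.uint8) for _ in range(16)]

# 8-bit pipeline: scale each frame down first, then add.
banded = sum((f // 16).astype(np.uint16) for f in frames).astype(np.uint8)

# Float pipeline: add at full precision, quantize once at the end.
clean = (sum(f.astype(np.float64) for f in frames) / 16).round().astype(np.uint8)

print(banded[0, 0], clean[0, 0])  # 128 vs 130: two levels lost in 8-bit
```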

I don't know if that render is from the modified setup with 16 frames, or the original with 15.

This is why, kids, we should define our questions more accurately.

Sorry, I meant using yafray as the output, and using the DOF setting on the camera.