depth of field, defocus and z-masking - boo-berries!!

Ok, I’m new to 3D in general (and newer still to Blender) and trying to wade my way through my first project. I have completed the models, materials, rigging and animation for scene one and have started to learn compositing. The shot follows a butterfly as it flies past and through a couple of trees. I have used the Defocus node to generate DoF, and have now run into a problem: my character (the butterfly) goes behind some branches that it also goes in front of (i.e. the tree is on a separate render layer).

After a lot of fiddling I managed to get a working z-mask layer, but the problem I’m encountering is that with the defocus it doesn’t seem to be wide enough to mask out the butterfly as it goes behind (screeny attached). I ran the z-mask render layer into a duplicate Defocus node so that it should have the same settings as the tree.

The idea of render layers perplexed me for a while until I finally understood them; now I dread the problem of layers “interacting” along the z-axis. I asked my peers (Maya people), and it seems that in Maya you deal with this by rendering a layer where the character is assigned a special white non-light-receiving material and everything else gets a black one. That creates a mask for the character. Sounds good (maybe a duplicate scene with new materials?), but thinking about it, even that would have issues with DoF blurring, wouldn’t it?

Blender’s z-mask seems more logical (and a lot easier) but how do you deal with the DoF issue?

Attachments


I don’t think I understand it right, but I would create two layers, one with the environment and its lights, and another with the butterfly and its own lights. Then composite all the layers in the node editor and add the Defocus node last. It should work for the whole scene.
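For anyone who prefers scripting, here is a minimal sketch of that setup in Blender Python. It assumes two render layers named “Environment” and “Butterfly” (hypothetical names), and it uses the 2.6x-style bpy API rather than the 2.4x UI discussed in this thread:

```python
import bpy

# Build a compositor tree: combine both render layers by depth,
# then run a single Defocus over the whole result.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

env = tree.nodes.new("CompositorNodeRLayers")
env.layer = "Environment"   # hypothetical render layer name
fly = tree.nodes.new("CompositorNodeRLayers")
fly.layer = "Butterfly"     # hypothetical render layer name

zc = tree.nodes.new("CompositorNodeZcombine")
df = tree.nodes.new("CompositorNodeDefocus")
df.use_zbuffer = True       # read depth from the Z input
out = tree.nodes.new("CompositorNodeComposite")

links = tree.links
links.new(env.outputs["Image"], zc.inputs[0])
links.new(env.outputs["Z"], zc.inputs[1])
links.new(fly.outputs["Image"], zc.inputs[2])
links.new(fly.outputs["Z"], zc.inputs[3])
links.new(zc.outputs["Image"], df.inputs["Image"])
links.new(zc.outputs["Z"], df.inputs["Z"])
links.new(df.outputs["Image"], out.inputs["Image"])
```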

Thanks elclanrs, that works, though I lose the butterfly being in focus (or rather, not defocused) at all times. I’ll have to test render the whole scene to see how blurred the butterfly gets - it might look better anyway… But yeah, defocus at the end of the pipeline, rather than on each tree and z-mask layer, solves the masking issue.

Select your camera and hit F9 to get to the camera settings. The field labeled “Dof Ob:” lets you assign the DoF distance to track any object, whether it’s the butterfly, an empty, etc.
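The same assignment can be scripted; a minimal sketch, assuming an empty named “DoF_Target” (hypothetical) parented to the butterfly, using the 2.5x–2.7x Python property (newer Blenders moved this to cam.data.dof.focus_object):

```python
import bpy

# Make the camera's focal distance follow an object instead of a fixed value.
cam = bpy.data.objects["Camera"]
cam.data.dof_object = bpy.data.objects["DoF_Target"]  # hypothetical empty
```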

If you select the “AllZ” option for each render layer, then all objects behind the butterfly will render with an alpha cutout of the butterfly whenever the butterfly occludes them (butterfly in front); when the butterfly goes behind objects on the other render layer, the opposite happens (the butterfly gets the alpha mask). If the butterfly is in front of one limb but behind another, then both render layers will be partially masked.
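If you want to flip that on for every render layer at once, a quick sketch via Python (2.6x-style API, where the option is exposed as use_all_z):

```python
import bpy

# Enable "AllZ" on every render layer so each layer receives Z values
# from objects on the other layers, giving the mutual alpha cutouts.
for rl in bpy.context.scene.render.layers:
    rl.use_all_z = True
```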

FSA is strongly recommended any time you’re using Blender’s compositor.
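And enabling it from Python, for completeness - a sketch against the 2.6x-style API, where FSA lives behind Save Buffers and the AA sample count:

```python
import bpy

# Full Sample AA: keep every AA sample as its own image so the
# compositor can antialias the final composite, not just the render.
rd = bpy.context.scene.render
rd.use_save_buffers = True     # FSA requires tiles cached to disk
rd.use_full_sample = True      # "Full Sample" toggle
rd.antialiasing_samples = '5'  # FSA level: '5', '8', '11' or '16'
```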

Z-masking is really a trick better suited to external compositors, because only Blender can use FSA, which largely negates the need for a z-mask. In the long run, learning proper matting/masking techniques via Blender’s compositor is usually, but not always, a cheaper option than z-mask (less memory intensive… and memory issues crash Blender).

Yeah, I already have the DoF tracking an empty parented to the butterfly. I tried rendering with FSA at 5 and it doesn’t increase the render time (from OSA at 8) too much, and the general quality seems a lot better. I’m not sure that I fully understand what is going on, but I’m assuming that the “allows Full Sampling” option is what you’re referring to, Rambo, when you strongly recommend it. Also, where is the temporary folder it stores the tiles in? Does it need a lot of space, or does it only store the tiles on a frame-by-frame basis?

That’s cool about the “AllZ” option - I didn’t know what it did, and I think it’s exactly what I want. Though if this does all the occluding, why did they add the Zmask render layer option?

The two images show why Rambobaby is spot on about using FSA, and how it solved the issues I had after following elclanrs’ advice of putting the defocus over everything.

Also, currently I only have vector blur on the butterfly; considering I have fast camera moves, should I add it to the trees? Would one bother to add it to the sky/background render layer? I heard something about getting “strobing” if you don’t - not sure what that is… Thanks for the help so far!

Attachments



Oh god, now I seriously need help… The advice so far has yielded awesome results, except for the first 450 frames, where the butterfly walks up a blade of grass. I used images projected onto planes for the blurry background and foreground grass, and modelled the in-focus blades. With the new composite (based on the advice so far), these frames are completely screwed. Something to do with the alpha of the image projections, I assume.

Two images: the first shows the problem and the second shows what it should look like (my first composite). Note how much better the background trees are in the first image - that’s what I want to go for, but obviously the issue is a show-stopper. I tried adding an AlphaOver node so I could use the “Convert” button, which does something, but not much.

Attachments



Weeeee, it’s almost dawn and I think I have it… A bit convoluted, but it basically merges the two setups above: keep the Z Combine nodes, but also run the Image outputs into AlphaOver nodes. Then run the Z output of the final Z Combine and the Image output of the final AlphaOver into the Defocus node.
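In node-graph terms, a sketch of the same wiring via the 2.6x-style Python API (layer names are assumptions): color travels through the AlphaOver chain so texture alpha survives, while depth travels through the Z Combine chain just to feed the Defocus.

```python
import bpy

tree = bpy.context.scene.node_tree
nodes, links = tree.nodes, tree.links

env = nodes.new("CompositorNodeRLayers")
env.layer = "Environment"  # hypothetical render layer name
fly = nodes.new("CompositorNodeRLayers")
fly.layer = "Butterfly"    # hypothetical render layer name

# Depth path: Z Combine produces the merged Z buffer.
zc = nodes.new("CompositorNodeZcombine")
links.new(env.outputs["Image"], zc.inputs[0])
links.new(env.outputs["Z"], zc.inputs[1])
links.new(fly.outputs["Image"], zc.inputs[2])
links.new(fly.outputs["Z"], zc.inputs[3])

# Color path: Alpha Over keeps the texture-based alpha intact.
ao = nodes.new("CompositorNodeAlphaOver")
links.new(env.outputs["Image"], ao.inputs[1])  # background
links.new(fly.outputs["Image"], ao.inputs[2])  # foreground

# Defocus reads Image from the Alpha Over chain, Z from the Z Combine chain.
df = nodes.new("CompositorNodeDefocus")
df.use_zbuffer = True
links.new(ao.outputs["Image"], df.inputs["Image"])
links.new(zc.outputs["Z"], df.inputs["Z"])

out = nodes.new("CompositorNodeComposite")
links.new(df.outputs["Image"], out.inputs["Image"])
```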

Now I have my initial problem of the butterfly being too blurry. sigh Getting closer. I think I might find a way to exclude the butterfly from the defocus for the first 450 frames, and then use the Video Sequencer to fade into the rest, where it is part of the defocus… My brain hurts.

Yup, alpha always has to be applied after any depth-channel effects, because depth channels are strictly binary (occlude or be occluded) and obliterate any material/texture-based alpha upstream in the noodle. That’s why AllZ works in the first case… it eliminates any depth (binary) information from inclusion in the render layer (at render time), resulting in a binary “false”, or vanishing, function.

In the case of the grass, however, you only have alpha occlusion of the image planes, so the binary info being pumped into the Defocus node is making a mess of things. This is where things get interesting. The reason the trees look better in the first image is the Sample Filter Type, which can be found on the Render tab. These filters have an inverse effect when FSA is used (Gauss becomes the sharpest filter while CatRom becomes the blurriest - the polar opposite of using OSA). Also, setting the filter’s value to 0.50, its lowest setting, nearly always yields the best results regardless of OSA/FSA usage.

With FSA it doesn’t make any difference what type of alpha channel you use, either, because even with a premultiplied alpha type (Sky or Premul) the alpha is ALWAYS composited last, resulting in a keyed, or straight, alpha. This is because FSA is a series of non-aliased images (regardless of channel type, e.g. alpha, RGB, or Z), meaning it is totally impossible to multiply any background color into the matte. All channel types are ultimately premultiplied as the final result of the composite.

As a result, FSA level 5 will take five times longer to composite (a post-render action) than any level of OSA. The actual render time is the same for either aliasing scheme at the same number of samples, but FSA does take longer to advance from frame to frame, because Save Buffers is a disk-caching routine: you need at least five times more overhead (at the lowest FSA level) to write the render tiles to disk than with an OSA-configured render that also has Save Buffers enabled. That’s because OSA only writes a single image to disk; all OSA sampling gets done old-school Blender style, in RAM.
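Setting the filter described above from Python, for reference - a sketch against the 2.6x-style internal-renderer API (the enum spellings are from that API and worth double-checking on your build):

```python
import bpy

rd = bpy.context.scene.render
rd.pixel_filter_type = 'GAUSS'  # sharpest under FSA, per the post above
rd.filter_size = 0.5            # the minimum (range 0.5-1.5), usually best
```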

Essentially, sampling is just a series of ever-so-slightly skewed camera shots which get recombined, along the same lines that your brain combines the images from each of your eyes into a single stereo view. Since FSA does this with a minimum of 5 samples, and because it does it as the last step in the composite, the Blender devs were able to truly antialias those horrible depth maps into a smoothly averaged final result.

As for where Blender stores these samples: you can set the file path in your user preferences (drag down the header at the top of the screen). The default is /tmp/ or C:\tmp, depending on the OS you use. These saved buffer files are multi-pass OpenEXR files saved to disk in full 32-bits-per-channel floating-point color, which makes them MASSIVE. If you have a file utilizing 5 to 10 scenes, one render layer per scene, an average of 5 passes per render layer, and you render in full HD, you can run well into the several-gigabyte range of disk space for these saved buffers… all to composite a single HD frame.
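A quick back-of-the-envelope check of that figure (uncompressed sizes; EXR compression will shrink this, so treat it as a rough upper bound):

```python
# One full-float RGBA pass at 1080p:
width, height = 1920, 1080
channels, bytes_per_channel = 4, 4  # RGBA, 32-bit float
per_pass = width * height * channels * bytes_per_channel  # ~33 MB

# Five scenes x five passes x five FSA samples, as in the example above:
total = per_pass * 5 * 5 * 5
print(total / 2**30, "GiB")  # ~3.9 GiB for a single frame
```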

I don’t mean to imply that z-masking is useless, either; it just seems a bit unintuitive to me - I always feel like I’m trying to give myself a mohawk using two mirrors, cuz I always get it backward. Its main advertised use seems to be overcoming the inevitable seams that are nearly impossible to avoid when working with standard alpha types (you always have to blur and distort things to hide them). It may be able to save you gobs of memory and render-time overhead compared to simply following the procedure I posted above (under certain circumstances), if you take the time to learn it, but proper masking via node techniques will usually win that fight, since node-based matting allows you to use instances or render layers rather than having to composite an additional full render layer. In my experience, FSA sequence depth is far less prone to causing memory-related crashes than the width required when using the multiple render layers that OSA masking techniques demand. Others’ mileage may vary.

As usual, RamboBaby - awesome reply. I did a 200-frame test render and it’s looking sweeeeet. Still got a lot to learn, but the ‘tiles’ are falling into place. Now 4886 frames left to animate…

What’s this for anyway? It looks really good. Kinda reminds me of a PBS special on the dawn of time or something. I hadn’t bothered with sun sky yet cuz the release note renders looked blown out and crummy. You seem to have captured an ethereal sort of nostalgic kind of vibe with it though. The only time I ever get to see skies that pretty in this area of the country is after a hurricane. Very calming.

Wow, thanks! My fatigued soul needs some pepping up! The sky is one of my matte paintings - I haven’t worked out the sun and sky stuff in Blender either; it seems easier to just paint what you want. You should have a play with mist, though - that can add some more pixel-bling to renders.

This is a four-minute music video clip for one of my songs. It’s a fairy tale that has been in the family for a while, and this is the opening scene. I’m still working on other models, but hopefully it should be done in a couple of weeks. I’ll post in the appropriate section of this forum when it’s released.