DoF on planes with transparency...

Hi… I'm looking for some way to do Depth of Field on planes with transparent PNGs… using the node editor with the Defocus filter, but the results are like this:

http://www.ies.ufscar.br/geminis/actual_results.jpg

http://www.ies.ufscar.br/geminis/explicacao.jpg

Someone suggested separating the cube into one render layer and the plane into another, but I want to do something like this with thousands of planes, which makes it unfeasible to separate one plane from another:

http://www.fxguide.com/article463.html

A direct link to the making-of… pay special attention to the massive attack shot… that's just planes with transparency in 3ds Max…
http://www.fxguide.com/modules/NewsUpload/files/08Feb/hun/atilla-fxguide.mov&OBT_fname=atilla-fxguide.mov

Is there some node trick or material configuration that could do that? Or isn't Blender ready for VFX work?
Any help is appreciated.
Thanks!

That kinda talk is liable to get you fired up around here. Understand that Blender is first and foremost a 3D program. As for being ready for VFX work? Well, no man is an island and, likewise, no program is a standalone. Every major VFX outfit (and a fairly good number of minor ones too) employs programmers who design custom software for the very reason outlined above. Blender developers are often recruited into the fold because their favorite piece of freeware is lacking some specific feature set they need to put the finishing touches on one of their personal works. So, they're loaded into the breech. And that's how babies are born.

Ready for VFX work… let's see: 32bpc HDR; R, G, B, A, Z, Vec, Nor, UV, ObID (pass it, matte it, exclude or recombine it any way you want it); model, animate, light, shade, constrain, modify, simulate, bake to IPOs; UV unwrapping (best in the industry); render bake and object-to-object baking; image rendering and compositing; Full Sample Anti-Aliasing (the most innovative and effective method ever devised for compositing 3D image files); import and export to other 3D file formats; scripting; instancing; dynamically link and manage multiple scenes and files via the built-in database management system; the Verse file checkout system… these are just a few of the features in Blender's arsenal, many of which are on the leading edge of 2D and 3D imaging capabilities. So, to answer your question: yes, Blender is quite capable when it comes to VFX work, but the real question here is, "Are you?"

Some shots are going to be time-consuming and frustrating no matter how advanced technology becomes; that's why they're so damned expensive. Your solution, however, is rather elementary: it's called camera clipping. Simply render several depth-limited slices of your scene and composite them Alpha Over one another in their proper front-to-back order. (This can be done sequentially and without interruption by creating a new Full Copy of the scene for each depth slice and loading each of these scenes into the compositor from the first scene. Just be sure to adjust each scene's camera clipping to cover the next depth range beyond that of the previous scene.) Because you're using several scenes, you get the added bonus of freed memory between each scene's render slice, allowing you to utilize massively more memory than any one scene could ever handle without running the risk of crashing Blender.
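To make that concrete, here's a minimal sketch of the idea in Blender Python, assuming the modern bpy API (this thread predates it, and the 2.4x API differs); the slice depths and scene names are illustrative assumptions, not a prescribed setup:

```python
import bpy

# Near/far clip range for each depth slice -- illustrative values only.
slices = [(0.1, 10.0), (10.0, 25.0), (25.0, 100.0)]

base = bpy.context.scene
slice_scenes = []
for i, (near, far) in enumerate(slices):
    bpy.ops.scene.new(type='FULL_COPY')        # full copy of the active scene
    s = bpy.context.scene
    s.name = "slice_%02d" % i
    s.camera.data.clip_start = near            # limit this copy's camera
    s.camera.data.clip_end = far               # to one depth slice
    slice_scenes.append(s)

# In the first scene's compositor, Alpha Over the slices back to front.
base.use_nodes = True
tree = base.node_tree
tree.nodes.clear()
prev = None
for s in reversed(slice_scenes):               # farthest slice first
    rl = tree.nodes.new('CompositorNodeRLayers')
    rl.scene = s                               # pull this scene's render
    if prev is None:
        prev = rl.outputs['Image']
    else:
        over = tree.nodes.new('CompositorNodeAlphaOver')
        tree.links.new(prev, over.inputs[1])                 # background
        tree.links.new(rl.outputs['Image'], over.inputs[2])  # foreground
        prev = over.outputs['Image']

comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(prev, comp.inputs['Image'])
```

Each Render Layers node pulls in its own scene's render, so memory gets freed between slices just as described above.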

Hi, RamboBaby! Thank you very much for your reply!
I know that I'm not ready for some kinds of VFX work… and I do believe that Blender is everything you wrote… it's my main working tool in my animation pipeline… but sometimes it seems that the easy solutions aren't that simple… and that's it… Your suggestion of multiple scenes with different camera clipping is perfect for me… I will try it! Thanks!

No one is going to disagree with you on that point. Some things concerning nodes are a complete pain in the butt, particularly where certain types of mattes that are compositing standards come into play. They're not hard to create, just a pain to link up… like in the attached image:

It's nearly ridiculous to need all those highlighted nodes to create an intersecting matte, but it may be even more absurd to clog up the program with a bunch of redundant features that can easily be created by the user. I guess this is why they call it a roll-your-own approach. A lot of this redundant nonsense should be cured when v2.50 comes out, though.
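For the simplest case (intersecting two grayscale mattes), the whole trick boils down to a Math node set to Minimum (or Multiply, for soft edges). Here's a hedged sketch in modern bpy, intersecting an object-index matte with the render's alpha; the pass index and wiring are assumptions for illustration, and the attached setup presumably combines more mattes than this:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
bpy.context.view_layer.use_pass_object_index = True  # exposes the IndexOB output
tree = scene.node_tree

rl = tree.nodes.new('CompositorNodeRLayers')

id_mask = tree.nodes.new('CompositorNodeIDMask')
id_mask.index = 1                        # objects whose pass index is 1 (assumed)
tree.links.new(rl.outputs['IndexOB'], id_mask.inputs['ID value'])

intersect = tree.nodes.new('CompositorNodeMath')
intersect.operation = 'MINIMUM'          # white only where BOTH mattes are white
tree.links.new(id_mask.outputs['Alpha'], intersect.inputs[0])
tree.links.new(rl.outputs['Alpha'], intersect.inputs[1])

viewer = tree.nodes.new('CompositorNodeViewer')
tree.links.new(intersect.outputs['Value'], viewer.inputs['Image'])
```

Each extra intersection after that is just one more Math node.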

Attachments


That is odd; I am seeing the opposite result. Because nodes are processed after rendering, Blender does not understand that it needs to ignore the alpha channel, so it blurs the object as if it were a solid plane when it should trace on through the transparent section.

Attachments


Defocus.blend (163 KB)

All depth-channel effects disrespect alpha channels, period. These two channel types are each other's antithesis. Don't expect to ever see them cooperate in their current incarnations, because they perform polar opposite functions. It's easy enough to work around this with masking techniques, though.

In other words: they ain't down with da brizown.
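One such masking workaround, sketched in modern bpy as an assumption rather than as the canonical fix: use the alpha channel to push the Z of transparent pixels far away before it reaches the Defocus node, so those pixels get blurred as background instead of as part of the plane:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

rl = tree.nodes.new('CompositorNodeRLayers')

far = tree.nodes.new('CompositorNodeValue')
far.outputs[0].default_value = 10000.0     # "very far" Z; illustrative value

# Where alpha is 0 use the far Z, where alpha is 1 keep the real Z.
zmix = tree.nodes.new('CompositorNodeMixRGB')
tree.links.new(rl.outputs['Alpha'], zmix.inputs['Fac'])
tree.links.new(far.outputs[0], zmix.inputs[1])       # alpha = 0 -> far away
tree.links.new(rl.outputs['Depth'], zmix.inputs[2])  # alpha = 1 -> real Z
                                                     # ('Depth' is labeled 'Z' in old builds)
defocus = tree.nodes.new('CompositorNodeDefocus')
defocus.use_zbuffer = True                 # drive the blur from the mixed Z
defocus.f_stop = 32.0                      # illustrative setting
tree.links.new(rl.outputs['Image'], defocus.inputs['Image'])
tree.links.new(zmix.outputs['Image'], defocus.inputs['Z'])

comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(defocus.outputs['Image'], comp.inputs['Image'])
```

It won't fix halos around partially transparent edges, but it stops the Defocus node from smearing a whole plane just because its Z is flat.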