New Composite Nodes: UV Map, ID Mask and Z-combine

http://www.blender3d.org/cms/Composite__UV_Map__ID.830.0.html

More new nodes, but I have no idea what they can be used for :D :confused:

Even looking at the example files, which I guess are supposed to be “obvious”, is a puzzle :confused::cool:

Mike

Now you can alter your UV maps (the images!) without having to re-render the whole scene/animation. This can be a huge advantage in certain situations.
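
If it helps to see what that looks like in practice, here is a rough sketch of the Map UV idea as a Python noodle. The node and pass names are taken from the current bpy API rather than the build this thread is about, and the texture path and view layer name are placeholders, so treat it as an illustration of the wiring rather than a recipe:

```python
import bpy

# Sketch of a Map UV noodle: re-texture a render through its UV pass
# without re-rendering. Names follow the modern bpy API; the texture
# path and view layer name are placeholders.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
nodes, links = tree.nodes, tree.links
nodes.clear()

# The view layer must output a UV pass for the compositor to use.
scene.view_layers["ViewLayer"].use_pass_uv = True
rl = nodes.new("CompositorNodeRLayers")

# Replacement texture -- swap this image and recompose, no re-render needed.
img = nodes.new("CompositorNodeImage")
img.image = bpy.data.images.load("//new_texture.png")

# Map UV re-samples the image through the rendered per-pixel UVs.
map_uv = nodes.new("CompositorNodeMapUV")
links.new(img.outputs["Image"], map_uv.inputs["Image"])
links.new(rl.outputs["UV"], map_uv.inputs["UV"])

comp = nodes.new("CompositorNodeComposite")
links.new(map_uv.outputs["Image"], comp.inputs["Image"])
```

In the node editor that is just Render Layers and Image feeding a Map UV node, with Map UV going on to the output.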

The old Z Combine nodes could not be combined with each other. Ton fixed that (so he told me; whether he can be relied upon, I dunno), so you can stack up many planes by depth, à la Disney in Bambi.

I’m not sure if I understand it right. Does it mean that:

  1. I can prepare an object with UV coordinates.
  2. Render it.
  3. Use the compositing nodes to swap different images onto my UV coordinates without rendering the scene again, simply by loading the new image into the compositor?
    If this is true, are there any quality issues or serious limitations?

I was able to get the Z Combines to work with each other. Note the redundant Z input from render layer 2; you have to do this every time or it won't work. I've been doing this for a couple of months now. JPG compression makes it hard to see, but if you look closely you'll see the aliased edges from the depth mattes. Each object is on a separate render layer.
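
For anyone trying to rebuild that noodle from a script, here is a sketch of the chained setup. The layer names are placeholders and the socket names come from the current bpy API (older builds label the depth socket "Z"); the redundant Z feed from layer 2 is the workaround described above:

```python
import bpy

# Sketch: chain two Z Combines so three render layers stack by depth.
# "RL1".."RL3" are placeholder view layer names.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
nodes, links = tree.nodes, tree.links
nodes.clear()

rl1 = nodes.new("CompositorNodeRLayers"); rl1.layer = "RL1"
rl2 = nodes.new("CompositorNodeRLayers"); rl2.layer = "RL2"
rl3 = nodes.new("CompositorNodeRLayers"); rl3.layer = "RL3"

# First Z Combine merges layers 1 and 2 by comparing their depth passes.
# (The depth output is "Depth" in current builds, "Z" in older ones.)
z1 = nodes.new("CompositorNodeZcombine")
links.new(rl1.outputs["Image"], z1.inputs[0])
links.new(rl1.outputs["Depth"], z1.inputs[1])
links.new(rl2.outputs["Image"], z1.inputs[2])
links.new(rl2.outputs["Depth"], z1.inputs[3])

# Second Z Combine takes the first result plus a Z value. Feeding layer 2's
# Z in again (the "redundant" input described above) keeps the depth
# comparison against layer 3 meaningful when the first node's Z is missing.
z2 = nodes.new("CompositorNodeZcombine")
links.new(z1.outputs["Image"], z2.inputs[0])
links.new(rl2.outputs["Depth"], z2.inputs[1])
links.new(rl3.outputs["Image"], z2.inputs[2])
links.new(rl3.outputs["Depth"], z2.inputs[3])

comp = nodes.new("CompositorNodeComposite")
links.new(z2.outputs["Image"], comp.inputs["Image"])
```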

Attachments



rimau: Take a look at the Emo example on http://www.blender3d.org/cms/Composite__UV_Map__ID.830.0.html. I don't know of any issues with this technique; it's just a pass like any other.

Rimau, I believe that you can now LAYER UV maps, not swap them. Until the developers get multilayer OpenEXR files into the compositor, you will have to re-render in order to update changes in your scenes. Some functions in the compositor can be dynamically updated on a per-frame basis, but I believe these are the post-processing functions. You can see this to dramatic effect with normals via a Normal (dot product) node, which will relight your scene from any direction you choose, even from the back side.
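
As a sketch of that normal-relighting trick (again assuming the current bpy node names, with a simple Multiply mix as just one way to apply the dot product):

```python
import bpy

# Sketch: fake a new light direction in the compositor from the Normal pass.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
nodes, links = tree.nodes, tree.links
nodes.clear()

scene.view_layers["ViewLayer"].use_pass_normal = True  # placeholder layer name
rl = nodes.new("CompositorNodeRLayers")

# The Normal node outputs the dot product between the pass and the
# direction you set on its little sphere widget.
normal = nodes.new("CompositorNodeNormal")
links.new(rl.outputs["Normal"], normal.inputs["Normal"])

# Multiply the render by that dot product to shade it from the new angle.
mix = nodes.new("CompositorNodeMixRGB")
mix.blend_type = 'MULTIPLY'
links.new(rl.outputs["Image"], mix.inputs[1])
links.new(normal.outputs["Dot"], mix.inputs[2])

comp = nodes.new("CompositorNodeComposite")
links.new(mix.outputs["Image"], comp.inputs["Image"])
```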

I believe the time is coming when scene geometry will be rendered from an omnidirectional point of view and you'll be able to manipulate your rendered image files in any way you choose without having to do a complete re-render. Shadows and shading may have to be recalculated as you move lamps and the camera around in such files, but geometry, materials and textures will not. But until we get multilayer EXR files, re-rendering is the order of the day.

If I am wrong about the post processing part someone please correct me.

Hey Baby: in your noodle, you lose the Z info from the top render. BUT your noodle is a good workaround, provided you layer back to front and don't try to put something in between with the second Z Combine.