Brainstorm (can it be done??)

I'm working on some match-moving tests.
Using Icarus and Blender… um… so far that's all I need… right?

Basically the question is: "can I match the lighting of my renders to the footage via the sequencer… USING 3D LIGHTS AND SUCH… to re-affect the different passes I'll have rendered out?"

Maybe with some depth mapping?
Is per-object depth mapping possible?

The result would mean I could match colouring and lighting to my scene without re-rendering…

I've seen something like this done in Nuke, for District 9.
I know Blender can do it… I just don't know how…

Do you mean like a normal pass?
This is the only way I know of to relight things post-render.
Note that this is done in the compositor, not the sequencer.

I'm looking into this…

"The Texture Node allows to use Blender textures, to make effects based on normalized pixel coordinates (ranging from -1 to 1). Use it to create blend ranges for example. "

From the same page:

"This UI widget allows to input a Normal, or to perform a dot product. You can use it for quick re-lighting of a RenderLayer, when it has a Normal pass."

Isn’t this what you’re after?
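For reference, the "dot product" that widget performs is just Lambert-style shading math: each pixel's normal from the Normal pass, dotted with a chosen light direction, gives a 0–1 relighting factor. Here's a rough plain-Python sketch of that idea (the example vectors are made up for illustration; this isn't Blender API code):

```python
import math

def normalize(v):
    """Scale a 3-component vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def relight_factor(normal, light_dir):
    """Dot the surface normal with a light direction, clamped
    to 0..1 -- the same kind of factor the Normal node's dot
    output gives you for relighting from a Normal pass."""
    n = normalize(normal)
    l = normalize(light_dir)
    dot = sum(a * b for a, b in zip(n, l))
    return max(0.0, dot)

# A surface facing straight at the light is fully lit...
print(relight_factor((0, 0, 1), (0, 0, 1)))  # 1.0
# ...while one at 90 degrees to it gets nothing.
print(relight_factor((1, 0, 0), (0, 0, 1)))  # 0.0
```

In the compositor that factor is what you'd then run through a ramp or mix onto the image.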

I'm not sure if this is the correct way to do it, but I've just been experimenting in 2.5 and came up with this: an easy, quick method of adding lighting to the scene using the nodes coupled with a colour ramp.

Could come in handy for those little highlights and rim lights, as well as overall lighting tone.
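Roughly, per pixel, that node chain does something like the following (a plain-Python sketch of the idea; the two-stop ramp and the Add-style mix are example values of mine, not taken from the linked setup):

```python
def ramp(fac, stops):
    """Linear colour ramp: stops is a sorted list of
    (position, (r, g, b)) pairs, and fac in 0..1 picks a
    blended colour, like the compositor's ColorRamp node."""
    if fac <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if fac <= p1:
            t = (fac - p0) / (p1 - p0)
            return tuple(a + (b - a) * t for a, b in zip(c0, c1))
    return stops[-1][1]

def add_rim_light(pixel, fac, stops):
    """Add the ramped colour on top of the rendered pixel,
    like a Mix node set to Add."""
    tint = ramp(fac, stops)
    return tuple(p + t for p, t in zip(pixel, tint))

# Warm rim light: black at fac 0, orange at fac 1.
stops = [(0.0, (0.0, 0.0, 0.0)), (1.0, (1.0, 0.5, 0.2))]
print(add_rim_light((0.2, 0.2, 0.2), 1.0, stops))  # (1.2, 0.7, 0.4)
```

Feeding it the dot-product factor from the Normal pass gives you a directional tint you can grade with the ramp.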

EDIT: Forgot the link :stuck_out_tongue_winking_eye:

You all rock! I've got it going on…

I render to MultiLayer with all passes, and can "view" the animation in the Image Editor with refresh on (via the compositing nodes). Nodes rock.
Blender rocks.

my mind is blown yet again…

What I want to see in the viewer is the result of all the passes plus my changes (via nodes)…
So far I've been mixing and multiplying the passes, with node effects, back onto the full render, instead of tweaking one pass in the composite and seeing the change… do I sound kooky?
thanks much.
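For what it's worth, rebuilding the beauty image from its passes is exactly that kind of arithmetic. A plain-Python sketch of one common per-pixel recipe (the pass names and the multiply-then-add order are a widely used convention, not something specific to this thread):

```python
def recombine(diffuse, ao, specular, fac=1.0):
    """Rebuild a beauty pixel from passes: diffuse is multiplied
    by ambient occlusion, then specular is added on top -- so
    tweaking one pass updates the result, instead of painting
    over the finished render."""
    lit = tuple(d * a for d, a in zip(diffuse, ao))
    return tuple(l + s * fac for l, s in zip(lit, specular))

diffuse = (1.0, 0.5, 0.25)
ao = (0.5, 0.5, 0.5)
spec = (0.25, 0.25, 0.25)
print(recombine(diffuse, ao, spec))  # (0.75, 0.5, 0.375)
```

In the node tree this is just a Multiply mix followed by an Add mix, with a Viewer node after the last mix so you see the recombined result.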

'Match lighting': using the normal pass will allow a certain amount of relighting with regard to direction, but have you got the colour, intensity etc. covered to match your original footage?

Are you capturing as much info as possible to help you when acquiring your footage? For example: a mirror ball or nodal-pan fisheye pano for environment and reflections; an 18% grey ball for sampling the colour of light and shadows in various directions; EXIF data from the camera for virtual camera matching; and a clean plate shot (with EXIF data) for each video shot, for patching comps.

The match-lighting/relighting aspect sounds easy, but if you don't have the above it could still be a rather time-consuming process, as 'matching lighting' also includes matching the colour and strength of shadows to be believable. :slight_smile:

If not, here's a link to some of the above. Absolutely sound, bang-on tutorials. They're Modo-related but easily transferable to Blender, including linear workflow.

This video is informative and shows the artist working hands-on :D (pun intended) with the normal render pass in the node tree to change light direction etc.

what i want to see in the viewer is the result of all the passes + changes (via nodes) to be seen…

The Blender compositing view has a checkbox called Use Background… enable that (put an X in it).
Now add a Viewer node. Besides the small preview image inside the Viewer node itself, the render result up to the Viewer node will be shown as a background under your node tree.

So I usually duplicate a Viewer node, collapse it to a 'ball' (just the header) so it's easy to drag around, and hook it up wherever I am in the node tree to view the render at that point. Sometimes I leave them there for further work.

You can also split the view, open a UV/Image Editor, and set it to preview the Viewer node. There are tons of places to display your node compositing result :slight_smile:

The key is the 'Viewer' node. When using compositing you get one by default, under the 'Composite' node in the compositing view.

Hope I didn’t confuse you.