Is there a way to assign a camera to a specific render layer without duplicating the whole scene? I’m trying to create a stereo camera rig and the scripts available are not what I’m looking for. I prefer to have full control over the “eyes”.
What I’m trying to do is render images from both cameras side by side (the only stereo format YouTube accepts). I managed to do this by duplicating the scene once per “eye”, but that’s not what I’m after: if I change something in one copy, the change won’t show up in the other.
After trying to create a beam splitter using environment maps and all kinds of tricks that didn’t work, I decided to render the image with a depth mask. I managed to do that, but of course it introduces a hundred other problems, and importing this camera into a different scene crashes the application. This is ridiculous.
It’s much faster and easier to create two renders separately and put them together in post-processing.
Ideally 2 images side by side; that would make editing much easier. However, if it’s impossible to do this without duplicating the whole scene, a depth map will do.
Yes, I would like to render from two cameras simultaneously and have both images stitched together into one, as illustrated below. [side-by-side.png]
In case there’s anyone else out there trying to do 3d properly in Blender, here’s some useful info:
This is my camera rig [3d_camera-rig0.002.blend]
In all my renders I assume that 0.2 Blender units = 1 cm. One default ‘cube’ = one meter.
To move the camera around, grab the small cube on top of it; to adjust convergence, rotate the empty. (There are two lines coming out of the ‘eyes’: rotate the empty so that they cross where your subject is located.)
Contrary to popular belief, you can recognize good 3d images by the fact that the ‘3d effect’ is very subtle on the object of interest. Also: try not to place anything closer than 50 cm in front of the eyes; 3d that jumps out is very annoying.
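If you’d rather set the convergence numerically instead of eyeballing the crossing lines, it’s just a bit of trigonometry. A minimal Python sketch, with made-up example numbers (6.5 cm eye separation, subject 2 m away):

[code]
import math

# Example numbers only: average human eye separation and a subject 2 m away.
eye_separation = 0.065   # metres
subject_distance = 2.0   # metres

# Each camera is toed in by half the convergence angle, so the two
# view lines cross exactly at the subject.
half_angle = math.atan((eye_separation / 2.0) / subject_distance)
print("toe-in per eye:    %.3f degrees" % math.degrees(half_angle))
print("total convergence: %.3f degrees" % math.degrees(2.0 * half_angle))
[/code]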
Set the output size to the same height and twice the width of one image. Load one image sequence and use a Translate node to offset it to one side, then load the other sequence and offset it to the other side.
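For reference, here is a rough Python sketch of that node graph through the bpy API (the resolution and file paths are made-up examples; the same graph can of course be clicked together in the node editor):

[code]
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Assume each eye was rendered at 960x540; the combined frame is 1920x540.
eye_w, eye_h = 960, 540
scene.render.resolution_x = eye_w * 2
scene.render.resolution_y = eye_h

left = tree.nodes.new('CompositorNodeImage')
left.image = bpy.data.images.load('/tmp/left0001.png')    # placeholder path

right = tree.nodes.new('CompositorNodeImage')
right.image = bpy.data.images.load('/tmp/right0001.png')  # placeholder path

# Images are centred on the canvas, so each eye gets pushed half an
# eye-width towards its own side.
shift_l = tree.nodes.new('CompositorNodeTranslate')
shift_l.inputs['X'].default_value = -eye_w / 2

shift_r = tree.nodes.new('CompositorNodeTranslate')
shift_r.inputs['X'].default_value = eye_w / 2

over = tree.nodes.new('CompositorNodeAlphaOver')
out = tree.nodes.new('CompositorNodeComposite')

tree.links.new(left.outputs['Image'], shift_l.inputs['Image'])
tree.links.new(right.outputs['Image'], shift_r.inputs['Image'])
tree.links.new(shift_l.outputs['Image'], over.inputs[1])
tree.links.new(shift_r.outputs['Image'], over.inputs[2])
tree.links.new(over.outputs['Image'], out.inputs['Image'])
[/code]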
Of course you still have to render the sequence twice, I don’t think there is any way round that.
Or do I misunderstand? Do you want a single, superimposed image, or two images side by side?
This is exactly what I did, and the result is visible in the screenshot from the render window I posted above. I know how to set up nodes to stitch together 2 images side-by-side. But that’s done by importing 2 image files from disk, not by combining render layers etc.
Just to clarify, the method I use now is this:
1. I render and save what each camera sees separately (this part can be scripted; see the sketch after this list).
2. I import these 2 files (one per eye) into a different Blender file, where the nodes are set up to stitch the images together.
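Those two render passes can be scripted so both eyes go out in one run. A minimal sketch, assuming (purely for illustration) that the rig’s cameras are named ‘camera_L’ and ‘camera_R’:

[code]
import bpy

scene = bpy.context.scene

# 'camera_L' / 'camera_R' and the output paths are assumptions; rename
# them to match your own rig.
for eye in ('L', 'R'):
    scene.camera = bpy.data.objects['camera_%s' % eye]
    scene.render.filepath = '/tmp/eye_%s_' % eye
    bpy.ops.render.render(animation=True)   # one full pass per eye
[/code]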
What I am trying to do is simplify this procedure. I can do what I want by duplicating the scene twice, then assigning each duplicate to a separate render layer. When this is done, I can use the output from the layers instead of images from disk. However, this method is awkward because it duplicates the whole scene and creates a mess.
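For completeness, the duplicated-scene version looks roughly like this in Python; the scene names are assumptions, and everything downstream of the two Render Layers nodes is the same Translate/Alpha Over graph as in the disk-based sketch above:

[code]
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# One Render Layers node per duplicated scene (hypothetical names);
# these stand in for the two Image nodes that read files from disk.
rl_left = tree.nodes.new('CompositorNodeRLayers')
rl_left.scene = bpy.data.scenes['scene_eye_L']

rl_right = tree.nodes.new('CompositorNodeRLayers')
rl_right.scene = bpy.data.scenes['scene_eye_R']

# From here, wire each output through a Translate node and merge with
# Alpha Over exactly as before.
[/code]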
The anaglyph image above is just to illustrate my point about convergence.
I’m not very up on 3d rendering like this, but in the most recent issue of BlenderArt magazine there is an article about this. It’s quite a long article, but at the end it describes duplicating the scene and combining the results with nodes. I don’t know if their method would simplify things for you or not.
Thanks Randy, I downloaded the magazine and read the whole article. It shows pretty much the same thing, but it’s good to know some other people are trying to do 3d as well.
You say you’re “not very up on 3d rendering like this”; what is your preferred method? Is there a better one? For me, merging the two views into one image makes editing much easier.