rendering left eye/right eye in single pass

Hello Blender Nation!

I’ve been looking for a way to do anaglyph renderings and have found several options; however, I can’t seem to find exactly what I’m looking for. According to the 1/30th rule of deviation in stereo 3D, I need two cameras separated by 1/30th of the distance to the subject. I want to set up my scene with the proper proportions and then render the left and right cams in a single pass so that it only needs to calculate the scene once.
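For example, with the subject 3 m away, the rule gives a 10 cm separation; a trivial sketch of the arithmetic, with example numbers only:

```python
# 1/30th rule of deviation: interaxial separation = subject distance / 30
subject_distance = 3.0                  # meters from cameras to subject (example value)
interaxial = subject_distance / 30.0    # -> 0.1 m, i.e. a 10 cm camera separation
```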

My question is: how do I set up the render so that it will compute the scene once and then deliver outputs from the left and right camera into two separate images?

I know this can be done by using multiple scenes, but this feels hokey to me. I’d like to control the cameras with a parented rig, and have all my geometry in a single scene so that I only have to edit things once.

Thanks for any help in advance.
Monty

Only one camera can be designated as “Active” at any one time in a Scene, so despite its “hokeyness,” the multiple-scene approach is, AFAIK, the only way to set up both views, along with the necessary control of inter-ocular distance, inter-camera axis orientation, and all the other tweaky aspects of anaglyph rendering. Furthermore, only one image (one camera’s view) can be rendered at a time, so even if you use multiple Scenes, each Camera will need to do its rendering in turn if a stereo pair is your goal.

However, since you can Link (not Append) objects from Scene to Scene, I see no reason why the control mechanisms can’t be manipulated in one Scene only, as Linked objects. Something to look into, anyway. The Camera parameters like Lens will need setting individually, but they need to match at all times anyway, so there should be no problems there.
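A minimal sketch of that Linking idea, assuming the modern bpy API; the scene and object names here are hypothetical:

```python
import bpy

# Share the rig objects (one datablock each) across the two render scenes,
# so animating the rig in the control scene drives both eyes.
# Scene and object names are hypothetical.
rig_objects = ("StereoRig", "CamL", "CamR")
for scene_name in ("LeftEye", "RightEye"):
    scene = bpy.data.scenes[scene_name]
    for obj_name in rig_objects:
        obj = bpy.data.objects[obj_name]
        if obj.name not in scene.collection.objects:
            scene.collection.objects.link(obj)  # Link, not Append: same object everywhere
```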

The multiple-Scene approach should also give great flexibility for the compositing needed to output a single anaglyph image if you’re planning on a bi-color approach.

Thanks Chipmasque. I was beginning to wonder if it was possible at all. I guess not. It would be great if multi-cam rendering were integrated so that the render calculations wouldn’t take so long.

I think that’s a false grail, if you think about what you’re describing: two cameras means two different (though very similar) views, so all the calculations would have to be done separately. There’s no real way to share the process between the two images, because they are just different enough to require individual treatment, so to speak. Stereographs recorded to film can be made simultaneously because of the chemical nature of the medium; virtual-image rendering software doesn’t so much record as build the image, a fundamentally different process.

While two images can undoubtedly be built at the same time given some custom programming, whether or not this would lead to any net savings in render time is a matter of some doubt, and would of course depend a lot on the elegance and efficiency of the coding.

It would be an interesting experiment to build a multi-scene stereograph setup and use the Compositor to assemble the separate camera images into a single rendered image, side by side like an old-fashioned stereo pair, then see if there are any render-time savings compared to individual single renders from the two cameras. The Compositor would obviously add some processing overhead, so I think it would increase the render time. But this is the only method I can see working in Blender (pre-2.5; not sure if there are any changes in this regard in the latest edition).

However, if you’re planning on bi-color anaglyphs, this would be a considerably more efficient way than post-processing individual images, since you could do all your lighting and filtration for the stereo pair internally and output the combined image ready for viewing.
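As a sketch of that all-internal approach (assuming a bpy compositor API with the older SepRGBA/CombRGBA node names, and two hypothetical scenes named “Left” and “Right”): red comes from the left eye, green and blue from the right.

```python
import bpy

# Build a red-cyan anaglyph in the Compositor from two scenes' render layers.
# Scene names and node type names (pre-3.3 SepRGBA/CombRGBA) are assumptions.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl_left = tree.nodes.new("CompositorNodeRLayers")
rl_left.scene = bpy.data.scenes["Left"]
rl_right = tree.nodes.new("CompositorNodeRLayers")
rl_right.scene = bpy.data.scenes["Right"]

sep_l = tree.nodes.new("CompositorNodeSepRGBA")
sep_r = tree.nodes.new("CompositorNodeSepRGBA")
comb = tree.nodes.new("CompositorNodeCombRGBA")
out = tree.nodes.new("CompositorNodeComposite")

tree.links.new(rl_left.outputs["Image"], sep_l.inputs["Image"])
tree.links.new(rl_right.outputs["Image"], sep_r.inputs["Image"])
tree.links.new(sep_l.outputs["R"], comb.inputs["R"])  # red channel from the left eye
tree.links.new(sep_r.outputs["G"], comb.inputs["G"])  # green from the right eye
tree.links.new(sep_r.outputs["B"], comb.inputs["B"])  # blue from the right eye
tree.links.new(comb.outputs["Image"], out.inputs["Image"])
```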

Don’t know if this is the same as what you’re looking for?


http://www.noeol.de/s3d/

Not my thread…

"well, here is a little step by step HOWTO to making good anaglyphs and -using blender or any other software- to get full animated anaglyphs

1.- get some 3d glasses (if you buy the srheck 1.5 dvd watch it to learn lot’s of tips about bad anaglyph (it’s mainly colour and separation errors because the the anim wasn’t designed for home anaglyph entertainment.)

2.- search the web for some anaglyph theory and tutorials… get ready to play with gimp or photoshop and render pairs of images.

3.- render scenes with elements near and far away and not too saturated colors (BW is better for starters). you must place pairs of cameras instead of only one. move (some centimeters only) as if cameras were your eyes and you were looking some point in front of you (over 10 meters work fine) so rotate them crossing them slightly, remember…one per each eye ;-). save the pair of images as left&right

4.- open them on your photoed prog and mix the channels (get some tutorials from google)

5.- once you master the still anaglyph tech create your animation in the same way without animating the “crossing angle” between eye cameras (it would brake the depth illusion)

6.- get a free command line anaglyph mixer (i used JAC :: http://www.google.com/search?q=joerg…en-US:official ). Render the both cameras animation and mix the frames using the mixer (jac doen’s works with animations, so i made a ms-dos batch file (.bat) with multiple calls to each pair of images (not manually, i build an actionscript to write the text because it’s the only programming language i use)…

7.- and “boyla” you get an anaglyph image per each pair of rendered files… make an avi, wear your 3d glasses and have a good trip!!!

:slight_smile:

if someone have any questions… just ask.

BTW: i know about the anaglyph sequence plugin on blender tutorial guides but i prefer to do some color fixing to the image pairs before the anaglyph conversion[/b]
…"
Thread here: http://blenderartists.org/forum/showthread.php?t=30555&page=2
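If you’d rather script the channel mixing than drive an external mixer from a batch file, steps 4 and 6 above can be done in a few lines of Python with Pillow; this is a minimal sketch, and the renders/left_*.png file layout is hypothetical:

```python
# Red-cyan anaglyph mix: red channel from the left eye, green and blue from
# the right. Minimal sketch using Pillow; file names and layout are hypothetical.
from pathlib import Path
from PIL import Image

for left_path in sorted(Path("renders").glob("left_*.png")):
    right_path = left_path.with_name(left_path.name.replace("left", "right"))
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")
    r, _, _ = left.split()    # keep only red from the left image
    _, g, b = right.split()   # keep green and blue from the right image
    out_path = left_path.with_name(left_path.name.replace("left", "anaglyph"))
    Image.merge("RGB", (r, g, b)).save(out_path)
```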

Create two cameras and set the distance between them; then, so you can do animated 3D, parent both cameras to an empty. Then don’t touch the cameras, just move the empty. Then you can Ctrl-0 each individual camera and render the image or animation. What you need is a program that puts the left and right images or animations together. I freaked out when I found this software, because I have been investigating and doing 3D anaglyphic and polarized work for a long time. Let me look for a link to the program and a screenshot for your cameras.
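A rough sketch of that two-cameras-on-an-empty rig in Blender’s Python API, assuming the modern bpy API; the names and the 10 cm separation are example values only:

```python
import bpy

interaxial = 0.1  # e.g. 1/30th of a 3 m subject distance (example value)

# The empty is the only thing you animate; both cameras ride along with it.
rig = bpy.data.objects.new("StereoRig", None)  # object with no data = an Empty
bpy.context.scene.collection.objects.link(rig)

for name, offset in (("CamL", -interaxial / 2), ("CamR", interaxial / 2)):
    cam = bpy.data.objects.new(name, bpy.data.cameras.new(name))
    cam.location = (offset, 0.0, 0.0)  # offset along the rig's local X axis
    cam.parent = rig
    bpy.context.scene.collection.objects.link(cam)
```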

Stereo Movie Maker is the name of the software anyone who is into 3D has to have. It takes your Blender left and right animations and puts them together into one for anaglyphic 3D. A must-have. Free and downloadable.

This is a screenshot of my stereo cameras. The large arrows are used so I can easily see where my cameras are pointing. The cross is the empty. (I just use a cross instead of arrows…) If you’re doing animations you HAVE to have both cameras linked. Click on the left camera and render it… then do the same for the right. Save them as left and right files. Then open up Stereo Movie Maker and it will ask you to load the left and right files and do the rest for you! If anyone has any other 3D questions, I have been through hell with 3D and will be happy to help you. Jeff

Attachments: [screenshot of the stereo camera rig described above]
Perhaps you could raytrace everything and use a prism shape in front of the lens that distorts the field of view into two separate images across the frame. Using a wider camera (double width), could you then split the image in the Compositor (left and right) and blend them together?
Seems tedious and render-intensive, but fun in a single scene. I really don’t know if you could warp ray samples through that type of lens object.

The stereo effect depends on having two slightly separated viewpoints with slightly different perspectives and other depth cues, which the mind integrates into stereo depth perception, so splitting a single-camera view in two via a prism wouldn’t carry the necessary information for a true stereo effect.

Using external apps to produce bi-color anaglyphs (like the red-cyan type) is a good way to go, but I think it could all be done within Blender via the Compositor. I wrote a stereo-anaglyph plug-in for another 3D app a number of years back (the attached image is an example of its output, and it could do movies as well), and I have no doubt that the processes I used could be reproduced completely, and more efficiently, in Blender using a multi-Scene setup and the Compositor, maybe with some Material nodes as well.


The Summer Folk, made with Anaglyphos and Pixels 3D Studio

“…I see no reason why the control mechanisms can’t be manipulated in one Scene only, as Linked objects.”

This is all very helpful. Jeffry and others, thanks for your help. The mixing of the anaglyph is not the hard part; it’s acquiring the correct stereo pairs and being aware of the stereo position of objects to create maximum effect. I’ve got the script and some other techniques going now. I’ve also been trying the multi-scene approach and feel much more comfortable with it.

Mostly I’m trying to speed up renders and am looking for a way to cut down on the left/right processing. According to Chipmasque, this is not easily done. However, I have heard that other 3D software can do it. 2.5 has stereo integrated into the game engine; it would be nice to have it integrated in full renders as well.

Not easily done given the current rendering pipeline for Blender, but obviously not outside the realm of customization – Blender as Open Source is designed to be so extended, both via scripting and lower-level coding.

As far as the BGE goes, I’m not familiar with the specifics of its implementation, but if it’s like many other game-engine stereo setups, it deals mainly with stereo-camera setup and twin pipelines for the raster image processing, which can usually be multiplexed in a number of ways for various kinds of stereo viewing. This isn’t the same as a full rendering pipeline, which has to deal with AA (OSA), lighting considerations such as AO, and all the other elements of a full rendering that are not used in GE processing. The GE pipeline is designed for real-time image presentation, but even so I’d be willing to bet that the stereo-pair image pipelines are significantly separated, and add considerable frame-rate loading.

In any case, I agree that stereo imaging could be better integrated, but it has been a rather isolated niche for CGI in general until fairly recently, so whether coding resources have been dedicated to it is open to question. However, as I mentioned, it’s a field ripe for development in parallel with the trunk of Blender development by an independent coding effort.