stereo rendering (for virtual reality)

First, I suppose I could briefly introduce myself. I am Clark Wise, a student at Valparaiso University, and I’m currently undertaking two independent studies. One involves Blender.

Here at VU, the College of Engineering has a Scientific Visualization Lab (SVL), a large virtual reality lab currently based on a VisBox-X2, with major upgrades and expansion coming soon. The basic idea is that we have pairs of projectors, each with a linear polarizing lens filter. The users wear polarized glasses, and through the overlaid images they see in 3D. I know that a number of users here are familiar with and experienced in passive stereo like this, which is why I turn to you. :eyebrowlift:

My goals are to get renders, especially animations, viewable in striking 3D in the SVL. I have one significant setback: I’m a Blender newbie. I’ve been interested in Blender for quite a while, but only now have I begun learning to use it. All at once, I set off to learn about modeling, animation, a little Python scripting, and (I assume) compositing in the process of accomplishing my goals. Eventually, incorporating the game engine would be great, but I’ll save that for a later date.

Present Goals. I can think of two major parts:
Render in stereo. For the purposes of the VU SVL, the only difference between the “two” images should be that one camera is shifted slightly laterally from the other. I put “two” in quotes because it’s really one image – the two main projectors are set up as one double-wide desktop (as in dual-monitor workstations where the second screen is a continuation of the first and you can drag things between them). From what I’ve read around this forum, using composite nodes to stitch my rendered images/frames together may be my ideal solution. However, I realize there are other options available, and I warmly welcome all suggestions and opinions.
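To make the double-wide idea concrete, here’s a minimal sketch in plain Python (no Blender API – the function name and the toy row-list image format are made up for illustration): the left-eye and right-eye renders are simply concatenated row by row into one image twice as wide.

```python
# Minimal sketch of the side-by-side stitch (plain Python, no Blender API).
# Each "image" is a list of pixel rows; real renders would be joined in
# Blender's compositor or post-processed with an image library instead.

def stitch_side_by_side(left, right):
    """Concatenate two equal-height images row by row into one
    double-wide image, matching a dual-head desktop layout."""
    if len(left) != len(right):
        raise ValueError("images must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left, right)]

# Two tiny 2x2 "images" of single-character pixels:
left_eye = [["L", "L"], ["L", "L"]]
right_eye = [["R", "R"], ["R", "R"]]

wide = stitch_side_by_side(left_eye, right_eye)
# wide is 2 rows x 4 columns: [['L','L','R','R'], ['L','L','R','R']]
```

The compositor achieves the same effect by translating one render half the final width left, the other half right, and combining them over a double-wide output.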

Get scale/proportions correct. I’m sure there’s an obvious solution for this that I’m simply unaware of, given my very limited experience with Blender. Basically, when I set the cameras eye-width apart, I need to make sure the model is at the correct relative scale. For example, an imported CAD model of a Harley motorcycle I have would otherwise seem really close and be quite difficult (and possibly painful) to focus on with your eyes. I think this is only a matter of someone telling me where the functionality for this is and what it’s called.

Future Goals, to hit the big ones I foresee:
Game Engine. As I said, this is more distant, and we’ll touch on this at some later date.

n displays. So far, the system I’ve been describing involves only one screen (two projectors). However, the actual system also has a floor (another pair of projectors), and upgrades will soon add side walls (two more pairs, totaling 8 projectors). Thus, instead of 2 cameras, I would like to be able to extrapolate to 4, or 8, or n. I assume this will be a trivial adjustment to the method used for the initial 2, so it’s not a big deal.
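One way to picture the extrapolation: each display surface gets its own stereo pair, with the two eyes offset perpendicular to that surface’s view direction. A rough sketch in plain Python (the function, the yaw-only wall description, and the angles are all hypothetical – a real CAVE floor would also need pitch, and the actual geometry depends on the installation):

```python
import math

# Sketch: going from one stereo pair to one pair per display wall.
# Each wall is described only by its yaw angle in degrees; the left and
# right eyes are offset along the "right" direction of that wall's view.

EYE_SEPARATION = 0.065  # metres, roughly human interocular distance

def stereo_pairs(wall_angles_deg, separation=EYE_SEPARATION):
    """Return (angle, left_eye_xy, right_eye_xy) for each wall."""
    pairs = []
    for deg in wall_angles_deg:
        a = math.radians(deg)
        # Unit vector pointing "right" relative to this wall's view direction:
        right = (math.cos(a), math.sin(a))
        half = separation / 2.0
        left_eye = (-half * right[0], -half * right[1])
        right_eye = (half * right[0], half * right[1])
        pairs.append((deg, left_eye, right_eye))
    return pairs

# Four walls at 90-degree increments -> four stereo pairs, eight cameras:
for deg, l, r in stereo_pairs([0, 90, 180, 270]):
    print(deg, l, r)
```

However the offsets are computed, the stitching step generalizes the same way: n pairs of renders combined onto one n-wide desktop.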

Well, I thought I had another, but I’m coming up blank right now. I think that’s plenty of writing for now. :rolleyes:

I’m trying to learn everything quickly, but under my heavy class load and busy schedule, I can only manage so much. So, if you wouldn’t mind, please bear with me and be thorough and detailed in your explanations. That would be much appreciated!

Many thanks are in order for paprmh, who has been very gracious and helpful already. And I thank everyone for any input they offer. I’ll shut up now.

hey wiseguy316, when you pm’d me and asked if I was willing to give you some tips on stereo rendering you didn’t say anything about this…:eek: I just thought you wanted to do some simple xeyed stereo pair animations…wow! cool stuff.

Anyway, you might want to check out these links:

I didn’t look too hard and didn’t find anything about what the results were, but if nothing else, there are a few people listed whom you might want to try to get in touch with. They will probably be able to help you much more than I ever could…

I’m putting together a new blend with my old stereo cam rigs, plus both a new node setup and a Sequence Editor (SE) setup to stitch the images together. I can post it for you if you want… This is from an old thread, slightly edited (yeah, I was too lazy to type it all again :o )

Well, given that I was going to post all that information anyway, I figured I didn’t need to include those details as long as I got the results I was looking for. :slight_smile:

Thanks again. I’ll work more on this over the weekend, so any more questions I have I’ll know in the next couple days.

Wouldn’t it be easier to set up 2 separate machines for it?
It would cause much less confusion, and the perspective could be adjusted more easily.

Of course this conflicts with your plan, but it’s just my 2 cents.

Set up two different machines how? I think the way I have it is pretty straightforward…

I mean set up 2 machines, one for each projector: load the scene, adjust the camera offset to match the “stereo effect,” and render. If both are set to the same resolution and settings, the images should match perfectly.

You could even adjust the stereo offset “live” in the preview window. I think this would be the least complicated version, since it leaves the machines completely standalone and you don’t have to fiddle with nodes, render layers, and monitor outputs. Of course, this involves buying and setting up a second machine. I saw this setup in a CAVE somewhere in Austria.

Of course, this all comes down to money and finding the easiest/cheapest solution.

There are many roads leading to Rome :slight_smile:

That sounds more complicated and expensive than the current setup. How would you synchronize the two machines? And what about running programs that display in stereo – how would you keep them synced and running identically on both machines?

Doesn’t sound too feasible for this… Besides, we actually have four projectors right now. Soon, we’ll have eight. Someday, perhaps we’ll have all twelve. Now, how many machines do you suggest I buy? :eek:

Anyway, this is the way my setup works, so this is what I’m going to work on. :cool:

Alright, good progress is being made. I’ve got a makeshift multi-camera rig, with which I can render from different angles. And, with all credit to paprmh’s example, I can stitch together my renders with composite nodes. I have one major hurdle left to clear: scale.

Since I’m a Blender newbie, I don’t know an easy way to ensure things are to scale. For example, I have a CAD model of a motorcycle, and I want it to look about the size of a real motorcycle, neither minuscule nor gigantic. In general, what’s good practice (and what features are available) for dealing with the relative scale of objects? And what sort of unit system is available?

Also, how does “scale” under camera lens (in orthographic mode) relate, and how does it work? How should I use it?

At this point, I play things by guess-and-check, but it’s highly inefficient. Thanks!

The “scale” under camera lens just affects how large the camera object looks in the 3D view. It’s there if you already have something parented to the camera, but want to resize it to make more sense. If you just scale the camera with S-KEY then it will resize any objects parented to the camera, potentially throwing things off.

As far as scale goes - you’re fine as long as you’re consistent. It doesn’t matter if you have a camera 3 blender units away from a 1 blender unit cube, or a camera 300 blender units away from a 100 blender unit cube - they will look exactly the same.
So just find out how long the motorcycle is in real life, and how many Blender units long the model is. Divide the two numbers and you’ve got your scale ratio.
It helps that with real-life objects, the audience already knows how big these things are and so they will think they’re that big. :wink:
Just make sure that the size of the roads, buildings, trees, and biker are the same size proportionally to the bike. :slight_smile:

This consistency also applies to your camera(s). It should be the same distance from your objects as a camera crew would be. For the stereo effect, the cameras should be the same distance apart as human eyes are.
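As a quick sketch of that arithmetic in plain Python (the function names and the ~65 mm interocular figure are my own illustration, and the motorcycle numbers are made up): the same ratio that scales the model also scales the camera separation.

```python
# Sketch: matching eye separation to scene scale, given the real-world
# length of a reference object and its length in Blender units.

REAL_EYE_SEPARATION_M = 0.065  # ~65 mm, a typical human interocular distance

def units_per_metre(model_length_units, real_length_m):
    """Scale ratio of the scene: Blender units per real-world metre."""
    return model_length_units / real_length_m

def camera_separation(model_length_units, real_length_m,
                      eye_sep_m=REAL_EYE_SEPARATION_M):
    """Eye separation expressed in Blender units for this scene."""
    return units_per_metre(model_length_units, real_length_m) * eye_sep_m

# A motorcycle about 2.2 m long that spans 4.4 Blender units:
scale = units_per_metre(4.4, 2.2)    # 2.0 units per metre
sep = camera_separation(4.4, 2.2)    # 0.13 units between the two cameras
```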

As for lens values - it’s mainly determined by how big on the screen you want the picture to be.
If you’re shooting for true realism, then the field-of-view of the camera should equal the size (in degrees) of the screen for the audience. Smaller lens values will give an effect of looking through binoculars, and larger lens values will give sort of a fish-eye look. I think.
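To put numbers on that, here’s a small sketch relating Blender’s “lens” value (a focal length in millimetres) to horizontal field of view. This assumes Blender’s traditional 32 mm film-back width, which is how older versions interpreted the lens value; the helper names are my own.

```python
import math

# Sketch: Blender "lens" value (focal length, mm) <-> horizontal field of
# view, assuming a 32 mm camera aperture width. To match the audience's
# screen, solve for the lens that gives the screen's angular size.

SENSOR_WIDTH_MM = 32.0  # assumed Blender film-back width

def fov_degrees(lens_mm, sensor_mm=SENSOR_WIDTH_MM):
    """Horizontal field of view for a given focal length."""
    return math.degrees(2.0 * math.atan(sensor_mm / (2.0 * lens_mm)))

def lens_for_fov(fov_deg, sensor_mm=SENSOR_WIDTH_MM):
    """Focal length needed for a given horizontal field of view."""
    return sensor_mm / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Blender's default 35 mm lens gives roughly a 49-degree field of view;
# a wall filling 90 degrees of the viewer's vision needs a 16 mm lens.
print(round(fov_degrees(35.0), 1))   # ~49.1
print(round(lens_for_fov(90.0), 1))  # 16.0
```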

Even still, you can play around with the settings a bit. Knowing what the effects are helps to make trial-and-error much more efficient.

Thanks a lot! Those are some great tips. So far, you guys have helped me accomplish everything I hoped to.

yea! another happy customer!

I personally think 3D display technologies are going to be the next big consumer thing, so your efforts are well spent. Thanks for using Blender! Please post and share your success with the community, OK?

@Alitorious: I think what he was talking about is how the lens button turns into a scale button when you toggle the orthographic button…you are referring to the size button.

First of all, don’t use ortho(!)… perspective cues help establish the 3D effect. If you toggle ortho on, all size-vs-distance information is lost and you lose a major part of your 3D effect. When ortho is on, scale more or less zooms the camera in or out.

As far as scaling your cam separation to the scene goes, that’s why I told you about my little Suzanne cam rig. With everything parented to Suzanne, all you do is scale Suzanne to the appropriate size for the scene, and the camera separation is automatically set for you.
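The reason the rig works is just how parenting composes transforms: a child’s world position is its local offset multiplied by the parent’s scale, so scaling the parent rescales the eye separation for free. A toy illustration in plain Python (ignoring rotation, with made-up numbers):

```python
# Sketch of why scaling the parent object rescales camera separation:
# a child's world position is its local offset times the parent's scale
# (rotation ignored here for simplicity).

def world_position(parent_scale, parent_location, local_offset):
    """Uniform-scale parent transform applied to a child's local offset."""
    return tuple(p + parent_scale * o
                 for p, o in zip(parent_location, local_offset))

# Eyes parented half an interocular distance left/right of the rig origin:
left_local = (-0.0325, 0.0, 0.0)
right_local = (0.0325, 0.0, 0.0)

# Scale the whole rig by 2 and the separation doubles with it:
l = world_position(2.0, (0.0, 0.0, 0.0), left_local)
r = world_position(2.0, (0.0, 0.0, 0.0), right_local)
separation = r[0] - l[0]   # 0.13 instead of 0.065
```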

@Alitorious: I think what he was talking about is how the lens button turns into a scale button when you toggle the orthographic button…you are referring to the size button.
Ahh, my mistake. Yes, that slider just controls the ortho ‘zoom.’

Incidentally, using ortho would give quite the binoculars effect. One of the biggest cues for 3D is perspective lines and vanishing points – in a perspective projection, parallel lines appear to converge toward a vanishing point. The farther away and more zoomed-in the camera is, the smaller the angle those lines make with each other. In a true orthographic view, all parallel lines remain parallel.
Actually, until not too long ago, Blender used a trick for orthographic rendering: it just moved the camera back 100× and zoomed in 100×.

Thanks, guys. I realize I won’t typically use orthographic view; the 3D effect just wasn’t as obvious there as with a normal perspective camera.

I do, in fact, plan to enter the consumer market at some point with technologies utilizing Blender.

Also, I didn’t come to this forum just for an answer to my question (though that’s originally why I found it). It’s a great community, and I can learn a lot just by poking around for fun. I plan to stay around for quite some time.