First, I suppose I could briefly introduce myself. I am Clark Wise, a student at Valparaiso University, and I’m currently undertaking two independent studies. One involves Blender.
Here at VU, the College of Engineering has a Scientific Visualization Lab (SVL), a large virtual reality lab currently built around a VisBox-X2, though major upgrades and expansion are coming soon. The basic idea is that we have pairs of projectors, each with a linear polarizing lens filter. The user(s) wear polarized glasses, and the overlaid images let them see in 3D. I know a number of users here are familiar and experienced with passive stereo like this, which is why I turn to you. :eyebrowlift:
My goal is to get renders, especially animations, viewable in striking 3D in the SVL. I have one significant setback: I’m a Blender newbie. I’ve been interested in Blender for quite a while, but only now have I begun learning to use it. All at once, I’ve set off to learn modeling, animation, a little Python scripting, and (I assume) compositing in the process of accomplishing my goals. Eventually the incorporation of the game engine would be great, but I’ll save that for a later date.
Present Goals. I can think of two major parts:
Render in stereo. For the purposes of the VU SVL, the only difference between the “two” images should be that one camera is offset slightly laterally from the other. I put “two” in quotes because it’s really one image – the two main projectors are set up as one double-wide desktop (as in dual-monitor workstations where the second screen is just a continuation of the first and you can drag things from one to the other). From what I’ve read around this forum, using composite nodes to stitch together my rendered images/frames may be my ideal solution (see the first sketch after this list). However, I realize there are other options available, and I warmly accept all suggestions and opinions.
Get scale/proportions correct. I’m sure there’s an obvious solution for this that I’m simply unaware of, given my very limited experience with Blender. Basically, the issue is that when I set the cameras an eye-width apart, I need the model’s relative scale to be in correct real-world proportion. For example, an imported CAD model of a Harley motorcycle I have would otherwise seem really close and be quite difficult (and possibly painful) to focus on with your eyes. I think this is only a matter of someone telling me where the functionality for this is and what it’s called (the second sketch below shows the kind of thing I mean).
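To make the stereo part concrete, here’s a rough Python sketch of what I’m picturing (with the caveat that I’m new at this: it assumes a recent Blender scripting API, and the eye-separation value and file names are placeholders I made up). It clones the active camera into a laterally offset left/right pair and renders each eye to its own image, which the compositor could then stitch into one double-wide frame:

```python
import bpy
from mathutils import Matrix

EYE_SEPARATION = 0.065  # interocular distance in Blender units (~6.5 cm if 1 unit = 1 m)

scene = bpy.context.scene
base = scene.camera  # the camera the shot was framed with

for name, offset in (("left", -EYE_SEPARATION / 2), ("right", EYE_SEPARATION / 2)):
    cam = base.copy()  # object copy sharing the same camera data-block
    cam.name = f"Camera_{name}"
    scene.collection.objects.link(cam)
    # Shift along the camera's local X axis so the offset stays lateral
    # no matter which way the camera is pointed.
    cam.matrix_world = base.matrix_world @ Matrix.Translation((offset, 0.0, 0.0))
    scene.camera = cam
    scene.render.filepath = f"//stereo_{name}_"  # '//' is relative to the .blend file
    bpy.ops.render.render(write_still=True)
```

Rendering each eye at a projector’s native resolution and laying the results side by side should give the double-wide desktop image directly.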
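And for the scale part, here’s the kind of fix I imagine, again only a sketch (the object name “Harley” and the 1.1 m target height are example values I made up; the unit settings are from a recent Blender API):

```python
import bpy

# Work in meters so the 0.065-unit eye separation above matches a real
# interocular distance.
scene = bpy.context.scene
scene.unit_settings.system = 'METRIC'
scene.unit_settings.scale_length = 1.0  # 1 Blender unit = 1 meter

# Uniformly rescale an imported model to a known real-world size.
obj = bpy.data.objects["Harley"]  # placeholder object name
factor = 1.1 / obj.dimensions.z   # desired height (m) / current height
obj.scale = tuple(s * factor for s in obj.scale)
```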
Future Goals, to hit the big ones I foresee:
Game Engine. As I said, this is more distant, and we’ll touch on this at some later date.
n displays. By this point, the system I’ve been describing has involved only one screen (two projectors). However, the actual system also has a floor (another pair of projectors), and upgrades will soon add side walls (two more pairs, totaling 8 projectors). Thus, instead of 2 cameras, I’d like to be able to extrapolate to 4, 8, or n. I assume extending the 2-camera method will be a fairly simple adjustment (a rough sketch of what I mean follows this list).
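Here’s the rough generalization I have in mind, again just a hedged sketch (the wall names and rotation angles are made up for illustration, and it assumes a recent Blender API): one stereo pair of cameras per display surface, all sharing the viewer’s head position.

```python
import bpy
from math import radians
from mathutils import Euler, Matrix

EYE_SEPARATION = 0.065

# One entry per display surface: the rotation that aims a camera at that
# wall from the viewer's position. These angles are illustrative only.
WALLS = {
    "front": Euler((radians(90), 0.0, 0.0)),
    "floor": Euler((0.0, 0.0, 0.0)),  # a zero-rotation camera looks straight down
    "left":  Euler((radians(90), 0.0, radians(90))),
    "right": Euler((radians(90), 0.0, radians(-90))),
}

scene = bpy.context.scene
head = scene.camera  # stand-in for the viewer's head position

for wall, rot in WALLS.items():
    for eye, offset in (("L", -EYE_SEPARATION / 2), ("R", EYE_SEPARATION / 2)):
        cam = head.copy()
        cam.name = f"Cam_{wall}_{eye}"
        scene.collection.objects.link(cam)
        # Place at the head position, aim at the wall, then offset the eye
        # laterally in the camera's local frame.
        cam.matrix_world = (
            Matrix.Translation(head.location)
            @ rot.to_matrix().to_4x4()
            @ Matrix.Translation((offset, 0.0, 0.0))
        )
```

From what I’ve read, a real CAVE setup would also want off-axis (asymmetric-frustum) projection per wall rather than plain rotated cameras – Blender’s camera shift_x/shift_y settings look like a candidate for that – so this sketch only covers camera placement.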
Well, I thought I had another, but I’m coming up blank right now. I think that’s plenty of writing for now. :rolleyes:
I’m trying to learn everything quickly, but with my heavy class load and busy schedule, I can only manage so much. So, if you wouldn’t mind, please bear with me and be thorough and detailed in your explanations. That would be much appreciated!
Many thanks are in order for paprmh, who has been very gracious and helpful already. And I thank everyone for any input they offer. I’ll shut up now.