I have seen Avatar in 3D, and since then I can’t stop thinking about the techniques. Anyway, I think that if the Open Source community could get the technology ready, it could become quite important and maybe even the standard for this kind of technology.
So, out of curiosity, I started to collect some ideas about how this could be done (and how it is already done). As a first thought, I have to say that if a group of people actually wanted to work on it, a wiki would be way better than my crappy brainstorming. Not all of this is Blender-related, since Blender is only the development platform for those pictures, films or games, not the viewing platform. Anyway, here are my thoughts:
1 - 3D / Stereoscopic Camera
Stereoscopic devices (hardware like screens, projectors and so on) are expected to come out at affordable prices for home use in the next 1-2 years (2010-2012), according to some news portals. There are different methods:
- cross-eye or parallel images
- polarized glasses (2 graphics card outputs needed)
- shutter glasses
Blender should have a built-in stereo camera. This camera would need to be able to show depth of field (like http://www.blenderguru.com/creating-depth-of-field/) and adjust the two cameras to the object of interest. What I mean is that in human vision, most of the time, our eyes are not perfectly parallel; they both look at an object of interest. There are certain rules for how far apart the two cameras need to be to see the scene best in 3D (1/30th of the focal distance). These are some references: http://blenderartists.org/forum/showthread.php?t=175125&highlight=stereoscopic+camera, http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/stereographics/, http://blenderartists.org/forum/showthread.php?t=175125&highlight=stereoscopic+camera&page=2. It seems that the user ROUBAL knows quite a bit about all that. The standard distance between the two cameras is 6.5-7 cm, but if this stays fixed, a focused object far away from the stereo camera looks flat, so there needs to be a way of adapting it. There should also be a way to adjust what appears to be in front of the screen and what appears to be behind it, but that might just be the focal distance; if so, it wouldn’t need any adjustments. For testing, i.e. to check whether the settings are OK (if no better output is available), anaglyphs could always be used. So when actually “filming”, there has to be a way to adjust the stereo camera position, the focal distance (distance to the object in focus) and the distance between the two cameras.
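To make the 1/30 rule concrete, here is a minimal sketch of how such a stereo rig could compute its own separation and toe-in from the focal distance. The function name and interface are made up for illustration; this is not an existing Blender API.

```python
import math

def stereo_rig(focal_distance):
    """Compute interaxial separation and toe-in angle for a stereo camera pair.

    Uses the 1/30 rule of thumb mentioned above: the two cameras are
    separated by 1/30th of the distance to the object in focus, and each
    camera is rotated so its axis passes through the focus point.
    """
    separation = focal_distance / 30.0  # 1/30 rule
    half = separation / 2.0
    # Angle each camera turns inward so both look at the object of interest
    toe_in_deg = math.degrees(math.atan2(half, focal_distance))
    return separation, toe_in_deg

# Example: object of interest 3 m away
sep, angle = stereo_rig(3.0)
# sep is 0.10 m, in the same ballpark as the 6.5-7 cm human eye baseline
```

Note how the separation grows with the focal distance, which is exactly the adaptation described above: keeping a fixed 6.5-7 cm baseline would flatten distant objects.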
2 - 3D / Stereoscopic Output
There is still the problem of how the output would be done. There are 2 possibilities here:
2.1 - Use of Blender with different output formats
The user can choose which of these systems the output should be rendered for. Playback is then only possible on a device supporting that method. I’d suggest modifying the script found at http://www.noeol.de/s3d/, because it currently uses 2 scenes instead of a single scene with a stereo camera, which means that everything changed in one scene has to be changed in the other one as well (see chap. 9)
2.2 - A stereoscopic video codec
The player (hardware or software) decides which method to use. Blender can render one output that works on all players. It would work like this: the image saved is the one for one eye (for example the right eye), and the other image is stored only as a delta, the difference between the two. This way, the player decodes both images on the fly and decides what to do with them. You could even watch such a movie in 2D (by only showing the right-eye image, for example). You would then need at least 2x24 images per second.
This would need a codec capable of saving this kind of information; imagine Xvid3D or Ogg Theora3D. Video player software would also need to support the feature (you could then just set what kind of output you want in Xine, MPlayer, VLC).
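The delta idea can be sketched in a few lines. This is only the core concept on toy grayscale pixel lists; a real codec (like the imagined Xvid3D) would of course compress the delta, which is cheap to store because the two eye views are nearly identical.

```python
def encode_stereo(right, left):
    """Store the right-eye frame plus a per-pixel delta to the left-eye frame."""
    delta = [l - r for l, r in zip(left, right)]
    return right, delta

def decode_stereo(right, delta):
    """Reconstruct both eye views on the fly from the stored data."""
    left = [r + d for r, d in zip(right, delta)]
    return right, left

# Toy 4-pixel grayscale frames
right = [10, 20, 30, 40]
left  = [12, 20, 28, 40]
stored_right, stored_delta = encode_stereo(right, left)
r2, l2 = decode_stereo(stored_right, stored_delta)
# A 2D player simply shows stored_right and ignores the delta entirely
```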
3 - Camera control
As I said before, when actually “filming”, there has to be a way to adjust the stereo camera position, the focal distance (distance to the object in focus) and the distance between the two cameras. This range of options would be easier to control if you could fly through the scene using a joystick. Some laptops have a hard-disk protection device (HDAPS) whose accelerometer can be used as a joystick in Linux (using the joydev module; hdaps needs to be loaded). You could then basically walk around with your laptop and see what the scene looks like (like Cameron’s virtual camera; I’d suggest watching the “Behind the Scenes” explanations of Avatar, see pirate bay or others for a torrent, it’s very interesting)
In this mode, Blender would save the path of whatever movement you make and use it later for rendering. You would see a very rough preview of whatever happens in the digital scene while walking through it. The problem here: Blender might need to know where the virtual camera is (it needs some kind of positioning system), unless a real joystick is used.
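Recording the walked path and replaying it at render time could look like the following minimal sketch: timestamped positions are appended while the user moves, and the renderer later samples the path at arbitrary frame times by linear interpolation. Function names are illustrative only.

```python
def record(path, t, position):
    """Append a timestamped camera position while the user walks the scene."""
    path.append((t, position))

def sample(path, t):
    """Linearly interpolate the recorded path at render time t."""
    for (t0, p0), (t1, p1) in zip(path, path[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(p + a * (q - p) for p, q in zip(p0, p1))
    return path[-1][1]  # hold the last position past the end of the walk

path = []
record(path, 0.0, (0.0, 0.0, 0.0))
record(path, 2.0, (4.0, 0.0, 2.0))
# Halfway through the walk (t = 1.0) the camera sits at (2.0, 0.0, 1.0)
```

A real implementation would probably use smoother interpolation (e.g. splines) to hide the jitter of hand-held motion, but the idea of "walk once, render later" is the same.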
4 - Moving protagonists
This is the trickiest part because it needs some hardware:
Like for the Na’vi in Avatar or Gollum in The Lord of the Rings, a Blender plugin would be needed that allows recording the movements of actors (even of their eyelids) and applying them to a digital protagonist. This would happen on the fly so that the result can be seen in the virtual camera (mentioned before). This information would then also be saved for rendering later on. A green (or blue) screen would be used for the shots.
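The "apply recorded movement to a digital protagonist" step is essentially retargeting: per frame, each tracked actor joint drives a corresponding bone on the digital rig. A toy sketch of that mapping, with all names and the dict-based data layout being my own assumptions, not a real motion-capture format:

```python
def retarget(captured_frames, bone_map):
    """Map recorded actor joint rotations onto the digital character's bones.

    captured_frames: list of {actor_joint: rotation} dicts, one per frame.
    bone_map: actor joint name -> digital rig bone name (hand-authored,
    since the digital body has different proportions than the actor).
    """
    return [
        {bone_map[joint]: rot for joint, rot in frame.items() if joint in bone_map}
        for frame in captured_frames
    ]

# Two captured frames: the left eyelid closes between them
frames = [{"l_eyelid": 0.2, "head": 0.1}, {"l_eyelid": 0.9, "head": 0.15}]
digital = retarget(frames, {"l_eyelid": "navi_l_eyelid", "head": "navi_head"})
# digital now holds per-frame bone rotations, ready to key and render later
```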
Problems with this are:
- Like before, Blender might need to know where the virtual camera is (it needs some kind of positioning system), unless a real joystick is used.
- The movements of the actors would need to be recorded by one or more additional cameras mounted on tripods or similar. This is one of the trickiest parts.
- If there are any non-digital protagonists, a real-world camera needs to be mounted on the virtual camera so that it basically sees the same thing the virtual camera does (except it sees a green screen which gets replaced by the rendered scene)
- Physics and collisions: if the actors touch each other during filming, but their digital bodies are bigger than, or very different in shape from, their real ones, those digital bodies mustn’t go inside each other just because the sizes don’t match.
- An optional but interesting feature would be if the digital bodies could get hurt, for example.
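The collision problem from the list above can be illustrated with the simplest possible model: treat each digital body as a sphere and push the pair apart whenever the oversized digital shapes overlap, even though the tracked actors themselves may legitimately be touching. This is only a sketch of the idea; a real solution would use the rig's actual collision shapes.

```python
import math

def separate(p1, p2, r1, r2):
    """Push two digital bodies (modelled as spheres) apart if they overlap."""
    dx = [b - a for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(d * d for d in dx))
    overlap = (r1 + r2) - dist
    if overlap <= 0 or dist == 0:
        return p1, p2  # no overlap (or degenerate case): leave as tracked
    n = [d / dist for d in dx]          # unit vector from body 1 to body 2
    half = overlap / 2.0                # split the correction between both
    new_p1 = tuple(a - ni * half for a, ni in zip(p1, n))
    new_p2 = tuple(b + ni * half for b, ni in zip(p2, n))
    return new_p1, new_p2

# Actors' tracked positions are 1.0 apart, but each digital body has
# radius 0.8, so the spheres overlap by 0.6 and get pushed 0.3 each way.
a, b = separate((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.8, 0.8)
```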
Anyway, I have no skills in programming, writing scripts and so on. I’m just doing some artwork (and I’m a beginner with Blender). I know this is all very high-level stuff, and it’s supposed to be brainstorming. I hope it helps someone anyway.