Advanced filming and effects (inspired by Cameron's Avatar)

Hi,

I have seen Avatar in 3D, and since then I can’t stop thinking about the techniques. Anyway, I think that if the Open Source Community could get the technology ready, it could become pretty important and maybe even the standard in this field.
So out of curiosity, I started to collect some ideas about how this could be done (and how it is done). As a first thought, I have to say that if a group of people actually wanted to work on it, a wiki would be way better than my crappy brainstorming. Not all of this is Blender-related, since Blender is only the development platform for those pictures, films or games, not the viewing platform. Anyway, here are my thoughts:

1 - 3D / Stereoscopic Camera
Stereoscopic devices (hardware like screens, projectors and so on) are planned to come out in the next 1-2 years (2010-2012) for home use at affordable prices, according to some news portals. There are different methods:

  • cross-eye or parallel images
  • polarized glasses (2 graphic card outputs needed)
  • Anaglyphs
  • Shutter glasses

Blender should have a built-in stereo camera. This camera would need to be able to show depth of field (like http://www.blenderguru.com/creating-depth-of-field/) and adjust both cameras to the object of interest. What I mean is that in human vision, most of the time, our eyes are not perfectly parallel but both look at an object of interest. There are certain rules on how far the two cameras need to be separated from each other to get the best 3D effect (1/30th of the focal distance). Here are some references: http://blenderartists.org/forum/showthread.php?t=175125&highlight=stereoscopic+camera, http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/stereographics/, http://blenderartists.org/forum/showthread.php?t=175125&highlight=stereoscopic+camera&page=2. It seems that the user ROUBAL knows quite a bit about all that. The standard distance between the two cameras is 6.5-7 cm, but if this stays fixed, a focused object far away from the stereo camera looks flat, so there needs to be a way of adapting it. There should also be a possibility to adjust what appears to be in front of the screen and what appears to be behind it, but that might just be the focal distance; if so, it wouldn’t need any adjustments. For testing, i.e. to check if the settings are OK (if no better output is available), anaglyphs could always be used. So when actually “filming”, there has to be a possibility to adjust the stereo camera position, the focal distance (distance to the object in focus) and the distance between the two cameras.
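Just to make the separation rule concrete, here is a minimal sketch (plain Python; the clamp values are only illustrative) of computing the distance between the two cameras from the focal distance, never dropping below the ~6.5 cm human baseline mentioned above:

```python
# Rough illustration of the 1/30th rule: the interaxial separation grows with
# the distance to the object in focus, but never drops below the ~6.5 cm human
# eye baseline. The clamp limits here are arbitrary example values.

def interaxial_separation(focus_distance_m, min_sep_m=0.065, max_sep_m=0.5):
    """Return the camera separation (in metres) for a given focus distance."""
    sep = focus_distance_m / 30.0
    return max(min_sep_m, min(sep, max_sep_m))

print(interaxial_separation(1.0))   # close focus clamps to the 6.5 cm baseline
print(interaxial_separation(10.0))  # distant focus -> roughly 33 cm separation
```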

2 - 3D / Stereoscopic Output
There is still the problem of how the output would be done. There are 2 possibilities here:
2.1 - Using Blender with different output formats
The user chooses which one of these systems the output should be rendered for. Playback is then only possible on a device supporting that feature. I’d suggest modifying the script found at http://www.noeol.de/s3d/ because it uses 2 scenes instead of only one with a stereo camera, which means that everything changed in one scene has to be changed in the other one as well (see chap. 9).
2.2 - A stereoscopic video codec
The player (hard- or software) decides which method to use. Blender can render one output that works on all players. It would work like this: the image saved is the one for one eye (for example the right eye), and the other image is only a delta, the difference between the two. This way, the player decodes both images on-the-fly and decides what to do with them. You could even watch a movie like that in 2D (by only showing the right-eye image, for example). You would then need at least 2x24 images per second.
This would need a codec capable of saving this kind of information. Imagine Xvid3D or Ogg Theora3D. Video player software would also need to support the feature (you could then just set what kind of output you want in Xine, MPlayer, VLC).
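To illustrate what such a “one eye plus delta” stream could look like in principle, here is a hedged sketch using numpy arrays as stand-ins for decoded frames. The codec names above are imaginary, and a real codec would of course compress the delta rather than store it raw:

```python
# Minimal sketch of the "one eye + delta" idea: store the right-eye frame plus
# the signed difference to the left eye, and rebuild both on playback.
# A 2D player could simply ignore the delta and show the right eye only.
import numpy as np

def encode_stereo_pair(right_eye, left_eye):
    """Store the right-eye frame plus the signed difference to the left eye."""
    delta = left_eye.astype(np.int16) - right_eye.astype(np.int16)
    return right_eye, delta

def decode_stereo_pair(right_eye, delta):
    """Rebuild both frames from the stored pair."""
    left_eye = np.clip(right_eye.astype(np.int16) + delta, 0, 255).astype(np.uint8)
    return right_eye, left_eye

# Example with random 8-bit frames standing in for real video
r = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
l = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
stored_r, stored_delta = encode_stereo_pair(r, l)
decoded_r, decoded_l = decode_stereo_pair(stored_r, stored_delta)
assert np.array_equal(decoded_l, l)
```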

3 - Camera control
As I said before, when actually “filming”, there has to be a possibility to adjust the stereo camera position, the focal distance (distance to the object in focus) and the distance between the two cameras. This range of options would be easier to control if you could fly through the scene using a joystick. Some laptops have a hard-disk protection device (HDAPS) that can be used as a joystick in Linux (using the joydev module; hdaps needs to be loaded). You could then basically walk around with your laptop and see what the scene looks like (like Cameron’s virtual camera; I’d suggest watching the “Behind the Scenes” explanations of Avatar, see Pirate Bay or others for a torrent, it’s very interesting).
In this mode, Blender would save the path of whatever movement you make and use that later on for rendering. You would see a very rough outline of whatever happens in the digital scene while walking through it. Problem here: Blender might need to know where the virtual camera is (it needs some kind of positioning system), unless a real joystick is used.
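As a very rough sketch of the “save the path for rendering later” part, here is what baking sampled camera positions into keyframes could look like with the 2.5-series bpy API. `read_tracker_sample()` is a hypothetical stand-in for whatever actually delivers position data (joystick, HDAPS, or a real positioning system), not a real Blender or OS API:

```python
# Bake sampled camera positions/rotations into keyframes so the "walked" path
# can be reused for the final render. Assumes a camera object named "Camera".
import bpy

def read_tracker_sample(frame):
    """Placeholder: return (location, rotation_euler) for this frame."""
    return (frame * 0.1, 0.0, 1.7), (1.2, 0.0, frame * 0.02)

camera = bpy.data.objects["Camera"]
scene = bpy.context.scene

for frame in range(scene.frame_start, scene.frame_end + 1):
    loc, rot = read_tracker_sample(frame)
    camera.location = loc
    camera.rotation_euler = rot
    camera.keyframe_insert(data_path="location", frame=frame)
    camera.keyframe_insert(data_path="rotation_euler", frame=frame)
```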

4 - Moving protagonists
This is the most tricky part because it needs some hardware:
Like the Na’vi in Avatar or Gollum in The Lord of the Rings, we would need a Blender plugin that can record the movements of actors (even of eyelids) and apply them to a digital protagonist. This would happen on-the-fly so that it can be seen in the virtual camera (mentioned before). This information would then also be saved for rendering later on. A green (or blue) screen would be used for the shots.
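A hedged sketch of the retargeting half of this idea, assuming the tracked markers already arrive in Blender as empties: each pose bone simply follows its marker with a Copy Location constraint (2.5-series bpy API; all marker and bone names below are made up, and real capture data would of course need proper solving and scaling):

```python
# Make armature bones follow tracked marker empties via Copy Location
# constraints. Assumes an armature object named "Armature" and empties with
# the (hypothetical) names used in the mapping below.
import bpy

MARKER_TO_BONE = {
    "marker_head": "head",
    "marker_hand_L": "hand.L",
    "marker_hand_R": "hand.R",
}

rig = bpy.data.objects["Armature"]

for marker_name, bone_name in MARKER_TO_BONE.items():
    marker = bpy.data.objects[marker_name]
    bone = rig.pose.bones[bone_name]
    con = bone.constraints.new(type='COPY_LOCATION')
    con.target = marker
```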
Problems with this are:

  • Like before, Blender might need to know where the virtual camera is (it needs some kind of positioning system), unless a real joystick is used.
  • The movements of the actors will need to be recorded by one or more separate cameras mounted on tripods or similar. This is one of the trickiest parts.
  • If there are any non-digital protagonists, a real-world camera needs to be mounted on the virtual camera so that it basically sees the same thing the virtual camera does (except it sees a green screen which gets replaced by the rendered scene)
  • Physics and collisions: if the actors touch each other during filming, but their digital bodies are bigger than or very different in shape from their real bodies, those digital bodies mustn’t intersect each other just because the size isn’t adjusted.
  • An optional but interesting feature would be if the digital bodies could get hurt, for example.

Anyway, I have no skills in programming, writing scripts and so on. I’m just doing some artwork (and I’m a beginner with Blender). I know this is all very high-level stuff, and it’s supposed to be a brainstorming. I hope it helps someone anyway.

Ok, get to it Mr 10 post.

I wish I could just think about something and have Blender do it in real time… :evilgrin:

Good ideas. I think we will see some things start to happen with Project Mango. Possibly not as advanced as you lay out, but a good start.

P.S. I am good with your ideas even if you have only posted 10 times. I wouldn’t think that would matter, but apparently some people measure you by your post count. OK, time for me to get back to my life. Can’t spend the whole thing counting posts on Blender Artists :slight_smile:

ccherrett… I guess you didn’t understand abidos’ post… you can already do all this in the realtime game engine…

Yeah, this is a good wish list for project Mango for sure. Seems like you’ve really thought this through and done some research.

For anyone serious about 3D -> may I suggest visiting mtbs.com which is dedicated to stereo 3D. I personally use an iZ3D monitor with polarizing glasses at home. Great for gaming. This would be fantastic for blending, BUT Blender needs to support quad-buffer OpenGL before you will be able to model in realtime 3D.

Output formats don’t need to be exclusive. Rendering each eye’s image is all that needs to be done. Then the end user just selects their viewing system (e.g. projectors, anaglyph, whatever) when they start the open-source 3D player (which is already available).

The camera trick is pretty easy to do already.
You just have to build a rig where the two cameras have a 6.5 cm space between them and look at (Track To) the same empty object. Then use this same object as the focal target in each camera’s settings.
You can even add some constraints to the cameras so they don’t angle in towards each other too much.
Then if you want to render it directly in the compositing nodes, just take the red from the left camera, and the green and blue from the right camera, and you’ll have your anaglyph stereo preview.
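For reference, a rough script version of that rig (2.5-series bpy API; the objects are created by the script, and the offsets are just half of the 6.5 cm default discussed above):

```python
# Build a simple stereo rig: two cameras 6.5 cm apart, both tracking the same
# empty, with that empty also used as the depth-of-field target.
import bpy

scene = bpy.context.scene

# The shared point of interest
target = bpy.data.objects.new("StereoTarget", None)  # None -> an empty
scene.objects.link(target)

def make_eye(name, x_offset):
    cam_data = bpy.data.cameras.new(name)
    cam = bpy.data.objects.new(name, cam_data)
    cam.location = (x_offset, -5.0, 1.7)          # arbitrary starting position
    scene.objects.link(cam)
    track = cam.constraints.new(type='TRACK_TO')  # look at the shared empty
    track.target = target
    track.track_axis = 'TRACK_NEGATIVE_Z'
    track.up_axis = 'UP_Y'
    cam_data.dof_object = target                  # focus on the same empty
    return cam

left = make_eye("Camera_L", -0.0325)   # half of 6.5 cm to each side
right = make_eye("Camera_R", +0.0325)
```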

You should go have a look at http://3d-techniques.wikia.com/. It’s brand new and almost empty, but that’s probably the way to go for development and as a knowledge base. I’m going to transfer the contents of my post to that wiki as soon as I have time, actually. Oh, and thanks for your feedback

@walshlg - what are you talking about? mtbs.com goes to a bookshop. Are you SURE that is the correct address?

@francoistarlier - your hackish workaround has one major problem: it causes the cameras to angle in, which creates two different planes of interest, which causes vertical parallax that gets worse towards the edges of the image, which gives viewers headaches.

Blender NEEDS a stereo camera type. The script examples I have extensively used to test stereo in Blender have to use a nasty hack to get two images in one pass:
1 - create a scene with two cameras with appropriate separation
2 - duplicate the scene
3 - change duplicate scene camera to 2nd camera
4 - in composite select the render from one scene as one eye, render from other scene as other eye
5 - hack off the edges of the renders so only the overlapping parts of the left and right eye remain, render time having been wasted on edges that will never be seen

Major problem being, as soon as you change anything in one scene, you have to repeat the entire process from the beginning so the left eye and right eye images have the same things in them.

Also, a stereo camera type would mean a parameter for separation would be available for scripting.
Something Cameron paid a lot of attention to in Avatar was having the camera separation change relative to the focus. If you are going to move a camera around in an animation, it would be nice to be able to change it dynamically.
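Until such a camera type exists, a hedged workaround sketch for dynamic separation: walk over the frame range and keyframe each eye camera’s local X offset from the distance to the focus target, using the 1/30 rule from earlier in the thread. The rig and object names here are assumptions, and both the rig empty and the target are assumed to be unparented so that `.location` can be treated as a world position:

```python
# Keyframe the separation of "Camera_L"/"Camera_R" (children of a "StereoRig"
# empty) from the distance between the rig and a "StereoTarget" empty.
import bpy

scene = bpy.context.scene
rig = bpy.data.objects["StereoRig"]
cam_l = bpy.data.objects["Camera_L"]
cam_r = bpy.data.objects["Camera_R"]
target = bpy.data.objects["StereoTarget"]

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    distance = (target.location - rig.location).length
    half_sep = max(distance / 30.0, 0.065) / 2.0   # 1/30 rule, 6.5 cm minimum
    cam_l.location.x = -half_sep
    cam_r.location.x = +half_sep
    cam_l.keyframe_insert(data_path="location", index=0, frame=frame)
    cam_r.keyframe_insert(data_path="location", index=0, frame=frame)
```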

Take a look at this page here with working blender examples:
http://www.noeol.de/s3d/
Look at part 10 - if the shiftX parameter for cameras was more accurate, the need to over-render width and then crop would be removed.
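For context, the off-axis (parallel) approach that shiftX enables replaces toeing the cameras in with a horizontal lens shift per eye. Here is a hedged sketch of the usual textbook formula, expressed as a fraction of Blender’s default 32 mm sensor width; the signs may need flipping depending on the rig orientation:

```python
# Horizontal lens shift per eye for off-axis (parallel) stereo, expressed as a
# fraction of the sensor width, which is how Blender's shift_x is interpreted.

def stereo_shift_x(interaxial_m, convergence_dist_m,
                   focal_length_mm=35.0, sensor_width_mm=32.0):
    """Return shift_x for one eye (Blender's default sensor width is 32 mm)."""
    return (interaxial_m / 2.0) * focal_length_mm / (convergence_dist_m * sensor_width_mm)

shift = stereo_shift_x(interaxial_m=0.065, convergence_dist_m=5.0)
# e.g. left camera: cam_l.data.shift_x = +shift ; right camera: cam_r.data.shift_x = -shift
print(shift)
```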

In short:
A - Stereo camera type, or being able to render from two cameras in the same scene = win for blender
B - Improved shiftX accuracy = win for blender
Just about everything else you would want to change/control for stereo rendering would be scriptable. Coding A & B would get rid of some very nasty hacks that have to be currently employed.

Some of my information was totally wrong, I’m sorry. If the cameras are angled in, this causes the problem of keystoning (see http://www.stereoscopy.com/faq/closeup.html). This means that the base distance can’t be changed as freely as I originally thought. See http://nzphoto.tripod.com/stereo/3dtake/fbercowitz.htm and http://nzphoto.tripod.com/stereo/macrostereo/macro3dwindows.htm for details about that.

I’ve been looking for this for ages. I tried setting Blender 2.49 up with stereo vision in the university VR suite but didn’t get anywhere. There is no documentation on the game settings -> stereo options beyond a sentence, and I didn’t get any replies to my thread two years ago… sigh