I’ve been reading and watching tutorials on doing 2D-to-3D conversion in Mocha and After Effects (which I have access to at work), but it occurred to me: Blender now has point and plane tracking, compositing with displacement filters, and video texture mapping on planes, so why can’t I do conversion at home right within Blender?
Has anyone tried this? What would the technical obstacles be? I haven’t worked with the tracker beyond little test tracks.
Yes, but no. I mean not turning a single frame into an object, but converting a film or home movie into a stereoscopic one. Like what they do for 3D re-releases of older movies, or to cheaply (and poorly) make a 3D film without having to upgrade equipment.
Yeah. I actually worked at a stereo conversion company that did exactly that. I helped convert Iron Man 3, Thor: The Dark World and a few others. They mostly did new movies, and used pretty detailed geometry for close-up projection mapping.
3pointEdit is right: one of the huge areas lacking for me in Blender’s Compositor is the masking tools. With 3D conversion, you really, really need good roto, especially for fine details like hairs. Also, “in-painting”, as it’s called, is super important. That’s where NUKE really has it over Blender. You need to be able to convincingly paint in areas behind the foreground for the left and right eyes (offset in X). As far as I know, Blender doesn’t have great clone painting tools to do this.
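To illustrate why the in-painting step matters: when you shift foreground pixels horizontally for each eye, you expose gaps (“disocclusions”) that have to be filled with plausible background. A toy sketch of the naive automatic hole-filling that conversion artists then paint over by hand (the function name and data layout are mine, just for illustration):

```python
def fill_holes(row, hole):
    """Fill disoccluded pixels in one scanline by propagating the
    nearest known background pixel. `row` is a list of pixel values,
    `hole` a matching list of booleans marking exposed gaps. A crude
    stand-in for the hand-painted in-painting described above."""
    out = [None if h else p for p, h in zip(row, hole)]
    # left-to-right pass: stretch the last known pixel into each hole
    last = None
    for i in range(len(out)):
        if out[i] is None:
            out[i] = last
        else:
            last = out[i]
    # right-to-left pass: handle holes at the start of the row
    last = None
    for i in range(len(out) - 1, -1, -1):
        if out[i] is None:
            out[i] = last
        else:
            last = out[i]
    return out
```

Stretching the background like this is exactly what looks cheap in bad conversions, which is why real pipelines lean on clone painting and clean plates instead.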
And finally, you need to be able to dynamically set the conversion distance and the interocular distance (between the eyes) to make the video appealing. There is a script out there (or addon, I should say) that helps set up the camera with multiple views to render out an anaglyph image. But you can’t animate the distance between the cameras, and it has to create a copy of your main scene and put the effect together in the compositor.
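For context on why those two distances matter: with a parallel camera pair that you converge by shifting the left/right images horizontally, the on-screen disparity of a point is a simple function of its depth, the interocular, and the convergence distance. A rough sketch of that relationship (parameter names and units are my own, not from any addon):

```python
def screen_disparity(depth, interocular, focal_length, convergence):
    """Residual horizontal disparity (in sensor units) of a point at
    `depth`, seen by two parallel cameras `interocular` apart whose
    images are shifted to converge on the plane at `convergence`.
    Zero at the convergence plane; negative values pop out in front
    of the screen, positive values recede behind it."""
    return focal_length * interocular * (1.0 / convergence - 1.0 / depth)
```

This is why both knobs need animating: as subjects move toward or away from camera, you retune the interocular and convergence per shot to keep the disparities comfortable.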
If you have good parallax and tracking in the original footage, and nothing is moving (but the camera, of course), you might be able to reconstruct the scene somewhat by using techniques like this guy’s:
(there might be more, but these are the first few I found there that seem to focus on this)
You might want to give Gimpel3D a try: http://www.gimpel3d.com It’s free and appears to be designed specifically for 2D->3D conversions. I haven’t tried it yet, but I plan to whenever I have some free time.
Let me first say that I know nothing about stereo conversion beyond what I read on fxguide.com. And let me also say that stereo conversion has always seemed like a very bad idea to me. Even when producing a movie filmed in stereo, such as Avatar, you need all kinds of painting and cloning tools to correct the cameras for mirror misalignment and such.
That being said, this is how I would naively go about doing a stereo conversion in Blender. I would build a quasi-3D reconstruction using real 3D objects for walls, floor, etc. and 2.5D stuff for more organic elements, such as people. Then, I would use project-from-view to bake textures onto these objects. Using different time frames, one can then fix these textures by painting in the things that are behind foreground elements. One would need aggressive scripting to make this realistic. I am actually developing an addon inspired by SynthEyes’ clean-plate extraction trick that could help for this kind of stuff. But it still requires a terrible amount of manual painting, which in practice means opening up the texture in GIMP, as Blender’s painting tools are indeed not that awesome.
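The clean-plate trick mentioned above can be approximated with a temporal median: once the frames are aligned by the track, a moving foreground object covers any given pixel for only part of the shot, so the per-pixel median over time recovers the background behind it. A toy grayscale version (pure Python, alignment assumed to have happened already):

```python
from statistics import median

def clean_plate(frames):
    """Per-pixel temporal median over a list of equally sized frames,
    where each frame is a list of rows of grayscale values. Assumes
    the frames are already stabilized/aligned; any foreground that
    covers a pixel in fewer than half the frames drops out."""
    height, width = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(width)]
            for y in range(height)]
```

In practice you would run this on tracked, reprojected plates and still paint the stubborn regions by hand, but it gets you a long way toward the background texture you need for the baked geometry.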