Creating stereo 3D in post?

Is it possible to create a second camera image from the first camera image by using the Z depth pass? Sort of like how movies are converted from 2D to 3D, except there would be no guessing at the depth, since it's already provided in the Z pass. I think this could be an alternative to rendering two cameras, much like how vector blur can be an alternative to rendering multiple subframes for motion blur.
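The core of that idea is depth-image-based rendering: shift each pixel horizontally by a disparity derived from its Z value. A minimal sketch of what that could look like, assuming a linear depth map and numpy arrays (the function name and parameters here are purely illustrative, not an existing compositor node):

```python
import numpy as np

def synthesize_right_eye(left_rgb, depth, focal_px, baseline):
    """Forward-warp a left-eye frame into a right-eye view using its Z pass.

    left_rgb : (H, W, 3) float array, the rendered frame
    depth    : (H, W) float array, linear camera-space depth (the Z pass)
    focal_px : focal length expressed in pixels
    baseline : interocular distance, in the same units as depth
    """
    h, w = depth.shape
    # Horizontal disparity in pixels: closer pixels shift further.
    disparity = focal_px * baseline / np.maximum(depth, 1e-6)

    right = np.zeros_like(left_rgb)
    filled = np.zeros((h, w), dtype=bool)
    nearest = np.full((h, w), np.inf)   # depth of whatever landed on each pixel
    for y in range(h):
        for x in range(w):
            tx = int(round(x - disparity[y, x]))
            if 0 <= tx < w and depth[y, x] < nearest[y, tx]:
                nearest[y, tx] = depth[y, x]   # nearest surface wins the pixel
                right[y, tx] = left_rgb[y, x]
                filled[y, tx] = True
    # Pixels where filled is False are occlusion holes that would still
    # have to be interpolated; the Z pass removes the depth guessing,
    # but not the guessing about what hides behind foreground objects.
    return right, filled
```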

For a true stereo effect you need two cameras because each has different parallax: a slightly different angle on the scene which, when viewed as a stereo pair, gives the objects a roundness that post-production stereo cannot reproduce, at least not when both images of the pair are derived from the same render. Parallax also determines how far stereo-pair image components are displaced as a function of depth, which is critical to the stereo effect. Using a Z-buffer to calculate and reconstruct that parallax seems a whole lot more complicated than simply setting up two properly placed cameras in the first place, which gives you true stereo parallax from the start.
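For reference, the displacement-with-depth being described follows the standard parallel-camera relation, roughly disparity = focal length (in pixels) x baseline / depth. A tiny numeric sketch, with values invented purely for illustration:

```python
# Parallel-camera stereo: a point at distance Z lands roughly
# d = f * b / Z pixels apart in the two views, so disparity
# falls off as 1/Z. All numbers below are assumed example values.
focal_px = 1500.0   # focal length in pixels (assumed)
baseline = 0.065    # 6.5 cm interocular distance, in metres (assumed)

for z in (1.0, 2.0, 5.0, 20.0):
    disparity = focal_px * baseline / z
    print(f"depth {z:5.1f} m -> disparity {disparity:6.1f} px")
```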

I would think that the dimensionalisation process (3D reconstruction of 2D film) just remaps the source image onto a point cloud constructed from that same source, and then reshoots the 3D point cloud with two virtual cameras. This doesn't remove the need for multiple eye renders.
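A rough sketch of that unproject-and-reshoot pipeline under a simple pinhole camera model, just to make the point concrete (the camera parameters and helper names here are assumptions for illustration, not the interface of any real dimensionalisation tool):

```python
import numpy as np

def depth_to_points(depth, focal_px, cx, cy):
    """Unproject a Z pass into a camera-space point cloud (pinhole model)."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    X = (xs - cx) * depth / focal_px
    Y = (ys - cy) * depth / focal_px
    return np.stack([X, Y, depth], axis=-1).reshape(-1, 3)

def project(points, focal_px, cx, cy, eye_offset_x=0.0):
    """Reshoot the point cloud with a virtual camera shifted along X."""
    p = points.copy()
    p[:, 0] -= eye_offset_x            # moving the camera = shifting the world
    u = focal_px * p[:, 0] / p[:, 2] + cx
    v = focal_px * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=-1)

# Two virtual cameras reshooting the same cloud (dummy flat Z pass, 3 m away):
depth = np.full((270, 480), 3.0)
cloud = depth_to_points(depth, focal_px=800.0, cx=240.0, cy=135.0)
left  = project(cloud, 800.0, 240.0, 135.0, eye_offset_x=-0.0325)
right = project(cloud, 800.0, 240.0, 135.0, eye_offset_x=+0.0325)
```

Both projections are still per-eye renders; the point cloud only makes them cheaper than re-raytracing the scene.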

I guess there would be a way to do it with a multiplane system in the compositor, but that would result in flat cutout 3D images. Then there is still the issue of background wrap-around, like what happens with the problematic DOF blur node.
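A multiplane version would amount to something like slicing the frame into depth bands and offsetting each band by a single disparity, which is exactly why everything inside a band ends up a flat cutout. A toy sketch with made-up band edges (not an actual compositor node):

```python
import numpy as np

def multiplane_shift(rgb, depth, band_edges, shifts):
    """Offset each depth band by one per-band disparity (flat cutouts).

    band_edges : ascending depth boundaries, e.g. [1.0, 3.0, 10.0]
    shifts     : horizontal pixel offset per band (len(band_edges) + 1 entries)
    """
    out = np.zeros_like(rgb)
    bands = np.digitize(depth, band_edges)   # which plane each pixel lives on
    # Composite far planes first so nearer cutouts overwrite them.
    for band in sorted(set(bands.ravel()), reverse=True):
        mask = bands == band
        layer = np.where(mask[..., None], rgb, 0.0)
        shifted = np.roll(layer, shifts[band], axis=1)
        shifted_mask = np.roll(mask, shifts[band], axis=1)
        out[shifted_mask] = shifted[shifted_mask]
    return out
```

Note that the naive np.roll literally wraps the shifted background around the frame edge, the same family of artifact as the DOF-node wrap-around mentioned above.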

Yes, the setup would be more complicated, but the render time would be nearly half that of a traditional two-camera 3D render.

The first option is probably what such a node would do in the background: create that point cloud, virtually shift the perspective slightly (probably customizable to a degree), and then interpolate the missing pixels. Even though that is technically still a second render, it should be significantly faster than a full second render from a 3D camera.
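The "interpolate the missing pixels" step could be as crude as the row-wise fill below, reusing the `filled` mask from the warp sketch earlier in the thread. This is a naive placeholder that just smears the last valid neighbour across each gap; a real tool would inpaint from the background side:

```python
import numpy as np

def fill_holes(warped_rgb, filled_mask):
    """Fill occlusion holes left by forward warping (naive row-wise smear)."""
    out = warped_rgb.copy()
    h, w = filled_mask.shape
    for y in range(h):
        last_valid = None
        for x in range(w):
            if filled_mask[y, x]:
                last_valid = out[y, x].copy()   # remember the last real pixel
            elif last_valid is not None:
                out[y, x] = last_valid          # stretch it across the hole
    return out
```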

Obviously this would only be used for moving footage, not still shots, because fast-moving images would distract from any obvious anomalies in the generated second image.