Reverse Z-buffer?


I am trying to render a scene with two cameras:

1- the default camera in Blender which I shall call a “frustum orthoscopic camera” (no issue there)
2- the conjugate of the first camera which I shall call a “frustum pseudoscopic camera”.

Among several differences from the first camera, this one has a reversed Z-buffer, its image symmetry is flipped, and it looks the opposite way. The reversed Z-buffer seems to be the tricky part, as I have not seen anything in the Python API to set this property. Is there a way/hack to do this?
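To make the "reversed Z-buffer" concrete: in a raw projection matrix it amounts to flipping the sign of the depth row, so near-plane geometry lands at the far end of the depth range and vice versa. Here is a minimal numpy sketch (the matrix layout is the standard OpenGL-style perspective matrix; all names are illustrative):

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Standard OpenGL-style perspective matrix (near -> NDC z = -1, far -> +1)."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Negating the third (depth) row reverses the depth ordering:
# far-plane geometry now lands at NDC z = -1 and near-plane at +1.
REVERSE_Z = np.diag([1.0, 1.0, -1.0, 1.0])

P = perspective(np.radians(60.0), 16.0 / 9.0, 0.1, 100.0)
P_rev = REVERSE_Z @ P

def ndc_depth(proj, z_eye):
    """NDC depth of an eye-space point at depth z_eye (negative = in front of the camera)."""
    clip = proj @ np.array([0.0, 0.0, z_eye, 1.0])
    return clip[2] / clip[3]

# Standard matrix: near -> -1, far -> +1.
assert np.isclose(ndc_depth(P, -0.1), -1.0)
assert np.isclose(ndc_depth(P, -100.0), 1.0)
# Flipped matrix: the mapping is reversed.
assert np.isclose(ndc_depth(P_rev, -0.1), 1.0)
assert np.isclose(ndc_depth(P_rev, -100.0), -1.0)
```

In a renderer you control, the same effect can also be had by keeping the projection and switching the depth test/clear instead (e.g. `glDepthFunc(GL_GREATER)` with `glClearDepth(0.0)`), but Blender exposes neither knob through its Python API.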

Thank you

How does this work? Is it a camera with inverted projection?

Unfortunately there is no such concept in Blender for changing the vertex shader directly.

But you can somehow do it by going really low level: copy the entire scene, iterate over all objects, and transform their vertices (multiply by the model-view-projection matrix) again from scratch.
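A minimal numpy sketch of that vertex-transform idea (assumption: vertices are already in camera space as an N×3 array; mirroring them through the camera's image plane at z = 0 produces the conjugate, pseudoscopic depth ordering):

```python
import numpy as np

# Hypothetical camera-space vertices, one row per vertex
# (OpenGL convention: more negative z = farther from the camera).
verts = np.array([
    [ 0.5,  0.2, -1.0],
    [-0.3,  0.8, -4.0],
    [ 0.0, -0.5, -2.5],
])

# Mirroring through the camera plane (z = 0) leaves x/y untouched and
# swaps front-to-back order, so the renderer sees the conjugate view.
MIRROR_Z = np.diag([1.0, 1.0, -1.0])
pseudo_verts = verts @ MIRROR_Z

# Depth ordering is exactly inverted after the mirror.
assert (np.argsort(verts[:, 2]) == np.argsort(pseudo_verts[:, 2])[::-1]).all()
```

In Blender itself this would mean duplicating the scene and baking the mirrored coordinates into the copied meshes before rendering, which is clumsy but doable from the Python API.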

By contrast, something like this can be done quickly in Unity (or even Godot, since it supports GLSL directly) because you can change both the vertex and fragment shaders however you like.

Thank you @Const.
This is needed for a hologram-printer. Basically you generate a (large) bunch of views from a scene by scanning in a matrix fashion (XY) a virtual camera which is located on the (virtual) hologram surface. If the hologram contains an object which is meant to stand out of the hologram-film, the camera will at some point endup inside the object. At that point, in order to fully render the object, you need that double camera setup. Once all images are rendered, an optical setup print the hologram by encoding each image-view as “hogels” on the real film surface.
Anyways, I have resorted to go low-level and am programing from scratch a rendering engine with OpenGL API.
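The XY matrix scan described above can be sketched as a grid of camera origins on the hologram plane. A small illustrative helper (function name, film size, and grid resolution are all hypothetical):

```python
import numpy as np

def hogel_positions(width, height, nx, ny):
    """Camera origins for an nx-by-ny raster scan over a
    width-by-height hologram surface lying in the z = 0 plane."""
    xs = np.linspace(-width / 2.0, width / 2.0, nx)
    ys = np.linspace(-height / 2.0, height / 2.0, ny)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel(), np.zeros(nx * ny)], axis=1)

# A 4 x 3 scan over a 0.2 m x 0.15 m film yields 12 viewpoints,
# one orthoscopic/pseudoscopic render pair per hogel.
cams = hogel_positions(0.2, 0.15, 4, 3)
assert cams.shape == (12, 3)
```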


I don’t know the specifications of this hologram printer, but I guess you have looked at it and know about it. I wonder if you actually need multiple aligned renders with a color pass and a depth pass that way.

Thank you @const

Not sure what you mean by:

The video you found is a good one about how analog holograms are made, the traditional way. Holographic printers are a way to make holograms from digital content.

Say for example you want to write your own renderer.

You have the color pass rendered in Cycles and also the depth pass, so you have the Z depth buffer of the frame as well.

Then your own renderer can focus only on the specific parts you need; since you already have the color and depth information, you can do other stuff like polarization etc.
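As one concrete example of reusing a depth pass without re-rendering (assumption: the Z pass stores camera-space distances, as Cycles' Z pass does): remapping it for the conjugate camera just inverts the ordering between the near and far bounds. All values here are hypothetical:

```python
import numpy as np

# Hypothetical 2x2 depth pass in scene units (camera-space distance).
z_pass = np.array([[1.0, 2.5],
                   [4.0, 2.0]])

near, far = 0.1, 10.0

# Remap for the conjugate camera: a point at `near` becomes the farthest
# sample and a point at `far` the nearest.
z_reversed = near + far - z_pass

# Ordering is exactly inverted.
assert (np.argsort(z_pass.ravel()) == np.argsort(z_reversed.ravel())[::-1]).all()
```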

Then, on the other part about using multiple renders: I wonder if it is the same principle used in those animated hologram stickers (you tilt the sticker and the image is animated).

If you have found any resources or texts about this technique, let me read about them.

Re. the renderer, I am rewriting it from scratch with OpenGL, as my needs in terms of image quality/rendering are not too high/complex.
Animated holograms are called “stereo” or “multiplex” holograms. Here is a video link which explains the first step of making one (i.e. the “H1 master”):

There is a community of professional holographers making such holograms from digital content. My feeling is that they are holographers more than digital creators, so they might be interested in high-quality digital content.

Very nice, I will start looking more into this since I am quite interested.