Rendering Omni-directional Stereo Content in Eevee | 360º 3D VR

Hello friends!

I’m making a 360º stereoscopic Virtual Reality short animation, rendered in Eevee, called Resilience Satelle r.i.c.e.: https://blenderartists.org/t/resilience-satelle-r-i-c-e

…but as you know we don’t have an equirectangular/panoramic camera for Eevee just yet. :sweat_smile:

So far THIS PAPER from Google is still the best description of how to achieve this.
Bear in mind that I’m not a programmer, but the paper still makes sense even to me.

My question:
:thinking:
How hard would it be to convert the code they share in the last two pages into an add-on that creates such a camera for the Blender Eevee renderer?

I know that eventually Eevee will have a panoramic camera, but since we are almost ready to render I’m looking for a more immediate solution.

Is this code even compatible with, or adaptable to, a Blender Python script or add-on?
I’m open to investing some money in this, but first I would like confirmation that it’s actually something worth doing.

Best regards
-Rogério-

So the paper’s code seems to want lower-level access than I believe Blender provides, unless you can access the render engine’s rays, which maybe you can?
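To show what I mean, here is my reading of the paper’s core model as a minimal Python sketch (not Blender API code; the `ipd` default, the Z-up axes and the eye-sign convention are my assumptions): every column of the equirectangular image gets its own eye position on a viewing circle of diameter IPD, and its ray leaves tangent to that circle.

```python
import math

def ods_ray(i, j, width, height, ipd=0.065, eye=+1):
    """Ray origin and direction for pixel (i, j) of an equirectangular
    ODS image, Z-up. eye = +1 / -1 selects the eye; ipd is in metres.
    (Axis and eye-sign conventions here are assumptions.)"""
    theta = 2.0 * math.pi * (i + 0.5) / width - math.pi   # azimuth, -pi..pi
    phi = math.pi / 2.0 - math.pi * (j + 0.5) / height    # elevation

    # Each column's eye sits on a circle of radius IPD/2 around the viewer...
    r = eye * ipd / 2.0
    origin = (r * math.cos(theta), r * math.sin(theta), 0.0)

    # ...and the ray leaves tangent to that circle, tilted by the elevation.
    direction = (-math.cos(phi) * math.sin(theta),
                 math.cos(phi) * math.cos(theta),
                 math.sin(phi))
    return origin, direction
```

That generates one ray per pixel, which is natural in a ray tracer like Cycles but doesn’t map onto the single projection matrix a rasterizer like Eevee uses, hence the lower-level access problem.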

But is there any reason you can’t use two cameras as a stereo pair, rotate them through a full 360º and stitch everything together? That seems similar to what the code is doing. A rough sketch of the idea is below.
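Something like this, as an untested bpy sketch (the step count, strip size, camera name and output paths are all placeholder assumptions): spin the camera around Z in small increments, render a narrow vertical strip each time, then assemble the strips afterwards.

```python
import math
import bpy

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]        # assumed name of the rig's camera

steps = 360                             # one slice per degree (assumption)
scene.render.resolution_x = 16          # narrow vertical strip per render
scene.render.resolution_y = 2048
# Note: the camera's horizontal FOV has to match 360/steps degrees,
# or the strips won't tile into a clean panorama.

for i in range(steps):
    # Point the camera at this slice's azimuth.
    cam.rotation_euler[2] = 2.0 * math.pi * i / steps
    scene.render.filepath = f"//slices/eye_L/slice_{i:04d}.png"
    bpy.ops.render.render(write_still=True)

# Concatenate the strips horizontally afterwards (compositor or an
# external tool), then repeat with the camera offset sideways by IPD/2
# for the other eye.
```

Vertically each strip is still a perspective projection, so narrow slices only approximate the equirectangular mapping away from the equator, but it’s the same trick the slit-scan stitchers use.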


Hello @RajW!
Thank you very much for taking the time to look at the paper!

Well… my only reason is that, compared to doing all that, it’s just way faster and less painful to render everything in Cycles :sweat_smile: All of our shaders are compatible with both Eevee and Cycles… but I like the Eevee look better, and while a 4092*4092 frame takes 5 minutes to render in Eevee, in Cycles it takes 1+ hour.

And this is an animated movie with 20000 frames…
Unity and Unreal Engine do this stuff, right? There must be a way! :slight_smile:

Unity does this by rendering 2 cubemaps https://blogs.unity3d.com/pt/2018/01/26/stereo-360-image-and-video-capture/ and unwrapping them at the end to an equirectangular projection… Blender has light probes that capture cubemaps, so couldn’t a script be made to use the probe data to render a VR output? (A sketch of the unwrap step is below.)
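To make that unwrap step concrete: for every pixel of the equirectangular output you compute a 3D direction, pick the cubemap face that direction points through, and sample it there. A minimal numpy sketch (the face naming and orientation conventions are placeholders, not Unity’s or Blender’s actual layout):

```python
import math
import numpy as np

def cubemap_to_equirect(faces, out_w, out_h):
    """Unwrap six (N, N, 3) cubemap faces into an equirectangular image.
    faces: dict with keys '+x', '-x', '+y', '-y', '+z', '-z'.
    (Face orientations here are placeholders; they'd need matching to
    however the real cubemap data is laid out.)"""
    n = next(iter(faces.values())).shape[0]
    out = np.zeros((out_h, out_w, 3), dtype=np.float32)
    for j in range(out_h):
        phi = math.pi / 2.0 - math.pi * (j + 0.5) / out_h        # elevation
        for i in range(out_w):
            theta = 2.0 * math.pi * (i + 0.5) / out_w - math.pi  # azimuth
            d = (math.cos(phi) * math.sin(theta),  # direction for this pixel
                 math.sin(phi),
                 math.cos(phi) * math.cos(theta))
            a = max(range(3), key=lambda k: abs(d[k]))  # dominant axis -> face
            face = faces[('+' if d[a] > 0 else '-') + 'xyz'[a]]
            # Project the two remaining coordinates onto the face as UVs.
            u, v = (d[k] / abs(d[a]) for k in range(3) if k != a)
            x = int((u * 0.5 + 0.5) * (n - 1))
            y = int((v * 0.5 + 0.5) * (n - 1))
            out[j, i] = face[y, x]
    return out
```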


Maybe a very stupid question, but wouldn’t something like this work?

If you place 2 light probes separated by about 6 cm, somehow obtain the cubemaps that these light probes capture for reflections… convert the cubemaps to equirectangular and place one on top of the other… wouldn’t this output a 360º stereoscopic rendering? :thinking:

Why can’t it be done?
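For what it’s worth, the final packing step at least is trivial once you have the two equirectangular images; a minimal numpy sketch (`left_eq`/`right_eq` are placeholder names, and which eye goes on top depends on the player’s convention):

```python
import numpy as np

def pack_over_under(left_eq: np.ndarray, right_eq: np.ndarray) -> np.ndarray:
    """Stack two (H, W, 3) equirectangular images into one (2H, W, 3)
    over-under frame, one eye per half."""
    return np.concatenate([left_eq, right_eq], axis=0)
```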