…but as you know we don’t have an equirectangular/panoramic camera for Eevee just yet.
So far THIS PAPER from Google is still the best description of how to achieve this.
Mind you, I’m not a programmer, but this paper still makes sense even to me.
My question:
How hard would it be to convert the code they share in the last two pages into an add-on that creates such a camera for Blender’s Eevee renderer?
I know that Eevee will eventually have a panoramic camera, but since we are almost ready to render, I’m looking for a more immediate solution.
Is this code even compatible with, or adaptable to, a Blender Python script/add-on?
I’m open to investing some money in this, but first I would like confirmation that it’s actually worth it.
So the paper’s code seems to want lower-level access than I believe Blender provides. Unless you can access the render engine’s rays, which maybe you can?
But is there any reason you can’t use two cameras as a stereo pair, rotate them 360° and stitch everything together? It seems similar to what the code is doing.
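Something along these lines, I mean. This is an untested sketch, and the “StereoRig”, “CamL” and “CamR” names are just placeholders I made up for an empty parenting two cameras offset a few centimetres to each side:

```python
import bpy
import math

scene = bpy.context.scene
rig = bpy.data.objects["StereoRig"]   # empty parenting CamL and CamR

# Render one left/right pair per degree of yaw; the stitching would
# happen afterwards in an image tool or another script.
for step in range(360):
    rig.rotation_euler[2] = math.radians(step)
    for cam_name, tag in (("CamL", "L"), ("CamR", "R")):
        scene.camera = bpy.data.objects[cam_name]
        scene.render.filepath = f"//slices/{tag}_{step:03d}.png"
        bpy.ops.render.render(write_still=True)
```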
Hello @RajW!
Thank you very much for taking the time to have a look at the paper!
Well… my only reason is that, rather than doing all that, it’s just way faster and less painful to render everything in Cycles. We do have all our shaders compatible with both Eevee and Cycles… but I like the Eevee look better, and while a 4092×4092 frame takes 5 minutes to render in Eevee, it takes 1+ hour in Cycles.
And this being an animated movie with 20,000 frames…
Unity and Unreal Engine do this stuff, right? There must be a way!
Maybe a very stupid question, but wouldn’t something like this work?
If you place two light probes separated by about 6 cm and somehow obtain the cube maps that these light probes capture for reflections… then convert the cube maps to equirectangular and stack one on top of the other… wouldn’t this output a 360º stereoscopic rendering?
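The cube-map-to-equirectangular part at least seems straightforward; something like this NumPy sketch is the remap I have in mind. The `faces` dict is hypothetical, because I don’t actually know how to extract the probe textures from Eevee, which is probably the real obstacle:

```python
import numpy as np

def cubemap_to_equirect(faces, width, height):
    # `faces`: dict of six HxHx3 arrays keyed '+x','-x','+y','-y','+z','-z'.
    # Face orientation conventions differ between engines, so some axes
    # below may need flipping for a given source.
    u = (np.arange(width) + 0.5) / width
    v = (np.arange(height) + 0.5) / height
    theta = (u * 2.0 - 1.0) * np.pi          # longitude, -pi..pi
    phi = (0.5 - v) * np.pi                  # latitude, +pi/2 (top) .. -pi/2
    theta, phi = np.meshgrid(theta, phi)
    # Unit view direction for every output pixel (y-up convention).
    dx = np.cos(phi) * np.sin(theta)
    dy = np.sin(phi)
    dz = np.cos(phi) * np.cos(theta)
    ax, ay, az = np.abs(dx), np.abs(dy), np.abs(dz)
    out = np.zeros((height, width, 3))
    # (face, pixels hitting it, horizontal coord, vertical coord, major axis)
    lookups = [
        ('+x', (ax >= ay) & (ax >= az) & (dx > 0), -dz,  dy, ax),
        ('-x', (ax >= ay) & (ax >= az) & (dx < 0),  dz,  dy, ax),
        ('+y', (ay >  ax) & (ay >= az) & (dy > 0),  dx, -dz, ay),
        ('-y', (ay >  ax) & (ay >= az) & (dy < 0),  dx,  dz, ay),
        ('+z', (az >  ax) & (az >  ay) & (dz > 0),  dx,  dy, az),
        ('-z', (az >  ax) & (az >  ay) & (dz < 0), -dx,  dy, az),
    ]
    for key, mask, h, vv, major in lookups:
        size = faces[key].shape[0]
        # Project onto the face plane, then to nearest pixel indices.
        fu = ((h[mask] / major[mask]) * 0.5 + 0.5) * (size - 1)
        fv = ((-vv[mask] / major[mask]) * 0.5 + 0.5) * (size - 1)
        out[mask] = faces[key][fv.round().astype(int), fu.round().astype(int)]
    return out
```

Stacking the two outputs would then give the top-bottom layout… if the parallax actually works out, which is the part I’m unsure about.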
I know this thread is old, but I think I can answer this last question for anyone who might still be interested:
No, it can’t be done the way you describe, because the parallax would only be correct when you view the panorama from one specific orientation (the one looking in the direction of the two spherical cameras). If you turn your head to the opposite side, the parallax of the R and L eyes is flipped.

The only way I know to solve the problem without coding is by doing a lot of renderings: say 360 spherical panoramas for the L eye and 360 for the R eye, rotating the camera pair by 1 degree for each pair of renderings. After that, crop the central strip of each spherical panorama and stitch it next to the central strip of the next rendering until you reconstruct the whole thing (as explained here: https://www.youtube.com/watch?v=a5hy4QdcFGU&t=157s; this is in Unreal Engine, but the concept is the same). A sketch of that stitching step is below.

The cons of this approach are that you need to do 360×2 spherical renderings and throw away 99% of the rendered area, and the final result only works well at equatorial height; if you look up or down you will see noticeable aberrations.
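In case it helps, this is roughly what the crop-and-stitch step looks like in Python with Pillow. The file names, the slices/ folder and the 4096×2048 panorama size are just assumptions; adapt them to however you saved the 360 per-degree renders:

```python
from PIL import Image

# Stitch 360 per-degree spherical panoramas into one eye of an ODS pair.
# Assumes renders were saved as slices/L_000.png ... slices/L_359.png
# (and likewise R_*), each a 2:1 equirectangular image.
def stitch(eye, width=4096, height=2048, steps=360):
    strip_w = width // steps              # central strip, roughly 1 degree wide
    out = Image.new("RGB", (strip_w * steps, height))
    left = width // 2 - strip_w // 2      # x offset of the central strip
    for step in range(steps):
        pano = Image.open(f"slices/{eye}_{step:03d}.png")
        strip = pano.crop((left, 0, left + strip_w, height))
        out.paste(strip, (step * strip_w, 0))
    return out

stitch("L").save("left_eye.png")
stitch("R").save("right_eye.png")
```

As you can see, each render contributes only an 11-pixel-wide strip at 4096 width, which is why 99% of the rendered area gets thrown away.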
Ahaha, I feel you, it always happens to me too, time runs so fast lately.
Back on topic: yes, Cycles already provides a great algorithm for omnidirectional stereoscopic panoramas, and I also think it’s definitely worth using that instead.
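For anyone landing here later, that Cycles setup (equirectangular panoramic camera plus spherical stereo, both eyes in one top-bottom render) can be configured from a script like this. Property names are from the 2.8x/2.9x Python API; in recent Blender releases panorama_type moved from cam.cycles onto the camera data itself:

```python
import bpy

scene = bpy.context.scene
cam = scene.camera.data

scene.render.engine = 'CYCLES'
cam.type = 'PANO'
cam.cycles.panorama_type = 'EQUIRECTANGULAR'  # cam.panorama_type in newer releases

# Stereoscopy: one render produces both eyes.
scene.render.use_multiview = True
scene.render.views_format = 'STEREO_3D'
cam.stereo.use_spherical_stereo = True        # the ODS algorithm mentioned above
cam.stereo.interocular_distance = 0.065       # ~6.5 cm eye separation

# Save both views in a single top-bottom image.
scene.render.image_settings.views_format = 'STEREO_3D'
scene.render.image_settings.stereo_3d_format.display_mode = 'TOPBOTTOM'
```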