Will EEVEE Support VR (Equirectangular) Rendering?

I realize it may very well be too early to know, but I scoured the 2.8 Development logs and I can’t seem to find any information on this. Will EEVEE support rendering for VR headsets (Equirectangular Spherical Stereo)? I am very interested in developing animations for VR, and would be quite happy if EEVEE supported it.
I am aware the dev team plans to add dedicated Render and Animation buttons (as opposed to the Render Viewport buttons).

2 Likes

Do you mean in real time, or for rendering? I haven't read anything about real time, and Eevee is anything but fast enough for VR (60 fps in each eye).

In Cycles, if you set up a camera with a 2:1 aspect ratio, enable Views, and then in the camera tab select the Equirectangular panoramic option, you end up with a 3D 360 VR render. However, the Equirectangular option does not exist in the EEVEE camera tab. When making pre-rendered animations for VR that run at 60 FPS, render times in Cycles can take ages. I would love it if, in the future, I could create these animations using EEVEE and save render time.
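For reference, the same Cycles setup can be done from a script. This is a sketch, not an official recipe: the resolution values are assumptions, and the property path for the panorama type follows the 2.8-era Python API (in Blender 3.x+ it moved from `cam.cycles` onto the camera data itself).

```python
import math

# 2:1 output resolution for an equirectangular frame (assumed values)
WIDTH, HEIGHT = 4096, 2048

def setup_cycles_vr_camera():
    """Configure Cycles for a stereo equirectangular render (run inside Blender)."""
    import bpy  # only available inside Blender
    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.render.resolution_x = WIDTH
    scene.render.resolution_y = HEIGHT
    scene.render.use_multiview = True         # the "Views" checkbox
    scene.render.views_format = 'STEREO_3D'
    cam = scene.camera.data
    cam.type = 'PANO'
    cam.cycles.panorama_type = 'EQUIRECTANGULAR'  # 2.8x API; cam.panorama_type in 3.x+
    cam.stereo.use_spherical_stereo = True    # omnidirectional stereo, not a flat offset
```

Run it from Blender's Text Editor or with `blender -b file.blend -P script.py`.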

2 Likes

I have searched and I didn't find anything about it. It may be better to ask Dalai on Twitter.

1 Like

Alright. Since the Panoramic option is still there but doesn’t do anything, my guess is they’re planning it but don’t have it fully implemented yet.

In principle, nonlinear transformations like equirectangular don’t work well with rasterization (i.e. what your GPU does with Eevee), because there’s no efficient way to “bend” triangles out of shape.

However, you can render to a cubemap (should be possible via script even without dedicated support) and then resample that into an equirectangular map (example program).
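To illustrate the resampling step, here is a minimal nearest-neighbour sketch in plain Python. The axis convention (+Z forward, +Y up) and face names are assumptions; a real implementation would use NumPy and bilinear filtering.

```python
import math

def equirect_from_cubemap(faces, width, height):
    """Resample six square cube faces into one equirectangular image.
    `faces` maps '+x','-x','+y','-y','+z','-z' to square 2D pixel arrays.
    Convention (an assumption): +z is forward, +y is up."""
    out = [[None] * width for _ in range(height)]
    for v in range(height):
        lat = math.pi / 2 - (v + 0.5) / height * math.pi
        for u in range(width):
            lon = (u + 0.5) / width * 2 * math.pi - math.pi
            # direction on the unit sphere for this output pixel
            x = math.cos(lat) * math.sin(lon)
            y = math.sin(lat)
            z = math.cos(lat) * math.cos(lon)
            ax, ay, az = abs(x), abs(y), abs(z)
            # pick the face the ray exits through, then project onto it
            if ax >= ay and ax >= az:
                face, s, t, m = ('+x', -z, -y, ax) if x > 0 else ('-x', z, -y, ax)
            elif ay >= ax and ay >= az:
                face, s, t, m = ('+y', x, z, ay) if y > 0 else ('-y', x, -z, ay)
            else:
                face, s, t, m = ('+z', x, -y, az) if z > 0 else ('-z', -x, -y, az)
            n = len(faces[face])  # face edge length in pixels
            fu = min(n - 1, int((s / m * 0.5 + 0.5) * n))
            fv = min(n - 1, int((t / m * 0.5 + 0.5) * n))
            out[v][u] = faces[face][fv][fu]  # nearest-neighbour sample
    return out
```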

2 Likes

Well, it should be possible; UE4 does that.
If you mean for rendering (not the realtime viewport), then it should be possible. What I don't know is whether it's planned or not, as others have said here.

At the last Blender Conference, there was a talk about the benefits of using Grease Pencil + VR to create a scene.
There are devs dedicated to VR. I don’t doubt that VR support for EEVEE is a goal they want to achieve.

Currently, EEVEE only renders images through the OpenGL render buttons. Sampling for these images is limited to 16 AA samples, as it was for BI.
But Viewport Samples can go a lot higher than 16. Motion Blur is not rendered.

The Panoramic button is present for the Camera because it is the same UI template with 3 buttons that was used for BI and Cycles.
The button is there because it was already there before, not because the feature is expected to work in EEVEE.
In a UI that is still evolving, don't expect everything visible to relate to the present.
Things from the past are visible. Things from the future are visible. But it is possible that most of them are not working yet.

Thanks for the info. I had a feeling EEVEE would still support VR, I just wanted to make sure. I look forward to when 2.8 officially releases.

1 Like

UE4 only renders the VR view (i.e. what you're seeing), not a 360 render. It's possible to use a plugin for 360 stereo rendering, but it's extremely slow: to get the 360 stereo view, the plugin takes a screenshot, takes a second screenshot for the stereo effect, rotates the camera 5-10 degrees so as not to break the stereo, and repeats until it has a full 360 field of images, which it then splices together. For every frame. So it takes a long time, but the results can be cool: https://www.youtube.com/watch?v=Kz2xpeGkcRU

Things like Ansel do stereo 360 image capturing way faster though so there’s probably a better way that can be developed in Blender.

What I mean is that UE4 does that natively, without the Ansel plugin, and you don't have to do it manually :slight_smile:

But yes, Ansel is more efficient, I think.

Cheers!

Is there any news about panoramic rendering in Eevee? Although I'm not an expert in 3D, I don't get why panoramic rendering is so difficult to implement in Eevee. Technically speaking, a panorama could just be a sequence of Eevee viewports stitched together.

But how do I get Eevee to render a cubemap?

1 Like

If your renderer doesn’t support it natively, you can “manually” render it out by doing something like this:

  • set your rendering resolution to something square (e.g. 512x512)
  • set your camera FOV to 90 degrees
  • for each side of the cube, rotate your camera so that it looks at that side and insert a keyframe
  • render the six frames
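The steps above can also be scripted instead of keyframed. A sketch for Blender's Python API, assuming the scene's active camera is at the cube's centre (the face names, file prefix, and resolution are my own choices; Blender's camera looks along its local -Z axis, which is what the Euler angles below account for):

```python
import math

# XYZ Euler rotations (radians) that point Blender's camera, which looks
# along its local -Z axis, toward each cube-face direction.
FACE_ROTATIONS = {
    "front":  (math.radians(90), 0.0, 0.0),                # looks along +Y
    "back":   (math.radians(90), 0.0, math.radians(180)),  # looks along -Y
    "left":   (math.radians(90), 0.0, math.radians(90)),   # looks along -X
    "right":  (math.radians(90), 0.0, math.radians(-90)),  # looks along +X
    "top":    (math.radians(180), 0.0, 0.0),               # looks along +Z
    "bottom": (0.0, 0.0, 0.0),                             # looks along -Z
}

def render_cube_faces(size=512, prefix="//cube_"):
    """Render the six cube faces with the active scene camera (run inside Blender)."""
    import bpy  # only available inside Blender
    scene = bpy.context.scene
    cam = scene.camera
    scene.render.resolution_x = scene.render.resolution_y = size  # square render
    cam.data.angle = math.radians(90)  # 90-degree field of view
    for name, rot in FACE_ROTATIONS.items():
        cam.rotation_euler = rot
        scene.render.filepath = f"{prefix}{name}.png"
        bpy.ops.render.render(write_still=True)
```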

I tried the manual technique and it's pretty close; there's just one thing I don't get. The top and bottom cameras seem a bit offset when I reconstruct the cubemap in Photoshop. They don't match somehow. Does anyone know what causes this?

The cameras all have a 90-degree FOV, rendered at 1024x1024. They're all positioned at the same point.

I think I found out why: I set the camera's Clip Start to the lowest value I could, and that seems to have resolved the problem.

Here's my rig. You have to render each camera individually and then stitch the images in After Effects or other software to make the cubemap. But it works; I tested it here in Montreal in the SAT dome.

blender_28_cubemap_camera_rig.zip (106.6 KB)

1 Like

Thanks to the UnitedFilmDom team for their excellent tutorial and Blender rig files.
As previous posts mention, it uses 6 cameras from a single vantage point, each with a 90-degree FOV.
The rig is a collection which you add to your scene.
You render out in Eevee, preferably as a batch, in seconds or low minutes depending on hardware.
You add the resulting 6 images to a UV-mapped cube, with emission shaders on each face, facing inward. Then, using a camera in the middle (no other lights or objects at all), you render with Cycles using the built-in panoramic equirectangular camera.
Finally, you can add the metadata to view it in a 360 VR viewer, like Google Photos, Flickr or others.

I adapted this. The most important tweaks:

  • No bloom (or any other post-processing or compositing that modifies lighting per shot)
  • Extend the camera screen space (not the actual FOV) so shots overlap outside the field of view
  • Add metadata so you can see the result in a 360 viewer like Google Photos

The rig and adapted workflow give a 42.5-megapixel VR panorama in a few minutes (about 5)!!
View my work and the panorama

1 Like

Until the time comes when it is supported natively, I have solved it! No seams.
In my previous post yesterday I had managed a decent result with slight seams. That workflow could go no further because:

I worked on idea #1:
I stuck with the six-camera rig and increased the FOV of each camera from 90 to 96 degrees, so that the shots overlap around the perimeter by about 15% (in the following example each shot is 2300 pixels and the overlap is 150 on each side -> the resulting panorama is 8000x4000 equirectangular, and each image has a unique area of 2000x2000 pixels).
Then I fed the set to Hugin's Panotools, input the cubemap coordinates in PanoTools format (yaw, pitch), and used the overlap corners as control points.
This is saved as a .pto project file, so you can rerun the set with new Eevee outputs with one click!
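As a sanity check on the overlap numbers: under a simple pinhole-camera model (my assumption, not the poster's method), the FOV needed so that the central 2000 px of a 2300 px shot still spans exactly 90 degrees comes out around 98 degrees rather than 96, so treat both figures as starting points for tuning rather than exact values.

```python
import math

def fov_for_overlap(core_px, margin_px):
    """FOV (degrees) so that a square render of core_px + 2*margin_px pixels
    keeps its central core_px mapped to exactly 90 degrees (pinhole model)."""
    scale = (core_px / 2 + margin_px) / (core_px / 2)  # tan-space scale factor
    return math.degrees(2 * math.atan(scale * math.tan(math.radians(45))))

# With the numbers from the post (2000 px unique core, 150 px margin per side):
fov = fov_for_overlap(2000, 150)  # roughly 98 degrees under this model
```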

Hugin not only removes the seams but also does a clearly better job of preserving detail near the edges of the individual photos, where pixels are stretched.

View my work in progress and the links to the panorama test shots (Eevee Architectural Interior - #2 by csimeon)

What happens if you stitch together 1-pixel-wide strips? You'd end up with a lot of images, but would all screen-space effects get lost?