How's VR development going?

There’s very little discussion about VR on this forum (and zero threads with a VR tag in this category). Was it a tech demo where development just stopped after the first version shipped? Is it waiting for Eevee Next to receive updates?

I’m just curious because I tried it with a few scenes I thought would be easy: around 60k triangles in each, two to a dozen objects, half a dozen lights, contact shadows in one scene but no difficult materials, materials that use a bit of noise in another… and on a 3090, I think I got about 10 fps. Even the reprojection was absolutely tanking (solid mode was fine, of course).

But then I tried it with an animal with very detailed fur, and I thought this wasn’t going to end well, but it actually performed better than my scenes! So then I thought I’d go crazy and just open a movie project we recently did (medium-sized for us, a small ID company): 1.5 million triangles, 500 objects, at least one material that does some heavy lifting, massive contact shadow distance and AO distance… and not only did I get what looked like 30 fps (which is still low for VR), I was able to play the movie and glide along with the camera and see everything, including geometry nodes animations, which was pretty mind-blowing. :exploding_head:

So, a very uneven experience. Granted, Blender does a lot, so it’s difficult to optimize, but now I’m very curious what is happening in this area (and we’re so, so very close to pulling the trigger on moving to Unreal where I work).

(Also, just one small detail which would increase the UX by 1000%… if I aim the laser pointer at something which has normals pointing up, don’t place my headset directly on it. It’s probably a floor, so it would make more sense to place the headset at my body height above it. We do this in Unreal already, where one has direct control over the navigation implementation. Also, I wasn’t able to figure out how to switch from movement to manipulation, but maybe that’s not implemented yet.)


i wonder if the problematic scene was specifically the half dozen lights? were they all shadow casting? higher light counts are one of the expected improvements in eevee next.

also, since each eye needs to be rendered, view-dependent calculations like lighting are doubled up, whereas geo nodes are likely only calculated on frame update and are resolution independent, so there shouldn’t be too much of a vr hit there.
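
here’s a toy cost model of that idea (my own back-of-envelope assumption, not blender’s actual draw loop; `effective_fps`, `eval_ms` and `shade_ms` are made-up names and numbers):

```python
# toy model: scene evaluation (depsgraph / geo nodes) runs once per frame,
# while view-dependent shading runs once per eye in vr.
def effective_fps(eval_ms, shade_ms, eyes=2):
    """rough frame rate when eval runs once and shading runs per eye."""
    return 1000.0 / (eval_ms + eyes * shade_ms)

# a shading-heavy scene suffers far more from stereo than an eval-heavy one:
print(round(effective_fps(eval_ms=2.0, shade_ms=30.0)))   # ~16 fps, lighting-bound
print(round(effective_fps(eval_ms=25.0, shade_ms=5.0)))   # ~29 fps, geo-nodes-bound
```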

that ux improvement sounds sensible! it should probably raycast both downwards and in the direction of the pointer, to add sensible margins and avoid obvious intersections. there’d always be edge cases, so maybe one could make the distances configurable.
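
something like this, maybe (just a sketch of the heuristic; `teleport_target`, `FLOOR_DOT` and `MARGIN` are made-up names, not blender’s actual vr session code):

```python
from mathutils import Vector  # blender's math module

UP = Vector((0.0, 0.0, 1.0))
FLOOR_DOT = 0.7   # surfaces within ~45 degrees of horizontal count as floors
MARGIN = 0.2      # configurable margin to avoid obvious intersections

def teleport_target(hit_point, hit_normal, body_height=1.7):
    """where to place the headset for a pointer hit at hit_point."""
    n = hit_normal.normalized()
    if n.dot(UP) > FLOOR_DOT:
        # likely a floor: stand the viewer on it at body height,
        # rather than putting the headset directly at the hit point
        return hit_point + UP * body_height
    # wall/ceiling/prop: back off along the normal by a margin so the
    # view doesn't end up inside the geometry; a second downward raycast
    # from this point could then find an actual floor to stand on
    return hit_point + n * MARGIN
```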

And it’s not only about rendering and speed, it’s about having out-of-the-box solutions for face capture, motion capture suits, and live camera link. Everything can be imported from the asset store and then linked together with Blueprints.

From what I see now, Blender is great for creating and editing assets, but then you start hitting certain limitations.

I don’t know much about VR, but it’s mentioned as an area that could benefit from Blender’s Vulkan migration.

OpenGL isn’t actively developed anymore. Vulkan is replacing it, and since the introduction of Vulkan 1.0 no core changes have been made to the OpenGL standard. Vulkan, at its announcement back in 2015, was actually named OpenGL Next. New technologies we want to benefit from (for example the GPU ray tracing API, but also AR/VR standards) are only available/standardized in Vulkan.
