Using a VR 3D Image / Video as a background image of Camera

Hi, does anyone know an add-on or a way to use a VR180 stereoscopic video or picture as a background image for the panoramic equirectangular camera? I know Blender has an add-on to preview the scene inside a VR headset, and I would like to get a preview of the scene plus the background video. That would be extremely convenient for VR video compositing.

Thanks.

Try an environment texture on the world. I believe it should do the trick. Or wait, it depends on how the video should be mapped…
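For the mono case, that world environment texture can be set up in a few lines of the Blender Python API. A minimal sketch, to be run inside Blender (the file path is a placeholder; this only feeds one image to both eyes, and VR180 footage additionally only fills the front hemisphere):

```python
import bpy

scene = bpy.context.scene
scene.render.use_multiview = True        # enable stereoscopic rendering
scene.render.views_format = 'STEREO_3D'  # left/right eye views

world = scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

# Feed an equirectangular image into the default Background node
env = nodes.new('ShaderNodeTexEnvironment')
env.projection = 'EQUIRECTANGULAR'
env.image = bpy.data.images.load('/path/to/vr_frame.jpg')  # placeholder path
links.new(env.outputs['Color'], nodes['Background'].inputs['Color'])
```

This doesn't solve the stereoscopic part yet, but it gives the equirectangular backdrop behind the scene.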

Thanks, it’s true the equirectangular VR picture is similar to an HDRI, but I want to use a stereoscopic image and apply it to the camera (maybe as a camera background image) or to the render (maybe with some nodes in the compositing panel, as I would do in DaVinci Fusion). One of the goals is to estimate the depth from the video and compare it with the depth of the scene before rendering.
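For that depth comparison, a stereoscopic equirectangular pair encodes depth as angular disparity between the two eyes. As a rough sanity check, a small-angle triangulation gives approximate depth (plain Python; the 6.5 cm interocular distance and the small-angle approximation are my assumptions, not something from this thread):

```python
import math

def depth_from_disparity(disparity_px, image_width_px, ipd_m=0.065, fov_deg=360.0):
    """Rough depth (metres) from horizontal disparity in an equirectangular
    stereo pair. disparity_px is the pixel shift of a feature between eyes.
    Uses the small-angle approximation: depth ~= ipd / disparity_angle."""
    radians_per_px = math.radians(fov_deg) / image_width_px
    disparity_rad = disparity_px * radians_per_px
    if disparity_rad <= 0:
        return float('inf')  # no measurable parallax -> effectively at infinity
    return ipd_m / disparity_rad
```

For example, a 1-pixel shift in a 4096-pixel-wide 360° frame works out to roughly 42 m, which shows how quickly depth precision falls off with distance.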

Welcome :tada:

…interesting… I’ve only done a few small things with the stereoscopic camera setup, and I’ve never tried to add an object into an already generated stereoscopic video. Indeed, the only thing I’ve found (yet) is this:

Adding 3D objects to your VR180 or 3D 360° video:

Thanks, I’ll check that video and try to follow along.

I’ve already done some tests adding 3D objects to VR video. I usually do it inside DaVinci Fusion, as it lets you preview your object over the video, but importing FBX is not ideal, and importing renders without a preview of the object’s position in space doesn’t work well (because of the stereoscopy and the distortion of the equirectangular render, you can’t move it freely over your video). So I’ve only tried simple 3D objects created inside Fusion.

Maybe another way around would be to create simple 3D planes in Fusion over the video and then export the tracking data to Blender…

But a direct 3D preview inside the headset in Blender would definitely be more convenient.


Blender supports stereoscopic 3D rendering by default, though I don’t think it has been actively developed for a long time, since it’s kind of buggy at the moment. But it means Blender renders views for both eyes, and it seems like those views could be used as masks to mix between two separate textures, i.e. to split the left and right eye. I hope there is a way to do it without scripting, but I looked through all of the inputs you can use in a material, and the only one that sounds right is Attribute. But what is the name of the attribute? I have no idea.
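I don’t know of a documented per-eye attribute exposed to material nodes either. What Blender does document is multi-view images: with stereoscopy enabled, an image marked as multi-view is supposed to feed each render view its matching eye automatically, which might avoid the masking trick entirely. A hedged sketch of that route, to be run inside Blender (the path and the top-bottom layout are assumptions):

```python
import bpy

# Enable stereoscopy on the scene
scene = bpy.context.scene
scene.render.use_multiview = True
scene.render.views_format = 'STEREO_3D'

# Load a packed stereo image and declare how the eyes are laid out,
# so each stereo render view can pick its own half
img = bpy.data.images.load('/path/to/stereo_frame.jpg')  # placeholder path
img.use_multiview = True
img.views_format = 'STEREO_3D'
img.stereo_3d_format.display_mode = 'TOPBOTTOM'  # assumed layout; could be 'SIDEBYSIDE'
```

Whether this propagates correctly through an Environment Texture node is worth testing; I haven’t verified it myself.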

I’m also interested in this topic.

In Unreal Engine you can get a flag for which eye the frame is rendered for.

Then the shader calculates the UVs for the image. And it works great in UE.

`return ResolvedView.StereoPassIndex;` (UE shader code)
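The per-eye branch such a shader performs is really just a UV offset into the correct half of a packed stereo frame. The same mapping in plain Python, as a sketch (the conventions here are assumptions; which eye sits in which half varies between tools):

```python
def eye_uv(u, v, eye, layout='TOPBOTTOM'):
    """Map a full-frame UV (0..1, v up) into the half of a packed stereo
    frame belonging to the given eye (0 = left, 1 = right).
    Assumes left eye on top / on the left; swap the offsets if your
    footage uses the opposite convention."""
    if layout == 'TOPBOTTOM':
        # left eye in the top half, right eye in the bottom half
        return u, v * 0.5 + (0.5 if eye == 0 else 0.0)
    if layout == 'SIDEBYSIDE':
        # left eye in the left half, right eye in the right half
        return u * 0.5 + (0.0 if eye == 0 else 0.5), v
    raise ValueError(f'unknown layout: {layout}')
```

In Blender node terms this would just be a Mapping node with a 0.5 scale and a per-eye offset; the missing piece is still the per-eye flag to drive the offset with.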

I wonder if it’s possible to get this in Blender?