I rendered a panoramic image in Blender, but when I viewed it in VR I noticed a significant seam where the image wraps around: the left and right edges of the panorama don't match up. Why would this issue occur?
This might be caused by the denoiser. At the borders of the image, the denoising process has no information about what lies outside the frame, so the outermost pixels can be slightly inaccurate, and since the two edges of the panorama are denoised independently, they no longer match where they wrap around.
This is easy to test: if it's the denoiser, increasing the render quality (more samples, lower noise threshold) should make a difference, and rendering a fully clean image without the denoiser should eliminate the seam entirely.
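If it helps, here is a minimal sketch of that test via Blender's Python API (the same settings live in the Render Properties panel); the property names are from recent Cycles versions and the exact values are just examples:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

scene.cycles.use_denoising = False        # render without the denoiser
scene.cycles.samples = 4096               # raise until the image is clean enough
scene.cycles.adaptive_threshold = 0.005   # lower threshold = stricter noise target
```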
I tested this and the problem was resolved, but rendering a clean image without the denoiser has significantly increased the render time. Is there any way to improve the rendering efficiency?
If you are doing a single image, maybe it would be possible to use render regions to render just the borders of the image in high quality and then stitch them on top of the full image?
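For example, a render region covering just the seam edge can be set numerically (coordinates are normalized 0–1 across the frame; the 5% strip here is an arbitrary choice):

```python
import bpy

render = bpy.context.scene.render
render.use_border = True             # render only the region defined below
render.use_crop_to_border = False    # keep the full frame so stitching is trivial
render.border_min_x = 0.95           # a thin strip along the right edge
render.border_max_x = 1.0
render.border_min_y = 0.0
render.border_max_y = 1.0
```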
Another way I could see would be to accelerate the rendering of the scene as a whole. Interior scenes tend to be slow and noisy, as light can keep bouncing around without finding open sky to escape to.
First, the obvious things.
Make sure your window's glass material uses the shadow-disabling trick.
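By that I mean the usual Light Path setup, where shadow rays see a Transparent BSDF instead of the glass, so the window stops casting a dark, noisy shadow. A minimal sketch of that node setup in Python:

```python
import bpy

mat = bpy.data.materials.new("WindowGlass")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

glass = nodes.new('ShaderNodeBsdfGlass')
transparent = nodes.new('ShaderNodeBsdfTransparent')
light_path = nodes.new('ShaderNodeLightPath')
mix = nodes.new('ShaderNodeMixShader')
out = nodes.new('ShaderNodeOutputMaterial')

links.new(light_path.outputs['Is Shadow Ray'], mix.inputs['Fac'])
links.new(glass.outputs['BSDF'], mix.inputs[1])        # Fac = 0: normal rays see glass
links.new(transparent.outputs['BSDF'], mix.inputs[2])  # Fac = 1: shadow rays pass through
links.new(mix.outputs['Shader'], out.inputs['Surface'])
```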
Use light portals in the windows if you aren't already, and make them match the size of the windows as exactly as possible. They will help clean up the ambient light coming in from the sky.
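Portals are just area lights with the portal flag set; for example (the sizes here are placeholders and should match your actual window opening):

```python
import bpy

light = bpy.data.lights.new("WindowPortal", type='AREA')
light.shape = 'RECTANGLE'
light.size = 1.2                 # window width in meters (placeholder)
light.size_y = 1.8               # window height in meters (placeholder)
light.cycles.is_portal = True    # emits nothing, only guides sky sampling

obj = bpy.data.objects.new("WindowPortal", light)
bpy.context.collection.objects.link(obj)
# Move and rotate the object so it sits in the window opening, facing inward.
```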
Now, I have a more advanced trick, if you know how to bake textures and it fits your needs.
If you have an interior scene where the lighting won't need to change, there is a way to massively accelerate the render. It involves baking the lighting on the walls, floor, and ceiling, then using that bake only for indirect light (so it won't be directly visible, but it will speed up the light transport).
See my second reply in this thread, where I detail the steps.
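The core of it is a Mix Shader driven by the Light Path node: camera rays still see the real material, while bounce rays see an Emission shader fed by the baked lighting, so indirect light converges almost instantly. A rough sketch of the node logic (the material name and image path are placeholders; see that reply for the actual steps):

```python
import bpy

mat = bpy.data.materials["Walls"]               # placeholder material name
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load("//baked_lighting.png")  # placeholder bake

emission = nodes.new('ShaderNodeEmission')
light_path = nodes.new('ShaderNodeLightPath')
mix = nodes.new('ShaderNodeMixShader')
original = nodes['Principled BSDF']             # the material's existing shader
out = nodes['Material Output']

links.new(tex.outputs['Color'], emission.inputs['Color'])
links.new(light_path.outputs['Is Camera Ray'], mix.inputs['Fac'])
links.new(emission.outputs['Emission'], mix.inputs[1])   # bounce rays: baked light
links.new(original.outputs['BSDF'], mix.inputs[2])       # camera rays: real shader
links.new(mix.outputs['Shader'], out.inputs['Surface'])
```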