I’m working on making a cubemap for Unity from a panoramic photo (from openfootage). I have six cameras set up to match what I need in Unity. They sit inside a sphere with the panoramic photo applied using sphere-mapping coordinates. The image below shows the +X camera as active. Note that the image does not display in what you would think of as “normal” orientation.
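For context, the six-camera rig works because six 90° cameras pointing along ±X/±Y/±Z partition every view direction with no gaps: each direction belongs to the face of its largest-magnitude axis. Here is a minimal sketch of that rule in plain Python (the function name and face labels are my own for illustration, not from any Unity or Blender API):

```python
# The six axis-aligned directions a cubemap camera rig points at,
# labeled with Unity-style face names. Each camera uses a 90-degree
# field of view so the faces tile the full sphere of directions.
CUBE_FACES = {
    "+X": (1, 0, 0), "-X": (-1, 0, 0),
    "+Y": (0, 1, 0), "-Y": (0, -1, 0),
    "+Z": (0, 0, 1), "-Z": (0, 0, -1),
}

def cubemap_face(x, y, z):
    """Pick the face a view direction samples, using the standard
    major-axis rule: the component with the largest magnitude wins."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+X" if x > 0 else "-X"
    if ay >= az:
        return "+Y" if y > 0 else "-Y"
    return "+Z" if z > 0 else "-Z"

# A direction mostly along +X samples the +X face:
# cubemap_face(1.0, 0.2, 0.1) -> "+X"
```

This is just the face-selection half of cubemap sampling, but it shows why the active-camera choice (+X here) determines exactly which slice of the panorama should end up in the render.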
When I render an image from outside of the sphere I get what you see below.
The viewport view is looking through a camera outside the sphere. Next to it is the rendered result, which is much different from what the viewport shows.
When I render the active camera shown in the top image (+X in Unity) I get a similar result. The viewport is looking through the camera, with the rendered result next to it.
The rendered image is what I want. What I became curious about is why the rendered images in the latter two examples do not match what is seen in the viewport.