I did some tests today with a 360-degree HDR environment map and a glossy cube.
Maybe I missed something, and maybe it's a silly question, but to me it looks like the reflections on that cube are always calculated from the nodal point of the panorama (i.e. the panorama's center point), no matter where I place the cube. Is that true?
The cube in the scene is 2×2 m, and its distance from the scene's camera is about 7 m. But if I rotate the cube roughly 45 degrees towards the camera, it reflects the nadir of the panorama, although it is 'physically' far away from that point in the scene. You can see my tripod in the reflection as if the cube were sitting right above it. I positioned the scene camera at a point where the cube reflects the position of the camera on the tripod from when I shot the panorama. Is there a way around that?
What I naively expected was a reflection showing a more distant part of the ground, because the cube is farther away (the white line in the screenshot).
But maybe this can't be calculated differently: the dotted extension of the light path would indeed hit the nadir of the panorama, and without any depth information there is nothing to tell the renderer where to bounce the ray back before it reaches that point …
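If I understand it correctly, that's exactly what an environment map lookup does: only the direction of the reflected ray goes in, never the position of the point on the cube. A minimal sketch of what I think is happening (my own Python, assuming a y-up coordinate system and one common equirectangular mapping convention; `envmap_uv` is my own name, not any actual renderer API):

```python
import math

def envmap_uv(direction):
    """Map a unit reflection direction to equirectangular UVs.
    Note the signature: only the direction goes in -- the position
    of the shading point on the cube never enters the lookup."""
    x, y, z = direction
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)   # longitude -> U
    v = 0.5 - math.asin(y) / math.pi               # latitude  -> V
    return u, v

# The same 45-degrees-down reflection direction samples the same pixel
# whether the cube sits at the nodal point or 7 m away from it:
d = (0.0, -0.7071, 0.7071)
print(envmap_uv(d))   # approx (0.5, 0.75) -- halfway down toward the nadir
```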
If it is true that reflections on an object are always calculated from the nodal point of the panorama, then how could we achieve really 'correct and accurate' reflections for objects in the scene? Another problem is the size of the reflected tripod in the cube … Until now I thought that an HDR panorama of a set would be sufficient for some VFX integration, but looking at the reflection, that was surely naive again.
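In code, the behavior I naively expected (the white line in my screenshot) would look something like a parallax-corrected lookup: intersect the reflected ray with stand-in geometry for the ground, then re-derive the lookup direction from the nodal point. A rough sketch under my own assumptions (ground plane at y = 0, nodal point at 1.5 m tripod height; `corrected_lookup_dir` is hypothetical, not a real API):

```python
import numpy as np

def corrected_lookup_dir(surface_pos, refl_dir,
                         ground_y=0.0,
                         nodal_point=np.array([0.0, 1.5, 0.0])):
    """Intersect the reflected ray with a proxy ground plane, then
    re-derive the lookup direction from the panorama's nodal point.
    Returns None if the ray never reaches the ground."""
    refl_dir = refl_dir / np.linalg.norm(refl_dir)
    if abs(refl_dir[1]) < 1e-9:
        return None                          # ray parallel to the ground
    t = (ground_y - surface_pos[1]) / refl_dir[1]
    if t <= 0.0:
        return None                          # ground plane is behind the ray
    hit = surface_pos + t * refl_dir         # hit point (my white line)
    corrected = hit - nodal_point
    return corrected / np.linalg.norm(corrected)

# A point on the cube ~7 m from the nodal point, reflecting down at 45 degrees:
p = np.array([0.0, 1.0, -7.0])
d = np.array([0.0, -1.0, 1.0])
print(corrected_lookup_dir(p, d))   # points at distant ground, not the nadir
```

With that correction the lookup direction points at the ground several meters away instead of straight down at the nadir where my tripod sits, which is what I would have expected the reflection to show. Is something like that the only way, or did I miss a built-in option?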