Does Cycles use the nodal point of an environment map to calculate what to reflect?

Hello everyone,

I did some tests today with a 360-degree HDR environment map and a glossy cube.

Maybe I missed something, and maybe it’s a silly question, but to me it looks like the reflections on that cube are always calculated from the nodal point of the panorama (i.e. the center point of the panorama), no matter where I place the cube. Is that true?

The cube in the scene is 2x2 m and its distance from the scene’s camera is about 7 m. But if I rotate the cube at an angle of approx. 45 degrees towards the camera, it reflects the nadir of the panorama, although it is ‘physically’ far away from that point in the scene. You can see my tripod in the reflection as if the cube were sitting right above it. I positioned the camera in the scene at a point where the cube reflects the position the camera had on the tripod when I shot the panorama. Is there a way around that?


What I naively expected was a reflection showing a more distant part of the ground, because the cube is farther away. (The white line in the screenshot.)

But maybe this can’t be calculated differently, because the dotted extension of the light path would indeed hit the nadir of the panorama, and there is no information about where to bounce it back before it reaches that point …

If it is true that reflections on an object are always calculated based on the nodal point of the panorama, then how could we achieve really ‘correct and accurate’ reflections for the objects in the scene? Another problem is the size of the reflected tripod in the cube … Until now I thought that having an HDR panorama of a set would be sufficient for some VFX integration, but looking at the reflection, that was surely naive again.

minoribus

If you wanted an accurate reflection of the scene, you would need to model the scene. The HDR panorama is mapped as if it were on an infinitely large sphere around the scene. A panoramic photo is taken from only one point, so it does not contain any information about how things would look from any other point. You could take one panoramic photo at the object’s position and another at the camera’s position to get more accurate results. HDR panoramas are sufficient in many cases, because incorrect reflections on irregular shapes are usually very difficult to notice. If they become noticeable, you simply need some other solution.
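A quick numerical sketch of that “infinitely large sphere” idea (just illustrative numbers, not anything Cycles actually computes): the farther away the environment is assumed to be, the less the viewing direction, and therefore the sampled pixel, changes when you move around inside the scene.

```python
import numpy as np

def direction(frm, to):
    d = to - frm
    return d / np.linalg.norm(d)

# A point on the "world sphere" placed 10 km away, seen from the
# panorama's nodal point and from a spot 7 m away from it.
world_point = np.array([10000.0, 0.0, 0.0])
nodal_point = np.array([0.0, 0.0, 0.0])
offset_point = np.array([0.0, 7.0, 0.0])

# The two directions are almost identical, so both positions sample
# practically the same pixel; as the sphere radius goes to infinity,
# the difference vanishes entirely.
print(direction(nodal_point, world_point))
print(direction(offset_point, world_point))
```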

What MartinZ said. A panoramic image contains no information whatsoever about the position of pixels (I can’t say objects, because Cycles does not “see” objects by any stretch of the imagination). Therefore the only thing it gives is a direction to each pixel. The render engine can only use this direction for sampling the panorama image, basically doing a lookup into the image texture based on the direction of the reflected ray (which is determined by the surface normal and the viewing direction). This is why the reflection does not change when you move the object in the scene: the surface normals point in the same direction, so you see the same thing in the reflection. It would be exactly correct if the world sphere were infinitely large, because in that case moving the object would cause an infinitely small change in the direction vector between a point on the object and a point on the world sphere.
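As a rough sketch of that lookup (the axis convention and equirectangular mapping here are assumptions for illustration, not Cycles’ actual code), the environment is sampled purely from a direction vector, so the shading point’s position never enters the calculation:

```python
import math

def equirect_uv(direction):
    """Map a normalized direction to (u, v) on an equirectangular
    panorama; only the direction is used, never the shading point's
    position. The z-up axis convention is an assumption for illustration."""
    x, y, z = direction
    u = 0.5 + math.atan2(y, x) / (2.0 * math.pi)
    v = 0.5 + math.asin(max(-1.0, min(1.0, z))) / math.pi
    return u, v

def reflect(view_dir, normal):
    """Mirror the incoming view direction about the surface normal."""
    d = sum(v * n for v, n in zip(view_dir, normal))
    return tuple(v - 2.0 * d * n for v, n in zip(view_dir, normal))

# A mirror face tilted 45 degrees gives the same reflected direction
# no matter where the cube sits, so the same pixel of the panorama is
# looked up every time.
view = (1.0, 0.0, 0.0)            # ray arriving from the camera
normal = (-0.7071, 0.0, 0.7071)   # 45-degree face
print(equirect_uv(reflect(view, normal)))
```

Moving the cube changes nothing in this calculation, which is exactly why the nadir keeps showing up in the reflection.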

Thank you, MartinZ and kesonmis, that makes perfect sense. So every object in the scene gets reflections based on the position the camera had when the HDR was taken. Good to know for future projects. I found an approach that is a bit along the lines of “modeling what to reflect” near the object, and that did the trick for me. It uses an HDR plus a backplate that is projected onto a plane (a rough sketch of the idea is below).
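Roughly, the projected backplate works because the ground is now real geometry: a reflected ray is intersected with the plane and the backplate is sampled at that hit point, which does depend on where the cube stands. A minimal sketch of that intersection, with hypothetical numbers and plain Python rather than anything Blender-specific:

```python
import numpy as np

def intersect_ground(origin, direction, ground_z=0.0):
    """Return where a reflected ray hits the ground plane z = ground_z,
    or None if it never reaches it."""
    if abs(direction[2]) < 1e-8:
        return None
    t = (ground_z - origin[2]) / direction[2]
    return origin + t * direction if t > 0.0 else None

# Hypothetical numbers: a point on the cube 7 m from the panorama
# position, with a downward reflected ray. The hit point (and thus the
# sampled part of the backplate) moves with the cube, unlike a pure
# environment lookup.
cube_point = np.array([7.0, 0.0, 1.0])
reflected = np.array([0.3, 0.0, -1.0])
reflected /= np.linalg.norm(reflected)
print(intersect_ground(cube_point, reflected))
```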

Thanks again!