Valid camera movements to retain the illusion of perspective with objects sitting on HDRI backgrounds

I found this a very interesting subject that’s definitely worth discussing. I’m not aware of any Blender or other CGI wiki explaining it officially, although one may exist. Curious if others can confirm my suspicion before I set up a test case to try it for myself: I should be able to easily confirm or refute it, but would really like to know the official truth from others who understand these things at a deeper level.

So we all work with HDRI images in most rendering projects. When it’s a static render everything’s straightforward: You set the HDRI as your world background, put a shadow catcher plane on the ground, position the camera at an angle that makes sense, then render whatever you want on top. But what if you want to make an animation this way? You can of course animate things in your scene as long as the camera position is static: The world is mapped to the view at an infinite distance, so if you move the camera in any direction the objects will appear to glide across the world and the illusion of perspective is broken. You can dynamically rotate where the camera points and even change its zoom, but never move its origin.
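For reference, here is that static setup in script form: a minimal sketch, assuming Cycles in Blender 3.x (where the shadow catcher toggle is an object property) and a placeholder HDRI path you’d substitute yourself.

```python
import bpy

# Load the HDRI into the world background (path is a placeholder).
world = bpy.context.scene.world
world.use_nodes = True
nt = world.node_tree
env = nt.nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/to/your.hdr")
nt.links.new(env.outputs["Color"], nt.nodes["Background"].inputs["Color"])

# Ground plane that only catches shadows from scene objects.
bpy.ops.mesh.primitive_plane_add(size=20.0)
bpy.context.object.is_shadow_catcher = True  # Blender 3.x / Cycles
```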

Or so I thought. Last night it occurred to me there should in fact be at least one pattern in which a camera can move without objects at a finite distance floating off the ground of an infinitely far-away map. You should only need to respect one rule: The camera must spin around the origin point at the center of the scene and can’t change its distance from it, meaning you can only move in circular or spherical motions.

If I’m right the following technique should work: Create an empty at the world origin. Put your camera at any position and make sure it faces toward the empty. Parent the camera to the empty, which is your only pivot point. You aren’t allowed to move the empty but you can rotate it: The camera will always stay at the same distance from the origin, so objects shouldn’t appear to shrink / grow / slide.
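In script form, a minimal sketch of that rig (assuming the scene already has an active camera; the offset and frame range are arbitrary):

```python
import bpy
import math

# Empty at the world origin: the one and only pivot point.
bpy.ops.object.empty_add(location=(0.0, 0.0, 0.0))
pivot = bpy.context.object

# Parent the camera to the pivot; its distance to the origin never changes.
cam = bpy.context.scene.camera
cam.parent = pivot
cam.location = (0.0, -8.0, 2.0)  # arbitrary offset, fixed relative to pivot

# Keep the camera aimed at the pivot while it orbits.
track = cam.constraints.new(type="TRACK_TO")
track.target = pivot
track.track_axis = "TRACK_NEGATIVE_Z"  # cameras look down their -Z axis
track.up_axis = "UP_Y"

# A flat horizontal orbit: rotate the empty around Z over the animation.
pivot.keyframe_insert("rotation_euler", frame=1)
pivot.rotation_euler.z = math.tau  # one full turn
pivot.keyframe_insert("rotation_euler", frame=250)
```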

But is this actually true or am I just imagining it? I may be wrong about vertical movements being possible even this way, but shouldn’t it at least work if the camera only spins horizontally in a flat circle? What are the actual movements that break the “infinitely far away” effect of an HDRI background, and what are the spatial rules to abide by so you could still move a camera in sync? Is there a list documenting the mathematically allowed transformations a 3D camera can make without breaking depth perspective against a spheremap / cubemap background? This is fun to think about, but I’d also gain more flexibility and new practical uses by properly understanding it!

You might be interested in this technique…


Very useful and interesting, thanks! I actually used a similar technique before in camera tracking: I put a plane on the ground and gave it a mapping of the HDRI used by the sky… I believe you could simply plug the Camera or View or Window output into the mapping node without needing that empty, though for a 360° HDRI this technique may be needed instead. In fact I think I even managed to simulate bump mapping with per-pixel lighting from a custom light source by plugging the background into a bump node and that into the normal input, fun stuff! And indeed the plane will be darker or brighter compared to the surrounding lighting… I worked around that by simply having it touch the edges of a straight sidewalk so it wouldn’t be noticeable.
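For anyone who wants to try it, here’s roughly how I’d rebuild that plane material in a script. This is my reconstruction rather than the exact nodes from the tracking project, and the HDRI path is a placeholder:

```python
import bpy

# Material for a ground plane that shows the sky HDRI and fakes bump shading.
mat = bpy.data.materials.new("HDRIGroundPlane")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

coord = nodes.new("ShaderNodeTexCoord")
mapping = nodes.new("ShaderNodeMapping")
env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/to/your.hdr")  # placeholder path
bump = nodes.new("ShaderNodeBump")
bsdf = nodes["Principled BSDF"]

# The coordinate output (Object here, or Camera / Window as speculated above)
# decides how the HDRI is projected onto the plane.
links.new(coord.outputs["Object"], mapping.inputs["Vector"])
links.new(mapping.outputs["Vector"], env.inputs["Vector"])
links.new(env.outputs["Color"], bsdf.inputs["Base Color"])

# Reuse the HDRI's brightness as a height signal: background -> bump -> normal.
links.new(env.outputs["Color"], bump.inputs["Height"])
links.new(bump.outputs["Normal"], bsdf.inputs["Normal"])
```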


This video is also very useful and explains a similar technique. Might try to mix them and see to what extent I can get both a simple setup and the desired result. I don’t feel like mapping the HDRI both to geometry and the world, so I may go with setting the world to a Holdout node and using a mixture of emission and principled on the dome thingie.

Hello guys, did you ever try doing this with the shadow catcher material instead, without all the complication of pivot points and empties and all the stuff from the videos?

With this, you can move the camera with no problem and the projection is perfect.

The result with this after some corrections:


Or:

It’s only necessary to edit the RGB curve a bit and create a transparency mask for the borders.
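If it helps, this is my guess at that border fade in script form; the material name is a placeholder, and the brightness correction itself would be an RGB Curves node on the texture’s color output, tuned by eye:

```python
import bpy

# Fade the projected shadow catcher toward transparency at its borders,
# using a spherical gradient in the plane's own coordinates as the mask.
mat = bpy.data.materials["HDRIGroundPlane"]  # placeholder material name
nodes, links = mat.node_tree.nodes, mat.node_tree.links

coord = nodes.new("ShaderNodeTexCoord")
grad = nodes.new("ShaderNodeTexGradient")
grad.gradient_type = "SPHERICAL"  # 1 at the center, falling off outward
ramp = nodes.new("ShaderNodeValToRGB")  # controls where the fade begins
transparent = nodes.new("ShaderNodeBsdfTransparent")
mix = nodes.new("ShaderNodeMixShader")

links.new(coord.outputs["Object"], grad.inputs["Vector"])
links.new(grad.outputs["Fac"], ramp.inputs["Fac"])
links.new(ramp.outputs["Color"], mix.inputs["Fac"])
links.new(transparent.outputs["BSDF"], mix.inputs[1])  # shown at the edges
links.new(nodes["Principled BSDF"].outputs["BSDF"], mix.inputs[2])  # center
links.new(mix.outputs["Shader"], nodes["Material Output"].inputs["Surface"])
```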


Oops, I forgot an example with reflections.


And, yes, with light projection.


Was just trying something similar myself! No world texture if I’m going this route: in my setup I use the empty that determines the projection height to drive a gradient, which mixes between an Emission and a Principled shader at that height. This does cause the ground to reflect itself twice and so on, but it’s definitely working in principle and showing results :)
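In script form, roughly (a sketch of my setup from memory; the material and empty names are placeholders, and the fade heights are arbitrary):

```python
import bpy

# Split the dome's shading by height: Emission below (the pinned ground),
# Principled above, with the projection empty's space defining "height".
mat = bpy.data.materials["ProjectionDome"]  # placeholder material name
nodes, links = mat.node_tree.nodes, mat.node_tree.links

coord = nodes.new("ShaderNodeTexCoord")
coord.object = bpy.data.objects["ProjectionEmpty"]  # placeholder empty name
sep = nodes.new("ShaderNodeSeparateXYZ")
ramp = nodes.new("ShaderNodeMapRange")
emit = nodes.new("ShaderNodeEmission")
mix = nodes.new("ShaderNodeMixShader")

# Z in the empty's space, remapped to a 0..1 gradient over the fade band.
links.new(coord.outputs["Object"], sep.inputs["Vector"])
links.new(sep.outputs["Z"], ramp.inputs["Value"])
ramp.inputs["From Min"].default_value = 0.0  # fade start height
ramp.inputs["From Max"].default_value = 0.5  # fade end height

links.new(ramp.outputs["Result"], mix.inputs["Fac"])
links.new(emit.outputs["Emission"], mix.inputs[1])  # Fac 0: below the band
links.new(nodes["Principled BSDF"].outputs["BSDF"], mix.inputs[2])
links.new(mix.outputs["Shader"], nodes["Material Output"].inputs["Surface"])
```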


The shadow on the ground doesn’t seem to be that strong, which is a little concerning, though I think that’s due to me using an HDRI where sunlight doesn’t shine directly: I wanted to stick to something that doesn’t have any objects up close, as that seemed more likely to break the illusion.


Hard shadows are complicated to make look real, depending on the apparent surface in the image. Like here:


It always looks better if the surface is flat, like here:

What I like about this setup is how simple it is and the fact that it never produces any kind of distortion.

I’d like to make sure of one thing: When you plug an Environment Texture node with an HDR / EXR texture into a surface’s Emission shader, will the extra dynamic-range lighting information stored in the format still be used? That is, will the sun in the HDRI still shine as sharply and brightly as when plugging it into the world background instead? That might be one reason why I don’t seem to be getting the shadows and exact lighting I expect with my version of the setup, though I may be missing something else too.

Well, I only use it to texture the shadow catcher, as with that I have no need to create an environment. The problem with emission is that it doesn’t produce directional light the way the sun does when the HDRI is used as the world background.

So, you will never have correct shadows if you use it in an emission shader.

Aha, so that is a problem. The issue is that you need to put the whole HDRI on geometry if you want to be able to move the camera around even slightly: if you need camera movement you can no longer use a mere shadow catcher or map the HDRI infinitely far away under any circumstances.

I could map just the bottom part to geometry to get a static ground as in your first example, leaving the rest as the sky. But then I’ll get seams where the geometry ends and the world begins: the upper part of the HDRI will render at a slightly different position / rotation / scale. If the ground is faded out smoothly, of course, maybe that’s not such a big issue in the end?

People mentioned it should be possible to use a fancy node setup in the world to stretch the texture so that it moves with the camera, in a way that keeps the effect fully on the world side with perfect accuracy. Does anyone know of this fancy setup so I can try it too? To be fair, I’d give it a try even if drivers and Python are involved.

No, you will not get any seams if you map it as I showed in my setup, not because of the mapping at least. The problem is that the shadow catcher also receives light from the environment, and since it has the same colors and intensity as the environment it will tend to look a bit different. That’s why it’s useful to correct the RGB curves a bit and use a mask as a factor for the transparency at the borders.

This video points to an amazing addon that solves the projection issue by using a special node setup in the world to resolve the mapping. I prolly won’t use the full addon each time as I don’t like depending on them, so instead I may try to extract the technique used there and put it in a reusable node group.

In the meantime, what I found works is this: Pick an HDRI image that encompasses a huge area without any large objects nearby… Shanghai Riverside is a perfect example, especially as it doesn’t have parts the viewer expects to move, like people or a direct view of waves in the water. Project a bottom snapshot of the HDRI onto a ground plane, essentially converting it into a standard 2D texture… If you can, you may even give your plane a more complex shape to create something like a stairway: the higher you can make it and the further it can stretch out, the more sense it will make as the camera moves around. The goal is to simulate a platform for the camera to stand on which still merges well with the ground in the real HDRI.

The important part is understanding that only a handful of HDRIs will work properly if you want scene geometry to fully interact with the ground. For instance, you can use a bump node to extract a normal map, but this uses the brightness of pixels, so if bricks of different colors sit at different heights it’s not going to work well… The same goes for roughness: the reflectivity of each brick must match its color. The ground also needs to be flat, like asphalt or sand. If the camera is looking straight down at a large element like rocks, you need to replicate those as real 3D models, since you can’t use bump or even micro-displacement to let an object move over or between close details without it being clearly fake. If you can, find an original texture or floor model that represents the pattern on the ground of the HDRI and use that instead. I’d say that’s an important lesson to take from all these attempts.

I looked at the node setup used by LilySurface for this purpose and made my own simplified version, which should do just what I need. It’s not perfect and you need to watch your valid camera positions closely, but it pins the floor in the world itself, so it’s all I should need! This is what I came up with, let me know if anyone has a better version in mind.
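In plain math, what I understand the node group to be doing is a ground projection: intersect the view ray with a virtual floor plane, then renormalize the hit point into a lookup direction for the equirectangular texture. A Python sketch of that idea (the function name and the fixed projection center are mine, not actual nodes; the capture height is a guess at typical tripod height):

```python
from mathutils import Vector

def ground_projected_direction(cam_pos: Vector, view_dir: Vector,
                               floor_z: float = 0.0,
                               center_height: float = 1.6) -> Vector:
    """Direction to sample the HDRI with, so the floor stays pinned."""
    if view_dir.z >= 0.0:
        # At or above the horizon: sample the sky unchanged.
        return view_dir
    # Distance along the view ray from the camera to the plane z = floor_z.
    t = (floor_z - cam_pos.z) / view_dir.z
    hit = cam_pos + t * view_dir
    # Sample as seen from the point where the HDRI was (roughly) captured,
    # assumed to sit on the world Z axis at tripod height above the floor.
    center = Vector((0.0, 0.0, floor_z + center_height))
    return (hit - center).normalized()
```

Everything above the horizon still behaves as if infinitely far away, which is probably where the stretch I mention below comes from.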

Here’s a blend with it: Use your own HDRI, it should work well with any. If you pass it through the node group with a positive Z direction set, it should map the ground accordingly.

untitled.blend (829.0 KB)

Only annoyance is, you definitely get some stretch across the horizon, and it changes with angle. In my screenshot you can see pieces of the fence suddenly go diagonal and point toward the center of the screen. You need to keep the camera very low and close to the ground to decrease its visibility.


Best drop in solution I’ve found so far, but I still need to keep looking in my case…

  • Needed to reduce size (“direction”?) quite a lot to get proper scale, which distorted the background
  • I don’t see any shadows on the ground (apparently I need to look into something called a “shadow catcher”)
  • If you attempt to rotate the HDRI via the mapping node, the ground alignment gets ruined when you move the camera

Seems a lot of people have been trying to solve this, and maybe it’s solved and maybe it isn’t, but it’s not easy to find a working solution!

EDIT: Found a similar tutorial which gave the best and most easily controllable results (for me) yet!

EDIT: Damnit, that method apparently causes fireflies… sigh…

