Projection mapping perspective manipulation

Hi everybody,

I’m planning to do a projection mapping on a stage in a theater using Blender. I’m using multiple projectors at different perspectives. Right now this works great for scenes with a simple texture and an emission shader: I render the scene from every perspective using different cameras representing the projectors.

The problem starts when displacement and 3D models come in, because then the perspectives won’t match up. The main perspective is the one from the audience, so all the other projectors have to distort their renderings to match the audience perspective.
The only idea I have right now is to render the image once from the main perspective and use this image to project onto the scene. Then, in a second step, the other perspectives can pick up their pictures with the correct distortion.
Does anybody have another idea, or experience solving this problem more easily and much faster, without these steps?

I’ve found a video explaining the problem in general, but I don’t know how to solve it in Blender. https://youtu.be/LsAGJqVrdkc?t=65

Greetings josaur

This one is a bit tricky. It takes some work, but it is possible to do in Blender.

Part 1: The “easy” method

Here’s a test scene I set up to answer your question. There is one collection that has a model of the real life building, and another one with the fake animation to be projected onto it.


What you need to do is render the scene once from the point of view of the audience, and then again from the point of view of each of the projectors using the image you rendered from the first view.

Step 1: Render the audience view

First select the audience camera as the main camera (Ctrl+Numpad 0) and render the animation with a field of view that encompasses everything in the scene. Toggle the render visibility in the Outliner so it renders the fake models but not the real-life building model.

[Image: Projector Diagram 0]
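If you prefer to script this step, here is a minimal sketch, assuming a camera object named “AudienceCam” and collections named “Fake” and “RealBuilding” (all placeholder names):

```python
import bpy

scene = bpy.context.scene
scene.camera = bpy.data.objects["AudienceCam"]  # make the audience camera active

# Render only the fake content; keep the real-world stand-in out of the render.
bpy.data.collections["RealBuilding"].hide_render = True
bpy.data.collections["Fake"].hide_render = False

scene.render.filepath = "//audience_view_"  # frame numbers get appended
bpy.ops.render.render(animation=True)
```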


Save a copy of this image/animation.

Step 2: Render the projector views

Now hide the “fake” collection and go back to the real-world model. Create a material that uses the previously rendered animation/image as an image texture.
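Scripted, the material setup could look roughly like this, assuming the frames were saved as a numbered sequence starting at //audience_view_0001.png and the building object is named “Building” (placeholder names):

```python
import bpy

mat = bpy.data.materials.new("ProjectionMat")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

img = nodes.new("ShaderNodeTexImage")
img.image = bpy.data.images.load("//audience_view_0001.png")
img.image.source = 'SEQUENCE'          # treat the numbered files as an animation
img.image_user.frame_duration = 250    # assumed length of your animation
img.image_user.use_auto_refresh = True

emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")
links.new(img.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])

bpy.data.objects["Building"].data.materials.append(mat)
```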

Now, to get the texture to line up: set the view to the audience camera (Numpad 0), select the real building model, and switch to the UV Editing workspace. Select all of the faces in Edit Mode, then press U and choose “Project from View.” You should see the UV coordinates lining up with the rendered view.
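Project from View depends on the viewport matching the camera exactly, so if it is hard to line up, a scripted alternative is to project each vertex through the camera yourself. A minimal sketch, assuming the object and camera names from above (run it in Object Mode):

```python
import bpy
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = bpy.data.objects["AudienceCam"]   # placeholder name
obj = bpy.data.objects["Building"]      # placeholder name
mesh = obj.data

uv_layer = mesh.uv_layers.new(name="AudienceProjection")
for loop in mesh.loops:
    world_co = obj.matrix_world @ mesh.vertices[loop.vertex_index].co
    ndc = world_to_camera_view(scene, cam, world_co)  # x, y are 0..1 inside the frame
    uv_layer.data[loop.index].uv = (ndc.x, ndc.y)
```

This should bake the same mapping as projecting from the camera view, but it is independent of viewport size and aspect ratio.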


The building now has an emission material that shows the same locked perspective even when you view it from different angles.
[Image: Projector Diagram 6]
To render the proper perspective you need a second camera that is aligned to the focal point of the real-world projector (I’m assuming you have measured this). Make the projector camera active with Ctrl+Numpad 0 and render the building. Since it is just using an emission shader, we can use Eevee for fast rendering.
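The per-projector renders can also be batched, assuming Eevee and projector cameras named “Projector_1”, “Projector_2”, and so on (placeholder names):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'  # the emission-only scene renders fast here

for name in ("Projector_1", "Projector_2"):
    scene.camera = bpy.data.objects[name]    # render from this projector's position
    scene.render.filepath = "//%s_" % name   # one image sequence per projector
    bpy.ops.render.render(animation=True)
```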

That should get you most of the way there, except for a few specific issues that you might need to worry about with this sort of thing.

Part 2: More advanced stuff

2a. Perspective distortion

If you are only getting the texture coordinates from UV mapping, the textures are interpolated affinely across each face, which can cause a weird wobbly effect in some situations. This is the same effect that causes jittery polygons in old PlayStation games. It is possible to mitigate this by subdividing the model, but you can also make a shader that does correct perspective for every pixel. Here’s a version I came up with:
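A minimal scripted sketch of that idea: take the shading point in the camera’s object space, divide X and Y by the depth (the perspective divide that plain UV interpolation skips), and scale by the camera’s focal length over its sensor width. The camera and material names are the placeholders from above, and it assumes a horizontal sensor fit with no lens shift:

```python
import bpy

scene = bpy.context.scene
cam = bpy.data.objects["AudienceCam"]                 # placeholder name
tree = bpy.data.materials["ProjectionMat"].node_tree  # placeholder name
nodes, links = tree.nodes, tree.links

aspect = scene.render.resolution_x / scene.render.resolution_y
f_over_s = cam.data.lens / cam.data.sensor_width  # focal length / sensor width

coord = nodes.new("ShaderNodeTexCoord")
coord.object = cam  # "Object" output = shading point in the camera's local space

sep = nodes.new("ShaderNodeSeparateXYZ")
links.new(coord.outputs["Object"], sep.inputs["Vector"])

def math_op(op, a, b):
    """Create a Math node and wire or assign its two inputs."""
    n = nodes.new("ShaderNodeMath")
    n.operation = op
    for i, v in enumerate((a, b)):
        if isinstance(v, float):
            n.inputs[i].default_value = v
        else:
            links.new(v, n.inputs[i])
    return n.outputs[0]

depth = math_op('MULTIPLY', sep.outputs["Z"], -1.0)  # the camera looks down -Z
u = math_op('ADD', math_op('MULTIPLY', math_op('DIVIDE', sep.outputs["X"], depth), f_over_s), 0.5)
v = math_op('ADD', math_op('MULTIPLY', math_op('DIVIDE', sep.outputs["Y"], depth), f_over_s * aspect), 0.5)

combine = nodes.new("ShaderNodeCombineXYZ")
links.new(u, combine.inputs["X"])
links.new(v, combine.inputs["Y"])

img = nodes["Image Texture"]  # the image texture node from Step 2
img.extension = 'CLIP'        # show nothing outside the camera frustum
links.new(combine.outputs["Vector"], img.inputs["Vector"])
```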

I was going to write a detailed explanation of how this works, but it might be more complicated than what you need.

2b. Color Management

If you render the animation using the Filmic settings (enabled by default), it might end up a slightly different shade after being rendered a second time through the emission shader, since the same color transform is applied twice.

[Image: Projector Advanced 4]
[Image: Projector Advanced 5]

There is a setting for color management in the Scene tab, and another setting for the color space on the image texture node in the shader. The output color space of your animation render should match the input color space of the texture node. I could be wrong on this, but I believe you can use either setting as long as the input matches the output.
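For example, one way to keep the two passes matched is to bypass Filmic entirely, a sketch with the placeholder names from above:

```python
import bpy

scene = bpy.context.scene
scene.view_settings.view_transform = 'Standard'  # no Filmic curve baked into the render

# Read the saved frames back in as plain sRGB on the projection material's texture node:
node = bpy.data.materials["ProjectionMat"].node_tree.nodes["Image Texture"]
node.image.colorspace_settings.name = 'sRGB'
```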


I have been working for a while on figuring this out myself. I figured the render was doing something to the image that distorts it. You are the first I have seen show a way of handling it specifically for what I am after.

Right now I am attempting to match the one camera in the scene with the render from that same camera. I thought it was the aspect ratio at first, but that did not help. I am now back to square pixels, and I am trying to fit the render to the model just to verify the overlay in Blender itself.

I have not tried your distortion fix yet. I will try it, but I wanted to ask: where are the mapping node location numbers coming from? My final output will be a PNG sequence imported into Unity, and that output ultimately goes to two overlapping projectors that have a combined resolution of 3180x1200. I am sure there are issues with that I will have to work out, and so far attempting to create multiple cameras with the same overlap in Blender has failed. I cannot get the extensions to be recognized for some reason when enabling that setting. I thought it was the shift in perspective using two cameras in Unity that was the problem.

I took a rendered image and used it on an image-plane import in Blender after Unity was showing the offset from the model. Same exact offset in Blender, though. I have spent hours attempting to get this working right, and I know there has to be a way to get the rendered image to correctly overlay the objects in the scene.