I’m trying to map 3D camera rotation to a background image’s translation, and then composite the 3D scene onto that background plane, the way old 3D arcade games did before skyboxes were figured out. Right now I’m stuck on how to make the background plane translate.
At the moment, I’m using an imported image as a plane, set to 10 dots per Blender unit, so I can easily match an orthographic camera’s pixel density to the texture’s.
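For reference, matching the orthographic camera’s pixel density to the texture comes down to one ratio: the camera’s horizontal extent in Blender units should be the render width divided by the texture’s pixels per unit. A minimal sketch of that arithmetic (the render width of 1920 is an assumption for illustration; 10 px per unit is from my setup):

```python
# Sketch: ortho scale needed so one texture pixel maps to exactly
# one render pixel. render_width_px is an assumed example value.

def ortho_scale_for_pixel_match(render_width_px: int, pixels_per_unit: float) -> float:
    """Horizontal extent (Blender units) the camera must span for a
    1:1 texture-pixel-to-render-pixel match."""
    return render_width_px / pixels_per_unit

# A 1920 px wide render over a texture at 10 px per Blender unit
# needs the camera to span 192 units horizontally.
print(ortho_scale_for_pixel_match(1920, 10))  # -> 192.0
```

In Blender this value would presumably go on `camera.data.ortho_scale` (which covers the larger sensor dimension, i.e. the width for a landscape render).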
My first approach was to just use repeat extension and the X and Y location parameters on the Mapping node. Worked fine, but then I needed to rotate the image. Rotating the plane itself would technically work, but that opens up the possibility of seeing gaps in the image, so ideally I needed the texture to rotate, not the plane.
Rotating it on the Mapping node skewed and distorted the image, so I then tried mapping it to an empty. That broke the pixel density (it shows up in the render), so I tried to map the empty’s rotation while retaining the generated texture coordinates. That did my head in, so now I’m changing tack: move the camera around the background plane instead, rotation included. I can mitigate gaps by adding padding, but I need to figure out how to procedurally “teleport” the camera to the opposite edge when it reaches a defined edge, like how Blender does infinite mouse.
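The edge “teleport” I have in mind amounts to wrapping each camera coordinate back into the plane’s bounds with a modulo. A minimal sketch of that logic, in plain Python (the half-extent of 96 units and the handler wiring are assumptions, not my actual scene values):

```python
# Sketch of edge-wrapping ("infinite mouse") logic for the camera.
# half_extent is half the background plane's width or height in
# Blender units; the handler hookup below is an untested assumption.

def wrap_coord(value: float, half_extent: float) -> float:
    """Wrap value into [-half_extent, half_extent) so that crossing
    one edge teleports to the opposite edge."""
    span = 2.0 * half_extent
    return (value + half_extent) % span - half_extent

# In Blender this might run once per frame, e.g.:
# def wrap_camera(scene):
#     cam = scene.camera
#     cam.location.x = wrap_coord(cam.location.x, 96.0)
#     cam.location.y = wrap_coord(cam.location.y, 96.0)
# bpy.app.handlers.frame_change_pre.append(wrap_camera)

print(wrap_coord(100.0, 96.0))  # crossing +96 wraps to -92.0
```

The padding I mentioned would need to be at least as wide as the camera’s half-view, so the jump lands on visually identical texture and the teleport is invisible.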
That’s my latest plan, but honestly, I think I’m still overcomplicating things. Is there a workable solution to any of these approaches? I’m not picky; I just want this idea to work.