Create Texture Dynamically From Camera View... Possible?

I want to map what a camera in my scene is seeing onto the surface of an object that my main camera is seeing. Is there a way to pull this off?

Check out the attached screenshot. Basically I want to map the view from Camera A onto the cube and plane seen by Camera B. I want this to be dynamic, so that as I work on the scene in front of Camera A, the texture on the objects changes accordingly.

Is this possible?

Not dynamically, no. You need to render the scene and then use your render as a texture for a new render.
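If you'd rather script that two-pass workflow than do it by hand, a minimal bpy sketch of the first pass might look like this (the camera name and the output path are placeholders, not anything from this thread):

```python
import bpy

scene = bpy.context.scene

# Assumption: the scene contains a camera object named "Camera A".
scene.camera = bpy.data.objects["Camera A"]

# First pass: render Camera A's view to disk.
scene.render.filepath = "//camera_a_view.png"   # placeholder path
scene.render.image_settings.file_format = 'PNG'
bpy.ops.render.render(write_still=True)

# The saved render can now be loaded and used as an image texture
# in any material before the second (final) render.
camera_a_view = bpy.data.images.load(bpy.path.abspath("//camera_a_view.png"))
```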

I believe what you want is Camera Projection, as covered in this video… you might have to adapt the info to your needs, but it should work just fine…

Thanks, I suspected as much but I was hoping maybe there was a way to ‘live wire’ the camera feed as it were. It would be an incredibly powerful feature.

I’ve seen this tutorial before. This is for mapping a texture from the perspective of the camera, not for actually projecting another camera view onto a texture. Thanks though, cheers!

I guess you didn’t understand…
It will work if you use the info from the video and apply it to your needs…

Oh I guess you’re right, I didn’t get that. Can you send me your project file? I would love to see how it’s done. Cheers!

Projection.blend (1.5 MB)

This is really a hack but does what you wanted…Just import your textures to the individual materials.

Ahh, I see what you have done. I think we aren’t talking about the same thing here. What I’m trying to do is create a texture from the scene and have that image (which is seen by Camera A) projected onto the surface of another object in real time (which is seen by Camera B, which would be the main render camera in the output). In other words, there are no image files being used; the texture is being created dynamically as the scene is rendered.

For instance:

Imagine there is a camera pointed at a character dancing. There is another camera pointed at a screen which is displaying the image the other camera sees of the person dancing, in real time.

Make sense?

Not possible as part of the render. You’ll need to composite it afterward.

Copy that, thanks for weighing in

I played with the second camera idea, and it does work…
Camera One looks at the scene… Object 1 gets an image from Camera One…
Camera 2 looks at the image on Object 1 and renders out…

Problems…
1> Everything in the scene will have to be a projector (OK, as it allows multiple projectors). I haven’t gotten to the point of checking if multiple objects can project to a single camera.
2> For the second object to receive the image, it needs a duplicate of the original image… You can’t have a new material (you can add materials, as I did in the first example), so that is a deal-breaker, as you need the image to be rendered to begin with… I haven’t looked to see if it can render as it is looking (I doubt it)!
So no matter what, you have to have a texture rendered before you can start to do this! Pre-rendered… no problem…
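For what it’s worth, that pre-rendered workflow can also be scripted. A rough bpy sketch, assuming the names used in this post (“Object 1”, “Camera 2”), a default Principled BSDF material, and a //camera_a_view.png rendered from the first camera beforehand:

```python
import bpy

# Assumptions: "Object 1" has a node-based material with a Principled BSDF,
# and //camera_a_view.png was rendered from the first camera beforehand.
obj = bpy.data.objects["Object 1"]
mat = obj.active_material
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Wire the pre-rendered image into the material's Base Color.
tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load(bpy.path.abspath("//camera_a_view.png"))
links.new(tex.outputs['Color'], nodes["Principled BSDF"].inputs['Base Color'])

# Second pass: render the final output through Camera 2.
scene = bpy.context.scene
scene.camera = bpy.data.objects["Camera 2"]
scene.render.filepath = "//camera_2_final.png"
bpy.ops.render.render(write_still=True)
```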

Ha, I love your determination! Yeah, I’ve tried to figure out how to hotwire the camera view to project onto another surface but it does seem that compositing is the only way. Cheers for trying though!

I hope the real-time compositor will allow that one day! It’s sad that in 2021 every game engine can easily handle this, and some even use it as a core feature (Portal, Splitgate), but it’s not implemented in Eevee or Cycles…

Did you find out? I’m interested!

If you’re happy with the ‘video screen’ plane being one frame behind… could I suggest this (not tried yet):

  • Before starting, use the VSE to create a series of tiny, empty PNGs (they’re just dummies), starting at frame 0 if your scene starts at frame 1.
  • Load them into your screen’s image texture as an image file sequence, and edit the start frame or offset so that while you’re on frame 1, it’s on frame 0.
  • In compositing, set a File Output node with the same filename and attach it to a Render Layers node set to the relevant camera.
  • On frame 1 it will read frame 0 and save over frame 1, on frame 2 it will read frame 1 and save over frame 2…

Ha, sounds interesting, but I don’t follow; those steps make no sense to me :scream_cat:

OK, I’ve corrected a few mistakes, but I’ll try again… this time not typing on a phone with autocorrect :woozy_face:

I’m going to assume your scene goes from frame 1 to frame 250.

  1. Before starting, use the VSE to create a series of small PNGs that will be overwritten during the actual render. In this example, I have made each one show its frame number.

  2. Load them into your screen’s image texture as an image file sequence, and edit the start frame or offset so that while you’re on frame 1, it’s on frame 0.

  3. In compositing, set a File Output node with the same filename and attach it to a Render Layers node set to the relevant camera.

  4. On frame 1 it will read frame 0 and save over frame 1, on frame 2 it will read frame 1 and save over frame 2…

I’ll edit this with screengrabs as I get time.
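In case it helps anyone trying this, here is a rough bpy sketch of the four steps above. The object name “Screen”, the //screens/ folder, the “screen_” filename prefix and the offset value are all assumptions, the dummy PNGs from step 1 are assumed to already exist on disk, and wiring the texture node into the shader is left out for brevity (see the earlier sketch for that).

```python
import bpy

scene = bpy.context.scene

# Step 2: load the dummy frames (screen_0000.png, screen_0001.png, ...)
# as an image sequence on the screen object's material, one frame behind.
mat = bpy.data.objects["Screen"].active_material
mat.use_nodes = True
tex = mat.node_tree.nodes.new('ShaderNodeTexImage')

img = bpy.data.images.load(bpy.path.abspath("//screens/screen_0000.png"))
img.source = 'SEQUENCE'
tex.image = img
tex.image_user.frame_duration = scene.frame_end
tex.image_user.frame_start = 1
tex.image_user.frame_offset = -1   # tweak so scene frame 1 reads sequence frame 0
tex.image_user.use_auto_refresh = True

# Step 3: a File Output node in the compositor overwrites the same
# filenames each frame from the projector camera's render layer.
scene.use_nodes = True
tree = scene.node_tree
rl = tree.nodes.new('CompositorNodeRLayers')   # point rl.scene at the scene whose active camera is the projector
out = tree.nodes.new('CompositorNodeOutputFile')
out.base_path = "//screens/"
out.file_slots[0].path = "screen_"
tree.links.new(rl.outputs['Image'], out.inputs[0])

# Step 4: while rendering the animation, frame N reads screen_(N-1) and
# writes screen_N, so the plane is always exactly one frame behind.
```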