I am looking for tips on how best to set up View Layers (or other Blender configuration) for a project. I modeled a proof-of-concept virtual set, which will be displayed on a very large monitor behind our talent on a live public-access news show.
The virtual set itself has a fake curved screen:
Using view layers, I render a file that is just the UV coordinates of the screen (masking for the desk in front) from the virtual camera’s view:
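For reference, the screen material is conceptually just the mesh's UV coordinates wired straight into an Emission shader, so R = u and G = v in the render. A bpy sketch of that material (the material name is arbitrary; this must run inside Blender):

```python
import bpy

# Sketch: a material that emits the surface's UV coordinates as raw color.
# "UV_Emit" is a placeholder name.
mat = bpy.data.materials.new("UV_Emit")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

tex_coord = nodes.new("ShaderNodeTexCoord")
emission = nodes.new("ShaderNodeEmission")
output = nodes.new("ShaderNodeOutputMaterial")

# UV -> emission color -> surface output.
links.new(tex_coord.outputs["UV"], emission.inputs["Color"])
links.new(emission.outputs["Emission"], output.inputs["Surface"])
```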
To animate content on the screen in real time (eventually, once the virtual set is photorealistic), I can then use a fast 2D shader in Godot to composite 2D elements onto the screen, faking the distortion of the curved surface. The following image shows the debug grid texture composited purely in 2D in Godot:
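The per-pixel lookup the shader performs can be sketched in plain Python (textures and sizes here are illustrative; the real shader does the same thing with a single texture fetch using the UV render's red/green as the lookup coordinate):

```python
# Sketch of the UV-remap compositing: for each screen pixel, read (u, v)
# from the UV-map render, then fetch that point from the 2D content.

def sample(texture, u, v):
    """Nearest-neighbour sample; texture is a list of rows of values."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

def composite(uv_map, content):
    """uv_map pixels are (u, v, alpha); alpha == 0 marks masked pixels
    (e.g. the desk in front of the screen), which stay transparent."""
    return [
        [sample(content, u, v) if a > 0 else None for (u, v, a) in row]
        for row in uv_map
    ]
```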
Notice that there are some weird noisy spots at the top and bottom of the fake screen. Since the red and green channels of each pixel are used for mapping, it's critical that they represent pure UV coordinates. I think what is happening is that antialiasing is slightly altering the red and green values at the edges:
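A small numeric example of why filtering is fatal here: an antialiased edge pixel blends two unrelated UV encodings, and the blend decodes to a third, wrong screen location (values below are made up for illustration):

```python
# A pixel just inside the fake screen vs. the pixel next to it, off-screen.
edge_uv = (0.95, 0.10)
outside_uv = (0.0, 0.0)

# A 50/50 antialiased edge pixel averages the two encodings:
aa_uv = tuple((a + b) / 2 for a, b in zip(edge_uv, outside_uv))
# aa_uv is now roughly (0.475, 0.05): the shader samples near the middle of
# the content instead of the right-hand edge, producing the noisy fringe.
```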
I haven’t figured out how to get a “perfect” UV map with red, green, and alpha channels.
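I haven't tested every combination, but the settings that seem relevant to a mathematically clean pass are: one sample, minimal pixel-filter width, raw (non-color-managed) output, no dithering, and a float EXR so values aren't quantized. A bpy sketch of those render settings (run inside Blender; property names as in recent Blender versions):

```python
import bpy

scene = bpy.context.scene

# One sample and (near-)zero filter width: each pixel is a single,
# unblended UV value rather than a filtered average of neighbours.
scene.cycles.samples = 1
scene.cycles.filter_width = 0.01  # Cycles' minimum; effectively no AA

# Keep values linear and unquantized.
scene.view_settings.view_transform = "Standard"  # no Filmic tone mapping
scene.render.dither_intensity = 0.0
scene.render.image_settings.file_format = "OPEN_EXR"
scene.render.image_settings.color_depth = "32"
```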
I’d really like to be able to render both the set and the screen mapping image in a single button press, so I’ve been using view layers:
This approach is pretty close, but there are times when I will want to use the same model while switching its material to a shadow or reflection catcher. I know Cycles can substitute a single material across a view layer (as for clay renders), but I'd want to go further and make everything except the screen a hold-out/catcher. The catcher should also produce natural-looking reflections, since the red/green texture could reflect off catcher surfaces.
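What I've pieced together so far (untested as a whole; view-layer and collection names below are placeholders for my file) is that the per-layer substitution is `material_override`, and the hold-out behavior is a per-collection flag on the view layer, with `indirect_only` keeping the geometry alive in bounces so the screen can still reflect off it:

```python
import bpy

scene = bpy.context.scene

# "ScreenUV" is a placeholder view-layer name.
uv_layer = scene.view_layers["ScreenUV"]

# Cycles-only: replace every material on this layer (the clay-render trick).
uv_layer.material_override = bpy.data.materials["UV_Emit"]

# Everything except the screen becomes a hold-out that still participates
# in bounces, so the red/green screen can reflect off it naturally.
# "SetGeometry" is a placeholder collection name.
set_lc = uv_layer.layer_collection.children["SetGeometry"]
set_lc.holdout = True        # masked from camera rays
set_lc.indirect_only = True  # but still visible to reflections / GI
```

I'm not certain how `holdout` and `indirect_only` interact in every Blender version, which is part of why I'm asking.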
Sorry for such a long write-up, but I wanted ideas for best practices on this workflow as a whole; I'm afraid that if I asked a more specific question, I might miss a more holistic change in approach.
Thank you!