I saw a neat video where a Unity dev used two cameras:
One camera had an aerial perspective. It cast a ray forward, passing through everything (x-ray style), until it hit the ground plane, then cast a regular ray back up the same line; wherever that ray hit, it placed a second camera focused on the same point the first ray hit.
The second camera’s render texture was then projected on top of the view plane, over the focus point,
effectively making a ‘portal’ through all obstructions, letting you see through a ‘bubble’.
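Roughly, that two-ray lookup could look like this as a Panda3D-style Python sketch (the ‘ground’ tag, the collide masks, and the function name are placeholders I made up, not anything from the video):

```python
from panda3d.core import (CollisionTraverser, CollisionHandlerQueue,
                          CollisionNode, CollisionRay, BitMask32, Vec3)

def find_focus_and_obstruction(base, render):
    """Forward 'x-ray' to the ground, then a ray back up the same line
    to find the first obstruction between the ground and the camera."""
    ctrav = CollisionTraverser()
    queue = CollisionHandlerQueue()

    # Forward ray: starts at the aerial camera, points along its view axis.
    cam_pos = base.camera.getPos(render)
    forward = render.getRelativeVector(base.camera, Vec3(0, 1, 0))
    ray = CollisionRay(cam_pos, forward)
    cnode = CollisionNode('xray')
    cnode.addSolid(ray)
    cnode.setFromCollideMask(BitMask32.allOn())   # x-ray: test against everything
    cnode.setIntoCollideMask(BitMask32.allOff())
    ray_np = render.attachNewNode(cnode)
    ctrav.addCollider(ray_np, queue)
    ctrav.traverse(render)

    # Keep only the hit on the node tagged 'ground' (placeholder tag).
    queue.sortEntries()
    focus = None
    for i in range(queue.getNumEntries()):
        entry = queue.getEntry(i)
        if entry.getIntoNodePath().hasNetTag('ground'):
            focus = entry.getSurfacePoint(render)
            break
    if focus is None:
        ray_np.removeNode()
        return None, None

    # Backward ray: from the ground point back toward the camera; its first
    # hit is the obstruction the 'portal' has to punch through.
    ray.setOrigin(focus)
    ray.setDirection(-forward)
    queue.clearEntries()
    ctrav.traverse(render)
    queue.sortEntries()
    obstruction = (queue.getEntry(0).getSurfacePoint(render)
                   if queue.getNumEntries() > 0 else None)
    ray_np.removeNode()
    return focus, obstruction
```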
Put a transparent plane with an alpha texture in front of the main camera. Shoot an x-ray from the plane and detect the ground property. Shoot a normal ray back to the camera. Place the videotexture camera at the hit point, copy the rotation of the original camera, and render the texture to the plane.
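As a minimal sketch of that plumbing, assuming the same Panda3D setup as above (make_portal_card, the buffer size, and the card dimensions are my own placeholders): an offscreen buffer, a second camera that copies the main camera’s rotation, and the resulting texture on a transparent card.

```python
from panda3d.core import CardMaker, TransparencyAttrib

def make_portal_card(base, focus, obstruction, size=512):
    """Render the scene from a second camera into a texture and put that
    texture on a plane sitting at the obstruction point."""
    buf = base.win.makeTextureBuffer('portal-buffer', size, size)
    tex = buf.getTexture()

    # Second camera: sits at the obstruction and copies the main camera's
    # rotation (lookAt(focus) would give the same facing along this line).
    portal_cam = base.makeCamera(buf)
    portal_cam.reparentTo(base.render)
    portal_cam.setPos(obstruction)
    portal_cam.setHpr(base.camera.getHpr(base.render))

    # The transparent plane that carries the rendered texture.
    cm = CardMaker('portal-card')
    cm.setFrame(-1, 1, -1, 1)
    card = base.render.attachNewNode(cm.generate())
    card.setPos(obstruction)
    card.setTexture(tex)
    card.setTransparency(TransparencyAttrib.MAlpha)
    card.setBillboardPointEye()   # keep the card facing the main camera
    return card, portal_cam, buf
```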
Shoot a ray from the player to the main camera.
At the first object hit (if it is closer than the main camera), place the other camera.
Take a snap.
Then send it to a videotexture.
Place the plane with the videotexture in the overlay scene at the other camera location.
This way the x-ray would be on top of everything and at the correct size (hopefully).
Dynamically adjust the resolution (no need for FHD far away).
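For the ‘on top of anything’ part, one option instead of a literal overlay scene is to keep the card in the normal scene, draw it in the fixed bin, and switch off depth testing; the resolution helper just picks a coarser buffer size when the main camera is far away (the sketch assumes the buffer is simply recreated at the new size when it changes). The thresholds here are arbitrary.

```python
def put_on_top(card):
    """Draw the portal card after (and over) the rest of the scene."""
    card.setBin('fixed', 0)       # drawn in the fixed bin, after normal geometry
    card.setDepthTest(False)      # ignore whatever sits in front of it
    card.setDepthWrite(False)

def choose_portal_resolution(camera_distance):
    """Coarser portal texture when the main camera is far away."""
    if camera_distance < 15.0:
        return 1024
    if camera_distance < 40.0:
        return 512
    return 256
```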
So is this for when a third-person character walks into a cave/building and you don’t want to end up exploring its nose hair (keeping the camera at its maximum distance)?
To increase the framerate, I’m thinking of only taking snapshots of the block easy demo every so often: disable the videotexture script, re-enable it after a certain amount of time, and put the snapshots on an occluder. That way the framerate would go back to 60 while the videotexture script is off, and only drop while it is on.
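A rough sketch of that throttling, still assuming the offscreen-buffer setup from above: keep the buffer inactive most of the time and switch it on for a short window every few seconds, so the texture gets a fresh snapshot and the extra render cost is only paid then. The intervals and names are made up.

```python
def start_snapshot_cycle(base, buf, off_time=5.0, on_time=0.1):
    """Alternate the portal buffer between a long 'off' stretch and a
    short 'on' window."""
    buf.setActive(False)

    def turn_on(task):
        buf.setActive(True)
        base.taskMgr.doMethodLater(on_time, turn_off, 'portal-off')
        return task.again          # come back after another off_time

    def turn_off(task):
        buf.setActive(False)       # texture keeps the last rendered frame
        return task.done

    return base.taskMgr.doMethodLater(off_time, turn_on, 'portal-on')
```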
Exactly, like walking through an isometric city with obstructions: you could see through them, but only within a limited range, almost creating a ‘fog of war’ or a visual range for the player when inside a building.