After solving a camera track, Blender outputs a distortion node set to Undistort so you can create CG with the proper perspective. At the end of your compositing, you need to redistort your scene with that node set to Distort while maintaining the original plate format. I have already done some VFX scenes with live-action footage, yet never a scene with CG at the camera borders. This one requires an overscan resolution to avoid cropping (not related to the Eevee overscan). I found an add-on, but it didn't solve the problem. Does anyone have a workaround? Shouldn't this be a feature in Blender?
I just created an account so I could reply, because I have been struggling with this for a while now.
It's a little late, but I think I have found a solution. The issue is that the distortion node uses the resolution of the source video, not the overscan resolution.
The solution is to add two Scale nodes to the render output chain: one before the distortion node and one after it. You need to scale the footage up by the overscan factor. Let's say you render with 50% overscan, so your resolution is 150%: set the scale factor to 1.5 before the distortion node, then revert to the original size afterwards by scaling down by 1/1.5. This should result in a correctly distorted render that matches your footage and can be composited over it. The only problem that remains is that the output resolution is still at 150%, so you might have to render first and then crop the result, or save the render output and then do the compositing separately. Maybe someone else has found a solution for this.
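As a minimal sketch of the arithmetic above (plain Python, not Blender's node API; the helper name is made up for illustration), the two Scale node factors follow directly from the overscan percentage:

```python
def overscan_scale_factors(overscan_pct):
    """Return the (pre-distortion, post-distortion) Scale node factors
    for a given overscan percentage, e.g. 50 for 50% overscan."""
    factor = 1.0 + overscan_pct / 100.0  # 50% overscan -> render at 150%
    # Scale up before the Lens Distortion node so it sees footage-sized
    # pixels, then scale back down afterwards to undo the change.
    return factor, 1.0 / factor

up, down = overscan_scale_factors(50)
print(up)  # 1.5
```

The second factor is just the reciprocal of the first, so the two Scale nodes cancel out exactly and only the distortion in between sees the enlarged image.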
Scaling will affect your CG position in the tracked scene, especially if there's a lot of camera movement. For a proper lens-distortion CG workflow you need to render more pixels while maintaining your plate's perspective. That add-on compensates for the lack of pixels by recalculating the sensor size: if a given plate was shot in 4K on an ARRI Alexa LF with a 36.7x25.54mm sensor, the add-on changes it to 38.53x25.54mm for a 5% increase. A proper overscan would add more pixels, keep your camera centered, and maintain the original sensor size. There's a perspective shift and some lighting changes:
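A minimal sketch of the add-on's sensor recalculation, assuming it simply widens the sensor by the overscan factor (the function name is hypothetical, and the numbers are the ARRI Alexa LF example from above):

```python
def overscan_sensor_width(sensor_width_mm, overscan_pct):
    """Widen the camera sensor so the extra rendered pixels extend the
    view while the focal length stays untouched (the add-on's approach)."""
    return sensor_width_mm * (1.0 + overscan_pct / 100.0)

# 36.7 mm sensor width, 5% overscan -> roughly 38.53 mm, as in the example
print(overscan_sensor_width(36.7, 5))
```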
I think there might be a minor misunderstanding.
The render input is not scaled up in any way. I increased the render output size to 150%, but I also compensated for this by dividing the focal length by 1.5. This is basically what the overscan plug-in does, and it should give you the extra pixels needed for redistortion. I should have said that in my first reply.
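Assuming this approach amounts to enlarging the render resolution and shortening the focal length by the same factor (the function name and numbers are illustrative, not Blender API calls), it can be sketched as:

```python
def overscan_camera_settings(focal_length_mm, res_x, res_y, overscan_pct):
    """Enlarge the render resolution and shorten the focal length by the
    same factor, so the original framing sits in the centre of the
    overscanned render with extra pixels all around it."""
    factor = 1.0 + overscan_pct / 100.0
    return focal_length_mm / factor, round(res_x * factor), round(res_y * factor)

# e.g. a 35 mm lens on a 1920x1080 plate with 50% overscan
focal, rx, ry = overscan_camera_settings(35.0, 1920, 1080, 50)
print(rx, ry)  # 2880 1620
```

Since the horizontal field of view is 2·atan(sensor_width / (2·focal_length)), dividing the focal length by the factor widens the view by the same ratio as widening the sensor does, which is why the two approaches are so close in practice.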
The Scale nodes are only there to work around the distortion node assuming the overscan render is the same size as the footage.
The render output is now the footage with the correctly distorted, overscanned 3D render composited over it, but it also has a frame of transparent pixels around it because of the 50% added resolution. This is the only problem for which I could not find a solution without either re-rendering and cropping, or saving the raw 3D render with the overscan and then doing the composite at the footage dimensions.
I tried that setup; it added a gap at the bottom compared with the add-on version. Still, a solution that doesn't require rendering those additional pixels would be very handy, in fact great. Maybe I'm overlooking something, as most people won't notice the slight change in perspective from the add-on. Thank you for your attention to this matter.