Proper workflow to add a green-screened actor to a 3D environment (with camera and actor movement)?

Hi, I am new to VFX, and I know I am jumping in the deep end. I am trying to add keyed footage into a 3D environment where both camera and subject were moving.

As an experiment I am using my cat. I have 804 frames of 25 fps footage, 1080p, as an OpenEXR image sequence.

I just want to see if I can get this test done successfully.

I am using SynthEyes; I understand I could use Blender for the camera solve, but I prefer SynthEyes for its control and export options.

Anyway, here is what I have done:

  1. Solve for the camera. In SynthEyes I used Auto, refined points, deleted points that clearly didn’t make sense, made sure no trackers were on the cat, and set up the ground plane, orientation, and origin. Error about 0.4. Exported to Blender as a Python script.

  2. Key the footage. For this test I didn’t use a green screen; I used the Roto Brush in AE.

  3. Rendered the keyed result from AE as an OpenEXR image sequence, 25 fps.

  4. Imported the camera into Blender. SynthEyes has an option to create a camera background for Blender, and here is what that looks like:

  5. Now here is where I am confused… I realize I can just set the keyed image sequence as the camera background, and I get a good result - things look fine - but with this option I can’t do anything like have lighting affect the “actor”, and it can’t cast shadows. For this reason I thought people generally use images as planes. (I’ve put a rough Python sketch of the background setup below, for reference.)
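For reference, here is roughly what the camera-background route boils down to in Blender’s Python API (just a sketch; the frame count is from my test, the file path and everything else are illustrative):

```python
import bpy

# Sketch: show the keyed EXR sequence as the camera's background image.
cam_data = bpy.context.scene.camera.data
cam_data.show_background_images = True

# Load the first frame and mark it as a sequence (path is illustrative).
img = bpy.data.images.load("//keyed/cat_0001.exr")
img.source = 'SEQUENCE'

bg = cam_data.background_images.new()
bg.image = img
bg.image_user.frame_duration = 804  # frames in the sequence
bg.image_user.frame_start = 1       # scene frame where playback begins
```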

If I use images as planes instead, I get this result:

Notice how at times the cat is sliding around. I suppose this is unavoidable since the camera solve inherently had trouble at certain moments? But shouldn’t the camera solve’s error data be usable to apply a transform that makes the view match the camera background view? Or am I missing something? With my planned shots (me playing guitar, sometimes walking on a plane, no crazy movement) should I expect a better result?

I don’t want to reinvent the wheel, so how would you go about this - using a combination of Blender, SynthEyes (or another solver), and AE - to get an actor inserted in such a way that you have post flexibility with things like shadows?

thanks!

  1. The cat slipping happens because the motion tracking is not perfect. Perhaps some incorrect tracker values are included in the solve.

  2. For the shadow problem, I think you can use the same method as in the video.

Which calls for a cat model, and perhaps projection mapping. If you have a very simple mesh - a tube for the body and a ball for the head - you can project the texture of the cat from the camera view onto it. Standard budget Hollywood VFX. That simple mesh will catch light/shadow and cast shadows.
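A minimal sketch of that projection setup in Blender Python, assuming the solved camera and a stand-in mesh already exist (object and UV map names are hypothetical):

```python
import bpy

cam = bpy.data.objects["Camera"]      # solved camera from SynthEyes
proxy = bpy.data.objects["CatProxy"]  # hypothetical tube-and-ball stand-in

# UV Project re-maps the proxy's UVs from the camera's view, so an image
# texture using that UV map lines up with the footage every frame.
mod = proxy.modifiers.new(name="CamProject", type='UV_PROJECT')
mod.uv_layer = "UVMap"
mod.projector_count = 1
mod.projectors[0].object = cam

# Match the projector aspect to the 1080p footage.
mod.aspect_x = 16.0
mod.aspect_y = 9.0
```

You would then feed the keyed sequence into an Image Texture node on the proxy’s material using that UV map, so the footage sticks to the mesh from the camera’s point of view.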
Back to just the image sequence above: depending on the activity you may be able to make a mask (rotoscope) to select part of the cat and do a color change. But it would be easier to reshoot with a strong light so you get the cat’s shadow with it. Or mask out a shadow from a leopard and use that. Lol…
Images as planes work if you preplan the lighting well and catch shadows in camera. Otherwise you add things to fake the missing stuff…
You can probably find a video of a cat running in profile that could be stretched out on the ground as a fake shadow. It’s another possible way to fake it.

Sliding cat - yes, it’s avoidable. Never happens in a blockbuster. Incorrect lens from the SynthEyes import? Forgot to click Show Stable and/or Render Undistorted in Blender’s tracking window’s Clip Display menu (top right)? Remove some of the worst tracking markers and solve again, perhaps. There is regular tracking and tripod tracking in Blender; perhaps you need to set something like tripod mode in SynthEyes.

I only use Blender out of the three apps, so I cannot give a solution involving all of them…
Watch Ian Hubert’s videos - Blender can do a lot. He uses something else for mixing his video. Watch at least one of his last three videos, then go look at the tutorials from several years back. Good inspiration & info for you.

Thank you for the replies. Will definitely watch those videos!

I think the lens settings and camera sensor settings are somewhat accurate. I am using a GH5, an MFT camera, and shot 1080p at 25 fps with a 21mm focal length. The solver shows a focal length of 20.9mm and an FOV around 44 degrees, which seems right. I specified the sensor width as 17.3mm, though the SynthEyes manual says specifying such info might be a waste of time since the true data from manufacturers is rarely known with high accuracy.
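As a sanity check on those numbers (plain Python; note the full 17.3mm sensor width may not be exactly what the GH5 uses in 16:9 video, which could explain the small difference):

```python
import math

# Horizontal FOV = 2 * atan(sensor_width / (2 * focal_length))
sensor_width = 17.3   # mm (full MFT width; the 16:9 crop may differ)
focal_length = 20.9   # mm, from the SynthEyes solve
fov = 2 * math.degrees(math.atan(sensor_width / (2 * focal_length)))
print(f"{fov:.1f} degrees")  # ~45.0, close to the ~44 SynthEyes reports
```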

I redid the tracking, this time manually. My error is higher (0.66) but the cat slides around less. And looking at the error graph in SynthEyes, at the moments where the cat is sliding, the error spikes. At those moments there is a lot of motion blur in the trackers and I just can’t find obvious placements for the trackers at those moments. SynthEyes probably has advanced features to deal with that situation but I am just learning that beast of a program…

Honestly I think my test case is a bit useless now that I think about it. When I do the actual filming of my music videos I will have more controlled camera movement, and I as the subject - playing guitar, sometimes walking - will be in a camera view that is probably more ideal than me filming my cat on the carpet…

But this test case really helped me see some of the challenges. I think I could get the test working really well if I tended to the high-error parts frame by frame. But I’m just not sure how to manually track parts where there is motion blur. Do I just place the tracker in the middle of the blurred streak, where the true location probably is?

thanks again

In Blender I guesstimate for a frame or two when needed. Does SynthEyes have a function for showing the prior and upcoming marker locations (like Blender has)? That gives a clue of where a marker needs to be. I have noticed I often twist with my hand shake - so in the same frame where there is blur in one spot you MAY have a good marker elsewhere that is a useful guide - again, following the marker trail.

Think more about the lighting when you film. The cat was too washed out / dark… unless you did not apply a view transform to the .exr footage.
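If it is just the missing view transform, that lives in Blender’s Color Management settings; a one-line sketch (Filmic is one option, newer versions also have AgX):

```python
import bpy

# EXRs are scene-linear; without a view transform they look flat or dark.
bpy.context.scene.view_settings.view_transform = 'Filmic'
```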
Also - avoiding shadows / filming in overcast light is standard Hollywood. You must have seen ‘making of’ videos with stars in the bright sunshine under a large shade.
This just popped into my head.

It’s a goldmine of useful info, even though it’s cartoons.


Hi, I can set keyframes for the trackers so that at a blurred moment I have a good track before and after. My cat test actually is not very representative of how I will actually shoot. When I do this for real, someone will film me playing guitar and sometimes walking on the floor. I will use a green screen with well-placed tracking markers. I think my camera solves will be way easier as there won’t be the sorts of jerky camera movements I did for the test.

Lighting will of course be the hardest part. I have a decent lighting setup - an Amaran 200x, an Amaran F21x, an Aputure MC Pro, and a couple of small lights, plus diffusion and a Fresnel. Hopefully when I decide what sort of environment to composite myself into I can match the lighting convincingly enough, and I will see what After Effects can do in terms of helping match my footage with the environment.

thanks

OK, I actually have things working now using images as planes. Oddly enough it was a frame offset issue. I only realized this when I had SynthEyes export a camera projection screen instead of a camera background as part of its Blender export. In the texture settings for that screen it said start frame 1. So I redid my images as planes, this time selecting every frame except the first, and everything is fine.

So this means the weird sliding of the cat was not solver error - it was just a frame offset, which is some sort of user error…
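For anyone else who hits this: the offset lives in the Image Texture node’s image-user settings on the plane’s material. Roughly, in Python (the material name is hypothetical):

```python
import bpy

mat = bpy.data.materials["CatPlaneMat"]  # hypothetical material name
tex = next(n for n in mat.node_tree.nodes if n.type == 'TEX_IMAGE')

iu = tex.image_user
iu.frame_start = 1          # scene frame where the clip starts playing
iu.frame_offset = 1         # skip the sequence's first frame
iu.frame_duration = 803     # 804 frames minus the skipped one
iu.use_auto_refresh = True  # refresh in the viewport while scrubbing
```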
