UV Project modifier for VFX Paint Outs

Hey all. Long time lurker, first time poster.

So, I just wanted to pick some intelligent brains.

I am trying to do some complex 3D paint outs, and my weapons of choice are AE and Blender.
I can use both quite well, but I am having some trouble sussing out the best way to go about using a great Blender tool for a purpose it wasn’t designed for.

I have a shot that needs an object removed. I can camera track in either Blender or AE (and transfer via JSON), no problem.
That camera is in Blender. I create basic scene geo, UV map it, and then use the ‘UV Project’ modifier, driven by the moving camera, to project the footage onto the geo. Works for me.
What I want to know is… is there a way to render out that projected texture as a flat, baked, stabilized video to paint on back in AE? I can do single frames using a duplicate camera and bring everything back, but my brain is failing me. There MUST be a way to use this data to get out a nice, flat, paintable projected texture, and then re-apply it in Blender to get a clean plate.
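For what it's worth, what the UV Project modifier does per frame is just a perspective projection of each vertex through the camera. A minimal sketch of that math in plain Python (no bpy needed; the camera values are illustrative, not taken from any real shot, and it assumes a horizontally fit sensor):

```python
# Sketch of the perspective projection behind UV Project: a camera-space
# point is divided by its depth and mapped into the 0..1 UV square.
# focal_mm / sensor_mm / aspect are illustrative placeholder values.

def project_to_uv(point_cam, focal_mm=35.0, sensor_mm=36.0, aspect=16 / 9):
    """point_cam: (x, y, z) in camera space, camera looking down -Z."""
    x, y, z = point_cam
    if z >= 0:
        raise ValueError("point is behind the camera")
    # perspective divide onto the image plane (fractions of sensor width)
    px = (focal_mm / sensor_mm) * (x / -z)
    py = (focal_mm / sensor_mm) * (y / -z) * aspect  # normalize v by image height
    # map from [-0.5, 0.5] around the view center to [0, 1] UV space
    return (px + 0.5, py + 0.5)

# A point straight ahead of the camera lands in the middle of the frame:
print(project_to_uv((0.0, 0.0, -2.0)))  # → (0.5, 0.5)
```

Baking just evaluates this projection for every texel of the unwrapped geo each frame, which is why the result is a stabilized texture: the UVs stay put while the camera moves.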

Also, apologies if this isn’t the right place for this question. Couldn’t find a slightly more general ‘Blender use in VFX’ category.

Would love any thoughts people have on this.


Hello and welcome!

I’m not sure I exactly understand what you want to do… but this reminds me of something I did once…

You want to track a particular part of the shot to stabilize it, clean it, and then re-project it onto the shot…

I did that once to remove some reflections from a car that moved in the distance. I plane-tracked the door of the car and kind of baked that, so I ended up with everything that was inside the plane, cleaned that up, and put the cleaned version back in the plane, if you see what I mean…

Is that what you’re talking about?

Close, yeah. But the shot will be translating in 3D, hence the need for a 3D track, and a more robust 3D projection than AE can handle.

So essentially, I want to create a stabilized, undistorted plate using the UV Project modifier. The shot has a 3D move: track it, project it onto geo, then re-render from a stationary camera, paint on that render in AE (I can get that far by myself), then take the painted clean render, project it BACK onto the geo, and add the original camera move back in. Would I simply re-project it from the stationary camera onto the geo and re-render the shot with the moving camera again?

This would make a great plugin or tutorial for Blender, as the only compositing software out there that will do stuff like this is Nuke, and I know Nuke is crazy expensive for a lot of Blender’s user base.

Hmm, I’m trying to follow your explanation, but at some point you lost me :slight_smile:

Yeah, that sounds like the way to go. If the camera move is too complex you might end up with your cleaned image bleeding at some extreme camera angles; then you’d probably need to add one or a few more cleaned images.

Why AE and not Photoshop?

In any case, posting a few images to illustrate what you want might help.

Well, if you really don’t want to spend any money, you can look into Natron; for simple comp work it should be great.
And if you have a bit of money, I’d look into Fusion.
AE is great too, especially since what you’re doing doesn’t seem super complex.
For more advanced compositing, Natron, Fusion, or even Blender’s viewport compositor might be worth looking into.

Not really about money, more about familiarity, to tell the truth. I just know AE inside and out, and likely don’t have the time to re-learn what I know in Nuke or Fusion for these shots on a project.

Anyway, you are right. Likely easier to explain with some visuals. I am sure I am doing something wrong. Here’s a link to what I am exploring thus far

My question is: how do I then set up in Blender for the UNDISTORT, to re-render the ‘AE test junk’ and ‘bake’ it in?

I know I can do this in Mocha, etc., but I am just trying to figure out a 3D projection workflow for myself, since I think I will need it on some shots (for example, removing a lighting rig like this from on top of the crutch the person is carrying). If I can get proper geo and a camera in there, I can mimic Nuke’s 3D projection-painting capabilities with AE and Blender… my brain is just stalled on the workflow here.


Hello!

Thanks for giving some examples, that helps!
Yet I’m not 100% sure of exactly what you have in mind. Is it something like this that you want to replicate?

If not, please show an example in Nuke or elsewhere…

If Mocha allows you to, say, remove the light easily, you should definitely use that, as it’s probably going to be the simplest. You don’t want to use more complicated techniques just to spend more time cleaning footage, do you?

I think Mocha and NukeX have plugins that allow you to automatically remove parts of the image: you select one element and it will scout the surrounding frames to remove it. That would be the first thing I’d try if I had to remove said light, and it might give you a simpler basis to clean.

In the end, the shot you are trying to clean up looks quite challenging. In the snow example you’ll see that it’s much simpler: the light stays coherent and they basically track a plane, whereas in your case there would be several planes to clean, with the ‘stairs’ part involving a lot of parallax, reflections, changing light, motion blur and such.

If I had to clean up something like that, first I’d ask for the phone number of the person responsible for the shot so I could tell them right away what I think, and then it’s probably going to take a lot of time and energy without a guarantee of complete success.

Anyway, if the technique you’re trying to recreate is the same as in the tutorial I posted, first I really encourage you to use appropriate software for it. Blender and AE are going to slow you down a lot, since you’ll be bouncing back and forth between the two, and they are limited because they don’t have the right tools for this (even though it’s always possible to work around that).

I’d look into Fusion, which you can probably use for free within Resolve. I’m not sure it can do this, but it’s probably a better option.

If you insist on using Blender, what you are missing is an animated bake: https://www.youtube.com/watch?v=uRNssN00CVw

Basically the method goes like this:
1/ Once you’ve tracked the shot, you create a 3D model of the set. It needs to be very precise to be effective; if not, everything will slide and you’ll have a harder time cleaning up.
That’s why in the cleanup tutorial they use a plane aligned with the ground; if you have several “walls” to clean up, you need faces perfectly aligned with your 3D track.

2/ You need to UV unwrap the model; again, cleaner UVs will help the cleanup process a lot.

3/ Then you have to bake the whole shot from the tracked camera. As a result you’ll get an animated texture baked into the UVs, which are static.
That’s what they get here:

4/ Then you clean that up in your compositing software.

5/ Back in Blender, you use the cleaned animated bake, map it as a texture through your UVs, and render the shot, which should give you an animated clean plate.

6/ Back in AE, you use the clean plate to remove the object you want.
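The bake in step 3 can be automated from Blender’s scripting tab. This is only a sketch under assumptions: the object name “SceneGeo”, the bake-target image “BakeTarget”, and the output folder are placeholders for your own scene, and it assumes Cycles with an image texture node selected for baking in the geo’s material. Outside Blender (no bpy module) it does nothing.

```python
# Sketch of automating step 3: bake the UV-projected texture once per
# frame, writing a numbered image sequence you can paint on in AE.
# Object/image names and paths are placeholders, not from the thread.
try:
    import bpy
except ImportError:
    bpy = None  # running outside Blender: nothing to bake, sketch only

def bake_path(frame):
    # //-prefixed paths are relative to the .blend file
    return f"//bake/frame_{frame:04d}.png"

if bpy is not None:
    scene = bpy.context.scene
    geo = bpy.data.objects["SceneGeo"]      # the UV-unwrapped projection geo
    img = bpy.data.images["BakeTarget"]     # image node selected for baking
    bpy.context.view_layer.objects.active = geo
    geo.select_set(True)
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)  # advances the tracked camera, so the
                                # UV-projected footage changes every frame
        bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'})
        img.filepath_raw = bake_path(frame)
        img.file_format = 'PNG'
        img.save()
```

For step 5, you would load the painted version of this sequence back in as an image-sequence texture on the same material (same UVs) and render through the tracked camera again.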

But you’ll probably quickly see that it’s better to shoot smart and try to minimize this kind of cleanup at the shooting stage. The most important thing is the effect you’re going to put in the shot; that’s where you want to spend most of your time!

Have fun!