Can't replicate After Effects result in Blender with basic nodes

I’m trying to create a mask for lights in Eevee, in order to get some of the functionality of Cycles’ light groups. The masks I rendered in Blender work with AE’s luma matte, but I’m struggling to produce the same effect using Blender’s mix nodes.

This is the render without compositing:

This is the light mask (value is used as mask):

This is the goal result, rendered in Blender without compositing. I want to get as close to this as possible with compositing:

This is what I get in After Effects after using two solid red layers in Overlay mode, with the Light Mask as a Luma Matte, on top of the un-composited render shown above:

This is what I get in Blender with the following node setups in the compositor, trying to emulate the AE method:
With Alpha Over:


With Mix Nodes:

Compositor:

Neither of these looks like the AE result. The best I’ve managed to produce in Blender so far uses another setup:


Compositing Tests.blend (1.5 MB)
Had to remove the EXR images so that the file would upload. They can be re-rendered locally for testing.

While I believe my best Blender result is about the same quality as the AE result, I’m puzzled as to why the simple setups didn’t work. Am I doing something stupid, or do the basic nodes simply work differently in Blender than in AE?

Does it need to be done with a mask? If you’re going to have the combined render and the render of that single light anyway, you can subtract it from the combined (removing that light’s influence), adjust it however you want (by, say, tinting it red), and then add it back.


That’ll give the exact same result as changing the color of the light in the scene (save for small variations in the post-process bloom), since mathematically it’s the same thing.
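Here’s the same subtract / tint / add-back idea written out as pixel math, just to make the reasoning explicit (a minimal numpy sketch on linear float images, not the actual node tree):

```python
import numpy as np

def recolor_light(combined, isolated_light, tint=(1.0, 0.0, 0.0)):
    """combined, isolated_light: linear float images of shape (H, W, 3)."""
    without_light = combined - isolated_light        # remove that light's influence
    recolored = isolated_light * np.asarray(tint)    # tint its contribution (red here)
    return without_light + recolored                 # add it back on top

# Light contributions are additive in linear space, which is why this matches
# re-rendering with the light's color changed (post-process bloom aside).
```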

If you really want to do it with masks, though (which means throwing away that much more valuable isolated RGB light render and using only one channel instead of all three, but again, if that’s what you need for this case), things get a little trickier, since you can’t get “inside” the render anymore and have to operate on the combined pixel values. I don’t know how close you can get to that rendered result (I didn’t try), but you can certainly mimic the same comp operations between Blender and AE.

The first thing that’s probably causing differences is that Blender works in linear, while After Effects works in display space by default, so the math of the color operations is being fed different values. The second thing is that AE limits blend modes to a layer’s alpha channel by default (so the compositing happens only within the area where the layer exists, with the black outside ignored), while Blender doesn’t (so the black outside is composited with the same blend mode as the rest of the image).
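To put a number on that first point (a quick sketch in Python, not either app’s actual pipeline): the same pixel carries very different values in linear versus display-referred sRGB, so identical blend math fed one or the other will disagree.

```python
def linear_to_srgb(c):
    """Standard sRGB transfer function for a single channel value in [0, 1]."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

mid_grey = 0.18                   # the linear value Blender's compositor sees
print(linear_to_srgb(mid_grey))   # ~0.46, the display value AE's default math sees
```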

I couldn’t replicate your AE result using the layers you said you had, but if I recreate it on my end (two red solids, Overlay blend) and then rebuild that same comp in Blender, I get a very close but not quite identical result. I’m not sure what accounts for the remaining difference: maybe differing implementations of the Overlay mode, or slightly different math used by AE when determining luma for the luminance mask? And I’m definitely not sure what accounts for the difference between my AE comp and your AE comp.
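For reference, here are the textbook versions of both suspects (plain Python, written from the commonly published formulas; either app could deviate slightly from these, which is exactly the point):

```python
def overlay(base, blend):
    """Common Overlay definition: multiply in the darks, screen in the lights.
    Small differences (threshold handling, clamping) show up as small pixel shifts."""
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)

def luma(r, g, b, weights=(0.2126, 0.7152, 0.0722)):
    """Luma for a luminance matte; Rec709 weights here, but an app using
    Rec601 weights (0.299, 0.587, 0.114) would build a slightly different matte."""
    return weights[0] * r + weights[1] * g + weights[2] * b
```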


You nailed it.

My original idea involved actual masks: I was going to do two renders, a normal one but with all lights pure white, and a second one in which the diffuse of every material was converted to grayscale and the scene was lit by three colored lights in red, green, and blue. I would then separate the lights with a Separate Color node set to RGB. This would let me isolate up to 3 light groups in 1 render, whereas the light isolation method I used in the OP takes 3 renders but is perfectly accurate, as you pointed out.
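In pixel terms, the RGB-lights trick boils down to something like this (a rough numpy sketch of what the Separate Color node hands you, assuming truly grayscale diffuse and pure R, G and B lights):

```python
import numpy as np

def split_light_groups(rgb_lights_render):
    """rgb_lights_render: (H, W, 3) linear render lit only by pure R, G, B lights."""
    red_group   = rgb_lights_render[..., 0]   # contribution of the red light
    green_group = rgb_lights_render[..., 1]   # contribution of the green light
    blue_group  = rgb_lights_render[..., 2]   # contribution of the blue light
    return red_group, green_group, blue_group
```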

Using your node setup with this RGB lights method, I almost got it right, but there are defects on the edge of the orange sphere:

I feel like this could be fixable with some math that takes into account the difference between the color used on the light in the RGB setup and a pure white light, such as (1, 0, 0) + (0, 1, 1), in the region affected by the light. Will try some solutions later.

One note: I think you ticked “Clamp” somewhere in your renders, or otherwise changed something, because you can see the highlight on the black metallic ball isn’t white in the first render, and in the last one it’s kinda fried. The highlights are working fine on my end using your node setup.

To speed things up a bit, I separated the lights completely between view layers, so no need to render the light to be recolored:
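In case it helps anyone reproducing this, here’s roughly what that view-layer split looks like scripted in bpy (a hypothetical sketch: the layer and collection names are made up, and the collections are assumed to already exist in the scene):

```python
import bpy

scene = bpy.context.scene

# One view layer per light group; names are only for illustration.
main_layer    = scene.view_layers.new("MainLights")
recolor_layer = scene.view_layers.new("RecolorLight")

# Exclude each group's light collection from the other view layer, so every
# layer renders only its own lights (collections with these names must exist).
main_layer.layer_collection.children["RecolorLight"].exclude = True
recolor_layer.layer_collection.children["MainLights"].exclude = True
```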

Also just wanted to leave this here for people searching this topic.
https://docs.blender.org/manual/en/latest/render/layers/passes.html


I think I managed to make the RGB lights pass work. I’ll do more tests to see if it breaks down in other scene and lighting configurations, because the top of the orange sphere is still slightly off.

Ah, yeah, that’s because I actually made and exported those demo images in Nuke (where I’m most comfortable), and only rebuilt the setup (remove light, tint, add back) in Blender to take that screenshot. I was working fast and left Nuke’s default view transform on, which is a naive sRGB lookup (akin to Blender’s Standard). So there’s no highlight rolloff, and the solid red light source stays solid red.

And a few last notes on the RGB mask workflow: for starters, Blender’s HSV adjustment node works by converting to HSV, adjusting the chosen channel, and converting back, so desaturating with it behaves differently from a luma-weighted desaturate. Using Blender’s HSV to desaturate this image results not in A, as you might expect, but B.

That’s why I was using that custom desaturation group in my example earlier: a more typical Rec709 desaturate can be achieved by multiplying R x 0.2126, G x 0.7152, B x 0.0722, and adding the three results.
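To make the two behaviours concrete (a plain numpy sketch; Blender’s node internals may differ in detail, but this is the gist):

```python
import numpy as np

def hsv_desaturate(img):
    """Zeroing saturation via HSV: every channel ends up as max(R, G, B)."""
    v = img.max(axis=-1, keepdims=True)
    return np.repeat(v, 3, axis=-1)

def rec709_desaturate(img):
    """The weighted sum described above: R*0.2126 + G*0.7152 + B*0.0722."""
    luma = img[..., 0] * 0.2126 + img[..., 1] * 0.7152 + img[..., 2] * 0.0722
    return np.repeat(luma[..., np.newaxis], 3, axis=-1)

pure_red = np.array([[[1.0, 0.0, 0.0]]])
print(hsv_desaturate(pure_red))     # 1.0 everywhere: red keeps its full brightness
print(rec709_desaturate(pure_red))  # ~0.21 everywhere: a much darker grey
```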

And your second issue comes from the fact that you’re no longer subtracting the light’s actual RGB contribution, just a neutral copy of it: after you split out, say, the red light, you’ve got an RGB image with the same value in all three channels, and you’re subtracting that single value from all three RGB channels of the rendered image. Basically, the values you would need to subtract from your image to exactly remove the influence of the light are these RGB ones:


But the values you’re subtracting are these RRR ones:

So the orange sphere, for example, has too much blue subtracted, while the green sphere has too much red subtracted, and so on. You’re getting away with it in the limited sandbox of your demo scene, with simple solid-color objects and lots of shadows, but it is definitely a hack, and you’ll probably see more and more artifacting as your scenes get more complex.
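A toy example of that mismatch, with made-up pixel values (not read from the actual render), just to show how the over-subtraction creeps in:

```python
import numpy as np

combined          = np.array([0.9, 0.5, 0.1])    # a pixel on the orange sphere
true_contribution = np.array([0.4, 0.2, 0.05])   # the RGB values you'd need to subtract
mask_contribution = np.array([0.4, 0.4, 0.4])    # the single channel fanned out to RRR

print(combined - true_contribution)   # [0.5  0.3  0.05] -> the light is cleanly removed
print(combined - mask_contribution)   # [0.5  0.1 -0.3 ] -> green and blue over-subtracted
```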