Why do different mix types not behave right?

Why do different mix types not behave like they do in typical painting apps? In Photoshop, when I apply a darken effect, e.g. a black-and-white circle over a colour image as a vignette, the colour image darkens where the overlay is black, while the white areas are ignored and don't affect the result.

In Blender the white area always mixes in (making a milky effect), even when the intent is only to darken. Why is that?

If you are working with the Mix node on 32-bit-per-channel inputs, the behaviour can differ. If you set the Factor in the Mix node higher than 1, the blending can produce some strange results.
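To see why a factor above 1 misbehaves: the default Mix is essentially a linear interpolation between its two inputs, and anything past 1 extrapolates rather than blends. Here is a minimal plain-Python sketch of that idea (the lerp formula is the commonly cited one, not taken from Blender's source):

```python
def mix(base, blend, fac):
    """Plain linear interpolation: the assumed formula behind Mix's default mode."""
    return base + fac * (blend - base)

# fac between 0 and 1 stays between the two inputs
print(mix(0.2, 0.8, 0.5))   # 0.5

# fac above 1 extrapolates past the second input, which can push
# 32-bit float channels outside the 0..1 range and look "strange"
print(mix(0.2, 0.8, 1.5))   # 1.1 -- brighter than either input
```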

Another factor is enabling/disabling the “A” (alpha) button on the Mix node; this can often lead to a totally different result.

Hope any of this helps.

The main reason is the most obvious: the people who wrote Blender aren’t the people who wrote Photoshop. Blender nodes are not the same as P-shop layers, and neither the algorithms behind the Mix operations nor the way data is passed through the pipes of the noodle match what P-shop uses for combining layers. I agree that some aspects of Blender’s Color > Mix node are not entirely what one might expect, but I find it more useful to develop alternatives, like using your vignette art with Multiply rather than Mix, or trying a Subtract operation with the same art inverted.
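As a rough illustration of the Multiply suggestion, here is a minimal compositor-script sketch (node and socket names are from memory and may need adjusting for your Blender version, and the image path is a placeholder). It multiplies the render by the vignette art, so black darkens and white leaves pixels untouched:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

render = tree.nodes.new("CompositorNodeRLayers")   # the rendered image
vignette = tree.nodes.new("CompositorNodeImage")   # black/white vignette art
# vignette.image = bpy.data.images.load("/path/to/vignette.png")  # load your own file

mix = tree.nodes.new("CompositorNodeMixRGB")
mix.blend_type = 'MULTIPLY'                # black * colour = black, white * colour = colour
mix.inputs['Fac'].default_value = 1.0

comp = tree.nodes.new("CompositorNodeComposite")

tree.links.new(render.outputs['Image'], mix.inputs[1])
tree.links.new(vignette.outputs['Image'], mix.inputs[2])
tree.links.new(mix.outputs['Image'], comp.inputs['Image'])
```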

I’ve noticed a number of variations in how such operations work from app to app, so I don’t think Blender is all that unusual in this way.

Maybe the PyNodes will provide more options once they’re implemented.

Thanks everyone. It seems that so many apps emulate PS; even GIMP handles transfers in a similar way. I just couldn't get Blender to handle the darkening around the edges the way I expected. I really love the node approach, but the alpha wasn't working right.

You have to understand the different modes the Mix node has, what the Factor socket does, and colour math. Then it is not mysterious or “wrong”, and you can make it do what P-shop does.
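For what it's worth, here is a small plain-Python sketch of the colour math being described (the formulas are the commonly cited ones for a Darken blend and a plain linear mix, not pulled from either application's source). It shows why white has no effect in a Darken but washes things out in a plain mix:

```python
# A "Darken" blend takes the per-channel minimum, so white (1.0) never changes anything.
def darken(base, blend):
    return tuple(min(b, o) for b, o in zip(base, blend))

# A plain mix/lerp pulls every pixel toward the overlay, so white areas go milky.
def plain_mix(base, blend, fac):
    return tuple((1 - fac) * b + fac * o for b, o in zip(base, blend))

colour_pixel = (0.8, 0.3, 0.2)
white_overlay = (1.0, 1.0, 1.0)

print(darken(colour_pixel, white_overlay))          # (0.8, 0.3, 0.2) -- unchanged
print(plain_mix(colour_pixel, white_overlay, 0.5))  # (0.9, 0.65, 0.6) -- washed out
```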

When I say “wrong”, I mean the preview shows something completely different from the result. And on occasion deactivating the alpha doesn't work…

It can, indeed, be interesting to troubleshoot these things … yet, that’s exactly what you have to do. (And by the way, you have to do exactly the same thing in Photoshop, for the same reasons. Especially once you leave the comfortable well-trod pathways and start to ask that race-horse to do anything more than walk.)

One thing that can really help you is the Normalize node. It “squishes” its inputs into a known range of outputs… and even though it might well not be a node you want to keep in the final version of your “noodle,” it’s certainly a visually informative one to drop in while you are “debugging” your data flow.
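If it helps to see what that “squishing” amounts to, a normalize step remaps values into a known 0..1 range roughly like this (a plain-Python sketch of the usual min/max formula, not Blender's actual implementation):

```python
def normalize(values):
    """Remap a list of values so the smallest becomes 0 and the largest 1."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]   # avoid dividing by zero on flat input
    return [(v - lo) / (hi - lo) for v in values]

# Handy while debugging: out-of-range data becomes something you can actually see.
print(normalize([-0.5, 0.0, 1.5, 3.0]))  # [0.0, 0.142..., 0.571..., 1.0]
```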

There’s really no way around it for long, however: you are “debugging” what is actually sometimes a very intricate “computer program.” Node-networks of all types are a very sophisticated way to process data. Therefore, they come with the same set of “issues.”

Thank you all, I appreciate the advice and don't want to sound ungrateful.

Actually it’s not a bad question at all, I think many of us Blenderbenders start out using Photoshop experience as a basis for trying things in the Compositor. Some operations are very similar, some not, it just takes working with the actual results and leaving a few expectations behind to really start learning what’s possible. After a while, using the nodes becomes (almost) as predictable as using Photoshop’s layers (which isn’t always predictable either, as sundialsvc4 said).

Thanks for this information; I was very confused about it, but after reading the thread I have a much clearer idea of how it works. It is really helpful to me.