When to use Photoshop, when Blender?!

Hi,

I really want to know when (and for what purpose) you use Photoshop and when (and for what) you use Blender for post-processing and compositing.
I’m new to this stuff and would be happy if somebody could explain the pros and cons to me!
I don’t see why you wouldn’t just do everything inside Photoshop (or possibly even everything inside Blender).

Thank you.

You use whatever is faster. For example, after a render I might not like the contrast yet. Instead of going back into Blender and changing the light strengths, it’s faster to open the image in GIMP and adjust the contrast there, or add some effect you like, such as a little blur.

But if you are rendering an animation, it would be faster to do this post-processing in Blender instead of loading thousands of images into GIMP one by one.
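To make the animation case concrete, batch post-processing is basically a loop over frames. Here’s a minimal pure-Python sketch (the contrast formula scaling around mid-grey, the in-memory “frames,” and all names are my own illustrative assumptions, not Blender’s or GIMP’s API):

```python
# Minimal sketch of batch post-processing rendered frames.
# Frames here are fake flat lists of 0-255 greyscale pixel values.

def adjust_contrast(pixels, factor):
    """Scale pixel values around mid-grey (128), clamping to 0-255."""
    return [max(0, min(255, round(128 + (p - 128) * factor))) for p in pixels]

frames = {
    "frame_0001": [100, 128, 200],
    "frame_0002": [50, 128, 250],
}

# One pass applies the same adjustment to every frame.
processed = {name: adjust_contrast(px, 1.5) for name, px in frames.items()}
print(processed["frame_0001"])  # [86, 128, 236]
```

In practice you’d point Blender’s compositor at the rendered image sequence and let it write the adjusted frames back out, rather than looping by hand like this.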

I like Photoshop, but also GIMP and Corel PhotoPaint. PhotoPaint has some features, like the Divide blending mode, that work amazingly well for removing a background. It also has other things not related to 3D, like black-and-white conversion with many conversion types to choose from; why Photoshop doesn’t have that, I don’t understand.

There are a couple of advantages to doing things in Blender, to the point that I don’t do much final post-pro in Photoshop at all besides adding a signature on personal projects. (Note that most of these are actually disadvantages of PS rather than advantages of Blender’s compositor per se; most of them apply to Nuke and After Effects as well as Blender.)

-Blender dynamically links input files, Photoshop does not. You can sorta fake it in PS with smart objects and the Layer > Smart Object > Replace Contents command, but it’s a pain to do, especially when you have many layers.

-Everything (or almost everything?) in Blender is done in floating-point. PS is largely a low-dynamic-range system. It has limited 32-bit support, but many tools aren’t usable in that mode, and you must down-convert manually rather than automatically on save like Blender does. If you’re in 8-bit, that means your bloom/glare effects can’t lock on to superwhite regions of the image, and your gamma, curves, and exposure changes can’t knock down clipped regions or bring out shadow detail; that information is just gone.
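The superwhite point is easy to demonstrate with a toy example. This is a hedged sketch (the quantization helper and the specific values are mine, not how PS or Blender actually store pixels): two highlights brighter than display white stay distinguishable if exposure is applied in float, but become identical once clipped to 8-bit.

```python
# Why float pipelines preserve superwhite detail and clipped 8-bit
# pipelines don't. Linear values; 1.0 is display white.

def to_8bit(value):
    """Quantize a linear float value to 0-255, clipping superwhites."""
    return max(0, min(255, round(value * 255)))

a, b = 2.0, 4.0   # two different superwhite highlight intensities
exposure = 0.25   # knock the exposure down in post

# Float workflow: apply exposure in linear float, then quantize.
float_a = to_8bit(a * exposure)  # 128
float_b = to_8bit(b * exposure)  # 255 -> still distinguishable

# 8-bit workflow: both clip to 255 first, so the difference is gone.
lo_a = to_8bit(to_8bit(a) / 255 * exposure)  # 64
lo_b = to_8bit(to_8bit(b) / 255 * exposure)  # 64 -> detail destroyed
```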

-Everything in Blender is non-destructive. Got a nice set of blooms, glares, paint fixes, and so forth, and then realize you need to re-render for some other part of the image? No big deal: re-render, reload the file, and you’re good to go. The only real hiccup is that the Image node is inexplicably missing its own “reload” button, but that’s hardly the end of the world.

-Photoshop has some really weird alpha channel behaviors, mostly stemming from the fact that it doesn’t treat alpha as its own true channel in most cases, and it assumes all alpha channels are straight (unpremultiplied).
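For anyone new to the straight-vs-premultiplied distinction, here’s a minimal single-pixel sketch (values in 0.0–1.0; the helper names and numbers are mine, purely illustrative): in straight alpha the RGB stores the full color, while premultiplied alpha stores color already scaled by coverage, which is what the standard “over” operation expects.

```python
# Straight vs premultiplied alpha for one pixel, values 0.0-1.0.

def premultiply(rgb, a):
    """Straight -> premultiplied: scale color by its alpha."""
    return [c * a for c in rgb]

def over_premul(fg_rgb, fg_a, bg_rgb):
    """Porter-Duff 'over' assuming a premultiplied foreground."""
    return [f + b * (1 - fg_a) for f, b in zip(fg_rgb, bg_rgb)]

straight = [1.0, 0.0, 0.0]              # pure red, stored straight
alpha = 0.5                             # 50% coverage
premul = premultiply(straight, alpha)   # [0.5, 0.0, 0.0]

bg = [0.0, 0.0, 1.0]                    # opaque blue background
print(over_premul(premul, alpha, bg))   # [0.5, 0.0, 0.5]
```

Feeding straight RGB into an operator that expects premultiplied input (or vice versa) is where the fringing and haloing you see in PS composites tends to come from.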

-If it’s possible to non-destructively combine masks in PS, I’m not aware of it.
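In a node compositor, combining masks is just per-pixel math, one node per operation. A sketch of the usual combinations on 0.0–1.0 greyscale mask values (pure Python, function names are mine):

```python
# Common ways to combine two greyscale masks (values 0.0-1.0).
# Each would be a single Math/Mix node in a node compositor,
# and is non-destructive because the inputs are never overwritten.

def union(a, b):     return [max(x, y) for x, y in zip(a, b)]
def intersect(a, b): return [min(x, y) for x, y in zip(a, b)]
def subtract(a, b):  return [max(0.0, x - y) for x, y in zip(a, b)]

m1 = [0.0, 0.5, 1.0]
m2 = [1.0, 0.5, 0.0]

print(union(m1, m2))      # [1.0, 0.5, 1.0]
print(intersect(m1, m2))  # [0.0, 0.5, 0.0]
print(subtract(m1, m2))   # [0.0, 0.0, 1.0]
```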

-Photoshop is not node-based, which can make complex/branching trees needlessly confusing, especially when you end up with bizarre stacks of adjustment layers and clipping masks just to get some semblance of control over different layers.

-Photoshop does not support multi-layer EXRs (yet).

Wow, thank you for this detailed list.