So… is there any real reason to use Nuke/Natron with still images over Krita/Photoshop?

Yes. The node-based workflow is much more flexible for managing render passes and multiple elements, and you don’t need to redo all your edits if you have to re-render (just refresh your Read node). Also, I don’t trust Photoshop to handle floating-point or linear images correctly. The data stored in an .exr uses a totally different model from what Photoshop normally works with (scene-referred vs display-referred, float vs integer, linear vs non-linear…), and many of PS’s tools simply don’t work in 32-bit mode, which makes the whole thing pretty useless there.

Okay, I hear “32 bits per channel” tossed around in every discussion I find on this topic. Now give me a reason to use 32 BPC when 32-bit files aren’t nearly as compatible as 8 BPC files.

One concrete reason to use 32 BPC over 8 BPC on stills:

You’re working on a spherical HDR to use as a lighting environment for your 3D scene. 8-bit won’t do here; you need a high-dynamic-range image to successfully do IBL (Image-Based Lighting).
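
To make that concrete, here’s a toy numpy sketch (made-up pixel values standing in for a real .exr environment map):

```python
import numpy as np

# Toy environment map: dim sky plus a small, very bright sun disc.
# In scene-referred float, nothing stops a pixel being 2500x brighter
# than the sky around it.
sky = np.full((64, 128), 0.2, dtype=np.float32)  # ambient sky level
sky[30:34, 60:64] = 500.0                        # hypothetical sun value

# Total energy this environment would inject into an IBL render:
print("float energy:", sky.sum())                # ~9635, dominated by the sun

# Quantize to 8-bit (0..255 mapped back to 0.0..1.0):
sky8 = np.clip(sky * 255, 0, 255).astype(np.uint8) / 255.0
print("8-bit energy:", sky8.sum())               # ~1651, sun crushed to 1.0
```

After the 8-bit round trip the sun contributes almost nothing, so the lighting in your render would be completely wrong.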

Because you lose bit depth by working with 8-bit images. If you change the levels or adjust the contrast of an 8-bit image, you immediately lose data: you drop below the color resolution you need and expose yourself to issues such as banding.

Your final output will usually be 8-bit, but you need headroom to adjust and tweak before you compress everything down into an 8-bit colorspace.
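
Here’s a quick numpy sketch of that data loss (a synthetic gradient; we count how many distinct levels survive a levels stretch):

```python
import numpy as np

# A smooth gradient stored once in float and once quantized to 8-bit.
grad_f = np.linspace(0.4, 0.6, 1000, dtype=np.float32)
grad_8 = (grad_f * 255).round().astype(np.uint8)

# A strong levels adjustment: stretch the 0.4..0.6 range out to 0..1.
stretch_f = (grad_f - 0.4) / 0.2
stretch_8 = np.clip((grad_8 / 255.0 - 0.4) / 0.2, 0.0, 1.0)

# How many distinct levels survive once each result is written to 8-bit?
print(np.unique((stretch_f * 255).round()).size)  # 256 -> smooth
print(np.unique((stretch_8 * 255).round()).size)  # 52  -> visible banding
```

The 8-bit version only ever had 52 codes inside that range, so after the stretch its levels sit 5 steps apart: that’s the banding.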

When you comp in an app properly designed for such things, like Nuke, Natron, or Fusion, 32bpc files are just as easy to work with as 8bpc, if not more so (no need to linearize, for example). 8bpc can produce banding, and it massively limits your ability to adjust and remap exposure. When you do post-pro in 8bpc, you’re stuck with whatever white point you chose at render time (the white point being the scene value that ends up as 255 in your 8bpc image; it’s 1.0 by default). If you decide that was too low and you want to pull your highlights down, well, too bad! You’re stuck with 1.0 now; everything above it got clipped. There are other problems too; banding during color adjustments has already been mentioned, for example.
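
A toy numpy illustration of that clipping (invented pixel values):

```python
import numpy as np

# A scene-referred render with highlights above 1.0 (e.g. a bright window).
scene = np.array([0.1, 0.5, 1.0, 4.0, 12.0], dtype=np.float32)

# Float workflow: decide later the exposure was too hot, pull it down 2 stops.
print(scene * 0.25)               # [0.025 0.125 0.25 1. 3.] -- highlights intact

# 8bpc workflow: the white point of 1.0 was baked in at render time.
baked = np.clip(scene, 0.0, 1.0)  # 4.0 and 12.0 both clip to 1.0
print(baked * 0.25)               # [0.025 0.125 0.25 0.25 0.25] -- detail gone
```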

If you’re dealing with data passes like Z, normals, position, etc., floating point is almost essential. In float, Z-buffer and position values can simply be real-world units, like meters. Negative values are fine, and there’s no maximum, so you don’t need to worry about squeezing all the data into 0-255.
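
For example, a depth-based operation can consume the raw values directly; a small numpy sketch (the exponential fog falloff is just an arbitrary example):

```python
import numpy as np

# In float, a Z pass can simply hold distances in scene units (say, meters),
# and a position pass can hold raw world coordinates, negatives included.
z_pass = np.array([0.75, 12.0, 350.0, 10000.0], dtype=np.float32)  # meters
pos    = np.array([-3.2, 0.0, 14.7], dtype=np.float32)             # world XYZ

# A depth-based operation works directly on the real units -- here a made-up
# exponential fog density, with no near/far remap to 0..255 first:
fog = 1.0 - np.exp(-z_pass / 100.0)
print(fog)  # roughly [0.0075 0.113 0.970 1.0]
```

In 8-bit you’d first have to invent near/far bounds, squeeze everything into 0-255, and throw away negatives and anything past the far plane.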

Also, data in 8bpc is almost always non-linear by necessity, which is bad for compositing. Data passes like normals don’t even make sense in a non-linear encoding (distances don’t have a gamma), and light adds in a linear fashion, so either your comp doesn’t work quite right, or you have to convert back to a linear version of your colorspace (which has plenty of potential problems of its own), and avoiding banding there usually means the compositor has to promote the data back to 32bpc internally anyway.
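
A minimal sketch of why linear matters when adding light, using the standard sRGB transfer functions:

```python
import numpy as np

def srgb_to_linear(v):
    # Standard sRGB decode (the piecewise transfer function).
    v = np.asarray(v, dtype=np.float32)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    # Standard sRGB encode.
    v = np.asarray(v, dtype=np.float32)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

# Two lights, each contributing sRGB-encoded 0.5 to the same pixel.
a = b = 0.5

# Wrong: adding the gamma-encoded values directly blows out to pure white.
print(a + b)                                                  # 1.0

# Right: decode to linear light, add, then re-encode for display.
print(float(linear_to_srgb(srgb_to_linear(a) + srgb_to_linear(b))))  # ~0.686
```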

By the way, normally it’s fine to use half floats, which are 16bpc. They’re still floating point, so they have far more dynamic range than an int16 file (what you get when you set 16bpc in Photoshop, for example); they sacrifice precision instead of dynamic range. That suits visual data well, because your eyes have a massive dynamic range but aren’t particularly precise.
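
numpy’s float16 is IEEE half precision, effectively the same 16-bit format as EXR’s half, so it can stand in here for a quick comparison against uint16:

```python
import numpy as np

bright = 3000.0                                  # a scene-referred highlight

print(np.float16(bright))                        # 3000.0 (half reaches ~65504)
print(np.uint16(np.clip(bright, 0, 1) * 65535))  # 65535: clipped at "white"

# The trade-off is step size: around 1.0 a half float moves in steps of
# ~0.001, while uint16 steps are a uniform 1/65535 (~0.000015) -- finer,
# but locked inside a fixed 0..1 window.
print(np.spacing(np.float16(1.0)))               # 0.000977
```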

And if you need to deliver a DCP, 8-bit won’t cut it either (DCP picture essence is 12-bit X'Y'Z' JPEG 2000).