Textures and Alpha Modes, Oh my!

I am having trouble understanding the purpose of the recent change to the texture alpha mode setting as specified in the recent meeting notes:

  • Image texture alpha modes have been changed. The new Channel Packed mode can be used for channel packing as commonly used by game engines, where the RGB and Alpha channels contain different images that should not affect each other. The new None replaces the Use Alpha option. ( Brecht Van Lommel )

As it stands now, “Straight” and “Premultiplied” are useless…

  • “Straight” means premultiplied:
    • Messes up the color very slightly (more so in Eevee) if not using external textures with Cycles.
    • If the alpha socket is used, the color is un-premultiplied.
  • “Premultiplied” means premultiplied-ish:
    • Messes up the color if not using external textures with Cycles.
    • If the alpha socket is used, the color is un-premultiplied.
  • “Channel Packed” is what “Straight” should be:
    • What “Straight” was for Eevee before this change.
    • Except for packed textures in Cycles, where it is what “Premultiplied” should be.
  • “None” actually works correctly! Yay…


Here is the texture used:

None of this makes any sense to me. Why add another alpha mode when all you really need are two: straight and premultiplied? Blender already seems to know if the texture has an alpha channel or not, so ignore the alpha mode if there isn’t one, and use straight (the real kind) as the default when there is.

Secondly, for “Straight” and “Premultiplied”: why change the output of the color socket on the Image Texture node based on the connected/use state of the alpha socket? It’s confusing as hell and should not happen at all (this is why we have the alpha mode setting in the first place). Connecting the alpha socket causes the color to be un-premultiplied (via division), which loses precision at lower values and leaves black stuck at black. This can cause all sorts of artifacts for anything that is not a pure black-and-white mask.
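To illustrate the precision problem, here is a minimal sketch (plain Python, hypothetical values, not Blender code): premultiplying, storing the intermediate result at 8-bit precision, and then un-premultiplying does not round-trip at low alpha, and a fully transparent pixel loses its color entirely.

```python
# Hypothetical illustration: premultiply -> 8-bit storage -> un-premultiply
# loses precision at low alpha, and black stays black.

def premultiply(rgb, a):
    return tuple(c * a for c in rgb)

def unpremultiply(rgb, a):
    if a == 0.0:
        # The original color is unrecoverable: black stays black.
        return (0.0, 0.0, 0.0)
    return tuple(c / a for c in rgb)

def quantize_8bit(rgb):
    # Simulate storing the intermediate result in an 8-bit buffer.
    return tuple(round(c * 255) / 255 for c in rgb)

color, alpha = (0.8, 0.8, 0.8), 0.01
recovered = unpremultiply(quantize_8bit(premultiply(color, alpha)), alpha)
# recovered ends up noticeably off from the original 0.8
error = abs(recovered[0] - color[0])
```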

What is the reasoning behind all this?


The idea is to put four textures into one image file: one grayscale texture in each channel.
The change is not purely an alpha property.
That is probably an attempt to keep the amount of settings in the Image panel low, as it is obviously confusing.
Channel Packed should probably be an option that greys out the alpha setting.
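A tiny sketch of that packing idea (plain Python, map names are illustrative): four independent grayscale maps stored in the R, G, B, and A channels of one RGBA image, where no channel should influence another.

```python
# Illustrative channel packing: four independent grayscale maps stored
# in the R, G, B and A channels of a single RGBA image.
roughness = [0.50, 0.25]   # two-pixel grayscale "textures"
metallic  = [1.00, 0.00]
ao        = [0.90, 0.80]
mask      = [0.00, 1.00]

# Pack: each pixel holds one value from each map.
packed = list(zip(roughness, metallic, ao, mask))

# Unpack a single channel; the other channels must not affect it.
unpacked_ao = [px[2] for px in packed]
```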

Which has always been trivial to do with textures without an alpha channel. The problem has always been that with an alpha channel and the mode set to “Straight”, the image gets premultiplied anyway by Cycles. Before this change, “Straight” actually worked with Eevee but not Cycles. Now it does not work in Eevee either, as I mentioned in my post. Again, if setting the mode to “Straight” actually functioned like its tooltip says, we would not need another alpha mode:

This has been broken for cycles for years… most people have been getting around it by duplicating the image and leaving alpha enabled on one and disabled on the other, using the color socket from one and the alpha socket from the other. The other solution was to just make sure to never use a texture with an alpha channel. Storing random crap in each channel has always worked for pure RGB images, just never for alpha channels, because cycles has always ignored the alpha mode…

Well, it clearly is.

If you read my post you would understand. We don’t need a “channel packed” alpha mode, we just need “straight” and “premultiplied” to function as described, which very clearly, they do not.


My first thought:
“Ehm … OpenEXR Multilayer could do same, without constraining us to 4 monochrome layers.”

Are these first steps to enhance support for image multi-layering in Blender? That would be good news.

The goal is not to use the RGBA channels of the image as one texture.
So we don’t care at all how Eevee or Cycles treats the whole image with this alpha mode.
The texture is just one channel.
The user is supposed to use the outputs of a Separate RGB node or the alpha output as the texture, not the Color output of the Image Texture node.

The fact that “Straight” is not giving a good result is another problem. It is listed as a bug and is treated as a bug.
I don’t say it is unrelated to the changes that introduced this new channel mode.
But the existence of this new Channel Packed mode does not necessarily imply that the bug cannot be solved without removing it.

The goal is to be able to do anything you want with *RGB* and NOT have blender screw it all up because *A* exists. This is what “straight” alpha is supposed to do and does do in all other software.

Again, the user is free to do anything they want with RGB… their need of a Separate RGB node or not is irrelevant.

Unfortunately, going further back in time to 2014, it is claimed to be intended behavior: https://developer.blender.org/T38582

That of course makes no sense, given its definition, but hey premultiplied is also busted. Winner, winner, chicken shitter.

All we need is for “Straight” to function like “Channel Packed” on three of the four demo images I posted, and for “Premultiplied” to function like “Channel Packed” on the Cycles packed-image one. Do that and “Channel Packed” is no longer needed.

These alpha modes are a total fuster cluck, and if “channel packed” is the new alpha mode hotness, it should be made the default mode for all new images, as its output is obviously more desirable than the other options (if it gets fixed for cycles packed images that is, as in: straight instead of premultiplied… lol). Highly ridiculous… HONK HONK.


The way it works now in Eevee is compatible with how Cycles was designed to handle alpha, which matches usage of renderers like Renderman and Arnold in production. That is, texture interpolation happens in scene linear space with premultiplied alpha.

The reason for this is accuracy. When you see an object far away and your pixel is covering some area of the texture, you do texture filtering over that area. The result should be as close as possible to doing the entire render at high resolution and then scaling it down. And if that’s the goal, texture interpolation in scene linear color space and with premultiplied alpha is the only correct solution.
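To make that concrete, here is a minimal numeric sketch (plain Python, not Blender code) of filtering two texels: one opaque white, one fully transparent with black stored in its RGB. Averaging the straight values bleeds the transparent texel’s black into the result; averaging premultiplied values does not.

```python
def lerp(a, b, t):
    return a + (b - a) * t

# Straight RGBA texels: opaque white next to fully transparent (RGB black).
white = (1.0, 1.0, 1.0, 1.0)
clear = (0.0, 0.0, 0.0, 0.0)

# Filter in straight space, then composite over a white background:
r_straight = lerp(white[0], clear[0], 0.5)            # 0.5
a          = lerp(white[3], clear[3], 0.5)            # 0.5
over_white_straight = r_straight * a + 1.0 * (1 - a)  # 0.75 -> gray fringe

# Filter in premultiplied space (these texels are already premultiplied,
# since their alphas are 0 and 1), then composite "over":
r_premult = lerp(white[0] * white[3], clear[0] * clear[3], 0.5)  # 0.5
over_white_premult = r_premult + 1.0 * (1 - a)        # 1.0 -> stays white
```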

Game engines have different requirements and we’ll iterate more on the code to make those use cases work better. Some game engine practices like channel packing are very much a hack to work around file formats and GPUs not natively supporting multilayer images in an efficient way. And that’s fine, we will accommodate too. But Blender is designed to work for production rendering too, so it takes some work to find the right way to handle both.

See this bug report as well:


Some movement:
https://lists.blender.org/pipermail/bf-blender-cvs/2019-May/123851.html and at …/123852.html

Recently, my coworkers and I have also been having trouble related to alpha textures.
Since Brecht appears to have given up on me in the dev forums, maybe someone here has something to say about this.

This image shows a simple quad with a material that layers two textures on top of each other: white dots on a white background. But as you can see, I’m getting black outlines around the dots, since Blender merges the black-and-white alpha texture with the color channels before displaying them. This happens with all renderers in all display modes, with the sole exception of Cycles in Rendered mode.
(But obviously, nobody is going to paint their textures in Cycles Rendered mode.)

Am I just too stupid to figure out what I need to do to have my material displayed correctly, or is this really some sort of bug?
And does or should any of the new image modes fix this?

If, in your dots texture, the transparent pixels have a black RGB color: the result of the mix node is coherent.
If they are supposed to be white: that’s a bug.
If a user creates an image where a color is stored in the RGB channels, they should obtain this color and not pure black instead.
Any other result should be considered a bug.

The color in the semi-transparent areas is pure white. No black or any shade of gray should be visible. Not even if it’s black outside of the visible area.

The issue has now been reported, and the devs acknowledged it as a high priority bug. 🙂 I’m so glad it was not just my own incompetence.

Sorry man, I still don’t get the reasoning here. I get what you are saying about rendering and *literal* alpha. I am trying to understand this from an artist’s perspective (modes, inputs, and expected results).

Let me break this down into some more specific questions (for anybody to answer):

  1. When an artist selects “Straight” alpha, do they expect the first (A) or second (C) result from the color socket? Why is the first (A) result correct over the second (C)?
    a_premultiplied c_straight

  2. When an artist selects “Premultiplied” alpha, do they expect the first (B) or second (A) result from the color socket? Why is the first (B) result correct over the second (A)?
    b_premultiplied_with_funky_color a_premultiplied

  3. If an artist selects “Straight” or “Premultiplied” alpha AND uses the alpha socket, do they expect the output of the color socket to change (A > D or B > E)? Why change the output of a socket if one of its siblings is used?
    a_premultiplied > d_straight_using_alpha
    b_premultiplied_with_funky_color > e_premultiplied_using_alpha
    To me this is the most confusing thing in all of this. Again, it seems rather odd to alter a socket’s output based on its sibling’s use. Worse, it does this to all instances of that texture node in the entire material. Secondly, because the color seems to be un-premultiplied if the alpha socket is used, the values get less accurate near zero and zero stays zero. If it’s trying to give an un-premultiplied result, why premultiply it in the first place?

  4. If an artist selects “Straight” or “Premultiplied”, are they expected to use the alpha socket or not?

I know all I am presenting here is a simple gradient texture, but somewhere down the line I would love to see example use cases for each alpha mode, and renders showing why it is correct over the other modes. This is something that the blender manual would benefit from as well.

Also: Yay, I was able to reply before my internet dies again!


The alpha mode setting indicates the alpha storage in the file, which Blender will convert from when loading, or convert to when saving. That means if you are loading a PNG file with straight alpha, setting it to premultiplied is never correct and no useful output should be expected from it.

The color space setting works the same way: all rendering happens in scene linear color space, and the setting only indicates the color space to convert from.
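A simplified sketch of that convert-on-load model (plain Python; the mode names and behavior shown here are an interpretation of the description above, not Blender’s actual implementation):

```python
# Hypothetical model: the alpha mode describes the file's storage;
# after loading, rendering always sees premultiplied values.
def load_pixel(rgba, alpha_mode):
    r, g, b, a = rgba
    if alpha_mode == "STRAIGHT":
        return (r * a, g * a, b * a, a)    # premultiply on load
    if alpha_mode == "PREMUL":
        return rgba                        # file already premultiplied
    if alpha_mode == "CHANNEL_PACKED":
        return rgba                        # channels left untouched
    if alpha_mode == "NONE":
        return (r, g, b, 1.0)              # ignore stored alpha
    raise ValueError(alpha_mode)
```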

Makes sense, but that still does not explain the alpha/color socket shenanigans.

From what I can tell the most common usage of an alpha texture is to use the color output with the alpha driving the blend factor of a shader mix or color mix node. For example:

Well, given that “straight” and “premultiplied” both un-premultiply (still don’t know why) the color sockets if the alpha socket is used, anybody who needs to use the alpha socket might as well use the “channel packed” mode, as it is clearly more accurate. This is what I have been doing for years (separate color and alpha textures), and have yet to experience problems with this method…

Does anyone have a demo showing the failure of “channel packed” over the “straight” or “premultiplied” alpha modes?

You are supposed to add another shader node (Principled, Diffuse, Emission, etc.) between the color output socket and the shader input socket.

No, not necessarily. Using the color output directly is the same as using an Emission Shader node. Blender handles that internally for you.


I was not trusting EEVEE. But you are right, it works.