Composite node Alpha socket help?

I cannot get the Composite output node to honor a supplied Alpha channel (threading the input socket on the node). Help! I have tried plugging in a Value node, the alpha channel of another image, the output of a Set Alpha node, you name it; everything except, of course, the option that actually works. I'm sure one of you kind gents can tell me which one that is.

Yes, I am rendering out to a PNG format, checking my work with GIMP.

I agree - some expert explanations of these things would be welcome :slight_smile:

And in the same breath as the Alpha output and its modifiers:
The explanation of the Z output of images is also something an expert should look at:


Do you mean that

  1. it physically / graphically won’t connect the line from one node to another


  2. they’re connected but not working

If 1), I’ve found that zooming in / panning around seems to “wake it up”.

If 2) … I don’t know :smiley:

Post the .blend if you want; I have a Win XP CVS 2006-11-29 build here and can try running the .blend.

Are you running 2.42a? If so… there have been numerous fixes and changes to the nodes in CVS; it might be worth trying it there.


Hey Werner, what’s happening? Yes, I was checking my work in the wiki, preparing some mix examples, and can’t get the darn thing to work. I have not tried to supply a manually generated Z channel yet; I’m still working on documenting the basics! Although I know that EXR-format images honor the Z, as per my latest DOF rewrite. I don’t know if the Composite output node honors a supplied Z; I suspect not.

Mike_S: they connect, but don’t communicate. I really was trying to take the alpha channel from any picture (PNG) or RenderLayer node and use it as the Alpha channel to the Composite output node. Yes, 2.42a. ANY example will do; I can replicate. Nothing I try works, even using a SetAlpha node, etc.

For example, render a solid cube in the middle of the camera view. The cube is alpha 1, and the background is 0. In the Node Editor, add an Image input node and load up a JPG. Connect its Image output to the Image socket on the Composite node. Connect the RenderLayer Alpha socket to the Alpha socket on the Composite node. The Compositor should use the supplied Alpha to override the Image Alpha and thus render only the portion of the JPG that ‘overlays’ the cube. But no, it doesn’t… so the question is, what does the Alpha input socket on the Composite node do?
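For what it’s worth, the behavior I was expecting can be sketched in a few lines of plain Python. This is just illustrative per-pixel math, not Blender’s actual code; the function name `override_alpha` and the toy pixel values are made up for the example:

```python
# Hypothetical sketch of what I *expect* the Composite node's Alpha
# input to do: the supplied matte replaces each pixel's own alpha,
# so only pixels where the matte is 1 remain visible.
def override_alpha(image_rgba, matte_alpha):
    """image_rgba: list of (r, g, b, a) tuples; matte_alpha: list of floats."""
    return [(r, g, b, m) for (r, g, b, _a), m in zip(image_rgba, matte_alpha)]

# Two JPG pixels (fully opaque red-ish), and a matte where the cube
# covers pixel 0 (alpha 1) but not pixel 1 (alpha 0):
jpg = [(0.8, 0.2, 0.1, 1.0), (0.8, 0.2, 0.1, 1.0)]
matte = [1.0, 0.0]
print(override_alpha(jpg, matte))
# -> [(0.8, 0.2, 0.1, 1.0), (0.8, 0.2, 0.1, 0.0)]
```

If the socket worked as I assume, the JPG’s colors would survive everywhere but only show through where the cube’s matte is 1.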

AND, I would think the Alpha input socket on the SetAlpha node would accept an Alpha channel from a RenderLayer or a PNG Image node’s Alpha output socket, but no, it doesn’t work either… for me. Probably because I don’t have some button clicked… Thanks for anyone’s help!

I think maybe it’s a bug. In the nodes window, the Composite node’s preview area DOES show the jpg being combined with the render layer.

  • with the render layer’s Alpha output going to the Composite’s Alpha input, my Render layer and Image node appear to be equally mixed (only in the Composite preview).

  • with the render layer’s Image output going to the Composite’s Alpha input, my Render layer and Image node appear to be equally mixed (only in the Composite preview), but both appear to be something like 50% alpha (semi transparent)

Doesn’t work for me either. BUT connecting the Image output from the RenderLayer to the SetAlpha node works for me.

Btw, thanks very much for documenting all of this, but this is really silly, sad, and annoying, and it’s the same old Blender documentation (or lack thereof) from the programmers, with the users playing the “I wonder how this works” game. It’s only because the software is “free” that I guess we torture ourselves. (It isn’t really free if you have to spend an extra XXXX hours learning/scraping for information on basic functionality.)


I am a user from the C-key days, and Blender is open because I helped pay for it. So no, it ain’t free, but nothing is. In my two decades of software experience, user documentation is the last thing written (actually, training material is really last), and frankly it’s best left to laymen to write. The programmers do document pretty well, actually; you just have to know where to look for it.

I usually get the new release, see all the new goodies :), try to use them :confused:, get interested, get frustrated :mad:, then get challenged to figure out how it works, put it to practical use or dream up things to do with it, and then write the UM :cool:. So yes, it’s us users who have to fend for ourselves. BUT we have something like 40,000 active users worldwide. Seriously, I’ll commit something to the wiki and 15 minutes later there’s a post from some guy in Australia to this forum… pretty amazing, and desperately calling out for easily understandable, non-technical, practical documentation. Which I am helping to write. I guess that is my role, and I’m happy to play it. But it’s a never-ending battle. And if you think we users have it bad, imagine being a new programmer wanting to help out with the code…

Heh, heh, I’ve done some programming, and actually made some very minor changes to the Blender gui (only on my own machine). I chose to try the gui for programming, because

  1. I thought it would be one of the more simple areas of the program to start with

  2. It’s the only area of the program that I found some internal technical documentation on :slight_smile:

Anyway, back to the nodes alpha… did you come to the same conclusions as I did? Do you think it’s a bug?

edit … Hmm was that big “agreed” there before … ? :confused:


It all works as expected here…:wink: both with 2.42a and current cvs.

It does set the alpha channel, but it does not change any of the other color channels. In other words, it sets the transparency of each pixel, but all of the other color info from your original image is still there. To see the result in your UV/Image window, all you have to do is hit either the “draw image with alpha” button (it doesn’t display alpha properly, IMHO) or the “draw alpha only” button. Blender’s external render window does not show alpha at all, but if you hit the A key in the render window, you will at least see the alpha channel.
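To illustrate that point with a minimal plain-Python sketch (this is just the idea, not Blender code; the function name is mine):

```python
# Setting a pixel's alpha leaves its RGB untouched: all the color
# info is still there, so a straight render of the RGB looks the
# same -- the transparency only becomes visible once the image is
# shown "with alpha" or composited over a background.
def set_alpha(pixel_rgba, new_alpha):
    r, g, b, _old = pixel_rgba
    return (r, g, b, new_alpha)

green = (0.0, 1.0, 0.0, 1.0)          # fully opaque green pixel
print(set_alpha(green, 0.5))          # -> (0.0, 1.0, 0.0, 0.5)
```

The RGB channels come out exactly as they went in; only the fourth value changes, which is why the render still looks solid green unless you view the alpha.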

As far as checking out your PNG with GIMP… it’s so obvious that I really hate to ask you this… did you toggle RGBA in the Format tab before you saved your PNG?

In the first image below, the green scene is just solid green with alpha = 1 and Suzanne is ~50% transparent; the second image is the saved PNG (heh, the thumbnail is bad, just click on it anyway).


Yes, I had to embarrass myself at the feet of THE master (and I think we all know who THAT is). I was expecting to see the background X in the output render (stupid me), confusing transparency with saturation. Thank you, paprmh. And yes, omg, you caught me: the first time I saved the PNG it was RGB only. More searching, trying, then I saw the RGBA button and with great relief enabled it (like, why would NOT using all of an image format’s channels be the default?), rendered, reopened the recent file in GIMP, saw that the alpha was still 1, got frustrated, confused, embarrassed… yada yada (see the missing step? F3! to re-save the image output after enabling RGBA).

Then, when dragging over the render output as GOD HIMSELF subtly suggested I do in the bug log, I dragged over the part of the image THAT WAS 1, and with my luck did not realize that the image sizing is taken from the top node, so the bigger alpha test map I was using was centered and its non-transparent portion did not overlay any of the smaller input… let’s just say it’s been a long day. So I took out my frustrations and made him a flag and put it in the wiki, which I will use as my image the next time I submit a bug, and hope it humors him. And I wrote some wiki stuff on splicing segments together.

Now I have to try and remember why the heck I started with the whole alpha channel thing in the first place. Was it something we were talking about?

Doesn’t it just kill you that there are so many places in this program where you can go wrong? I routinely make the same rookie mistakes, cuss, spit, snarl, fix it and go right back just to do it all over again. This usually happens to me with renderlayers, so I get a good laugh when I see posts like this. Thanx Bro.

Maybe you could document what “as expected” means :smiley: .

It’s also confusing to me what the Alpha output of the render layer / Image node does.

Why does feeding the Image output of a RenderLayer/Image node to the Alpha input of a SetAlpha node work, while feeding the Alpha output socket to the Alpha input socket does not work? :confused:

And what is/can the alpha input socket on the Composite node used for?


As for what the Alpha output does: if you had, say, a glass image and wanted to make it totally opaque, you would thread the Alpha output to the dreaded SetAlpha node and set Alpha to 1. When you render it now against a background, the background will not show through.

Feeding the Alpha output to an Alpha input does work. You can use the Alpha channel of one picture to override the alpha of a base picture; that’s what SetAlpha does (and yes, I was trying to beef up the wiki when I started this whole fiasco). For example, render a ball in one scene, and a cube in another scene positioned so that it would partially overlap the ball if they were in the same scene. Feed the Image of one to the Image input of the SetAlpha node. Feed the Alpha of the second to the Alpha input of the SetAlpha node. Feed the resulting image to a Composite node and render. As you click-drag over the render, you will see portions of whitespace (where the other object is NOT) have an alpha of 1, while portions of the object that don’t overlap appear solid but in fact have an alpha of 0.

The render shows you the colors, not their transparency. You can only see transparency when the image is in front of something, if that makes sense. Hope so. If not, we’re both out of luck.
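A rough plain-Python sketch of that idea, using the standard “alpha over” blend with straight (un-premultiplied) alpha. This is illustrative math only, not what Blender’s Alpha Over node literally executes (Blender’s compositor works with premultiplied alpha internally):

```python
# Why transparency is invisible until something sits behind it:
# a pixel's RGB is drawn as-is regardless of its alpha; only when
# blended over a background does the alpha actually change colors.
def alpha_over(fg, bg):
    """fg, bg: (r, g, b, a) tuples with straight alpha."""
    fr, fgn, fb, fa = fg
    br, bgn, bb, ba = bg
    out_a = fa + ba * (1.0 - fa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    def blend(f, b):
        return (f * fa + b * ba * (1.0 - fa)) / out_a
    return (blend(fr, br), blend(fgn, bgn), blend(fb, bb), out_a)

# 50%-transparent red in front of opaque blue: the red finally
# "looks" transparent because the blue shows through.
print(alpha_over((1, 0, 0, 0.5), (0, 0, 1, 1.0)))
# -> (0.5, 0.0, 0.5, 1.0)
```

With no background at all, that same red pixel would still be drawn as pure red, which is exactly why the render alone can’t show you the transparency.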