compositing blur/glow with a transparent background

I’ve been reading through this thread to try and get my head around the whole alpha/glare issue, and it seems to me that most of the best posts just state a workaround for something that really should be implemented in the nodes already!

To my mind, if I’m using the premultiplied alpha method, adding glare should just work in the same way as all the various halos and other translucent things. Isn’t it just a bug in the glare node code that it doesn’t take alpha into account in its calculation?
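For what it’s worth, the argument can be put in numbers. This is a plain-Python sketch with single pixels and made-up values: the standard premultiplied alpha-over blend, applied to a glare spill that is written to RGB only versus to RGB and alpha together.

```python
# Premultiplied alpha: the RGB channels are assumed to be pre-scaled
# by alpha, so alpha-over is a simple linear blend.

def alpha_over(fg, bg):
    """Composite premultiplied fg over bg: out = fg + (1 - fg_alpha) * bg."""
    r, g, b, a = fg
    inv = 1.0 - a
    return (r + inv * bg[0], g + inv * bg[1], b + inv * bg[2], a + inv * bg[3])

background = (0.0, 0.0, 1.0, 1.0)        # opaque blue

# A glare spill of 0.3 landing on a fully transparent pixel,
# written to RGB only (roughly what the Glare node does today):
rgb_only = (0.3, 0.3, 0.3, 0.0)

# The same spill written to RGB *and* alpha (what this post argues for):
with_alpha = (0.3, 0.3, 0.3, 0.3)

print(alpha_over(rgb_only, background))
print(alpha_over(with_alpha, background))
```

When the spill never reaches the alpha channel, any downstream tool that reads alpha as coverage (a video editor, for instance) treats those glow pixels as fully transparent and throws the glow away.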

I’ve got something I want to export and use in a video editor, so I really need the glare on the image, with the alpha defining the transparency of that glare at any given point… but without some really significant fudging it seems I can’t actually get there. It’s frustrating, because I spent ages getting the scene looking as good as I can in Blender, only to be let down by the nodes I’m using… The best option currently seems to be to drop the Glare (and Lens Distortion) nodes I’m using - but it feels like such a shame!

Any thoughts about whether this could be fixed? Or will we be fudging forever more? Or am I just missing something?!

Hey, I’ve got a similar problem here.
The alpha/transparent blur is OK; the problem is that parts of the image that shouldn’t be blurred end up blurry…



You can see in the picture above - in the first Blur node I get the correct image (the one above, masked by the one below, and blurred), but after mixing with the non-blurred one… the non-blurred part becomes blurry…
When I replace Mix with Alpha Over, I get strange edges (as with unsharp masking) visible on the cubes where there should be a sharp edge.

That doesn’t look quite right to me, as you have the image being blurred somewhere along the way…

But I repeat - this is all just a fudge anyway! Is there a place to report this (the lack of alpha support on glare, etc.) as a bug? Or to make it a feature request? I don’t see this issue being beyond the developers - I guess it’s just a question of the will to do it…

A well thought out submission to the 2.6 bug tracker might help.

C’mon guys, what bug do you see here? There are no bugs, just a lack of understanding. Please go through my earlier post in this thread again and try to understand what I’m talking about there.
I don’t explain how to “trick” blender, I don’t show “workarounds”. I show how things work, I show how we should understand RGB and A. Those things are GENERAL. They work the same no matter what software you use. The issues that you are facing are NOT Blender specific.
Let me be clear: I’m not some Blender fanboy who says “Blender is the best, I’ll kill anybody who doesn’t agree”. I see many flaws in Blender, but not in this case. Here everything is fine.
I will not even try to explain all this again, because I already have.
I could (and probably will eventually) go through the setup made by Kalia and say why things go as they do.
@ Kalia: Could you upload the image that you used in the input image node? (The one that is supposed to show ALPHA)
This is important, because I need to know what color values you have in “transparent” areas.
If you don’t have it, just tell me how EXACTLY you created this.

Hey Bartek,

Thanks for your response - but perhaps I didn’t make my point clearly. I’m not saying this is a ‘bug’ per se, but perhaps an unimplemented feature which would save us from the hoop-jumping you’ve demonstrated.

I appreciate that you don’t consider these ‘workarounds’, but I’m afraid I disagree. Nodes are awesome, and are crazy clever for letting you do things that aren’t in the ‘normal’ workflow… but I would argue that my expectations of the functionality are not unwarranted! Why would the alpha channel need to be stripped out when adding a glow or lens distortion?

I have found a much less cumbersome workaround which will do for now. It basically involves running the processed image through a Math node set to Power, with a value of 0.1-0.3.
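For anyone wondering why a fractional power helps: raising values in the 0-1 range to a power below 1 pushes small positive values up toward 1.0 while leaving 0.0 and 1.0 fixed, so faint glow fringes become much more opaque. A quick plain-Python illustration (the 0.2 exponent is just one example from the 0.1-0.3 range mentioned above):

```python
# x ** 0.2 on the 0-1 range: endpoints are unchanged, but small
# positive values are lifted sharply, which is what makes faint
# glow pixels survive as visible (more opaque) alpha.
for a in (0.0, 0.01, 0.1, 0.5, 1.0):
    print(a, round(a ** 0.2, 3))
```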

I’m just curious why this isn’t handled properly in the nodes in question… Like I say, not a ‘bug’ as such, but something that could be better!


What do you think?

Mike

File attached.
I created it by making an empty image in the Python console, then going pixel by pixel through the scene’s z-buffer. Where the value passed a threshold (more or less than 0.5, I guess, as the range is 0.0-1.0) I set all four values (RGBA) to 1.0; otherwise I set the alpha to 0.0 (though that’s unnecessary AFAIK, since the initial value of every empty image pixel is 0.0).
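The loop described above boils down to thresholding the z-buffer into a flat RGBA list. Here is a plain-Python sketch of that logic (the function name and the 0.5 threshold are illustrative; in Blender the resulting list would be assigned to the `pixels` property of an image created with `bpy.data.images.new`):

```python
def mask_from_zbuffer(zbuf, threshold=0.5):
    """Build a flat RGBA pixel list from a flat z-buffer:
    opaque white where the z value exceeds the threshold,
    fully transparent (all zeros) elsewhere."""
    pixels = []
    for z in zbuf:
        if z > threshold:
            pixels.extend([1.0, 1.0, 1.0, 1.0])  # all four values (RGBA) = 1.0
        else:
            pixels.extend([0.0, 0.0, 0.0, 0.0])  # alpha (and RGB) = 0.0
    return pixels

zbuf = [0.2, 0.9, 0.6, 0.1]   # made-up z-buffer values in 0.0-1.0
print(mask_from_zbuffer(zbuf))
```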

Bartek, there’s one piece to your explanation earlier that I don’t get. I have a BG that takes up the whole frame, and I have smaller pieces that fit in and around the background. When I run my layers through glare nodes and do the whole ((IND+DIR)*COL) deal to get a full image, the alpha is completely gone. I have simple gold-rimmed text that sits on a stage. I use Material ID to isolate glare on the gold highlights. The final result for the text looks fantastic, but there is no alpha at all remaining at the end of the node tree that I created. So, when I AlphaOver with the BG stage, it doesn’t do me any good. When I use the “Separate RGBA” and pull the alpha, it just cuts off the glow beyond the alpha.
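As a side note on why the alpha disappears at that step: the ((IND+DIR)*COL) combine is plain per-channel arithmetic on three-channel passes, none of which carries an alpha, so there is nothing for the result to inherit. A single-pixel sketch (made-up values):

```python
# Combining diffuse passes: out = (indirect + direct) * color, per channel.
# None of these passes has an alpha channel, which is why the alpha is
# gone after this step and has to be re-attached (e.g. with Set Alpha)
# from the render layer's own alpha output.
def combine_passes(indirect, direct, color):
    return tuple((i + d) * c for i, d, c in zip(indirect, direct, color))

ind = (0.25, 0.25, 0.25)
dir_ = (0.5, 0.5, 0.25)
col = (1.0, 0.5, 2.0)
print(combine_passes(ind, dir_, col))  # (0.75, 0.375, 1.0)
```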

I’m just not seeing a way around this issue. I know it’s not Blender, since I ran into the same issue in After Effects - it’s just my own lack of understanding of compositing.

OK, looking at it closer, now I see that the render passes themselves don’t have any alpha channels.

So how best to work with them when dealing with render layers and the need to overlay the passes for a final shot?

http://www.pasteall.org/blend/21564

There’s a .blend file with a simple example of what I’m talking about. How do I make that cube have a glow or blur or whatever that extends past its alpha edges and then is placed over top the wall behind it? I can make this example work with just the images, but as soon as I have to use an alpha or a Material or Object ID (which is also basically an alpha), it doesn’t work anymore.

Here is my solution: take the original render layer’s image and its alpha and subtract them, then blur the result. You basically end up with a blurred glow alpha channel. Add this to the original alpha channel, clamped. Then use Set Alpha with the original image blurred with the same settings. Seems to work.
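A plain-Python sketch of this recipe, assuming hypothetical one-dimensional “images” and a box blur standing in for the Blur node (the function names are mine, not Blender’s):

```python
def box_blur(values, radius=1):
    """Simple 1-D box blur standing in for the compositor's Blur node."""
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def glow_alpha(image, alpha, radius=1):
    """The recipe above: blur (image - alpha), add to alpha, clamp to 1.0."""
    diff = [max(0.0, v - a) for v, a in zip(image, alpha)]
    blurred_diff = box_blur(diff, radius)
    return [min(1.0, a + d) for a, d in zip(alpha, blurred_diff)]

image = [0.0, 0.0, 2.0, 0.0, 0.0]   # bright glare spike (HDR value)
alpha = [0.0, 0.0, 1.0, 0.0, 0.0]   # alpha only covers the object itself
print(glow_alpha(image, alpha))
```

The bright spike pushes the blurred difference past the original alpha edge, so the new alpha ramps off smoothly instead of cutting the glow off at the silhouette.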


Even if you don’t see it on an alpha background, I would imagine that the image will eventually be placed against a background of some color, at which time you will see it.

Read Bartek’s first post over and over again until “the little light-bulb turns on in your head.” Eventually, it will do so.

“Do the math.” And, as you do so, “keep your eye on the ball.” You have two kinds of information here: RGB and alpha. There are many different ways that this information can be processed … premultiplied or “sky” being just one example … and the controls for using them are in several different places. But, in the end, no matter what, “a certain production-line sequence of mathematical transformations is being applied to several large raster grids of digital information.” You need first to understand them, by diving as necessary into the documentation surrounding Bartek’s original post (and mine), so that you can, for your own particular project and situation, choose how to employ them.
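Concretely, “doing the math” starts with knowing which representation a buffer is in. In straight (unassociated) alpha the RGB is stored at full intensity and alpha scales it at composite time; in premultiplied alpha the RGB has already been scaled. A small sketch of the two conversions (plain Python, example values):

```python
def to_premultiplied(r, g, b, a):
    """Straight -> premultiplied: scale RGB by alpha."""
    return (r * a, g * a, b * a, a)

def to_straight(r, g, b, a):
    """Premultiplied -> straight: divide RGB by alpha (undefined at a == 0)."""
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)   # the RGB information is unrecoverable
    return (r / a, g / a, b / a, a)

# A half-transparent red pixel in straight alpha:
straight = (1.0, 0.0, 0.0, 0.5)
premul = to_premultiplied(*straight)
print(premul)                 # (0.5, 0.0, 0.0, 0.5)
print(to_straight(*premul))   # (1.0, 0.0, 0.0, 0.5) - round-trips back
```

Feeding a premultiplied buffer into a node that assumes straight alpha (or vice versa) is exactly the kind of mismatch that produces the dark fringes and vanishing glows described in this thread.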

I do not mean to be patronizing here. We have a saying in programming circles, “TMTOWTDI = There’s More Than One Way To Do It.” (Affectionately known as “Tim Toady.”)

It can be very difficult if not impossible to “eyeball” this, because what you’re trying to control is not really “visual” at all … it is a multi-stage data processing pipeline. You are in effect “writing a computer program” when you assemble a node network. Use the tools available to you to measure the numeric values that are passing through the system at some selected point in the image so that you can very-clearly see what Blender is actually doing. There are many ways to do it … but it’s really not very possible to visually diagnose a problem.

There are no “Blender bugs” here. (Not here…) Instead, there’s a task of great complexity that you’re addressing through the use of tools that each have many different options: options that are intended, in certain common situations, to produce a particular useful effect. You must understand what inputs each tool expects to take, what it does with those inputs, and, in your actual node network, what inputs the tools are actually receiving. You will debug this, as surely as you would with any complex computer program. (The “complex computer program” in this case being “your node network,” not Blender itself.)

No, there are no bugs here. It was INTENDED to take a rocket scientist to produce what would be the obvious chosen result for any given scene. It’s like designing a car and then handing someone a box of parts for the window system, so the driver has to assemble the automatic windows themselves. They give you all of the parts in a box and, sure, you can assemble it 20 different ways, and maybe one person out of a million won’t actually want windows that go up and down when you press a button, so it allows for flexibility in design and result. But everyone else just wants to press the button and get the obviously desired result.

This is the perfect example of programmers vs. average users. Programmers want to create a “powerful” system with a lot of different options, and they don’t care how complicated it is to get the result, because they designed it and totally understand it. Most users, meanwhile, just want the same simple result and don’t want to melt their brains figuring out something that should simply be a checkbox. I don’t know why it has to be such a ridiculous process to just get the same damn image, minus the background. So who’s really failing to understand things here, the user or the programmer? I mean, really. Who wants to go to the trouble of incorporating all of these glares and glows only to have them look totally stupid without the background? Doesn’t that defeat the purpose of having them in the first place? This is not a bug - it’s just bad design. You can give all the explanations you want, but it doesn’t change the fact that this is just not pragmatic or user friendly.