Video banding filtration using Compositor nodes

Prepping animations for video can be frustrating if the images contain smooth gradients, because the unavoidable codec compression schemes tend to create banding in the gradients due to the lossy nature of the compression – the fine gradations are turned into clumps of similar or equal values that are perceived as bands when viewed.

I ran into this a lot with the images for Kata – the stage lighting used has some very smooth gradients that look great in the PNG renders but tend to look noticeably banded even with high-quality settings in the codecs. It’s the nature of the beast that a codec will trash a smooth gradient.

In prepping the files for the release of Kata, I developed a means to work around the problem and reduce banding to nearly imperceptible levels:

The top image was made from a video frame of Kata that uses no filtration. The bottom uses the noodle described below. In each a section has been enhanced to emphasize the banding and make the difference more visible. In the top frame the banding is subtle but quite visible – it is nearly completely gone on the filtered frame. Even subtle banding becomes very visible during viewing because all the bands are set in motion, and the human eye detects subtle motions very easily, an evolutionary leftover from our hunter/huntress-gatherer days.

The method I developed introduces a measured, adjustable amount of noise into the gradients. Because it is random noise, its inclusion in the gradients is imperceptible if kept to a minimum. As a graphic designer for print I used a similar technique to reduce banding in digitized and digital photography, so I adapted it for video projects.

The purpose of adding the noise is to randomize the values in the gradient over a very small range, not perceptible except when magnified, but enough to prevent the codec compression from creating bands of same-value hues in the compressed frames.
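The effect of that small randomization can be sketched numerically. This is a toy illustration of the principle, not the actual node setup – a coarse quantizer stands in for the lossy codec:

```python
import numpy as np

# Toy illustration: a coarse quantizer stands in for the lossy codec.
# Quantizing a very smooth ramp produces long runs of identical values
# (bands); adding ~one quantization step of random noise first breaks
# those runs up into fine grain.
rng = np.random.default_rng(0)

gradient = np.linspace(0.0, 1.0, 1024)    # extremely smooth ramp
levels = 16                               # coarse quantizer ~ "the codec"

banded = np.round(gradient * (levels - 1)) / (levels - 1)

noise = rng.uniform(-0.5, 0.5, gradient.shape) / (levels - 1)
dithered = np.round((gradient + noise) * (levels - 1)) / (levels - 1)

# Band edges = places where the quantized value changes.
edges_banded = int(np.count_nonzero(np.diff(banded)))
edges_dithered = int(np.count_nonzero(np.diff(dithered)))
print(edges_banded, edges_dithered)       # a few big steps vs. many tiny ones
```

The banded ramp has only a handful of large, geometrically sharp steps; the noisy one trades them for many small, scattered transitions the eye reads as smoothness.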

The noodle, and some explanation:

This screenshot shows the original PNG image with an extremely smooth gradient from the lighting scheme used. In this case the entire movie was loaded as an image sequence, using the final-version frames – all effects, color correction, and transitions in place. Adding the noise is a step made just before video compression.

The noise plate was made at the same frame size as the movie, just a simple application of Photoshop’s Noise filter, 5% value, monochromatic, to a pure black value. The enhanced section shows the structure of the noise more clearly. It’s important to keep the noise source as minimal as possible. As you can see in the Viewer Node at upper right, it isn’t even visible at normal viewing size.
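For anyone without Photoshop, a rough stand-in for that noise plate can be generated in a few lines of NumPy. The exact Add Noise algorithm is Adobe's, so uniform noise scaled to 5% of full range is only an approximation:

```python
import numpy as np

# Rough stand-in for Photoshop's Add Noise (5%, monochromatic) applied
# to a pure black plate. Adobe's exact filter algorithm isn't public,
# so this approximates it with uniform noise at 5% of full range.
rng = np.random.default_rng(42)

width, height = 768, 432                 # same frame size as the movie
amount = 0.05                            # the "5%" noise value

mono = rng.uniform(0.0, amount, (height, width))     # single channel
plate = np.repeat(mono[..., None], 3, axis=2)        # R=G=B: monochromatic
plate8 = (plate * 255).round().astype(np.uint8)

# Per-channel values top out around 13/255: invisible at normal size.
print(plate8.shape, int(plate8.max()))
```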

The original image is piped through a Chroma Key node, using the intense cyan color of the lighting as the key color. The result was then passed through a couple of other nodes to produce the grayscale mask shown – this is used to Factor the addition of the noise to the original image. For even greater control over the amount of noise added, a Math>Multiply node can be placed between the Invert node and the Screen/Factor input.

The noise is added to the masked areas of the image using a Color>Screen node. This seems to work better than Add. Note that even in the magnified Viewer node, the noise is just barely perceptible – that’s the goal. The enhanced area shows how much is actually there – the very smooth values of the gradient have now been randomized by a small amount in very localized patches that the eye can’t see at normal viewing scale (upper right Viewer Node) – the gradient still looks very smooth.
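For the curious, the math the Screen-with-Factor combination performs can be sketched like this – a NumPy approximation of the node behaviour, not Blender code:

```python
import numpy as np

# NumPy approximation of the node chain: Screen-blend the noise plate
# over the image, with the grayscale key mask acting as the Factor.
def screen_blend(base, noise, factor):
    """Screen: 1 - (1-base)*(1-noise); 'factor' mixes it in per pixel (all 0..1)."""
    screened = 1.0 - (1.0 - base) * (1.0 - noise)
    return base + factor * (screened - base)

base = np.full((4, 4), 0.5)              # mid-gray stand-in for the gradient
noise = np.full((4, 4), 0.02)            # very low-level noise plate
mask = np.zeros((4, 4))
mask[:2] = 1.0                           # "keyed" region in the top half only

out = screen_blend(base, noise, mask)
# Top half lifted slightly (0.51); bottom half untouched (0.50).
```

Screen only ever lightens, and by an amount proportional to how dark the noise is, which is part of why it stays subtler than a straight Add.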

As shown in the top image, once this “noisy” gradient is processed by the codec, there is much less banding evident, because of the randomization of the gradient values. The tradeoff – the compression is slightly less efficient, leading to a small increase in the final video size compared to an unfiltered source.

In Kata, nearly all the banding affected the cyan lighting and its gradients, and that color was pure enough that the Chroma Key masking was very effective over the entire length of the movie without affecting other areas of the images. For other projects another approach to limiting the noise to the appropriate areas will need to be found, but this does prove the concept.

Another idea for rendered images is to find ways to introduce noise into the rendering itself, preventing the gradient from having the mechanically-smooth value changes that can lead to banding. Again, it’s important that such noise be kept at imperceptible levels at normal viewing scale and normal viewing conditions. A little noise goes a long way!

This is very clever and so well-presented. I’ll surely bookmark your technique. Can you give an idea how much it increased the file size for the affected frames?

Yes, but remember – your mileage will undoubtedly vary :smiley:

Kata = 7130 frames (just under 4 minutes @ 30fps); 768x432; multiplexed output to video using Virtual Dub; Xvid video codec set to Quality 1.50 (1.0 = highest available); MP3 audio codec set to 32kbps/11KHz stereo; output to .avi container.

Unfiltered video size = 66.733Mb
Filtered video size = 75.556Mb

  • 8.8Mb or about a 13% increase for this first test file.

I’m currently doing a second pass on the frames to try & eliminate some even more subtle banding in the shadow areas of the frame. I’ll report the results when it’s finished.

Thanks for those numbers. 13% is not bad for getting rid of something I really hate. I understand how the improvement will vary depending on the project.

On closer inspection, the banding actually looks worse in spots in the filtered image at top of your post. I see it particularly in and around the large curve of the K. Please tell me it’s due to the jpeg compression.

Yeah, that’s not banding but JPEG crap – you can tell because it’s spotty rather than forming band shapes.

I’ll put all the images in a zip and post a link so you can see them clean of any JPEGarbage.

Good eyes, btw – I didn’t think that posting the image might introduce even more garbahj…:rolleyes:

Here’s the link to the original images:

Adding noise is one way of reducing banding, but if it’s not adaptive and is just a layer over the top then, as shown, it can increase compression time and file size. The codec can also simply ignore the noise unless you’re going for a high bitrate.

Adaptive noise can be done in Avisynth, I think, as another option to test out.

That’s quite complicated to get rid of banding :smiley:

In 2.49 -> F10 -> RenderTab -> Dither
In 2.5 -> Render Tab -> Post Processing -> Dither

It’s designed exactly to break up color banding, and is more or less what you’re doing =)

Meant to add about ‘dither’ to the previous post but my laptop battery died. :slight_smile: But banding happens for many reasons – dither is not a one-size-fits-all fix.

‘Dither’ works well for going from a high bit depth like 16-bit/32-bit down to 8-bit, which is the main purpose of the dither feature in Blender, but to say it will solve all banding is a little off the mark.
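What dither does on the way down from a high bit depth can be sketched like so – a toy NumPy illustration of the general technique, not Blender's actual implementation:

```python
import numpy as np

# Toy sketch of render-time dither: before reducing a high-bit-depth
# value to 8-bit, add random noise of about one output step, so the
# rounding error turns into fine grain instead of visible bands.
rng = np.random.default_rng(1)

hi = np.linspace(0.0, 1.0, 4096)           # "high bit depth" smooth ramp

plain = np.round(hi * 255).astype(np.uint8)
dither = rng.uniform(-0.5, 0.5, hi.shape)  # ~one 8-bit step of noise
dithered = np.clip(np.round(hi * 255 + dither), 0, 255).astype(np.uint8)

# The plain ramp has exactly 255 band edges; the dithered one has many
# more, much smaller, transitions -- grain rather than bands.
plain_edges = int(np.count_nonzero(np.diff(plain.astype(int))))
dither_edges = int(np.count_nonzero(np.diff(dithered.astype(int))))
print(plain_edges, dither_edges)
```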

Banding when going to a compression codec, as in the original post, is a more difficult problem to solve – as Chip says, the PNGs look great but the compressed video doesn’t.

The dither option looks like it might be very useful to add a noise factor to gradients during the rendering process – I’d have to test it with some rendered sequences. But no way I’ll re-render over 7000 frames, not to mention re-do a lot of the effects work and post-render touchups needed. That way lies madness :eek: :slight_smile:

One possible drawback to dither is that it is applied globally, although that does not seem to offer any visual degradation compared to an un-dithered image at normal viewing scale, in the test images I’ve rendered up. Quantifying the optimum amount of dither also requires a fully-rendered sequence to test each amount of dithering – could be very time-consuming getting the right value pinned down. But it does add noise to gradient areas, and in a visually more pleasing way than the overlay approach I use.

My idea was meant for a post-rendering fix, though. Yellow points out one major limitation – it’s an overlay. While the masking nodes help keep the noise only where wanted, because it’s a static image, if it becomes at all visible in the final video it degrades the visual quality, a bit like a scrim over the viewing area. So it has to be used with discretion and subtlety, and only where it is really needed.

It would be great if the same idea could be made more adaptive, much as the dither option does with the initial rendering. But working within the current toolbox Blender offers, and working on already-rendered frames, this seemed an approach worth trying, and it does indeed provide a solution. Maybe not the most elegant nor the most effective solution, but one worth considering. Use it or not as you see fit.

PS. It’s (possibly) interesting how the whole idea came about – after seeing the banding in my video saves even at high Q levels, I decided to post an inquiry about fixes here on be-ay-dot-oh, only to find my ISP was down. Bummer. But, necessity being invention’s Mom, I spent the downtime trying my home-made band-aid approach. Now I have other possible solutions as well, so it’s all good (better, even) in the end :smiley:

@ yellow – I’ll look into AVIsynth, you’ve mentioned it a lot and it seems to be a good tool to have around.

Just wanted to point out the built-in dither – many don’t know about it =)

Every new approach is worth trying. If it is an improvement or offers more possibilities, it’s a win. If it enhances an existing feature, it’s a win.
And if it came to nothing at all, you still learned something for yourself, and that is a win.


I like your approach.
In one of my last projects I had banding as well, a dark foundry scene with a molten metal man inside. The perfect birthplace for banding =)
But it was a 1080p stereo 3D project, and I decided no one would pay attention to the banding unless they watched the 30 secs 1000 times, so I let the bands be bands :stuck_out_tongue:

I find it more vital to prevent banding in stills than in animations – with lots of cuts and light changes it disappears in the action anyway.

Kata has both – many fast cuts and lighting changes – but because of the nature of the lighting, and the overall “art direction,” with deep shadows and pools of light, the banding becomes horribly obvious in many places. Probably a “worst-case scenario” for the problem.

I like the dither option, thanks for bringing it to my attention – I always tended to think of dithering in terms of 8 & 16-bit images, like yellow mentions, but it does have applications in high-bit-value imagery as well. Had I known of it – and thought of it as a possible fix for compression banding – prior to rendering the Kata frames I would certainly have tried using it.

I generally do this in After Effects, but only to the background which is where most banding occurs. I also will add a blur to the noise layer to help smudge it out.

Chip, yes, banding is a problem to deal with when compressing to a codec, but whether it can be wholly attributed to ‘compression’ I’m unsure – there are many things going on as well as compression. Here are a few suggestions to maybe look into:

Is the extent of banding you are seeing ‘really’ encoded into the video file or is it partly a decompression / playback problem?

Your input is imagery in 8bit RGB (256 Levels) and you’re encoding to 8bit YV12 (YV12 is 4:2:0 subsampling) at DVD resolution and 16 - 235 Levels?

Are you doing anything to pull the full range levels (0 - 255) of your RGB images into the reduced 16 - 235 range that is the ‘norm’ for DVD? Does Xvid have options for this?

Or are you truncating what’s below 16 and above 235?

If for example you are truncating the levels, then when played back on an RGB device the levels 16 - 235 will be stretched to 0 - 255 to map YV12 black and white to RGB black and white, and that could introduce banding – i.e. banding that is not actually in the encoded file but introduced or amplified at playback.

If you’re not squashing the full 0 - 255 range into 16 - 235 and are encoding it straight through, then again, at DVD resolution your player may be doing the truncating for you, as DVD is 16 - 235 – ignoring the levels outside of the 16 - 235 range and then expanding what’s left to RGB playback, introducing banding again.
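The two level mappings described above can be written out explicitly. These are plain-Python helpers for illustration; the function names are mine, not from any particular tool:

```python
# Illustrative helpers for the full-range <-> studio-range conversion.
# The names are hypothetical; the formulas are the standard linear maps.
def full_to_studio(v):
    """Squash full-range RGB (0-255) into the 16-235 'video' range."""
    return round(16 + v * (235 - 16) / 255)

def studio_to_full(v):
    """The stretch a player applies at playback: 16-235 back to 0-255.
    Anything outside 16-235 is clipped -- if the encode merely truncated
    instead of scaling, those levels are simply lost here."""
    v = min(max(v, 16), 235)
    return round((v - 16) * 255 / (235 - 16))

print(full_to_studio(0), full_to_studio(255))   # 16 235
print(studio_to_full(16), studio_to_full(235))  # 0 255
```

The key point: scaling down and stretching back is nearly lossless, but truncating and then stretching discards shadow and highlight detail and widens the remaining value steps, which is exactly where playback-side banding can come from.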

Are you able to extract a frame from your finished video without scaling to RGB levels? Scaling is the norm for applications like FFmpeg – you’d need something like Avisynth’s ConvertToRGB with PC levels for that.

As a small aside, if you’re going to DVD, are you adjusting the colourimetry to BT601 from sRGB (close to BT709 HD)? If not you may see a more saturated image than the original. Again, players will see DVD resolution and apply BT601 – some do that whatever you feed them – and occasionally they might attempt to read a Display Sequence Header, if your encoder adds one. :slight_smile:

There may be things that can be done before resorting to adding noise, to a greater or lesser extent.

Alternatively, if you have a sample of the encoded movie you could post.

You can either frameserve your RGB images out of Blender into AVISynth and on to Xvid – taking care of adjusting levels and colourimetry and adding adaptive noise (something like the AddGrain or Gradfun plugin) – or load your images into VDub via an AVISynth script. Let me know if you’re interested in these routes.

I’ll give blurring the noise plate a shot, too, good idea. Where the banding occurs is a matter of the scene image content – a number of my Kata scenes have gradients in the mid & foreground, and they show banding just as prominently as the BG areas.

I’m working on an even more complex noodle now, finding ways to include and exclude regions of the imagery from the noise addition – much finer control than before, with the ability to tailor the noise content to the nature of the image. I find that dark, relatively unsaturated areas need much less noise, which is good because that’s also where the noise might become more visible.

Other areas to explore might be the nature of the noise plate – perhaps a chromatic rather than grayscale noise might reduce the “scrim” effect somewhat. Lots of room for experimentation.

That’s definitely a possibility, as I get slightly different viewing results when switching players – Virtual Dub provides the cleanest and (imo) best playback – very sharp, contrast extremely close to the original PNGs, and less apparent banding for the most part, though it is still there. But it also displays a fair amount of frame-tearing in certain high-contrast and high-delta sequences.

VLC and WMP show near identical playback characteristics – less contrast (usually in the form of flatter blacks), more banding, and a generally less-sharp image quality. Not egregious but noticeable.

My goal is to get acceptable playback in all three of these players, which will require some compromise in terms of the final “de-banding” solution.

I’m currently intending delivery of Kata only via web venues – Vimeo and YouTube – or by direct d/l of the final source .avi, much as I’ve done with all my other videos and the Kata teasers. Intended for PC playback only. Should I find a need to use the DVD format both as a delivery medium and as a playback format I’d definitely have to find ways to accommodate the issues you present, but truth be told, it’s way over my head in terms of tech details. I’m aware of the problems but just don’t have the tech savvy to muddle through to a good solution. Too many gray cells lost to a misspent youth, I guess ;).

That’s a bridge I’ll have to burn when & if I get to it. Wait… that’s not right. Well, you get my drift.

EDIT: Re-reading your post, I can answer at least one of the questions – no RGB value scale truncation or compression is going on to the best of my knowledge – the entire 256-step scale is used. This is very evident when I play the .avi file from a disc on my x-box (it doubles as a DVD player) – the blacks are crushed, the highlights blown, and the banding as ugly as it gets – the .avi is definitely not tailored to NTSC/480p limitations. I’ve played around with some Compositor tricks to compress the values and adjust the chromaticity, but they’ve not been highly successful – just shooting in the dark, really, a bit of kitchen-sink video chemistry. But educational, also.

“Dithering” always rings with “ugly” and “loss of quality” in the back of my head.
Yeah, I guess you’re one of the old dogs as well.
Whenever we hear dithering we think – at least I do – of 256-color GIF dithering, used to be able to display a 1024x768 image at once, because with 32k colors you wouldn’t have enough graphics memory left to do so :smiley:
Selective palette and dithering… vomits


Oy, I was afraid you were going to say that. In my experience YouTube’s low bitrate introduces banding/posterizes gradients, even at 720p. also vomits

I don’t know about Vimeo.

Yeah, I avoided YouTube for a long time because of the crap quality, but it seems to have leveled up a bit since being eaten by Google. I still prefer Vimeo. Both venues convert a source .avi to some sort of Flash format for play on their sites – it’s the only way to guarantee efficient streaming. But it’s also convenient and makes viewing possible by a much wider audience than those who are savvy enough to deal with the source .avi, the whole codecs issue and that stuff.

I usually post a link to d/l the source .avi as well, for those who want to see the movie in its native size & quality.

Been doing some more testing and elaborations on the original idea, and seem to have come up with a better way to introduce and manage the noise factor.

First, I switched to chromatic rather than monochromatic noise – this actually becomes less visible while still randomizing the gradient areas.

Next, to remove the “scrim” effect, I made 5 different noise plates – Photoshop’s noise filter produces different results each time, a good random or at least pseudo-random pattern of speckles – and used them as an Image Sequence in the Input>Image node for the noise, with Cyclic enabled. Now the plate switches with each frame, providing a different pattern of noise for the codec to chew on from frame to frame. This is analogous in some ways to a very fine film grain, I guess.
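The cycling-plate scheme is simple enough to sketch. This is a hypothetical NumPy version – Photoshop's noise filter is approximated with uniform noise, and only the round-robin selection mirrors the Cyclic image-sequence behaviour:

```python
import numpy as np

# Hypothetical sketch of the cycling-plate scheme: several independent
# chromatic noise frames used round-robin (like Blender's Cyclic image
# sequence), so the codec sees a fresh pattern every frame. Photoshop's
# noise filter is approximated here with uniform noise.
rng = np.random.default_rng(7)

def make_plates(n, height=432, width=768, amount=0.05):
    # Independent noise per channel -> chromatic rather than monochrome.
    return [rng.uniform(0.0, amount, (height, width, 3)) for _ in range(n)]

def plate_for_frame(frame_index, plates):
    # Cyclic behaviour: the plate switches with each frame and wraps.
    return plates[frame_index % len(plates)]

plates = make_plates(5)
# Frame 0 uses plate 0, frame 5 wraps back to plate 0, frame 7 -> plate 2.
```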

With a couple more Matte nodes to create an adjustable Factor mask for all banded areas of the images, I was able to tailor the amount of noise to the kind of area being filtered – very dark areas can stand only so much noise before it becomes visible, while lighter areas can handle more, leading to better banding reduction.

The results can be seen in this short video made from opening frames of Kata: Kata-BandingFix-Qsplit-Xvid2.avi

The screen is split down the middle horizontally and vertically into quads, with diagonally opposing quads either having or not having filtration – quads are labeled. Playing through the video, it’s fairly easy to see some difference and improvement in the filtered areas, but stepping through the frames can show things more clearly. On my machine the unfiltered banding is very visible in WMP & VLC, not so much in Virtual Dub. And some residual banding is visible, but it no longer has the geometrically sharp edges that can make it so visible – the noise breaks up the edges. Again, it’s not a perfect fix, but it sure does help.

One value range that is still subject to visible banding is made up of the lowest levels just above basement black. Here the range of values in the source imagery is so small, but covers such a large area of the frame, that no amount of noise that stays invisible can prevent banding. But the diffused edges of the bands that the filtration causes do make it less noticeable imo. This is much like dithering, so I guess this is a way to dither in post rather than during the rendering process.

Time to try this on all 7130 Kata frames… good time to nap while they cook :wink:

BTW, my browser chokes trying to play back the above video, so just download it & view it locally for best results.