Subtle tones get severely banded on encoding

I’m finally done rendering & editing my show reel and would of course like to have a Cadillac version to put in front of prospective clients, but I’m being stymied by the video encoding process. A lot of my imagery uses subtle lighting effects that fade in smooth gradients in the original PNGs, but get strongly banded when encoded. Here’s an example:

The PNG version does show banding, despite my having added a diffusing bump texture to the surface being lit, but it’s not nearly as noticeable as the other two.

As can be seen, part of the problem is the media player’s decoding – some are very clean, others not so much. But the encoded vids show similar problems across all playback venues.

This version was encoded in Blender using MPEG/h.264 into .avi at a 4000 kbps video bitrate. 4000 kbps is a typical bitrate for web usage, but even at 20,000 kbps (20 Mbps) the banding is still highly noticeable.

Any suggestions (short of re-rendering) as to how to avoid the video banditos when encoding?

Ensure the levels in your encodes are suitable for playback – the whole 16-235 thing – and use dithering. Encode outside of Blender, where you can control encoding parameters far more precisely.

You can also frameserve out of Blender via AVIsynth to encoders like HCenc and x264. x264 offers encoder presets as a starting point. AVIsynth offers dithering control, encoding to 4:2:2, levels control, etc.

Did you dither the PNG output in the first case? Sadly there isn’t much you can do with video codecs; they suck.

Thanks for the suggestions, dithering in particular. I can’t afford the time to re-render a big chunk of the footage, though, so I’ll just have to see what I can do in post.

@ yellow: I’m somewhat aware of the whole tonal range compression/levels clipping issue in doing video conversions, but not so much of the ways to work around it. I generally try to light with a full range of tonalities, so while I try not to blow out the highlights, some subjects require very high-end white and/or very near pure zero black. Is it necessary to compensate for tonal range compression in the original rendering, or are there post operations that can help with this issue? I don’t have access to a wide range of video post options, so recommendations of specific software features aren’t as useful as a general description of what needs to be done.

I am using encoders outside of Blender (e.g., Handbrake, which afaik uses x264), but still run into the banding issue, so I suppose I’ll have to do as much as I can with the source imagery, where I have more options.

An interesting discovery regarding the Dithering option – it has a much greater effect on banding when used during the original rendering than if used to process already-rendered files such as an image sequence. I find this a bit odd because it’s included in the Post-Processing section, so I assume it’s applied to the image as it comes from the rendering pipeline and just prior to being written to a file. Since this is (I again assume) essentially the same process used when an image sequence is the source, I wonder why the significant difference in the results?

I’d suggest that the place to dither is in post, as the very last operation on a ‘master’ mezzanine movie file, rather than at render time to image sequences. Why would anyone want to screw up perfectly good image frames with dither to solve playback problems in a media player? :slight_smile:

I think the dither function in Blender is aimed more at conversion from 32-bit/16-bit to 8-bit images.
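As a rough illustration of why that bit-depth conversion bands in the first place (plain Python, nothing Blender-specific; the 2% gradient is just a made-up example of a soft lighting falloff):

```python
def to_8bit(v):
    """Quantize a float value in [0.0, 1.0] to an 8-bit code (0-255)."""
    return round(v * 255)

# A subtle float gradient covering just 2% of the tonal range,
# like a smooth lighting falloff in a high-bit-depth render.
ramp = [0.50 + 0.02 * i / 999 for i in range(1000)]

# 1000 smoothly varying float values collapse to only 6 distinct
# 8-bit codes -- those 6 flat regions are the visible "stairs".
codes = sorted({to_8bit(v) for v in ramp})
```

Dithering at the moment of that conversion spreads the rounding error around instead of letting it pile up into hard contours.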

I can only speak from what I’ve been seeing in the renderings. In re-rendering certain scenes from Kata that I’m using in the show reel (this is in 2.49b, BTW, but it works the same in later versions), I used the Dither option and it massively improves the banding I’d get in the PNG imagery (in the BGs), but did not affect the figure rendering to any visible degree unless I pushed the Dithering value to its max of 2. This held even when zoomed in to see things on a pixel-by-pixel level. I took test renderings into Photoshop and did blink-comparisons using layers, and the BG was obviously dithered while the main subject was not. Seems the Dithering option is actually a very selective filter that works only where needed when used at a level of 1.0 or so.

By contrast, when taking already-rendered (without Dither) image-sequence frames from the same scenes into the Compositor and writing them out with Dither enabled, the dithering is visible but does not affect the banding to a large degree. So it seems that the Dither option is designed for use in the rendering stage, rather than as a purely post operation.

One of the reasons I never used Dithering in the past is that I always thought of it much as you seem to, based on the bad old days of pre-24 & 32-bit color: a way to prevent horrendous banding in 8 & 16-bit RGB images. But it seems to have evolved into a more specialized purpose, and does seem to cure a great number of banding ills when used intelligently.

Sorry, I think we misunderstood each other. I mentioned in my first post encoding outside of Blender via x264 or HCenc (MPEG-2), that is, via a tool like AVIsynth, which allows all manner of filtering, levels adjustment, and dithering.

I was totally disregarding Blender’s built-in dithering because, like the encoder options, any meaningful control via parameters is practically non-existent.

You mention an understanding of levels; if you don’t mind me asking, how are you adjusting them to get your luma to fall into 16-235? Are you somehow setting the x264 options ‘input-range pc’ and ‘range tv’ via Blender’s x264 encoder options, to do the luma scaling at the encoder? Or are you encoding h.264 as a native stream unmuxed, then muxing into an mp4 or mkv with full-range levels and setting the fullrange flag ‘on’, so a media player knows to do the scaling into 16-235 for you?

When you go from RGB levels to YCbCr levels, if they’re not adjusted your image will have its contrast stretched, and banding will almost certainly appear.
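To make the 16-235 point concrete, here’s a small sketch (plain Python, illustrative only, assuming the usual 8-bit studio-swing convention of luma 16-235; chroma uses 16-240 and isn’t shown). If the levels aren’t handled at encode time, the player’s stretch back to full range clips anything outside 16-235 and widens every quantization step, which is where the extra banding comes from:

```python
def full_to_limited(v):
    """Map a full-range 8-bit value (0-255) into TV/limited range (16-235)."""
    return round(16 + v * (235 - 16) / 255)

def limited_to_full(y):
    """The stretch a player applies on display; anything outside
    16-235 is simply clipped away."""
    return min(255, max(0, round((y - 16) * 255 / (235 - 16))))

# A full 0-255 ramp survives as only 220 distinct limited-range codes,
# so the later stretch back to 0-255 cannot recover all 256 levels --
# the gradient's steps get coarser, i.e. more prone to banding.
codes = {full_to_limited(v) for v in range(256)}
```

The same arithmetic is why near-pure blacks and whites are the first places to suffer: they sit right at the clip boundaries.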

You can also do ordered dithering via AVIsynth with more control.
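For what it’s worth, the mechanism behind ordered dithering fits in a few lines (plain Python, not AVIsynth syntax; the matrix and the 16-level quantization here are just illustrative): a position-dependent threshold nudges each pixel before quantization, so a hard band edge dissolves into a fine pattern of the two neighboring codes.

```python
# Classic 4x4 Bayer threshold matrix.
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def quantize(v, levels=16):
    """Plain quantization of an 8-bit value down to `levels` levels."""
    step = 255 / (levels - 1)
    return round(round(v / step) * step)

def ordered_dither(v, x, y, levels=16):
    """Quantize with a Bayer offset that depends on pixel position."""
    step = 255 / (levels - 1)
    bias = (BAYER4[y % 4][x % 4] + 0.5) / 16 - 0.5  # in (-0.5, 0.5)
    q = round(v / step + bias) * step
    return min(255, max(0, round(q)))

# Without dithering, a flat value snaps to a single code; with it,
# the two bracketing codes are mixed in proportion to where the value
# sits between them, which is what hides the band edge.
plain = {quantize(105) for _ in range(16)}
mixed = {ordered_dither(105, x, y) for x in range(4) for y in range(4)}
```

In AVIsynth itself you’d reach for its dithering-capable filters rather than writing this by hand; the sketch is only to show what ‘more control’ over the pattern and strength is controlling.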

So two possible routes: either frameserve your RGB imagery to an external encoder like x264, setting the various ranges and the color matrix and trying some of the presets and profiles – ‘animation’ is one such option.

Or use a tool like AVIsynth with ImageSource() as input for your image sequences; then you can do a levels adjustment, resize, sharpen, dither – whatever is needed – before handing off to an encoder that doesn’t give you the options that x264 does.

Just suggestions, trying to be helpful.

Too bad, because it does seem to be not only controllable but also very useful. I only disregard options that have no demonstrable use for a particular task, rather than as part of a blanket attitude. But whatever.

What I’m doing right now is trying to develop more useful post-processing methodologies to deal with problems like banding as well as general encoding tasks, instead of just accepting defaults and living with the result. While it would be nice to have a ginormous suite of software dedicated to video alchemy, for me it’s not all that practical, so I use tools like V-Dub and Handbrake to accomplish what I can. Blender, too, if it does what’s needed. Whether or not they can achieve the results you recommend remains to be seen, as I’m still in the process of learning to greater depth what the various options are. I appreciate your informative comments, they do present some aspects of encoding I hadn’t yet addressed. I’ll post back here if I develop any useful approaches to the issues raised.

Nick’s video on taming banding might offer up some insight.

While it is After Effects based, the main takeaway is to mix a bit of barely perceptible color into your banded layer so the encoder has more color information to work with.
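The same trick can be sketched outside After Effects (plain Python, purely illustrative; the amplitude, level count, and flat-patch value are made-up numbers): add noise of well under one quantization step before rounding, and a flat region that would otherwise snap to a single code becomes a mix of the two codes that bracket its true value.

```python
import random

def quantize_with_noise(values, levels=16, amplitude=0.5, seed=1):
    """Quantize 8-bit values down to `levels` levels, adding a little
    pre-quantization noise (amplitude in units of quantization steps)."""
    rng = random.Random(seed)
    step = 255 / (levels - 1)
    out = []
    for v in values:
        n = rng.uniform(-amplitude, amplitude)  # the barely perceptible noise
        q = round(v / step + n) * step
        out.append(min(255, max(0, round(q))))
    return out

# A flat patch that would band as one solid code now dithers between
# the two neighboring codes, mostly the nearer one.
flat = quantize_with_noise([105] * 1000)
```

The trade-off the thread runs into later is visible here too: the stronger the noise, the better it breaks up contours, but the more it reads as grain.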

Interesting. I did something like this with my initial version of Kata, using the Blender compositor to lay in a faint animated noise pattern in the banded areas. It was a limited success: to eliminate the banding, the noise had to become just barely perceptible, reducing the overall perceived contrast & sharpness of the imagery. Always a trade-off. Had I known the Blender Dithering option was as effective as it is, I’d have been a lot better off using it to begin with.

Of course, compression by the video codec may eliminate some of the benefits too.

I’m fully expecting that, but the task at hand is to minimize the banding to only the inevitable – that which video compression itself brings to the imagery.

I have a number of frames written out now with Blender Dithering incorporated into the BGs. I’ll see if I can put together a split-screen with those and the earlier non-Dithered stuff, so the benefits are easily seen. Turns out I didn’t need to do a full re-rendering of the scenes, just the backgrounds of sequences that exhibited banding in the PNGs, so the new plates rendered fairly fast. I used the Compositor to replace the earlier banded BGs, which was pretty much a seamless process even though I hadn’t saved anything in layered EXR – just used some masking tricks.

@ yellow – part of my pipeline on this project is writing out a set of PNG images from the VSE that incorporates all the transitions and effects, for external processing and muxing with a mixdown of the audio edits. So it seems a good place to also do the levels massaging you recommend – see if I can output frames that the encoding ’ware won’t have to jump through hoops to crank out as clean as possible. I plan on using two containers, .mp4 and .avi, to try to ensure playability across a range of players and other utilities. I’m shunning QuickTime as it doesn’t offer much in the way of benefits but is cranky as heck on the PC. I’ll also be looking into formats appropriate for CD/DVD in case I want to send out hard copies for playback on HDTV.

Sadly, I think the VSE doesn’t include proper color management (even though you can switch to float), so you get a limited color space for corrections and, I guess, effects. Will have to consult yellow or the blog.

chip, disregard my comments about levels – of course your source is RGB, so ffmpeg handles the levels to 16-235 going from RGB to YCC.

It’s been a long time since I’ve encoded from an RGB source; I failed to put my brain into use. :slight_smile:

If I understand correctly, there is no linearizing done in the VSE, but as you say there is the ability to work at float precision.

Re: color management – if we’re inputting sRGB gamut and sRGB gamma, that’s what’s going out to delivery anyway, albeit with a Rec.709 curve, so there’s no CM needed, and what does need to be done is handled by the encoder. Only linearizing for blending and transitions is missing in the VSE with regard to CM.

Video geekery aside ;), the Dithering option in Blender’s Post-Processing section seems to be the ticket to much cleaner final videos. I put a little clip up on Vimeo to demonstrate:

The video is native 720p and uses a split-screen produced in the Compositor to illustrate the benefits of using Dithering for certain subjects. The left half of the screen is un-dithered imagery and shows considerable banding in the gradients of light scattered about the setting. On the right I’ve swapped in imagery of the setting made with Dithering set to its max of 2.0 – almost no banding is visible, and the perceived depth of the setting is enhanced considerably to my eyes, a little side benefit of the de-banding process.

The Vimeo conversion may introduce a measure of tonal distortion itself, so for the best look at the comparison, download the original from the Vimeo page while it’s still available.

The .mp4 video was made in Handbrake using the h.264 video codec set for an average bitrate of 5Mbps, not atypical for web-delivered video material. I’ll also be making a comparable version in the .avi container using Blender, just to see how that pathway pans out.

PS: The .avi version written out from Blender using the MPEG/h.264 options @ 5Mbps also looks immeasurably better in the dithered portions. This is good because, while most artifacts like banding are produced during compression and thus are codec-dependent, the choice of container format often determines which players can actually play the videos. Always good to have options.

Having played both the .mp4 and the .avi versions in a handful of players, the benefits of using the Dithering option are very obvious. Just wish I’d tried it earlier.

EDIT: I just discovered that the video linked for downloading on the Vimeo page is yet another damn transcoding of my original upload. Don’t know why they do that! It does not reflect at all the quality of the original, since it not only introduced more (and more severe) banding but also seems to have changed the entire luma and chroma structure. I’ll try to find somewhere to upload the original test video for direct download.