In-depth interlacing tutorial - compositing plus avidemux

Creating Video for Broadcast Using Blender and Avidemux

This guide will walk you through altering a Blender scene so that it can be rendered for Standard Definition or High Definition at 60 fps, and then interlaced in Avidemux.
It does not, however, cover the procedure for encoding your final video into a format that is “broadcast ready”, or ready to be burned to an HD-DVD or Blu-Ray disc. I hope to cover these things in a future guide.
This walkthrough assumes you have an in-depth working knowledge of Blender, and at least a casual understanding of Avidemux.

Many problems arise when using Blender’s fields option to obtain an interlaced picture. When sequencing pre-rendered interlaced video in the Video Sequence Editor, you can run into problems when you try to add other elements or apply filters. You might also find subtle artifacts during transitions, especially when dealing with heavy motion. Let’s face it, in post-processing, interlaced video is a pain to deal with.
To get around these annoyances, I propose rendering video at half the normal vertical resolution, at sixty frames per second. I have found this much easier to work with, and after making the desired alterations, the result can be fed into Avidemux for interlacing.

Doubling the framerate:

First off, if you’ve been working at 30 frames-per-second, you will need to convert your scene to 60 frames-per-second (or from 25 to 50 for those of you in the PAL crowd). This basically entails doubling the length of any IPOs and NLA strips that compose your animation.

Make sure all layers are visible so that all of your NLA strips show up. Then, in the NLA editor, with the frame indicator at 1, select all of your strips and scale them by exactly 2.0.

For IPO curves, simply double the Xmax value in the Transform Properties box.

Also, if you have any softbody objects in your scene, you should cut their Speed variables in half, and re-bake their animation if necessary.
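The retiming math above can be sketched in a few lines of plain Python (this is only an illustration of the arithmetic, not a Blender script — the function name is my own):

```python
# Hypothetical sketch: converting keyframe times from 30 fps to 60 fps
# means scaling every frame number by 2, measured from frame 1
# (the position of the frame indicator when you scale the strips).

def retime_keyframes(frames, factor=2.0, origin=1):
    """Scale a list of keyframe positions about `origin` by `factor`."""
    return [origin + (f - origin) * factor for f in frames]

# A strip keyed at frames 1, 15, and 31 at 30 fps...
print(retime_keyframes([1, 15, 31]))  # -> [1.0, 29.0, 61.0]
```

Note that the animation still spans the same amount of real time; there are simply twice as many frames in it.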


Halving the vertical resolution:

Whichever format you’re using, you will need to halve the value of SizeY and double the value of AspY, except in the case of 720p. 720p is non-interlaced, and thus must be rendered at full frame, but still at sixty frames-per-second.

This chart shows the conversions for the most-used formats:

Format                Field size  AspX  AspY  Framerate
NTSC                  720x240     10    22    60
PAL                   720x288     54    102   50
720p60                1280x720    1     1     60
720p50                1280x720    1     1     50
1080i60               1920x540    1     2     60
1080i50               1920x540    1     2     50
All HD; resize later  1920x720    2     3     60/50
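The SizeY/AspY rule behind that chart can be written out as a small Python sketch (again, plain arithmetic rather than any Blender API — the function is hypothetical):

```python
# Sketch: derive the half-height "field" render settings from a full
# interlaced frame, per the rule above: halve SizeY, double AspY.

def field_settings(size_x, size_y, asp_x, asp_y):
    """Return render settings for half-vertical-resolution rendering."""
    return {"SizeX": size_x, "SizeY": size_y // 2,
            "AspX": asp_x, "AspY": asp_y * 2}

# A full NTSC frame of 720x480 with a 10:11 pixel aspect...
print(field_settings(720, 480, 10, 11))
# -> {'SizeX': 720, 'SizeY': 240, 'AspX': 10, 'AspY': 22}
```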

If you render at 1920x540 and later re-size your resulting render to 1280x720, you’ll be hard-pressed to notice the slight drop in vertical detail, so you can go ahead and do this to save a little bit of render time if you like. But for those of you who wish to create the highest-quality HD, I suggest rendering at 1920x720 with aspect settings of AspX:2 and AspY:3. The result can be re-sized to either 1080i or 720p with the proper aspect ratio of 16:9 and retain the highest quality. But keep in mind, all of the examples I use here are in 1080i.
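You can double-check that these odd-looking resolutions still display as 16:9 with a couple of lines of arithmetic (plain Python, not Blender):

```python
from math import gcd

# Display aspect = (SizeX * AspX) : (SizeY * AspY), reduced.

def display_aspect(size_x, size_y, asp_x, asp_y):
    w, h = size_x * asp_x, size_y * asp_y
    g = gcd(w, h)
    return (w // g, h // g)

print(display_aspect(1920, 720, 2, 3))  # -> (16, 9)
print(display_aspect(1920, 540, 1, 2))  # -> (16, 9), the 1080i field size
```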


If you happen to be using the Blender Compositor, you may have to make a few more changes. If you’re compositing a Blender render with an external video, you need to make sure that video runs at sixty frames per second. Depending on the resolution of the external video, you may also have to feed it into the Scale node to halve the vertical resolution.
Also, in all of your blur nodes, make sure that the X value is double the Y value. If you’re using my suggested 1920x720, you might want to play with the X and Y options for blur, rather than flat-out doubling the X-value.
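The reason for doubling the blur’s X value is that in half-height space, a blur must be twice as wide as it is tall to look round on the final full-height frame. A hypothetical sketch of that compensation (my own function name, plain Python):

```python
# Sketch: for a blur that should appear circular with radius r pixels
# on the final display, the half-height render needs X = r, Y = r / 2.

def field_blur_size(display_radius_px):
    """Return (x, y) blur sizes in half-height (field) space."""
    return (display_radius_px, display_radius_px // 2)

print(field_blur_size(8))  # -> (8, 4)
```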


If you will be re-sequencing, or otherwise altering the result of your render, I suggest rendering as individual frames in the EXR, Targa, or perhaps DPX formats.

EXR offers High Dynamic Range imaging capabilities which can preserve sharp specular highlights during post processing. Using an EXR sequence, you may notice a more vibrant picture after applying some filters, like gaussian and motion blurs.
Please note that broadcast formats, as well as nearly all current displays and projectors, have a limited dynamic range. In the final broadcast-ready version of your film, the augmented luminance provided by the EXR format will be mixed down to the low dynamic range, but should still show improved specular highlights and color contrast overall.
If you’re keen on saving hard drive space, select EXR’s half option above the codec selection drop-down box.

EXR also contains the following compression options:

ZIP - Probably what you’ll want to use. Provides the best lossless compression for most images.
PIZ - Try this if you’re rendering with Ambient Occlusion, or if you for some other reason have a grainy picture.
RLE - The Blender 2.42 release log has this to say about RLE: “runlength encoded, lossless, works well when scanlines have same values.” In other words, it compresses well when horizontal rows of pixels contain long runs of identical values. On a typical rendered image, though, it will produce much larger files than the other methods of compression.
PXR24 - Produces a much smaller file, at the cost of lossy compression.

The Targa format, while somewhat limited for this purpose, also takes much less hard drive space. If you have a small hard drive and limited backup capabilities, this may be your best option.

Then again, if you’re feeling adventurous, and have oodles of hard disk space to spare, you might try the DPX format. From what I gather, the DPX format is mostly used for scans of movie film, but I understand that the entire Elephant’s Dream movie was rendered with this image codec. The film shows off some very nice natural-looking contrasts and specular highlights, so you can draw your own conclusions. I haven’t yet had occasion to use DPX due to my own storage constraints.

After the render has completed, apply whatever post-processing and re-sequencing you need to in whatever software you employ for the task, and output a video file of some kind.
I sequence the frames in Blender’s sequencer, and render them as an AVI JPEG with a quality value of 100. But you can output any kind of AVI file you like, just so long as the quality is sufficiently greater than the format you will ultimately wind up with (triple the bit-rate should be a safe bet).
The maximum bit-rates of common video media (without audio) are as follows:

Blu-Ray: 40.0 Mbit/s
HD-DVD: 29.4 Mbit/s
DVD: 9.8 Mbit/s
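Applying the triple-the-bit-rate rule of thumb to those maxima is simple arithmetic; a small illustrative sketch (the function name is my own):

```python
# Illustrative only: suggested minimum bit-rate for the intermediate
# master, at three times the maximum of the target medium.

MEDIA_MBITS = {"Blu-Ray": 40.0, "HD-DVD": 29.4, "DVD": 9.8}

def master_bitrate(target):
    """Suggested master bit-rate (Mbit/s) for a given target medium."""
    return MEDIA_MBITS[target] * 3

for medium in MEDIA_MBITS:
    print(f"{medium}: master at >= {master_bitrate(medium):.1f} Mbit/s")
```

So a film destined for Blu-Ray would want a master of roughly 120 Mbit/s or better.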


Once you have your final cut, fire up Avidemux and load your AVI. Then add in whatever WAV audio track you have to go with it, by selecting Audio > Main Track in the menu bar.
Open up the Video Filters window (Ctrl+Alt+F), and in the Interlacing tab, scroll down and add the Merge Fields filter. This will combine every two half-frames of your video into interlaced full frames that run at 30 fps (or 25 fps for PAL). Your video will now be interlaced with the odd field first. If you’re working with a format that requires the even field first, also apply the Swap Fields filter.
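What Merge Fields does can be modeled in a few lines of plain Python (a toy weave over lists of scanlines, not Avidemux’s actual internals):

```python
# Toy model of field merging: weave each pair of consecutive
# half-height frames into one full interlaced frame, halving the
# frame count (60 half-frames/s -> 30 full frames/s).

def merge_fields(half_frames):
    """half_frames: list of frames, each a list of scanline rows.
    Weaves frames 2n and 2n+1 together, first field's lines on top."""
    full = []
    for a, b in zip(half_frames[0::2], half_frames[1::2]):
        frame = []
        for row_a, row_b in zip(a, b):
            frame.append(row_a)  # line from the first (odd) field
            frame.append(row_b)  # line from the second (even) field
        full.append(frame)
    return full

# Two 2-line half-frames become one 4-line interlaced frame:
fields = [[["A1"], ["A2"]], [["B1"], ["B2"]]]
print(merge_fields(fields))  # -> [[['A1'], ['B1'], ['A2'], ['B2']]]
```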

The Merge Fields filter in Avidemux is the only means I’ve found using Linux software to interlace video like this. I’m sure that under the Windows and Mac OSX platforms, one might find alternate methods.

Well, that’s it! I hope to follow this up with a guide for encoding to actual broadcast formats, perhaps using Transcode, but for now I hope you find this information useful.
If you think I’ve missed something, gotten something totally wrong, or you just have something to add, don’t hesitate to tell… someone. Or email me at [email protected]. Feel free to re-post this tutorial wherever you like, so long as you credit me and include my email address.