Elephants Dream Render Format

I’ve been having a bit of trouble finding detailed information on what settings, codecs and so on they used for rendering Elephants Dream. I’d like to know what video format they used, or the process they used to get the final video output, as I want to render high quality lossless video. What’s best to use? Thanks! :slight_smile:

They rendered to a .png sequence with the aid of a render farm.
Rendering Services Provided by

Bowie State University Xseed

Dr. Calvin W. Lowe, President

Prof. Mark Alan Matties, Director


For rendering the movie, Bowie State University in the USA generously donated access to their Xseed cluster. Prof. Matties volunteered for us, providing local technical assistance. At the peak of our rendering load, we were using 160 of the 224 dual G5 Xserve nodes.

You can find all the info here:


If I remember correctly, Elephants Dream was rendered as a sequence of OpenEXR images, and those lossless files were then edited together and output to formats for the DVD and the additional HD versions.

Nope, it was .png.

Uncompressed audio tracks and original png files: media.xiph.org/ED/

Holy cow, these things are super detailed - right down to every single stitch on their clothes. And the files are life size too. No wonder they had to use that render farm.

Thanks for the replies. I thought that they rendered the frames to .png, but I don’t know what they did from that point on; I wouldn’t imagine they converted to uncompressed AVI.

Well, it was released on the net in two formats, QuickTime and DivX, so they probably converted to QuickTime Animation (lossless compression) and AVI since, as far as I’m aware, DivX won’t work with QT and QT doesn’t much care for AVI either.

RamboBaby, DwarvenFury is right, it wasn’t PNG. We rendered directly to OpenEXR format on the render farm. This allowed us to make accurate colour corrections directly on the rendered frames, which came in handy.

After we had the entire movie finished and collated as EXR sequences in the video sequence editor, we then used the EXR files to render (really just conversion) out some different format versions for different purposes. We made a 16-bit DPX sequence to take to the DI lab for putting on HDCAM tape to be screened in the cinema, and we also made a PNG sequence to use for compressing the web versions. The PNG sequence was then turned into AVI versions, with some DivX compatible MPEG4 codec, and it was also turned into (via a couple of steps) the QuickTime versions, with the H.264 codec.
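For anyone wanting to reproduce a similar pipeline today, here's a minimal sketch of the conversion steps using ffmpeg (not the actual tools the Orange team used in 2006). The file patterns, frame rate, and output names are assumptions for illustration only; the functions just build the command lines so you can inspect them before running anything.

```python
# Sketch of an EXR master -> PNG sequence -> H.264 web version pipeline.
# Run the returned commands with subprocess.run() if ffmpeg is installed.

def exr_to_png_cmd(exr_pattern="frame_%04d.exr", png_pattern="frame_%04d.png"):
    # Convert the lossless EXR master frames to an 8-bit PNG sequence.
    return ["ffmpeg", "-i", exr_pattern, png_pattern]

def png_to_h264_cmd(png_pattern="frame_%04d.png", out="web_version.mp4", fps=24):
    # Compress the PNG sequence to an H.264 video for the web.
    return ["ffmpeg", "-framerate", str(fps), "-i", png_pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

print(" ".join(exr_to_png_cmd()))
print(" ".join(png_to_h264_cmd()))
```

The point of going through PNG first mirrors what's described above: the heavy colour work happens on the float EXR masters, and the 8-bit sequence is only an intermediate for compression.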

Cool. The only thing I was able to find was the link to the .png files above. I looked at several of the full frames and can’t imagine any of the ones that I saw (considering how incredibly detailed they are) staying within the stated memory limits. Did you guys render to layers and then composite those into the final image sequence that you posted on the net?

Well, all of the shots involved compositing to some extent. Some were split into render layers, but that’s mainly for ease of compositing - from what I understand with our conversations with Ton, render layers don’t actually ease the memory burden at all, it’s just selectively hiding or showing what’s already there. To save memory, you have to split the shot into multiple scenes (‘Blender Scenes’, that is, in the top header menu), since after each scene and all its render layers have finished rendering, memory is freed and the next scene is rendered. There were a few shots that we had to split into multiple scenes, mainly because of heavy geometry (like the big hands at the end).
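The reasoning above can be illustrated with a toy model (my own simplification, not anything from the production): render layers in one scene all stay resident, so peak memory is roughly the sum of their footprints, while separate Blender scenes are rendered one after another with memory freed in between, so the peak is only the largest single scene.

```python
# Toy model of the memory argument: one scene vs. a shot split into scenes.
# The per-scene footprints (in MB) below are made-up illustrative numbers.

def peak_memory_mb(scene_sizes_mb, split_into_scenes):
    if split_into_scenes:
        # Scenes render sequentially; memory is freed after each one.
        return max(scene_sizes_mb)
    # Render layers in a single scene: everything is in memory at once.
    return sum(scene_sizes_mb)

footprints = [900, 700, 600]
print(peak_memory_mb(footprints, split_into_scenes=False))  # 2200: over a 2GB node
print(peak_memory_mb(footprints, split_into_scenes=True))   # 900: fits comfortably
```

This is of course a crude picture (it ignores shared textures, the compositor, and so on), but it shows why splitting the heavy-geometry shots into scenes helped where render layers alone couldn't.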

But in all of these cases, the compositing and combination of the layers was done within the single .blend file and rendered out as a single EXR image. We didn’t actually render the layers to separate EXR sequences, because of our particular situation with a huge yet overseas render farm: our bottleneck was bandwidth, not processor power, so we had to limit the number of files we were sending back and forwards as much as possible. We touch on this issue in the talk we gave at the Blender Conference, perhaps it’s interesting to you: http://video.google.com/videoplay?docid=1831262087977442632&hl=en

As far as memory limits go, our (Andy’s and my) workstations that we did the lighting/compositing/render work on were dual G5s with 2GB RAM, as were the render farm nodes. So we had to stay within that, or it would crash or other horrible things would happen. The ‘Free Tex Images’ option helps a lot here, so that the large image textures are cleared from memory before the compositor does its thing. There’s a bit more info on our memory issues here too: http://orange.blender.org/blog/stupid-memory-problems/


Thank you! Checking it out now.

Man, that should be required viewing for all Blender users. I especially liked the crappy-matics and movie references. Thanks again, Broken.