I'm attempting a project that involves putting together several sequences of an animation created with Blender. I was wondering if I could get some feedback on what the best method would be. (I will cut the separate sequences later in third-party video software.)
Should I set Blender to output raw AVI? Should I simply render to several separate individual images and put it all together later? And what's this EXR format I hear about that was used in Elephants Dream?
What I do is render to uncompressed (raw) AVI and put it together in a third-party program. You don't really want to recompress when outputting through a third-party program: it might treat the artifacts from the previous compression as detail that should be preserved, and the file size can actually end up higher.
From what I remember off the top of my head, EXR is an HDR image format: an image that stores a lot of extra data, so that if you want to adjust the exposure level you can do so later on without having to re-render!
Whether to output to images or video depends largely on the length of the animation and the render time per frame. If it's going to take a while to render each section (either because each frame is slow or because there are a LOT of frames), then rendering to images gives you a little security: if there's a crash, power cut, or other failure halfway through, you can restart the render from the last rendered frame. You may not be able to do this with an incomplete video render.
Also, if after rendering you find a small section that could do with some work, you can re-render that section and drop it straight into the image sequence with no cutting and pasting of video strips.
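That resume-from-the-last-frame trick is easy to automate. A minimal sketch (the `frame_0001.png` naming scheme and the helper name are my own assumptions, not anything Blender enforces):

```python
import re
from pathlib import Path

def last_rendered_frame(output_dir, pattern=r"frame_(\d+)\.png"):
    """Return the highest frame number found in output_dir, or 0 if none.

    Assumes frames are named like frame_0001.png; adjust the regex
    to match your own naming scheme.
    """
    frames = [
        int(m.group(1))
        for p in Path(output_dir).iterdir()
        if (m := re.fullmatch(pattern, p.name))
    ]
    return max(frames, default=0)

# After a crash, restart the render at last_rendered_frame(...) + 1
# instead of re-rendering from frame 1.
```

The same scan also tells you which frames to overwrite when you re-render a small section and drop it back into the sequence.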
In the area where you choose the format, you can choose to output to OpenEXR.
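For reference, the same choice can be made from a script; a minimal sketch of the render settings involved (property names as in current Blender versions; this only runs inside Blender, since `bpy` is not available elsewhere):

```python
import bpy  # Blender's built-in Python module; only importable inside Blender

scene = bpy.context.scene
scene.render.image_settings.file_format = 'OPEN_EXR'  # one EXR file per frame
scene.render.image_settings.color_depth = '16'        # half-float, as used on Elephants Dream
scene.render.filepath = '//render/frame_'             # '//' = relative to the .blend file
```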
Other than that, it looks like Elephants Dream actually rendered to PNG. Quoting Matt:
“But why all the composite/color correction work is done on the 3D scenes and not on rendered (floating point OpenEXR) images is pure amateurism.”
It’s not amateurism at all; it’s working within the constraints of the project. Our major bottleneck was not processor power but bandwidth, transferring the files back from the USA render farm to Amsterdam. The most we could get out of our link was about 200 KB/s, and with 5 MB for one layer of an HD EXR frame, those transfer times add up quickly (especially with speed vector passes, Z passes, etc.), and along with them the time needed to do the inevitable checking of frames and re-rendering.
Rendering to separate image sequences per pass would have had its advantages, but we never would have been able to get the files back from the farm in time. As it was, we had to use the 16-bit half-precision EXR format, and Ton also added the ‘preview jpeg’ option so we could see the results of the render quickly, without waiting hours for the EXR frames to transfer. There were also many advantages in compositing direct from 3D, if only from the aspect of organising the files and re-rendered versions. It made things a lot simpler to manage, and also allowed a lot more freedom to split things up into multiple scenes without worrying about file sizes, which was necessary in scene 8: many of the shots there had to be split into at least 3 scenes, with up to 10 or so total layers.
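To put those numbers in perspective, here is a back-of-the-envelope sketch using the figures quoted above (5 MB per layer at 200 KB/s; the frame and pass counts below are illustrative assumptions, not production figures):

```python
def transfer_seconds(frames, layers_per_frame, mb_per_layer=5.0, link_kb_s=200.0):
    """Rough time to pull an image sequence back over a slow link."""
    total_kb = frames * layers_per_frame * mb_per_layer * 1024
    return total_kb / link_kb_s

# One layer of one HD EXR frame: 5 MB at 200 KB/s is about 25.6 seconds.
one_layer = transfer_seconds(1, 1)

# A hypothetical 10-second shot at 25 fps with 3 passes
# (e.g. combined, speed vectors, Z) works out to several hours.
shot_hours = transfer_seconds(250, 3) / 3600
```

It's easy to see why halving the file size with 16-bit EXRs, and previewing with small JPEGs instead of pulling the full frames, made the difference between a workable pipeline and an unworkable one.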
It seems there has been some discrepancy in the past over what format Elephants Dream output to. Most seem to argue .png was used; however, I’ve been told otherwise.
Based on the response from ‘Broken’ in this thread, he seems to indicate that the EXR format was used from the render farm in order to allow exposure corrections later without re-rendering.