I’d like to render a 4K video. It’s about 5 minutes long and has 10,000 frames.
Each frame takes about 2 minutes to render, which means the whole scene will take a very, very long time and it’s impossible to render it in one session.
I have an idea, but I’m not sure whether it will work. I would render one minute of video per day, ending up with 5 separate parts, and then merge them using the VSE. I’m also not sure whether the merging process will take as long as the rendering.
Can you not just pick a new start and end frame each day? Then just join them using any video editor that can do a direct stream copy (if you are rendering to video directly).
According to my calculations you will need about 14 days of non-stop render time (10,000 frames × 2 minutes per frame ≈ 333 hours), so you won’t get one fifth done per day unless you throw multiple machines at it.
Render to an image format, stop the render at any time, and continue from where you left off by setting the start frame again or by disabling overwrite in the output settings.
Once all frames are rendered, encode the image sequence into a video format.
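For reference, the same settings can be driven from Blender’s Python console. A minimal sketch, assuming a recent bpy API; the output path and frame numbers are placeholders:

```python
import bpy

scene = bpy.context.scene
rd = scene.render

# Write one file per frame instead of a single video file
rd.image_settings.file_format = 'PNG'   # or 'OPEN_EXR'
rd.filepath = '//renders/frame_'        # placeholder output path

# To resume after a stopped session, move the start frame forward
scene.frame_start = 2501                # e.g. last finished frame + 1
scene.frame_end = 10000

bpy.ops.render.render(animation=True)
```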
@JA12 is right. If you are rendering a long animation, you really should just output to images.
If you disable overwrite, then you can just press Esc whenever you want to stop, and when you start the animation again, Blender will pick up right where you left off.
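The matching settings in Python, as a rough sketch (the placeholder option is an assumption for sharing the output folder between sessions or machines):

```python
import bpy

rd = bpy.context.scene.render
rd.use_overwrite = False     # frames that already exist on disk are skipped
rd.use_placeholder = True    # optional: write empty placeholder files as frames start,
                             # handy if several machines render into the same folder

# Restarting the animation render now skips existing frames and renders only the missing ones
bpy.ops.render.render(animation=True)
```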
In terms of stitching the image sequence together, I would recommend VirtualDub. It can load a .png sequence and export it to any codec you want, really quickly. I did a 3-minute animation a while back, and it took maybe 5 minutes to convert it to an .mp4.
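If VirtualDub isn’t convenient, an external encoder such as ffmpeg handles an image sequence just as quickly. A sketch of the call wrapped in Python; the frame pattern, frame rate, and quality settings are assumptions to adapt to your project:

```python
import subprocess

subprocess.run([
    'ffmpeg',
    '-framerate', '30',              # assumed frame rate
    '-i', 'renders/frame_%04d.png',  # assumed file naming pattern
    '-c:v', 'libx264',
    '-pix_fmt', 'yuv420p',           # widest player compatibility
    '-crf', '18',                    # near-visually-lossless quality
    'animation.mp4',
], check=True)
```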
Thank you so much for your answer. It’s really helpful and contains a lot of info, and on top of that you recommended a fast piece of software. I really appreciate it.
Use a frame-per-file format, specifically MultiLayer OpenEXR, which will allow you to resume rendering at any point. These files also capture all of the (selected) data produced by the renderer in pure numeric form, particularly things like “object ID” or “distance from camera.”
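A rough sketch of those output settings in Python, assuming the Blender 2.8+ API (the file path is a placeholder):

```python
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.filepath = '//renders/shot01_'   # placeholder path

# Enable the extra data passes mentioned above on the active view layer
vl = bpy.context.view_layer
vl.use_pass_z = True              # distance from camera
vl.use_pass_object_index = True   # object ID
```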
Break the project down. Even if the video is five minutes long, it consists of shots that can be rendered one at a time, then assembled in a video editor.
If the camera doesn’t move, the background doesn’t move either. Props in the scene don’t move. Therefore you need only one frame of this information, into which you composite the things that do move.
If “the only thing that’s moving against those non-moving objects is the light,” what’s probably moving are shadows, and you can render shadow-only, then composite these in, too. (Remember that you can use both BI and Cycles in the same project at the same time.)
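As a rough illustration of that kind of composite, here is a minimal compositor setup in Python, assuming a pre-rendered background plate; the node wiring and file path are placeholders, not the only way to build it:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Static plate rendered once (background, non-moving props)
bg = tree.nodes.new('CompositorNodeImage')
bg.image = bpy.data.images.load('//renders/background_plate.exr')  # placeholder

# The moving elements (or shadow-only pass) come in as a render layer
fg = tree.nodes.new('CompositorNodeRLayers')

# Alpha Over puts the moving render layer on top of the static plate
over = tree.nodes.new('CompositorNodeAlphaOver')
out = tree.nodes.new('CompositorNodeComposite')

tree.links.new(bg.outputs['Image'], over.inputs[1])   # bottom layer
tree.links.new(fg.outputs['Image'], over.inputs[2])   # top layer
tree.links.new(over.outputs['Image'], out.inputs['Image'])
```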
When you’ve produced the final shots in MultiLayer OpenEXR, or plain OpenEXR for the last step that no longer needs layers, then you concern yourself with “printing” video files that will actually be watched. This is the only point where you deal with things like gamma or compression, to suit the technical characteristics and limitations of each target device.
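When that “printing” step happens inside Blender, the gamma and compression choices live in the color management and FFmpeg output settings. A sketch under the assumption of an H.264 delivery file at 4K (frame rate and view transform are assumptions):

```python
import bpy

scene = bpy.context.scene
rd = scene.render

# Encode a delivery file from the finished frames or the VSE strip
rd.image_settings.file_format = 'FFMPEG'
rd.ffmpeg.format = 'MPEG4'
rd.ffmpeg.codec = 'H264'
rd.ffmpeg.constant_rate_factor = 'HIGH'   # compression vs. quality per target device
rd.resolution_x, rd.resolution_y = 3840, 2160
rd.fps = 30                               # assumed frame rate

# Display transform / gamma belongs here, not in the EXR masters
scene.view_settings.view_transform = 'Standard'   # or 'Filmic', depending on the look
scene.view_settings.gamma = 1.0
```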