Speed Up Encoding Rendering?

I’m using the video editor/sequencer in Blender. Is there a way to increase the speed at which it encodes? It’s seriously not fast enough; really slow, imo. It’s currently rendering roughly 400 frames per second without effects or any transitions applied. Is there a special button I need to turn on… or a special Blender build I can use to speed up the rendering/encoding part?

The codec I’m encoding with is H.264 lossless, at 1920 × 1080 resolution at 100%, and at 24 fps.

The video consists of one image shown for 15 minutes, or 21,600 frames, for testing purposes (the real experiment begins when I try encoding at 1 hr length).

No sound.

Does speed depend on computer specs? If so…

My computer specs are:

Windows Vista Home Premium 64-bit
Intel Core 2 Duo, 2.4 GHz
8 GB DDR2 RAM
250 GB 5400 RPM hard drive
NVIDIA GeForce 9600 GT, 512 MB

400 fps rendering is slow? Compared to what?

1080p is a good chunk of image per frame to be processing, even if there’s no delta in the footage (which lossless encoding disregards anyway). Expect a seriously slower response if encoding actual footage with compression, and sound will slow it even more.

So 400 frames per minute isn’t slow for encoding in Blender?

Your first post stated 400 frames per second – quite a difference! And 400 frames per minute ain’t slouchin’ either when it comes to writing video frames at 1080p. That’s better than 6 fps. By comparison, my current animation is writing out at about 3 or 4 fps max, and generally slower, at 720p. It has a few effects strips and an audio track, and I’m writing it to MPEG/H.264 with MP3 audio at medium quality/compression settings, on an AMD dual core @ 3 GHz but only 3 GB RAM. In my experience, writing video is never as fast as reading/playback, especially with compression encoding of both video and audio.
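For anyone wanting to sanity-check the arithmetic, here’s a minimal sketch (the 400 frames/minute rate and the 21,600-frame test clip are taken from the posts above; everything else is just unit conversion):

```python
# Convert an encode rate given in frames per minute to fps, and
# project the wall-clock time needed to encode a clip of known length.

def fpm_to_fps(frames_per_minute):
    """Frames per minute -> frames per second."""
    return frames_per_minute / 60.0

def encode_time_minutes(total_frames, frames_per_minute):
    """Wall-clock minutes needed to encode total_frames."""
    return total_frames / frames_per_minute

fps = fpm_to_fps(400)                         # ~6.7 fps, i.e. "better than 6 fps"
test_clip = encode_time_minutes(21_600, 400)  # the 15-minute test clip -> 54 minutes
```

So at the reported rate, the 15-minute/21,600-frame test would take roughly 54 minutes of encoding, and a 1-hour version about four times that.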

Wow. I thought 400 frames per minute could be faster in Blender, but given your response, I should be grateful. I wonder if I can still get a decent encoding speed if I were to add effects strips such as transforms and the like. Usually I edit certain sections/clips beforehand, adding all the necessary effects, before bringing it all together in the final sequence, where all I need to do is keyframe the opacity for fade-ins and fade-outs, but that’s about it.

I wonder whether 400 frames per minute is faster or slower compared to Adobe Premiere, Final Cut, or other heavily used industry video editors… hmmm…

Codecs make you pay for real-time playback during the compression phase. That is, processor cycles are put in at the encoding stage so that your PC doesn’t have to work so hard during playback.

Rendering to a highly compressed delivery codec at full HD @ 400 fps is slow?! I’ll be damned!
In all fairness, this has just made my day! :slight_smile:

Many people (myself included) would give an arm (and possibly a leg?) for such rendering speeds. Even the corrected value of 400 frames per minute (~6 fps) is just out of this world for most of us really.
To give an example, my main rendering machine, a quad-core (2.66 GHz, 4 GB RAM) 64-bit Linux workstation running an optimally compiled Blender build, spits out about 1 frame per second (!) when rendering from the VSE at full HD. If many effects are involved (which is usually the case), it can be even slower than that.

To “speed up” render times you could output to completely uncompressed stills (e.g. PNGs with 0% compression) or to video using a lossless codec (e.g. HuffYUV). The drawback is huge file sizes at full HD (e.g. a PNG still could easily be 6 MB or more, amounting to approximately 9 GB per minute; it can be more for HuffYUV).
Then use the VSE or any other encoding app you like to encode the video to a delivery codec, targeting specific devices such as PCs, cell phones, TVs, etc. The general idea is to render to an intermediate “codec” and then encode to a delivery codec.
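As a rough sketch of the disk-space cost of that intermediate step, using the ~6 MB-per-still figure mentioned above (assumed constant across frames, which real footage won’t be):

```python
# Rough storage estimate for an image-sequence intermediate at full HD,
# assuming ~6 MB per still and 24 fps as quoted in the thread.

def intermediate_size_gb(minutes, fps=24, mb_per_frame=6.0):
    """Approximate disk space in GB for an image-sequence intermediate."""
    frames = minutes * 60 * fps
    return frames * mb_per_frame / 1024.0

per_minute = intermediate_size_gb(1)    # ~8.4 GB per minute of footage
fifteen_min = intermediate_size_gb(15)  # ~126.6 GB for the 15-minute test clip
```

That is where the “approximately 9 GB per minute” figure comes from, and why the intermediate-then-delivery workflow trades disk space for render speed.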

I’m discovering this not to be the case, against my own “common sense” about it. I’m currently writing two versions of my showreel (3,248 frames @ 720p with audio tracks): one to a PNG image sequence for post-processing and encoding outside of Blender, and one to MPEG/H.264 with MP3 audio as mentioned above (bitrate max 4,000 kbps, audio 192 kbps). Surprisingly, the PNGs write out at about 1/2 to 3/4 second per frame, or roughly 2 fps or less. The MPEG version runs from 0.03 s minimum to 0.25 s maximum per frame, about 10× faster on rough average. Go figure.

But PNGs would have a file-write stage, whereas MPEG is cached while constructing.

That makes sense, though the ratio between the average writing speeds still surprises me. I’m now writing an uncompressed intermediate for use in HandBrake, and VirtualDub is hitting a pretty constant 4.75 fps dumping it all into the .avi container with the PCM audio – it ends up about 8.6 GB for ~1.75 minutes of footage. HandBrake, which I use for encoding H.264 into the .mp4 container, runs at about 15 fps but does two passes on the source, with pass one set to Turbo, so overall it yields about 8 fps for the entire task.
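A quick sketch of why a two-pass job ends up slower than the per-pass rate suggests: both passes walk the whole source, so the per-frame times add up (the 30 fps turbo-pass speed below is an assumed figure for illustration, not one measured from the posts):

```python
# Effective throughput of a multi-pass encode: each pass processes
# every frame once, so per-frame times accumulate across passes.

def effective_fps(*pass_fps):
    """Combined fps when each pass touches every frame exactly once."""
    total_time_per_frame = sum(1.0 / speed for speed in pass_fps)
    return 1.0 / total_time_per_frame

# A fast "turbo" analysis pass followed by a full-quality second pass:
combined = effective_fps(30.0, 15.0)  # -> 10.0 fps overall
```

With a turbo first pass roughly twice as fast as the second, the overall rate lands well below the second pass’s 15 fps, which is consistent with the ~8 fps observed for the whole task.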

So it seems based on these observations using the same sources that Blender’s right there with other utilities in terms of encoding speed.

I’m glad to hear that, as I often get the feeling that my renders take ages. I guess I have never really benchmarked it; I’m used to real-time playout. :wink: In fact, at my work it is often quicker to transcode by playing from one edit suite to another, leveraging the real-time power of the graphics cards!

Interesting! Silly question, but are you absolutely positive about the compression percentage for the PNGs?
PNG may be a lossless image format, but note that it is a compressed one, so it takes some time to compute. I wonder what the speed difference would be if you rendered to BMPs or some other raw format that involves no compression.

As for the MPEG/H.264 encoding, it’s again interesting but not really a fair comparison. When rendering an image sequence, Blender will render each and every frame (by definition). With an MPEG stream, however, it will not store, say, 10 successive frames in full if they are identical. It will use the first as a reference for the rest and encode only the differences if/when there are any. That’s the idea behind I, P and B frames.
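As a toy illustration of that reference-plus-differences idea (this is not real H.264, which works on motion-compensated per-block differences; it just shows why a run of identical frames is nearly free to encode):

```python
# Toy keyframe-plus-deltas encoder: store a full frame only when the
# content changes; otherwise record a cheap "repeat" marker.

def encode(frames):
    """frames: list of hashable frame contents -> list of (kind, payload)."""
    stream = []
    prev = object()  # sentinel that never equals any real frame
    for f in frames:
        if f == prev:
            stream.append(("repeat", None))  # identical: no image data stored
        else:
            stream.append(("full", f))       # changed: store the whole frame
        prev = f
    return stream

encoded = encode(["A", "A", "A", "B", "B"])
# -> one full "A", two repeats, one full "B", one repeat
```

Five identical-ish frames collapse to two stored images plus three tiny markers, which is why a static image sequence encodes so much faster as a stream than as individual stills.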

If you really want a more reasonable comparison, you could try MPEG/H.264 against HuffYUV. Again, note that HuffYUV is lossless but not raw RGB. An even better comparison would be MPEG/H.264 vs. AVI RAW.

I don’t really think that file I/O is the main reason for the difference reported. Sure, file reads/writes take time, but that’s in the area of milliseconds.

Chip, note that 2-pass encoding (which is recommended, btw) takes a lot more time overall but is more accurate, because it actually analyzes the frame sequences in the first pass, so there is a lot less guesswork involved. That’s why it’s more efficient and can encode at high bitrates effectively. Unfortunately, we don’t have 2-pass encoding in Blender.
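To sketch what that first-pass analysis buys you: with per-frame complexity known up front, a fixed bit budget can be split in proportion to complexity instead of evenly (the complexity numbers here are made up purely for illustration):

```python
# Toy two-pass rate control: pass one measures per-frame complexity,
# pass two spends a fixed bit budget in proportion to those measurements.

def allocate_bits(complexities, total_bits):
    """Give each frame a share of total_bits proportional to its complexity."""
    total = sum(complexities)
    return [total_bits * c / total for c in complexities]

# Two static frames followed by two busy ones, sharing a 100,000-bit budget:
budget = allocate_bits([1.0, 1.0, 4.0, 4.0], 100_000)
# -> the busy frames each get 4x the bits of the static ones
```

A single-pass encoder has to guess each frame’s share as it goes; with the whole sequence analyzed in advance, the budget lands where the picture actually needs it.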

So it seems based on these observations using the same sources that Blender’s right there with other utilities in terms of encoding speed.
I agree. If you really think about it, it makes sense, as most of the apps you mention (except VDub) use FFmpeg as a back end! :slight_smile: