2.49b VSE playback performance lacking

Hi,

I’m attempting to use the VSE for editing a short animation I’ve been working on, and it seems to have great difficulty keeping up with realtime playback. This is becoming an especially big problem now that I’m adding audio, as playback seems to be even slower with accompanying audio tracks.

A little info on my setup:

hardware
-Phenom X4 2.5GHz
-Nvidia GT240 1GB RAM
-8GB main memory

software
-Ubuntu 10.10 64bit
-Nvidia proprietary driver (v260.19.06)
-Blender 2.49b 64bit

project
I’m using image strips from PNG files, at 720p (1280x720)

So what’s the problem here? My setup isn’t screaming, but it should certainly be sufficient for some simple video editing! It seems like blender is just being incredibly inefficient. I have noticed that during playback on the VSE, blender is completely maxing out one of my cores; perhaps it’s bypassing hardware acceleration and doing everything in software for the video display? I can’t comment on that really since I don’t know anything about Blender’s internals, but it seems to be the case.

It would be really great to do the editing work in Blender, but this isn’t usable. Can anyone share what their experiences have been? Perhaps it’s better on Windows? Any tricks to speed this up? Am I missing something? Is it any better in 2.5? Any thoughts/suggestions appreciated. It’s looking like I need to use another NLE; kdenlive perhaps?

And yes, I should move on to 2.5, but I didn’t want to migrate my project to alpha/beta software in the thick of things:-\

Regarding sound, I’ve noticed speed variations between sound cards and drivers; for example, I seem to get better playback using an external USB Audigy sound card than the motherboard’s onboard audio.

I’m using image strips from PNG files, at 720p (1280x720)

Have you considered using / generating proxies?

So what’s the problem here? My setup isn’t screaming, but it should certainly be sufficient for some simple video editing!

You’re not editing video; these are image sequences. Video playback and editing is generally easier on resources because it’s in YUV space, and video sources may well be subsampled 4:2:0 etc. Your image sequences are RGB, i.e. 4:4:4. Are your PNGs compressed or not?

It seems like blender is just being incredibly inefficient. I have noticed that during playback on the VSE, blender is completely maxing out one of my cores; perhaps it’s bypassing hardware acceleration and doing everything in software for the video display? I can’t comment on that really since I don’t know anything about Blender’s internals, but it seems to be the case.

Blender doesn’t use HW acceleration, and certainly not for image sequences AFAIK. Even HW acceleration like VDPAU won’t help image sequence playback; it only accelerates decoding of certain compressed video formats, and it’s hardware specific.

It would be really great to do the editing work in Blender, but this isn’t usable.

Blender is usable, but I’d suggest generating proxies for whatever you can.

Can anyone share what their experiences have been? Perhaps it’s better on Windows? Any tricks to speed this up? Am I missing something?

Proxies: either JPEGs, or maybe compressed video.

Is it any better in 2.5? Any thoughts/suggestions appreciated. It’s looking like I need to use another NLE; kdenlive perhaps? And yes, I should move on to 2.5, but I didn’t want to migrate my project to alpha/beta software in the thick of things:-\

You’ll have much the same performance with kdenlive, I’d imagine; it uses FFmpeg for playback as well, I think.

In my opinion 2.49b has faster playback than 2.5, so maybe don’t move yet. 2.5 will not generate proxies either; that’s being held back until the effects stack work is done :slight_smile: Although it will read and use proxies generated by 2.49b, or any in the same folder/format structure.

Hi yellow,

I appreciate your response, thanks.

Have you considered using / generating proxies?

I’ve been using proxies in the context of linking objects, etc, but until I read your reply, I didn’t know about proxies in the VSE. I did a quick google for “blender vse proxy” and hit the wiki page here. It gets me started with the basics, but has brought up a couple more questions concerning their use:

I have 40 different shots (image strips) in the VSE. I was hoping I could select them all then hit ‘Proxy’ to turn it on for all of them at once, but no beans. Is there a quick way to enable this feature for the entire sequence instead of for each item in the sequence?

Also, I was a bit disconcerted to read on the wiki “Disable proxies before final rendering.” Shouldn’t blender automatically assume that the user is just using the proxies for editing but doesn’t want to output from the proxy data? This seems a bit odd to me.
So if I understand correctly then the workflow requires me to enable proxies on each image strip every time I need to do editing work, then disable the proxy on each when I want to output the sequence?

Also, do you know where these proxy files are generated by default?
Where do I specify to generate JPG proxies? There is a Quality setting which seems to imply that the proxies are all JPG, but I’m not sure.

Your image sequences are RGB, i.e. 4:4:4. Are your PNGs compressed or not?

The PNGs were output from Blender, which I’m assuming means they’re compressed. A typical file at 1280x720 is around 600KB, which is far too small to be uncompressed, I think. I output to PNG from the OpenEXR files I rendered specifically for performance while editing in the VSE, but I guess it wasn’t good enough.

I didn’t follow you talking about the image sequences in RGB, and what the 4:4:4 is referring to. Can you help me better understand what Blender’s internal process is doing? What is happening in this process that’s maxing out one of the CPU cores?

So all of this has got me thinking a little more generally about workflow; what is the normal workflow for this size of project in Blender? (2 minutes, 30-40 shots) Is the workflow I’m trying to do unusual? To preserve lossless images but still cut down on size I picked PNG. Is there a better choice? I would prefer not to apply any lossy compression until final video output.

Sorry about the slew of new questions, and any more info you or anyone else wants to send my way is appreciated.

I wish. :slight_smile: No mass selection that I know of without hacking Blender’s code, or maybe someone could code an addon. I did have a go with Blender’s code recently, trying to add the ‘Use Proxy’ option to 2.5’s file manager alongside the other odd selection of choices there, but couldn’t get it to work.
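For what it’s worth, a mass toggle might look something like this in the 2.5-series Python API. The attribute names here (`sequences_all`, `use_proxy`) are assumptions to check against your build, and outside Blender the script just runs on stand-in objects so the loop itself can be tried out:

```python
# Sketch of a mass "Use Proxy" toggle for every strip in the sequencer.
# The bpy attribute names (sequences_all, use_proxy) are assumptions
# from the 2.5-series API -- verify against your build. When bpy isn't
# available (i.e. outside Blender), fall back to stand-in strips.
try:
    import bpy
    strips = list(bpy.context.scene.sequence_editor.sequences_all)
except ImportError:
    class _Strip:  # stand-in for a VSE strip when bpy is unavailable
        def __init__(self, name):
            self.name = name
            self.use_proxy = False
    strips = [_Strip("shot_%02d" % i) for i in range(1, 41)]

for s in strips:
    s.use_proxy = True  # enable the proxy flag on every strip in one pass

print("proxies enabled on %d strips" % len(strips))
```

Run from Blender’s text editor it would hit every strip at once instead of clicking through 40 of them.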

I just use a python script to generate all the proxies for all the clips in a folder, including creating the folder hierarchy Blender wants, i.e. BL_Proxy etc., as if it were generated by 2.4x. It’s a lot less painful. I recently did it for 250+ clips of HD camera shots, over 50GB.

As you’re on Linux a decent bash script with something like imagemagick on the CLI might work for creating proxy images from your full size images.
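As a sketch of that kind of script (Python rather than bash, but the same idea): the BL_Proxy layout and the 25% default below are taken from this thread’s description rather than verified against 2.49b, and the clip folder in the `__main__` line is hypothetical. The resize shells out to ImageMagick’s `convert`, which must be on your PATH:

```python
# Sketch: generate jpg proxies for one image-strip folder, in the
# <clip>/BL_Proxy/<size>/ layout described in this thread (an assumption,
# not checked against 2.49b's source). Needs ImageMagick's `convert`.
import os
import subprocess

def proxy_dir(clip_dir, size=25):
    """Folder where Blender is said to look for this clip's proxies."""
    return os.path.join(clip_dir, "BL_Proxy", str(size))

def build_proxies(clip_dir, size=25):
    out = proxy_dir(clip_dir, size)
    os.makedirs(out, exist_ok=True)
    for name in sorted(os.listdir(clip_dir)):
        if not name.lower().endswith(".png"):
            continue
        src = os.path.join(clip_dir, name)
        dst = os.path.join(out, os.path.splitext(name)[0] + ".jpg")
        # scale to <size>% and write a jpg proxy next to the originals
        subprocess.check_call(
            ["convert", src, "-resize", "%d%%" % size, "-quality", "90", dst])

if __name__ == "__main__":
    build_proxies("shots/shot_01")  # hypothetical clip folder
```

Wrap the call in a loop over your 40 clip folders and the whole sequence is done in one go.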

Also, I was a bit disconcerted to read on the wiki “Disable proxies before final rendering.” Shouldn’t blender automatically assume that the user is just using the proxies for editing but doesn’t want to output from the proxy data? This seems a bit odd to me.

So if I understand correctly then the workflow requires me to enable proxies on each image strip every time I need to do editing work, then disable the proxy on each when I want to output the sequence?
That’s never right. It’s bad enough having to select each strip in turn to turn them on without having to do the same to turn them off as well. :slight_smile:

Also, do you know where these proxy files are generated by default?
Where do I specify to generate JPG proxies? There is a Quality setting which seems to imply that the proxies are all JPG, but I’m not sure.
The proxies go in a specific folder called BL_Proxy, which sits in a folder named the same as the clip it relates to. So 40 clips means 40 folders, each named after its clip, with a BL_Proxy folder inside each; within each BL_Proxy folder is a folder for the 25 or 50 (or whatever) percent size you chose to generate. They are JPEGs, and only JPEGs will be automatically picked up by Blender. You can use .avi files instead, but you’d have to manually set the file and folder locations for each strip, which is an even bigger PITA. :slight_smile:

The PNGs were output from Blender, which I’m assuming means they’re compressed. A typical file at 1280x720 is around 600KB, which is far too small to be uncompressed, I think. I output to PNG from the openEXR files I rendered specifically for performance while editing in the VSE, but I guess it wasn’t good enough.
Yes, it seems unfortunate that full-size .pngs are too much for the system setup. So a script is needed to generate .jpgs from the OpenEXR files, then.

I didn’t follow you talking about the image sequences in RGB, and what the 4:4:4 is referring to. Can you help me better understand what Blender’s internal process is doing? What is happening in this process that’s maxing out one of the CPU cores?
There are two main colour models for video: RGB, and all those others with a ‘Y’. :slight_smile: They’re often incorrectly named ‘YUV’; YCbCr is the digital video colour model, and it’s what comes off a consumer video camera. So when you mentioned video editing: although there is a selection of RGB video codecs, the majority are YCbCr. Video is generally subsampled to 4:2:0 or 4:2:2, and as such generally holds less information per frame at playback than the same material from an RGB source.

However, video which is said to be 4:4:4 is generally pretty much the same as RGB. Subsampled video, say 4:2:0, will generally play back more easily and use fewer resources than the same material as RGB, depending on the codec.
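To put rough numbers on the difference, here’s a quick back-of-the-envelope calculation for a single uncompressed 8-bit frame at the 1280x720 size used in this project:

```python
# Bytes per uncompressed 1280x720 8-bit frame under different samplings.
w, h = 1280, 720
rgb_444 = w * h * 3                        # full chroma: 3 bytes per pixel
yuv_420 = w * h + 2 * (w // 2) * (h // 2)  # Y full res, Cb/Cr quarter res
print(rgb_444, yuv_420)  # 2764800 1382400 -- 4:2:0 is half the data
```

So before any codec even gets involved, a 4:2:0 frame carries half the raw data of an RGB/4:4:4 one, which is part of why camera footage scrubs more easily than full-quality image sequences.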

So all of this has got me thinking a little more generally about workflow; what is the normal workflow for this size of project in Blender? (2 minutes, 30-40 shots) Is the workflow I’m trying to do unusual? To preserve lossless images but still cut down on size I picked PNG. Is there a better choice? I would prefer not to apply any lossy compression until final video output.
No, I think what you are doing is absolutely the right way to go. Proxies, I think, are required until Blender gets more efficient. There is talk of ‘optical flow’ and baked effects coming to Blender, and playback speed is certainly a current issue; hopefully improvements are on the way that will negate the need for proxies.

Sorry about the slew of new questions, and any more info you or anyone else wants to send my way is appreciated.
No problem.

I wish. :slight_smile: No mass selection that i know of without hacking blenders code or maybe someone code an addon.
Sigh, that’s a shame. It really seems to me that this ‘proxy’ functionality should be almost transparent to the user; maybe we could just turn it on and off with a checkbox, and the rendered output would always come from the actual source, not the proxies. Ideally the performance would be good enough that a proxy isn’t needed, though (hoping for 2.5…):yes: A script to generate the files is a good idea.

That’s never right. It’s bad enough having to select each strip in turn to turn them on without having to do the same to turn them off as well. :slight_smile:
Yeah, that just seems like craziness to me that the output would be derived from the proxy images unless the proxies are turned off…

They are jpg and only jpg will be automatically picked up by Blender.
Good to know. Helpful info for writing a script.

Yes it seems unfortunate that full size .png’s are too much for the system setup. So a script needed to generate .jpg’s from openexr then.
Indeed unfortunate. Have you had any luck working with image strips of 720p or larger PNG files in VSE? If yes, what hardware/software setup are you using?

Thanks for the quick explanation about the color models; it made sense.

Proxies I think are required, until blender gets more efficient.
I think that’s the meat of all this for me. Too bad since they seem to be quite a hassle. Well, I’ll work with what I’ve got in 2.49b for the time being and hope for something better in 2.5:)

Your response has been quite informative. As we say here in Sweden, giant thanks!

just thought of a question relating to this:

I noticed that nodes and the VSE both seem to be single-threaded in 2.49b. I’ve heard that 2.5 will be more multi-core friendly (like continuing to work in the UI while rendering, etc.), but does anyone know if node processing or VSE processing will be truly multi-threaded? It sure would be great to see all my cores being utilized during node compositing, or when I’m working in the VSE.

Given the time needed to make and unmake proxies for a long series of sequences, I always found it simpler just to render out the edit at a 100% or scaled size, depending on your target output dimensions. The VSE writes fairly quickly, particularly if you minimize the 2.49b render window (same for the Compositor, btw). If you write using FFMPEG and your favorite brew of codecs, you can get relatively fast and 100% playback-accurate previews.

Because FFMPEG tends to write larger files than external muxing apps, I usually just use the VSE for reviewing the edits with sound, often in small chunks of frames or a short series of sequences. Once it all looks/sounds good, I then write out a new final image sequence from the VSE and mux it externally.

Haven’t yet used 2.54’s Compositor or VSE, though, not sure how it compares to 2.49b.

Generally speaking you don’t want any compression until you do the final version, in the pipeline you want to have uncompressed stuff so you don’t lose quality.

Generally, yes, but .png compression is lossless, and so are Lagarith and HuffYUV, so it’s more accurate to say no lossy compression. :slight_smile:

I don’t think it’s particularly efficient on multicores. I’m currently using a dual quad-core Xeon, 8GB RAM and a CUDA-capable video card (not my own machine, unfortunately :frowning: ), and nodes are still painfully slow on the whole. :slight_smile:

Given the time needed to make and unmake proxies for a long series of sequences, I always found it simpler just to render out the edit at a 100% or scaled size
I’ll keep that in mind. It’s always good to have alternate approaches w/ different strengths and weaknesses.

As a test, I enabled “Use Proxy” on one of my strips, then hit “Rebuild proxy”, but it doesn’t seem to be working for the following reasons:

-performance hasn’t improved
-image preview window is still showing the full res image
-I output a couple of frames from the proxied strip, but they are from the full-res image, not the proxy. The wiki indicates that proxies need to be turned off to output from the original image rather than the proxy, so getting a full-res image would indicate that the proxy isn’t working.

What am I missing here for the proxy to work?

EDIT completely off-topic, but when quoting someone, how does one insert a link to the person’s post that you’re quoting from?

Are you setting your VSE Preview window to the same percentage size as the proxies?

When you rebuild proxies do you see a little counter on screen?

OT. When replying, use the ‘Quote’ button instead of ‘Reply’

That was it, thanks. This thread here was talking about changing the % size in the Render panel, which had me looking in the wrong place and was confusing me.
Using proxies isn’t as much a hassle as I would have thought. It looks like I can leave all the proxies ‘on’ and still render out full res, so once I create and enable the proxies, I can select the appropriate viewing size for the VSE, but that doesn’t seem to affect the output at all, contrary to the wiki saying it would. Maybe they made improvements since the feature was introduced? At any rate, it’s quick and easy and the performance is fine now, and from a functional point of view, working in the VSE preview window with proxies doesn’t affect rendered output, sweet. If the % render size in the render panel was originally affecting the VSE preview window, that’s good they changed it, as it’s not intuitive.

One issue I’m still seeing with performance: scrubbing with audio scrubbing enabled is simply horrid. This is with WAV files at normal sample and bit rates. Whether I choose Audio RAM or HD, it just isn’t usable. I think I remember hearing about this a while back. Oh well.

Thanks everybody for all the help, especially yellow!

You could give 2.5 a try, set to “frame dropping” playback mode in the timeline header.

Edit :
Default is “no sync”.