Video Sequencer memory leak?

I’ve been using the video sequencer to edit a pretty extensive movie that I’ve been working on, and lately the lag in Blender is just killing me. Every move I make seems to take Blender 20 seconds of thinking time, and although that may not sound like a whole lot, it adds up really fast.
So recently I decided that instead of making Blender handle the entire project at once, I would carve it up into segments and edit individual scenes. This worked better at first, but after adding only a few video files and cuts I end up with a ten-second lag.

The problem seems to get worse quickly over time, so I was wondering if there is a problem with the way Blender handles memory in the video sequencer (hopefully something fixable), or maybe I just have my settings all screwed up. Any help at all would be greatly appreciated!

Here are some pictures of my project and predicament. I accidentally muffed the second screenshot in Paint (duh), but I think you can get the idea. (I’ll attach a better one when I have the time to open up the project again.)

edit: Grrrr, I guess I screwed up both screenshots, sorry about that.


As far as I’m aware, this is a problem with any application capable of working with video, but most packages I’ve worked with have a way to deal with it. Adobe After Effects, for example, has a quality setting that lets you choose between Wireframe, Quarter, Half, or Full resolution. Blender has no such quality settings, which means that every time you move the time marker Blender imports the frame, converts it internally to 32 BPC RGBA, and renders the image. Move the time marker and hit F11 (not F12) and you’ll see that the frame has in fact been rendered. This is an extreme pain in the butt, as you’ve just found out.
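To get a feel for why that hurts, here’s a rough back-of-the-envelope calculation (the figures are illustrative, assuming a standard 720×480 DV frame; your footage may differ):

```python
# Approximate cost of one frame once it has been converted to
# 32 bits per channel float RGBA. Illustrative only.
width, height = 720, 480      # DV NTSC frame size
channels = 4                  # R, G, B, A
bytes_per_channel = 4         # 32 bits per channel

bytes_per_frame = width * height * channels * bytes_per_channel
print(bytes_per_frame / (1024 * 1024), "MB per frame")  # ~5.3 MB
# A couple of hundred scrubbed frames is already a gigabyte of RAM.
```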

Basically it’s caching the current frame into RAM. Once cached, it doesn’t have to be re-rendered if you move to another frame and then come back to it. I do very little editing with the VSE because of this and several other issues. There are some things I like about the VSE, but this one keeps me away from it for the most part. I just use other software for the job.
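In pseudocode, the behaviour is basically a memoized render call. This is just a sketch of the idea, not Blender’s actual internals:

```python
# Sketch of the caching behaviour described above (not Blender's
# real code): render a frame once, then serve repeat visits from RAM.
frame_cache = {}

def render_frame(frame_number):
    # Stand-in for the expensive import / convert-to-RGBA / render step.
    return "pixels for frame %d" % frame_number

def get_frame(frame_number):
    if frame_number not in frame_cache:
        frame_cache[frame_number] = render_frame(frame_number)  # slow path
    return frame_cache[frame_number]                            # instant on revisit

# "Dumping" the cache frees the RAM but forces every frame to
# render again the next time you visit it:
frame_cache.clear()
```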

I think you’ll probably see a vast improvement in this department with the release of 2.50.

Try working with complex scene strips in VSE and you’ll probably want to smash your computer screen as the scenes render in sequence.

> Basically it’s caching the current frame into RAM. Once cached, it doesn’t have to be re-rendered if you move to another frame and then come back to it. I do very little editing with the VSE because of this and several other issues. There are some things I like about the VSE, but this one keeps me away from it for the most part. I just use other software for the job.
I figured that it was something like that, but thanks for the explanation! Is there a way to ‘dump’ the cached frames after you’re done editing a section to improve performance?

If I had other software (besides Windows Movie Maker) I’d use it, but for now I’m stuck with blender.

:( Yeah, it IS a pain in the butt. The method you mention seems like it shouldn’t be that difficult to implement; is this planned for a future version of Blender? :idea: If you turned off all of the advanced render settings and set it to render at 25 percent, would that help things?

Downloading now :smiley:

Yeah, I’ve been there…

UPDATE: Most of you will probably have figured this out by now, but when using the sequencer, turn off everything you possibly can in the render buttons and set the render size to 25 percent. Some effects will not appear correctly in the preview, but it will make Blender MUCH faster when editing!

Also, it sometimes helps to use the refresh button if your computer is really bogged down. Keep in mind that after you do this, any frame you want to view will need to be re-rendered. However, if you have the size set to 25 percent, this (hopefully) shouldn’t be a problem.
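For anyone who prefers to script it: in the newer Python API (bpy, Blender 2.5+; the 2.4x builds discussed in this thread use the render buttons instead), the same tweaks look roughly like this:

```python
# Rough script equivalent of the tips above, using the bpy API
# (Blender 2.5+). In 2.4x you would toggle these in the render buttons.
import bpy

render = bpy.context.scene.render
render.resolution_percentage = 25   # quarter-size preview renders
render.use_compositing = False      # skip the compositor while editing
render.use_motion_blur = False      # plus anything else you can live without
```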

I used the separate unrendered scenes before and ran into the same problem because the scenes were somewhat complex. What I found easier was to add a secondary stage before trying to combine the scenes: render the individual scenes out to tmp as ‘scene1-001’ through ‘scene1-999’ and ‘scene2-001’ through ‘scene2-999’ as targas, then import each series of images back into the VSE as image sequences and do the editing from there. The VSE handles the information better since the frames are already rendered images, and it’s easier to recover from a mistake by re-importing the appropriate sequence.

For me, this is similar to the reason for rendering to an image sequence in the first place: rendering straight to an animation file can lead to problems if the computer loses power or memory, and the complete effort is wasted and unrecoverable. Rendering to images allows more flexibility, and is also a first step toward using the nodes to recombine/distort/colorize the image files before adding them to the VSE.
My .02 only. I understand where you are coming from, and I would also love the VSE to handle scenes as proxy images somehow before rendering.
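If you want to automate that intermediate render, it might look something like this in the newer Python API (bpy, 2.5+); the scene name and output path below are placeholders:

```python
# Render a scene out as a numbered Targa sequence, roughly matching
# the 'scene1-0001.tga' pattern described above. Names and paths are
# examples only.
import bpy

scene = bpy.data.scenes["scene1"]
scene.render.image_settings.file_format = 'TARGA'
scene.render.filepath = "/tmp/scene1-"   # frames land as scene1-0001.tga, ...
bpy.ops.render.render(animation=True, scene="scene1")
```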

This is pretty much what I’ve decided to do now. I started out hoping that I could do all of the editing within a single blend file, but have since realized that, without supercomputer resources, it’s not going to happen.

I was hesitant to try the ‘edit it separately then combine it’ method because:
A) I feel much more comfortable being able to view my entire project at once
and
B) I didn’t want to have to set up a new blend file with the correct settings, because finding the right settings for the original file was a long, tiring, and mostly experimental trial-and-error process.

Of course, both of these things are luxuries, not necessities, hence the switch.

Right now I’m planning on editing segments in separate files and then combining them in the original blend file that I started on. (I got a good 5-7 minutes in before it became intolerably slow.)

My (revised, and still subject to change and debate) workflow is as follows.

  1. Capture with WinDV
  2. Open the capture in VirtualDub
  3. Use VirtualDub to ‘frame stretch’ the video, changing the frame rate from 29.97 to a solid 24
  4. Use VirtualDub to save the video
  5. Use VirtualDub to save the audio as a WAV file
  6. Open the video in Blender’s compositor nodes
  7. Use the nodes to tweak the image (you can get some REALLY nice results with this)
  8. Render out to a sequence of JPEG images, in chunks if necessary, so long as you cover the entire sequence
  9. Open the image sequence and the WAV file in the Blender sequencer (see the sketch after this list), edit them, and render the images to MJPEG format
  10. Open the MJPEG and WAV files in your original project and line them up. Insert this episode at the correct place in your movie. Repeat steps 1-10 until editing is complete
  11. Find a way to get your project into a delivery codec, find a way to burn VCDs or DVDs, and you’re set!
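As a sketch of step 9 in the newer Python API (bpy, 2.5+) — paths, frame counts, and channel numbers below are placeholders:

```python
# Pull a rendered image sequence and its WAV back into the sequencer.
# All names, paths, and frame numbers here are placeholders.
import bpy

scene = bpy.context.scene
if scene.sequence_editor is None:
    scene.sequence_editor_create()

sequences = scene.sequence_editor.sequences
strip = sequences.new_image(name="scene1", filepath="/tmp/scene1-0001.jpg",
                            channel=1, frame_start=1)
# Append the remaining frames to turn the single image into a sequence.
for i in range(2, 1000):
    strip.elements.append("scene1-%04d.jpg" % i)

sequences.new_sound(name="scene1-audio", filepath="/tmp/scene1.wav",
                    channel=2, frame_start=1)
```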

Anything wrong with my methodology so far? I wonder about rendering to JPEG so many times, but with such a large project on my hands I really can’t spare the hard disk space to render everything raw.

If you can’t use targas, I would recommend PNG, since it is lossless and has a very small footprint for an image.

What are targas?

I use JPEG mostly because that’s what was described in the Blender documentation. I think I tried going to PNG (it was either that or OpenEXR or bitmap or something like that) and the file sizes were far too large.

Are PNGs the same size or bigger than bitmap or EXR? Because if so then, for me, they are pretty much out of the question. (I’m about halfway through shooting this thing and I’ve already used up about half my hard drive space with all of the intermediate files :ba:)

I’m not that concerned about the JPEG compression itself, simply because I can’t see a way around it. What I was concerned about was compressing and then re-compressing the JPEG images multiple times.

Targa (.tga) is a lossless image format.

PNGs are also lossless (unlike JPEG) but compressed. A PNG will be larger than a JPEG, but much smaller than a bitmap (BMP), which is lossless and uncompressed.

I generally like PNGs: the file size is quite reasonable and there is no information loss with successive editing.

In general, you want to minimize the number of steps in your process that involve lossy compression. Really, you only want to introduce lossy compression when rendering the final video file (after the lossless frames have been rendered to TGA or PNG).

There are probably exceptions, but it’s generally a good rule.
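If you want to see the generational loss for yourself, here’s a quick experiment (Python with the Pillow library; “frame.png” stands in for any test frame you have on disk):

```python
# Re-save the same image as JPEG several times and measure the drift;
# lossless formats round-trip identically. Assumes Pillow is installed
# and "frame.png" is an existing test frame.
import os
from PIL import Image, ImageChops

img = Image.open("frame.png").convert("RGB")

# File-size comparison: one lossless save, one lossy save.
img.save("frame_out.png")
img.save("frame_out.jpg", quality=90)
for path in ("frame_out.png", "frame_out.jpg"):
    print(path, os.path.getsize(path), "bytes")

# Ten generations of JPEG re-compression.
gen = img
for _ in range(10):
    gen.save("gen.jpg", quality=90)
    gen = Image.open("gen.jpg").convert("RGB")

diff = ImageChops.difference(img, gen)
print("max per-channel error after 10 generations:", diff.getextrema())
```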