Color correction in Blender Video Sequencer.

So I ended up with many 1280x720, 30 fps .mov segments from my camcorder. They import, edit, composite, take transitions and effects, and render flawlessly into a final sequence with nice stereo sound output, all within Blender 2.48’s now-mature video sequencer.

But:

I am very disappointed with the overall color hue and strength of the final edited footage.

I read something about color correction in Blender via nodes, but I am fairly new to nodes in Blender.

Is there any comprehensive guide to color correcting individual strips of video sequences in Blender?

BTW here is the sequence file with the nodes. But I am not able to get any control over the saturation of the video strips in the sequencer.

Any help, please?

Regards.

You’ve got a couple of problems there. First, your node setup is for materials; you need to be using composite nodes instead, so right now the nodes aren’t doing anything. Second, when you have “Do Sequence” checked, Blender’s output comes from the VSE, not the nodes, unless you also have “Do Composite” checked.

To color correct with nodes, set the node editor to composite by clicking on the icon that looks like a face, then add an image input node and load your video into that. You can then use other nodes to adjust the color, brightness, saturation, etc., and route them to an output node. Now, in the sequencer, add the scene to the timeline. With both “Do Composite” and “Do Sequence” clicked, you will get the output of the nodes routed through the sequencer. You will need a scene for each strip in your program.

An easier way might be to forget nodes entirely and use the color correction built into the VSE. Select a strip and go to the strip properties by clicking the little button that looks like a piece of film. On the “Filter” panel, you’ll see a button marked “Use Color Balance”. Click that. Now click the far-right color box, just above “Inv Gain”. Click the “Sample” button and then click in your preview image on something that should be white. Now click the “Inv Gain” button and your strip will be properly white balanced.
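For the curious, the math behind that white-balance trick is simple. Here is a rough Python sketch of the idea (not Blender’s actual code): sampling a pixel that should be white gives a per-channel gain of 1/sample, and multiplying every pixel by those gains neutralizes the color cast.

```python
def white_balance_gains(sampled_white):
    """Per-channel gains that map the sampled 'white' pixel to pure white.

    sampled_white: (r, g, b) in the 0.0-1.0 range, the pixel you clicked
    on that *should* be white.
    """
    # Guard against division by zero on a dead channel.
    return tuple(1.0 / max(c, 1e-6) for c in sampled_white)

def apply_gains(pixel, gains):
    # Multiply each channel by its gain, clamping to the displayable range.
    return tuple(min(c * g, 1.0) for c, g in zip(pixel, gains))

# A slightly blue-tinted "white" from tungsten-balanced footage:
gains = white_balance_gains((0.8, 0.9, 1.0))
print(apply_gains((0.8, 0.9, 1.0), gains))  # the sampled pixel becomes white, (1.0, 1.0, 1.0)
```

Every other pixel in the frame gets the same gains, which is why one good white sample corrects the whole shot.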

You can also mess around with the other parts of the image by adjusting the other two color boxes (Inv Gamma and Inv Lift). For example, if you lower the “V” value of Inv Gamma, you will essentially increase the contrast.
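If it helps to see what those controls do numerically, here is one common lift/gamma/gain formulation in Python. This is only a sketch of the general idea; Blender’s internal formula may differ in detail.

```python
def lift_gamma_gain(v, lift=0.0, gamma=1.0, gain=1.0):
    """A common lift/gamma/gain formulation (a sketch only -- Blender's
    internal math may differ).

    lift raises the blacks, gain scales the whites, and gamma bends the
    midtones without moving the endpoints.
    """
    v = lift + v * (gain - lift)      # linear part: blacks and whites
    v = min(max(v, 0.0), 1.0)         # clamp to the displayable range
    return v ** (1.0 / gamma)         # midtone bend

# Lowering gamma pushes the midtones down while 0.0 and 1.0 stay put,
# which reads as increased contrast:
print(lift_gamma_gain(0.5, gamma=0.8))  # below 0.5
print(lift_gamma_gain(1.0, gamma=0.8))  # still 1.0
```

That endpoint-preserving midtone bend is why the gamma control feels like a contrast knob.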

It’s easier to work with this control if you change one window to a luma waveform so you can see how your changes affect the output.

Nodes will definitely give you more control, but if you are new to all this, the VSE color correction is easier to use.

Hope this helps.

–Done. Flawlessly :yes:

—I can’t get the rendered material with the saturation, brightness and contrast changes applied, even with both «Do Composite» and «Do Sequence» clicked. :frowning: I know I am really missing something. See the situation here.

—I import the video files directly onto the timeline. The resulting strips contain the actual video material, which I later trim in the editor. Is that correct?

—And another question. Do you mean that for each video shot (or strip) I will need a set of, say, «input», «brightness/contrast», «hue/saturation» and «composite output» nodes?

—This help has been very useful. But I’d prefer to learn nodes, as I intend to edit high-definition material and need fine control. Where could I find some «nodes for beginners» reading material? The Blender online manual is not very explicit, as far as I could see…

Last but not least, here is the improved version of my .blend file for editing video sequences. Have a look at it; perhaps you could point out my mistakes… Remember I am using Blender 2.48a.

Best regards and thanks for helping! :slight_smile:

You’re getting closer. The problem is that there is no automatic connection between nodes and the sequencer. Instead of importing the strips directly into the VSE, you have to set up the nodes as you have done, but then choose to add a “scene” strip to the timeline. That routes the nodes through the VSE, and you should get the output you want. Unfortunately, nodes render rather slowly and it is difficult to edit using them that way, but that’s just how it works. And that’s why I prefer to color correct within the VSE itself.

And, yes, you will need a new scene with an input, correction noodle, and output for each strip that you want to correct. Again, a bit cumbersome, but just how it is.

You have a couple of choices to aid in editing this way. One, you could set up the nodes, check only “Do Composite”, and render to an intermediate file (preferably using a lossless codec like HuffYUV, or uncompressed files). Then you can load the color-corrected strips into the VSE and edit as you have been doing.

Or, you could route the nodes to the VSE as I mentioned and use Blender’s proxy function to create smaller versions of your video to edit with. To do that, set Blender to render at something other than 100% using the buttons on the main render panel. Now select a strip on the timeline and, in the strip properties panel, click “Use Proxy”. Blender will usually create a directory for you, but you may need to select a place to store the proxy. Blender will then create a series of JPEGs, one for each frame of your strip, which may take a few minutes for a long strip. Since these are already color-corrected and “rendered”, you can edit with them much more quickly. When you are ready to output, just set Blender to render at 100% again and it will automatically apply any editing that you did on the proxy to your original footage.
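As a sanity check on proxy sizes, the arithmetic is just the render-size percentage applied to each dimension:

```python
def proxy_size(width, height, percent):
    # Integer proxy dimensions at the given render-size percentage.
    return (width * percent // 100, height * percent // 100)

# A 1280x720 source rendered at 25%:
print(proxy_size(1280, 720, 25))  # (320, 180)
# ...and at 50%:
print(proxy_size(1280, 720, 50))  # (640, 360)
```

At 25%, a 720p frame becomes a 320x180 proxy, which is small enough to scrub smoothly on modest hardware.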

This is one of the neatest things about Blender’s VSE, since it allows you to edit hi-res video on just about any system; it doesn’t need much horsepower to work with the jpeg proxies once they are created. I use it to edit 720p footage on a small netbook, and it works flawlessly.

Once you get the nodes connected to the VSE so that you can make changes there and have them appear in your output, you’ve pretty much got the hang of nodes for this situation. From there you can experiment with different nodes to see what they do.

Hope this helps, and good luck.

Ok, so I managed to render a full sequence of three or four shots, perfectly color corrected, but using only offline material. (I use 320x180 JPEG image sequences which I create with the «Use Proxy» function in the strip properties, just as you said.)

At this stage, the editing, post-processing, color correcting and everything works flawlessly. The final output matches exactly what is expected. :slight_smile:

But:

But when I’m done offline and try to switch online (the original 1280x720 footage is then loaded in the input node), issues arise. :frowning:

If I load the online material and set the final render to 100%, no proxy is used, since full render resolution is selected. Then the material associated with every strip is only what is selected in the input node, and it will hence be rendered for all the strips! :eek:

So what I did is create a second node setup just below the first one, containing the same chain: an image input node, a brightness/contrast node, a hue/saturation node and a composite output node.

Here you can see how the screen looks,

http://personales.ya.com/juanjavier_xxx/blender_seq_hd.png
and here you can see the .blend file.

Hope you could help. :slight_smile:

If I’m understanding correctly, the problem is that you truly have only one scene. Adding more noodles in a single node editor window won’t work, because only one can be active at a time. You have to “Add Scene” from the dropdown at the top of the main Blender window. Each new scene added that way is pretty much self-contained and can have its own set of nodes, its own render range, and so on. So you “Add Scene”, switch to the node editor and set up your noodle, and set the frame range to match your input length. You do that for each input clip that you want to work with. Then, when you add a scene on the VSE timeline, you will have a list of all the scenes in your blend. You choose a different scene for each strip.

You can then use the proxies to speed up the editing process. You don’t actually “load” the online material again. The switch from proxy to full size version is handled automatically when you go back to 100% on the render panel.

If this doesn’t seem to make sense, let me know and I’ll post a blend file that is setup as I’m describing.

—And how do I final-render all the scenes in the correct order? I assume that scene (green) strips should be imported into the timeline, and not movie (blue) strips, right? I understand that each green scene strip in the timeline editor contains all the data for that Blender scene.

—One more question. Is there a way to keep Blender from cropping the image monitor when I select an output resolution other than 100% for offline editing? If I select 100%, the full image is displayed in the VSE output monitor, but editing becomes raaaather slow, making online editing not an option. If I select anything else, however, no scaling is performed, but cropping instead, which makes the editing and color correction tasks difficult. Note that the «use crop» button in the sequencer buttons panel is disabled. The «Use Proxy» button will render smaller JPEGs, but at the original size of the online material, and hence cropped. Hope you understand…

—I will keep you informed.

Regards

I decided to go ahead and post a blend file anyway. Just in case you decided you wanted one, I thought I’d whip one up quickly and have it ready. But as I set it up, I realized it was a lot more finicky than I had remembered, and it might be difficult to come up with from scratch, because there are several “gotchas” to watch out for.

I set this up with the default Blender scene renamed to “Master”. It’s set to 640x480 at 30 fps, and it outputs to an Xvid-encoded AVI. Nothing special there; you can change it to whatever you want. Note that “Do Sequence” is chosen, but not “Do Composite”. This scene itself contributes no footage to the project; it’s just a place to edit with the VSE.

The scenes labelled “Clip1”, “Clip2”, and “Clip3” hold your video footage. Each of those scenes has “Do Composite” checked (but not “Do Sequence”), and if you change the view to “SR:2 - Model” you’ll find the nodes (I co-opted the Model screen because I didn’t need it otherwise). My noodles are junk, just set up so that I could tell they were actually working. Insert your own color correction stuff instead, but keep the composite output hooked up. I also like the viewer window because it lets you see things change as you make alterations. Each of these scenes has the render range set to the length of my input clips (change it to match yours). Note that the framerate and output type (e.g. JPEG) don’t matter in these “clip scenes”. However, the aspect ratio and the size of the clip can be changed independently of the master scene.
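As a rough illustration of what a brightness/contrast or hue/saturation correction node computes per pixel, here is a Python sketch using the standard-library colorsys module. These are generic formulas, not Blender’s exact node math.

```python
import colorsys

def bright_contrast(v, brightness=0.0, contrast=0.0):
    """Rough per-channel brightness/contrast (not Blender's exact math).

    brightness shifts the value; contrast pivots it around mid-grey.
    """
    v = v + brightness
    v = (v - 0.5) * (1.0 + contrast) + 0.5
    return min(max(v, 0.0), 1.0)

def hue_saturation(rgb, hue_shift=0.0, sat_factor=1.0):
    """Shift hue (as a 0-1 fraction of the color wheel) and scale
    saturation via an RGB -> HSV -> RGB round trip."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + hue_shift) % 1.0
    s = min(max(s * sat_factor, 0.0), 1.0)
    return colorsys.hsv_to_rgb(h, s, v)

# Desaturate a reddish pixel by half:
print(hue_saturation((0.8, 0.4, 0.4), sat_factor=0.5))  # ≈ (0.8, 0.6, 0.6)
```

A correction noodle is essentially a chain of such per-pixel functions applied to every frame, which is also why node rendering is slow on long clips.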

Now the “gotchas”. If you want to add a fourth clip, make sure you are viewing one of the existing clips (not the master scene) and, when asked, choose “Full Copy”. It’s easier to edit the parameters of one of these scenes than it is to start from scratch. If you do make an empty scene, make sure you at least add a camera to it: Blender won’t pass through the output of a noodle unless there is a camera in the scene, even though it isn’t otherwise needed.

Next, as you edit with this set up, be very careful when changing window layouts and moving to new scenes. Blender will change the scene whenever you go to a new window layout. Double-check that you are in the right window layout AND the right scene before making any changes.

Hope this helps.

Attachments

ColorCorrect.blend (217 KB)

You’re right that the green scene strips are loaded into the VSE. Treat them just like you would the raw (blue) movie strips. Just drag them around to put them in the proper order.

I’m afraid I don’t quite understand your last question. It may be that your resolution is just too high to fit everything in the window. You can change the zoom setting on the preview window by scrolling with the middle mouse wheel when your pointer is over the window. You can also put your mouse over the preview window and hit the “Home” key to zoom out so that the entire frame is visible.

Does that solve the problem?

—This is solved now… :cool: I finally understood the internals of the VSE. No external re-encoding of offline material is necessary in order to import it into the VSE, as I used to do in the early stages…

The offline material is just generated by the VSE itself with the «Use proxy» button.

So this is finally understood.:spin:

One last question: how do I determine the length of the scene strip I insert into the timeline? It always imports a 133-frame-long strip, no matter what button I hit… :eek:

Where can I check or change the length of the scene that I import into the VSE timeline? Mind you, since 133-frame strips are always imported, I can shorten the scene strip, but not lengthen it! Perhaps you could help on this as well…

Take a look at the blend I posted. Each scene is basically independent. You set the start and end frames in the scene and it is carried through when you import it to the timeline. The “master scene” is the length of your entire program. Each “clip scene” is set to the length of your input video and that is the length that is used for the strip when you import it.

Hope that helps.

—But what parameter has to be trimmed? And where is it?

—How do you manage it? I mean… in your sequence I loaded my own online high-resolution .mov files into the nodes, and regardless of their length, the green scene strips in the timeline kept displaying exactly the same frame length. It seems that the length of the online hi-res file associated with the nodes is not relevant to the frame length displayed by the green scene strips in the timeline. I don’t know how to change this. Perhaps you could help.

This

…is my problem by now. How to change that «133»?
Regards.

The parameters that you set to control the length of each scene are the same ones you use for rendering: specifically, the “Sta” and “End” frames on the Anim tab of the render panel. In the blend I posted, those are set to 1 and 2585 in the “master scene”, and those values are the ones that control the final output. However, you also need to set Sta and End for every other scene so that it matches the length of your color-corrected clip. So, for the “Clip1” scene in my blend, the render range is set to 1 and 960, since that is the length of the clip. Clip2 is 1 and 300, and Clip3 is 1 and 1325. Just go to each clip scene, pull up the render panel, go to the Anim tab and set the Sta and End values to match your footage.
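The arithmetic behind those numbers, as a quick Python sketch: a scene strip’s length is End - Sta + 1 frames (the range is inclusive), and the three clip ranges above add up exactly to the master scene’s range.

```python
def strip_length(sta, end):
    # Frames in an imported scene strip: the render range is inclusive.
    return end - sta + 1

clips = [(1, 960), (1, 300), (1, 1325)]   # Sta/End per clip scene
total = sum(strip_length(s, e) for s, e in clips)
print(total)  # 2585, matching the master scene's 1-2585 range
```

That is no accident: the master range has to cover the clips laid end to end, so checking this sum is a quick way to catch a mis-set Sta/End.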

If you have already imported a strip into the timeline and then change the render range, you will need to reload the strip (using the Reload button on the Input tab of the strips’ property panel) in order to see the change.

When using Blender in this way, the render range is not a global value: each scene can have its own numbers.

If you can’t get it working, post a blend and I’ll take a look at it.

Good. I managed to control the strip length in every scene, as well as in the master scene and hence in the final output, thanks to your help. It was as easy as you said. Just setting the «Sta» and «End» parameters in each of the scenes did the trick, which was something I was not aware of when selecting the «Full Copy» option… :o

Now, for the audio.

Within the timeline, audio files can be imported and placed in the master sequence perfectly. Audio can then be heard in the editing environment, which was an issue in earlier releases of Blender and seems solved in 2.48.

But I have two more questions:

The first is how to display the waveform, in order to precisely edit or trim the material, which is important.

The second is how to make the audio present in the final render, since clicking «Anim» creates a silent final movie. I know there is a «Mixdown» button that renders the audio on its own. But then… how should that be reassembled into the silent video output? Mind you, recompressing the video again would be a pain, since the final quality suffers.

No audio strip is muted in my .blend file. The «Multiplex audio» button is selected in the audio tab of the render buttons. Both «.wav» and «.mp3» formats have already been tested. The final output remains silent.

Any idea on this?

Regards and thanks for all the help… :yes::yes:

Blender is not a very good audio editor. If you really need to see the waveform to edit, you’d be better off with something like Audacity. You can edit the audio there and bring it back into Blender. On the other hand, if you are talking about the video waveform, just make a new window on the sequencer layout (or co-opt one: I generally use the IPO window) and set it to be a video sequencer window. You can then change the display to the “Luma Waveform” using the menu next to the View button.

What video format are you trying to output that isn’t working with the audio?

…Hmmm… in other versions of Blender you could see the audio waveform in the audio strip of the master sequence right away… :eek: …so I supposed there would be a way.

—In this link you can see a screenshot of the blender sequencer during production stage of the film «Elephant’s Dream», a movie done entirely with Blender. The screenshot does not reveal lots of detail, but waveform can be seen in the audio (dark green) strips.

If you edit in Audacity, you could hardly sync your audio to events in the video… like something splashing into the water, or the like…

—Video waveform? I have never heard of that :eek:. Perhaps you mean video IPO curves for transitions? That’s fine; there is no problem with that. I don’t quite get you here :o

—QuickTime .mov with the H.264 codec. The video exports flawlessly, but no audio can be heard in the final output.

—Eeeek!!:eek::eek: Xvid format with xvid codec did the trick!! The output audio finally sounds!

So it seems there may be an issue with the H.264 video codec and/or .mp3 audio encoding.

Will research on this, and keep you informed.

Regards!:slight_smile:

Yes, that combination should work. Others will too, but the ones you tried previously wouldn’t: neither WAV nor MP3 is an allowed format for an audio track in a QuickTime movie. Just because Blender will let you mix and match codecs and containers doesn’t mean they will all work.

No, I don’t mean IPO curves. Video also has a standard waveform that is very useful for analyzing a picture. In fact, it has two: a waveform for brightness, and one for color. Blender lets you look at either one by emulating standard video production tools. You can see the brightness waveform by changing a VSE window to the luma waveform type. You can see the color waveform by choosing chroma vectorscope instead. You can also call up a histogram that might be useful in color correcting.
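To make the luma waveform less mysterious: luma is just a weighted sum of the color channels. Here is a small Python sketch using the Rec. 601 weights (the standard-definition video weighting; HD typically uses the slightly different Rec. 709 weights), plus a toy histogram like the one Blender can display:

```python
def luma(rgb):
    # Weighted sum of R, G, B using Rec. 601 weights (SD video).
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def histogram(values, bins=8):
    # Count 0.0-1.0 values into equal-width bins, clamping the top edge.
    counts = [0] * bins
    for v in values:
        counts[min(int(v * bins), bins - 1)] += 1
    return counts

# White, black and mid-grey land at 1.0, 0.0 and 0.5 on the waveform:
pixels = [(1.0, 1.0, 1.0), (0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
print([round(luma(p), 3) for p in pixels])  # [1.0, 0.0, 0.5]
```

The waveform display is essentially this luma value plotted per column of the image, and the histogram is the same data binned, which is why both are handy while dragging the color balance controls.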

You don’t need an external audio editor to do something as simple as adding a splash sound, and you don’t need to see the waveform either. You do need to be able to hear the track as you edit, though. Fortunately that’s easy in Blender. In the sound block buttons (just above the Mixdown button), make sure both the “Sync” and “Scrub” buttons are clicked, along with the proper frequency setting for your audio (probably 48 kHz in your case). Now click on the little speaker icon at the right of the timeline button strip.

Now you can step back and forth in the timeline and hear your audio and can edit and place sounds.

Viewing the audio waveform is automatic, provided that the audio clip is a .wav file. Just convert all your audio to .wav’s and load them into the timeline: the waveform will appear.

…Hmmm, the audio in my timeline is already a .wav file. However no waveform is displayed.:confused::eek:

—Uuups… I was importing as HD audio, and not as RAM audio. If you import as RAM audio, the waveform appears; otherwise it won’t… :o

Regards