VSE Query

Hi

Attached is a link to a screenshot of a video project I’m doing, nothing exciting, just a locked-off shot of the sea. I don’t own an NLE so I thought I’d use Blender’s VSE, but I’m a bit skeptical.

http://yellowsblog.wordpress.com/files/2009/09/screenshot.png

Any NLE cats here who would like to suggest what they can read into the attached screenshot? It shows the same footage imported into the same version of Blender (2.49a SVN) by two different methods, side by side on my desktop.

I’m on a bit of a learning curve and really want second opinions on what I think is happening.

Cheers

Just guessing, but the one on the left looks like it has more compression or colour-depth stretching, which creates the vertical bars in the histogram. Interesting. I would guess that it was still-frame JPEGs against a video stream?

3point, thanks for the feedback. On the histogram, yes, the gaps concerned me too compared to the smoother slopes of the one on the right, but I wasn’t 100% sure of the reason why.

To give a little more away, there are no JPEGs involved; the source material straight off the camera is .mts (MPEG2).

I haven’t done anything with the footage in Blender, it’s just how it’s come in.

Do you notice anything else?

Rather than play a kind of obscure guessing game, why not just state what you think is going on with your imports? I don’t read video tech displays worth a damn, and I’m even worse at reading minds, but it’s easy to see that they are different – why this is so is another matter altogether, and I guess (but shouldn’t have to) that’s the point of your post.

So, clarify what you’re getting at, please. If it’s a matter of a difference between a vid clip placed as “Movie” and one placed as “Movie + Audio” then say so, as it may provide a better basis for figuring out why the difference. If it’s some other import option, then state which.

No, I don’t think so. There’s no one forcing you to post on this thread, and no one forcing you to play an obscure guessing game.

interlaced blend mode?

Interesting, could you elaborate? I haven’t used the deinterlace option but…

The source material for the right-hand screen is a .m2v MPEG2 file from my HD vid camera, 25p in a 50i container, captured with DVgrab and imported as a movie + sound, nothing special. A YV12 colour space converted to RGB by FFmpeg on import, I guess? But I don’t know what matrix it uses in the colour space conversion (REC709? REC601? PC.709? PC.601?) or whether it remaps/scales 16-235 to 0-255.

The source material on the left-hand side is the same file but sent through AVISynth, converted to RGB via ConvertToRGB(matrix="PC.709", interlaced=false), exported from AVISynth as a PNG image sequence and imported into Blender in the usual way. PC.709 is the full computer RGB range, instead of REC709 for HDTV.
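To make the two unknowns concrete, here’s a toy numpy sketch of a single-pixel YCbCr -> RGB conversion showing where the matrix choice and the range handling come in. The coefficients are the standard Rec.709 ones; the sample values are made up, and this isn’t claiming to be what FFmpeg or AVISynth do internally:

```
import numpy as np

# Toy single-pixel YCbCr -> RGB conversion, just to show where the two
# unknowns sit: the matrix coefficients (Rec.709 vs Rec.601) and the range
# handling (studio 16-235/16-240 scaled up, or full 0-255 passed through).
# Sample values are made up; this is not what FFmpeg or AVISynth do internally.
KR, KB = 0.2126, 0.0722        # Rec.709 weights; Rec.601 would be 0.299 / 0.114
KG = 1.0 - KR - KB

def ycbcr_to_rgb(y, cb, cr, studio_range=True):
    if studio_range:           # "TV" levels: 16-235 luma, 16-240 chroma, scaled up
        y, cb, cr = (y - 16.0) / 219.0, (cb - 128.0) / 224.0, (cr - 128.0) / 224.0
    else:                      # "PC" levels: full 0-255 kept as-is
        y, cb, cr = y / 255.0, (cb - 128.0) / 255.0, (cr - 128.0) / 255.0
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return np.clip(np.array([r, g, b]) * 255.0, 0.0, 255.0)

# The same near-super-white grey pixel (Y'=240, neutral chroma), two readings:
print(ycbcr_to_rgb(240, 128, 128, studio_range=True))   # scaled: overshoots, clips at 255
print(ycbcr_to_rgb(240, 128, 128, studio_range=False))  # full range: stays at 240
```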

The reasons:

I’m reading The DV Rebel’s Guide by Stu Maschwitz and Color Correction For Digital Video by Steve Hullfish and Jamie Fowler, working through the tutorials and footage.

I noticed (I think) that in Blender’s conversion of YUV-type sources like YCbCr and YV12 (MPEG2 & DV), anything over 235 (legal broadcast white) was getting clipped off and thrown away, so it was impossible to reclaim anything in the full YUV scale up to 255: the top 10% as it’s called, or super whites. That includes some of the tutorial footage in the books.

Using Blender’s colour correction tool, the Gain brought the whites down from where they were clipped at 235, but there was nothing above; FFmpeg, I think, had already discarded everything from 235 up to 255. Dropping the clipped white below 235 with nothing above it just gave me some nasty posterized white. :frowning:

Although I always shoot with zebras at 100% (or 70% for skin tone), it’s not as simple as that; you need to be at 100% or just under. As my camera captures (as many do) the full YUV scale up to 255, I’d like the option to use it and pull the whites down to the exposure I want (within what’s available), rather than have an arbitrary clip at 235, if that is what is happening.

I can understand the 16-235 thing for broadcast-legal output, although that should be an option: not all video has to be broadcast legal, and web-based RGB doesn’t need to stop at 235. I don’t understand why a YUV->RGB conversion by REC709 (for HDTV) and a remapping to 0-255 is necessary in a creative application like Blender, where post production is going to happen. If it’s just a straight transcode of a format for broadcast, then I can understand it. Most NLEs don’t rescale to 0-255, for that creative reason.
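What I mean about holding on to the top end, with made-up 8-bit luma numbers; neither branch is claimed to be what Blender actually does, they’re just the two treatments I’m comparing:

```
import numpy as np

# Made-up 8-bit luma samples including some super whites above 235.
# Neither branch claims to be what Blender/FFmpeg actually does; they are
# just the two treatments being compared.
y = np.array([16, 180, 235, 240, 248, 255], dtype=np.float64)

# Broadcast-style: treat 16-235 as the whole range and stretch it to 0-255,
# so everything above 235 overshoots and is clipped (the top 10% is gone).
stretched = np.clip((y - 16.0) * 255.0 / 219.0, 0, 255)

# Leave-it-alone: keep the full 0-255 scale, so the super whites survive
# and can be pulled down later with a gain or levels adjustment.
kept = y.copy()

print(np.round(stretched))   # [  0. 191. 255. 255. 255. 255.]
print(kept)                  # [ 16. 180. 235. 240. 248. 255.]
```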

So that was the reason to go with AVISynth and include a Levels command when doing the YUV -> RGB conversion, before it enters Blender. I think a lousy colourspace conversion is going on, but I’m not 100% sure. Testing gradient fills between REC709 and PC.709 shows the former to be very blocky, with banding more evident in comparison.

I think FFmpeg is scaling 16-235 to 0-255, although I could be (and probably am) totally wrong. I’d like to know; put me out of my misery. :slight_smile:
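If that stretch is what’s happening, it would also explain the comb-like histogram. A quick numpy sanity check of the idea (a check of the arithmetic, not of FFmpeg itself):

```
import numpy as np

# 220 studio-range luma codes (16..235) stretched over 256 output codes:
# some output values can never be hit, which shows up as gaps (the comb).
ramp = np.arange(16, 236, dtype=np.float64)
stretched = np.round((ramp - 16.0) * 255.0 / 219.0).astype(np.uint8)

hist = np.bincount(stretched, minlength=256)
print("output codes never used:", np.count_nonzero(hist == 0))   # 36 of 256
```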

In the process of doing stuff in AVISynth, I also noticed that my older DV movies from an old vid cam, where I didn’t have a zebra to work with, were also getting clipped. Of course some clipping is going to happen, but by using AVISynth I was able to claim back super whites up to 255 and alleviate the clipping a bit.

I also noticed that the histogram in the VSE showed far smoother curves, and the vectorscope a more evenly distributed range of colour, with the AVISynth route than with the FFmpeg route, as per my attached screenshot.

It really would be easier to just go and get Sony Vegas or something :slight_smile: but I’m looking to composite video and CG, so I thought I’d persevere with Blender. :frowning:

Thanks in advance. It’s not meant to be a rant, it’s just difficult to explain; I think it’s the usual story, a little knowledge is a dangerous thing. :slight_smile:

Unfortunately these images don’t really portray the clipping issue well, but they do show the squashing of your tops. You are right though, CG imagery doesn’t often have this clipping introduced at import (unless it’s into a video editor), so it’s hard to know exactly what Blender is doing. FCP and Avid allow you to apply a broadcast-safe filter prior to (or after) alterations.

It would seem to be VERY sucky to fix everything in Blender prior to cutting with it. To be honest, these days all digicams (SX, DIGIBETA, P2 etc.) seem to come from the factory preset with hot whites at 105%, so don’t feel cheated. Many broadcasters don’t bother correcting them, and just allow the clipper in transmission to do it. Of course, with the result that whites posterise (yuck).

Yes, that explains the clipping of the values, or the dropouts in a grid fashion; the right one is better. It should be easy to test whether the strip can rescale. Use the strip colour balance feature; there are two modes now, ASC CDL and Lift/Gamma/Gain. Perhaps you can adjust Gain to bring the highlights down so they are not clipped?

Oh man, another long post. :slight_smile:

You are right, I was testing :slight_smile: re the original post, not suggesting too much and looking for evaluation before explaining too much. :slight_smile: None of the values of that vid import reach over 80% (it was a very changeable day, I’d locked my exposure with a 2-stop ND filter on and didn’t check my zebras to keep a good exposure, so it was low; the highlights on the waves should have been hot). But I do have many examples where imports do appear to have elevated luma and clip and throw data away as a result, whereas the same video via the AVISynth route does not clip in a side-by-side evaluation.

It would seem to be VERY sucky to fix everything in Blender prior to cutting with it.

Yeah, the VSE isn’t meant to be an NLE, for sure. With that in mind, the idea of importing all manner of video formats, which are generally in a delivery codec, converting them to RGB to cut them, then rendering back to the same or a similar YUV-type delivery codec is not the best workflow in my mind.

Better YUV straight through for simply cutting those types, adding sound and some CC, but for compositing CG the RGB conversion has to happen. Better to have control over it, though, rather than an arbitrary conversion. So using an external app to get an image sequence, lossless AVI or Lagarith lossless AVI is a better route anyway, I think.

Bringing in MPEG2 files via the compositor ‘Image’ node and then onto the VSE as a Scene results in the same spiky histograms, so either the nodes don’t handle vid imports well either (using FFmpeg in the same way as the VSE, maybe?) or something goes wrong as it goes onto the VSE?

In 2.5, with Matt’s colour management additions, hopefully the nodes route is sorted; I need to test, but will that stretch to the VSE?

To be honest, these days all digicams (SX, DIGIBETA, P2 etc.) seem to come from the factory preset with hot whites at 105%, so don’t feel cheated. Many broadcasters don’t bother correcting them, and just allow the clipper in transmission to do it. Of course, with the result that whites posterise (yuck).

OK :-), but this is the first stage in the compositing process with CG, rather than going out to broadcast. :slight_smile: MPEG2 is already heavily compressed without an odd colour space conversion taking place on import to RGB, if that’s what Blender is doing. So I’d like to hold on to everything there is out of the camera. :slight_smile:

PapaSmurf, thanks for the response, but I think the luma values are getting thrown away at import, so there’s nothing to pull down. Although I may be wrong about the throwing away and need to test more, I can’t really bring the clipped whites down below 235 as they will have a posterizing effect on the highlights, a common giveaway of poor video. :slight_smile:
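Roughly what I mean, with made-up numbers; this is just the shape of the problem, not what Blender’s Gain control actually computes:

```
import numpy as np

# Real highlight detail vs the same values after a hard clip at 235,
# both pulled down with a 0.9 gain. Numbers are made up.
camera  = np.array([225, 230, 235, 240, 245, 250], dtype=np.float64)
clipped = np.minimum(camera, 235.0)
gain = 0.9

print(np.round(camera * gain))    # six distinct values: detail preserved
print(np.round(clipped * gain))   # the last four are identical: posterized shelf
```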

PapaSmurf, after reading your comment about interlacing, and given that in this case I was using 25p in a 50i container, I thought I’d test another YUV-type file import into Blender.

So I made a gradient fill image (blue & yellow diagonal), 1920x1080, 1:1 PAR, in Gimp, saved it as a .png, opened it in AVISynth and did a best-possible ConvertToYV12(), exported as Lagarith (YUV not RGB) and used the HCEnc MPEG2 encoder on best quality settings to get a 100-frame MPEG2 to play with (HCEnc only accepts YV12). Phew! No doubt there is a far easier way. :slight_smile:

I then reloaded the MPEG2 into AVISynth and did a ConvertToRGB(matrix="PC.709", interlaced=false) in order to get a PNG export from the MPEG2, but using AVISynth’s YUV->RGB method. The reason: to compare an MPEG2 frame imported and converted to RGB by Blender against the PNG frame I exported from the same MPEG2 source. Still with me? :slight_smile:
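Before the results, here’s a crude numpy stand-in for what that YV12 detour does to a two-colour gradient. The real resamplers in AVISynth/FFmpeg/HCEnc are smarter than this nearest-neighbour one; it only shows where stepped edges can creep in:

```
import numpy as np

# A blue -> yellow ramp taken to YCbCr, chroma halved and brought back with
# nearest-neighbour repetition, then reconstructed. Rec.709 full-range maths,
# made-up ramp; not the actual filters used by any of the tools in this thread.
width = 64
t = np.linspace(0.0, 1.0, width)
r, g, b = t * 255.0, t * 255.0, (1.0 - t) * 255.0

kr, kb = 0.2126, 0.0722
y  = kr * r + (1 - kr - kb) * g + kb * b
cb = (b - y) / (2 * (1 - kb)) + 128.0
cr = (r - y) / (2 * (1 - kr)) + 128.0

# 2:1 horizontal chroma subsample, restored by repeating each kept sample
cb2 = np.repeat(np.round(cb[::2]), 2)[:width]
cr2 = np.repeat(np.round(cr[::2]), 2)[:width]

r2 = y + 2 * (1 - kr) * (cr2 - 128.0)
b2 = y + 2 * (1 - kb) * (cb2 - 128.0)
g2 = (y - kr * r2 - kb * b2) / (1 - kr - kb)

err = np.max(np.abs(np.stack([r - r2, g - g2, b - b2])))
print("max channel error after the round trip:", round(float(err), 2))   # non-zero: steps
```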

The results were mixed and have added a bit of confusion, he he. :slight_smile: First, the MPEG2 import showed a very spiky histogram with a very flat white base (is that luminance?), while the PNG frame I exported from AVISynth showed far smoother slopes. And looking at the image preview in Blender, it was clear to see that the MPEG2 source was very blocky at the edges of the banding in the gradient. So even though the exported PNG was derived from the MPEG2 source, including a conversion to RGB, it appeared far less blocky and better quality than the MPEG2 source file converted to RGB in Blender.

However, I wasn’t expecting the minor colour difference between the two as I switched between them in the image viewer; the PNG from the MPEG2 appeared marginally darker. Then, looking at the RGB parade, the green channel (the YUV luminance conversion?) was more flattened on the PNG, and the red and blue sat slightly lower on the scale, due to the luminance difference I guess?

After that I imported the original PNG from Gimp, and obviously that showed little or no banding. When I swapped between that and Blender’s conversion of the MPEG2 file and read the RGB parade, the R, G, B levels were in identical positions on the scales. The only difference, as expected, was that the MPEG2 showed a lot of gaps and a spiky histogram. But the import was accurate. :slight_smile:

I then downloaded a test chart which also showed 231, 253 and 255 numbered squares. I rendered out a YV12 file via HCEnc and loaded that into Blender, switched on the zebras at 100%, and 255 was shown clipping while 235 was not, again as expected for a proper import. So I’m at a loss with regard to my thinking that Blender is scaling to 0-255; the test would suggest not. :frowning:

I then had a look at the various YUV -> RGB conversion matrices that AVISynth offers and tried a lot of variations, including ‘coring’ true or false, which I think affects the luminance level conversion, but I couldn’t get a match with the way Blender had imported my MPEG2 file. In all cases the luma scope showed levels more elevated in Blender’s conversion.

I need to read some more; I’m not sure exactly what Lagarith and/or HCEnc are doing either with regard to levels differences.

I’ve had a look in Blender’s code for pointers, but no luck.

It’s painful but fun, well to me anyway. :slight_smile:

As an aside, I did find out about the x264 project at VideoLAN and enabling x264 rendering quality presets with -vpre, which doesn’t appear to be enabled in FFmpeg’s compile flags. Looking at the x264 menu in Blender’s render settings, there are a heck of a lot of options to try out. :slight_smile:

Look forward to any comments.

This seems to be the basic question you’re trying to answer. Thanks for taking the time to explain your process in such depth, it’s been an interesting read and I’ve started my own learning curve re: interpreting the tech displays for the VSE. Always good to learn new stuff.

I did a test using an image sequence rendered out of Blender that has a much less complex luma and chroma structure than your seascape, both as a way of learning to read the displays more intelligently and to check on a few of the questions you raised, mainly how the VSE may or may not affect the video data when placing various formats of image and/or movie as strips.

This still shows the test setup:


which can be viewed in full as a 6-frame Xvid .avi – 1280x968 screen captures (315kb) for stepping through the channels, with these notations:

Channel 1 in the VSE is a series of 30 frames rendered as .tga (lossless RLE) and placed as an Image Sequence. This is the purest RGB source I could imagine.

Channel 2 is an uncompressed .avi (AVI Raw option in Blender) output directly from the Channel 1 Image Sequence, a test of whether Blender introduces any significant changes in the image structures (as revealed via the displays) when creating the .avi dub.

Channel 3 is an MPEG2 dub of Channel 1 using the AVI Codec option, the codec choice being ffdshow/MPEG2/Quality100. This is the only MPEG2 encoder I have available, so its use is default, but for these tests it seems adequate.

Upper left in the screens is the example frame opened in the UV/Image Editor (.tga file) as a visual benchmark of sorts. All Channel 1-3 video versions should come as close as possible to this image.

Since the original .tga frames had a somewhat limited luma range (the whites are not full-range up to 255), I then used a Brightness/Contrast node in the Compositor to jack the white just a smidgen past blown out. No way to overdrive (“105%”) these values as this is an RGB modulation that tops at 255. The intent was to produce a luma spike that just grazes the top limit, then see if it gets clipped or squashed in any fashion when formats used in the Channel 1-3 videos are produced and then placed in the VSE as Channels 4-6.

As can be seen by stepping through Channels 4-6 in Video Test Screens-XvidQ3.avi, the luma graph shows the expected elevated spikes and plateauing of the curve representing the blown-out cyan spotlight at image right for all versions (Image Sequence, raw .avi and MPEG2), the histograms show the expected changes (gaps in the raw formats caused by the Brightness/Contrast processing, spikiness in the MPEG2), and the overall image quality has small modulations, particularly the more visible banding, in the compressed MPEG2 versions. What also seems apparent is that there is no significant alteration of the waveform amplitudes due to placing either raw or MPEG2 formats, and the luma, at least, seems to be showing a full top to bottom value range, with no modulation due to placing a particular format as a VSE strip. All modulations seem to be a result of dubbing to the MPEG2 format.

Not shown in the example Channels are test strips I made using VirtualDub and the ffdshow/MPEG2 codec to produce video examples from the original .tga Image Sequence – for both raw .avi and MPEG2 versions out of VDub, the results of bringing them into the VSE were identical.

I used 1054_Blender-2.49-OpenAlAlure-Py2.5 (BF) from graphicall.org for the above work, probably not an exact match to yellow’s SVN version but I assume the differences are not significant as regards these tests. One thing I did notice is that unlike v2.48 there is no report in the console upon placing the MPEG2 strips, so I placed one of the MPEG2 strips in 2.48’s VSE so I could capture this console readout:

Assuming that 2.49 uses a similar pipeline, this console report may be useful in clarifying the MPEG2 processing on placement. It does not mention FFmpeg specifically but rather a special converter. When placing an Xvid-compressed strip, the report is identical save for type of video (mpeg4 in that case), so it seems a common pipeline is used for at least these two compressed formats. When placing raw .avi there is no similar report so I think it can be assumed that the special converter is the YUV->RGB mechanism.

I haven’t yet learned enough to draw any detailed conclusions from these tests based on the tech displays, though the central question raised about Blender affecting the full range of values in MPEG2 source video when strips are placed does seem to be answered (nothing significant) for the kinds of sources I used.

Hope this is useful info.

Ha, then it hasn’t been a waste of time, that’s good. :slight_smile: I appreciate your thanks for my long-winded posts. :slight_smile:

I think there are two questions I’m trying to answer. The first is “Is Blender doing the best conversion it can of the HDV and DV video sources I’m inputting into the VSE, with regard to YUV -> RGB?”, and the second is “Is Blender leaving the luma levels alone, scaling them from YUV 16-235 to RGB 0-255, or scaling them oddly and losing the super whites on occasion?”

Going back to my screen grab, the luma scope shows black about equal at around 2% on both, but luma is elevated in the left-hand ‘direct Blender import’, nearly touching 80%, versus about 70% via AVISynth.

If a pro looked at it they’d resolve it in minutes, I guess, but as my screenshot shows, something is wrong, I think. AVISynth is renowned as an excellent video processing tool, and the results from it look better compared to Blender’s conversion.

The clipping is a separate issue, I think. Perhaps it’s not a problem with rescaling but with the way the luma is handled between the various YUV spaces (YUV, YCbCr, YV12, YUY2 etc.) in the conversion to RGB, and on occasion the upward shift in luma from an odd conversion pushes white values that weren’t clipping in my camera over the edge and clips them?

Perhaps AVISynth distinguishes and handles more clearly the differences between the YUV spaces when it does the conversion?

More tests, more learning and reading through your tests to help draw conclusions. :slight_smile:

BTW, I’m on Ubuntu Linux Karmic Koala; the Blender SVN build is from about a week ago, plus the patch in the patch tracker for the graticules on the luma scope / RGB parade, so I think that’s safe. VirtualDubMod and AVISynth run via Wine.

One thing I noted when reading some of the documentation on AVISynth is the statement that all the conversions from YUV<->RGB are not lossless, so in truth I would expect a difference between what Blender does with direct MPEG2 input and what Blender does with the same data processed through AVISynth. Quantifying that difference and interpreting it in terms of your particular needs and preferences is a process as much subjective as objective, I think.

If you’re looking to actually do a kind of calibration on Blender’s treatment of the MPEG2 then I think you’d have to find some sort of ultra-reliable benchmark in terms of both the image being processed (i.e., a standardized “test pattern”) and the ideal luma, chromaticity and other characteristics that should result from the YUV->RGB conversion of that image from an MPEG2 format that has characteristics similar to that your camera outputs, since not all encoders produce identical results even for the same encoding scheme/standard. And of course “ideal” will be relative to end use – compositing with CG, or for use in NLE’g projects, or some other goal?
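If it helps, a known-value patch chart is easy to generate without any special tools. Something like this (patch levels picked to match the values you’ve been discussing, size and filename arbitrary) writes a plain PPM that Gimp, AVISynth and Blender can all open:

```
import numpy as np

# Grey patches at the levels under discussion (16, 128, 235, 240, 255),
# written as a plain binary PPM. Patch size and filename are arbitrary.
levels = [16, 128, 235, 240, 255]
h, w = 120, 100 * len(levels)
img = np.zeros((h, w, 3), dtype=np.uint8)
for i, v in enumerate(levels):
    img[:, i * 100:(i + 1) * 100, :] = v

with open("levels_chart.ppm", "wb") as f:
    f.write(b"P6\n%d %d\n255\n" % (w, h))
    f.write(img.tobytes())
```

Encode that through your usual YV12 chain and the patch values give you exact numbers to check against on the waveform after each stage.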

One other aspect I’ve noted when using the VSE – it’s much more reliable in many ways when using uncompressed video regardless of original source. So it may be a more efficient pipeline in other respects to convert to RGB from a compressed format in an external app, with output to uncompressed video or RGB image sequence, than placing the compressed video directly in the VSE. Obviously this means a loss of data quality compared to the original CCD-generated RGB, but unless you have huge storage capacity and a camera that gives you a no-compression option, then I think you’ll have to live with the loss and just find the cleanest pipeline that suits your needs and budget.

yeah shoot raw from a RED1, haha

But seriously, in Avid the app asks you how you want the source treated upon import (truncated to 16-235 luma values or not). Even an attribute that you could change would be handy for interpreting.

Yes, correct, not lossless, but what I have been describing as the ‘best’ colour space conversion, and AVISynth does a very good job I think. It works in 32-bit float, I think; does FFmpeg? Maybe the spiky histograms from Blender’s conversion are to do with precision and where the conversion happens, in an 8-bit codec or a 32-bit space? I’m clutching at straws currently; I really don’t know or understand enough. :slight_smile:

A couple of issues with regard to lossiness in conversion, I believe (but I’m more than happy to be put right), are precision (rounding errors) and colour space gamut. Converting YUV into an 8-bit restricted colour space like sRGB, rather than YUV straight into a 32-bit float workspace with something like AdobeRGB or linear, is going to result in quality loss and shifting hues, due to the remapping or squeezing of colours together to fit the restricted colour space, I think.

Blender has a 32-bit internal workspace, well, for nodes and rendering anyway; whether that gets as far as the VSE I don’t know. Yes, there’s a FLOAT button, but have you tried reading the histogram after using it? What exactly does the FLOAT button do, I wonder? Is there any point converting to 32-bit mode when the colour space conversion compromised the import anyway? :slight_smile: You would hope the YUV->RGB conversion was into 32-bit float rather than 8-bit first, or maybe going straight to 32-bit is total overkill.
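Just to put a rough number on the precision part of the question; this is only the arithmetic of quantising before versus after the scale, it says nothing about where FFmpeg or Blender actually round:

```
import numpy as np

# A fine luma gradient scaled to 0-255 in float vs the same gradient rounded
# to 8 bits first and then scaled: the 8-bit intermediate collapses levels.
y = np.linspace(16.0, 235.0, 1000)
float_path = (y - 16.0) * 255.0 / 219.0
eight_bit  = (np.round(y) - 16.0) * 255.0 / 219.0

print("distinct levels, float path:", len(np.unique(np.round(float_path, 4))))  # 1000
print("distinct levels, 8-bit path:", len(np.unique(eight_bit)))                # 220
```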

My original screenshot shows a comparison between a direct MPEG2 input from an HDV camera and the same file converted to a PNG image sequence. Quality is lost in the Blender import; yes, it is subjective, but banding and blockiness in gradients are evident, along with elevated luma which has on occasion resulted in clipping, I think. So an external app is the way to go, in my opinion, for the time being.

If you’re looking to actually do a kind of calibration on Blender’s treatment of the MPEG2 then I think you’d have to find some sort of ultra-reliable benchmark in terms of both the image being processed (i.e., a standardized “test pattern”) and the ideal luma, chromaticity and other characteristics that should result from the YUV->RGB conversion of that image from an MPEG2 format that has characteristics similar to that your camera outputs, since not all encoders produce identical results even for the same encoding scheme/standard. And of course “ideal” will be relative to end use – compositing with CG, or for use in NLE’g projects, or some other goal?
Yes, chip charts are readily available, and I could shoot one in a controlled setup with my camera, clipping at 255, and see what happens on import. I have used a downloaded chip chart, encoded it to YV12 and imported it into Blender; that came in and clipped at 255 as expected, and 235 (legal white) was fine. However, I now think that my camera doesn’t output YV12, a planar type of YUV, but packed YCbCr, so I need to retest.

http://www.fourcc.org/yuv.php#Planar%20YUV%20Formats
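For my own notes, the planar versus packed difference in rough numpy terms; plane order and buffer sizes only, the real FourCC details are on that fourcc.org page:

```
import numpy as np

# Layout only: a planar YV12 frame is three separate planes (Y, then V, then U,
# chroma at quarter resolution), while packed YUY2 interleaves Y0 U0 Y1 V0 at
# 4:2:2. Tiny made-up frame size.
w, h = 8, 4
y = np.zeros((h, w), dtype=np.uint8)
v = np.zeros((h // 2, w // 2), dtype=np.uint8)
u = np.zeros((h // 2, w // 2), dtype=np.uint8)

yv12 = np.concatenate([y.ravel(), v.ravel(), u.ravel()])
print("YV12 bytes per frame:", yv12.size)      # w * h * 1.5

yuy2 = np.zeros(w * h * 2, dtype=np.uint8)     # packed 4:2:2 buffer
print("YUY2 bytes per frame:", yuy2.size)      # w * h * 2
```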

One other aspect I’ve noted when using the VSE – it’s much more reliable in many ways when using uncompressed video regardless of original source. So it may be a more efficient pipeline in other respects to convert to RGB from a compressed format in an external app, with output to uncompressed video or RGB image sequence, than placing the compressed video directly in the VSE.
Yes, this is what I’m doing currently with AVISynth; I found that too, especially with sound syncing. I’m intending to get an external sound recorder like a Zoom or JuicedLink, so sound is going to be a separate import anyway, and now that Jack / the new sound system is being implemented, hopefully things will improve.

Obviously this means a loss of data quality compared to the original CCD-generated RGB, but unless you have huge storage capacity and a camera that gives you a no-compression option, then I think you’ll have to live with the loss and just find the cleanest pipeline that suits your needs and budget.
Yes, definitely. My camera gives the option of 4:2:0 YUV to tape, or uncompressed RGB out over HDMI tethered to a 5-disk RAID, or an intermediate codec at 4:2:2; the latter two options are far better for chromakey and tracking, but I’m not there yet. I’m more than happy with HDV and its MPEG2 compression at the moment; it’s only one small aspect of a far greater set of things to learn and deal with, and I don’t want to get bogged down in the technicalities of what should be an enjoyable creative process.

Re a clean pipeline: that’s been the bottom line right from the start, and why I’m here trying to establish what is happening with the MPEG2 imports, in order to get the best possible quality from an already compromised source. :slight_smile:

thanks

I haven’t much experience of HDV, but at 25Mbit/s what kind of compression artefacts do you experience? Blockiness in the dark regions like DV, or buzzing edges around contrasty areas?

Hi, the blockiness I’ve mentioned in this thread has been with reference to the colour space conversion and then the MPEG2 encoding from HCEnc (which does a very good job), rather than HDV/DV compression directly. The test video files were generated from a gradient-fill 8-bit image converted to a YV12 video.

Using Blender’s import method, clear banding and stepped blockiness were far more evident at the gradation changes in comparison to the AVISynth colour space conversion. I’m going to test more.

I’m anticipating dealing with artifacts from HDV/DV compressed sources in AVISynth, before the colour space conversion, along with any deinterlacing, slow-motion adjustments etc. if necessary; then converting to RGB and importing into Blender as image sequences or uncompressed RGB AVI, just to cut the edit (including the compositing); and then frameserving straight out to a delivery codec via TMPGEnc 2.5, Cinema Craft SP2 or x264 (a VideoLAN project), which FFmpeg uses, I think, for its x264 encoding.

Thanks

Blender uses FFMPEG when importing mpeg2 video. So, you’re stuck with whatever you get from that. And, while you can add a bunch of custom properties to the FFMPEG library on export, I don’t think you can configure it in such a way upon import. If I cared as much about this sort of thing as you obviously do, I’d do such conversions outside of Blender and bring in an image sequence that has the color characteristics you are looking for.

Thank you for that clarification.

If I cared as much about this sort of thing as you obviously do, I’d do such conversions outside of Blender and bring in an image sequence that has the color characteristics you are looking for.
Ooh, that sounds like it’s some personal quest for a certain colour that only I want. :slight_smile: I’m commenting on and querying why MPEG2 and DV imports into Blender appear to be ‘not right’; it’s to help others not fall into the trap. It’s not a personal quest, it’s a widespread desire to hold on to the quality of the video footage.

No skin off my nose, I enjoy AVISynth. :slight_smile: I’m happy to wait until Project Mango, or whatever the BF call the VFX one, and see what the vid guys make of the whole setup then. :slight_smile:

Wanted to clarify after reading that second quote from me – I wasn’t intending that as disparaging your intentions. I was just noting that you are obviously an in-depth user, someone with more than a casual interest, and that I was not.