Problems with MTS (AVCHD) files - video is twice as long as it should be

Hello

Not really sure this is a Blender issue, but anyway: Blender 2.69, Win7 x64.

I have an MTS file recorded with a Canon. I recorded at 25p (although the documentation states that 25p is internally treated as 50i). When I look at the file, Windows Explorer's status bar tells me it has a frame rate of 25; avidemux tells me it has 50 fps.

When I import the file into Blender's VSE, the video is twice as long as the audio (with the output set to 25 fps). I can add a speed control and speed the strip up, but the result is very jerky and not good.

The only somewhat useful info I found was this:
http://ffmpeg-users.933282.n4.nabble.com/Duplicates-frames-when-converting-interlaced-h264-to-png-frames-and-double-length-when-converting-in4-td4084820.html

and some stuff regarding timecode, http://wiki.blender.org/index.php/User:Nazg-gul/ProxyAndTimecode (which I don't think helps in this case).

I also tried playing around with mencoder and mplayer but did not succeed (I have no experience with these tools).

Does anyone have experience with handling this problem? My goal is to get a good-quality video out of the MTS file in Blender.

Best regards
blackno66

A .mts file demonstrating this can be found here: 00083.MTS

What frame rate is set in the Render settings? Try doubling the frame rate.

Steve S

Steve S: well, the output is 25 fps, which is what I want; it works if I set it to 50 fps, but that's not what I want.

I uploaded a .mts file which hopefully clarifies my problems:
00083.MTS

I could have written this. I had exactly the same issue. I noticed that at 50 fps every second frame seemed to be a duplicate, so I set the frame rate to 25 and sped up the footage. But I lost frames, which was weird.

I have read that the AVCHD format should be interpreted together with its associated timecode files from the original camera folder structure, but Blender doesn't import it that way.

ffmpeg to the rescue:


ffmpeg -i inputfile -vcodec libx264 -qp 0 -r 25 -s hd1080  outputfile.mp4

It will convert your 50 fps file to 25 fps losslessly (qp set to 0).

The only drawback is that the file sizes will skyrocket. For instance, your file size grew by a factor of ~11 (52 MB from 4.6 MB).
You can then import it into the sequencer and edit as you please; see the screenshot:


Needless to say, all this is done on Linux. If you're on windoze, then I'm sure you could use some ffmpeg frontend such as WinFF to do it -- I just don't really know how.

I also remember freaking out upon realizing the frame rate mismatch in the AVCHD files I received a while back (a friend's wedding I'd promised to edit). I didn't find it very convenient to preprocess the files before editing, but it was easily doable.
Why the nominal value (25 fps) is different from the actual value (50 fps) is really beyond me. I wonder how e.g. Final Cut or Premiere deal with such crap.

A snip from a post I made some time ago:

"Importing mts (which is AVCHD) at this frame rate is problematic with Blender. AVCHD does not support 30p (or 25p). Many cameras (mine) from Panasonic, Canon use what’s called PsF (progressive segmented frame) which folds 30p footage into 60i format. Your program needs to be able to interpret this properly. Blender does not. I have not been able to figure a work around and would love it if the developers could make the correction for that. Premiere and a few other NLEs (I am using Lightworks) can see it properly.

PS: I am using Vegas Pro now and it supports MTS properly as well.

Regarding the use of external encoders like ffmpeg: if you use lossless encoding, your file size will increase many times over. A simple rewrap to a different container (.mp4 or .mov) will accomplish what you want without any quality loss and without increasing the file size (except for a few kB in the header). There are two free GUI programs you can use if you don't want to run ffmpeg (or ffmbc) from the command line: AWPro and EyeFrame. Both are good user-written wrappers for ffmbc.
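
For reference, with plain ffmpeg a bare rewrap should look roughly like this (a sketch only; adjust the file names, and I have not tried it on the OP's particular clip):

ffmpeg -i input.MTS -vcodec copy -acodec copy output.mp4

Both streams are just stream-copied, so nothing is re-encoded and the conversion finishes almost instantly.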

Just a note to the developers: please add correct interpretation of progressive segmented frame footage, as it is common in DSLR footage. Most NLEs properly interpret this footage today.

Blendercomp…

THANK YOU for my Christmas present. I shall never part from my WinFF portable! It has succeeded where many have failed.

ffmpeg and ffmbc both work from the command line or a .bat file in Windows.

@3pointedit: I highly recommend you try doing a rewrap rather than the settings suggested by blendercomp. Your files will be 1/10th the size! The program "EyeFrame Converter" will do this and has presets for rewrapping (as well as other nice recoding utilities).

You will not lose one bit of quality in the process; only the container is changed! The problem lies with Blender, not with the files.

Hmm, OK. Is it just an x264 codec in disguise?

I thought I recognised that app; sadly I don't have admin rights at work. And we are too stingy to buy Adobe Media Encoder.

If you don't have admin rights, use ffmbc (a broadcast-oriented fork of ffmpeg) with the following command line:

ffmbc.exe -i "<SourceFileName>" -vcodec copy -acodec copy -threads 8 -y "<OutputFileName>.mp4"

The key items here are the "copy" options for both the audio and video codecs. You can use ".mov" instead of ".mp4" if you prefer a QuickTime container. This leaves the encoding done by your source (H.264) intact and basically just changes the container, i.e. the header on the file. Because QuickTime and MP4 support 25p (and 30p), Blender will recognize the footage as progressive.
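
For example, the QuickTime variant would just be the same command with a .mov extension:

ffmbc.exe -i "<SourceFileName>" -vcodec copy -acodec copy -threads 8 -y "<OutputFileName>.mov"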

Both the programs I recommended are just GUI shells that execute ffmbc.

@blendercomp: this works, but (at least on my PC) it takes quite some time (actually: aeons) :wink:

@dancerchris: the rewrapping seems to go fine (I used the command line). However, in Blender I experience problems with the new files. Blender 2.69 (release) will freeze at some (seemingly random) point: when skipping through the frames, when rendering out to H.264, or even after it has rendered out the movie. The same applies to a recent build from graphicall.org (date: 2013-12-16 20:26, hash: d963412; by the way, since when are there no release numbers anymore?).

Does anyone else experience this?

Addendum: I used
ffmbc -i "<input file>" -r 25 -acodec copy -sameq -threads 8 "<output file>.mov"

and this produces files which Blender will accept without crashing. However, conversion takes some time (it runs at around 50-60 fps on my PC).

Good to hear that, I’m happy for you.

Regarding the use of external encoders like ffmpeg: if you use lossless encoding, your file size will increase many times over.

That's not necessarily the case; you can be overly paranoid and set qp to 0 for a purely lossless conversion, or you can go for a lossy one, in which case there are trade-offs involved.
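
For instance (untested on this particular clip), a lossy but visually close re-encode could use x264's CRF mode instead of qp 0; higher CRF values mean smaller files and lower quality:

ffmpeg -i inputfile -vcodec libx264 -crf 18 -r 25 -s hd1080 outputfile.mp4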

A simple rewrap to a different container (.mp4 or .mov) will accomplish what you want without any quality loss and without increasing the file size (except for a few kB in the header).

Uh, pardon me for asking, but have you actually experimented with the file the OP provided?
That's the first thing I tried and it failed. The container format changed but the frame rate remained unchanged (i.e. 50 fps).
The ffmpeg documentation sez otherwise, but no matter what I tried it failed.

Just a note to the developers: please add correct interpretation of progressive segmented frame footage, as it is common in DSLR footage.

ffmpeg is used for reading and writing video files, so I don't see how this concerns the Blender developers.

And by the way, a nominal 25 fps value which is actually 50 fps does not sound like "common DSLR footage" to me. For example, my 550D records 25 fps if I set it to 25 fps, not 50.

Most NLEs properly interpret this footage today.

great :slight_smile:

PS: pardon my sarcastic tone, I've had a very bad day… :frowning:

anytime man! :slight_smile:

Does this actually work? Because no fps value is passed as an argument here. It will simply rewrap the original video into an MP4 container, but it will not change the frame rate.

I've done tons of rewrapping in the past, but whenever a different frame rate was needed for the output file, the fps value was also required. In fact, that's exactly what the ffmpeg docs say.

This is to be expected, but if you have a quad core it should go pretty fast, as you can run 3 or even 4 program instances in parallel (that's what I usually do on Linux with the ffmpeg CLI).
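
A rough sketch of what that looks like (assumes GNU xargs; -nostdin stops ffmpeg from swallowing the file list, the outputs end up named clip.MTS.mp4 and so on, and you should swap in whatever encoding options you settled on):

find . -name '*.MTS' -print0 | xargs -0 -P 4 -I {} ffmpeg -nostdin -i {} -vcodec libx264 -qp 0 -r 25 {}.mp4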

@dancerchris: the rewrapping seems to go fine (I used the command line). However, in Blender I experience problems with the new files. Blender 2.69 (release) will freeze at some (seemingly random) point: when skipping through the frames, when rendering out to H.264, or even after it has rendered out the movie. The same applies to a recent build from graphicall.org (date: 2013-12-16 20:26, hash: d963412; by the way, since when are there no release numbers anymore?).

Does anyone else experience this?

Addendum: I used
ffmbc -i "<input file>" -r 25 -acodec copy -sameq -threads 8 "<output file>.mov"

and this produces files which Blender will accept without crashing. However, conversion takes some time (it runs at around 50-60 fps on my PC).

I used last night's git master to test your file and it imported without problems.
Note that the "-threads 8" flag will only help if your system actually has that many hardware threads (e.g. a quad core with hyper-threading).
Also note that you don't need to bother copying the audio (-acodec copy). You can get the audio from the original files instead: either import only the audio, or import the original movie file and delete its video strip.
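
In other words, something like this should be enough (hypothetical file names; -an drops the audio entirely, and I believe it works in ffmbc just as it does in ffmpeg):

ffmbc -i input.MTS -vcodec copy -an -threads 8 -y output.mp4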

Changing only the container will be lightning fast; if any re-encoding is required, it will be slower.

@blendercomp: Yes, your tone was very sarcastic; sorry you had a bad day. No, I did not try it on the OP's files. I am just recommending a process that I have used many times in the past for 30p AVCHD footage in a 60i .mts container; it is the same with 25p in a 50i container.

It is ill-advised to use compression for an intermediate other than a low-loss codec like DNxHD or ProRes (both will be significantly larger than the original). While you can reduce the size by setting the codec to a lossy one, you do lose image quality, because the H.264 codec is very lossy to begin with. By simply "rewrapping" into a .mov or .mp4 container there is no loss: you have the exact same video data, you have just moved it out of the AVCHD format specification.

I have gone through several different NLEs and they all handle the AVCHD format differently. Being aware of the video codecs and containers is critical to getting good-quality renders. My comments directed at the developers were about interpreting the AVCHD format that uses the PsF technique correctly; they had nothing to do with ffmpeg.

And finally, just FYI: all cameras that use the AVCHD format scheme will put 30p and 25p footage into PsF format, because there is no standard in AVCHD for those frame rates (though I believe there is one for 24p). The original footage is 25p, not 50i and not 50 frames per second. It is just the way Blender interprets the header on the .mts file, which is wrong. Use a utility like MediaInfo and it will tell you that the OP's file is 25p.

Since people are having trouble with the ffmbc conversion, I did it myself, to both .mp4 and .mov. It comes out as 25p footage in a .mp4 container. I played it in VLC media player and it works fine. I checked the specs with MediaInfo and it checks out as 25p media. I imported the file into Blender and the video had ~60 frames, about 2.4 seconds of video (rounding) at 25p, and it plays fine.

It has been a long while since I have used the NLE in Blender, but I did have a mismatch with the audio (which, as I recall, comes down to some setting I have missed). The original source footage also has a mismatch between audio and video length according to MediaInfo, so I am not going to spend the time looking up how to resolve this. Again, the issue with using these files in Blender is with Blender, not with the file types. Other software recognizes that it is 25p footage.

The AVCHD footage I have claims to be 50i. I see 50 frames and they are all interlaced but every second frame is duplicated.

So yes the header is incorrect. Should it be 25i?

Anyway, persisting with Blender's VSE (ffmpeg was cool, though), I can scrub shots and trim the amount of material for processing.

So…

I applied a speed effect to multiply speed by 2. This has to be the only time that Blender’s dumb speed effect is useful (dropping frames instead of blending them).

But I really need to retain the interlacing at a smaller scale. My goal is to produce some SD PAL interlaced frames. Sadly I can't: whether in Blender or other apps, the moment you scale a frame the fields are merged, and I don't know how to comb out the individual fields for separate processing. I can't do it in the nodes either.

How are you determining that it is 50i? If it is from Blender, that can be erroneous. Have you tried MediaInfo (freeware)? It gives an accurate picture of what your file's info is. If it is 25-frame footage in a 50i container, it will be 25p, not 25i. I can't tell you what to do with truly interlaced footage, if that is what you have, as I have no experience with it. All my work has been either 24p or 30p in a 60i container (a la AVCHD in an .mts container). The OP's file is definitely 25p PsF in a 50i container.
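
If you have the MediaInfo command-line version installed, just pointing it at the file should print the scan type and frame rate along with everything else:

mediainfo 00083.MTS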

I cut and pasted this from the first few lines of http://en.wikipedia.org/wiki/Progressive_segmented_frame . I thought it might be useful for you to read, as the frame-doubling schemes and speed control won't work with PsF footage:

Progressive segmented Frame (PsF, sF, SF) is a scheme designed to acquire, store, modify, and distribute progressive-scan video using interlaced equipment and media.
With PsF, a progressive frame is divided into two segments, with the odd lines in one segment and the even lines in the other segment. Technically, the segments are equivalent to interlaced fields, but unlike native interlaced video, there is no motion between the two fields that make up the video frame: both fields represent the same instant in time. This technique allows for a progressive picture to be processed through the same electronic circuitry that is used to store, process and route interlaced video.
The PsF technique is similar to 2:2 pulldown, which is widely used in 50 Hz television systems to broadcast progressive material recorded at 25 frame/s, but is rarely used in 60 Hz systems. The 2:2 pulldown scheme had originally been designed for interlaced displays, so fine vertical details are usually filtered out to minimize interline twitter. PsF has been designed for transporting progressive content and therefore does not employ such filtering.
The term progressive segmented frame is used predominantly in relation to high definition video. In the world of standard definition video, which traditionally has been using interlaced scanning, it is also known as quasi-interlace[1] or progressive recording.