Aspect ratio changes timing

This really confuses me.

I have a Panasonic SDR-H80, and when I take non-widescreen videos they come out at 704x576 (right-clicking in Ubuntu to see the video properties).

Blender's PAL default is 720x576. If I render the video through the VSE (get rid of interlacing, maybe change colouring etc., prepare for YouTube) the PAL default works fine, although I suspect the picture is slightly stretched.

So, I change the 720 to 704. The aspect looks better, I think, but the sound and video go out of sync.

Why is this? What should I be doing to stop the problem?

The murky world of TV resolutions and aspect ratios: enough to give you a migraine.


But what frame rate was the video recorded at?

First, I would not let Blender resize; try to move the frame back to its correct position (translate effect?), otherwise you are losing lines (smeared across other lines during the blow-up process).

Also note that audio is evaluated by its sample rate, not in frames per second. I often find that my NTSC footage in a PAL sequence loses its relationship to the vision.
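To illustrate that point with made-up numbers (plain Python rather than Blender's API): the audio's real duration is fixed by its sample rate, but the video's duration changes with the sequence frame rate, so footage interpreted at the wrong rate drifts away from its sound.

```python
# Illustrative sketch: why audio drifts when footage is played at the
# wrong frame rate. Audio length is fixed by its sample rate; video
# length depends on the playback frame rate. Numbers are hypothetical.

frames = 1500                 # e.g. 50 seconds of NTSC footage
recorded_fps = 30000 / 1001   # ~29.97 fps (NTSC)
sequence_fps = 25             # PAL sequence in Blender

audio_seconds = frames / recorded_fps   # audio keeps its real duration
video_seconds = frames / sequence_fps   # frames are replayed one-for-one

drift = video_seconds - audio_seconds
print(f"audio: {audio_seconds:.2f}s, video: {video_seconds:.2f}s, "
      f"drift: {drift:.2f}s")
```

That is the frame-rate case; a resolution change alone should not retime anything, which is why the 704-vs-720 sync problem above is so odd.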

If you go off standard video sizes, demuxers get all confused or stop working. I have a Samsung widescreen that shoots anamorphic, so I use Blender to stretch it back out. Shoot a square tile or piece of paper to be sure.

Hm… well, it is a Panasonic video camera, so I thought the video sizes would be “standard”.

The output video is…


Dimensions: 704x576
Codec: MPEG-2 video
Framerate: 25 frames per second
Bitrate: N/A

Codec: AC-3 audio
Channels: Stereo
Sample Rate: 48000 Hz
Bitrate: 256 kbps

That is the .MOD from the camera at least. My issues come when I want to get Blender to make standard sized PAL video from this.

As far as I know, both 704x576 and 720x576 are PAL.
Isn't all this to do with non-square pixels or some such thing?


720x576 is the full overscan of a PAL frame. The difference is the bits that get tossed in digital (but retained or regenerated in transmission). It is referred to as blanking, and was a guard band of sorts in analogue video. It also carries a form of per-frame metadata called User Bits and VITC, or timecode.

At least the 576 bit does (vertical resolution). The side-to-side bit is the horizontal blanking, which often gets replaced by editing systems anyway. Few display devices actually display a whole frame, and I believe you could just stick black curtains in there like Avid does :wink:
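On the non-square-pixels question above: a rough sketch of the usual arithmetic (the 12/11 pixel aspect ratio for 4:3 PAL is the commonly quoted BT.601-derived value; treat the exact figure as an assumption):

```python
from fractions import Fraction

# Sketch, assuming the commonly used 12/11 pixel aspect ratio for
# 4:3 PAL. With those non-square pixels, the 704-wide active picture
# displays as exactly 4:3, while the full 720-wide frame comes out
# slightly wider than 4:3.

par = Fraction(12, 11)  # assumed PAL 4:3 pixel aspect ratio

def display_aspect(width, height, pixel_aspect):
    """Display aspect ratio = storage aspect ratio * pixel aspect ratio."""
    return Fraction(width, height) * pixel_aspect

print(display_aspect(704, 576, par))  # 4/3: the intended 4:3 picture
print(display_aspect(720, 576, par))  # 15/11: a touch wider than 4:3
```

Which is why rendering the 704 footage at the 720 default can look very slightly stretched.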

That's why I suggest translating your frame to the right size and ignoring the pillar-boxing (black curtains). And yes, a proper expensive pro camera usually does give a full frame, except, curiously, Panasonic: I notice that the P2 broadcast cameras don't generate full-frame images either.

I think that on playback, 720 gets 8 pixels chopped off left and right to avoid scruffy edges at the sides of the screen, so 704 is what is actually viewed.

Expanding 704 to 720 will cause a marginal loss of horizontal quality unnecessarily, only to have those 16 pixels chopped off again at playback.

I suggest dropping the 704 video over a 720x576 black strip in the VSE, centring the 704 horizontally, so you get the 720x576 that FFMPEG demands for DVD encoding (otherwise it throws a tantrum and locks up Blender).
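The arithmetic behind that pillar-boxing suggestion, as a quick sketch (in Blender's VSE this would be an offset on a Transform strip; here it's just the numbers):

```python
# Sketch: centre a 704x576 frame on a 720x576 black canvas, giving an
# 8-pixel black strip on each side and no vertical offset.

canvas_w, canvas_h = 720, 576   # the size FFMPEG wants for DVD
frame_w, frame_h = 704, 576     # what the camera actually recorded

x_offset = (canvas_w - frame_w) // 2   # black strip width on each side
y_offset = (canvas_h - frame_h) // 2   # zero: heights already match

print(x_offset, y_offset)
```

Since the picture is never resampled, there is no horizontal quality loss at all, and the 8-pixel strips fall inside the region most displays crop anyway.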

I guess it’s understandable of Panasonic, using 704 saves a bit of bandwidth, maybe getting a slightly better bitrate and a bit more storage but confuses at the same time.