ntsc video sync

Hello everyone,

I’ve been working on a music-video type project for a while now, using Blender animation and some video shot in NTSC DV. The DV footage is scenes of a singer that must be synced with the audio in post. I will be using Premiere.

Ultimately, I need the video to be in PAL, fit for broadcast. I am wondering what the best way to go about this is so that I preserve sync. Outputting the Blender video, with the NTSC footage composited inside, from Blender appears to mess up the sync. Should I do everything in NTSC and then seek out a conversion method, or is there a way, I hope, to do all the compositing in Blender and go directly to PAL?

Thanks for helping me think this through in advance.

One way to do it would be to add the DV movie as a texture on a Blender object in a scene, then alter the Time IPO for that object. A gradient of 1 on the Time IPO gives 1:1 playback, and you want ~30:24, so add an IPO whose gradient is such that it passes through (0,0) and the point (29.97, 24). Add the scene to the Sequence Editor and combine it as you would your movie.
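If it helps to see that mapping as numbers, here's a quick sketch (plain Python, not Blender's API; the function name is just for illustration) of what a straight-line Time IPO with that gradient does:

```python
# Sketch of the frame mapping a straight-line Time IPO implements.
# Assumption: the line passes through (0, 0) with a constant gradient
# of 29.97/24, so each output (24 fps) frame samples the NTSC
# (29.97 fps) source at the matching point in real time.

NTSC_FPS = 29.97
OUT_FPS = 24.0

def source_frame(out_frame):
    """NTSC source frame sampled when rendering a given output frame."""
    return out_frame * NTSC_FPS / OUT_FPS

# One second of output (24 frames) consumes one second of source
# (29.97 frames), which is what preserves the audio sync.
for f in (0, 24, 48):
    print(f, "->", source_frame(f))
```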

To add a curve to the Time IPO, add a Loc keyframe for the object, go to the IPO window, copy, select Time on the right, then paste. Then set it up as you want.

That should work, I tried it with 10 frames of one of my movies, making it 20 frames long…

Hope it helps


OK, this sounds really promising. I am sort of with you, but I don’t understand whether or not the time gradient passes through the point (29.97, 24) at the very end of the animation. For instance, if the animation is 5,000 frames long, does the Time IPO for the object with the video texture go from (0, 0) to (29.97, 24) at frame 5,000? Or is the gradient set over some fixed range?

Thanks for setting me straight.

Perhaps you could send me a .blend file without the video file, just to see the setup.

The Time IPO is a bit weird. If the gradient is 0, the object will not do anything for the entire animation. The axes are time of animation and time of object, so a gradient of 1 means they are the same.
If it is 2, the object will do all its actions at half the speed. If it is 24/29.97, it will do 29.97 frames’ worth of action in the time it would have done 24. You want the gradient to be constant over your entire animation, so you need to use the Extrapolate button (the one with the arrow pointing up and to the right) once you have set the two points on the IPO: (0, 0) and (24, 29.97). If this last point contradicts what I said before, don’t worry; this is the right point, not what I said before (I hope!! If not, and it plays too slowly, reverse the values!!).
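To put numbers on the 5,000-frame question: with Extrapolate on, the line just keeps the same gradient past the two keyed points, for the whole animation. A quick sketch (plain Python, not Blender's API, and assuming the (0,0) and (24, 29.97) points are the right way round):

```python
# With Extrapolate, the Time IPO is effectively the line
# object_time = animation_frame * 29.97 / 24 for the whole animation,
# not just between the two keyed points.
GRADIENT = 29.97 / 24

def object_time(animation_frame):
    """Object (video texture) time at a given animation frame."""
    return animation_frame * GRADIENT

print(object_time(24))    # at the keyed point
print(object_time(5000))  # well past the keys, same gradient throughout
```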

Here’s a blend (ignore the file format; Geocities won’t let me upload .blend, so this is what I had to rename it to. Just use RMB > Save As, then rename it to .blend once you download :wink: ):

http://www.geocities.com/alicopey158/convert.avi (.blend :wink: :wink: )

Hope they don’t realise!!


Actually, when doing linear frame rate conversion (30 fps to 24 fps for example), there’s something a LOT easier than playing with Time IPOs.
It’s the Map Old / Map New buttons in the Animation Buttons window (F7).

It tells Blender that for every X frames that need to be rendered (Map New), it should take them from an interval of Y frames (Map Old).
By default, both values are 100, which means that for each 100 frames to render, Blender takes them from an interval of 100 frames (nothing is changed).

Now, in your example, you want to take intervals of 30 frames of your 30 fps animation and output intervals of 24 frames in a 24 fps animation. (Map Old = 30, Map New = 24).

hope that’s helpful.

I’d be leery of doing this in Blender. The best way to do an NTSC-to-PAL conversion is with a dedicated tool that can do the conversion on a field-wise basis. I’m not sure, but I don’t think that Blender can do this. I’m also assuming that you’ll be deinterlacing your video before you take it into Blender for the composite.

Smoother motion will be achieved going from 60 fields per second to 50 fields per second than from 30 fps to 25 fps (by the way, the PAL spec is 25 fps, not 24). I’d use Premiere to convert all of your raw footage to PAL first, then create a deinterlaced master to use for your composite positioning. Then generate RGBA Targas from Blender and use After Effects (if you have access to it) for the real compositing. You can achieve superior results this way.

If, however, you’re looking for quick and dirty, the above-mentioned methods (and the in-Blender composite) will do.