Which video formats can do alpha?

I tried H.264 because it was the only format other than AVI Raw that would even work, but it showed a black background.
Yes, transparency was selected.
Yes, even old 2.69 should be able to do this.
I looked in the preferences but did not see anything useful.
Other formats gave an error like the one shown.
As usual, I am only trying to do simple stuff.

Thanks all



Yes, transparency was selected.
Yes, even old 2.69 should be able to do this.
No, it won't. Very few video formats support transparency.

Use an image sequence in a format such as PNG.
The QuickTime Animation codec is a video format that supports transparency.
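
A minimal Python sketch of that image-sequence advice, assuming Blender 2.79's API (the output path is a placeholder, and in 2.8+ the transparency toggle moved to scene.render.film_transparent):

```python
import bpy

scene = bpy.context.scene

# Render to a PNG image sequence that keeps the alpha channel.
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'
scene.render.filepath = '//render/frame_'  # placeholder output path

# Make the background transparent instead of black.
if scene.render.engine == 'CYCLES':
    scene.cycles.film_transparent = True       # Cycles in 2.7x
else:
    scene.render.alpha_mode = 'TRANSPARENT'    # Blender Internal in 2.7x

# Write the whole animation as numbered PNG frames.
bpy.ops.render.render(animation=True)
```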

Isn't H.264 QuickTime?
The QuickTime Animation codec is a video format that supports transparency.
Where do I get the codec, and will Blender play nice with it?
Should I just make the images and then convert them to transparency with other software?
Thanks, Richard

How about the greenscreen method? Would that work?

I think the greenscreen footage would then have to be converted with other software to get the video transparency. I tried it, and the 555 KB file ended up at 29 MB!

Thanks

Sir, I have installed QuickTime (https://support.apple.com/kb/DL837) on my laptop, but even then Blender 2.49 and Blender 2.79 do not offer a QuickTime (.mov) render option. Why?
When I was using Windows XP, I never had this problem.

You might want to check the release notes for Blender 2.79: https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.79/More_Features

QuickTime is available once FFmpeg is chosen as the encoder.
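
In 2.79 that looks roughly like the settings below; QTRLE is the QuickTime Animation codec mentioned above and is one of the few FFmpeg codecs in Blender that can carry alpha. The exact enum values are assumptions from the 2.79 API and may differ in other versions.

```python
import bpy

scene = bpy.context.scene

scene.render.image_settings.file_format = 'FFMPEG'
scene.render.image_settings.color_mode = 'RGBA'  # needed if the alpha channel should survive
scene.render.ffmpeg.format = 'QUICKTIME'         # .mov container
scene.render.ffmpeg.codec = 'QTRLE'              # QuickTime Animation / RLE, supports alpha
scene.render.filepath = '//output.mov'           # placeholder path
```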

PNG is the best.

The only file format that I would use for all “working files” is OpenEXR MultiLayer.

This format was designed by Industrial Light & Magic (with a little help from the Blender Foundation) to represent digital data accurately, efficiently, and without loss. Its content consists of several numeric data sets, one per “layer,” exactly as they were produced by the renderer (or by whatever else produced them). The data consists of floating-point numbers, with no attempt to “map” them to any display space. The files exist to allow you to break a pipeline into multiple stages without losing anything.
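
As a rough sketch of that “working file” stage in Blender's Python API (the codec choice and output path are just placeholders):

```python
import bpy

settings = bpy.context.scene.render.image_settings

# Write every render layer and pass into one multilayer OpenEXR file per frame.
settings.file_format = 'OPEN_EXR_MULTILAYER'
settings.color_depth = '32'   # full-float data, no display-space mapping
settings.exr_codec = 'ZIP'    # lossless compression
bpy.context.scene.render.filepath = '//exr/working_'  # placeholder path
```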

The “final cut” of your movie might be a non-multilayer OpenEXR dataset, simply because you don’t need layers anymore.

(The files are big, and there’s a lot of 'em. Who cares …)

Only then do you concern yourself with producing “deliverable files” in whatever movie-file format(s) you may require. Each of these is produced by a separate blend-file which takes the “final cut” data as input and produces whatever is required as deliverable output. Only at this step is any “lossy” processing done … and, by definition, nothing is actually “lost,” because every deliverable is built from the same unchanging OpenEXR input.
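
A hedged sketch of one such deliverable blend-file, assuming the final cut sits on disk as an OpenEXR frame sequence (paths, frame count, and codec choices are placeholders): the compositor just reads the EXR frames, and only the output settings do anything lossy.

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Feed the unchanging "final cut" EXR sequence into the compositor.
img = bpy.data.images.load('//exr/final_0001.exr')  # placeholder first frame
img.source = 'SEQUENCE'
img_node = tree.nodes.new('CompositorNodeImage')
img_node.image = img
img_node.frame_duration = 250                        # placeholder frame count

comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(img_node.outputs['Image'], comp.inputs['Image'])

# Only here does anything lossy happen: encode an H.264 delivery file.
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'
scene.render.ffmpeg.codec = 'H264'
scene.render.filepath = '//deliverables/final'       # placeholder path
```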

If, instead, you attempt to use “movie files” or “image files” as intermediates, you are losing data and picking up noise at every step along the way, with the noise and the error effectively multiplied at every successive stage. And that, well, that is why we have formats like OpenEXR! :yes: