I’ve been tinkering with Blender for a while, and now I’m trying to use it for my first real project. I have a 20-minute video, and in a few different segments of the sequence I want labels following motion-tracked points. I can do this fine with, e.g., a few hundred frames, and output a short sequence. What I was wondering is: what workflow would I use to generate a number of these sequences and then stitch them together with the sequences that have no labels overlaid? I’m concerned about the quality being consistent across all the sequences, so do I ‘render’ all the bits I haven’t overlaid so they’ve all been through the same pipeline, or is there an accepted way to do this? I expect I’ll need to stitch everything together in an editor (e.g. Lightworks) at some point.
Sorry if it’s obvious, but I’m pretty new to compositing etc.
Render the clips with composited labels into a format that does not lose quality. Then you can cut the movie together in whichever editing software you want.
Depending on the source footage, the format and settings needed to keep the quality can vary. For example, if your source is an H.264 video from a camera, you can use an 8-bit PNG sequence, because H.264 itself has a bit depth of 8 bits, so all the information it contains fits into a PNG file.
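In case it helps, here's a minimal sketch of the output settings for an 8-bit PNG sequence, assuming you run it in Blender's Python console (the `bpy` module only exists inside Blender, and the output path is just a placeholder):

```python
import bpy  # Blender's Python API; only available inside Blender

scene = bpy.context.scene
# Write each frame as an individual 8-bit PNG (lossless for 8-bit sources)
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGB'
scene.render.image_settings.color_depth = '8'
# '//' means "relative to the .blend file"; choose your own folder here
scene.render.filepath = '//labelled_clip/frame_'
```

The same settings are available in the UI under Output Properties, so scripting this is optional.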
But if the source has a higher bit depth, you must also use a format with a higher bit depth, for example DPX, EXR or something similar.
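For a higher-bit-depth source, the equivalent sketch for a half-float EXR sequence would look like this (again a Blender-only `bpy` fragment, with the codec choice being my assumption; ZIP is one of Blender's lossless EXR options):

```python
import bpy  # only available inside Blender

scene = bpy.context.scene
# OpenEXR preserves more than 8 bits per channel
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '16'  # half float per channel
scene.render.image_settings.exr_codec = 'ZIP'   # lossless compression
scene.render.filepath = '//labelled_clip/frame_'
```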
Depending on the footage, render times and compatibility, you could also use a near-lossless video format like ProRes, DNxHD or similar with high enough quality settings. Although I’m not sure whether ProRes can be rendered from Blender at all.
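Blender does expose DNxHD through its FFmpeg output, so if you'd rather render video clips than image sequences, a sketch might look like the following (Blender-only `bpy` fragment; check that your resolution and frame rate match one of the DNxHD profiles, as the codec is picky about that):

```python
import bpy  # only available inside Blender

scene = bpy.context.scene
# Render to a video container via FFmpeg instead of an image sequence
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'QUICKTIME'  # .mov container
scene.render.ffmpeg.codec = 'DNXHD'       # near-lossless intermediate codec
scene.render.filepath = '//labelled_clip.mov'
```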
The reasoning behind such workflows is that you do not want to lose any information in the clips you actually work on (compositing, grading etc.). Deliberately degrading all the material equally just so it matches is not worth considering: it makes no sense to re-encode the untouched parts and throw away quality for nothing.