I’m new to 3D animation and I’ve been testing lossless compression schemes to reduce the space used to store rendered clips. I’ve noticed Blender outputs a PNG sequence encoded in RGB at 24 bits per pixel. I’d like to compress the footage with a lossless codec for preservation, but during testing I’ve observed that sticking with the RGB colorspace, using either FFV1 or libx264rgb, results in files roughly twice the size of those produced after converting the color information to YUV444p. I’ve read in this thread that this conversion is lossy. I’m not sure how relevant this discarded data actually is, since most cameras (and, as a matter of fact, Blender, when using FFmpeg with the H.264 codec in lossless mode) output video in YUV420p. I also wonder whether this conversion would cause any side effects when further editing the clips.
In sum, is the loss of information when transcoding the PNG sequence to YUV444p of any relevance for further modifications, or should I stick with the RGB colorspace? And would the yuvj444p pixel format preserve all the color information?
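For reference, the comparison I ran looks roughly like the following sketch. The frame pattern, frame rate and output names are placeholders, and it assumes `ffmpeg` is on the PATH:

```python
# Sketch of the size comparison described above; file names, frame pattern
# and frame rate are placeholders. Assumes ffmpeg is available on PATH.
import subprocess

common = ["ffmpeg", "-framerate", "24", "-i", "frame%04d.png"]

# Lossless H.264 kept in the RGB colorspace (libx264rgb, CRF 0 = lossless).
rgb_cmd = common + ["-c:v", "libx264rgb", "-crf", "0", "rgb.mkv"]

# Lossless H.264 after converting to full 4:4:4 chroma.
yuv_cmd = common + ["-c:v", "libx264", "-crf", "0",
                    "-pix_fmt", "yuv444p", "yuv444.mkv"]

for cmd in (rgb_cmd, yuv_cmd):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually encode
```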
YUV, or YCbCr, representation (let’s call it luma-chroma) is theoretically lossless, within rounding errors from the conversion math. In practice it depends on the bit depth at which the math is done and the bit depth at which the data is stored. Uncompressed 4:4:4 should be the same size as RGB; if it is smaller, it is either compressed or not actually 4:4:4. If you store the image at 8 bits per channel, the deviations from conversion will be bigger than at higher bit depths.
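The rounding effect is easy to demonstrate. A minimal sketch, assuming the full-range BT.601 coefficients (the matrix typically used for JPEG/full-range conversions; other standards use slightly different numbers): quantising YCbCr to 8-bit integers loses some RGB triples, while the float round trip is recoverable to working precision.

```python
# RGB <-> YCbCr round trip, full-range BT.601 coefficients (assumed here).

def rgb_to_ycbcr(r, g, b):
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b

def clip(v):
    # quantise to an 8-bit channel value
    return min(255, max(0, round(v)))

# Round trip with 8-bit storage: YCbCr quantised to integers before inversion.
lossy = 0
for r in range(0, 256, 17):
    for g in range(0, 256, 17):
        for b in range(0, 256, 17):
            y, cb, cr = (clip(v) for v in rgb_to_ycbcr(r, g, b))
            r2, g2, b2 = (clip(v) for v in ycbcr_to_rgb(y, cb, cr))
            if (r2, g2, b2) != (r, g, b):
                lossy += 1
print(lossy > 0)  # True: some samples do not round-trip exactly at 8 bits

# Round trip in floating point: no quantisation, error stays negligible.
y, cb, cr = rgb_to_ycbcr(200, 100, 50)
r2, g2, b2 = ycbcr_to_rgb(y, cb, cr)
print(max(abs(200 - r2), abs(100 - g2), abs(50 - b2)) < 1e-3)  # True
```

This is why storing the luma-chroma data at 10 bits or more shrinks the deviations: the quantisation step, not the matrix itself, is where the information goes.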
How relevant the lost information is depends on what you intend to do with the data afterwards. How are you going to process it? For editing, avoid codecs that pack frames with inter-frame dependencies (which rules out H.264 and H.265 in their usual configurations), because decoding a single frame then requires data from several frames and is slow. Use an intra-frame codec (DNxHD, ProRes, CineForm, etc.) or image sequences.
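As one concrete option, an edit-friendly intra-frame encode can be assembled like this. A sketch only: the frame pattern, frame rate and output name are placeholders, and it assumes `ffmpeg` is on the PATH:

```python
# Sketch: intra-frame ProRes 4444 encode of a PNG sequence via FFmpeg.
# Frame pattern, fps and output name are placeholders.
import subprocess

def prores_4444_cmd(pattern="frame%04d.png", fps=24, out="clip.mov"):
    # prores_ks profile 4444 keeps 4:4:4 chroma at 10 bits and is
    # intra-frame only, so every frame decodes independently --
    # fast scrubbing in an editor.
    return [
        "ffmpeg", "-framerate", str(fps), "-i", pattern,
        "-c:v", "prores_ks", "-profile:v", "4444",
        "-pix_fmt", "yuv444p10le", out,
    ]

cmd = prores_4444_cmd()
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually encode
```

The 10-bit 4:4:4 pixel format also keeps the luma-chroma rounding deviations discussed above small.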
The general rule of thumb is to start with the best quality you can afford, because every new step in the processing chain causes losses, and you should try to minimise those losses as best you can. The fact that some cameras can only capture 4:2:0 is not an argument against using 4:4:4/RGB; it only means some cameras don’t output the best data there could be.
A couple of ways to visualise any differences through compositor nodes: one node setup shows the magnitude of the difference, the other a true/false result (screenshots of the two node setups were attached here).
Black pixels show little or no difference; coloured or white pixels show more. Press F12 to render and check the Render Result pixels.