The way a parallax barrier works is that each eye can only see one set of pixel columns, while the other eye sees a different set. Each eye therefore receives a different image, which provides some level of depth information. This is also how the Nintendo 3DS's 3D effect worked.
Blender, by default, only seems to support anaglyph 3D in the viewport preview or in VR mode.
I'm wondering whether a bpy script is sufficient for something like this, or whether the two per-eye frames would have to be streamed to a third-party program (written by me) that does the image combining and pixel-column rearrangement instead?
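For what it's worth, the column rearrangement itself is simple once you have the two per-eye frames as arrays. Here's a minimal NumPy sketch of the interleaving step; the even/odd column assignment and the function name are my own assumptions, since the real mapping depends on the barrier's geometry and alignment with the panel:

```python
import numpy as np

def interleave_columns(left, right):
    """Combine two same-sized per-eye images into one frame:
    even pixel columns come from the left-eye image, odd columns
    from the right-eye image. (Hypothetical even/odd assignment;
    the actual mapping depends on the barrier geometry.)"""
    assert left.shape == right.shape
    out = left.copy()
    out[:, 1::2] = right[:, 1::2]  # overwrite odd columns with right-eye pixels
    return out

# Tiny 2x4 grayscale demo: left is all 0, right is all 255.
left = np.zeros((2, 4), dtype=np.uint8)
right = np.full((2, 4), 255, dtype=np.uint8)
frame = interleave_columns(left, right)
# Each row alternates: [0, 255, 0, 255]
```

Whether this runs inside Blender as a bpy script or in an external program is then mostly a question of where you can get the two rendered frames as pixel buffers.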