How is real-time 3D anaglyph calculated in Blender?

Hi! I was just wondering how the BGE handles the calculation of 3D anaglyph. Does it render the scene twice from two different viewpoints? Or does it use a depth map and filter everything somehow? Or maybe it renders red (and only red) from one viewpoint and the other colors from a different viewpoint? How much does all this affect performance?

It renders everything through a second camera.

This means it doubles the render effort (but not logic, physics, etc.).
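As a rough illustration, the second camera is essentially the first one shifted sideways by the eye separation. This is only a sketch under my own assumptions (the function name, the eye-separation value, and the vector handling are not the BGE's actual code):

```python
import numpy as np

def stereo_camera_positions(cam_pos, cam_right, eye_sep=0.065):
    """Return (left, right) eye positions for a camera at cam_pos.

    cam_right: unit vector along the camera's local X axis.
    eye_sep:   interocular distance in scene units (assumed value).
    """
    half = 0.5 * eye_sep * np.asarray(cam_right, dtype=float)
    pos = np.asarray(cam_pos, dtype=float)
    return pos - half, pos + half
```

The engine then renders the full scene once from each of the two positions, which is where the doubled render cost comes from.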

The anaglyph filter then tints the two rendered images (red/blue) and mixes them into a single new one. This adds to the effort too.
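A minimal sketch of that mixing step, assuming plain RGB arrays (NumPy here just for illustration; the exact channel split is my assumption, and the BGE does this on the GPU rather than in Python):

```python
import numpy as np

def anaglyph_mix(left, right):
    """Combine two RGB renders into one anaglyph image.

    Takes the red channel from the left-eye render and the
    green/blue channels from the right-eye render.
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # red from the left eye
    out[..., 1:] = right[..., 1:]  # green/blue from the right eye
    return out
```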

A depth map does not make much sense here, as depth belongs to a single camera. The Z-buffer is a depth map, and each render gets its own.

OK! So it basically doubles the amount of calculations on the GPU, right? Sounds really bad…

Yes, it does.

But… you do not need fancy shaders, materials and such things, as they will not be visible anyway (limited colors). So you can start with much less render effort.

OK! So it basically doubles the amount of calculations on the GPU, right? Sounds really bad…

Yes. However this applies to all stereoscopic techniques.