Hello.
I am trying to set up a CUBE system using Blender 2.49/2.5x. Currently, I want to learn how projection-referential cameras work in Blender.
One of the key features of the visualization component is the ability to use off-axis cameras (a.k.a. an asymmetric frustum).
The main idea of projection-referential cameras is to represent the displays of the VR installation directly inside the composition.
As we know, a usual camera's viewing frustum is defined by its position, orientation, field of view, aspect ratio and near/far clip planes.
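For reference, this is how I picture the usual symmetric case: the left/right and top/bottom extents at the near plane follow directly from the field of view and aspect ratio (plain Python sketch, the function name and the numbers are just mine for illustration):

```python
import math

def symmetric_frustum(fov_y_deg, aspect, near):
    """Frustum extents at the near plane for an ordinary symmetric camera."""
    top = near * math.tan(math.radians(fov_y_deg) * 0.5)
    bottom = -top                 # symmetric about the view axis
    right = top * aspect
    left = -right
    return left, right, bottom, top

# e.g. a 49.1 degree vertical FOV, 16:9 aspect, near = 0.1
print(symmetric_frustum(49.1, 16.0 / 9.0, 0.1))
```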
A projection-referential camera instead uses a 3D plane in the composition as the projection referential of the camera. This 3D plane is modelled according to the actual dimensions and position of a real screen. The viewing frustum of a projection-referential camera is therefore defined by its position, its near/far clipping planes and the 3D plane used as the projection referential.
As a result, the frustum can be asymmetric, unlike the usual camera frustum.
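My current understanding of how the asymmetric frustum could be derived from the screen plane is the "generalized perspective projection" idea: take the screen corners and the eye position, and project the eye-to-corner vectors onto the screen axes. Below is a sketch of that, assuming Blender 2.5x's mathutils module; the function name and the pa/pb/pc/pe names are mine, not anything from the Blender API, and the corners are the screen's lower-left, lower-right and upper-left points in world space:

```python
from mathutils import Vector

def off_axis_frustum(pa, pb, pc, pe, near):
    """Asymmetric frustum extents (left, right, bottom, top) at the near
    plane, for an eye at pe looking at the screen plane pa/pb/pc."""
    vr = (pb - pa).normalized()        # screen right axis
    vu = (pc - pa).normalized()        # screen up axis
    vn = vr.cross(vu).normalized()     # screen normal, toward the eye

    va, vb, vc = pa - pe, pb - pe, pc - pe   # eye -> corner vectors
    d = -va.dot(vn)                          # eye-to-screen distance

    # These extents are generally not symmetric about the view axis.
    left   = vr.dot(va) * near / d
    right  = vr.dot(vb) * near / d
    bottom = vu.dot(va) * near / d
    top    = vu.dot(vc) * near / d
    return left, right, bottom, top
```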
Does anybody know how to set up this kind of projection-referential camera using Python or OpenGL in Blender?
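In case it helps, this is roughly how far I got on the bpy (2.5x) side. Camera datablocks do expose shift_x/shift_y, angle and clip_start/clip_end, so I was hoping to feed them values from a frustum like the one above. The conversion below is only my guess (I assume the shift is the frustum-centre offset as a fraction of the frustum width, for a landscape screen with the default sensor fit), so corrections are very welcome:

```python
import bpy
import math

def apply_off_axis(cam_obj, left, right, bottom, top, near, far):
    """Try to map an asymmetric frustum onto a Blender camera's
    angle/shift settings. The shift formulas are my assumption."""
    cam = cam_obj.data                 # bpy.types.Camera datablock
    cam.clip_start = near
    cam.clip_end = far
    width = right - left
    # Horizontal FOV from the frustum width at the near plane.
    cam.angle = 2.0 * math.atan(width / (2.0 * near))
    # Guessed convention: centre offset divided by frustum width.
    cam.shift_x = (right + left) / (2.0 * width)
    cam.shift_y = (top + bottom) / (2.0 * width)

# e.g. with the frustum sketch from my previous snippet and near = 0.1:
#   l, r, b, t = off_axis_frustum(pa, pb, pc, pe, 0.1)
#   apply_off_axis(bpy.data.objects["Camera"], l, r, b, t, 0.1, 100.0)
```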