I have created a mouse motion Sensor that runs a script named “rotate.py”.
This script reads how far the mouse has moved across the window in the x and y directions, and uses these two vector components to determine by what angle to rotate the camera left, right, up or down. That angle is obtained by dividing the camera’s lens value by the window width or height respectively, so the mouse’s pixel position change is converted into its angular equivalent.
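To make that concrete, here is roughly what the conversion looks like (a simplified sketch, not verbatim from rotate.py: the lens-to-FOV formula 2*atan(sensor / (2*lens)) and the default values are placeholders I am using for illustration):

```python
import math

def mouse_delta_to_angles(dx, dy, width, height, lens=35.0, sensor=32.0):
    """Convert a mouse movement in pixels to rotation angles in radians.

    dx, dy:        pixels the mouse moved in x and y
    width, height: window size in pixels
    lens, sensor:  focal length and sensor width (placeholder defaults);
                   horizontal FOV is taken as 2*atan(sensor / (2*lens))
    """
    fov_h = 2.0 * math.atan(sensor / (2.0 * lens))
    fov_v = fov_h * (height / width)  # assuming square pixels
    yaw = fov_h * (dx / width)        # left-right angle (around y)
    pitch = fov_v * (dy / height)     # up-down angle (around x)
    return yaw, pitch
```

So moving the mouse across the full window width corresponds to one full horizontal field of view of rotation.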
I then attempt to realize the rotation by multiplying the camera’s projection matrix with the appropriate rotation matrices I generate for those angles (that is, a rotation matrix around the y-axis for left-right rotations, and one around the x-axis for up-down rotations).
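In case it helps, these are the rotation matrices I mean, sketched with plain Python lists purely for illustration (the actual script uses the engine’s matrix types, and the final commented line only indicates the multiplication order I described, it is not guaranteed to be right):

```python
import math

def rot_x(a):
    """4x4 rotation matrix around the x-axis (up-down rotation)."""
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0, 0.0],
            [0.0,   c,  -s, 0.0],
            [0.0,   s,   c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def rot_y(a):
    """4x4 rotation matrix around the y-axis (left-right rotation)."""
    c, s = math.cos(a), math.sin(a)
    return [[  c, 0.0,   s, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [ -s, 0.0,   c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def mat_mul(A, B):
    """Multiply two 4x4 matrices (row-major lists of lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# In effect, each frame I do something like:
# camera.projection_matrix = mat_mul(rot_y(yaw), mat_mul(rot_x(pitch), proj))
```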
The idea is that by rotating the world around the camera - which, right before the camera’s projection matrix is applied to it, is expressed in the camera’s view coordinates (i.e. after the world-to-camera transform) - you accomplish the same thing as rotating the camera itself in the opposite direction. Rotating the world around the camera just seems much easier, since you cannot easily rotate the camera around its own axes once it sits at an arbitrary location and orientation in world space.
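The equivalence I am relying on is just that rotating the world by -a around the camera is the inverse of rotating the camera by +a, i.e. R(-a) = R(a)^-1 (the transpose, for a pure rotation). A tiny numerical check of that identity (plain Python, 3x3 for brevity, purely illustrative):

```python
import math

def rot_y(a):
    """3x3 rotation matrix around the y-axis."""
    c, s = math.cos(a), math.sin(a)
    return [[  c, 0.0,   s],
            [0.0, 1.0, 0.0],
            [ -s, 0.0,   c]]

def apply(M, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

a = 0.3
p_world = [1.0, 2.0, 3.0]
# The world rotated by -a, as seen by an unrotated camera...
seen_a = apply(rot_y(-a), p_world)
# ...matches the point seen through a camera rotated by +a,
# whose view transform is the inverse (= transpose) of its rotation:
R_cam = rot_y(a)
R_inv = [[R_cam[j][i] for j in range(3)] for i in range(3)]
seen_b = apply(R_inv, p_world)
assert all(abs(x - y) < 1e-12 for x, y in zip(seen_a, seen_b))
```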
However, this has only partly worked so far: the rotations I am getting are rather odd. Left-right rotation seems to work halfway, while up-down rotations are still giving me a headache.
Is my idea of pre-multiplying the camera’s projection matrix with rotation matrices the right move at all?
Thanks in advance