I have used Blender for years, but until you actually start thinking about what happens in your viewport, you don't realize that everything is a bit more complicated than it might look.

I know I could just look at the source code, but for now I only need the conceptual part, no math or code.
Let's say our navigation is set to Trackball mode, and we haven't selected or focused any objects.

Now we start to rotate our view. Where is the point of rotation, i.e. the point around which the camera rotates, and how is it calculated?
In the default scene, rotation happens around the world origin (0, 0, 0).

But if we pan the camera somewhere, things get more complicated.
My initial, naive thought was this:

We shoot a ray from the camera and calculate its intersection point with the world grid, i.e. the virtual plane whose normal points along the world Z axis.
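To state that hypothesis precisely (I know I said no code, but a tiny sketch is the clearest way to pin it down), here is a minimal plain-Python version of what I imagined; the function name and the example numbers are mine, not anything from Blender's source:

```python
def ray_hit_world_grid(origin, direction):
    """Intersect a ray with the world grid plane Z = 0.

    Returns the hit point, or None if the ray is parallel to the
    plane or the plane lies behind the ray's origin.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dz) < 1e-9:   # looking exactly along the grid: no intersection
        return None
    t = -oz / dz         # solve oz + t * dz == 0 for t
    if t < 0:            # grid is behind the camera
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# Camera at (0, -5, 3), looking forward and slightly down:
print(ray_hit_world_grid((0.0, -5.0, 3.0), (0.0, 0.8, -0.6)))
# -> (0.0, -1.0, 0.0), the candidate pivot on the grid
```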
But as soon as I checked this assumption, I realized I was wrong.

My second assumption was:
“You're overcomplicating this, the camera just rotates around its own location!”… Clearly it doesn't, because if the camera is above the world floor, after rotating we end up able to look at the floor from the other side.
The next assumption, which I kind of proved experimentally, was:

We shoot a ray, but its direction is not the camera's view direction; instead it is always (0, 0, -1) when we're above the world grid (or (0, 0, 1) when below), and the ray's origin is the camera's location, so we're basically shooting ourselves in the foot, like this:
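Here is the same idea as a sketch (again just my hypothesis, not Blender's actual code). Shooting straight down (or up) from the camera means the hit point on the grid is simply the camera's XY position with Z dropped to 0:

```python
def vertical_pivot(camera_location):
    """Pivot = intersection of the vertical ray from the camera with Z = 0."""
    x, y, z = camera_location
    # The direction is (0, 0, -1) if z > 0, else (0, 0, 1); either way
    # the ray hits the grid directly below (or above) the camera.
    return (x, y, 0.0)

print(vertical_pivot((2.0, -5.0, 3.0)))  # -> (2.0, -5.0, 0.0)
```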
Is this correct?