I want to project 2D mouse coordinates onto a face that I am hitting, using Python.
Is it possible to store a rotation matrix of the face that I hit, so that I can always move the vertices along the face?
The idea is: I project the first point onto the face. Then I add a second, third… vertex and so on, but these don't need to hit the face anymore; they should be projected onto the "projection plane" that is defined by the rotation of the face in 3D space (I hope this is understandable).
Could I then use the method region_2d_to_location_3d, if I pass a RegionView3D as a parameter that carries the rotation matrix of the face?
And if I store this rotation matrix, should it be possible to calculate the 2D points back from the 3D points?
OK, thanks; with this I will get a vertex on the face. But now let's assume that I want to place the second one on the "plane" that is defined by the face, while the mouse no longer hits the face itself. I still want the vertex to land on this plane.
And what is the plane for the function intersect_line_plane? Where do I get it from?
To define an infinite plane all you need is a point in space the plane passes through, and a normal vector to define the orientation. The output of a BVHTree ray cast is a tuple which includes both the point of intersection and the surface normal at that point. Those two vectors define the infinite plane you will be testing for intersections via intersect_line_plane.
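In Blender that test is mathutils.geometry.intersect_line_plane(line_a, line_b, plane_co, plane_no); here is a minimal, dependency-free sketch of the same math, so the roles of the ray-cast outputs are explicit (all names are illustrative):

```python
# Line-plane intersection: the plane is (plane_co, plane_no) -- a point on the
# plane and its normal, e.g. the hit location and surface normal returned by a
# BVHTree ray cast. The line runs through line_a and line_b and is treated as
# infinite. Returns the intersection point, or None if the line is parallel
# to the plane.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def intersect_line_plane(line_a, line_b, plane_co, plane_no):
    direction = sub(line_b, line_a)
    denom = dot(direction, plane_no)
    if abs(denom) < 1e-9:
        return None  # line is parallel to the plane
    t = dot(sub(plane_co, line_a), plane_no) / denom
    return tuple(line_a[i] + direction[i] * t for i in range(3))

# Example: a vertical line through a horizontal plane sitting at z = 1.
hit = intersect_line_plane((0.0, 0.0, 5.0), (0.0, 0.0, -5.0),
                           (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
print(hit)  # -> (0.0, 0.0, 1.0)
```

Note that, like the mathutils version, this intersects an infinite line, not a segment, so it still works when the plane lies beyond line_b.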
For BVHTree.ray_cast: what should I use for the origin and direction parameters here?
region_2d_to_location_3d returns a vector. Is this the line that I have to pass to intersect_line_plane? And what are the endpoints of this line (line_a, line_b)?
Are these the params returned by the ray_cast that I need for intersect_line_plane?
The origin is the position of the viewport camera. You can get the direction via region_2d_to_vector_3d. NOTE! BVHTree works entirely in the LOCAL coordinates of the object being tested, so you will need to convert the origin and direction vectors to object space before casting the ray, and convert the returned values back to world space.
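The object-space conversion can be sketched like this; in Blender you would use obj.matrix_world.inverted() (and its 3x3 part for directions), but the plain-Python version below shows the distinction. It assumes a rigid transform (rotation plus translation, no scale), so the inverse rotation is just the transpose; all names are illustrative:

```python
# Convert a world-space ray origin (a position) and ray direction into an
# object's local space before a BVHTree ray cast. Positions pick up the
# inverse translation; directions are rotated only. Sketch assumes a rigid
# transform: rotation matrix R plus translation t, inverse = R^T, -R^T @ t.

def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def transpose(m):
    return tuple(tuple(m[c][r] for c in range(3)) for r in range(3))

def world_to_local_point(rot, loc, p):
    shifted = (p[0] - loc[0], p[1] - loc[1], p[2] - loc[2])
    return mat_vec(transpose(rot), shifted)

def world_to_local_direction(rot, d):
    return mat_vec(transpose(rot), d)  # directions ignore translation

# Object rotated 90 degrees around Z and moved to (2, 0, 0):
rot = ((0.0, -1.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
loc = (2.0, 0.0, 0.0)

print(world_to_local_point(rot, loc, (2.0, 1.0, 0.0)))  # -> (1.0, 0.0, 0.0)
print(world_to_local_direction(rot, (0.0, 1.0, 0.0)))   # -> (1.0, 0.0, 0.0)
```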
Vectors can represent either positions or directions, and region_2d_to_location_3d gives you a position vector. The line you are testing runs from the viewport camera to that position.
Yes, those are the params you will get from the BVHTree ray cast. Note again that these will be in local object coordinates, so you will need to convert them to world coordinates first.
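Converting the ray cast result back can be sketched the same way (plain Python, illustrative names; in Blender this is matrix_world applied to the hit location, and the rotation part applied to the normal). Again this assumes a rigid transform; with non-uniform scale the normal would need the inverse-transpose instead:

```python
# Convert a BVHTree ray cast result (hit location and surface normal, both in
# the object's local space) back to world space. The location gets the full
# transform; the normal is rotated only and must not be translated.

def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def local_to_world_point(rot, loc, p):
    rp = mat_vec(rot, p)
    return (rp[0] + loc[0], rp[1] + loc[1], rp[2] + loc[2])

def local_to_world_normal(rot, n):
    return mat_vec(rot, n)  # rotation only, no translation

# Same example object: rotated 90 degrees around Z, moved to (2, 0, 0).
rot = ((0.0, -1.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
loc = (2.0, 0.0, 0.0)

print(local_to_world_point(rot, loc, (1.0, 0.0, 0.0)))  # -> (2.0, 1.0, 0.0)
print(local_to_world_normal(rot, (1.0, 0.0, 0.0)))      # -> (0.0, 1.0, 0.0)
```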
You can get line_b a few different ways; the only requirements are that it lies directly under the mouse coordinates in the viewport and that it is farther away than the plane you are testing.
I’m not 100% certain how region_2d_to_location_3d aligns the result with depth_location, so it might take some experimentation. The alternative is to multiply the viewport vector by some amount and add that to the camera position. That’s the approach I would take, to be certain that the line is aligned with the mouse.
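That second approach can be sketched as follows. In Blender the origin would come from region_2d_to_origin_3d and the view vector from region_2d_to_vector_3d; the function name and the distance value here are illustrative:

```python
# Build the two endpoints of the line under the mouse: line_a is the viewport
# camera origin, line_b is that origin pushed along the (normalized) view
# vector by some distance chosen to be safely beyond the plane being tested.

def line_under_mouse(view_origin, view_vector, distance=1000.0):
    line_a = view_origin
    line_b = tuple(view_origin[i] + view_vector[i] * distance
                   for i in range(3))
    return line_a, line_b

# Camera at (0, -10, 5) looking straight down +Y:
a, b = line_under_mouse((0.0, -10.0, 5.0), (0.0, 1.0, 0.0), distance=100.0)
print(a)  # -> (0.0, -10.0, 5.0)
print(b)  # -> (0.0, 90.0, 5.0)
```

Both endpoints are guaranteed to sit on the ray under the mouse, so the resulting line can be fed straight into intersect_line_plane.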