Mouse hitPosition without using sensor

I would like to get the mouse ray hitPosition in python without using the mouse sensor. Is that possible?

Thanks
Ex.

That is like driving without a car.

hitPosition is an attribute of the KX_MouseFocusSensor.

no sensor = no hitPosition

Alternatives:

  • use a ray sensor (it is the same but just not automatically following the mouse)
  • cast a ray by Python code

But you might want to think about this: if the mouse is over nothing, the sensor does not trigger the controller, which saves processing time in such cases.

Monster

It's possible, but probably not worth it. First, you have to construct a ray to cast in Python, but the only information you can get without the sensor is the mouse cursor's x-y coordinates. OpenGL renders from 3D space to 2D space, so if you follow the inverse of the view transformation you can turn a 2D point into a 3D one (or a series of 3D points). It's pretty math-heavy (mathutils will handle most of it, though) and requires some knowledge of OpenGL, but if you've got the time, there's an article that does an alright job of explaining it:
http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/MousePicking

As for getting the matrices,
camera.projection_matrix
camera.modelview_matrix
may be of use.

The following code will position an object on the xy plane under the mouse

from mathutils import Vector
import bge

bge.render.showMouse(True)

camera = bge.logic.getCurrentScene().active_camera

# mouse position, normalised to the 0.0 - 1.0 range
m_pos = bge.logic.mouse.position

# convert to normalised device coordinates (-1 to 1, with y flipped)
c = Vector([2*m_pos[0]-1, 2*(1-m_pos[1])-1, 0, 1])

# undo the projection (note: on Blender 2.62+ mathutils multiplies
# matrix * vector, so write Pi * c there instead)
Pi = camera.projection_matrix.inverted()

v = c * Pi
v = v / v.w   # perspective divide: scale w back to 1

# undo the view transform to get a world-space point under the mouse
Mi = camera.modelview_matrix.inverted()

w = v * Mi

# vector from the world point back to the camera
c_pos = camera.worldPosition.copy()
r = Vector([c_pos.x, c_pos.y, c_pos.z, 1]) - w

# intersect the camera-to-point line with the z = 0 plane
z = camera.worldPosition.z/(-r.z)
x = r.x*z+camera.worldPosition.x
y = r.y*z+camera.worldPosition.y

pos = [x, y, 0]

bge.logic.getCurrentController().owner.worldPosition = pos

It's now just a matter of casting a ray from the camera's position to this point.
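A minimal sketch of that last step, assuming the `pos` computed above and using KX_GameObject.rayCast (the `extend_ray` helper and the 1.1 factor are my own additions, used to push the ray target slightly past the plane so the cast cannot stop just short of it):

```python
def extend_ray(origin, target, factor=1.1):
    """Return origin + factor * (target - origin), component-wise."""
    return tuple(o + factor * (t - o) for o, t in zip(origin, target))

def cast_mouse_ray(own, camera, pos):
    # rayCast(to, from, dist, ...): dist = 0.0 means "cast all the way
    # to the target point". Only runs inside the BGE.
    target = extend_ray(tuple(camera.worldPosition), tuple(pos))
    hit_obj, hit_pos, hit_normal = own.rayCast(target, camera.worldPosition, 0.0)
    return hit_obj, hit_pos, hit_normal
```

If the ray hits something, `hit_pos` is the equivalent of the sensor's hitPosition; if it hits nothing, all three return values are None.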

Thanks for that, I was looking at converting screen coordinates to world coordinates last night. I read the article you posted but couldn't make enough sense of it to write a Python script.

Any chance you could give an explanation of what’s going on in each step of the code?

@Monster: Yeah, I agree that it's faster using sensors, but I am trying out the new components and I don't think you can use sensors with those.

@Andrew-101: Wow… I'm glad you're willing to help, because without you many people would be lost.

Thanks!
Ex.

@battery;
Sure, I'm not that great with OpenGL, but this is what I can gather. A point in world space is converted into viewport space by multiplying it by the modelview matrix and then by the projection matrix.

When following the inverse of the view transformation, you start from a normalised viewport coordinate ranging from (-1, -1, -1) to (1, 1, 1):

c = Vector([2*m_pos[0]-1, 2*(1-m_pos[1])-1, 0, 1])

I chose 0 for the z value because we want an arbitrary point to cast a ray to, not anything specific. Actually, you could get the z value from the depth buffer, and that would give you the hit position directly.
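The screen-to-NDC mapping used for c can be written as a standalone function. bge.logic.mouse.position gives x and y in the 0..1 range with y growing downward, hence the (1 - my) flip:

```python
def to_ndc(mx, my):
    """Map a 0..1 screen coordinate (y down) to -1..1 NDC (y up)."""
    return (2.0 * mx - 1.0, 2.0 * (1.0 - my) - 1.0)
```

So the centre of the screen maps to (0, 0), the top-left corner to (-1, 1), and the bottom-right corner to (1, -1).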

Then multiply that through the inverse of the projection matrix. The projection matrix is defined by the camera's FOV and near/far clip planes; it's what gives the scene its perspective:

Pi = camera.projection_matrix.inverted()

v = c * Pi
v = v / v.w

Here v is a point in view coordinates. v = v/v.w just scales the w component of v to 1; this is the perspective divide: a projective transform returns homogeneous coordinates, and dividing by w maps them back to an ordinary 3D point.
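The w-divide written out on a plain 4-tuple, to show exactly what v = v/v.w does:

```python
def perspective_divide(v):
    """Homogeneous (x, y, z, w) -> ordinary point with w scaled to 1."""
    x, y, z, w = v
    return (x / w, y / w, z / w, 1.0)
```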

Next, you multiply the view coordinate by the inverse of the modelview matrix to get a point in world coordinates:

Mi = camera.modelview_matrix.inverted()

w = v * Mi

Since we set z=0 at the first step, this point can lie anywhere under the mouse.

The next step takes the vector r from the world point back to the camera and finds where the camera-to-point line crosses z = 0, keeping those x-y coordinates.

r = Vector([c_pos.x, c_pos.y, c_pos.z, 1]) - w

z = camera.worldPosition.z/(-r.z)
x = r.x*z+camera.worldPosition.x
y = r.y*z+camera.worldPosition.y
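That intersection step as a standalone function, mirroring the formulas above (r is the vector from the world point w back to the camera, and t = cam_z / (-r_z) picks the spot on the line where z = 0):

```python
def intersect_z0(cam, w):
    """Intersect the line from cam through w with the z = 0 plane."""
    rx, ry, rz = cam[0] - w[0], cam[1] - w[1], cam[2] - w[2]
    t = cam[2] / (-rz)
    return (rx * t + cam[0], ry * t + cam[1], 0.0)
```

For example, a camera at (0, 0, 10) looking through the point (2, 0, 5) hits the ground plane at (4, 0, 0).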

@Ex;
If you read the above, you may have seen that the hit position could be retrieved using the depth buffer. I'm not sure exactly how to do this, but look into bgl.glReadPixels: pass in bgl.GL_DEPTH_COMPONENT and use the resulting value as the z value for c.
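A hedged sketch of that idea (the bgl calls only run inside Blender, hence the local import; the function names here are my own). The depth buffer stores values in 0..1 while NDC z runs -1..1, so the value needs a remap before it can replace the 0 in c:

```python
def depth_to_ndc_z(d):
    """Remap a 0..1 depth-buffer value to -1..1 NDC z."""
    return 2.0 * d - 1.0

def read_depth(x, y):
    # x, y are window pixel coordinates, origin at the bottom-left
    import bgl  # only available inside Blender
    buf = bgl.Buffer(bgl.GL_FLOAT, 1)
    bgl.glReadPixels(x, y, 1, 1, bgl.GL_DEPTH_COMPONENT, bgl.GL_FLOAT, buf)
    return buf[0]
```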