A better method to get objects to follow the mouse?

Hello again!

I’ve been messing around in the Blender game engine lately; tonight I decided to muck around with mouse input.

First off, here’s the code:


from bge import logic, events, render

def cursor(cont):
	obj = cont.owner
	sce = logic.getCurrentScene()
	cam = sce.active_camera
	movement = obj.sensors['Mouse']
	hover = obj.sensors['hover']
	data = logic.globalDict
	mouse = logic.mouse.events
	render.showMouse(True)

	#Sets up direction vector for movement and distance from the point
	if mouse[events.LEFTMOUSE] > 0 and hover.hitObject is not None:
		data['direction'] = obj.getVectTo(hover.hitPosition)[1]
		data['destination'] = hover.hitPosition
	
	#Only runs if there's a direction to go toward
	if data['direction'] != [0,0,0]:
		obj.worldLinearVelocity = (data['direction']*10)
	
	#Checks if the object has reached the destination, resets applicable variables
	if obj.getDistanceTo(data['destination']) < 2:
		data['direction'] = [0,0,0]
		obj.worldLinearVelocity = (0, 0, 0)
def init():
	#Sets up the variables that are going to be used	
	logic.globalDict['direction'] = [0,0,0]
	logic.globalDict['destination'] = [0,0,0]

Not terribly complex, but I have some issues with it. For one, the object can still follow the cursor indefinitely if you hold down the left mouse button. Secondly, getting the mouse’s position in the scene is only possible when it’s ‘touching’ an object. Although there are multiple ways to read mouse input, I could only find a way to get the x and y position on the screen, not in the 3D world (unless I used the mouse sensor set to Over Any).

I tried messing with the projection matrices to map the x and y coordinates given to the 3d view, but I don’t know enough to put it to use!

TL;DR:
I made an attempt at getting an object to follow the mouse and I need to find a better way to get mouse input. Thoughts/suggestions?

Attachments

mousetest.blend (570 KB)

The mouse cursor has no position in 3D space. The cursor is 2D; its screen position corresponds to a line in 3D space.
To get a 3D position you need to define which point on that line you want.

Common options:

  • constant distance to the camera (e.g. near the camera)
  • hitpoint of the nearest face under the mouse cursor

There is absolutely no need to fiddle with applyMovement and such things. You get an absolute position and place an object at that absolute position. Everything else (e.g. following an object) is a separate task and should be kept separate.

Keeping the “mouse object” up-to-date.
This depends on where you want to place the object. With all methods you have this situation:

  1. when the mouse moved (mouse movement event [True Pulse])

Additionally, when you look for the hitpoint:
2. when the hitpoint changes (mouse over [any] True Pulse)

Any other method requires corresponding sensors.
Remark: an Always sensor with True Pulse is, as usual, the least efficient solution.

Such things are quite simple with the S2A library.

To provide more help we need to know what exactly you want to achieve. Especially how do you want to choose the point under the cursor (e.g. constant, hitpoint, random)?

Monster

It would be cool to eventually get a script that is useful enough to be able to point multiple objects to a given location (disregarding path finding, etc. for the moment). I’ll update later with an updated script based on these tips.

Ah, one more thing:

Common options:

  • constant distance to the camera (e.g. near the camera)
  • hitpoint of the nearest face under the mouse cursor

Could you describe this in more depth? I’m not sure what the first method is specifically.

constant distance
e.g. pick the point under the mouse cursor that is, for example, 10 units away from the camera.
Whether it is 1, 10, or 10,000,000 is up to you.

hitpoint
If there is any face under the cursor, the point where the line from camera through cursor crosses this face is used.
In the BGE this is a ray from a point near the camera (under the cursor) to a point far away from camera (but still under the cursor).

This method does not work when there is nothing under the cursor. In that case you can fall back to the constant-distance method.
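The fallback described above can be sketched in plain Python. In the BGE, `cam.worldPosition`, `cam.getScreenVect(x, y)`, and a `rayCast` would supply the inputs; the function and its parameter names here are only illustrative:

```python
def pick_point(cam_pos, ray_dir, hit_point=None, fallback_dist=10.0):
    """Pick a 3D point under the cursor.

    Prefer the hitpoint of the nearest face under the cursor; if
    nothing is under the cursor, fall back to a point at a constant
    distance along the camera-through-cursor ray.
    """
    if hit_point is not None:
        return tuple(hit_point)
    # constant-distance fallback: cam_pos + ray_dir * fallback_dist
    return tuple(p + d * fallback_dist for p, d in zip(cam_pos, ray_dir))
```

With a normalized `ray_dir`, the fallback point is always exactly `fallback_dist` units from the camera.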

btw. you can have

  • basic movement with trackTo and motion actuator
  • path finding + path following with the steering actuator
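For intuition only, here is a hedged sketch of the per-frame waypoint-following step that path following boils down to. This is plain Python, not the BGE API; the steering actuator does all of this (and the path finding) for you:

```python
def follow_path(pos, path, speed, reach=0.5):
    """One step of waypoint following.

    Move up to `speed` units toward the next waypoint in `path`;
    drop the waypoint once the object is within `reach` of it.
    Returns the new position and the (possibly shortened) path.
    """
    if not path:
        return pos, path
    target = path[0]
    delta = [t - p for t, p in zip(target, pos)]
    dist = sum(d * d for d in delta) ** 0.5
    if dist <= reach:
        # waypoint reached: advance to the next one
        return pos, path[1:]
    step = min(speed, dist)
    new_pos = tuple(p + d / dist * step for p, d in zip(pos, delta))
    return new_pos, path
```

Calling this every logic tick walks the object along the waypoint list one bounded step at a time.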

I was worried about coding path finding myself, thanks for the tip! Now I can leave that as an exercise down the road.
Since I posted my original code, here’s an update for anyone who may need to achieve the same thing:


from bge import logic, events, render

def getCoords(cont):
	mouse = cont.sensors['Mouse']
	lmb = mouse.getButtonStatus(events.LEFTMOUSE)
	hit = mouse.hitObject  #set by the mouseover [any] sensor
	#KX_INPUT_JUST_ACTIVATED (== 1) is the frame the button was pressed
	if lmb == logic.KX_INPUT_JUST_ACTIVATED and hit is not None and hit.name == 'floor':
		cont.owner.worldPosition = mouse.hitPosition
		print(cont.owner.worldPosition)
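Regarding the earlier wish to point multiple objects at one location: here is a minimal, hypothetical sketch of sharing one clicked destination between several followers. Plain Python stands in for `logic.globalDict` and per-object controllers; all names are illustrative:

```python
# Shared storage, standing in for logic.globalDict in the BGE.
shared = {"destination": None}

def set_destination(point):
    """Called once on click, e.g. with the mouse sensor's hitPosition."""
    shared["destination"] = tuple(point)

def follower_step(pos, speed=0.5, reach=0.2):
    """Move one follower a bounded step toward the shared destination.

    Each follower runs this every tick; all of them read the same
    destination, so one click steers the whole group.
    """
    dest = shared["destination"]
    if dest is None:
        return pos
    delta = [d - p for d, p in zip(dest, pos)]
    dist = sum(c * c for c in delta) ** 0.5
    if dist <= reach:
        return pos  # arrived; stay put
    step = min(speed, dist)
    return tuple(p + c / dist * step for p, c in zip(pos, delta))
```

In the BGE you would store `mouse.hitPosition` in `logic.globalDict` from the click handler and let each follower’s own controller (or its steering actuator) read it.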

I’ve attached the blend file as well.
pathfinding-BAF.blend (1.37 MB)

Cool, this is a good demo for learning how to use the pathfinder.

I’m actually a few steps behind you, and needed some of the code you posted. =)

A little suggestion: put your “render.showMouse(True)” in your init function.
It is a little less overhead there than calling it every frame via the Always sensor.
I could be wrong… just a thought. Thanks for the code though!

I’m glad I could help you out. One thing that might not be immediately apparent in the scene is the use of the navigation mesh. It’s necessary to regenerate it after you’ve edited the ‘floor’ portion of the scene. More info here.

I’ve got the always sensor set to only pulse once, and no problem! I’ve been a(n inactive) member of this forum for years, I used to be the kid who thought he could make the next MMOFPSRPG in blender. Those were the days. I’m happy to finally be giving back, hopefully next time it will be a bit more substantial. :slight_smile:

But isn’t it possible to use the floor mesh directly, both for pathfinding and for physics collision?

It seems that if you use the pathfinder mesh as the ground, the ray no longer works.

And collision as well. Much less flexible.

If I’m not wrong, agoose has made a blend with the pathfinder using a normal mesh.
There’s no reason to use a special object; it’s only a new, useless dependency.

The navigation mesh is generated from the mesh you would like to use as the ground. The dependency isn’t useless, it’s essential for the path finding algorithm. Collisions still work, though they may require tweaking.

This article describes how path finding works.

I’ve been a(n inactive) member of this forum for years, I used to be the kid who thought he could make the next MMOFPSRPG in blender. Those were the days. I’m happy to finally be giving back, hopefully next time it will be a bit more substantial. :slight_smile:

Good to have you. As for it being more substantial, I’m sure you can remember what it’s like to be in our shoes. Even the slightest things that you can’t figure out will drive you crazy and make you pull your hair out. Thanks.

I don’t see much of a difference; the only thing could be that the pathfinder cannot work with triangles.
For everything else, a mesh can be initialized as a pathfinder.

Well, anyway, what’s really important is that a pathfinder now exists and works well… the rest is detail.