BGE proposal: nearSensor - fast distance check to origin

Issue:
The nearSensor as it is now checks whether any face of another object is within the given range of the origin of the sensor's owner.
This is fairly processing intensive, as the sensor needs to check each face, so I classify the near sensor as "heavy".
In a lot of cases it is not necessary to check each face. It might be sufficient to check the distance to the origins of the other objects.

My suggestion:
Add an option to toggle between mesh distance (faces) and object distance (origins).

Benefits:
Much more efficient distance checks.

Drawbacks:
Can be used incorrectly, e.g. when detecting very large or very small objects, so the mesh distance check should remain the default.

Workaround:
Sense the distance with Python code.
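For illustration, a minimal sketch of that workaround in a Python controller (hedged: the "Near" property, the 5.0 range, and the near_by_origin name are assumptions for this example; on older builds the module is GameLogic instead of bge.logic):

from bge import logic

def near_by_origin(cont):
    own = cont.owner
    scene = logic.getCurrentScene()
    # origin-to-origin distance only, no face checks
    hits = [obj for obj in scene.objects
            if obj is not own and "Near" in obj
            and own.getDistanceTo(obj) < 5.0]
    if hits:
        print("objects in range:", [o.name for o in hits])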

Remarks:

  • I do not think this is a lot of work
  • requires GUI changes
  • requires changes to the nearSensor

What do you think?

Just saying, I am always using getDistanceTo(), which is very fast.

…but that runs in a Python controller, not as a C-coded sensor … which would be even faster ;).

I was thinking of this a few days ago. In my game I need to use many near sensors (to use near.hitObjectList()), and this really drains the FPS. It would be really awesome to have an origin-based near sensor!

The near sensor right now is using physics objects and you get a “hit” when Bullet detects a collision. I’m not sure how simple it is to bypass this and do a simple distance check. However, if someone wants to try to implement this, I can point them in the right direction.

AGREE
You could say the sensor can become a monster this way.

Anyway, the current code can also be optimized quite a bit for the face check.
I'm pretty sure it currently uses the same code as the sphere collision.

In my opinion it could be mostly a copy and paste of the current code with one modification: as soon as THE FIRST face tests true, return, since no other face is needed.
There is no need to check ALL faces as it does now (that is only required to make the collision bounce work correctly for the physics).
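To illustrate the early-exit idea in Python (the real sensor code is C++; this is just a sketch of the control flow, with face_centers, origin, and range_sq as assumed inputs):

def any_face_in_range(face_centers, origin, range_sq):
    # A sensor only needs to know IF something is in range, so we can
    # stop at the first positive face instead of evaluating all of them
    # (physics needs all faces to compute the bounce correctly).
    for center in face_centers:
        if (center - origin).length_squared < range_sq:
            return True  # first hit is enough to trigger the sensor
    return False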

Hmm, I have the same doubt.

Maybe the object doesn't even have an "origin" in the "Bullet world".

PS: anyway, it would not reuse exactly the same calculation already done in Bullet.
It would probably use data computed by Bullet, but a sensor is something that has to be added anyway.

It would be cool to have a collision function in Python; that way you could trigger it only once, when needed.
With a logic brick it basically has to be always active (to avoid complexity).
rayCast of course cannot replace the collision check.

I like the proposal, because I imagine that in most cases (with enemies, portals, …) there's no need to check faces.

I think I can do this.
Should I change the near sensor, adding a switch to bypass the physics, or should I make a new distance sensor?

I guess a switch in the GUI would be fine. That way you can reuse the filter (property) field.

Internally it might make sense to add a new sensor (maybe inherited from the existing one). Just an idea.

I have uploaded a compiled Blender version with the modified near sensor.
Now you can check whether the distance check is faster than the normal near sensor.

At the moment the sensor does not return the nearest object or a list of the objects that trigger it.

HG1

Before this feature becomes official, can you check whether the radar sensor can be sped up in the same way? I think it also detects objects on a per-face basis, and having it just check for origins when looking for small objects may allow a boost in speed.

So do you now want per-face detection or origin detection?

I think I can do a simple cone check. This will check whether the origin of the object is inside the cone (see the sketch after this post).
But as with the simple distance check above, I don't know if it will be faster than the Bullet engine, because every sensor must iterate over all game objects.

HG1
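For illustration, a hedged sketch of such an origin-inside-cone test in Python (apex, axis, half_angle, and height are assumed mathutils-style parameters for this example, not the sensor's actual code):

import math

def origin_in_cone(apex, axis, half_angle, height, point):
    direction = axis.normalized()
    to_point = point - apex
    along = to_point.dot(direction)    # depth along the cone axis
    if along < 0.0 or along > height:
        return False                   # behind the apex or past the cap
    radius_at_depth = along * math.tan(half_angle)
    radial = (to_point - direction * along).length
    return radial <= radius_at_depth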

In theory this won't make it faster. Bullet collision is already very fast: it checks the AABB against other objects. Checking the distances to the other objects yourself gains little, because it still requires you to iterate through the entire object list (unless you have some nice octree behaviour too).

I did a quick benchmark with the above file. I added cubes (static, but with Actor enabled). Each cube detects its neighbors (= two cubes).

Result with distance near:


profiling: TestCase.near.distance

good running with:      340 test objects

Result with mesh near:


profiling: TestCase.near.mesh

good running with:      460 test objects

The result:
460 test objects with near.mesh vs. 340 test objects with near.distance.
A ratio of roughly 3:4.

Not quite the result that I expected.

Let's think about the reasons: we are detecting cubes, which have just a few faces. It seems there is no benefit with this small number of faces.

How about objects with more faces?
Let's replace the cube (6 faces) with Suzanne (500 faces):


profiling: TestCase.near.distance

good running with:      340 test objects


profiling: TestCase.near.mesh

good running with:      11 test objects

A ratio of 30:1.

This time there is a real difference. I'm surprised that the performance loss on mesh detection is that large.
But you can see that the cost of the distance check is the same regardless of how many faces we have.

It might be different with more detectable objects in the scene.

Edit:
How about one sensor and many detectable Suzannes:
(just two Suzannes are in range)

The previous results were wrong: Suzanne was added twice per test object.
Here are the results with one Suzanne:


profiling: TestCase.near.distance

good running with:      5300 test objects

Obviously better performance than the other test runs. We have only one near sensor, so this is as expected. Now with mesh near:


profiling: TestCase.near.mesh

good running with:      5300 test objects
bad running with:       5400 test objects; fps: 53

A ratio of 1:2.

My conclusion:

  • When detecting objects with many faces, the distance method is better (at least in the worst case = mesh in range).

  • With many objects in the scene (not all in range), there is no big difference in performance.

Updated the blend file.

Attachments

Benchmark 2.1 - near sensors with suzanne.blend (320 KB)
Benchmark 2.1 - near sensors with cube.blend (276 KB)
Benchmark 2.1 - single near sensor_fixed.blend (322 KB)

Those results seem to make sense. So it looks like this might be pretty useful for more complex objects. It should still be faster than per-face distance checking once you pass a certain threshold of faces, and the advantage should hold across a larger number of objects, since the mesh-based near sensor would also have to loop through all objects to find out if they're close enough, right?

Something is working wrong in Benchmark 2.1 - single near sensor.blend, because the first scene has a higher physics load on startup than the second. If I switch the near sensor in the first scene to the mesh check as well, it is still slower than the second scene.

I made a test with a modified version which breaks out of the for loop after the first positive check.
This increases the number of cubes in the benchmark result for Benchmark 2.1 - near sensors with cube.blend from 420 to 520 cubes. But the mesh check, at 620 cubes, is still faster.
The disadvantage of this modification is that it is impossible to generate a near object list.

I’ll just drop some info here:

The near sensor uses a btSphere physics shape, which should be one of the easiest to calculate collisions on (it’s just a radius check). So, complexity of the parent mesh (the mesh the sensor is on) should not affect performance. However, as shown, the complexity of other objects in the scene can certainly affect the near sensor. Keeping the other shapes minimal (only enabling Actor on those objects that need to be picked up by near sensors and using simple physics shapes) should help the near sensor performance.

As for the distance check, you might want to see if you can hook into Bullet's BVH tree. This is a data structure that Bullet uses to accelerate its collision checks. By using the BVH tree, the collision checks become O(n log n) instead of O(n^2), which could really help your simple distance check. Furthermore, remember that you only need to compare squared distances, which means you can avoid a square root:


if (obj.position - sensor.position).length_squared < distance**2:
    print("collision!")

is faster than


if (obj.position - sensor.position).length < distance:
    print("collision!")

The speed difference can certainly be noticed when you’re talking about hundreds of distance checks.
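As an aside, newer Blender builds also expose mathutils.kdtree to Python, which gives you an acceleration structure without touching Bullet. A sketch, assuming a static set of positions so the tree can be built once (objects and sensor_position are assumed names for this example):

from mathutils import kdtree

tree = kdtree.KDTree(len(objects))      # build once for static objects
for i, obj in enumerate(objects):
    tree.insert(obj.worldPosition, i)
tree.balance()

# query every object within `distance` of the sensor without
# scanning the whole object list
for co, index, dist in tree.find_range(sensor_position, distance):
    print("in range:", objects[index].name)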

I would especially add that if you were to take your first example and cache the distance ** 2, it would be even faster :wink:
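In other words, a small hedged illustration (objects is an assumed list of game objects):

range_sq = distance ** 2    # squared once, outside the loop
for obj in objects:
    # per-object comparison uses no square root and no re-squaring
    if (obj.position - sensor.position).length_squared < range_sq:
        print("collision!", obj.name)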

HG1, you are right. Two Suzannes were added per test object rather than just one. This doubled the number of objects.

I corrected that and updated the above results.

Good to see that you tested as well.