I ran a quick benchmark with the above file. I added cubes (static, but flagged as actors). Each cube detects its neighbors (two cubes each).
Result with distance near:
profiling: TestCase.near.distance
good running with: 340 test objects

Result with mesh near:
profiling: TestCase.near.mesh
good running with: 460 test objects

The result: 460 test objects with near.mesh vs. 340 test objects with near.distance, a ratio of 3:4 (distance : mesh).
Not quite the result I expected.
Let's think about the reasons: we are detecting cubes, which have just a few faces. It seems there is no measurable cost at this face count.
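To make the intuition concrete, here is a toy cost comparison (my own sketch, not BGE's real implementation): a distance test is a single check per object pair, while a mesh test has to look at each face, so its cost grows with the face count.

```python
import math

def distance_near(center_a, center_b, radius):
    """Broad distance test: always exactly one check per object pair."""
    return math.dist(center_a, center_b) <= radius

def mesh_near(face_centers, point, radius):
    """Naive per-face test: one check per face (6 for a cube, ~500 for
    Suzanne). Using face-center points here is a simplification."""
    return any(math.dist(f, point) <= radius for f in face_centers)

# Unit cube: six face centers.
cube_faces = [(0, 0, 1), (0, 0, -1), (0, 1, 0),
              (0, -1, 0), (1, 0, 0), (-1, 0, 0)]
print(mesh_near(cube_faces, (0, 0, 2), 1.5))  # True, after at most 6 checks
```

With only six faces the extra work is tiny, which would explain why the cube test shows no penalty for mesh detection.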
How about objects with more faces?
Let's replace the cube (6 faces) with Suzanne (about 500 faces):
profiling: TestCase.near.distance
good running with: 340 test objects
profiling: TestCase.near.mesh
good running with: 11 test objects
A ratio of roughly 30:1.
This time there is a real difference. I'm surprised that the performance loss with mesh detection is that large.
But as you can see, the cost of the distance check stays the same no matter how many faces the mesh has.
It might be different with more detectable objects in the scene.
Edit:
How about one sensor with many detectable Suzannes (just two Suzannes in range)?
The previous results were wrong: Suzanne was added twice per test object. Here are the results with a single Suzanne:
profiling: TestCase.near.distance
good running with: 5300 test objects
Obviously better performance than the other test runs. We have only one near sensor, so this is as expected. Now with mesh near:
profiling: TestCase.near.mesh
good running with: 5300 test objects
bad running with: 5400 test objects; fps: 53
A ratio of roughly 1:1.
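These numbers fit a simple toy cost model (my own assumption about how the engine might work, not its real code): every object gets a cheap bounding test, and in mesh mode only the objects that pass it additionally get a per-face test. With thousands of objects but only two in range, the per-face cost almost vanishes.

```python
def near_cost(num_objects, in_range, faces_per_object, mode):
    """Rough operation count for one near sensor per frame.
    Assumes a cheap test per object, plus a per-face test for
    in-range objects when using mesh detection."""
    broad = num_objects                                   # one cheap test each
    narrow = in_range * faces_per_object if mode == "mesh" else 0
    return broad + narrow

# 5300 Suzannes (~500 faces), only 2 in range:
print(near_cost(5300, 2, 500, "distance"))  # 5300
print(near_cost(5300, 2, 500, "mesh"))      # 6300
```

6300 vs. 5300 operations is only a small overhead, which would explain why both modes break down at nearly the same object count here.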
My conclusion:
- When detecting objects with many faces, the distance method is better (at least in the worst case, i.e. a mesh in range).
- With many objects in the scene (not all in range), there is no big difference in performance.
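These two conclusions suggest a common pattern (my own sketch, not something from the benchmark file): use the cheap distance check as a broad phase and run the expensive mesh test only on objects that pass it.

```python
import math

def within_distance(pos_a, pos_b, radius):
    """Cheap broad-phase test: constant cost per object."""
    return math.dist(pos_a, pos_b) <= radius

def mesh_overlap(face_centers, point, radius):
    """Expensive narrow-phase stand-in: cost grows with face count.
    Face-center points are a simplification of a real mesh test."""
    return any(math.dist(f, point) <= radius for f in face_centers)

def detect(sensor_pos, objects, radius):
    """Run the mesh test only for objects whose center is in range."""
    hits = []
    for center, faces in objects:
        if within_distance(sensor_pos, center, radius):   # broad phase
            if mesh_overlap(faces, sensor_pos, radius):   # narrow phase
                hits.append(center)
    return hits

# Only the first object survives both phases:
objects = [((0, 0, 0), [(0, 0, 0.5)]),     # in range
           ((100, 0, 0), [(100, 0, 0)])]   # culled by the broad phase
print(detect((0, 0, 0), objects, 2.0))     # [(0, 0, 0)]
```

This way the per-face cost is only paid for the handful of objects that are actually close, matching the benchmark's "worst case = mesh in range" observation.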
I updated the blend file.
Attachments:
- Benchmark 2.1 - near sensors with suzanne.blend (320 KB)
- Benchmark 2.1 - near sensors with cube.blend (276 KB)
- Benchmark 2.1 - single near sensor_fixed.blend (322 KB)