Dealing with a large number of agents

Hey there,

With about two hundred AI objects in a scene, what would be a fast way to pass them values?

English might be failing me with that question. Time to elaborate:
Say the AI are put in a “stand-by” state when they don’t have any tasks; a central manager keeps track of these objects and runs a function on each of them to determine what it should do next.

This alone works fine in small quantities, but whenever the queue grows too big it spells trouble.
I waltzed around it by limiting how many elements of the queue the manager can go through in a single frame, but that only works if there are never enough AI objects onscreen for their braindead frames to be too noticeable. It bugs me. Slow Python loops are back at it again.

Thought maybe I could forego the function altogether and instead build a numpy array with a custom datatype that holds the values I’d evaluate, then use masking to assign corresponding new values onto a separate array that agents on “stand-by” can look up on their own. Sounds good to me, but I’ve been testing, googling and theorizing for hours now and my brain is spaghetti.

Any clues, hints or ideas?

I have the same issue: at 200 harvesters (RTS), frames will drop when they’re idle. The only thing each harvester does is check whether a list/array has something in it (next crystal to mine, or head back to base).

so I do it like this:

    crystals = [crystal for crystal in own.scene.objects if crystal_type in crystal]
    if crystals:
        own['target'] = min(crystals, key=own.getDistanceTo)
    else:
        own['target'] = own.scene.objects[own['refinery']]

So I join you in your question.

This may or may not be helpful to you guys; I’m not sure, because I don’t know the full details behind your agents and harvesters. Whenever I have large numbers of independently running objects, I make sure their heavy code doesn’t all run on the same frame.

Can any of your objects survive by only running once every 15, 30 or even 60 frames?

Zombies are slow to react in most situations, so I would imagine taking half a second to lock on to a new target isn’t that far-fetched. You could assign a frame number to each AI object and have your manager only update the objects whose number matches the current frame. Make a counter that resets every 30 or 60 frames, or however many you think should cover your AI objects.
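A minimal sketch of that frame-slot scheduling (the `Agent`/`Manager` names are made up for illustration): each agent lands in one of N slots, and the manager only runs the slot matching the current frame counter, so every agent still thinks once per cycle but the cost is spread across frames.

```python
# Frame-slot scheduling sketch: spread agent updates across a cycle of frames.

class Agent:
    def __init__(self, name):
        self.name = name
        self.updates = 0

    def think(self):
        self.updates += 1  # stand-in for the expensive AI evaluation

class Manager:
    def __init__(self, agents, period=30):
        self.period = period  # each agent thinks once per `period` frames
        self.frame = 0
        # spread agents evenly across the slots
        self.slots = [agents[i::period] for i in range(period)]

    def update(self):
        # only the agents in this frame's slot pay their think() cost now
        for agent in self.slots[self.frame % self.period]:
            agent.think()
        self.frame += 1

agents = [Agent(i) for i in range(200)]
manager = Manager(agents, period=30)
for _ in range(30):  # one full cycle: every agent thinks exactly once
    manager.update()
```

With 200 agents and a period of 30, each frame only runs 6 or 7 `think()` calls instead of 200.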

I think the same idea would apply to Cotaks’ harvesters. Alternatively, instead of checking the distance to every crystal, you could break your map down into quadrants (just as a lookup for your harvester) and have it move to the nearest quadrant with a target. You can fine-tune whatever specific targeting you wish to use once the harvester gets close enough to the crystal.
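The quadrant idea could be sketched as a grid of buckets (cell size and all names here are made up): crystals are bucketed by cell once, and a harvester searches outward ring by ring from its own cell instead of sorting the whole list by distance.

```python
# Grid-bucket lookup sketch: bucket targets by map cell, search outward.

CELL = 16.0  # cell size in world units (arbitrary for the example)

def cell_of(pos):
    return (int(pos[0] // CELL), int(pos[1] // CELL))

def build_buckets(crystals):
    buckets = {}
    for pos in crystals:
        buckets.setdefault(cell_of(pos), []).append(pos)
    return buckets

def nearest_in_grid(pos, buckets, max_rings=8):
    cx, cy = cell_of(pos)
    for ring in range(max_rings):
        # collect candidates from the square ring of cells at this radius
        found = []
        for dx in range(-ring, ring + 1):
            for dy in range(-ring, ring + 1):
                if max(abs(dx), abs(dy)) == ring:
                    found.extend(buckets.get((cx + dx, cy + dy), []))
        if found:
            return min(found, key=lambda p: (p[0] - pos[0]) ** 2 + (p[1] - pos[1]) ** 2)
    return None
```

Note this is “nearest-ish” (a target just over a cell boundary can beat one found a ring earlier), which is usually fine for target picking.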

Just a couple of things to consider:

  • If it absolutely doesn’t need to run every frame then don’t run it every frame. Use frame counters or delays.
  • Don’t look through the scene objects list every time you need something. Make your lists smaller and more specific at map start. You can always update a list at certain times, or, if a new object needs to be accounted for, add it to your list on the spot.
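The second point could be a tiny registry (names illustrative): build per-category lists once at map start, then keep them current on spawn/despawn instead of rescanning the scene.

```python
# Registry sketch: per-category object lists maintained incrementally.

registry = {"crystal": [], "harvester": []}

def register(obj, kind):
    # called once when an object spawns
    registry[kind].append(obj)

def unregister(obj, kind):
    # called when an object is removed from the scene
    registry[kind].remove(obj)

# at map start, fill the lists once; spawned objects call register() later
register("crystal_01", "crystal")
register("harvester_01", "harvester")
```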

Do an agent priority pass where agents closer to the camera get more CPU time.

I had a good nap and my brain seems to be functional again.

Completely in the abstract, the way I’m thinking is:

  1. Whenever agents complete a task, pass relevant values to the manager and have the manager return an ID that corresponds to an index within the iterable where these values are being stored.

  2. Build a structured array to ease masking through attributes; arr['attr'] holds the value of attr for all idle agents, so we can index like arr[ arr['attr'] == another_value ] to get every row where the condition is true.

  3. Use this method to run our evaluations, then store output in a new array; have the output be stored at the index the corresponding agent will be looking up.
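Step 2 and 3 could be sketched like this (field names made up; note that plain structured arrays index fields as `arr["attr"]`, while `np.recarray` would allow `arr.attr`):

```python
# Structured-array masking sketch: one record per idle agent, masks select
# groups, and results are written into an output array by agent index.
import numpy as np

dtype = np.dtype([("id", np.int32), ("hunger", np.float32), ("state", np.int8)])
agents = np.zeros(4, dtype=dtype)
agents["id"] = np.arange(4)
agents["hunger"] = [0.1, 0.9, 0.5, 0.8]

# mask: every idle agent whose hunger crosses a threshold...
mask = agents["hunger"] > 0.6

# ...gets a new order written into an array the agents can look up by index
orders = np.zeros(len(agents), dtype=np.int8)
orders[mask] = 2  # e.g. 2 == "go eat"
```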

Now this works inside my head but I’m sure there’s quite a few more layers of complexity to it.

One complication: working with direct object references likely isn’t possible with standard numpy structured arrays; if it is, then I haven’t been stubborn enough to get it working. Subclassing np.ndarray to add extra functionality for game objects might be a thing to try.

With BGMC going on and all, this might be too time consuming to try and pull it off right now.
I’ll come back at it in a week or so and see if anything pops up within my thick skull.

Hmm, I think I’m gonna add an update/check function with a randomized countdown for every agent, so that not every harvester updates at the same time.

yes, that would be a good option to add; maybe I’ll combine it with my own ‘solution’.

so that I basically get 1 to 5 harvesters max updating per X ticks. That should reduce the logic usage by a lot, I guess.
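A minimal sketch of that randomized countdown (names and tick counts are made up): each harvester waits a random number of ticks before its next check, so the expensive searches never all land on the same frame.

```python
# Randomized per-agent countdown sketch: desynchronize update ticks.
import random

class Harvester:
    def __init__(self, rng):
        self.rng = rng
        self.countdown = rng.randint(1, 30)  # random initial offset
        self.checks = 0

    def tick(self):
        self.countdown -= 1
        if self.countdown <= 0:
            self.checks += 1                      # stand-in for the target search
            self.countdown = self.rng.randint(15, 30)  # random next interval

rng = random.Random(42)  # seeded for reproducibility
swarm = [Harvester(rng) for _ in range(200)]
for _ in range(300):     # 300 game ticks
    for h in swarm:
        h.tick()
```

The decrement itself still runs every tick, but it is trivially cheap compared to the search it gates.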

not updating every frame is already something I do, but at high numbers of agents/harvesters the logic still goes through the roof, cutting the fps down.

It’s not moving and animating that typically cost a lot; it’s pathfinding/AI logic.

are you pathfinding every time the logic runs, or every X frames?

for me: in idle mode it does nothing more than loop through a list and then set the crystal or base location (see my first post).

with 100 harvesters, logic goes to ~50%
with 200 it goes higher, and frames drop to around ~45

When they’ve got a target and start to move, logic drops back to ~5%.


ahh, if you want closest-X on the cheap: build a KDTree of the bases,
and rebuild the tree only when constructing a new base.

as for the crystals, use a tree of small trees

use ‘radius’ and do:

    for tree in MacroKdTree.findRange(point, radius):
        kdtree = KdTrees[tree[1]]
        for entry in kdtree.findRange(point, radius):
            if entry[1] not in harvested[tree[1]]:
                ...  # candidate crystal that hasn't been claimed yet

where KdTrees[index] = a smaller kdtree

I can make an example if you wish :smiley:
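As a standalone, runnable sketch of the range-query idea, here is a minimal 2D k-d tree (inside Blender’s Python, `mathutils.kdtree.KDTree` already provides `insert`/`balance`/`find_range` and would be the practical choice):

```python
# Minimal 2D k-d tree with a range query, as an illustration of the idea.

def build(points, depth=0):
    """Build a k-d tree node: (point, split_axis, left, right)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def find_range(node, point, radius, out=None):
    """Collect every stored point within `radius` of `point`."""
    if out is None:
        out = []
    if node is None:
        return out
    pos, axis, left, right = node
    d2 = (pos[0] - point[0]) ** 2 + (pos[1] - point[1]) ** 2
    if d2 <= radius * radius:
        out.append(pos)
    # only descend into subtrees that can intersect the search radius
    if point[axis] - radius <= pos[axis]:
        find_range(left, point, radius, out)
    if point[axis] + radius >= pos[axis]:
        find_range(right, point, radius, out)
    return out
```

The “tree of small trees” is then just this structure twice: the macro tree stores cell centers, and each hit indexes a per-cell tree of crystals.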

Hey there, resuming the topic now that BGMC is over.

Thinking in terms of building code for an RTS, I’d like to tinker around and see if agents can be controlled from vectors; sort of like a batch group for AI. Furthermore, the game manager itself could be rolled in with a grid system in place to make scene building simpler, and do some maintenance routines like handling broken object references or stray projectiles.

Not sure if I can succeed alone in doing something like this so any advice is appreciated.

what about starting with the furthest agent out,
and then, if the path crosses other agents, assigning ranks that way?

I like the sound of that.
Still want to see if a handle of sorts can be put on groups of objects, so as to tell them what to do in one go rather than going through them individually. Either there’s a funky pythonic way, or I need to resort to another language.

Say, couldn’t game logic be written in C++? That would solve a bunch of problems.

I would get the furthest actor, path from him, and if the path crosses other agents, store it; then feed a path to each agent in the line and pop them from the ‘waiting for path’ list.

Also, remember that asynchronous jobs using await and yield can process big jobs over 2 or more frames.

@wkk showed me it.
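A minimal generator-based sketch of that idea (names and the per-frame budget are made up): each call to `next()` does one slice of work and yields, so a big job spreads across frames without blocking any single one.

```python
# Spread a big job over frames with a generator coroutine.

def pathfind_job(steps):
    """Stand-in for an expensive search, yielding after each work budget."""
    done = 0
    while done < steps:
        chunk = min(50, steps - done)  # per-frame work budget
        done += chunk                  # pretend to expand `chunk` nodes
        yield done                     # hand control back until next frame

job = pathfind_job(steps=230)
frames = list(job)  # in a game loop you'd call next(job) once per frame
```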

^Noice, I’ll try that if my crazy idea turns out not to work.

MY CRAZY IDEA: subclass KX_Scene. Override the addObject method and have all game objects using agent logic register themselves on init, so they are known to the agent array. The agent array is, in turn, yet another subclass of numpy ndarray that has the added coolness of creating subarrays of object attributes. I can mask with array[ array.attr (operator) value ] type comparisons and do funky complex operations blazingly fast.

Furthermore, these attributes can be modified in the same way for a whole group of agents without a single Python loop. Getting down with the getattribute method, we can have agents look up the values for specific attributes directly from our array rather than their own attribute dict. So, with the appropriate waltzarounds, methods that take advantage of numpy’s speed can be built to operate on batches of agents. Revolutionary!
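The attribute-redirect part could be sketched like this (all names made up; `__getattr__` is enough here since it only fires for names not found through normal lookup, so a full `__getattribute__` override isn’t needed):

```python
# Shared-store attribute lookup sketch: agents read certain attributes from
# a shared per-field array instead of their own attribute dict.

SHARED = {"order": [0, 0, 0]}  # one slot per agent; could be a numpy column

class AgentProxy:
    _shared_fields = {"order"}

    def __init__(self, index):
        self.index = index  # this agent's slot in the shared arrays

    def __getattr__(self, name):
        # only reached when normal attribute lookup fails
        if name in self._shared_fields:
            return SHARED[name][self.index]
        raise AttributeError(name)

agents = [AgentProxy(i) for i in range(3)]
SHARED["order"][1] = 7   # the manager writes once into the shared store...
value = agents[1].order  # ...and the agent sees it on lookup
```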

I’m working out what the appropriate waltzarounds should be. It may take a few days, but I’m confident I can get there pretty soon.

Turns out I might not be so crazy after all – numpy arrays CAN handle game types.
In turn, game type methods can be vectorized. So, time to crack out the ufuncs.

basic example:

    from bge import logic
    from bge.types import KX_GameObject
    import numpy as np

    cont = logic.getCurrentController()
    own = cont.owner

    # build an object array, plus another for the argument vectors
    a = np.asarray(own.scene.objects.filter("", "agent"), dtype=object)
    moveArr = np.empty(a.shape, dtype=object)

    # fill the args array with values
    # (ironically, I did so with a for loop out of laziness)
    for i in range(len(moveArr)):
        moveArr[i] = [0, 1, 0]

    # define the vectorized function:
    # frompyfunc(function, number of args, number of outputs)
    # applyMovement returns None, and a ufunc needs at least one output,
    # so we take one and just ignore it
    applyMovement_ufunc = np.frompyfunc(KX_GameObject.applyMovement, 3, 1)

    def run():
        applyMovement_ufunc(a, moveArr, False)

Testing out this approach with map generation: calculating the elevation of a few thousand vertices in a single frame is actually doable, mother of god. Could this be love?

I’m sure cons and limitations will make themselves known in time, but I will still try to dispatch orders to agents this way whenever I get to it. Boy, if the speed up is significant enough I might be able to make LIFEBANE killboxes on an even crazier scale.


Do you know if this somehow multithreads the processing? Until now, I thought everything in the BGE runs in a single thread, but if some Python magic would allow using more resources in parallel without much pain or side effects, this might be a nice trick to consider for speed-ups.


My understanding is limited, so wild guess: it depends on which build of numpy Blender is using. On my install, none of the required libraries are reported as present, so setting the number of threads through os.environ does nothing.

Whether these libraries would work with Blender’s python, no idea.
Whether python’s threading or multiprocessing modules are of any use here, also no idea.

I have other things in mind at the moment, but this is worth digging into.

Async - await / co-routines is also worth researching.

an agent can ‘think’ over multiple frames,

what this ends up being in my A* version is waiting for a composite of many small graphs to finish.

  1. Is the goal on the same tile as the start? If yes, try to path the tile; if that fails, path all adjacent tiles plus the tile.

  2. If the goal is not on the tile, is it on a tile adjacent to the start tile? If it is, pathfind the start-tile + end-tile graph.

  3. If it is not on an adjacent tile, pathfind the ‘macro graph’ (a graph of graphs); if the composite graph required by the path does not exist, create it, then pathfind it.

this is doable in realtime for a very large map, and we can save the graphs to disk.
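The three-tier decision above could be sketched as a dispatcher (all names hypothetical) that picks the smallest graph worth running A* on before doing any actual search:

```python
# Tiered graph selection sketch: choose the cheapest graph that can
# contain the path, before running the actual pathfinder on it.

def choose_graph(start_tile, goal_tile, adjacency):
    if start_tile == goal_tile:
        return "single-tile"       # tier 1: path within one tile graph
    if goal_tile in adjacency.get(start_tile, ()):
        return "start+goal tiles"  # tier 2: stitch the two tile graphs
    return "macro graph"           # tier 3: graph of graphs

# toy tile adjacency: A - B - C
adjacency = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
```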

Ideal: queuing tasks + going by priority + vectorization

Seems to me that’s one way to build a proper manager. It’s not just agent handling that can take serious advantage here. I really need to spend more time testing stuff out.