This is an addon I’ve been working on for crowd simulation in Blender. My primary goal is to make a flexible system that could easily be extended, so that I could try different methods for things like predicting collisions.
At the heart of the addon are the fuzzy logic “brains”. Nodes are used to describe how different inputs contribute to the outputs. None of the logic is hard-coded, so every part of the brain is about collecting data, processing data, or outputting data. Hopefully this means a lot of different things should be possible.
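Roughly, the idea looks like this (a simplified sketch with made-up class names, not the addon’s actual code):

```python
# Illustrative sketch only: a brain is a graph of nodes, each of which
# only collects, processes, or outputs data; no behaviour is hard-coded.

class InputNode:
    """Collects a value from the scene, e.g. distance to the nearest agent."""
    def evaluate(self, agent):
        raise NotImplementedError

class FuzzyCurveNode:
    """Processes data: maps a crisp input to a 0..1 membership value
    (assumes high > low)."""
    def __init__(self, source, low, high):
        self.source, self.low, self.high = source, low, high

    def evaluate(self, agent):
        t = (self.source.evaluate(agent) - self.low) / (self.high - self.low)
        return max(0.0, min(1.0, t))

class OutputNode:
    """Applies a processed value to the agent, e.g. steering strength."""
    def __init__(self, source, channel):
        self.source, self.channel = source, channel

    def apply(self, agent):
        agent[self.channel] = self.source.evaluate(agent)
```

Because the brain is just a graph of these three kinds of node, trying a different collision-prediction method should only mean writing a new input or process node.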
The node editor was originally a separate window using Qt (PySide) but is now internal, so I’m still in the process of moving the state tree over to the new nodes. I would also like to be able to use billboards as agents and play image sequences on them rather than animating an armature, but the way I currently handle animation is with action strips. Does anyone have any ideas how this might be achieved?
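One possible route (an untested sketch; the file path, frame count and material setup are all placeholders) would be to give each billboard a material whose image texture is an image sequence, then drive the texture’s frame offset per agent:

```python
import bpy

# Load the first frame of the sequence and mark it as a sequence
img = bpy.data.images.load("//walk_cycle/frame_0001.png")
img.source = 'SEQUENCE'

mat = bpy.data.materials.new("BillboardAgent")
mat.use_nodes = True
tex = mat.node_tree.nodes.new('ShaderNodeTexImage')
tex.image = img
tex.image_user.frame_duration = 32      # number of frames in the sequence
tex.image_user.use_auto_refresh = True  # keep the frame updated in the viewport

# Per agent, start the cycle at a different point so they aren't in lockstep
tex.image_user.frame_offset = 7

# (Then wire the texture's Color output into the material's shader.)
```

The catch is that frame_offset lives on the material’s image user, so each agent would need its own material copy, or a driver on that property, to play out of sync with the others.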
Agents will be able to follow a ground mesh. This is nearly done, but I’m still trying to get my head around the maths involved in aligning vectors (the normal of the ground mesh and the z axis of the agent) while keeping the agent pointing in the right direction!
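For anyone wondering what I mean, this is roughly the construction (using mathutils; it assumes the heading isn’t parallel to the ground normal):

```python
from mathutils import Matrix

def ground_align(forward, ground_normal):
    """Rotation that points the agent's +Z along the ground normal while
    keeping its +Y heading as close to `forward` as possible."""
    z = ground_normal.normalized()
    # Project the desired heading onto the ground plane to get the new Y
    y = (forward - z * forward.dot(z)).normalized()
    x = y.cross(z)  # right-handed basis: X = Y x Z
    # The columns of a rotation matrix are the rotated basis axes
    return Matrix((x, y, z)).transposed()
```

The agent’s rotation then becomes ground_align(heading, hit_normal).to_euler().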
The other thing I’m struggling with is how to add variation to the agents. Can the addon scale individual bones of an armature by a small amount and still expect the animation to work? Does each agent have to have its own shader, or can a shader be made that looks slightly different based on some property of the object? Any ideas would be very welcome.
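For the bone scaling, something like this might be enough (a rough sketch; the bone names are made up, and it only works if the action doesn’t key the scale channels of those bones, since keyed channels would overwrite the variation every frame):

```python
import random

def vary_bone_scale(armature_obj, bone_names, amount=0.05):
    """Give each listed pose bone a small random uniform scale."""
    for name in bone_names:
        bone = armature_obj.pose.bones.get(name)
        if bone is not None:
            s = 1.0 + random.uniform(-amount, amount)
            bone.scale = (s, s, s)

# e.g. vary_bone_scale(bpy.data.objects["Agent.001"], ["spine", "head"])
# (object and bone names are hypothetical)
```

For the shaders, Cycles’ Object Info node has a Random output that gives each object a stable random value, so one material could shift colour per agent without every agent needing its own shader.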
Here are two examples of things I’ve been experimenting with: