Dynamic AI Navigation (Cryengine 3)

Wow, can we have this?

Yeah, I’m aware of this.

It’s not just dynamic navigation generation that’s impressive, but the fact that you can preview it in the viewport (i.e. it’s very fast and multithreaded).

Umbra's tech uses voxels for some of its features, so something like that may be in use there. So far I'm not sure if Recast/Detour can do this.

is this like a volumetric A*?

I don't know what you mean by volumetric, but no.
Dynamic means that the terrain and props can be destroyed and the AI will know it and will find a new path, or at least that is what I understand.

As far as I know, BGE has Recast, which is essentially this.

http://www.blender.org/documentation/blender_python_api_2_65_release/bge.types.html?highlight=navmesh#bge.types.KX_NavMeshObject

http://code.google.com/p/recastnavigation/

The Python interface seems a bit simple, but I'm pretty sure it lets you rebuild the mesh while the game is running.

Yes, we have that. I just do not know if we can edit (join, break) navigation meshes in-game.

Volumetric A* uses a map to generate navigable terrain in the fewest cells possible.

So imagine A*, but using cubes in 3D, because people can jump, etc. I was reading about this in an MIT doc about a month ago; it talked about real-time, ever-updating decision trees and how they could be run on any modern smartphone. The only issue is that everything has to be divided into cubes that are x*x.

I will look up the article, but it talked about using the same software to control a dynamic walker as well as a flying jet, helicopter, etc., by understanding its handling, and also something about using sonar in real time to adjust a “softness” value assigned to each cube, based on density and estimated thickness (grass, etc.)…
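For what it's worth, here is a minimal sketch of that cubes-in-3D idea: plain A* over a grid of unit cubes. Everything here (the 6-way movement, unit costs, Manhattan heuristic, and all names) is my own assumption for illustration, not the actual method from the MIT doc.

```python
import heapq

def astar_3d(blocked, start, goal):
    """A* over a 3D grid of unit cubes; 'blocked' is a set of (x, y, z) cells."""
    def h(c):  # Manhattan-distance heuristic (admissible on a unit grid)
        return sum(abs(a - b) for a, b in zip(c, goal))
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from = {}                            # doubles as the closed set
    g = {start: 0}
    while open_set:
        _, cost, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue                          # already expanded with a better g
        came_from[cell] = parent
        if cell == goal:                      # walk the parent chain back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dx, dy, dz in steps:
            nxt = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
            if nxt in blocked or g.get(nxt, float("inf")) <= cost + 1:
                continue
            g[nxt] = cost + 1
            heapq.heappush(open_set, (cost + 1 + h(nxt), cost + 1, nxt, cell))
    return None  # no path exists

# A wall of blocked cubes with no gap at z=0 forces the path to climb over:
wall = {(1, y, 0) for y in range(-2, 3)}
path = astar_3d(wall, (0, 0, 0), (2, 0, 0))
```

The same search works unchanged in 2D or 3D; the "because people can jump" part is just the `(0, 0, ±1)` moves in `steps`.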

I would also assume that you could have little objects everywhere that are marked safe until the condition changes, and use those:
like my piece-of-candy AI, but with the ability to “select” a path through many pieces of candy…

o piece of candy, o piece of candy, o I got the player!!

Using a set of vision cones, and the ability to forget waypoints and remember them based on weights, you can navigate with A* by activating waypoints: steer toward one until it is hit, then light up the next one…
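One way the forget/remember-by-weight part could be sketched (the decay rate, threshold, and waypoint names are all made-up numbers, not anything from an actual engine):

```python
# Each waypoint carries a confidence weight that decays every tick and is
# refreshed to full when the agent's vision cone can currently see it.
# A planner would then only route through waypoints above the threshold.
DECAY = 0.9                # per-tick memory decay (hypothetical value)
REMEMBER_THRESHOLD = 0.2   # below this, the waypoint is "forgotten"

def tick(weights, visible):
    """Decay all waypoint weights, then refresh the ones currently in sight."""
    for wp in weights:
        weights[wp] *= DECAY
    for wp in visible:
        weights[wp] = 1.0  # seen this tick: fully remembered again
    return {wp for wp, w in weights.items() if w >= REMEMBER_THRESHOLD}

weights = {"bridge": 1.0, "ledge": 1.0}
for _ in range(20):                         # the ledge goes unseen for 20 ticks...
    usable = tick(weights, visible={"bridge"})
# ...so its weight decays below the threshold and it drops out of 'usable'
```

After those 20 ticks only `"bridge"` is left in `usable`; the `"ledge"` waypoint has been forgotten until the agent sees it again.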

http://aigamedev.com/open/review/near-optimal-hierarchical-pathfinding/

This is neat, but not the one I was looking for… but I think it is what you are looking at.

Volumetrics, as in 3D A*.

What??? O_O’ And is it “real”? Real-time, without lag?

I never used it, to avoid actuator lag…

It seems to be absolutely “real time” (Python time) :slight_smile: cool.

thanks for the link JaredSmith!

So this thing is no longer just a dream:

path = nav.findPath(pos1, pos2)
if not path:
    # change target

All in one little fragment of code!!! :smiley:

@BluePrintRandom - It would be appreciated if you could please merge your posts together.

@martin.hedin - Interesting. It doesn't look that impressive to me, though. I'm not sure if Recast can do this (it probably can, though maybe not as visually interesting), but I've made node maps myself, and you can update the map on the fly and find a new path dynamically. It probably wouldn't be much work to update the map every half second or so and still keep a high FPS.
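A toy version of that node-map idea looks something like this (the graph, node names, and search are my own sketch, not Recast's internals; a plain breadth-first search stands in for A* to keep it short):

```python
from collections import deque

def find_path(graph, blocked, start, goal):
    """Breadth-first search over a node map, skipping currently blocked nodes."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:                 # reconstruct by walking parents back
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in prev and nxt not in blocked:
                prev[nxt] = node
                queue.append(nxt)
    return None                          # goal unreachable right now

# Hypothetical map: two routes from A to D, one of them over a bridge node B.
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
blocked = set()
assert find_path(graph, blocked, "A", "D") == ["A", "B", "D"]

blocked.add("B")                         # the bridge gets destroyed mid-game...
assert find_path(graph, blocked, "A", "D") == ["A", "C", "D"]  # ...so re-path
```

"Updating the map every half second" is then just mutating `blocked` (or the adjacency lists) and re-running the search, which is cheap on a graph this coarse.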

What about having each “entity” keep its own list of nodes, and whether each one is safe or not…

Imagine a game where a guy walks across a bridge, and it is safe… then you break the light and cut the bridge…

I know this would get complicated quickly… but it would allow for very Metal Gear Solid-style debauchery…
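The per-entity idea could be sketched like this (entirely hypothetical names and structure): each guard keeps its own beliefs about node safety and only updates a belief when it actually witnesses the change, so two guards can disagree about the same bridge.

```python
class Guard:
    """Each entity keeps its OWN beliefs about which nav nodes are safe."""

    def __init__(self, nodes):
        self.believes_safe = {n: True for n in nodes}

    def observe(self, node, actually_safe):
        # Beliefs only change on observation; unseen changes go unnoticed.
        self.believes_safe[node] = actually_safe

    def will_use(self, node):
        return self.believes_safe[node]

nodes = ["bridge", "door", "ledge"]
witness, other = Guard(nodes), Guard(nodes)

witness.observe("bridge", False)    # this guard saw the bridge get cut...
assert not witness.will_use("bridge")
assert other.will_use("bridge")     # ...but the other one still walks onto it
```

That mismatch between belief and world state is exactly what makes the sneaking-game tricks work: the guard who didn't see you cut the bridge still paths across it.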

Yeah, I loved this when I found it a while back, though I looked at it in a different way to what's discussed above. In my tests (which I never carried forward) I would simply compute the bounds of a low-poly mesh (boxes, spheres, etc.), place a trace point at the centre of the object and at every node of the low-detail collision shape (box, sphere, etc. again), so you end up with generally 4-12 trace points on the mesh; raycast those against other geometry and map the results to the navigation mesh. This system looks nice; I'd love a look at some code :slight_smile: but I doubt that will happen.

Just use a vision cone that “sees” waypoints based on how observable they are: if he has a light, he can see the waypoint; if not… he might walk off a ledge…

I can envision shooting out a spotlight, placing a mine, and then throwing a rock at someone, and then hiding…

@BluePrintRandom, that wouldn't work. Even a cone trace from the viewpoint would use more cycles. My system used more cycles, by the look of it, than the one shown: if a tree falls down, it doesn't affect the nav mesh until it touches the terrain layer. My system was designed for falling objects whose collapse is curved (e.g. when a building falls over it doesn't all just hit the terrain at one angle; in most physics situations the object will disintegrate with an edge threshold that, when simplified, is a curve from base to top). So for really nice dynamic path collision on the nav mesh at large scales (a massive skyscraper falling, for example), you're going to want extra raycast points simplified over the entirety of the collapse. That means the AI could still make a realistic run for it under dynamically falling objects, evaluated section by section. But that's why I didn't take it much further: it eats cycles.

PS: you were giving me loads of shit the other day; wind your neck in. But I wish you the best, just be less aggressive.

I am not talking about what is ideal for CPU iterations etc.; I am talking about what would be most realistic: if I can't see a waypoint, I don't know what condition it is in…

and I can die from it…