any tips on making a basic script AI?
Should we guess? What AI, what is it? A dog, a cat, a human? An enemy? Provide more info before you get any support.
This is a good read, the first half goes over the basics of how AI works, the later pages are interesting but not as useful.
Look into finite state machines, if you’re just starting AI they’re probably your best bet.
Mattline1, very interesting article, thanks! I have occasionally been sketching game AI, in which the keyword is hierarchical planning (short term - long term). Long-term goals are divided into short-term subgoals, and short-term opportunities and unpredicted events may change long-term plans. I'm slowly implementing a game world to start experimenting with how to build such an AI. What I plan at the moment is to use simulations and such to create some sort of database, mapping situations and actions to probable results, to be used as a sort of "navmap" to create & update the goal hierarchy.
I’ve typed this stuff up before, so I shan’t do it again. Here’s some copy/paste from me.
There are two main types of AI (before you get into neural networks…): the algorithmic method and the finite state machine. Both have their advantages and disadvantages.
**The Algorithmic Method:**
The algorithmic method uses a collection of variables added together to produce an outcome. So we take senses like hearing and sight and try to emulate them. Then we combine them to get the final state of the AI.
It can behave rather realistically, at the cost of some complexity and processing time.
These are some extracts from a PM I sent to blueprintrandom a little while ago. He asked a few questions, and this was my reply:
You haven't described the overall goal of the AI. What does it actually do? Does it chase the player? Does it just randomly navigate?
I won't be writing code for you; instead I'll try to teach it to you. To get the most out of this, do not copy and paste any of the code I give you. It may take longer, but you will learn more from it.

"If I hear or see player, cast ray at player"
Well, how do we 'hear' or 'see' things in the game engine? We do it with rays and distances, so you'll be casting rays at the player before then.
So, let’s start our script the normal way:
```python
import bge
cont = bge.getCurrentController()
own = cont.owner
```
And then add on some variables:
```python
player = [obj for obj in bge.getCurrentScene().objects if 'Player' in obj][0]
alertness = 0
hearingQuality = 2
sightQuality = 30
perception = 15
```
(In case you're wondering about the player line, it is a condensed way of searching through lists; the `[0]` at the end takes the first match. If you want me to explain it, just ask. It assumes that the player object has the property 'Player'.)
Now that we have that done, we have to decide what makes our enemy 'aware' of the player.
We’ve defined a variable called ‘alertness’ and this will be used to represent the AI’s perception of the player.
You have given me two different 'perceptions': hearing and sight. If the player is close, then the AI can hear him, and if he is far away, the AI might fail to see him. So we need to combine the distance and the line of sight to get a meaningful alertness level.
```python
dist = own.getDistTo(player)
hitOb = own.rayCastTo(player, 100, 'Player')
if hitOb != None:  # i.e. the ray hits the player
    seePlayer = 1
else:
    seePlayer = 0
```
Distance is obvious; the raycast less so. What the rayCastTo function does is return the object it hits, or None. With that if/else I turn it into a numerical value.
Now we have to combine them into a single 'alertness' value:
```python
hearing = dist * hearingQuality
sight = seePlayer * (sightQuality / dist)
```
Once again, hearing is obvious: we've just scaled the distance to represent the AI's ability to hear the player.
For sight we have to do a bit more work.
First off, by multiplying by seePlayer, we always get 0 if the AI can't see the player. But if it can, and the player is far away (dist > sightQuality), we get a small number. If he is close (dist < sightQuality), then it is a large number.
So now we can decide if the AI can 'perceive' the player:
```python
alertness = hearing + sight
if alertness > perception:
    '''We can see the player'''
else:
    '''We can't see the player'''
```
You OK with all of that? Do you need more explanation?
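For reference, the same perception math can be pulled out of the BGE and run as plain Python. The tuning numbers are the ones from the script above; the function names are mine:

```python
# Tuning values from the script above (assumptions, tweak freely).
HEARING_QUALITY = 2
SIGHT_QUALITY = 30
PERCEPTION = 15

def alertness(dist, see_player):
    """Combine hearing and sight into one alertness value.

    dist       -- distance from the AI to the player
    see_player -- 1 if an unobstructed ray hit the player, else 0
    """
    hearing = dist * HEARING_QUALITY
    sight = see_player * (SIGHT_QUALITY / dist)
    return hearing + sight

def perceives_player(dist, see_player):
    """True if the combined alertness crosses the perception threshold."""
    return alertness(dist, see_player) > PERCEPTION
```

For example, at distance 10 with line of sight, alertness is 10*2 + 1*(30/10) = 23, which is above the threshold of 15.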
From here on it looks like pathfinding, but there are much easier ways to do that now, namely the navmesh feature added in later versions of Blender. I would use the navmesh and turn the logic bricks on/off from this script with the controller's activate()/deactivate() methods.
**The Finite State Machine:**
In an FSM you deal entirely with Trues and Falses. You define some states (e.g. seeking the player, attacking the player, fleeing, etc.) and use ifs and thens to navigate between them.
This is a very simple method, so I shan't go into full detail here. There is plenty available on Wikipedia.
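As a sketch, a patrol/attack/flee machine of that kind might look like this in Python (the state names and the two boolean "senses" are invented for the example):

```python
def next_state(state, sees_player, low_health):
    """Return the new FSM state given the current one and two boolean senses.
    Each state only checks the conditions that can move it elsewhere."""
    if state == "patrol":
        if sees_player:
            return "attack"
    elif state == "attack":
        if low_health:
            return "flee"
        if not sees_player:
            return "patrol"
    elif state == "flee":
        if not sees_player:
            return "patrol"
    return state  # no transition fired: stay put
```

You would call this once per logic tick and act according to the returned state; in the BGE the senses would come from sensors or from a perception script like the one above.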
That enough to get you started?
AI has two main tasks:
A) Decide what to do
B) Do it
Both are independent of each other but use each other's results. Both can run in parallel.
Doing nothing is a do-it option as well ;).
A) Pathfinding (it does not matter how)
B) Pathfollowing (it does not matter how)
A) and B) are independent of each other.
While your NPC follows a previously decided path, the AI can look for a new one ;).
You can even replace the implementation of one of them without affecting the other.
As this is just common information it might help you to understand how to structure your project.
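The split described above can be sketched as two independent pieces: a planner that produces paths and a follower that only consumes them. All the names here are illustrative:

```python
class Follower:
    """Walks an already-decided path, one waypoint per tick.
    It knows nothing about how the path was found."""
    def __init__(self):
        self.path = []

    def set_path(self, path):
        # A new plan can arrive at any time, replacing the old one.
        self.path = list(path)

    def step(self):
        if self.path:
            return self.path.pop(0)
        return None  # doing nothing is an option too ;)

def plan(start, goal):
    """Stand-in pathfinder: any algorithm could live here
    without the Follower ever noticing the swap."""
    return [start, goal]

follower = Follower()
follower.set_path(plan("A", "B"))
```

While `step()` is consuming the current path, `plan()` can already be computing the next one; swapping in a real pathfinder changes nothing on the following side.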
Right  - Cone 1 collides with Player or waypoint → send message "Right"
Center - Cone 2 collides with Player or waypoint → send message "Center"
Left   - Cone 3 collides with Player or waypoint → send message "Left"
Just use these messages to trigger rotation (left/right), and the center one to drive forward with any locomotion method.
This is a logic-based, stateless AI.
So spawning waypoints would make it go wherever you want.
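A rough Python equivalent of that cone logic, approximating the three collision cones with a signed-angle test (the half-angle threshold and the "positive = left" convention are my assumptions):

```python
import math

def steer(facing_angle, target_angle, cone_half_angle=0.3):
    """Return 'left', 'right' or 'forward' depending on which 'cone'
    the target falls in. Angles are in radians; positive differences
    are treated as counterclockwise, i.e. to the left."""
    # Signed angular difference, wrapped to [-pi, pi].
    diff = math.atan2(math.sin(target_angle - facing_angle),
                      math.cos(target_angle - facing_angle))
    if diff > cone_half_angle:
        return "left"
    if diff < -cone_half_angle:
        return "right"
    return "forward"
```

The three return values correspond to the three messages above: turn toward the target while it sits in a side cone, drive forward once it is in the center cone.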
Sorry for the lack of information, guys. What I'm actually trying to make is an enemy AI with fighting moves. I animated some boss character moves that it will use in a fight. Now I actually want to learn some basic enemy AI. Thanks to sdfgeoff up there.
Here is what I'm going to do:
See the Alex Mercer (Prototype) boss fight AI? That's what I'm planning to do.
I'm interested in the "goal state machine".
The mess is adding task over task (without forgetting the old ones).
For example ->
Mission accomplished. But what if you have a closed door in the path?
goTo(X) … door closed
Where do you store X?
That is actually the problem.
Can someone make a "simple" blend and solve some "basic issue" instead of giving "general counsel", please?
Just an "intelligent cube" that solves problems …
Come on, writers.
because there is only one state :)
Ok, so anything labeled 'Target' that gets near the "think cube" object is eaten (end object).
This is the "oh, a piece of candy" method: as long as the next waypoint is close enough, a simple slow spin when not looking at a target could make him follow a very complex path.
He is like: look, waypoint 1, a piece of candy… he walks up to it, eats it (end object) and moves on to the next "piece of candy".
These can be a switch, an animation trigger, etc., like drink water or stab ninja… whatever you want.
You can also have slowly moving 'Targets' that are like a rabbit to a racing hound…
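The candy-eating loop might be sketched like this, with positions on a 1-D line just to keep it short (the function name and the reach value are mine):

```python
def eat_candy(position, targets, reach=1.0):
    """Remove ('eat') every target within reach of the current position;
    return the surviving targets and the nearest one to head for next."""
    remaining = [t for t in targets if abs(t - position) > reach]
    nearest = min(remaining, key=lambda t: abs(t - position)) if remaining else None
    return remaining, nearest
```

Run per tick, the agent keeps eating whatever waypoint it reaches and turning toward the next nearest one, which is all the "state" this method needs.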
I’m sorry, I’m not yet familiar enough with Blender to make this, but someday I will
Are you familiar with search algorithms like A*? If you are not, they are worth studying. Although many people associate pathfinding algorithms with finding paths in the "real world", they are in fact generic search algorithms for solving many different kinds of problems.
The traditional way to do this is to "react" to the closed door during the search phase. When you find a path from A to B, it is in fact a list of actions to reach the position. These actions can be just about anything, not only movements: "take one step north, unlock the door, then one step northeast, …"
I assume that the built-in pathfinder only deals with navpoints, that is, it has no knowledge of the actions possible in your game (opening doors etc.), so you need to build your own search algorithm on top of it, searching for solutions with the actions possible in your game. That is, you have a home-brew A* on top of the built-in A*: give the target location to the upper search algorithm and make it split the problem into possible subgoals, e.g. finding paths to every reachable door from the current location (naturally, you need your own data structure for this). Give the subgoals (door locations) to the built-in pathfinder (ask it to find the path to the door, not through it), and combine the results in the upper search.
When you are building the navmesh, make no direct routes through doors or any other (possible) obstacles like walls, if you don't want the built-in search algorithm to return routes through them. Then, when the algorithm meets a closed door, it checks whether the actor can open it: if it can, it adds that action to the list and continues searching the path onwards, and if it can't, the route is rejected.
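One way to sketch the "paths are lists of actions" idea: a tiny uniform-cost search (a special case of A* with a zero heuristic) over rooms, where each edge carries the action needed to traverse it, and edges through locked doors are rejected unless the actor holds the key. All room names, actions and keys are invented for the example:

```python
import heapq

# graph[room] = list of (neighbour, cost, action, key_needed_or_None)
GRAPH = {
    "hall":     [("corridor", 1, "go", None)],
    "corridor": [("hall", 1, "go", None),
                 ("vault", 1, "open door", "vault key"),
                 ("yard", 3, "dive through window", None)],
    "vault":    [("corridor", 1, "open door", "vault key")],
    "yard":     [("corridor", 3, "dive through window", None)],
}

def find_actions(start, goal, keys=()):
    """Return the cheapest list of actions from start to goal, or None.
    Edges requiring a key the actor lacks are rejected during the search."""
    queue = [(0, start, [])]          # (cost so far, room, actions so far)
    seen = set()
    while queue:
        cost, room, actions = heapq.heappop(queue)
        if room == goal:
            return actions
        if room in seen:
            continue
        seen.add(room)
        for nxt, c, action, needed in GRAPH[room]:
            if needed is not None and needed not in keys:
                continue              # locked door, no key: route rejected
            heapq.heappush(queue, (cost + c, nxt, actions + [action]))
    return None
```

Without the key the vault is simply unreachable; with it, the planner returns the action list to walk there and open the door.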
If you want it to be even more intelligent: when the (custom) pathfinding finds a closed door and notices that the actor does not have a key to open it, this direct route is rejected, and a new goal is added to the candidates: first go fetch the key, then go to the original target location. If you are using A*, this new goal at first has zero length, so it will quickly be expanded to the lengths of the current candidate routes. You may also want to consider recursive algorithms that split goals into possible subgoals.
The situation is different if you want the actor to react to "unpredicted" events, that is, if your search algorithm (falsely) assumes that the actor can open the door. When the actor reaches the door and applies the "open" action, it reacts to the failure of the action and starts making a new plan, e.g. to get the key or something very different. This could lead to slightly more natural behavior. Where do these new plans come from? Well, you already had some sort of algorithm that generated the original plan to reach position X; now it just has to adapt to the fact that it can't reach the position, at least not directly. There was some reason why you wanted that character to go to position X, and that reason might need reconsideration.
Where do you store the original target location when making new plans? In general, you have to have a data structure that stores the goals of your characters. You probably need a stack of goals, so that your character can solve subgoals first; when your AI meets a new situation, it pushes a new goal onto the stack, and possibly removes previous goals if they are not valid anymore.
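Such a goal stack might be sketched as follows (the goal names are hypothetical):

```python
class GoalStack:
    """Goals are solved top-down: subgoals pushed on top are finished
    before the goal beneath them is resumed."""
    def __init__(self):
        self.goals = []

    def push(self, goal):
        self.goals.append(goal)

    def current(self):
        """The goal the character should work on right now."""
        return self.goals[-1] if self.goals else None

    def done(self):
        """Current (sub)goal solved: resume the one beneath it."""
        if self.goals:
            self.goals.pop()

stack = GoalStack()
stack.push("go to X")
stack.push("fetch key")   # new situation: door is locked, add a subgoal
```

The original target ("go to X") is never lost; it simply waits underneath the subgoal until `done()` pops the subgoal off.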
EDIT: I'll try to prepare a simple example in Python about this in the near future.
Here is a quickly made, partially tested A* search in which paths are lists of different actions (in this case: go, pass, open [door], dive [through window]). It does not yet contain the logic to fetch a key if a door is locked, but I'll try to add it in the near future. The very crude solution: when meeting a locked door (during search), find the location of the key, call (recursively) findpath(startxy, keyxy) + findpath(keyxy, doorxy), insert the unlocking action and add that as a candidate. The better solution is to add subgoals to paths, so that the pathfinding does not unnecessarily calculate the route to the key and back to the door if other, shorter routes are available.
For some reason, I was not able to attach the script, so I placed it on my home page:
EDIT: Key things: you need data structures to store "rooms" and the locations of pathways to other rooms, and you need to know what kind of action is needed to go through each pathway. You need Blender's pathfinding to solve "go" actions, that is, walking/running inside rooms to the pathways. To start, you need to find out which room the character is in. The search algorithm should generate paths (lists of actions) your character can follow (performing the actions from the list one by one). If everything went well, your character is then able to open doors, climb ladders, jump over empty spaces, dive through windows, duck behind obstacles and so on.
EDIT2: And ah, you don't need to sort the paths; just find the path with minimum length, if you don't need debugging information.
Here’s a quick version of the pathfinder that uses recursion to complete subgoals (fetching keys, in this case):
Use with extreme care, for demonstration purposes only. First, it can do lots of unnecessary work, as it always looks for the keys when meeting locked doors, even if they are not needed to reach the goal. Second, it is uncertain how complex combinations of locked doors and keys it can solve. I tried to make a more sophisticated version, but it needs some thinking & testing first (it wasn't as easy as I thought).