Intro
For my specialization at TGA, I chose to focus on AI and behavior. I developed an AI that makes decisions and performs actions using a set of personal stats that determine how well it performs each action. The stats are mainly modeled on classic gaming stats such as endurance, awareness, and charisma, with each stat affecting the actions tied to it. With this system I can create an arbitrary number of AI actors, randomizing or manually tweaking their personal stats to get very different performance levels. I also had an initial plan to explore fuzzy logic, something I had no previous experience with. Unfortunately, I had to scrap this idea mid-project, partly because of limited time but also because it did not fit well into my structure, and I could achieve the desired result without it.
Getting started
I made the decision to do my specialization in Sad Dad Motors, a game engine my friend and I had developed during our time at TGA. I naively made this decision solely based on the fact that I wanted to, even though I was fully aware that this would come with additional work in order to make the engine ready for the task at hand. This meant spending a significant amount of time implementing a lot of things that would be necessary for the AI actor, most significantly the ability to create and move around on a navmesh.
With all the engine prep done, I began to set up a structure for my AI. I wanted to build a state machine, since I had quite a lot of experience with them but also some new ideas I wanted to try out. For the opposite reason (to gain experience I lacked), I also wanted to attach a behavior tree to each state, combining the two methods, something I had never done before.
The “game”
The setting I built to use my AI in was a simple sneak/flee “game.” I created a player which, in addition to being visible and able to move around the scene, also emits “sound” of different volumes depending on its movement speed. The AI actors patrol around the scene, looking and listening for the player. If the player is noticed, the actor moves in to investigate and starts its detection phase. If the player is detected, the AI actor starts chasing the player until it loses track of them. It then searches the position where it last saw or heard the player, and if the player isn’t detected again, it goes back to patrolling around the scene.
I also wanted the environment to be a part of the game, so I created walls and bushes to help the player hide. The walls obscure sight completely but only partly hinder sound, and the bushes partly hinder sight but have no effect on sound. This also gave me additional opportunities to let the AI actor’s stats affect how much the obstacles will hinder detection.
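The hearing side of this can be sketched as a simple check: the player's emitted volume scales with movement speed, and each wall between player and actor attenuates the sound while bushes do not. All names and constants below are illustrative assumptions, not the project's actual API.

```cpp
#include <cmath>

// Hypothetical sketch of the hearing check: walls only partly hinder
// sound, so each wall multiplies the effective volume by a damping
// factor. A stat like hearing or awareness would scale these values.
struct HearingCheck
{
    float hearingRange = 10.f; // distance at which full volume is heard
    float wallDamping  = 0.5f; // fraction of volume passing one wall

    // volume is 0..1, scaled from the player's movement speed elsewhere.
    bool CanHear(float distance, float volume, int wallsBetween) const
    {
        float effective = volume * std::pow(wallDamping, static_cast<float>(wallsBetween));
        return distance <= hearingRange * effective;
    }
};
```

A sprinting player (volume 1.0) is heard at full range, while a single wall halves the effective radius, so the same sound can go unnoticed from the same distance.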
AI actor stats and state
In the context of this game, I needed stats connected to detection and movement in order to affect the actor’s performance. I deviated a bit from classic game stats here, implementing separate stats for vision and hearing to see how an almost completely blind or deaf actor would affect the gameplay. Apart from that, I stayed quite true to my initial plan. Each of these stats can be controlled and changed by the user to alter the overall performance of the actor. In addition to the stats, I also implemented some variables that the user cannot change. Their values also affect the actor’s performance but are altered in code during the game, representing the physical and mental state the actor is in.
Vision
Actor stat that affects how far the actor can see unobscured but also has a small effect on how far it can see through bushes.
Hearing
Actor stat that affects from what distance an actor can detect sound unobscured, but also has a small effect on how much walls hinder sound.
Awareness
Actor stat that affects both how much walls hinder sound and how much bushes obscure vision, as well as the width of the vision cone.
Reaction
Actor stat that affects how fast the actor rotates and also has some effect on how quickly it detects the player when the actor sees them.
Speed
Actor stat that affects the speed at which the actor is moving.
Endurance
Actor stat that affects the rate at which fatigue is added when moving and subtracted while sitting still.
Charisma
Dump stat.
Fatigue
Physical actor state that affects the speed of the actor.
Vigilance
Mental actor state that affects how far the actor can see through bushes and hear through walls, as well as how quickly the actor detects the player when it sees them.
Player detection
This variable is increased when the actor sees the player. When it reaches maximum, the player is detected. If the actor loses the player, this variable is then decreased until it reaches 0, triggering the actor to continue patrolling.
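The player-detection variable behaves like a meter that fills while the player is visible and drains once they are lost. A minimal sketch, assuming illustrative fill and drain rates (in practice these would be scaled by reaction and vigilance):

```cpp
#include <algorithm>

// Hypothetical detection meter: fills while the player is seen,
// drains when they are lost. Reaching 1 means "detected"; reaching 0
// again would trigger the return to patrolling.
struct DetectionMeter
{
    float value     = 0.f;  // 0 = unaware, 1 = player detected
    float fillRate  = 0.5f; // per second while the player is visible
    float drainRate = 0.25f;// per second once the player is lost

    // Returns true while the player counts as detected.
    bool Update(bool playerVisible, float dt)
    {
        value += (playerVisible ? fillRate : -drainRate) * dt;
        value = std::clamp(value, 0.f, 1.f);
        return value >= 1.f;
    }
};
```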
State machine
During my work, I iterated on the structure of my state machine but landed on a design where each state machine in the scene can have an arbitrary number of actors. This was important to me, partly because I hadn’t done it like this before, but mainly because I wanted the state machine to set the world parameters for the actors connected to it, such as how fast it is possible for an actor to move or accelerate, how far it is possible for an actor to see or hear, and so on. Some of the variables in the state machine were created as world variables that were always the same, while others were state-specific. I also chose to expose all variables in order to make it possible to create multiple state machines with different world parameters, so that actors connected to another state machine play by different rules (for example elite actors). This also made it easy to tweak and change them during runtime in order to test.
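The idea of one machine owning the world parameters for all of its actors can be sketched as follows. The types and the 0..1 stat mapping are assumptions for illustration, not the engine's actual code:

```cpp
#include <vector>

// Hypothetical world parameters owned by a state machine: upper bounds
// that every connected actor plays by. A second machine with different
// values would give its actors (e.g. elites) different rules.
struct WorldParams
{
    float maxMoveSpeed    = 4.f;  // fastest any actor can move
    float maxVisionRange  = 20.f; // farthest any actor can see
    float maxHearingRange = 15.f; // farthest any actor can hear
};

class StateMachine
{
public:
    explicit StateMachine(WorldParams params) : myParams(params) {}

    // Any number of actors can be registered with one machine.
    int AddActor() { myActorIds.push_back(myNextId); return myNextId++; }

    // An actor's 0..1 speed stat is mapped onto this machine's limit.
    float MoveSpeedFor(float speedStat) const { return speedStat * myParams.maxMoveSpeed; }

private:
    WorldParams myParams;
    std::vector<int> myActorIds;
    int myNextId = 0;
};
```

Two machines with the same states but different `WorldParams` then make the same stat value mean different things, which is the point of exposing the variables.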
Each state was then constructed using a behavior tree that handles the update and a blackboard that holds all the information needed. I chose to use Pär Arvidsson’s BrainTree, mainly because I didn’t want to spend any time building my own, and I had used it once before, so I was vaguely familiar with it. It had more than enough of the functionality I sought, and I actually had to avoid a lot of it in order to make it work well with my state transitions.
State structure
There were a couple of issues with this structure, mainly connected to how each actor could be updated without affecting all other actors, while not expanding the actor class more than necessary. I solved this by creating an unordered map in each state with the actor ID as the key and a pair containing a behavior tree and a pointer to a blackboard as the value. All of this is created when an actor is added: first the blackboard is created, then each state is initialized with the newly created blackboard, emplacing it all in the unordered maps in each state. With this structure, the state machine can update the kinematic data for each individual actor, while the actor itself is only aware of its own stats and an enum corresponding to its current state. And since the blackboard is created outside the states and added to each of them as a pointer, data can also be persisted between states for each actor.
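A minimal sketch of that storage scheme, with `Blackboard` and `BehaviorTree` as stand-ins for the BrainTree types actually used:

```cpp
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Stand-ins for the real BrainTree types, for illustration only.
struct Blackboard { std::unordered_map<std::string, float> values; };
struct BehaviorTree { Blackboard* blackboard = nullptr; };

// Each state maps actor ID -> (behavior tree, blackboard pointer),
// so updating one actor never touches another actor's tree.
struct State
{
    std::unordered_map<int, std::pair<BehaviorTree, Blackboard*>> actors;

    void AddActor(int id, Blackboard* board)
    {
        actors.emplace(id, std::make_pair(BehaviorTree{board}, board));
    }
};

// Adding an actor: create its blackboard once, outside the states,
// then register that same instance in every state. Because all states
// point at the same blackboard, data persists across state changes.
inline void RegisterActor(int id,
                          std::vector<std::unique_ptr<Blackboard>>& boards,
                          std::vector<State>& states)
{
    boards.push_back(std::make_unique<Blackboard>());
    for (State& s : states)
        s.AddActor(id, boards.back().get());
}
```

Writing a value to the blackboard in one state makes it visible from every other state for that actor, which is what lets data survive state transitions.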
The AI actor class
The actor itself does very little. It contains a struct that holds all the actor stats with corresponding getters and setters utilized by the state machine, an enum representing the current state the actor is in, and a struct with kinematic data that is updated by the state machine during the machine’s update loop. In its own update loop, the actor then uses the kinematic data to update its transform.
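The thin actor described above can be sketched like this; the names and the simple Euler integration are assumptions for illustration:

```cpp
struct Vec2 { float x = 0.f, y = 0.f; };

// Illustrative stat struct and state enum; the real project has more
// stats (vision, hearing, awareness, reaction, endurance, charisma).
struct ActorStats { float vision = 0.5f, hearing = 0.5f, speed = 0.5f; };
enum class ActorState { Patrol, Investigate, Chase, Search };

class Actor
{
public:
    // Written by the state machine during the machine's update loop.
    void SetVelocity(Vec2 v) { myKinematics.velocity = v; }

    const ActorStats& GetStats() const { return myStats; }
    ActorState GetState() const { return myState; }

    // The actor's own update only integrates the kinematic data
    // into its transform; all decision-making lives in the states.
    void Update(float dt)
    {
        myPosition.x += myKinematics.velocity.x * dt;
        myPosition.y += myKinematics.velocity.y * dt;
    }

    Vec2 GetPosition() const { return myPosition; }

private:
    struct Kinematics { Vec2 velocity; } myKinematics;
    ActorStats myStats;
    ActorState myState = ActorState::Patrol;
    Vec2 myPosition; // stand-in for the full transform
};
```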