AI Decision Making in Games

Autonomous decision making in video game AI agents typically involves the use of algorithms that enable the agents to make decisions based on various factors, such as the state of the game world, the behavior of other agents, and the objectives that the agent is trying to achieve. One common approach is to use a rule-based system, where the AI agent is given a set of rules to follow based on the game’s mechanics and objectives. For example, an AI agent in a first-person shooter game may have rules such as “shoot at the player when in sight” and “take cover when health is low.” These rules are typically programmed by game developers and can be modified based on the desired behavior of the AI agent.

Another approach is to use machine learning algorithms, where the AI agent learns from experience and adapts its behavior over time. This approach is often used in more complex games where the behavior of other agents is unpredictable and the game world is constantly changing. Machine learning algorithms can be used to train the AI agent to recognize patterns and make decisions based on them, such as predicting the behavior of other agents or identifying the best route to reach an objective.

In both approaches, the AI agent typically has access to a set of game state variables, such as the position of other agents, the status of its health and resources, and the location of objectives. The agent processes this information and uses it to make decisions based on its programmed rules or learned behavior. These decisions can then be used to control the actions of the AI agent, such as moving, shooting, or interacting with objects in the game world.
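
As a minimal sketch of the rule-based approach, the two shooter rules above could be encoded in C# as follows (the class name, health threshold, and action names are illustrative assumptions):

public class GuardRules
{
    public bool PlayerInSight;
    public float Health;

    // Rules are checked in priority order; the first match wins.
    public string Decide()
    {
        if (Health < 25f) return "TakeCover"; // "take cover when health is low"
        if (PlayerInSight) return "Shoot";    // "shoot at the player when in sight"
        return "Patrol";                      // default behavior when no rule fires
    }
}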

    Decision making in games comprises several different methodologies for achieving optimal AI agent performance. Each game has different methods that could be used, along with some that appear in more than one genre of game, such as Goal Oriented Action Planning and Behavior Trees. The following concise overview of decision making is from Millington and Funge (2016):

In reality, decision making is typically a small part of the effort needed to build great game AI. Most games use very simple decision making systems: state machines and decision trees.

Rule-based systems are rarer, but important. The character processes a set of information that it uses to generate an action that it wants to carry out. The input to the decision making system is the knowledge that a character possesses, and the output is an action request. The knowledge can be further broken down into external and internal knowledge. External knowledge is the information that a character knows about the game environment around it: the position of other characters, the layout of the level, whether a switch has been thrown, the direction that a noise is coming from, and so on. Internal knowledge is information about the character’s internal state or thought processes: its health, its ultimate goals, what it was doing a couple of seconds ago, and so on. Typically, the same external knowledge can drive any of the algorithms in this chapter, whereas the algorithms themselves control what kinds of internal knowledge can be used (although they don’t constrain what that knowledge represents, in game terms).

Actions, correspondingly, can have two components: they can request an action that will change the external state of the character (such as throwing a switch, firing a weapon, moving into a room) or actions that only affect the internal state. Changes to the internal state are less obvious in game applications but are significant in some decision making algorithms. They might correspond to changing the character’s opinion of the player, changing its emotional state, or changing its ultimate goal. Again, algorithms will typically have the internal actions as part of their makeup, while external actions can be generated in a form that is identical for each algorithm. The format and quantity of the knowledge depend on the requirements of the game. Knowledge representation is intrinsically linked with most decision making algorithms. It is difficult to be completely general with knowledge representation… Actions, on the other hand, can be treated more consistently.

Decision Trees in Video Games
    A decision tree is made up of connected decision points. The tree has a starting decision, its root. For each decision, starting from the root, one of a set of ongoing options is chosen.
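
Such a tree can be sketched in C# as follows; the node types and the example checks are illustrative assumptions rather than Millington and Funge’s implementation:

using System;

// A decision point tests a condition and descends into one of two
// branches; leaves hold the action that is finally chosen.
public abstract class DTNode
{
    public abstract string Decide();
}

public class Decision : DTNode
{
    public Func<bool> Test;
    public DTNode Yes, No;

    public override string Decide() => Test() ? Yes.Decide() : No.Decide();
}

public class Leaf : DTNode
{
    public string Action;

    public override string Decide() => Action;
}

// Example root: "player visible?" then "health low?" before acting.
// var root = new Decision
// {
//     Test = () => playerVisible,
//     Yes = new Decision
//     {
//         Test = () => health < 25f,
//         Yes = new Leaf { Action = "Flee" },
//         No = new Leaf { Action = "Attack" }
//     },
//     No = new Leaf { Action = "Patrol" }
// };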

Finite state machines (FSM)

     In simulation development, we work with what is known as the state (not to be confused with a quantum state).


State machines are the technique most often used for this kind of decision making and, along with scripting…make up the vast majority of decision making systems used in current games. State machines take account of both the world around them (like decision trees) and their internal makeup (their state). States are connected together by transitions. Each transition leads from one state to another, the target state, and each has a set of associated conditions. If the game determines that the conditions of a transition are met, then the character changes state to the transition’s target state. (Millington, Funge 2016) 

So how can we generate complex behaviors out of a fairly simple codebase? Complex behaviors were first generated in video games (perhaps most famously in the Batman: Arkham series) as the first step in the evolution of ever more complex NPC behaviors.

A Finite State Machine is a model of computation, i.e. a conceptual tool to design systems. It processes a sequence of inputs that changes the state of the system. When all the input is processed, we observe the system’s final state to determine whether the input sequence was accepted or not. (Sanatan 2019)

Software engineer Marcus Sanatan provides a very good overview of FSMs:

Enemy AI


Finite State Machines allow us to map the flow of actions in a game’s computer-controlled players. Let’s say we were making an action game where guards patrol an area of the map. We can have a Finite State Machine with the following properties:


States: For our simplistic shooter we can have: Patrol, Attack, Reload, Take Cover, and Deceased.
Initial State: As it’s a guard, the initial state would be Patrol.
Accepting States: An enemy bot can no longer accept input when it’s dead, so our Deceased state will be our accepting one.
Alphabet: For simplicity, we can use string constants to represent a world state: Player approaches, Player runs, Full health, Low health, No health, Full ammo, and Low ammo.
Transitions: As this model is a bit more complex than traffic lights, we can separate the transitions by examining one state at a time:

Patrol
     If a player approaches, go to the Attack state.
     If we run out of health, go to the Deceased state.

Attack
     If ammo is low, go to the Reload state.
     If health is low, go to the Take Cover state.
     If the player escapes, go to the Patrol state.
     If we run out of health, go to the Deceased state.

Reload
     If ammo is full, go to the Attack state.
     If health is low, go to the Take Cover state.
     If we run out of health, go to the Deceased state.

Take Cover
    If health is full, go to the Attack state.
    If ammo is low, go to the Reload state.
    If we run out of health, go to the Deceased state.

This finite state machine can be drawn as a state diagram, with each state as a node and each transition as a labeled edge (the diagram from Sanatan’s article is not reproduced here). (Sanatan 2019)
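
As a minimal sketch, the guard FSM above could be implemented in C# like this (the enum, method name, and string events are illustrative assumptions, not Sanatan’s code):

public enum GuardState { Patrol, Attack, Reload, TakeCover, Deceased }

public class GuardFsm
{
    // Initial state: the guard starts on patrol.
    public GuardState State { get; private set; } = GuardState.Patrol;

    // Processes one input symbol from the alphabet described above.
    public void OnEvent(string input)
    {
        // Running out of health kills the guard from any live state.
        if (State != GuardState.Deceased && input == "No health")
        {
            State = GuardState.Deceased; // accepting state: ignores further input
            return;
        }

        switch (State)
        {
            case GuardState.Patrol:
                if (input == "Player approaches") State = GuardState.Attack;
                break;
            case GuardState.Attack:
                if (input == "Low ammo") State = GuardState.Reload;
                else if (input == "Low health") State = GuardState.TakeCover;
                else if (input == "Player runs") State = GuardState.Patrol;
                break;
            case GuardState.Reload:
                if (input == "Full ammo") State = GuardState.Attack;
                else if (input == "Low health") State = GuardState.TakeCover;
                break;
            case GuardState.TakeCover:
                if (input == "Full health") State = GuardState.Attack;
                else if (input == "Low ammo") State = GuardState.Reload;
                break;
        }
    }
}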

Behavior Trees

    A behavior tree is a data structure commonly used in video game design to control the behavior of non-player characters (NPCs) in a game. It is used to represent a decision-making process for an AI-controlled character and defines how the character should react to different situations and stimuli in the game world.

A behavior tree consists of a hierarchy of nodes that represent decisions and actions. The root node of the tree represents the overarching goal or behavior of the NPC, and the branches of the tree represent different paths the NPC can take based on different conditions or events in the game.

    Each node in the tree can be one of two types: a control node, which makes decisions and selects the next action to take, or a task node, which represents an action to be taken. For example, a control node might decide whether the NPC should attack the player or retreat, while a task node might specify the exact animation or action to be taken.

Behavior trees are useful in game design because they allow for the creation of complex and flexible AI behavior, while still being easy to understand and debug. They also allow for the reuse of AI behavior across different NPCs and can be modified and updated during development to improve the AI’s performance and realism.

     The downfall of many FSMs is their simplicity, though it is true that you can generate many complex patterns from even a simple binary (0,1) cellular automaton, out of which you may even create chaos. The desire for more advanced and nuanced, which is to say ‘believable’, NPCs led to the creation of what became known as behavior trees.

A Behavior Tree (BT) is a way to structure the switching between different tasks in an autonomous agent, such as a robot or a virtual entity in a computer game. An example of a BT performing a pick and place task can be seen in Fig. 1.1a [figure not reproduced here]. As will be explained, BTs are a very efficient way of creating complex systems that are both modular and reactive. These properties are crucial in many applications, which has led to the spread of BT from computer game programming to many branches of AI and Robotics. (Colledanchise & Ögren 2018)

Behavior Trees are Event-Driven: 

    All NPC’s (non-player characters) continue to execute a specific behavior until an event (by friend or foe) triggers a change-up. The Director system or other management AI must monitor the ‘environment’ of the game, then passes on sensory feedback to the AI Agent NPC to change its behavior given what is happening in the environment.  It keeps track of changes in the game, depending on the frame rate of the game usually 60fps, via observers or polling agents, whose job it is to keep track of the game and update the Director AI.  

     One example of the different things an NPC can do with behavior trees is squad triggers, used when NPCs play in a group alongside the gamer or operate as enemy AI squads. Squad triggers are volumes placed in the Unreal Engine editor that trigger specific responses in non-player characters. When a player walks into one, it determines which behaviors the AI NPCs execute and which specific voice lines occur, allowing the story to play out; triggers also define preferred locations and allowed positions. This forces the main game characters to adhere to the story and allows it to be played out in the game, steering the game to its desired end or goal. Enemy NPCs are able to shout and chatter, make call-outs, move position, reload, panic or flee, and so on. Ally AI and enemy AI are part of one AI system that moves the story forward, akin to behaviorism’s notion of positive and negative feedback.

Again, Millington and Funge give more information on the use of behavior trees:

Behavior trees have become a popular tool for creating AI characters. Halo 2 [Bungie Software, 2004] was one of the first high-profile games for which the use of behavior trees was described in detail and since then many more games have followed suit.
They are a synthesis of a number of techniques that have been around in AI for a while: Hierarchical State Machines, Scheduling, Planning, and Action Execution. Their strength comes from their ability to interleave these concerns in a way that is easy to understand and easy for non-programmers to create. Despite their growing ubiquity, however, there are things that are difficult to do well in behavior trees, and they aren’t always a good solution for decision making.

Behavior trees have a lot in common with Hierarchical State Machines but, instead of a state, the main building block of a behavior tree is a task. A task can be something as simple as looking up the value of a variable in the game state, or executing an animation.

Tasks are composed into sub-trees to represent more complex actions. In turn, these complex actions can again be composed into higher level behaviors. It is this composability that gives behavior trees their power. Because all tasks have a common interface and are largely self-contained, they can be easily built up into hierarchies (i.e., behavior trees) without having to worry about the details of how each sub-task in the hierarchy is implemented. (Millington, Funge 2016) 


Types of Task

Tasks in a behavior tree all have the same basic structure. They are given some CPU time to do their thing, and when they are ready they return with a status code indicating either success or failure (a Boolean value would suffice at this stage). Some developers use a larger set of return values, including an error status, when something unexpected went wrong, or a ‘need more time’ status for integration with a scheduling system. We will consider three kinds of tasks: Conditions, Actions, and Composites. (Millington, Funge 2016)
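
As a minimal sketch of the first kind, a Condition is a leaf task that tests a fact about the game state and returns a status without acting. The Task and TaskResult shapes below mirror the Unity sample later in this article; the names are assumptions, not the book’s code:

using System;

public enum TaskResult { Success, Failure, Running }

public abstract class Task
{
    public abstract TaskResult Run();
}

// A Condition performs no action: it only reports whether its test holds.
public class Condition : Task
{
    private readonly Func<bool> test;

    public Condition(Func<bool> test) { this.test = test; }

    public override TaskResult Run() =>
        test() ? TaskResult.Success : TaskResult.Failure;
}

// Example: new Condition(() => enemyVisible) as the first child of a Sequence,
// so the Sequence aborts when the enemy is not visible.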

Behavior Trees and Reactive Planning

Behavior trees implement a very simple form of planning, sometimes called reactive planning. Selectors allow the character to try things, and fall back to other behaviors if they fail. This isn’t a very sophisticated form of planning: the only way characters can think ahead is if you manually add the correct conditions to their behavior tree. Nevertheless, even this rudimentary planning can give a good boost to the believability of your characters.

The behavior tree represents all possible Actions that your character can take. The route from the top level to each leaf represents one course of action, and the behavior tree algorithm searches among those courses of action in a left-to-right manner. In other words, it performs a depth-first search.

In the context of a behavior tree, a Decorator is a type of task that has one single child task and modifies its behavior in some way. You could think of it like a Composite task with a single child. Unlike the handful of Composite tasks we’ll meet, however, there are many different types of useful Decorators.

One simple and very common category of Decorators makes a decision whether to allow their child behavior to run or not (they are sometimes called “filters”). If they allow the child behavior to run, then whatever status code it returns is used as the result of the filter. If they don’t allow the child behavior to run, then they normally return in failure, so a Selector can choose an alternative action.(Millington, Funge 2016) 
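
A filter-style Decorator can be sketched against the same Task and TaskResult shapes as the Condition example above (again an illustrative assumption, not the book’s implementation):

using System;

// Assumes the Task base class and TaskResult enum from the Condition sketch above.
// A filter Decorator wraps a single child and decides whether it may run.
public class Filter : Task
{
    private readonly Func<bool> condition;
    private readonly Task child;

    public Filter(Func<bool> condition, Task child)
    {
        this.condition = condition;
        this.child = child;
    }

    public override TaskResult Run()
    {
        // Denied: return Failure so an enclosing Selector can try
        // an alternative action.
        if (!condition())
            return TaskResult.Failure;

        // Allowed: pass through whatever status the child returns.
        return child.Run();
    }
}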


External Data Store in Behavior Trees

The most sensible approach is to decouple the data that behaviors need from the tasks themselves. We will do this by using an external data store for all the data that the behavior tree needs. We’ll call this data store a blackboard. For now it is simply important to know that the blackboard can store any kind of data and that interested tasks can query it for the data they need. Using this external blackboard, we can write tasks that are still independent of one another but can communicate when needed. (Millington, Funge 2016)
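
A minimal blackboard sketch in C#, assuming string keys and untyped values (real implementations often add type safety and change notification):

using System.Collections.Generic;

// A shared data store that tasks query instead of holding data themselves.
public class Blackboard
{
    private readonly Dictionary<string, object> data =
        new Dictionary<string, object>();

    public void Set(string key, object value) => data[key] = value;

    // Returns false if the key is missing or holds a value of the wrong type.
    public bool TryGet<T>(string key, out T value)
    {
        if (data.TryGetValue(key, out object raw) && raw is T typed)
        {
            value = typed;
            return true;
        }
        value = default;
        return false;
    }
}

// Example: a task calls blackboard.TryGet<Vector3>("targetPosition", out var pos)
// rather than storing the target itself, keeping tasks independent.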

Finally, a code sample of programming a BT in C# for the Unity Game Engine:


using UnityEngine;

public class BehaviorTree : MonoBehaviour
{
    public enum TaskResult
    {
        Success,
        Failure,
        Running
    }

    public abstract class Task
    {
        public abstract TaskResult Run();
    }

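    // Composite task: runs its children in order and stops as soon as one does not succeed.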
    public class Sequence : Task
    {
        public Task[] tasks;

        public override TaskResult Run()
        {
            foreach (Task task in tasks)
            {
                TaskResult result = task.Run();
                if (result != TaskResult.Success)
                {
                    return result;
                }
            }
            return TaskResult.Success;
        }
    }

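    // Composite task: tries its children in order and stops as soon as one does not fail.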
    public class Selector : Task
    {
        public Task[] tasks;

        public override TaskResult Run()
        {
            foreach (Task task in tasks)
            {
                TaskResult result = task.Run();
                if (result != TaskResult.Failure)
                {
                    return result;
                }
            }
            return TaskResult.Failure;
        }
    }

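    // Leaf task: wraps a delegate that performs a single atomic action and reports its result.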
    public class Action : Task
    {
        public delegate TaskResult ActionDelegate();

        private ActionDelegate action;

        public Action(ActionDelegate action)
        {
            this.action = action;
        }

        public override TaskResult Run()
        {
            return action();
        }
    }

    // Example usage:
    private Task rootTask;

    private void Start()
    {
        // Create the behavior tree
        rootTask = new Selector
        {
            tasks = new Task[]
            {
                new Sequence
                {
                    tasks = new Task[]
                    {
                        new Action(() =>
                        {
                            Debug.Log("Performing task 1");
                            return TaskResult.Success;
                        }),
                        new Action(() =>
                        {
                            Debug.Log("Performing task 2");
                            return TaskResult.Success;
                        })
                    }
                },
                new Action(() =>
                {
                    Debug.Log("Performing task 3");
                    return TaskResult.Failure;
                }),
                new Action(() =>
                {
                    Debug.Log("Performing task 4");
                    return TaskResult.Success;
                })
            }
        };
    }

    private void Update()
    {
        // Run the behavior tree
        TaskResult result = rootTask.Run();
        if (result == TaskResult.Success)
        {
            Debug.Log("Behavior tree succeeded!");
        }
        else if (result == TaskResult.Failure)
        {
            Debug.Log("Behavior tree failed!");
        }
        else if (result == TaskResult.Running)
        {
            Debug.Log("Behavior tree is still running.");
        }
    }
}

This code defines a simple behavior tree that consists of three types of tasks: Sequence, Selector, and Action. A Sequence task runs each of its child tasks in order until one fails or all succeed. A Selector task runs each of its child tasks in order until one succeeds or all fail. An Action task represents a single atomic action that the AI agent can take.

To use this behavior tree in a game, you would need to modify the Action tasks to perform actual game-related actions, such as moving the player character or firing a weapon. You would also need to compose the Selector and Sequence tasks to control the flow of the behavior tree based on the game state and player actions. This code provides a starting point for implementing a behavior tree in Unity, but it is by no means a complete solution.

GOAP 

    With the ever-increasing need for complexity and entertainment in the game industry, a further development in planning, behaviors, and goals was the creation of Goal Oriented Action Planning (GOAP). Prof. Tommy Thompson has provided an informative introduction to GOAP. The early predecessor of GOAP, according to its pioneer in the game industry, Jeff Orkin (MIT), was STRIPS (Stanford Research Institute Problem Solver), on which GOAP was based. Developed at SRI in 1971, at the same lab that produced A* search, STRIPS is recounted by its developers, Nilsson and Fikes:

“STRIPS is often cited as providing a seminal framework for attacking the ‘classical planning problem’ in which the world is regarded as being in a static state and is transformable to another static state only by a single agent performing any of a given set of actions. The planning problem is then to find a sequence of agent actions that will transform a given initial world state into any of a set of given goal states. For many years, automatic planning research was focused on that simple state-space problem formulation and was frequently based on the representation framework and reasoning methods developed in the STRIPS system.” (Fikes and Nilsson 1993, 2)

A* search and STRIPS would work hand in hand to find optimal paths. Automated planning is a form of AI that is focused on long-term and abstract decision making. Decisions are modeled at a high level, using methods based on STRIPS. The STRIPS plan would cause the transitions between states, as the FSM needed to shift state several times in order to execute all actions within a plan.
     Related to this project was the development at SRI in the 1970s of ‘Shakey the Robot’, from which A* emerged [cf. Fikes and Nilsson, “STRIPS, a retrospective”, Artificial Intelligence 59 (1993) 227-232]. Peter Hart, one of A*’s developers, also worked on region-finding scene analysis programs (R.O. Duda and P.E. Hart, Experiments in Scene Analysis, Tech. Note 20, Artificial Intelligence Center, SRI International, Menlo Park, CA, 1970). Having thus ‘understood’ the scene, new plans could be created to address goals. In video games this approach was first delivered in the title F.E.A.R., a first-person shooter (FPS). Thompson gives a short overview of GOAP:

One of the key issues that we need to consider when developing intelligent and engaging NPCs for games is their ability to build some kind of strategy.  Traditionally, an NPC implementation will rely on finite state machines or a similar formalism.  These are really useful for a number of practical reasons, notably: We can model explicitly (and often visually) the sort of behaviors we expect our NPC to exhibit. We can often express explicit rules or conditions for when certain changes happen to the NPC state. We can be confident that any behavior the NPC exhibits can be ‘debugged’, since their procedural nature often exhibit the Markov property: meaning we can always infer how a state of the bot has arisen and also what subsequent states it will find itself in. However the problem that emerges here is that behaviors will neither be deliberative, nor emergent.  Any bot that uses these approaches is tied to the specific behaviors that the designers had intended.  In order for the more interesting NPCs in F.E.A.R. to work, they need to think long-term.  Rather, they need to plan.

Planning (and scheduling) is a substantial area of Artificial Intelligence research, with research dating back as far as the 1960s. As my game AI students will soon learn in more detail, it is an abstract approach to problem solving; reliant upon symbolic representations of state conditions, actions and their influence. Often planning problems (and the systems that solve them) distance themselves from the logistics of how an action is performed. Instead they focus on what needs done when. For those interested, I would encourage reading [reference in original], which provides a strong introduction to planning and scheduling systems.

The NPCs in F.E.A.R. are not merely reactive in nature, they plan to resolve immediate threats by utilizing the environment to full effect. One benefit of a planning system is that we can build a number of actions that show how to achieve certain effects in the world state. These actions can dictate who can make these effects come to pass, as well as what facts are true before they can execute, often known as preconditions.  In essence, we decouple the goals of an AI from one specific solution.  We can provide a variety of means by which goals can be achieved and allow the planner to search for the best solution.

However, at the time F.E.A.R. was developed, planning was not common in commercial games.  Planning systems are often applied in real-world problems that require intelligent sequences of actions across larger periods of time.  These problems are often very rich in detail – examples include power station management and control of autonomous vehicles underwater or in space – and rely on the optimizations that planning systems make (which I will discuss another time) as well as time.  Planning systems are often computationally expensive.  While we are looking at time-frames of a couple of seconds when dealing with plans of 20-30 actions in games, this is still dedicating a fair chunk of overall resource to the planner; thus ignoring the issues that a game running in-engine at 30-60 frames per second may face. Put simply, you must find a way to optimize this approach should you wish to implement planning systems, lest you suffer a dip in the game’s performance at runtime.



G.O.A.P.: Goal Oriented Action Planning
The approach taken by Monolith… was known as Goal Oriented Action Planning.  The implementation attempted to reduce the potential number of unique states that the planning system would need to manage by generalizing the state space using a Finite State Machine.

The behaviour of the agent is distilled into three core states within the FSM (the figure from Thompson’s article is not reproduced here):

Goto – It is assumed that the bot is moving towards a particular physical location.  These locations are often nodes that have been denoted to permit some actions to take place when near them.
Animate – Each character in the game has some basic animations that need to be executed in order for it to maintain some level of emergence with the player.  So this state enforces that bots will run animations that have context within the game world.  These can be peeking out from cover, opening fire, or throwing a grenade.
Use Smart Object – This is essentially an animation node.  The only difference is that the animation is happening in the context of that node.  Examples of this can be jumping over a railing or flipping a table on its side to provide cover.

That’s it!  The entire FSM for the NPCs in the game is three states. Note that these states are very abstract; we do not know what locations are being visited, the animations played and the smart objects used.  This is because a search is being conducted that determines how the FSM is navigated.

    Bear in mind we traditionally utilize events in order to move from one state to another in an FSM.  These events are typically the result of a sensor determining whether some boolean flag is now true, or of an ‘oracle’ system forcing the bot to change state.  This then results in a change of behavior.  However, in F.E.A.R., each NPC uses sensor data to determine a relevant goal and then conducts a plan that navigates the FSM (using grounded values) to achieve that goal. (Thompson, 2018)

Versatility:
    – Each agent could be given different goals and actions that would become part of its behavior.
    – Developers could easily tweak the action sets of different characters.

These goals and actions can be customized for each character in the game. For example, a Surveillance Cyborg character might map AI/Actions/Idle to talking or texting on a phone and lounging around on a couch; during an action sequence involving the player, the designer could add AI/Actions/GetInWay under the AI/Goals/StopHim goal.

NPCs respond to the player and the player’s actions, and they are coordinated: they work together.  A peculiar attribute of communicating AI agents is that, in genetic algorithm contexts, they tend to develop their own unique language for speaking to each other.  A hive mind of AI agents has its own emergent properties, which could be foreign to human understanding or reasoning; it could even appear contradictory and irrational to a human intelligence. In video games and simulations of combat, NPCs are managed in squads, courtesy of a squad manager system. Squad enrollment is based largely on proximity, and the manager provides members with useful information they might need. The squad manager tells them what goals to achieve, but it doesn’t override their base instincts or drivers.

Goal selection is based on threat level: the objective of goal selection is to minimize the threat, and then to invoke a plan that meets that goal.  The focus shifts from building individual actions that can complete goals when put together to creating behaviors composed of multiple actions.
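
To make the preconditions-and-effects idea concrete, here is a minimal STRIPS-style action sketch in C#. The world-state-as-a-set-of-facts encoding and all of the names are illustrative assumptions, not Orkin’s implementation:

using System.Collections.Generic;

// A STRIPS-style action: applicable only when its preconditions hold in the
// world state; applying it removes and adds facts.
public class PlannerAction
{
    public string Name;
    public HashSet<string> Preconditions = new HashSet<string>();
    public HashSet<string> AddEffects = new HashSet<string>();
    public HashSet<string> DeleteEffects = new HashSet<string>();

    public bool IsApplicable(HashSet<string> worldState) =>
        Preconditions.IsSubsetOf(worldState);

    // Returns the predicted world state; the actual game state is untouched.
    public HashSet<string> Apply(HashSet<string> worldState)
    {
        var next = new HashSet<string>(worldState);
        next.ExceptWith(DeleteEffects);
        next.UnionWith(AddEffects);
        return next;
    }
}

// Example: a Reload action might require "HasAmmoReserve", delete "WeaponEmpty",
// and add "WeaponLoaded"; the planner chains such actions to satisfy a goal.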

Millington and Funge (2016) give some important components of GOAP as:

Goal-Oriented Behavior

So far we have focused on approaches that react: a set of inputs is provided to the character, and a behavior selects an appropriate action. There is no implementation of desires or goals. The character merely reacts to input. It is possible, of course, to make the character seem like it has goals or desires, even with the simplest decision making techniques. A character whose desire is to kill an enemy will hunt one down, will react to the appearance of an enemy by attacking, and will search for an enemy when there is a lack of one. An alternative is to present the character with a suite of possible actions and have it choose the one that best meets its immediate needs. This is goal-oriented behavior (GOB), explicitly seeking to fulfill the character’s internal goals. Like many algorithms in this book, the name can only be loosely applied. GOB may mean different things to different people, and it is often used either vaguely to refer to any goal-seeking decision maker or to specific algorithms similar to those here.
Goal-oriented behavior is a blanket term that covers any technique taking into account goals or desires. There isn’t a single technique for GOB, and some of the other techniques in this chapter, notably rule-based systems, can be used to create goal-seeking characters. Goal-oriented behavior is still fairly rare in games, so it is also difficult to say what the most popular techniques are. (Millington, Funge 2016) 

Goals

A character may have one or more goals, also called motives. There may be hundreds of possible goals, and the character can have any number of them currently active. They might have goals such as eat, regenerate health, or kill enemy. Each goal has a level of importance (often called insistence among GOB aficionados) represented by a number. A goal with a high insistence will tend to influence the character’s behavior more strongly.
     For the purpose of making great game characters, goals and motives can usually be treated as the same thing or at least blurred together somewhat. In some AI research they are quite distinct, but their definitions vary from researcher to researcher: motives might give rise to goals based on a character’s beliefs, for example (i.e., we may have a goal of killing our enemy motivated by revenge for our colleagues, out of the belief that our enemy killed them). This is an extra layer we don’t need for this algorithm, so we’ll treat motives and goals as largely the same thing and normally refer to them as goals. (Millington, Funge 2016) [emphasis added]


Actions

In addition to a set of goals, we need a suite of possible actions to choose from. These actions can be generated centrally, but it is also common for them to be generated by objects in the world. We can use a range of decision making tools to select an action and give intelligent-looking behavior. A simple approach would be to choose the most pressing goal (the one with the largest insistence) and find an action that either fulfills it completely or provides it with the largest decrease in insistence. In the example above [from the book, not reproduced here], this would be the “get raw-food” action (which in turn might lead to cooking and eating the food). The change in goal insistence that is promised by an action is a heuristic estimate of its utility—the use that it might be to a character. The character naturally wants to choose the action with the highest utility, and the change in goal is used to do so.
    We can do this by introducing a new value: the discontentment of the character. It is calculated based on all the goal insistence values, where high insistence leaves the character more discontent. The aim of the character is to reduce its overall discontentment level. It isn’t focusing on a single goal any more, but on the whole set.
     Discontentment is simply a score we are trying to minimize; we could call it anything. In search literature (where GOB and GOAP are found in academic AI), it is known as an energy metric. This is because search theory is related to the behavior of physical processes (particularly, the formation of crystals and the solidification of metals), and the score driving them is equivalent to the energy. (Millington, Funge 2016) 
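
A minimal sketch of discontentment-based action selection in C#: the sum-of-squares scoring follows Millington and Funge’s energy-metric idea, while the class names and data layout are assumptions:

using System;
using System.Collections.Generic;
using System.Linq;

public class Goal
{
    public string Name;
    public float Insistence; // how urgent this goal currently is
}

public class GobAction
{
    public string Name;
    // Promised change to each goal's insistence (negative values satisfy goals).
    public Dictionary<string, float> GoalChanges = new Dictionary<string, float>();
}

public static class GobSelector
{
    // Discontentment: sum of squared insistences, so one urgent goal
    // outweighs several mild ones.
    public static float Discontentment(IEnumerable<Goal> goals) =>
        goals.Sum(g => g.Insistence * g.Insistence);

    // Chooses the action whose predicted outcome leaves the character
    // least discontent across all goals.
    public static GobAction Choose(List<Goal> goals, List<GobAction> actions)
    {
        GobAction best = null;
        float bestScore = float.MaxValue;
        foreach (var action in actions)
        {
            float score = goals.Sum(g =>
            {
                action.GoalChanges.TryGetValue(g.Name, out float change);
                float after = Math.Max(0f, g.Insistence + change);
                return after * after;
            });
            if (score < bestScore)
            {
                bestScore = score;
                best = action;
            }
        }
        return best;
    }
}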


Timing

In order to make an informed decision as to which action to take, the character needs to know how long the action will take to carry out. To allow the character to properly anticipate the effects and take advantage of sequences of actions, a level of planning must be introduced. Goal-oriented action planning extends the basic decision making process. It allows characters to plan detailed sequences of actions that provide the overall optimum fulfillment of their goals. This is basically the structure for GOAP: we consider multiple actions in sequence and try to find the sequence that best meets the character’s goals in the long term. In this case, we are using the discontentment value to indicate whether the goals are being met. This is a flexible approach and leads to a simple but fairly inefficient algorithm. In the next section we’ll also look at a GOAP algorithm that tries to plan actions to meet a single goal.

To support GOAP, we need to be able to work out the future state of the world and use that to generate the action possibilities that will be present. When we predict the outcome of an action, it needs to predict all the effects, not just the change in a character’s goals. To accomplish this, we use a model of the world: a representation of the state of the world that can be easily changed and manipulated without changing the actual game state. For our purposes this can be an accurate model of the game world. It is also possible to model the beliefs and knowledge of a character by deliberately limiting what is allowed in its model. A character that doesn’t know about a troll under the bridge shouldn’t have it in its model. Without modeling the belief, the character’s GOAP algorithm would find the existence of the troll and take account of it in its planning. That may look odd, but normally isn’t noticeable. (Millington, Funge 2016)
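
As a rough sketch of the ‘simple but fairly inefficient’ sequence search described above, a depth-limited exhaustive planner over the world model might look like the following. It reuses the hypothetical PlannerAction type from the sketch earlier in this section, and the score function stands in for the discontentment metric:

using System;
using System.Collections.Generic;
using System.Linq;

public static class GoapPlanner
{
    // Tries every applicable action sequence up to maxDepth against the
    // model of the world and keeps the sequence with the best (lowest) score.
    public static List<PlannerAction> Plan(
        HashSet<string> worldState,
        List<PlannerAction> actions,
        Func<HashSet<string>, float> score,
        int maxDepth)
    {
        var bestPlan = new List<PlannerAction>();
        float bestScore = score(worldState);
        Search(worldState, actions, score, maxDepth,
               new List<PlannerAction>(), ref bestPlan, ref bestScore);
        return bestPlan;
    }

    private static void Search(
        HashSet<string> state, List<PlannerAction> actions,
        Func<HashSet<string>, float> score, int depth,
        List<PlannerAction> current,
        ref List<PlannerAction> bestPlan, ref float bestScore)
    {
        if (depth == 0) return;
        foreach (var action in actions.Where(a => a.IsApplicable(state)))
        {
            // Predict the outcome on the model, never on the real game state.
            var next = action.Apply(state);
            current.Add(action);
            float s = score(next);
            if (s < bestScore)
            {
                bestScore = s;
                bestPlan = new List<PlannerAction>(current);
            }
            Search(next, actions, score, depth - 1, current, ref bestPlan, ref bestScore);
            current.RemoveAt(current.Count - 1);
        }
    }
}

A production planner would prune this search with heuristics (Orkin’s GOAP planner in F.E.A.R. used A*-style search) rather than enumerate every sequence.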
