AI in Video Game Design

    Video games and simulation software fall into several genres. An overview is presented below before we dive into the programming involved in creating military and intelligence simulations or actual games.

Game and/or Simulation Genres:

     Action – fast-paced, requiring quick judgement and snap decisions. These are often single-player games in which AI agents serve as team members while the player fights adversarial AI agents, or Non-Player Characters (NPCs).

     Adventure – sequence-based, with a rigid structure. There are no wide game maps; the gamer is usually presented only with their immediate surroundings.

     Role-Playing (RPGs) – the gamer takes on the role of a character.

     Vehicle Simulation – the gamer controls a vehicle, such as ‘Pole Position’.

     Strategy – of course the big dog in this category is ‘Real-Time Strategy’ (RTS) games, which are covered extensively below. In this genre the gamer has a high-level view of the game world.

     Management – sometimes referred to as ‘god’ games. The gamer has a high-level view and pursues goals like creating cities or forts; resources can be manipulated in the game world, changing it in different ways for the NPCs within it.

     Puzzle – small single-player games in which the gamer uses logic and deduction to complete goals.

     One concept that is important to understand in simulation software is that of the Game World – the environment the game is played in.  In video game design, a game world is the virtual space or environment where the game takes place. It is the setting in which the player interacts with the game’s characters, objects, and systems. The game world is often designed to be immersive, allowing the player to feel as if they are a part of the game’s fictional universe. A well-designed game world can contribute greatly to the overall gaming experience by providing a sense of place and atmosphere. It can also impact gameplay by providing different environments that require different strategies and tactics to navigate successfully.

    A game world can take on many different forms, depending on the genre and design of the game. It can range from a realistic representation of a historical city, to a fantastical world of magic and mythical creatures. The game world can also change over time, such as in games with dynamic weather or day/night cycles. Creating a game world is a complex process that involves many different elements, including level design, art design, narrative, and gameplay mechanics. The world must be designed to support the game’s objectives and the player’s experience. This requires careful consideration of the game’s story, characters, and gameplay mechanics, as well as the technical limitations and capabilities of the game engine.

 Game World types:

    Accessible vs. Inaccessible 

Accessible- an actor has knowledge of every aspect of the game world and knows of everything that is going on within that game world
 Inaccessible- there are limits to what an actor may know about the game world (for example using the concept of fog-of-war, stochastic information, or incomplete information about the game world)

    Environmentally Discrete vs. Environmentally Continuous


Actors within the game world may take a number of possible actions at any point, determined by the range of potential actions within a game world.
    Discrete-a finite set of actions that an actor can take (for example only being able to move one square in any one of the cardinal directions on a grid)
    Continuous- there is a continuum of possible actions, such as allowing an actor to turn to any direction
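The distinction can be made concrete in a short sketch; the function names here are illustrative, not from any engine:

```python
# Contrast a discrete action set with a continuous one.
import math

def discrete_actions():
    """A finite action set: move one square in a cardinal direction."""
    return {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def continuous_action(angle_radians, speed=1.0):
    """A continuum of actions: turn to any heading and move."""
    return (speed * math.cos(angle_radians), speed * math.sin(angle_radians))

print(len(discrete_actions()))         # exactly 4 possible moves
print(continuous_action(math.pi / 2))  # any angle at all is a valid action
```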


Static vs. Dynamic

    Static- the game world remains the same until a participant has made a move
    Dynamic- the game world alters whilst a participant is “thinking”

Deterministic vs. Non-deterministic

    Deterministic- the next state can be explicitly deduced from the present state of the game world and the actions carried out by the actors.
    Non-deterministic- there is an element of uncertainty, or the game world changes regardless of the actions carried out by the actors

Episodic vs. Non-Episodic

    Episodic- if an actor can take an action, the results of which have no bearing on future actions, the environment is episodic.
    Non-episodic-the consequence of one action relates directly or indirectly to the available information or set of actions at a future point

Turn-Based vs. Real-time    

 
    Turn-based- places the players in a game-playing sequence. Whilst this mechanic could, theoretically, be applied to any game, there are only a small number of genres where it is used – primarily 4X, strategy, some role-playing, and some life-simulation games. These games can require a great deal of strategic thinking, and having the time to analyse a situation and make decisions accordingly is almost a necessity.
     Semi-Turn-based- the player has the opportunity to pause the game to make decisions or queue up actions, and then return to normal real-time playing afterwards; or where certain sections of the game are turn-based, and
the rest is real-time.
    Real-Time- non-turn-based games.


     Artificial Intelligence is used in almost all simulations and games currently.  Different AIs meet different programming needs in different game genres.  The following are the different components of AI used in video game design:

Pathfinding and Navigation: This is the ability of game characters to navigate the game world intelligently. It allows them to move around obstacles, find the best routes, and avoid dangerous areas. This is important in games where movement and positioning are key components of gameplay, such as strategy games or first-person shooters.
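A minimal sketch of grid pathfinding using breadth-first search; real games typically use A* with a distance heuristic, but BFS keeps the core idea visible. The grid and coordinates are invented for illustration:

```python
# Find a route across a grid, treating 1s as walls and 0s as open floor.
from collections import deque

def find_path(grid, start, goal):
    """Return a list of (row, col) steps from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []
            while current is not None:       # walk back to the start
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(find_path(grid, (0, 0), (0, 2)))  # route goes down and around the wall
```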

Behavior Trees: This is a way of creating intelligent and realistic game characters. A behavior tree is a decision-making system that allows game developers to create complex behaviors and actions for their characters. This can include things like attacking, defending, or running away.
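The decision-making structure can be sketched with two classic node types: a sequence runs children until one fails, and a selector tries children until one succeeds. The guard behavior below is invented for illustration:

```python
# Minimal behavior-tree sketch with sequence and selector nodes.
SUCCESS, FAILURE = "success", "failure"

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return self.fn(state)

class Sequence:
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Returns on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

# A guard that attacks when the player is near, otherwise patrols.
attack = Sequence(
    Action(lambda s: SUCCESS if s["player_near"] else FAILURE),
    Action(lambda s: s.setdefault("did", []).append("attack") or SUCCESS),
)
patrol = Action(lambda s: s.setdefault("did", []).append("patrol") or SUCCESS)
tree = Selector(attack, patrol)

state = {"player_near": False}
tree.tick(state)
print(state["did"])  # ['patrol']
```

Each game tick re-evaluates the tree from the root, so the character's behavior changes as soon as the world state does.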

Procedural Generation: This is a technique used in game design to create game content automatically. It allows game developers to create large, complex game worlds with less manual effort. This can include things like generating terrain, landscapes, and even enemy AI.
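A toy illustration of the idea: a seeded random walk carves a cave out of solid rock, so the same seed always reproduces the same map. The parameters and tile characters are invented:

```python
# Procedural generation sketch: a "drunkard's walk" cave carver.
import random

def generate_cave(width, height, steps, seed=42):
    rng = random.Random(seed)          # seeding makes the world reproducible
    grid = [["#"] * width for _ in range(height)]
    r, c = height // 2, width // 2     # start carving from the centre
    for _ in range(steps):
        grid[r][c] = "."               # carve out floor
        dr, dc = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        r = min(max(r + dr, 0), height - 1)
        c = min(max(c + dc, 0), width - 1)
    return ["".join(row) for row in grid]

for row in generate_cave(20, 8, 60):
    print(row)
```

Because generation is driven by a seed, an enormous number of distinct worlds can be produced and any one of them can be recreated exactly on demand.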

Machine Learning: This is the ability of AI systems to learn and adapt over time. Machine learning can be used in games to create more intelligent and challenging opponents for players. This can include things like adaptive difficulty levels, where the game AI adjusts to the player’s skill level.
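Adaptive difficulty can be sketched very simply: track the player's win rate and nudge a difficulty value toward a target. The class, target rate, and step size here are all illustrative:

```python
# Adaptive-difficulty sketch: steer toward a target player win rate.
class AdaptiveDifficulty:
    def __init__(self, target_win_rate=0.5, step=0.1):
        self.difficulty = 1.0
        self.target = target_win_rate
        self.step = step
        self.wins = 0
        self.games = 0

    def record(self, player_won):
        self.games += 1
        self.wins += int(player_won)
        win_rate = self.wins / self.games
        if win_rate > self.target:
            self.difficulty += self.step                    # player winning too often
        else:
            self.difficulty = max(0.1, self.difficulty - self.step)
        return self.difficulty

ai = AdaptiveDifficulty()
for outcome in [True, True, True, False]:  # player wins three of four
    level = ai.record(outcome)
print(round(level, 1))  # 1.4 — difficulty has ratcheted up
```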

Physics Simulations: This is the ability of games to simulate the laws of physics in the game world. It allows game developers to create realistic environments, where objects can be manipulated and interacted with in a realistic way. This is important in games that involve physics-based puzzles or real-time physics-based combat.

Natural Language Processing: This is the ability of AI systems to understand and interpret human language. In games, this can be used to create more immersive and interactive dialogue systems, where players can engage with game characters in a more natural and intuitive way.

World Interfacing
    …building robust and reusable world interfaces using two different techniques: event passing and polling. The event passing system will be extended to include simulation of sensory perception, a hot topic in current game AI. Polling [when the polling objects are also game characters] is looking for interesting information: the AI code polls various elements of the game state to determine if there is anything interesting that it needs to act on. (Millington & Funge 2016)
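The two interfacing styles can be contrasted in a short sketch. An event-passing listener is told about changes; a polling agent asks the world each frame. All class and method names are invented for illustration:

```python
# Event passing (push) versus polling (pull) world interfaces.
class World:
    def __init__(self):
        self.alarm_raised = False
        self.listeners = []

    def raise_alarm(self):
        self.alarm_raised = True
        for listener in self.listeners:   # event passing: push to observers
            listener.on_event("alarm")

class EventAgent:
    def __init__(self):
        self.heard = []
    def on_event(self, name):
        self.heard.append(name)

class PollingAgent:
    def update(self, world):
        # polling: inspect the game state each frame and react if interesting
        return "flee" if world.alarm_raised else "patrol"

world = World()
pusher, poller = EventAgent(), PollingAgent()
world.listeners.append(pusher)

print(poller.update(world))  # 'patrol' — nothing interesting yet
world.raise_alarm()
print(pusher.heard)          # ['alarm'] — delivered without asking
print(poller.update(world))  # 'flee'
```

The trade-off: events avoid wasted checks but require plumbing to register interest; polling is simple but pays a cost every frame whether or not anything changed.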

Discrete action games within game theory consist of a finite number of participants, turns, or outcomes, resulting in a finite set of strategies which can be plotted in a matrix format for evaluation. Continuous action games, however, can have participants joining and leaving the game, or the stakes changing between actions, resulting in a continuous set of strategies. This represents a subset of the potential actions that the game world allows. Within our taxonomy, this relates directly to environmentally discrete and environmentally continuous game worlds.

Simultaneous vs. Sequential

In direct relation to turn-based versus real-time games, sequential games have all the players within a game make their moves in sequence, and one at a time. Simultaneous games are those where any or all players may make their moves at the same time. Classically, sequential games are also called dynamic games: however this would cause confusion in our taxonomy. Sequential games allow the construction of the extensive form of the game – essentially a hybrid decision tree of all players and all possible moves with their rewards.

Information Visibility

It is not necessary for all participants within a game to have access to all information about the state of the game at any point. The available information can be perfect, where all participants have access to
the current state of the game, all possible strategies from the current state, and all past moves made by all other participants. The latter implies that all games that impart perfect information to the participants are by their nature also sequential games. Imperfect games impart partial information about the game to at least one participant. A special case of imperfect information visibility is complete information where all participants are aware of all possible strategies and the current state of the game, however the previous moves by other participants are hidden. In this taxonomy, this relates to accessible and inaccessible game worlds. However, a common observation in games is that artificial
participants have access to perfect information of the game world, whereas the human player only has imperfect information. This breaks the immersion of the game.

Noisy vs. Clear

Noisy game worlds are those in which there is a significant amount of information that an intelligence must analyze to make a decision on what to do next, but where not all of that information is
appropriate to the goal. Both dynamic and non-deterministic games provide levels of noise: the former due to the fact that the state of the game keeps changing even during the times when the intelligence
needs to make a decision or form behavior from learning; and the latter where there is no clear progressive state from which to base rules and analyze the game world. Although these two definitions
provide the clearest example of noise, the level of information available can also create noise. Even in perfect information game worlds it is possible that the available information is an overly large data set
for any intelligence, and thus noise is introduced.

In recent years, neuro-evolution techniques have emerged as a powerful tool in video game design. These techniques use evolutionary algorithms to optimize artificial neural networks (ANNs), which can be used to create intelligent game agents that can learn and adapt over time. By using neuro-evolution techniques, game developers can create more engaging and challenging games that provide players with a more immersive experience.

    One of the key benefits of using neuro-evolution techniques in video game design is that it allows game developers to create game agents that can learn from their experiences. This is achieved through a process of evolution, where the Artificial Neural Networks (ANNs) are optimized over multiple generations. In each generation, the ANNs are evaluated based on their performance in the game, and the best performing ANNs are selected to produce the next generation. This process is repeated until an optimal set of ANNs is found.
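The generational loop described above can be sketched in a few lines. The one-dimensional "network" (just a pair of weights) and the fitness function are toy stand-ins for a real ANN evaluated in-game:

```python
# Toy neuro-evolution: evolve weights toward a target behavior.
import random

rng = random.Random(0)
TARGET = [0.5, -0.3]  # stand-in for weights that produce "good behavior"

def fitness(weights):
    # higher is better: negative squared distance to the target behavior
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def evolve(generations=100, pop_size=20, mutation=0.1):
    population = [[rng.uniform(-1, 1), rng.uniform(-1, 1)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]            # selection
        children = [[w + rng.gauss(0, mutation)            # mutation
                     for w in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children                  # next generation
    return max(population, key=fitness)

best = evolve()
print(round(fitness(best), 4))  # close to 0, the best possible score
```

Keeping the survivors unchanged (elitism) guarantees the best individual never gets worse between generations, which is why the loop converges steadily.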

    Another benefit of using neuro-evolution techniques is that it allows game developers to create game agents that can adapt to different game scenarios. For example, an ANN that is trained to play a specific game level can be retrained to play a different level or even a completely different game. This means that game developers can create more flexible and adaptable game agents that can be used across multiple games or game modes.

Neuro-evolution techniques can be used in a variety of ways in video game design. For example, they can be used to create more intelligent enemy AI in first-person shooter games, or to create more challenging opponents in strategy games. They can also be used to generate game content, such as game levels or puzzles, or to assist players by providing in-game hints or tutorials. One example of a video game that uses neuro-evolution techniques is the racing game, Forza Motorsport 4. In this game, the AI opponents are generated using a neuro-evolution algorithm, which allows them to learn and adapt to the player’s driving style. This creates a more immersive and challenging experience for players, as the game opponents become more skilled and competitive over time.

As researchers remark, pointing out the use of Neuro-Evolution techniques (AI creating AI):

However existing work is primarily focused on the specific implementation of AI methodologies in specific problem areas. For example, the use of neuro-evolution to train behavior in NERO. With greater analysis of the problems faced in implementing AI methods in computer games, more accurate and efficient methodologies can be developed to create more realistic behavior of artificial characters within games.
(Gunn et al, 2009, 1)

There are no universal methodologies that cross all simulation categories.  AI in games can be thought of in the following ways taken from Gunn et al 2009:


Hierarchical Intelligence

Player-as-manager games provide us with a potential hierarchy of
intelligences that would be required: the artificial player and the artificial
player’s actors. Different AI methods would be required in
this case, as the artificial player would require high-level strategic
decision making. On the other hand, individual actors might only require
reflexive behavior (i.e. “I am being shot, I shall move away.”)
Currently in these types of games (especially strategy games) there
is little intelligence at the artificial player level, merely consisting of
such static tactics as “build up a force of x units, and send them along
y path”. Observation of such tactics has shown that there is a reliance
on some form of state analysis. By considering the hierarchical
nature of the player and the actors under that player’s control, suitable
mechanisms can be introduced: first, to provide adequate high-level
strategic planning for the artificial player; secondly, to provide
low-level tactical planning for the artificial player’s actors.


Co-operative vs. Non-Cooperative


Cooperative games are those where the participants can form binding
agreements on strategies, and there is a mechanism in place to enforce
such behaviour. Non-co-operative games are where every
participant is out to maximise their own pay-off. Some games may
have elements of both co-operative and non-co-operative behaviour:
coalitions of participants enforce co-operative behaviour, but it is still
possible for members of the coalition to perform better, or receive
better rewards than the others if working alone. These are hybrid
games.

Competing and Cooperation in Games

Nash Equilibria

Within game theory, a Nash equilibrium (NE) exists where no player can improve their pay-off by unilaterally changing their strategy. There may be many such equilibria within game strategies for a particular game, or
there may be only one – in which case it is a unique NE.

Zero-sum games are a type of game in game theory that are a special case of general-sum games – ones where there is a fixed overall value to winning (or losing) the game. The specific case where, for any
winning value v, there is a losing value of 0 − v is a zero-sum game. In other words, what one player wins, the other loses.
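Both ideas can be checked mechanically. The sketch below finds the pure-strategy Nash equilibria of a small two-player matrix game and verifies the zero-sum property; the payoff matrix is invented for illustration:

```python
# Find pure-strategy Nash equilibria of a two-player matrix game.
def pure_nash_equilibria(payoffs):
    """payoffs[r][c] = (row_payoff, col_payoff); return equilibrium cells."""
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r in range(rows):
        for c in range(cols):
            row_pay, col_pay = payoffs[r][c]
            # Nash condition: no unilateral deviation improves either payoff
            row_best = all(payoffs[r2][c][0] <= row_pay for r2 in range(rows))
            col_best = all(payoffs[r][c2][1] <= col_pay for c2 in range(cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# A zero-sum game: the column player's payoff is minus the row player's.
game = [[(3, -3), (1, -1)],
        [(4, -4), (2, -2)]]
assert all(a + b == 0 for row in game for a, b in row)  # zero-sum check
print(pure_nash_equilibria(game))  # [(1, 1)] — the saddle point
```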

By understanding these concepts, game developers can create games that are challenging and fair, while also providing players with a satisfying experience.

Nash equilibrium is a concept in game theory that refers to a situation where no player can improve their outcome by changing their strategy, assuming that all other players keep their strategies the same. In other words, in a Nash equilibrium, each player’s strategy is the best response to the strategies of all other players. This concept is important in video game design because it ensures that games are balanced and fair, and that players cannot gain an unfair advantage by changing their strategy.

    Zero-sum games are a specific type of game where one player’s gain is another player’s loss. In other words, the sum of the gains and losses for all players is zero. This concept is also important in video game design because it creates a competitive environment where players must outperform their opponents to succeed. The combination of Nash equilibrium and zero-sum games is a powerful tool in video game design. By creating games that are zero-sum, game developers can create a competitive environment where players must outperform their opponents to succeed. This can lead to engaging gameplay and can keep players engaged and interested.

    Additionally, by using Nash equilibrium, game developers can ensure that games are balanced and fair, and that players cannot gain an unfair advantage by changing their strategy. This can lead to a more satisfying experience for players, as they feel that they are competing on an even playing field. One example of a game that uses Nash equilibrium and zero-sum games is the classic board game, chess. In chess, each player’s objective is to checkmate the opponent’s king, which is a zero-sum game because one player’s gain is the other player’s loss. Additionally, chess is a game of perfect information, meaning that both players have access to all of the same information. This ensures that the game is fair and balanced, and that each player must rely on strategy and skill to succeed.

    Hiding information can be an effective game design strategy for creating a more engaging and challenging gaming experience for players. By concealing certain information from the player, game developers can create a sense of mystery and unpredictability that can keep players engaged and interested. Here are a few ways that hiding information can help a gamer playing a video game:

Increases Challenge: By hiding information from the player, game developers can make the game more challenging. For example, in a puzzle game, if the solution to the puzzle is immediately visible, the challenge of the game may be diminished. However, if the solution is hidden and must be discovered by the player, the game becomes more challenging and engaging.

Creates Mystery and Intrigue: Hiding information can create a sense of mystery and intrigue in the game. For example, in a detective game, if the identity of the suspect is revealed at the beginning of the game, the player may lose interest. However, if the identity is hidden, the player will be motivated to continue playing to discover the truth.

Provides a Sense of Discovery: When information is hidden in a game, players are given the opportunity to discover it on their own. This can be a rewarding experience for players as they feel a sense of accomplishment when they uncover hidden information. This can also create a sense of immersion in the game world, as players feel like they are uncovering secrets and unlocking hidden areas.

Adds Replayability: Hiding information can also add replayability to a game. For example, in a role-playing game, if the player knows the outcome of every decision they make, they may be less likely to replay the game. However, if the game has hidden storylines or multiple outcomes, the player will be more motivated to replay the game to uncover all of the possibilities.

    A video game that uses Nash equilibrium is the popular battle royale game, Fortnite. In Fortnite, up to 100 players are dropped onto an island where they must scavenge for weapons and resources while trying to be the last person or team standing. The game uses a combination of Nash equilibrium and zero-sum games to create a competitive and engaging experience for players. First, the game is a zero-sum game because there can only be one winner. This creates a competitive environment where players must outperform their opponents to succeed. Additionally, the game uses Nash equilibrium by ensuring that each player’s strategy is the best response to the strategies of all other players. For example, if a player chooses to land in a popular area of the map where there are many other players, they will have a higher chance of finding better weapons and resources, but they also risk being eliminated early in the game. Alternatively, a player may choose to land in a quieter area of the map where there are fewer players and resources, but they are also less likely to encounter other players early on. The game also incorporates elements of imperfect information, as players do not always have access to the locations of all other players on the map. This creates a sense of mystery and unpredictability, as players must be constantly vigilant and adapt their strategies to changing circumstances.

     Game designers use a variety of psychological tricks to influence gameplay in video games. These tricks are designed to keep players engaged, motivated, and entertained. Here are a few common psychological tricks used by game designers:

Reward Systems: Game designers use reward systems to motivate players to keep playing. Rewards can include in-game items, achievements, or unlocking new levels or areas. These rewards tap into the player’s desire for achievement and progression, creating a sense of accomplishment and satisfaction.

Randomness: Game designers use randomness to create an unpredictable game environment. Random events and rewards can keep players engaged and interested by creating a sense of mystery and excitement. Randomness can also increase replayability by creating a different experience each time the game is played.

Fear of Missing Out (FOMO): Game designers use FOMO to encourage players to keep playing. This can include limited-time events, in-game sales, or exclusive items that are only available for a short time. FOMO can create a sense of urgency and make players feel like they are missing out if they don’t play.

Social Pressure: Game designers use social pressure to create a sense of community and competition among players. This can include leaderboards, online multiplayer, or social media integration. Social pressure can motivate players to keep playing to beat their friends or improve their rankings.

Skinner Box Mechanics: Game designers use Skinner box mechanics, which are named after the famous behavioral psychologist B.F. Skinner. These mechanics use operant conditioning, which means rewarding a desired behavior and punishing an undesired behavior. This can create a sense of compulsion in players and keep them playing the game even if they are not enjoying it.

Personalization: Game designers use personalization to create a sense of ownership in players. This can include customization options for characters, weapons, or other in-game items. Personalization can create an emotional connection between the player and the game, making the player more invested in the experience. 

Game designers use a variety of psychological tricks to influence gameplay in video games. These tricks tap into players’ desires for achievement, progression, unpredictability, and social interaction. By using these tricks, game designers can create engaging and entertaining games that keep players coming back for more. As video game technology continues to advance, we can expect to see even more innovative and sophisticated uses of psychology in game design.

Game designers use player expectations to steer gameplay by leveraging what players know or expect from previous games or experiences. By building on players’ existing knowledge and expectations, designers can create more engaging, immersive, and satisfying experiences. Here are some common ways that game designers use player expectations to steer gameplay:

Genre Expectations: Game designers can use genre expectations to establish a baseline of gameplay mechanics and features. For example, a first-person shooter game might include familiar mechanics such as aiming, shooting, and reloading, which are common in the genre. By building on these established mechanics, designers can create more nuanced and complex gameplay experiences that build on what players already know.

Narrative Expectations: Game designers can use narrative expectations to create suspense, surprise, or emotional resonance. By playing with players’ expectations, designers can create memorable moments that stick with players long after the game is over. For example, a game might subvert players’ expectations by revealing a character to be a villain after previously portraying them as a hero.

Gameplay Expectations: Game designers can use gameplay expectations to create a sense of challenge and progression. By building on what players already know, designers can create more difficult and rewarding gameplay experiences. For example, a game might start with simple puzzles and gradually increase the difficulty over time, challenging players to use their existing skills and knowledge to progress.

Brand Expectations: Game designers can use brand expectations to create a sense of familiarity and continuity. By building on a well-established brand, designers can create more engaging and resonant gameplay experiences. For example, a game that is part of a larger franchise might use familiar characters, settings, or music to create a sense of continuity and connection to the larger brand.

A drama manager AI is a type of artificial intelligence system that is used in video game design to control the pacing and flow of the game’s narrative. The drama manager AI is responsible for dynamically adjusting the game’s story and events in response to the player’s actions and decisions.

In a video game with a drama manager AI, the system is constantly monitoring the player’s progress and making decisions about what events should happen next in order to create a compelling and engaging narrative. The AI may also be responsible for managing the game’s difficulty level, adjusting it in real-time to ensure that the player is challenged but not overwhelmed.
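The kind of decision a drama manager makes can be sketched as a simple policy over the player's recent history; the event names and thresholds below are invented:

```python
# Drama-manager sketch: pick the next story beat from player performance.
def next_beat(recent_deaths, beats_since_climax):
    if recent_deaths >= 3:
        return "offer_help"        # ease off when the player is struggling
    if beats_since_climax >= 5:
        return "trigger_climax"    # pacing: don't let tension flatline
    return "raise_stakes"          # default: keep building pressure

print(next_beat(recent_deaths=0, beats_since_climax=6))  # 'trigger_climax'
print(next_beat(recent_deaths=3, beats_since_climax=0))  # 'offer_help'
```

A production system would weigh many more signals, but the shape is the same: monitor the player, then steer the narrative and difficulty in response.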

Some examples of games that use a drama manager AI include “The Elder Scrolls V: Skyrim” and “Fable III”. These games use sophisticated algorithms and decision-making systems to create a dynamic and personalized experience for each player, ensuring that the game’s story and events unfold in a way that feels unique and engaging.

Agents
    In this context, “agent-based AI is about producing autonomous characters that take in information from the game data, determine what actions to take based on the information, and carry out those actions. For in-game AI, behaviorism is often the way to go. We are not interested in the nature of reality or mind; we want characters that look right. In most cases, this means starting from human behaviors and trying to work out the easiest way to implement them in software.” (Millington & Funge 2016) AI agents in video game design are essentially software programs that simulate intelligent behavior in a game. These agents can be designed to perform a variety of tasks, such as controlling non-player characters (NPCs), generating content, and providing assistance to players. The way AI agents work in video game design can vary depending on the specific implementation, but generally follows these steps:

Perception: AI agents first perceive the game world, either through direct sensors or indirectly through game events or other agents. For example, an NPC in a first-person shooter might use sensory inputs like sound and vision to detect the player’s presence.

Decision-Making: Based on the perceived state of the game world, the AI agent must decide what actions to take. This decision-making process is often based on a set of rules or algorithms that take into account the agent’s objectives, available resources, and current game state. For example, a strategy game AI might decide to build a certain type of structure based on the current game state.

Action: The AI agent then takes action based on the decision-making process. This can include performing physical movements, generating content, or providing feedback to the player. For example, an AI agent controlling an NPC might cause the character to move towards the player or attack them.

Feedback: After taking action, the AI agent must receive feedback from the game world to adjust its perception and decision-making processes. This feedback can be in the form of game events, player interactions, or other agents. For example, an AI agent controlling an enemy might adjust its behavior based on how well the player is performing against it.

Overall, AI agents in video game design work by simulating intelligent behavior through a process of perception, decision-making, action, and feedback. This allows game developers to create more immersive and engaging games, and to provide players with more challenging and dynamic opponents. As AI technology continues to evolve, we can expect to see even more advanced and innovative uses of AI agents in video game design.
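The perceive → decide → act → feedback cycle above can be written as a minimal game loop; the guard NPC and its rules are invented for illustration:

```python
# One agent run through three ticks of the perceive/decide/act/feedback cycle.
def perceive(world):
    """Perception: read only what this agent can sense."""
    return {"player_distance": world["player_distance"]}

def decide(percept):
    """Decision-making: a simple rule over the percept."""
    return "attack" if percept["player_distance"] < 5 else "patrol"

def act(world, action):
    """Action plus feedback: acting changes the world the agent will sense next."""
    world["log"].append(action)
    if action == "attack":
        world["player_distance"] += 1   # feedback: the player backs off

world = {"player_distance": 3, "log": []}
for _ in range(3):                      # three ticks of the game loop
    action = decide(perceive(world))
    act(world, action)
print(world["log"])  # ['attack', 'attack', 'patrol'] — behavior tracks feedback
```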

Reflex Agents
These agents use a conditional statement to provide the “intelligence”. Currently, most actors within games follow reflex systems, to the extent that players can monitor the input-output action pairs
of specific actors. Once a pattern has emerged, the human player can modify their strategy sufficiently so that the opponent artificial actor will make a significant loss whilst the human player will make a significant gain. Reflex agents can fall into infinite loops, as there is no concept of context within if-then statements.
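A reflex agent is literally a list of condition → action rules evaluated in order; the rules below are made up for illustration:

```python
# Reflex-agent sketch: fixed condition/action pairs, first match wins.
RULES = [
    (lambda s: s["health"] < 20,   "retreat"),
    (lambda s: s["enemy_visible"], "attack"),
    (lambda s: True,               "patrol"),   # default rule
]

def reflex_agent(state):
    for condition, action in RULES:
        if condition(state):
            return action

print(reflex_agent({"health": 10, "enemy_visible": True}))   # retreat
print(reflex_agent({"health": 80, "enemy_visible": True}))   # attack
print(reflex_agent({"health": 80, "enemy_visible": False}))  # patrol
```

Because the same input always yields the same output, a human player who watches these input–output pairs can learn the pattern and exploit it, exactly the weakness noted above.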

Temporal agents can be considered a special sub-group of reflex agents, where actions are carried out after measuring the passing of time. This specific type of agent would be applicable in dynamic and
semi-dynamic game worlds, where time is a factor. Reflex agents are useful in situations where a high level of complexity is not required by a participant. The more limited the scope of possible actions in a discrete-actions game world, for example, the less complexity is required in decision making. In some cases reflex agents might present the best compromise of complexity versus believability.

Model-Based Agents
An agent that monitors the environment and creates a model of it on which to base decisions is called a model-based agent. This type of agent is best applied to dynamic, real-time games where
constant monitoring of the environment is required as a basis for decisions about actions. It would also be highly beneficial in a cooperative game where, although the actions of other actors are independent, they are inter-related, and so a broader monitoring range covering other co-operating actors can be introduced.


Goal-Based Agents
Using a model of the environment, goals can be created and planning carried out to achieve those goals, even within inaccessible game worlds and with other participants. Although the artificially
controlled participants will generally have broad goals built in to determine their overall behaviour (such as “stop the human player at all costs”), there is still scope within that command to create sub-goals (such as “find the player”). Goal-based agents are also highly beneficial in inaccessible game worlds, as they can change their own sub-goals as the information they are made aware of changes.

Utility-Based Agents
A further refinement on the model- and goal-based agent methodologies is the ability to manage multiple goals at the same time, based on the current circumstances. By applying utility theory to define the relative “best” goal in any situation, we have utility-based agents.
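As a concrete illustration of utility theory picking the “best” goal, here is a minimal sketch of a utility-based agent. The goal names and the utility functions (based on health and distance to the player) are hypothetical tuning choices, not taken from any particular game:

```python
# A minimal utility-based agent sketch (illustrative names; not from any
# particular engine). Each goal exposes a utility score for the current
# world state, and the agent pursues whichever goal scores highest.

def choose_goal(goals, state):
    """Return the goal with the highest utility in the given state."""
    return max(goals, key=lambda goal: goal["utility"](state))

# Hypothetical goals for a guard NPC: utilities are hand-tuned functions
# of the world state, here just health and distance to the player.
goals = [
    {"name": "attack", "utility": lambda s: s["health"] / (1 + s["distance"])},
    {"name": "flee",   "utility": lambda s: (100 - s["health"]) / 100},
    {"name": "patrol", "utility": lambda s: 0.2},  # constant fallback
]

print(choose_goal(goals, {"health": 90, "distance": 2})["name"])   # → attack
print(choose_goal(goals, {"health": 15, "distance": 40})["name"])  # → flee
```

Because the goals are re-scored every time, the agent manages multiple goals at once and switches between them as circumstances change.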

Swarm Intelligence

Swarm intelligences have also been used to compute Nash Equilibria. Ant colony modelling provides a strong methodology for actors to explore the game world to complete goals by providing path-finding around obstacles and creating search patterns to achieve their goals – something that has been seen to be lacking in games. Exploring the game world is important in imperfect information worlds, and such group goal finding resulting from ant colony optimization is useful in co-operative game play.

Believable Agents

These agents are expected to behave as a human would in similar situations. Given that this is one of the core purposes of developing better AI in games, any agents that are developed should fall into this category, unless (through the narrative) the actor is expected to behave differently. Even then they should be consistent in their behaviour, which could be construed as providing believability across all actor types. Player-as-manager games in particular require a great deal of believability, given the large number of actors available to the player and the semi-autonomous nature of those actors. Player-as-actor games will also require a high level of believability, as all the interactions with other actors in the game world must provide a sufficient level of immersion.

Chart of Video Game Taxonomy (Gunn et al., 2009)

 3 Elements of AI in Games

Video game AI is comprised of several different elements of artificial intelligence. Most NPCs in games are AI agents, and each game typically has something like a Director System, which manages the overall game. Agents are aware of and communicate with each other; some have dialogue trees assigned to their agent behavior, and there are also strictly hidden means of communication between agents, as well as different AI sub-systems that manage different elements of the game. You will encounter AI management systems that cover diplomacy (e.g., Total War), battle management, and resource management. Games can also have finance managers and Nemesis systems, which manage opponents in the game; some even use Shadow AI to mimic a player’s tactics, adapting the player’s strategies into counter-measures against the player as the game is played.

In video game design, a “director system” refers to a set of algorithms or rules that dynamically adjust the gameplay experience based on the player’s actions, performance, or other factors. The director system is typically designed to increase the player’s engagement and enjoyment by adapting the game’s difficulty, pacing, and other elements in response to the player’s behavior.

The director system can be used in many different types of games, including first-person shooters, role-playing games, and strategy games. Some examples of how a director system might be used include:

Difficulty Scaling: The director system can adjust the game’s difficulty in real-time based on the player’s performance, making the game easier or harder as needed. For example, if the player is struggling with a particular level, the director system might reduce the number of enemies or increase the amount of health the player has to give them a better chance of success.

Pacing: The director system can adjust the pacing of the game by adding or removing obstacles, enemies, or other challenges. For example, if the player is moving too quickly through a level, the director system might add more enemies or traps to slow them down and make the game more challenging.

Storytelling: The director system can adjust the game’s story and narrative elements based on the player’s choices or performance. For example, if the player is making choices that are leading them down a particular path, the director system might adjust the story to reflect those choices and create a more personalized experience.
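The difficulty-scaling behaviour above can be sketched in a few lines. This is a toy model, not any shipped game's director: the death thresholds, window size, and enemy-count adjustments are all assumed tuning values for illustration:

```python
# A toy director sketch for difficulty scaling (hypothetical thresholds
# and names). The director watches a rolling record of player deaths
# and nudges the enemy count up or down in response.

class Director:
    def __init__(self, base_enemies=10):
        self.base_enemies = base_enemies
        self.recent_deaths = []  # outcomes of the last few attempts

    def record_attempt(self, died):
        self.recent_deaths.append(died)
        self.recent_deaths = self.recent_deaths[-5:]  # keep last 5 attempts

    def enemies_to_spawn(self):
        deaths = sum(self.recent_deaths)
        if deaths >= 3:                                   # struggling: ease off
            return max(1, self.base_enemies - 4)
        if deaths == 0 and len(self.recent_deaths) >= 3:  # cruising: push harder
            return self.base_enemies + 4
        return self.base_enemies

director = Director()
for died in [True, True, True]:       # the player keeps dying
    director.record_attempt(died)
print(director.enemies_to_spawn())    # fewer enemies than the base 10
```

A real director would track far richer signals (position, pacing, health), but the feedback loop is the same: observe the player, then adjust the world.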

    As Tommy Thompson describes them:

So a director is for all intents and purposes any system in a game that makes decisions on in-game settings or behaviors that impact pacing and difficulty and is influenced by what players are doing in either a single or multiplayer context. Now this isn’t necessarily an AI system, although you tend to find there is some simple AI formulation in many cases given it’s making some form of intelligent decisions. Ultimately, it boils down to a fairly straightforward process:

– The system records information about the player’s current activity. This could include where they are in the world, what activities they are currently doing, and how well they’re doing it.

– The system then considers what the players should be experiencing at this point in time, such as a temporary increase in difficulty, a dynamic event in the world that perhaps has not occurred for a while, as well as what elements it can change or create right now to inject some new activity for the player to experience. Conversely, it could actually do the opposite and give the player some respite, allowing for you to catch your breath, take a moment to re-assess the situation and what you want to achieve.

– Lastly, it’s checking whether the player is playing the game as it is intended. Quite often a director is useful for creating situations that ensure the player adheres to what the designer intended. As we’ll see in a moment, Left 4 Dead is a fantastic example of this, given the director deliberately targets players who fail to adhere to the rules that  the game communicates to you. (Thompson, ‘Director AI for Balancing In-Game Experiences’, 2021, https://youtu.be/Mnt5zxb8W0Y?t=237)

Some of the common architectures used in games are polling, events, event managers, and sense management, each of which has its own domain of control and delegation. The following is based on the work of Millington and Funge (2016).

Polling is the approach in which each character directly queries the game for the actions, goals, and data it needs, as it needs them.


The polling can rapidly grow in processing requirements through sheer numbers, even though each check may be very fast. For checks that need to be made between a character and a lot of similar sources of information, the time multiplies rapidly. For a level with 100 characters, 10,000 trajectory checks would be needed to predict any collisions. Because each character is requesting information as it needs it, polling can make it difficult to track where information is passing through the game. Trying to debug a game where information is arriving in many different locations can be challenging. (Millington, Funge 2016)

Polling Stations

There are ways to help polling techniques become more maintainable. A polling station can be used as a central place through which all checks are routed. This can be used to track the requests and responses for debugging. It can also be used to cache data
(Millington, Funge 2016) 
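The two benefits above, request tracking and caching, can be sketched in a small class. The names here are illustrative, not from Millington and Funge's code:

```python
# A minimal polling-station sketch. All characters route their queries
# through one object, which logs requests for debugging and caches
# results within a frame so repeated checks are not recomputed.

class PollingStation:
    def __init__(self, game_state):
        self.game_state = game_state  # maps query keys to check functions
        self.cache = {}
        self.request_log = []

    def poll(self, key):
        self.request_log.append(key)   # every request is traceable
        if key not in self.cache:      # compute at most once per frame
            self.cache[key] = self.game_state[key]()
        return self.cache[key]

    def new_frame(self):
        self.cache.clear()             # cached data is per-frame only

calls = {"count": 0}
def expensive_siren_check():
    calls["count"] += 1
    return True

station = PollingStation({"siren_on": expensive_siren_check})
for _ in range(100):                   # 100 characters poll the siren
    station.poll("siren_on")
print(calls["count"])                  # the underlying check ran only once
```

Without the station, the siren check would have run 100 times, once per character; with it, the check runs once and the other 99 polls hit the cache.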

Events

We want a central checking system that can notify each character when something important has happened. This is an event passing mechanism. A central algorithm looks for interesting information and tells any bits of code that might benefit from that knowledge when it finds something.
The event mechanism can be used in the siren example. In each frame when the siren is sounding, the checking code passes an event to each character that is within earshot. This approach is used when we want to simulate a character’s perception in more detail…. The event mechanism is no faster in principle than polling. Polling has a bad reputation for speed, but in many cases event passing will be just as inefficient. To determine if an event has occurred, checks need to be made. The event mechanism still needs to do the checks, the same as for polling. In many cases, the event mechanism can reduce the effort by doing everybody’s checks at once. However, when there is no way to share results, it will take the same time as each character checking for itself. In fact, with its extra message-passing code, the event management approach will be slower.
(Millington, Funge 2016)

Event Managers

Event passing is usually managed by a simple set of routines that checks for events and then processes and dispatches them. Event managers form a centralized mechanism through which all events pass. They keep track of characters’ interests (so they only get events that are useful to them) and can queue events over multiple frames to smooth processor use. (Millington, Funge 2016) 

An event-based approach to communication is centralized. There is a central checking mechanism, which notifies any number of characters when something interesting occurs. The code that does this is called an event manager.


The event manager consists of four elements:
    1. A checking engine (this may be optional)
    2. An event queue
    3. A registry of event recipients
    4. An event dispatcher

The interested characters who want to receive events are often called “listeners” because they are listening for an event to occur. This doesn’t mean that they are only interested in simulated sounds. The events can represent sight, radio communication, specific times (a character goes home at 5 P.M., for example), or any other bit of game data. The checking engine needs to determine if anything has happened that one of its listeners may be interested in. It can simply check all the game states for things that might possibly interest any character, but this may be too much work. More efficient checking engines take into consideration the interests of their listeners.
(Millington, Funge 2016) 
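The four elements listed above map naturally onto a small class. This is a sketch with illustrative names, and the checking engine (element 1) is left to the caller, who posts events directly:

```python
# A sketch of the event manager described above: an event queue, a
# registry of recipients keyed by event type, and a dispatcher. The
# checking engine (optional element 1) is left to the calling code.

from collections import defaultdict, deque

class EventManager:
    def __init__(self):
        self.queue = deque()               # 2. event queue
        self.registry = defaultdict(list)  # 3. registry of event recipients

    def register(self, event_type, listener):
        self.registry[event_type].append(listener)

    def post(self, event_type, data=None):
        self.queue.append((event_type, data))

    def dispatch(self, max_events=None):
        """4. Dispatcher: drain the queue (optionally capped per frame to
        smooth processor use) and notify interested listeners only."""
        count = 0
        while self.queue and (max_events is None or count < max_events):
            event_type, data = self.queue.popleft()
            for listener in self.registry[event_type]:
                listener(data)
            count += 1

heard = []
manager = EventManager()
manager.register("siren", lambda data: heard.append(data))
manager.post("siren", "warehouse")
manager.post("footsteps", "corridor")  # no listener registered: dropped
manager.dispatch()
print(heard)                           # only the siren listener was notified
```

The `max_events` cap is one way to realize the queue-over-multiple-frames idea: leftover events simply stay queued until the next frame's dispatch call.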

Event Casting

There are two different philosophies for applying event management. You can use a few very general event managers, each sending lots of events to lots of listeners. The listeners are responsible for working out whether or not they are interested in the event. Or, you can use lots of specialized event managers. Each will only have a few listeners, but these listeners are likely to be interested in more of the events it generates. The listeners can still ignore some events, but more will be delivered correctly. The scattergun approach is called broadcasting, and the targeted approach is called narrowcasting. Both approaches solve the problem of working out which agents to send which events. Broadcasting solves the problem by sending them everything and letting them work out what they need. Narrowcasting puts the responsibility on the programmer: the AI needs to be registered with
exactly the right set of relevant event managers. A broadcasting event manager sends lots of events to its listeners. Typically, it is used to manage all kinds of events and, therefore, also has lots of listeners. (Millington, Funge 2016)

Inter-Agent Communication

While most of the information that an AI needs comes from the player’s actions and the game environment, games are increasingly featuring characters that cooperate or communicate with each other. A polling station has two purposes. First, it is simply a cache of polling information that can be used by multiple characters. Second, it acts as a go-between from the AI to the game level. Because all requests pass through this one place, they can be more easily monitored and the AI debugged.
(Millington, Funge 2016) 

Sense Management

Up until the mid-1990s, simulating sensory perception was rare (at most, a ray cast check was made to determine if line of sight existed). Since then, increasingly sophisticated models of sensory perception have been developed. In games such as Splinter Cell [Ubisoft Montreal Studios, 2002], Thief: The Dark Project [Looking Glass Studios, Inc., 1998], and Metal Gear Solid [Konami Corporation, 1998], the sensory ability of AI characters forms the basis of the gameplay.
Indications are that this trend will continue. AI software used in the film industry (such as Weta’s Massive) and military simulation use comprehensive models of perception to drive very sophisticated group behaviors. It seems clear that the sensory revolution will become an integral part of real-time strategy games and platformers, as well as third-person action games.

A more sophisticated approach uses event managers or polling stations to only grant access to the information that a real person in the game environment might know. At the final extreme, there are sense managers distributing information based on a physical simulation of the world. Even in a game with sophisticated sense management, it makes sense to use a blended approach. Internal knowledge is always available, but external knowledge can be accessed in any of the following three ways: direct access to information, notification only of selected information, and perception simulation.
We will exclusively use an event-based model for our sense management tools. Knowledge from the game state is introduced into the sense manager, and those characters who are capable of perceiving it will be notified. They can then take any appropriate action, such as storing it for later use or acting immediately.
(Millington, Funge 2016) 
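The event-based sense-management idea, knowledge introduced into the manager and only perceivable characters notified, can be sketched with a crude hearing model. The distance-times-intensity rule and all names here are assumptions for illustration, not Millington and Funge's model:

```python
# A bare-bones, event-based sense manager sketch (hearing only).
# Knowledge (a sound) is introduced into the manager, which notifies
# only the characters capable of perceiving it.

import math

class SenseManager:
    def __init__(self):
        self.listeners = []  # (position, hearing_range, callback) triples

    def register(self, position, hearing_range, callback):
        self.listeners.append((position, hearing_range, callback))

    def introduce_sound(self, position, intensity):
        """Notify every listener within intensity * hearing_range units.
        (A toy perception model chosen for this sketch.)"""
        for pos, rng, callback in self.listeners:
            if math.dist(pos, position) <= rng * intensity:
                callback(position)

notified = []
manager = SenseManager()
manager.register((0, 0), 10, lambda p: notified.append("guard"))
manager.register((50, 0), 10, lambda p: notified.append("dog"))
manager.introduce_sound((3, 4), intensity=1.0)  # 5 units from the guard
print(notified)                                 # only the guard heard it
```

Swapping the distance test for a line-of-sight ray cast, or a full physical simulation of sound propagation, changes the perception model without touching the notification machinery.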

Overview of search

    Most of this is managed through Finite State Machines, Markov Chains, or Behavior Trees, all of which are outgrowths of what is known as Graph Searching. Graph searching is a means of searching interconnected nodes in a matrix or network. Early examples are Depth-First Search and Breadth-First Search: Breadth-First searches the graph tree horizontally, level by level, while Depth-First follows each branch to its full depth before backtracking. In the early years of AI, the Stanford Research Institute (a CIA contractor that in the early days conducted research in Remote Viewing as well as AI and Robotics) came up with A* search for its robotics research. A* examines edge weights and costs to give a heuristic approach to AI search; a heuristic is a rule of thumb, an approximate solution that might work in many situations but is unlikely to work in all. The main consideration in choosing a heuristic is how well a particular path or trajectory of a search result achieves the goal, which is to win the game. Another area SRI was involved in was developing planning systems in simulations, known as STRIPS, which we shall cover in the Goal Oriented Action Planning section.
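A compact A* implementation makes the edge-weights-plus-heuristic idea concrete. The toy graph and heuristic values below are made up for illustration:

```python
# A compact A* sketch over a weighted graph. The heuristic estimates the
# remaining cost to the goal; A* always expands the node with the lowest
# g + h, where g is the cost travelled so far.

import heapq

def a_star(graph, start, goal, heuristic):
    """graph: {node: [(neighbor, edge_cost), ...]}. Returns a path or None."""
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier,
                               (new_g + heuristic(neighbor), new_g,
                                neighbor, path + [neighbor]))
    return None

# Toy map: h holds estimated remaining distances to D (admissible guesses).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, "A", "D", h.get))  # cheapest route: A, B, C, D (cost 3)
```

With the heuristic set to zero everywhere, the same code degenerates to Dijkstra's uniform-cost search; the heuristic's only job is to steer expansion toward the goal sooner.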

Moving around

      All games utilize foundational mechanics for moving characters around. Since this movement is based on statistical learning, it usually has a non-human quality. For instance, one thing a game character must do to target the opposing player is look at the player, automatically zeroing in on them. This is known as the LookAt() method in movement libraries and is a fairly universal component of video games. Some Targeted Individuals report that they are routinely targeted through involuntary staring on the part of hypnotized people around them: that no matter how obscure their location, people entering their area LookAt() them.

Moving in a game is based on vector mathematics; a vector is an ‘array’ of three elements.

A point is a location in space.  A vector is a direction and length.  A vector has no predetermined starting point.  There is only one point in space that has its coordinates, however vectors with the same values can be anywhere.  The x,y,z of the point is a single location in space, the x,y,z of a vector is the length of the vector in each of those dimensions.

Vector Mechanics:

    a point plus a vector will result in a point
    a vector plus a vector will result in a vector
In games we add vectors to points to move objects around.  For example, the movement of characters from one location to another.  The vector calculated between one point and another tells us the direction and distance.

    If Stevie (the zombie) wants to get from her position to Granny’s position, then the vector she must travel is Granny’s position minus Stevie’s position. For example, if Stevie was at (10, 15, 5) and Granny was at (20, 15, 20), then the vector from Stevie to Granny would be (20, 15, 20) – (10, 15, 5) = (10, 0, 15). If Granny wanted to find her vector to Stevie, the equation is reversed.
Then to move Stevie from her current location to that of Granny you would add the vector (V1) to Stevie’s position thus: (10, 15, 5) + (10, 0, 15) = (20, 15, 20), which you can check is correct because it’s the position of Granny!
     Vectors can also be added together to give a total direction and magnitude. The length of a vector is called its magnitude. When the direction toward a character is calculated as we’ve done in the previous examples, by taking one position away from another, the resulting length of that vector is the distance between the characters.
    In games distance between locations is used by decision making AI as well as moving objects around.  For example, an NPC might work out the distance to a player before deciding whether to attack or not. In determining the direction in which to travel to get from one location to another you might also require an angle that indicates how much a character needs to turn to be facing that location, otherwise you’ll get a character that moves sideways.  Once you have calculated the angle between the way the character is facing and the direction it is about to travel you’ll be able to program its turning. 
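The Stevie and Granny arithmetic above can be verified in a few lines. Plain tuples stand in for whatever vector type a real engine provides:

```python
# The point/vector operations described above, as a quick sketch.
# Tuples stand in for a real engine's vector type.

import math

def subtract(p1, p2):
    """Vector from p2 to p1: component-wise difference of two points."""
    return tuple(a - b for a, b in zip(p1, p2))

def add(point, vector):
    """Point plus vector gives a new point."""
    return tuple(a + b for a, b in zip(point, vector))

def magnitude(v):
    """Length of a vector; for a point-to-point vector, the distance."""
    return math.sqrt(sum(c * c for c in v))

stevie = (10, 15, 5)
granny = (20, 15, 20)
v = subtract(granny, stevie)       # direction and distance to Granny
print(v)                           # (10, 0, 15)
print(add(stevie, v) == granny)    # moving Stevie by v lands on Granny: True
print(round(magnitude(v), 2))      # how far apart they are
```

The magnitude of (10, 0, 15) is √(100 + 0 + 225) ≈ 18.03, which is the distance an NPC's decision-making AI would compare against, say, an attack range.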
    The smoothing of an NPC’s movement is a product of statistical mechanics. Instead of making jagged or quick movements, it will walk in an arc-like fashion or take a straight-angle approach. This is also noticeable in self-driving or vehicle simulations: cars proceed in smooth accelerations and decelerations, turning is rounder, and so on, all due to the AI used in their systems for managing movement being statistically based rather than natural or chaotic. This of course does not give a totally satisfactory simulation of real-time conditions in the real world, but for training purposes it may be good enough if you can account for these unnatural attributes and they do not become ingrained in the muscle memory of the soldiers.

    To move an agent NPC around, you could rely on older technology such as waypoints: reference points used for navigation purposes by in-game characters, most commonly in strategy games and squad-based games. For instance, if you wanted to statically block an entrance, you would simply give the coordinates of a waypoint near the door and have the NPC proceed to that waypoint. Single-rail games are a good example of using waypoints. A newer approach is to use navmeshes: in use since the mid-80s in robotics as meadow maps, and part of video game code since around 2000, a navmesh is an abstract data structure used in artificial intelligence applications to aid agents in pathfinding through complicated spaces.

    Aside from moving individual NPCs you can also group NPCs to move as flocks or as swarms, both video games and military drones use behaviour trees for complicated actions including swarming.  

    Some common movement related elements of game design are presented below, again based on the work of (Millington, Funge 2016): 

Sight Cone

a sight cone of around 60° is often used. It takes into account normal eye movement, but effectively blinds the character to the large area of space it can see but is unlikely to pay any attention to. (Millington, Funge 2016)
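A sight-cone check is typically done with a dot product. The sketch below is a 2D illustration with hypothetical names; it assumes the character's facing direction is already a unit vector:

```python
# A sight-cone test sketch using the dot product (2D). A target is
# "seen" when the angle between the character's facing direction and
# the direction to the target is within half the cone angle.
# Assumption: `facing` is a unit vector.

import math

def in_sight_cone(position, facing, target, cone_degrees=60):
    to_target = (target[0] - position[0], target[1] - position[1])
    dist = math.hypot(*to_target)
    if dist == 0:
        return True  # target is on top of the character
    # cos(angle) between facing and the normalized direction to target
    cos_angle = (facing[0] * to_target[0] + facing[1] * to_target[1]) / dist
    return cos_angle >= math.cos(math.radians(cone_degrees / 2))

guard_pos, guard_facing = (0, 0), (1, 0)                # facing along +x
print(in_sight_cone(guard_pos, guard_facing, (10, 1)))  # nearly dead ahead
print(in_sight_cone(guard_pos, guard_facing, (0, 10)))  # 90 degrees off: unseen
```

Comparing cosines avoids calling an inverse trig function per target, a common micro-optimization when many characters run the check every frame.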


Movement

Movement refers to algorithms that turn decisions into some kind of motion. When an enemy character without a gun needs to attack the player in Super Mario Sunshine, it first heads directly for the player. When it is close enough, it can actually do the attacking. The decision to attack is carried out by a set of movement algorithms that home in on the player’s location. Only then can the attack animation be played and the player’s health be depleted. Movement algorithms can be more complex than simply homing in. The AI needs information from the game to make sensible decisions. This is sometimes called “perception” (especially in academic AI): working out what information the character knows. In practice, it is much broader than just simulating what each character can see or hear, but includes all interfaces between the game world and the AI. This world interfacing is often a large proportion of the work done by an AI programmer, and in our experience it is the largest proportion of the AI debugging effort. (Millington, Funge 2016)

Steering Behavior

Steering behaviors is the name given by Craig Reynolds to his movement algorithms; they are not kinematic, but dynamic. Dynamic movement takes account of the current motion of the character. A dynamic algorithm typically needs to know the current velocities of the character as well as its position. A dynamic algorithm outputs forces or accelerations with the aim of changing the velocity of the character. Craig Reynolds also invented the flocking algorithm used in countless films and games to animate flocks of birds or herds of other animals….Because flocking is the most famous steering behavior, all steering (in fact, all movement) algorithms are sometimes wrongly called “flocking.” (Millington, Funge 2016) 

Characters as Points

Although a character usually consists of a three-dimensional (3D) model that occupies some space in the game world, many movement algorithms assume that the character can be treated as a single point. Collision detection, obstacle avoidance, and some other algorithms use the size of the character to influence their results, but movement itself assumes the character is at a single point. This is a process similar to that used by physics programmers who treat objects in the game as a “rigid body” located at its center of mass. Collision detection and other forces can be applied to anywhere on the object, but the algorithm that determines the movement of the object converts them so it can deal only with the center of mass. (Millington, Funge 2016)

Seek

A kinematic seek behavior takes as input the character’s and its target’s static data. It calculates the direction from the character to the target and requests a velocity along this line. The orientation values are typically ignored, although we can use the getNewOrientation function above to face in the direction we are moving. (Millington, Funge 2016) 
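The description above translates into a few lines of code. This sketch is an assumption-laden illustration (2D positions as tuples, an arbitrary max_speed), and the orientation handling mentioned in the quote is omitted:

```python
# A kinematic seek sketch following the description above. It returns a
# velocity pointing from the character straight at the target, at full
# speed (kinematic seek does not model acceleration).

import math

def kinematic_seek(character_pos, target_pos, max_speed=5.0):
    direction = (target_pos[0] - character_pos[0],
                 target_pos[1] - character_pos[1])
    length = math.hypot(*direction)
    if length == 0:
        return (0.0, 0.0)          # already at the target
    # Normalize the direction, then scale to maximum speed.
    return (direction[0] / length * max_speed,
            direction[1] / length * max_speed)

print(kinematic_seek((0, 0), (30, 40)))  # (3.0, 4.0): magnitude 5 toward target
```

Because the output is always at full speed, a kinematic seeker overshoots and jitters around its target; the steering (dynamic) version below fixes this by working with accelerations instead.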


Steering Behaviors

Steering behaviors extend the movement algorithms in the previous section by adding velocity and rotation. They are gaining larger acceptance in PC and console game development. In some genres (such as driving games) they are dominant; in other genres they are only just beginning to see serious use.

Obstacle avoidance behaviors take a representation of the collision geometry of the world. It is also possible to specify a path as the target for a path following behavior. In these behaviors some processing is needed to summarize the set of targets into something that the behavior can react to. This may involve averaging properties of the whole set (to find and aim for their center of mass, for example) (Millington, Funge 2016) 


Variable Matching

The simplest family of steering behaviors operates by variable matching: they try to match one or more of the elements of the character’s kinematic to a single target kinematic. (Millington, Funge 2016) 

Seek and Flee

Seek tries to match the position of the character with the position of the target. Exactly as for the kinematic seek algorithm, it finds the direction to the target and heads toward it as fast as possible. Because the steering output is now an acceleration, it will accelerate as much as possible. Seek will always move toward its goal with the greatest possible acceleration. (Millington, Funge 2016)
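The dynamic version differs from the kinematic sketch only in what it outputs: an acceleration rather than a velocity. Flee is the same calculation with the direction reversed. The max_acceleration value is an assumed tuning parameter:

```python
# A dynamic seek/flee sketch: the steering output is an acceleration at
# maximum magnitude, not a velocity. Flee simply reverses the direction.

import math

def dynamic_seek(character_pos, target_pos, max_acceleration=2.0, flee=False):
    direction = (target_pos[0] - character_pos[0],
                 target_pos[1] - character_pos[1])
    if flee:
        direction = (-direction[0], -direction[1])  # accelerate away instead
    length = math.hypot(*direction)
    if length == 0:
        return (0.0, 0.0)
    return (direction[0] / length * max_acceleration,
            direction[1] / length * max_acceleration)

print(dynamic_seek((0, 0), (3, 4)))             # accelerate toward the target
print(dynamic_seek((0, 0), (3, 4), flee=True))  # accelerate directly away
```

The caller's physics integration (velocity += acceleration × dt) is what produces the smooth curving motion that distinguishes steering behaviors from kinematic movement.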

Velocity Matching

So far we have looked at behaviors that try to match position with a target. We could do the same with velocity, but on its own this behavior is seldom useful. It could be used to make a character mimic the motion of a target… [useful in psychological operations] (Millington, Funge 2016)



Face

The face behavior makes a character look at its target. It delegates to the align behavior to perform the rotation but calculates the target orientation first. Wander: the wander behavior controls a character moving aimlessly about. (Millington, Funge 2016)

Path Following

So far we’ve seen behaviors that take a single target or no target at all. Path following is a steering behavior that takes a whole path as a target. A character with path following behavior should move along the path in one direction. Path following, as it is usually implemented, is a delegated behavior. It calculates the position of a target based on the current character location and the shape of the path. It then hands its target off to seek. There is no need to use arrive, because the target should always be moving along the path. We shouldn’t need to worry about the character catching up with it. The target position is calculated in two stages. First, the current character position is mapped to the nearest point along the path. This may be a complex process, especially if the path is curved or made up of many line segments. Second, a target is selected which is further along the path than the mapped point by a fixed distance. (Millington, Funge 2016)
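The two stages described, map to the nearest point on the path, then pick a target a fixed distance further along, can be sketched for a simple polyline path. Names and the lookahead distance are illustrative, and the sketch assumes the path has no zero-length segments:

```python
# A path-following target calculation in the two stages described above.
# The path is a polyline (list of 2D points); the result would then be
# handed off to a seek behavior.

import math

def nearest_param(path, pos):
    """Stage 1: distance along the polyline of the point nearest to pos."""
    best = (float("inf"), 0.0)
    travelled = 0.0
    for (ax, ay), (bx, by) in zip(path, path[1:]):
        seg = (bx - ax, by - ay)
        seg_len = math.hypot(*seg)
        # Project pos onto this segment, clamped to its endpoints.
        t = max(0.0, min(1.0,
                ((pos[0] - ax) * seg[0] + (pos[1] - ay) * seg[1]) / seg_len ** 2))
        px, py = ax + t * seg[0], ay + t * seg[1]
        d = math.hypot(pos[0] - px, pos[1] - py)
        if d < best[0]:
            best = (d, travelled + t * seg_len)
        travelled += seg_len
    return best[1]

def point_at_param(path, param):
    """The point a given distance along the polyline."""
    for (ax, ay), (bx, by) in zip(path, path[1:]):
        seg_len = math.hypot(bx - ax, by - ay)
        if param <= seg_len:
            t = param / seg_len
            return (ax + t * (bx - ax), ay + t * (by - ay))
        param -= seg_len
    return path[-1]  # past the end: clamp to the final point

def path_follow_target(path, pos, lookahead=2.0):
    """Stage 2: a target a fixed distance past the nearest path point."""
    return point_at_param(path, nearest_param(path, pos) + lookahead)

path = [(0, 0), (10, 0), (10, 10)]
print(path_follow_target(path, (4, 1)))  # a little ahead of (4, 0) on the path
```

A character standing at (4, 1) maps to (4, 0) on the path, so its seek target becomes the point two units further along; re-running this each frame pulls the character smoothly along the route.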

Separation

The separation behavior is common in crowd simulations, where a number of characters are all heading in roughly the same direction. It acts to keep the characters from getting too close and being crowded. (Millington, Funge 2016)

Attraction

Using the inverse square law, we can set a negative valued constant of decay and get an attractive force. The character will be attracted to others within its radius, but this is rarely useful. Some developers have experimented with having lots of attractors and repulsors in their level and having character movement mostly controlled by these. Characters are attracted to their goals and repelled from obstacles, for example. Despite being ostensibly simple, this approach is full of traps for the unwary. (Millington, Funge 2016)


Collision Avoidance

In urban areas, it is common to have large numbers of characters moving around the same space. These characters have trajectories that cross each other, and they need to avoid constant collisions with other moving characters. A simple approach is to use a variation of the evade or separation behavior, which only engages if the target is within a cone in front of the character. (Millington, Funge 2016) 

Obstacle and Wall Avoidance

The collision avoidance behavior assumes that targets are spherical. It is interested in avoiding getting too close to the center point of the target. This can also be applied to any obstacle in the game that is easily represented by a bounding sphere. Crates, barrels, and small objects can be avoided simply this way. The obstacle and wall avoidance behavior uses a different approach to avoiding collisions. The moving character casts one or more rays out in the direction of its motion. If these rays collide with an obstacle, then a target is created that will avoid the collision, and the character does a basic seek on this target. Typically, the rays are not infinite. They extend a short distance ahead of the character (usually a distance corresponding to a few seconds of movement). Decision trees, state machines, and blackboard architectures have all been used to control steering behaviours. (Millington, Funge 2016) 


Targeters

Targeters generate the top-level goal for a character. There can be several targets: a positional target, an orientation target, a velocity target, and a rotation target. We call each of these elements a channel of the goal (e.g., position channel, velocity channel). All goals in the algorithm can have any or all of these channels specified. An unspecified channel is simply a “don’t care.” Individual channels can be provided by different behaviors (a chase-the-enemy targeter may generate the positional target, while a look-toward targeter may provide an orientation target), or multiple channels can be requested by a single targeter. (Millington, Funge 2016) 

Decomposers

Decomposers are used to split the overall goal into manageable sub-goals that can be more easily achieved. (Millington, Funge 2016) 

Constraints

Constraints limit the ability of a character to achieve its goal or sub-goal. They detect if moving toward the current sub-goal is likely to violate the constraint, and if so, they suggest a way to avoid it. Constraints tend to represent obstacles: moving obstacles like characters or static obstacles like walls. (Millington, Funge 2016) 

The Actuator

Unlike each of the other stages of the pipeline, there is only one actuator per character. The actuator’s job is to determine how the character will go about achieving its current sub-goal. Given a sub-goal and its internal knowledge about the physical capabilities of the character, it returns a path indicating how the character will move to the goal. The actuator also determines which channels of the sub-goal take priority and whether any should be ignored. (Millington, Funge 2016) 


Coordinated Movement

Games increasingly require groups of characters to move in a coordinated manner. Coordinated motion can occur at two levels. The individuals can make decisions that complement each other, making their movements appear coordinated. Or they can make a decision as a whole and move in a prescribed, coordinated group. (Millington, Funge 2016)


Emergent Formations

Emergent formations provide a different solution to scalability. Each character has its own steering system using the arrive behavior. The characters select their target based on the position of other characters in the group. (Millington, Funge 2016) 
