Unit 2
o In previous topics, we studied search strategies that involve only a single agent aiming to find a solution, often expressed as a sequence of actions.
o However, there are situations where more than one agent searches for a solution in the same search space; this situation usually occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in which the agents are opponents playing against each other. Each agent needs to consider the actions of the other agent and the effect of those actions on its own performance.
o Searches in which two or more players with conflicting goals explore the same search space for a solution are called adversarial searches, often known as games.
o Games are modeled as a search problem together with a heuristic evaluation function; these are the two main factors that help to model and solve games in AI.
Note: In this topic, we will discuss deterministic, fully observable, zero-sum games in which the agents act alternately.
Zero-Sum Game
o Zero-sum games are adversarial searches that involve pure competition.
o In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of the other agent.
o One player tries to maximize a single value, while the other player tries to minimize it.
o Each move by one player in the game is called a ply.
o Chess and tic-tac-toe are examples of zero-sum games.
When making a move, each player has to think about:
o What to do.
o How to decide on the move.
o How the opponent will respond to that move.
o What the opponent is planning to do.
Each player tries to anticipate the opponent's response to their actions. This requires embedded thinking or backward reasoning to solve game problems in AI.
Game tree:
A game tree is a tree in which the nodes are game states and the edges are the moves made by the players. A game tree involves an initial state, an actions function, and a result function.
The following figure shows part of the game tree for tic-tac-toe. Key points about the game:
o From the initial state, MAX has 9 possible moves, as MAX plays first. MAX places x and MIN places o, and the players move alternately until we reach a leaf node where one player has three in a row or all squares are filled.
o Both players compute, for each node, the minimax value: the best achievable utility against an optimal adversary.
o Suppose both players know tic-tac-toe well and play their best moves. Each player does their best to prevent the other from winning; MIN acts against MAX in the game.
o So in the game tree we have a layer for MAX and a layer for MIN, and each layer is called a ply. MAX places x, then MIN places o to prevent MAX from winning, and the game continues until a terminal node is reached.
o At a terminal node, either MIN wins, MAX wins, or it is a draw. This game tree is the whole search space of possibilities when MIN and MAX play tic-tac-toe, taking turns alternately.
Hence the adversarial search in the minimax procedure works as follows:
o It aims to find the optimal strategy for MAX to win the game.
o It follows a depth-first search approach.
o In the game tree, the optimal leaf node could appear at any depth of the tree.
o Once the terminal nodes are discovered, their minimax values are propagated up the tree.
In a given game tree, the optimal strategy can be determined from the minimax value of each node, written MINIMAX(n). MAX prefers to move to a state of maximum value and MIN prefers to move to a state of minimum value, so the minimax value can be defined recursively as shown below.
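Written out in the standard textbook form (using the game elements PLAYER, RESULT, TERMINAL-TEST, and UTILITY that are defined later in this unit), the minimax recurrence is:

MINIMAX(s) =
    UTILITY(s)                                    if TERMINAL-TEST(s) is true
    max over actions a of MINIMAX(RESULT(s, a))   if PLAYER(s) = MAX
    min over actions a of MINIMAX(RESULT(s, a))   if PLAYER(s) = MIN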
Let us start with games with two players, whom we’ll refer to as MAX and MIN for
obvious reasons. MAX is the first to move, and then they take turns until the game is
finished. At the conclusion of the game, the victorious player receives points, while
the loser receives penalties. A game can be formalized as a type of search problem
that has the following elements:
S0: The initial state of the game, which describes how it is set up at the start.
Player (s): Defines which player in a state has the move.
Actions (s): Returns a state’s set of legal moves.
Result (s, a): A transition model that defines a move’s outcome.
Terminal-Test (s): A terminal test that returns true if the game is over and
false otherwise. Terminal states are those in which the game has come to an
end.
Utility (s, p): A utility function (also known as a payoff function or
objective function) determines the final numeric value for a game that
concludes in terminal state s for player p. The result in chess is a win, a
loss, or a draw, with values of +1, 0, or 1/2. Backgammon's payoffs range
from 0 to +192, but certain games have a greater range of possible
outcomes. A zero-sum game is defined (somewhat confusingly) as one in which the
total payoff to all players is the same for every instance of the game. Chess is a
zero-sum game because each game has a total payoff of 0 + 1, 1 + 0, or 1/2 +
1/2. "Constant-sum" would have been a preferable name, but zero-sum
is the usual term and makes sense if you imagine each player is charged an entry fee of 1/2.
The game tree for the game is defined by the initial state, the ACTIONS function, and the RESULT function: a tree in which the nodes are game states and the edges represent moves. The figure below depicts a portion of the tic-tac-toe (noughts and crosses) game tree. From the starting position, MAX may make nine different moves. Play alternates between MAX placing an X and MIN placing an O until we reach leaf nodes corresponding to terminal states, such as one player having three in a row or all of the squares being filled. The number on each leaf node is the utility value of the terminal state from the perspective of MAX; high values are good for MAX and bad for MIN.
The game tree for tic-tac-toe is relatively small, with fewer than 9! = 362,880 terminal nodes. However, because chess has over 10^40 nodes, its game tree is better viewed as a theoretical construct that cannot be realized in the physical world. But no matter how big the game tree is, MAX's goal is to find a good move. A search tree is a tree that is superimposed on the full game tree and examines enough nodes to allow a player to determine what move to make.
Given a game tree, the optimal strategy can be determined from the minimax value of each node, which we write as MINIMAX(n). The minimax value of a node is the utility (for MAX) of being in the corresponding state, assuming that both players play optimally from there to the end of the game. The minimax value of a terminal state is simply its utility. Furthermore, given the option, MAX prefers to move to a state of maximum value, whereas MIN prefers to move to a state of minimum value. This is exactly the recurrence for MINIMAX given earlier.
Optimal Decision Making in Multiplayer Games
Let’s use these definitions to analyze the game tree shown in the figure above. The game’s UTILITY function assigns utility values to the terminal nodes on the bottom level. The first MIN node, B, has three successor states with values of 3, 12, and 8, so its minimax value is 3. Similarly, the other two MIN nodes have minimax value 2. The root node is a MAX node; its successors have minimax values of 3, 2, and 2, so its own minimax value is 3. We can also identify the minimax decision at the root: action a1 is the best choice for MAX because it leads to the state with the highest minimax value.
This definition of optimal play for MAX assumes that MIN also plays optimally: it maximizes MAX’s worst-case outcome. What happens if MIN does not play optimally? Then it is easy to show that MAX will do at least as well, and possibly better. Other strategies may do better than the minimax strategy against suboptimal opponents, but they will necessarily do worse against optimal opponents.
Since minimax is a backtracking-based algorithm, it tries all possible moves, then backtracks and makes a decision. Consider a simple two-level tree in which the maximizer moves first:
Maximizer goes LEFT: it is now the minimizer's turn. The minimizer has a choice between 3 and 5. Being the minimizer, it will choose the smaller of the two, that is 3.
Maximizer goes RIGHT: it is now the minimizer's turn. The minimizer has a choice between 2 and 9. It will choose 2, as it is the smaller of the two values.
Being the maximizer, you would choose the larger of the two results, that is 3. Hence the optimal move for the maximizer is to go LEFT and the optimal value is 3.
Now the game tree looks as shown below. The tree shows the two possible scores obtained when the maximizer makes the left and the right moves.
Note: Even though there is a value of 9 on the right subtree, the minimizer will never
pick that. We must always assume that our opponent plays optimally.
Below is a sample implementation of this example.
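The following is a minimal Python sketch of minimax for the two-level example above. The leaf scores [3, 5, 2, 9] and the depth of two plies come from the example; the function and variable names are illustrative.

import math

def minimax(depth, node_index, is_maximizer, scores, target_depth):
    # Leaf node: return its static score.
    if depth == target_depth:
        return scores[node_index]
    if is_maximizer:
        # Maximizer picks the larger of the two child values.
        return max(minimax(depth + 1, node_index * 2, False, scores, target_depth),
                   minimax(depth + 1, node_index * 2 + 1, False, scores, target_depth))
    else:
        # Minimizer picks the smaller of the two child values.
        return min(minimax(depth + 1, node_index * 2, True, scores, target_depth),
                   minimax(depth + 1, node_index * 2 + 1, True, scores, target_depth))

scores = [3, 5, 2, 9]                          # leaf values from the example, left to right
target_depth = int(math.log2(len(scores)))     # 2 plies for 4 leaves
print("The optimal value is:", minimax(0, 0, True, scores, target_depth))   # prints 3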
Time complexity: O(b^d), where b is the branching factor and d is the depth (number of plies) of the tree.
Space complexity: O(b*d), where b is the branching factor and d is the maximum depth of the tree (similar to DFS).
Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an
optimization technique for the minimax algorithm.
o As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree, and this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also known as the Alpha-Beta algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only leaves but entire subtrees.
o The two parameters can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point
along the path of the Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point
along the path of the Minimizer. The initial value of beta is +∞.
Note: To better understand this topic, kindly study the minimax algorithm.
Condition for alpha-beta pruning: α >= β. Whenever this condition holds at a node, the remaining successors of that node are pruned and are not explored.
Step 1: In the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.
Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 becomes the value of α at node D; the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β changes because it is Min's turn. β = +∞ is compared with the value of the subsequent node, i.e. min(∞, 3) = 3; hence at node B we now have α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, the algorithm does not traverse it, and the value at node E is 5.
Step 5: In the next step, the algorithm backtracks the tree again, from node B to node A. At node A, the value of alpha is updated: the maximum available value is 3, since max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared, first with the left child, which is 0, giving max(3, 0) = 3, and then with the right child, which is 1, giving max(3, 1) = 3. So α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire subtree rooted at G.
Step 8: C now returns the value 1 to A, and the best value for A is max(3, 1) = 3.
The final game tree shows which nodes were computed and which nodes were never computed. Hence the optimal value for the maximizer is 3 for this example.
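To make the procedure concrete, the following is a minimal Python sketch of alpha-beta pruning on a game tree given as nested lists of leaf values. The tree used here has the same shape as the example above, but the leaf values of the pruned branches (9, and 7 and 5) are illustrative placeholders, since they are not listed in the text.

import math

def alphabeta(node, is_maximizer, alpha, beta):
    # A leaf is a plain number; return its utility.
    if isinstance(node, (int, float)):
        return node
    if is_maximizer:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:        # pruning condition: skip the remaining children
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:        # pruning condition
                break
        return value

# A (MAX) -> B, C (MIN) -> D, E, F, G (MAX) -> leaves
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, True, -math.inf, math.inf))   # prints 3

With this tree, the right leaf under E and the whole subtree under G are never evaluated, matching the walkthrough above.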
Move Ordering in Alpha-Beta pruning:
The effectiveness of alpha-beta pruning is highly dependent on the order in which the nodes are examined. Move ordering is an important aspect of alpha-beta pruning.
o Worst ordering: In some cases the alpha-beta pruning algorithm does not prune any leaves of the tree and works exactly like the minimax algorithm. It then also consumes more time because of the bookkeeping of the alpha and beta values; such an ordering is called worst ordering. In this case, the best move occurs on the right side of the tree. The time complexity for such an ordering is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. We apply DFS, so the algorithm searches the left part of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).
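As an illustration with assumed numbers (not taken from the text): for a branching factor of b = 30 and a search depth of m = 8, worst ordering examines on the order of b^m = 30^8 ≈ 6.6 x 10^11 nodes, while ideal ordering examines on the order of b^(m/2) = 30^4 = 810,000 nodes. This is why good move ordering lets alpha-beta search roughly twice as deep as plain minimax in the same amount of time.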
This is a standard backgammon position. The object of the game is to move all of one's pieces off the board as quickly as possible. White moves clockwise toward 25, while Black moves counterclockwise toward 0. A piece can move to any position unless multiple opponent pieces are there; if there is exactly one opponent piece, it is captured and must start over. White has rolled a 6–5 and must choose among four legal moves: (5–10,5–11), (5–11,19–24), (5–10,10–16), and (5–11,11–16), where the notation (5–11,11–16) means moving one piece from position 5 to 11 and then another from 11 to 16.
Stochastic game tree for a backgammon position
White knows his or her own legal moves, but has no idea how Black will roll, and thus no idea what Black's legal moves will be. That means White cannot construct a standard game tree of the sort we saw in chess or tic-tac-toe. In backgammon, in addition to MAX and MIN nodes, the game tree must include chance nodes. The figure below depicts chance nodes as circles. The branches leading from each chance node denote the possible dice rolls; each branch is labelled with the roll and its probability. There are 36 ways to roll two dice, each equally likely, yet there are only 21 distinct rolls because a 6–5 is the same as a 5–6. Each of the six doubles (1–1 through 6–6) has a probability of 1/36, so P(1–1) = 1/36. Each of the other 15 rolls has a probability of 1/18.
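The counting argument above can be checked with a short Python snippet (an illustrative sketch, not part of the original text):

from itertools import product
from collections import Counter

# Count the 36 ordered outcomes of two dice, treating (a, b) and (b, a) as the same roll.
rolls = Counter(tuple(sorted(pair)) for pair in product(range(1, 7), repeat=2))

print(len(rolls))           # 21 distinct rolls
print(rolls[(1, 1)] / 36)   # a double such as 1-1: 1/36
print(rolls[(5, 6)] / 36)   # a non-double such as 6-5: 2/36 = 1/18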
The next step is to learn how to make good decisions. Obviously, we want to choose the move that leads to the best position. However, positions no longer have definite minimax values. Instead, we can only compute a position's expected value, which is the average over all possible outcomes of the chance nodes.
As a result, we can generalize the deterministic minimax value to an expectiminimax value for games with chance nodes. Terminal nodes and MAX and MIN nodes (for which the dice roll is known) work exactly as before. For chance nodes we compute the expected value, which is the sum of the values over all outcomes, weighted by the probability of each chance action:

EXPECTIMINIMAX(s) =
    UTILITY(s)                                               if TERMINAL-TEST(s) is true
    max over actions a of EXPECTIMINIMAX(RESULT(s, a))       if PLAYER(s) = MAX
    min over actions a of EXPECTIMINIMAX(RESULT(s, a))       if PLAYER(s) = MIN
    sum over r of P(r) * EXPECTIMINIMAX(RESULT(s, r))        if PLAYER(s) = CHANCE

where r is a possible dice roll (or other random event) and RESULT(s, r) denotes the same state as s, with the additional fact that the result of the dice roll is r.
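A minimal Python sketch of this recurrence on a toy tree is shown below. The tree structure, probabilities, and leaf values are invented for illustration and are not taken from the backgammon example.

def expectiminimax(node):
    # A node is either a number (terminal utility) or a pair (kind, children),
    # where kind is "max", "min", or "chance".
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    if kind == "max":
        return max(expectiminimax(child) for child in children)
    if kind == "min":
        return min(expectiminimax(child) for child in children)
    # Chance node: children are (probability, subtree) pairs.
    return sum(p * expectiminimax(child) for p, child in children)

# Toy example: MAX chooses between two chance nodes (illustrative values only).
tree = ("max", [
    ("chance", [(0.5, ("min", [3, 5])), (0.5, ("min", [1, 9]))]),   # expected value 0.5*3 + 0.5*1 = 2.0
    ("chance", [(0.9, ("min", [2, 4])), (0.1, ("min", [0, 6]))]),   # expected value 0.9*2 + 0.1*0 = 1.8
])
print(expectiminimax(tree))   # prints 2.0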
Partially observable games in artificial intelligence
Partially Observable Games, often referred to as Partially
Observable Markov Decision Processes (POMDPs), are a class of
problems and models in artificial intelligence that involve
decision-making in situations where an agent's observations do
not provide complete information about the underlying state of
the environment. POMDPs are an extension of Markov Decision
Processes (MDPs) to scenarios where uncertainty and partial
observability are significant factors. They are commonly used to
model and solve problems in various domains, including
robotics, healthcare, finance, and game playing.
Constraint Satisfaction Problems (CSP)
In constraint satisfaction, a problem is described by a set of variables, the domains in which those variables take their values, and the constraints that are specific to the task. These three components make up a constraint satisfaction problem in its entirety. Each constraint consists of a pair <scope, rel>: the scope is a tuple of the variables that participate in the constraint, and rel is a relation that lists the combinations of values the variables may take in order to satisfy the constraints of the problem.
A constraint satisfaction problem (CSP) is built on two basic notions:
o A state space.
o The notion of a solution.
A state in the state space is defined by assigning values to some or all of the variables, for example {X1 = v1, X2 = v2, ...}. Constraints come in three main kinds:
o Unary constraints: the simplest kind of constraint, restricting the value of a single variable.
o Binary constraints: constraints that relate two variables; for example, the value of a variable x2 must lie between x1 and x3.
o Global constraints: constraints that involve an arbitrary number of variables.
Each kind of constraint is handled using particular kinds of solution methods.
Note: A special kind of constraint that appears in real-world problems is the preference constraint.
Think of a Sudoku puzzle in which some of the squares are initially filled with certain integers.
You must complete the empty squares with numbers between 1 and 9, making sure that no row, column, or block contains a repeated integer. Solving this kind of puzzle is an elementary constraint satisfaction problem: a problem must be solved while respecting certain constraints.
The empty squares are the variables, and the range of integers (1-9) that can fill them is the domain. The values of the variables are drawn from the domain, and the constraints are the rules that determine which values a variable may take.
For example, the constraint that two variables must take different values can be written as the pair C = <(v1, v2), v1 != v2>.
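To make the <scope, rel> representation concrete, the following is a small Python sketch of a CSP with two variables that must take different values. The data structures and names are illustrative and do not come from any particular library.

# Variables, domains, and constraints for a tiny CSP: two variables that must differ.
variables = ["v1", "v2"]
domains = {"v1": {1, 2, 3}, "v2": {1, 2, 3}}

# A constraint is a pair (scope, rel): the scope names the variables involved,
# and rel is a predicate over their values.
constraints = [(("v1", "v2"), lambda a, b: a != b)]

def consistent(assignment):
    # An assignment is consistent if every fully assigned constraint holds.
    for scope, rel in constraints:
        if all(var in assignment for var in scope):
            if not rel(*(assignment[var] for var in scope)):
                return False
    return True

def backtrack(assignment):
    # Plain backtracking search: assign one variable at a time, undoing bad choices.
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

print(backtrack({}))   # e.g. {'v1': 1, 'v2': 2}: any assignment with v1 != v2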