
Artificial Intelligence (BAD402)

Module 2
SOLVING PROBLEMS BY SEARCHING
3.1 PROBLEM-SOLVING AGENTS

Problem-solving agent: a goal-based agent that decides what to do by finding a sequence of
actions leading to a goal state is called a problem-solving agent.

Goal formulation: Goal formulation is the process of defining and structuring the objectives
that an individual, team, or organization aims to achieve. It involves clarifying what needs to
be accomplished, establishing criteria for success, and breaking down larger goals into
smaller, manageable tasks.

Problem formulation: the process of clearly defining and structuring a problem or
challenge that needs to be addressed. It involves identifying the key issues, understanding the
context and stakeholders involved, and framing the problem in a way that facilitates effective
problem-solving.

Unknown: in the context of problem solving, "unknown" refers to variables, factors, or
aspects of a situation that are not fully understood or identified. These unknowns introduce
uncertainty that must be addressed in order to formulate and solve problems effectively.

In general, an agent with several immediate options of unknown value can decide what to do by first
examining future actions that eventually lead to states of known value.
Observable: something that can be perceived or detected through the senses or through
instrumentation.
Deterministic: "Deterministic" refers to a system or process in which every state or outcome
is uniquely determined by its preceding state, without any randomness or uncertainty. In a
deterministic system, given the same initial conditions, the subsequent states or outcomes will
always be the same.
Search: The process of looking for a sequence of actions that reaches the goal is called
search. A search algorithm takes a problem as input and returns a solution in the form of an
action sequence.
Execution: Once a solution is found, the actions it recommends can be carried out. This is
called the execution phase.

Smt. L N Shylaja, Associate Professor & HOD, AIML Page 1



While the agent is executing the solution sequence, it ignores its percepts when choosing an
action, because it knows in advance what they will be. Control theorists call this an open-loop
system, because ignoring the percepts breaks the loop between agent and environment.

Well-defined problems and solutions


A problem can be defined formally by five components:
 initial state
 actions
 transition model,
 Goal test
 Path cost

 The initial state that the agent starts in. For example, the initial state for our agent in
Romania might be described as In(Arad).
 A description of the possible actions available to the agent. Given a particular state s,
ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these
actions is applicable in s. For example, from the state In(Arad), the applicable actions are
{Go(Sibiu), Go(Timisoara), Go(Zerind)}.
 A description of what each action does; the formal name for this is the transition
model, specified by a function RESULT(s, a) that returns the state that results from


doing action a in state s. We also use the term successor to refer to any state reachable
from a given state by a single action. For example, we have
RESULT(In(Arad), Go(Zerind)) = In(Zerind).
Together, the initial state, actions, and transition model implicitly define the state space of
the problem
 A path in the state space is a sequence of states connected by a sequence of actions

 The goal test, which determines whether a given state is a goal state. Sometimes the
goal is specified by an abstract property rather than an explicitly enumerated set of
states. For example, in chess, the goal is to reach a state called “checkmate,” where
the opponent’s king is under attack and can’t escape.
 path cost function that assigns a numeric cost to each path. The problem-solving
agent chooses a cost function that reflects its own performance measure.

The process of removing detail from a representation is called abstraction.
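The five components above can be rendered as a minimal abstract class. This is only a sketch; the names (Problem, actions, result, goal_test, step_cost) are illustrative conventions, not fixed by the text.

```python
# A minimal sketch of the five-component problem definition.
class Problem:
    def __init__(self, initial, goal):
        self.initial = initial      # initial state
        self.goal = goal            # used by the goal test

    def actions(self, state):
        """Return the actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Determine whether `state` is a goal state."""
        return state == self.goal

    def step_cost(self, state, action, result):
        """Path cost is the sum of step costs along a path; default is 1 per step."""
        return 1
```

A concrete problem (the Romania route, the vacuum world) then only has to supply `actions` and `result`.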

3.2 EXAMPLE PROBLEMS


A toy problem is intended to illustrate or exercise various problem-solving methods. It can be
given a concise, exact description and hence is usable by different researchers to compare the


performance of algorithms.
A real-world problem is one whose solutions people actually care about. Such problems tend
not to have a single agreed-upon description, but we can give the general flavor of their
formulations.
3.2.1 Toy problems
 vacuum world

The first example we examine is the vacuum world first introduced in Chapter 2. (See
Figure 2.2.) This can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The agent
is in one of two locations, each of which might or might not contain dirt. Thus, there are
2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck.
Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the
leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no
effect. The complete state space is shown in Figure 3.3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
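The two-location vacuum world above is small enough to enumerate directly. The sketch below uses one hypothetical state encoding, a pair of (agent location, frozenset of dirty squares), with locations 'A' (left) and 'B' (right).

```python
# Sketch of the vacuum-world formulation; encoding is illustrative.
from itertools import product

LOCS = ('A', 'B')

def all_states():
    # Agent in one of 2 locations, each location dirty or clean: 2 * 2^2 = 8.
    dirt_configs = [frozenset(s) for s in (set(), {'A'}, {'B'}, {'A', 'B'})]
    return [(loc, d) for loc, d in product(LOCS, dirt_configs)]

def result(state, action):
    loc, dirt = state
    if action == 'Left':
        return ('A', dirt)              # no effect if already in the leftmost square
    if action == 'Right':
        return ('B', dirt)              # no effect if already in the rightmost square
    if action == 'Suck':
        return (loc, dirt - {loc})      # no effect on a clean square
    raise ValueError(action)

def goal_test(state):
    return state[1] == frozenset()      # all squares clean
```

Counting `all_states()` confirms the 8-state figure given above.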


 The 8-puzzle:
The 8-puzzle, an instance of which is shown in Figure 3.4, consists of a 3×3 board with eight
numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The
object is to reach a specified goal state, such as the one shown on the right of the figure. The standard
formulation is as follows:

States: A state description specifies the location of each of the eight tiles and the blank in one
of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be
reached from exactly half of the possible initial states (Exercise 3.4).
• Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right, Up, or Down. Different subsets of these are possible depending on where the
blank is.
• Transition model: Given a state and action, this returns the resulting state; for example, if
we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank
switched.
• Goal test: This checks whether the state matches the goal configuration shown in Figure
3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
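The 8-puzzle formulation can be sketched in a few lines, assuming states are 9-tuples read row by row with 0 marking the blank (an illustrative encoding, not the book's).

```python
# Sketch of the 8-puzzle actions and transition model.
MOVES = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}

def actions(state):
    """Blank-space moves applicable in `state` (depends on where the blank is)."""
    i = state.index(0)
    acts = []
    if i % 3 > 0:  acts.append('Left')
    if i % 3 < 2:  acts.append('Right')
    if i // 3 > 0: acts.append('Up')
    if i // 3 < 2: acts.append('Down')
    return acts

def result(state, action):
    """Slide the blank in the given direction, swapping it with the adjacent tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```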
The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test
problems for new search algorithms in AI. This family is known to be NP-complete, so one
does not expect to find methods significantly better in the worst case than the search
algorithms described in this chapter and the next. The 8-puzzle has 9!/2 = 181,440 reachable
states and is easily solved. The 15-puzzle (on a 4×4 board) has around 1.3 trillion states, and
random instances can be solved optimally in a few milliseconds by the best search


algorithms. The 24-puzzle (on a 5 × 5 board) has around 10^25 states, and random instances
take several hours to solve optimally.


 8-queens problem
The goal of the 8-queens problem is to place eight queens on a chessboard such that no
queen attacks any other. (A queen attacks any piece in the same row, column or diagonal.)
Figure 3.5 shows an attempted solution that fails: the queen in the rightmost column is
attacked by the queen at the top left.

There are two main kinds of formulation. An incremental formulation involves operators
that augment the state description, starting with an empty state; for the 8-queens problem, this
means that each action adds a queen to the state. A complete-state formulation starts with
all 8 queens on the board and moves them around. In either case, the path cost is of no
interest because only the final state counts. The first incremental formulation one might try is
the following:
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked.
In this formulation, we have 64 · 63 ··· 57 ≈ 1.8 × 10^14 possible sequences to investigate. A
better formulation would prohibit placing a queen in any square that is already attacked:
• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the
leftmost n columns, with no queen attacking another.


• Actions: Add a queen to any square in the leftmost empty column such that it is not
attacked by any other queen. This formulation reduces the 8-queens state space from 1.8 ×
10^14 to just 2,057, and solutions are easy to find. On the other hand, for 100 queens the
reduction is from roughly 10^400 states to about 10^52 states (Exercise 3.5)—a big improvement,
but not enough to make the problem tractable. Section 4.1 describes the complete-state
formulation, and Chapter 6 gives a simple algorithm that solves even the million-queens
problem with ease.
Explanation:
 The goal is to find a solution to the 8-queens problem: place 8 queens on a
chessboard in such a way that no two queens threaten each other.
 The algorithm starts by placing a queen on the first column, then it proceeds to the next
column and places a queen in the first safe row of that column.
 If the algorithm reaches the 8th column and all queens are placed in a safe position, it
prints the board and returns true. If the algorithm is unable to place a queen in a safe
position in a certain column, it backtracks to the previous column and tries a different
row.
 The “isSafe” function checks if it is safe to place a queen on a certain row and column
by checking if there are any queens in the same row, diagonal or anti-diagonal.
 It’s worth noting that this is just high-level pseudocode; it may need to be adapted
depending on the specific implementation and language being used.
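The backtracking procedure described above can be sketched as follows. `is_safe` mirrors the "isSafe" check from the explanation, and the column-by-column placement is one straightforward realization, not the only one.

```python
# Backtracking sketch: queens[c] holds the row of the queen in column c.
def is_safe(queens, row):
    """Check whether a queen at (row, len(queens)) attacks no placed queen."""
    col = len(queens)
    for c, r in enumerate(queens):
        # Same row, or same diagonal / anti-diagonal.
        if r == row or abs(r - row) == abs(c - col):
            return False
    return True

def solve(queens=(), n=8):
    """Return one safe placement as a tuple of rows, or None if none exists."""
    if len(queens) == n:
        return queens
    for row in range(n):
        if is_safe(queens, row):
            sol = solve(queens + (row,), n)
            if sol is not None:
                return sol
    return None                     # backtrack to the previous column
```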


 Knuth’s 4 problem
Our final toy problem was devised by Donald Knuth (1964) and illustrates how
infinite state spaces can arise. Knuth conjectured that, starting with the number 4, a
sequence of factorial, square root, and floor operations will reach any desired positive
integer. For example, we can reach 5 from 4 as follows:

floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!)))))) = 5
The problem definition is very simple:


• States: Positive numbers.
• Initial state: 4.
• Actions: Apply factorial, square root, or floor operation (factorial for integers only).
• Transition model: As given by the mathematical definitions of the operations.
• Goal test: State is the desired positive integer.
To our knowledge there is no bound on how large a number might be constructed in the
process of reaching a given target—for example, the number
620,448,401,733,239,439,360,000 is generated in the expression for 5—so the state space for
this problem is infinite. Such state spaces arise frequently in tasks involving the generation of
mathematical expressions, circuits, proofs, programs, and other recursively defined objects.
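A small breadth-first sketch can explore this state space. The caps `max_fact` and `limit` are assumptions added purely to keep the illustration finite; the real state space is unbounded.

```python
# BFS over Knuth's three operations {factorial, sqrt, floor}, starting from 4.
import math
from collections import deque

def reachable(target, max_fact=24, limit=10**30):
    """Return an expression reaching `target` from 4, or None within the caps."""
    frontier = deque([(4, '4')])
    seen = set()
    while frontier:
        x, expr = frontier.popleft()
        if x == target:
            return expr
        if x in seen or x > limit:
            continue
        seen.add(x)
        # Factorial only for (small) integers, to keep the numbers bounded.
        if x == int(x) and 2 < x <= max_fact:
            frontier.append((math.factorial(int(x)), f'({expr})!'))
        if x > 1:
            frontier.append((math.sqrt(x), f'sqrt({expr})'))
        if x != int(x):
            frontier.append((math.floor(x), f'floor({expr})'))
    return None
```

With these caps the search still passes through (4!)! = 620,448,401,733,239,439,360,000 on its way to 5.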

3.2.2 Real-world problems


We have already seen how the route-finding problem is defined in terms of specified
locations and transitions along links between them. Route-finding algorithms are used in a
variety of applications, such as Web sites and in-car systems that provide driving directions;
the Romania map is one example.
Consider the airline travel problems that must be solved by a travel-planning Web site:

• States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must record
extra information about these “historical” aspects.
• Initial state: This is specified by the user’s query.


• Actions: Take any flight from the current location, in any seat class, leaving after the
current time, leaving enough time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will have the flight’s destination
as the current location and the flight’s arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage
awards, and so on.
Touring problems are closely related to route-finding problems, but with an important
difference. Consider, for example, the problem “Visit every city in Figure 3.2 at least once,
starting and ending in Bucharest.” As with route finding, the actions correspond to trips
between adjacent cities. The state space, however, is quite different. Each state must include
not just the current location but also the set of cities the agent has visited. So the initial state
would be In(Bucharest), Visited({Bucharest}), a typical intermediate state would be
In(Vaslui), Visited({Bucharest, Urziceni, Vaslui}), and the goal test would check whether the
agent is in Bucharest and all 20 cities have been visited.
The travelling salesperson problem (TSP) is a touring problem in which each city must be
visited exactly once. The aim is to find the shortest tour. The problem is known to be NP-
hard, but an enormous amount of effort has been expended to improve the capabilities of TSP
algorithms. In addition to planning trips for travelling salespersons, these algorithms have
been used for tasks such as planning movements of automatic circuit-board drills and of
stocking machines on shop floors.
Eg: Consider the following graph with six cities and the distances between them


From the given graph, since the origin is already mentioned, the solution must always start
from that node. Among the edges leading from A, A → B has the shortest distance.

Then, from B, B → C is the only available edge, so it is included in the output
graph.

There is only one edge from C, namely C → D, so it is added to the output graph.

There are two outward edges from D. Even though D → B has a lower distance than D → E, B
has already been visited and adding it would form a cycle. Therefore, D → E is added to the
output graph.


There is only one edge from E, namely E → F, so it is added to the output graph.

Again, even though F → C has a lower distance than F → A, C has already been visited, so
F → A is added to the output graph to close the tour without forming a cycle.

The resulting tour that originates and ends at A is A → B → C → D → E → F → A.

The cost of the path is: 16 + 21 + 12 + 15 + 16 + 34 = 114.
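The walkthrough above is a greedy nearest-neighbour construction, and it can be sketched directly. The directed adjacency list below is reconstructed from the walkthrough; the D → B and F → C weights are guesses constrained only by the comparisons stated in the text.

```python
# Greedy nearest-neighbour tour; edge weights from the walkthrough where given.
graph = {
    'A': {'B': 16, 'F': 34},
    'B': {'C': 21},
    'C': {'D': 12},
    'D': {'B': 10, 'E': 15},    # 10 is a guess; the text only says D-B < D-E
    'E': {'F': 16},
    'F': {'C': 18, 'A': 34},    # 18 is a guess; the text only says F-C < F-A
}

def greedy_tour(graph, start='A'):
    """Visit the nearest unvisited neighbour at each step, then return to start."""
    tour, cost = [start], 0
    current, unvisited = start, set(graph) - {start}
    while unvisited:
        nxt = min((n for n in graph[current] if n in unvisited),
                  key=lambda n: graph[current][n])
        cost += graph[current][nxt]
        tour.append(nxt)
        current = nxt
        unvisited.remove(nxt)
    cost += graph[current][start]       # edge back to the origin
    return tour, cost
```

Running it reproduces the tour A → B → C → D → E → F → A with cost 114.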


The cost might be lower if the tour originated from a different node, but the problem
specifies A as the origin.

A VLSI layout problem requires positioning millions of components and connections on a chip to
minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing
yield. The layout problem comes after the logical design phase and is usually split into two parts: cell
layout and channel routing. In cell layout, the primitive components of the circuit are grouped into
cells, each of which performs some recognized function. Each cell has a fixed footprint (size and
shape) and requires a certain number of connections to each of the other cells. The aim is to place the
cells on the chip so that they do not overlap and so that there is room for the connecting wires to be
placed between the cells. Channel routing finds a specific route for each wire through the gaps
between the cells. These search problems are extremely complex, but definitely worth solving. Later
in this chapter, we present some algorithms capable of solving them.

Fig: VLSI layout

Robot navigation is a generalization of the route-finding problem described earlier. Rather


than following a discrete set of routes, a robot can move in a continuous space with (in
principle) an infinite set of possible actions and states. For a circular robot moving on a flat
surface, the space is essentially two-dimensional. When the robot has arms and legs or wheels
that must also be controlled, the search space becomes many-dimensional. Advanced
techniques are required just to make the search space finite.


Fig: Robot navigation using Stereo vision

Automatic assembly sequencing of complex objects by a robot was first demonstrated by


FREDDY (Michie, 1972). Progress since then has been slow but sure, to the point where the
assembly of intricate objects such as electric motors is economically feasible. In assembly
problems, the aim is to find an order in which to assemble the parts of some object. If the
wrong order is chosen, there will be no way to add some part later in the sequence without
undoing some of the work already done. Checking a step in the sequence for feasibility is a
difficult geometrical search problem closely related to robot navigation. Thus, the generation
of legal actions is the expensive part of assembly sequencing. Any practical algorithm must
avoid exploring all but a tiny fraction of the state space. Another important assembly problem
is protein design, in which the goal is to find a sequence of amino acids that will fold into a
three-dimensional protein with the right properties to cure some disease.

Fig: Automatic assembly sequencing


Fig: Protein design

3.3 SEARCHING FOR SOLUTIONS


Having formulated some problems, we now need to solve them. A solution is an action
sequence, so search algorithms work by considering various possible action sequences. The
possible action sequences starting at the initial state form a search tree with the initial state at
the root; the branches are actions and the nodes correspond to states in the state space of the
problem. Figure 3.6 shows the first few steps in growing the search tree for finding a route
from Arad to Bucharest. The root node of the tree corresponds to the initial state, In(Arad).
The first step is to test whether this is a goal state. Then we need to consider taking various
actions. We do this by expanding the current state; that is, applying each legal action to the
current state, thereby generating a new set of states. In this case, we add three branches from
the parent node In(Arad) leading to three new child nodes: In(Sibiu), In(Timisoara), and
In(Zerind). Now we must choose which of these three possibilities to consider further.


This is the essence of search—following up one option now and putting the others aside for
later, in case the first choice does not lead to a solution. Suppose we choose Sibiu first. We
check to see whether it is a goal state (it is not) and then expand it to get In(Arad),
In(Fagaras), In(Oradea), and In(RimnicuVilcea). We can then choose any of these four or go
back and choose Timisoara or Zerind. Each of these six nodes is a leaf node, that is, a node
with no children in the tree. The set of all leaf nodes available for expansion at any given
point is called the frontier. (Many authors call it the open list, which is both geographically
less evocative and less accurate, because other data structures are better suited than a list.) In
Figure 3.6, the frontier of each tree consists of those nodes with bold outlines.
The process of expanding nodes on the frontier continues until either a solution is found or
there are no more states to expand. The general TREE-SEARCH algorithm is shown
informally in Figure 3.7. Search algorithms all share this basic structure; they vary primarily
according to how they choose which state to expand next—the so-called search strategy.


The eagle-eyed reader will notice one peculiar thing about the search tree shown in Figure
3.6: it includes the path from Arad to Sibiu and back to Arad again! We say that In(Arad) is a
repeated state in the search tree, generated in this case by a loopy path. Considering such
loopy paths means that the complete search tree for Romania is infinite because there is no
limit to how often one can traverse a loop. Luckily, there is no need to consider loopy paths. We can rely on
more than intuition for this: because path costs are additive and step costs are nonnegative, a loopy
path to any given state is never better than the same path with the loop removed.

Loopy paths are a special case of the more general concept of redundant paths, which exist
whenever there is more than one way to get from one state to another. Consider the paths
Arad–Sibiu (140 km long) and Arad–Zerind–Oradea–Sibiu (297 km long). Obviously, the
second path is redundant—it’s just a worse way to get to the same state. If you are concerned
about reaching the goal, there’s never any reason to keep more than one path to any given
state, because any goal state that is reachable by extending one path is also reachable by
extending the other.
In some cases, it is possible to define the problem itself so as to eliminate redundant paths.
For example, if we formulate the 8-queens problem (page 71) so that a queen can be placed in
any column, then each state with n queens can be reached by n! different paths; but if we


reformulate the problem so that each new queen is placed in the leftmost empty column, then
each state can be reached only through one path.

In other cases, redundant paths are unavoidable. This includes all problems where the actions
are reversible, such as route-finding problems and sliding-block puzzles. Route-finding on a
rectangular grid (like the one used later for Figure 3.9) is a particularly important example
in computer games. In such a grid, each state has four successors, so a search tree of depth d
that includes repeated states has 4^d leaves; but there are only about 2d^2 distinct states within
d steps of any given state. For d = 20, this means about a trillion nodes but only about 800
distinct states. Thus, following redundant paths can cause a tractable problem to become
intractable. This is true even for algorithms that know how to avoid infinite loops.
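The arithmetic in the grid example can be checked directly:

```python
# A depth-20 tree with repeated states has 4^20 leaves, but only about
# 2*d^2 distinct states lie within d steps of any given cell.
d = 20
tree_leaves = 4 ** d            # 1099511627776, about a trillion
distinct_states = 2 * d * d     # 800
assert tree_leaves // distinct_states > 10**9
```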
As the saying goes, algorithms that forget their history are doomed to repeat it. The
way to avoid exploring redundant paths is to remember where one has been. To do this, we
augment the TREE-SEARCH algorithm with a data structure called the explored set (also
known as the closed list), which remembers every expanded node. Newly generated nodes


that match previously generated nodes—ones in the explored set or the frontier—can be
discarded instead of being added to the frontier. The new algorithm, called GRAPH-
SEARCH, is shown informally in Figure 3.7. The specific algorithms in this chapter draw on
this general design.
Clearly, the search tree constructed by the GRAPH-SEARCH algorithm contains at
most one copy of each state, so we can think of it as growing a tree directly on the state-space
graph. The algorithm has another nice property: the frontier separates the state-space graph
into the explored region and the unexplored region, so that every
path from the initial state to an unexplored state has to pass through a state in the frontier. (If
this seems completely obvious, try Exercise 3.13 now.) This property is illustrated in Figure
3.9. As every step moves a state from the frontier into the explored region while moving
some states from the unexplored region into the frontier, we see that the algorithm is
systematically examining the states in the state space, one by one, until it finds a solution.

3.3.1 Infrastructure for search algorithms


Search algorithms require a data structure to keep track of the search tree that is being
constructed. For each node n of the tree, we have a structure that contains four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to
the node, as indicated by the parent pointers.

Given the components for a parent node, it is easy to see how to compute the necessary
components for a child node. The function CHILD-NODE takes a parent node and an action
and returns the resulting child node:
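Rendered as a sketch in Python (attribute names follow the four components listed above; `child_node` assumes a problem object exposing `result` and `step_cost`):

```python
# The node bookkeeping structure, CHILD-NODE, and SOLUTION as small functions.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, traditionally g(n)

def child_node(problem, parent, action):
    """Build the child reached by applying `action` in `parent.state`."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action, state)
    return Node(state, parent, action, cost)

def solution(node):
    """Follow parent pointers back to the root, collecting the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```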


The node data structure is depicted in Figure 3.10. Notice how the PARENT pointers string
the nodes together into a tree structure. These pointers also allow the solution path to be
extracted when a goal node is found; we use the SOLUTION function to return the sequence
of actions obtained by following parent pointers back to the root. Up to now, we have not
been very careful to distinguish between nodes and states, but in writing detailed algorithms
it’s important to make that distinction. A node is a bookkeeping data structure used to
represent the search tree. A state corresponds to a configuration of the world. Thus, nodes are
on particular paths, as defined by PARENT pointers, whereas states are not. Furthermore,
two different nodes can contain the same world state if that state is generated via two
different search paths.
Now that we have nodes, we need somewhere to put them. The frontier needs to be stored in
such a way that the search algorithm can easily choose the next node to expand according to
its preferred strategy. The appropriate data structure for this is a queue. The operations on a
queue are as follows:

• EMPTY?(queue) returns true only if there are no more elements in the queue.
• POP(queue) removes the first element of the queue and returns it.
• INSERT(element, queue) inserts an element and returns the resulting queue.
Queues are characterized by the order in which they store the inserted nodes. Three common variants
are the first-in, first-out or FIFO queue, which pops the oldest element of the queue; the
last-in, first-out or LIFO queue (also known as a stack), which pops the newest element of the queue; and the
priority queue, which pops the element of the queue with the highest priority according to some
ordering function.

3.3.2 Measuring problem-solving performance


Before we get into the design of specific search algorithms, we need to consider the criteria
that might be used to choose among them. We can evaluate an algorithm’s performance in
four ways:


• Completeness: Is the algorithm guaranteed to find a solution when there is one?


• Optimality: Does the strategy find the optimal solution, as defined on page 68?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
Complexity analysis requires a measure of the problem difficulty. In theoretical computer science, the typical measure is the size of the
state space graph, |V | + |E|, where V is the set of vertices (nodes) of the graph and E is the set
of edges (links). This is appropriate when the graph is an explicit data structure that is input
to the search program. (The map of Romania is an example of this.) In AI, the graph is often
represented implicitly by the initial state, actions, and transition model and is frequently
infinite. For these reasons, complexity is expressed in terms of three quantities: b, the
branching factor or maximum number of successors of any node; d, the depth of the
shallowest goal node (i.e., the number of steps along the path from the root); and m, the
maximum length of any path in the state space. Time is often measured in terms of the
number of nodes generated during the search, and space in terms of the maximum number of
nodes stored in memory. For the most part, we describe time and space complexity for search
on a tree; for a graph, the answer depends on how “redundant” the paths in the state space
are.
To assess the effectiveness of a search algorithm, we can consider just the search cost—
which typically depends on the time complexity but can also include a term for memory
usage—or we can use the total cost, which combines the search cost and the path cost of the
solution found. For the problem of finding a route from Arad to Bucharest, the search cost is
the amount of time taken by the search and the solution cost is the total length of the path in
kilometres.

Thus, to compute the total cost, we have to add milliseconds and kilometres. There is
no “official exchange rate” between the two, but it might be reasonable in this case to convert
kilometres into milliseconds by using an estimate of the car’s average speed (because time is
what the agent cares about). This enables the agent to find an optimal tradeoff point at which
further computation to find a shorter path becomes counterproductive.
3.4 UNINFORMED SEARCH STRATEGIES (also called blind search)
The term means that the strategies have no additional information about states beyond that
provided in the problem definition. All they can do is generate successors and distinguish a
goal state from a non-goal state. All search strategies are distinguished by the order in which
nodes are expanded.


Strategies that know whether one non-goal state is “more promising” than another are called
informed search or heuristic search strategies.

3.4.1 Breadth-first search


Breadth-first search is a simple strategy in which the root node is expanded first, then all
the successors of the root node are expanded next, then their successors, and so on. In
general, all the nodes are expanded at a given depth in the search tree before any nodes at the
next level are expanded.
Breadth-first search is an instance of the general graph-search algorithm (Figure 3.7)
in which the shallowest unexpanded node is chosen for expansion. This is achieved very
simply by using a FIFO queue for the frontier. Thus, new nodes (which are always deeper
than their parents) go to the back of the queue, and old nodes, which are shallower than the
new nodes, get expanded first. There is one slight tweak on the general graph-search
algorithm, which is that the goal test is applied to each node when it is generated rather than
when it is selected for expansion. This decision is explained below, where we discuss time
complexity. Note also that the algorithm, following the general template for graph search,
discards any new path to a state already in the frontier or explored set; it is easy to see that
any such path must be at least as deep as the one already found. Thus, breadth-first search
always has the shallowest path to every node on the frontier.

Pseudocode is given in Figure 3.11. Figure 3.12 shows the progress of the search on a simple
binary tree.
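The scheme above can be sketched in a few lines of Python. This is an illustrative sketch rather than the book's Figure 3.11 pseudocode; the `successors` convention (an iterable of (action, state) pairs) and the demo graph are assumptions made here.

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Breadth-first graph search with the early goal test described above.

    `successors(state)` is assumed to return (action, next_state) pairs;
    returns the list of actions reaching a goal, or None if there is none.
    """
    if goal_test(initial):
        return []                          # the empty action sequence suffices
    frontier = deque([(initial, [])])      # FIFO queue of (state, path) pairs
    explored = {initial}                   # every state ever generated
    while frontier:
        state, path = frontier.popleft()   # shallowest node first
        for action, child in successors(state):
            if child not in explored:      # discard redundant paths
                if goal_test(child):       # goal test on generation, not expansion
                    return path + [action]
                explored.add(child)
                frontier.append((child, path + [action]))
    return None

# a tiny demo graph (illustrative, not from the text)
demo = {'A': [('A->B', 'B'), ('A->C', 'C')],
        'B': [('B->D', 'D')], 'C': [('C->D', 'D')], 'D': []}
```

Because nodes leave the FIFO queue in generation order, the first goal generated is a shallowest one.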

We can easily see that it is complete—if the shallowest goal node is at some finite depth d,
breadth-first search will eventually find it after generating all shallower nodes (provided the
branching factor b is finite). Note that as soon as a goal node is generated, we know it is the
shallowest goal node because all shallower nodes must have been generated already and
failed the goal test. Note, however, that the shallowest goal node is not necessarily the
optimal one; technically, breadth-first search is optimal if the path cost is a nondecreasing
function of the depth of the node. The most common such scenario is that all actions have the same cost.
So far, the news about breadth-first search has been good. The news about time and
space is not so good. Imagine searching a uniform tree where every state has b successors.
The root of the search tree generates b nodes at the first level, each of which generates b more
nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3
nodes at the third level, and so on. Now suppose that the solution is at depth d. In the worst
case, it is the last node generated at that level. Then the total number of nodes generated is
b + b^2 + b^3 + ··· + b^d = O(b^d) .
(If the algorithm were to apply the goal test to nodes when selected for expansion, rather than
when generated, the whole layer of nodes at depth d would be expanded before the goal was
detected and the time complexity would be O(b^(d+1)).)
As for space complexity: for any kind of graph search, which stores every expanded
node in the explored set, the space complexity is always within a factor of b of the time
complexity. For breadth-first graph search in particular, every node generated remains in
memory. There will be O(b^(d−1)) nodes in the explored set and O(b^d) nodes in the frontier,
so the space complexity is O(b^d), i.e., it is dominated by the size of the frontier. Switching
to a tree search would not save much space, and in a state space with many redundant paths,
switching could cost a great deal of time.
An exponential complexity bound such as O(b^d) is scary. Figure 3.13 shows why. It
lists, for various values of the solution depth d, the time and memory required for a breadth-
first search with branching factor b = 10. The table assumes that 1 million nodes can be
generated per second and that a node requires 1000 bytes of storage. Many search problems
fit roughly within these assumptions (give or take a factor of 100) when run on a modern
personal computer.
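A quick back-of-envelope calculation under the same assumptions (b = 10, one million nodes generated per second, 1000 bytes per node) reproduces the flavor of Figure 3.13; the helper below is illustrative, not from the text.

```python
def bfs_cost(d, b=10, nodes_per_sec=10**6, bytes_per_node=1000):
    """Time and memory for breadth-first search to depth d under the
    Figure 3.13 assumptions (all values are rough orders of magnitude)."""
    nodes = sum(b**i for i in range(1, d + 1))     # b + b^2 + ... + b^d
    seconds = nodes / nodes_per_sec
    terabytes = nodes * bytes_per_node / 10**12
    return nodes, seconds, terabytes

# depth 16 needs on the order of 10^16 nodes: centuries of search time
nodes, seconds, terabytes = bfs_cost(16)
years = seconds / (3600 * 24 * 365)
```

At depth 16 this comes out to roughly 350 years of search time, matching the estimate quoted below.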

Two lessons can be learned from Figure 3.13.


 First, the memory requirements are a bigger problem for breadth-first search than is
the execution time. Fortunately, other strategies require less memory.
 The second lesson is that time is still a major factor. If your problem has a solution at
depth 16, then (given our assumptions) it will take about 350 years for breadth-first search (or

indeed any uninformed search) to find it. In general, exponential-complexity search problems
cannot be solved by uninformed methods for any but the smallest instances.

3.4.2 Uniform-cost search


When all step costs are equal, breadth-first search is optimal because it always expands the
shallowest unexpanded node. By a simple extension, we can find an algorithm that is optimal
with any step-cost function. Instead of expanding the shallowest node, uniform-cost search
expands the node n with the lowest path cost g(n). This is done by storing the frontier as a
priority queue ordered by g. The algorithm is shown in Figure 3.14.
In addition to the ordering of the queue by path cost, there are two other significant
differences from breadth-first search.
 The first is that the goal test is applied to a node when it is selected for expansion (as
in the generic graph-search algorithm shown in Figure 3.7) rather than when it is first
generated. The reason is that the first goal node that is generated may be on a
suboptimal path.
 The second difference is that a test is added in case a better path is found to a node
currently on the frontier.
Both of these modifications come into play in the example shown in Figure 3.15, where the
problem is to get from Sibiu to Bucharest. The successors of Sibiu are Rimnicu Vilcea and
Fagaras, with costs 80 and 99, respectively. The least-cost node, Rimnicu Vilcea, is expanded
next, adding Pitesti with cost 80 + 97 = 177. The least-cost node is now Fagaras, so it is
expanded, adding Bucharest with cost 99 + 211 = 310. Now a goal node has been generated,
but uniform-cost search keeps going, choosing Pitesti for expansion and adding a second
path to Bucharest with cost 80 + 97 + 101 = 278. Now the algorithm checks to see if this new
path is better than the old one; it is, so the old one is discarded. Bucharest, now with g-cost
278, is selected for expansion and the solution is returned.

It is easy to see that uniform-cost search is optimal in general. First, we observe that
whenever uniform-cost search selects a node n for expansion, the optimal path to that node
has been found.
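Both modifications appear in the sketch below. It uses Python's heapq as the priority queue, with lazy deletion of stale entries as one common way (not the only one) to implement the frontier-replacement test; the `successors` convention is an assumption made here, and the demo graph is the Figure 3.15 fragment.

```python
import heapq

def uniform_cost_search(initial, goal_test, successors):
    """Uniform-cost search: the frontier is a priority queue ordered by g(n).

    `successors(state)` is assumed to yield (action, next_state, step_cost)
    triples. The goal test is applied when a node is selected for expansion,
    and a cheaper path to a frontier state supersedes the old entry.
    """
    frontier = [(0, initial, [])]            # heap of (g, state, path-of-actions)
    best_g = {initial: 0}                    # cheapest known g for each state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float('inf')):
            continue                         # stale entry: a cheaper path exists
        if goal_test(state):                 # goal test on expansion, not generation
            return g, path
        for action, child, step_cost in successors(state):
            new_g = g + step_cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g        # record the better path
                heapq.heappush(frontier, (new_g, child, path + [action]))
    return None

# the Sibiu-to-Bucharest fragment of Figure 3.15 (step costs from the text)
romania = {
    'Sibiu': [('to Rimnicu', 'Rimnicu Vilcea', 80), ('to Fagaras', 'Fagaras', 99)],
    'Rimnicu Vilcea': [('to Pitesti', 'Pitesti', 97)],
    'Fagaras': [('to Bucharest', 'Bucharest', 211)],
    'Pitesti': [('to Bucharest', 'Bucharest', 101)],
    'Bucharest': [],
}
```

Running it on the fragment returns cost 278 via Rimnicu Vilcea and Pitesti, matching the trace above.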

Uniform-cost search does not care about the number of steps a path has, but only about their
total cost. Therefore, it will get stuck in an infinite loop if there is a path with an infinite
sequence of zero-cost actions—for example, a sequence of NoOp actions. Completeness is
guaranteed provided the cost of every step exceeds some small positive constant ε.
Uniform-cost search is guided by path costs rather than depths, so its complexity is
not easily characterized in terms of b and d. Instead, let C* be the cost of the optimal
solution, and assume that every action costs at least ε. Then the algorithm's worst-case time
and space complexity is O(b^(1+⌊C*/ε⌋)), which can be much greater than b^d. This is because
uniform-cost search can explore large trees of small steps before exploring paths involving
large and perhaps useful steps. When all step costs are equal, b^(1+⌊C*/ε⌋) is just b^(d+1). When all

step costs are the same, uniform-cost search is similar to breadth-first search, except that the
latter stops as soon as it generates a goal, whereas uniform-cost search examines all the nodes
at the goal’s depth to see if one has a lower cost; thus uniform-cost search does strictly more
work by expanding nodes at depth d unnecessarily.
3.4.3 Depth-first search
Depth-first search always expands the deepest node in the current frontier of the search tree.
The progress of the search is illustrated in Figure 3.16. The search proceeds immediately to
the deepest level of the search tree, where the nodes have no successors. As those nodes are
expanded, they are dropped from the frontier, so then the search “backs up” to the next
deepest node that still has unexplored successors.
The depth-first search algorithm is an instance of the graph-search algorithm in Figure
3.7; whereas breadth-first-search uses a FIFO queue, depth-first search uses a LIFO queue. A
LIFO queue means that the most recently generated node is chosen for expansion. This must
be the deepest unexpanded node because it is one deeper than its parent—which, in turn, was
the deepest unexpanded node when it was selected.

As an alternative to the GRAPH-SEARCH-style implementation, it is common to
implement depth-first search with a recursive function that calls itself on each of its children
in turn. (A recursive depth-first algorithm incorporating a depth limit is shown in Figure
3.17.)

The properties of depth-first search depend strongly on whether the graph-search or tree-search
version is used. The graph-search version, which avoids repeated states and redundant
paths, is complete in finite state spaces because it will eventually expand every node.
Depth-first tree search can be modified at no extra memory cost so that it checks new
states against those on the path from the root to the current node; this avoids infinite loops in
finite state spaces but does not avoid the proliferation of redundant paths. In infinite state
spaces, both versions fail if an infinite non-goal path is encountered. For example, in Knuth’s
4 problem, depth-first search would keep applying the factorial operator forever.
The time complexity of depth-first graph search is bounded by the size of the state
space (which may be infinite, of course). A depth-first tree search, on the other hand, may
generate all of the O(b^m) nodes in the search tree, where m is the maximum depth of any
node; this can be much greater than the size of the state space. Note that m itself can be much
larger than d (the depth of the shallowest solution) and is infinite if the tree is unbounded.
A variant of depth-first search called backtracking search uses still less memory. In
backtracking, only one successor is generated at a time rather than all successors; each
partially expanded node remembers which successor to generate next. In this way, only O(m)
memory is needed rather than O(bm).
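The path-checking depth-first tree search described above can be sketched as a short recursion. The function name, the `successors` convention (an iterable of child states), and the demo graph are illustrative assumptions, not the book's code.

```python
def dfs_path_check(state, goal_test, successors, path=None):
    """Depth-first tree search that checks each new state against the states
    on the path back to the root, avoiding loops at no extra memory cost."""
    path = [state] if path is None else path
    if goal_test(state):
        return path
    for child in successors(state):
        if child not in path:              # only the current path is remembered
            result = dfs_path_check(child, goal_test, successors, path + [child])
            if result is not None:
                return result
    return None                            # every branch failed

# a graph with a cycle A <-> B; plain tree search would loop forever on it
cyc = {'A': ['B', 'A'], 'B': ['C', 'A'], 'C': []}
```

Only the states on the current path are stored, so this keeps the O(m)-ish memory profile of depth-first search while ruling out loops (though not redundant paths).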

3.4.4 Depth-limited search


The embarrassing failure of depth-first search in infinite state spaces can be alleviated by
supplying depth-first search with a predetermined depth limit ℓ. That is, nodes at depth ℓ are
treated as if they have no successors. This approach is called depth-limited search. The
depth limit solves the infinite-path problem. Unfortunately, it also introduces an additional
source of incompleteness if we choose ℓ < d, that is, the shallowest goal is beyond the depth
limit. (This is likely when d is unknown.) Depth-limited search will also be nonoptimal if we
choose ℓ > d. Its time complexity is O(b^ℓ) and its space complexity is
O(bℓ). Depth-first search can be viewed as a special case of
depth-limited search with ℓ = ∞.
Sometimes, depth limits can be based on knowledge of the problem. For example, on
the map of Romania there are 20 cities. Therefore, we know that if there is a solution, it must
be of length 19 at the longest, so = 19 is a possible choice. But in fact if we studied the map
carefully, we would discover that any city can be reached from any other city in at most 9
steps. This number, known as the diameter of the state space, gives us a better depth limit,
which leads to a more efficient depth-limited search. For most problems, however, we will
not know a good depth limit until we have solved the problem.

Depth-limited search can be implemented as a simple modification to the general tree-
or graph-search algorithm. Alternatively, it can be implemented as a simple recursive
algorithm as shown in Figure 3.17. Notice that depth-limited search can terminate with two
kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no
solution within the depth limit.
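A sketch along the lines of Figure 3.17, distinguishing the two failure values; the Python rendering, the `successors` convention, and the demo tree are assumptions made here.

```python
CUTOFF, FAILURE = 'cutoff', 'failure'      # the two failure values described above

def depth_limited_search(state, goal_test, successors, limit):
    """Recursive depth-limited search. Returns a list of states to a goal,
    CUTOFF if the depth limit was reached somewhere, or FAILURE otherwise."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return CUTOFF                      # treat this node as having no successors
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(child, goal_test, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return [state] + result        # a solution was found below
    return CUTOFF if cutoff_occurred else FAILURE

# illustrative tree: A has children B and C, B has child D
tree = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
```

With limit 1 the goal D lies beyond the limit and CUTOFF comes back; with limit 2 the path ['A', 'B', 'D'] is found.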
3.4.5 Iterative deepening depth-first search
Iterative deepening search (or iterative deepening depth-first search) is a general strategy,
often used in combination with depth-first tree search, that finds the best depth limit. It does
this by gradually increasing the limit—first 0, then 1, then 2, and so on—until a goal is found.
This will occur when the depth limit reaches d, the depth of the shallowest goal node. The
algorithm is shown in Figure 3.18. Iterative deepening combines the benefits of depth-first
and breadth-first search. Like depth-first search, its memory requirements are modest: O(bd)
to be precise. Like breadth-first search, it is complete when the branching factor is finite and
optimal when the path cost is a nondecreasing function of the depth of the node. Figure 3.19
shows four iterations of ITERATIVE-DEEPENING-SEARCH on a binary search tree, where
the solution is found on the fourth iteration. Iterative deepening search may seem wasteful

because states are generated multiple times. It turns out this is not too costly. The reason is
that in a search tree with the same (or nearly the same) branching factor at each level, most of
the nodes are in the bottom level, so it does not matter much that the upper levels are
generated multiple times. In an iterative deepening search, the nodes on the bottom level
(depth d) are generated once, those on the next-to-bottom level are generated twice, and so
on, up to the children of the root, which are generated d times. So the total number of nodes
generated in the worst case is

N(IDS) = (d)b + (d − 1)b^2 + ··· + (1)b^d ,

which gives a time complexity of O(b^d)—asymptotically the same as breadth-first search.
There is some extra cost for generating the upper levels multiple times, but it is not large. For
example, if b = 10 and d = 5, the numbers are
N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110 .
If you are really concerned about repeating the repetition, you can use a hybrid approach that
runs breadth-first search until almost all the available memory is consumed, and then runs
iterative deepening from all the nodes in the frontier. In general, iterative deepening is the
preferred uninformed search method when the search space is large and the depth of the
solution is not known.
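The limit-increasing loop of Figure 3.18 can be sketched as follows; this is a hypothetical Python rendering, with a nested depth-limited helper so the sketch is self-contained.

```python
def iterative_deepening_search(initial, goal_test, successors):
    """Iterative deepening: depth-limited search with limits 0, 1, 2, ...
    until something other than a cutoff comes back."""
    def dls(state, limit):
        if goal_test(state):
            return [state]
        if limit == 0:
            return 'cutoff'
        cutoff = False
        for child in successors(state):
            result = dls(child, limit - 1)
            if result == 'cutoff':
                cutoff = True
            elif result is not None:
                return [state] + result
        return 'cutoff' if cutoff else None    # None plays the role of failure
    limit = 0
    while True:
        result = dls(initial, limit)
        if result != 'cutoff':
            return result                      # a solution, or None if none exists
        limit += 1

# illustrative tree with the goal D at depth 2
tree2 = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

Because the limit grows one level at a time, the first solution returned is at the shallowest goal depth, just as with breadth-first search.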
Iterative deepening search is analogous to breadth-first search in that it explores a
complete layer of new nodes at each iteration before going on to the next layer. It would seem
worthwhile to develop an iterative analog to uniform-cost search, inheriting the latter
algorithm’s optimality guarantees while avoiding its memory requirements. The idea is to use
increasing path-cost limits instead of increasing depth limits. The resulting algorithm, called
iterative lengthening search, is explored in Exercise 3.17. It turns out, unfortunately, that
iterative lengthening incurs substantial overhead compared to uniform-cost search.

3.4.6 Bidirectional search


The idea behind bidirectional search is to run two simultaneous searches—one forward from
the initial state and the other backward from the goal—hoping that the two searches meet in

the middle (Figure 3.20). The motivation is that b^(d/2) + b^(d/2) is much less than b^d, or in the
figure, the area of the two small circles is less than the area of one big circle centered on the
start and reaching to the goal.
Bidirectional search is implemented by replacing the goal test with a check to see
whether the frontiers of the two searches intersect; if they do, a solution has been found. (It is
important to realize that the first such solution found may not be optimal, even if the two
searches are both breadth-first; some additional search is required to make sure there isn’t
another short-cut across the gap.) The check can be done when each node is generated or
selected for expansion and, with a hash table, will take constant time. For example, if a
problem has solution depth d = 6, and each direction runs breadth-first search one node at a
time, then in the worst case the two searches meet when they have generated all of the nodes
at depth 3. For b = 10, this means a total of 2,220 node generations, compared with 1,111,110
for a standard breadth-first search. Thus, the time complexity of bidirectional search using
breadth-first searches in both directions is O(b^(d/2)). The space complexity is also O(b^(d/2)). We
can reduce this by roughly half if one of the two searches is done by iterative deepening, but
at least one of the frontiers must be kept in memory so that the intersection check can be
done. This space requirement is the most significant weakness of bidirectional search.

Let the predecessors of a state x be all those states that have x as a successor. Bidirectional
search requires a method for computing predecessors. When all the actions in the state space
are reversible, the predecessors of x are just its successors. Other cases may require
substantial ingenuity.
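A sketch of bidirectional breadth-first search with the intersection check done via hash tables, as described above. It assumes reversible actions, so the same `neighbors` function serves both directions, and it returns the length of the first path found at the intersection (which, as noted, is not guaranteed optimal in general); the function name and demo graph are illustrative.

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbors):
    """Two breadth-first searches meeting in the middle.

    Assumes every action is reversible, so `neighbors` also enumerates
    predecessors. Returns the length of the first path found when the
    frontiers intersect."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}       # depths reached from each side
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # expand one full layer on each side in turn
        for frontier, dist, other in ((frontier_f, dist_f, dist_b),
                                      (frontier_b, dist_b, dist_f)):
            for _ in range(len(frontier)):
                state = frontier.popleft()
                for child in neighbors(state):
                    if child in other:           # the frontiers intersect
                        return dist[state] + 1 + other[child]
                    if child not in dist:
                        dist[child] = dist[state] + 1
                        frontier.append(child)
    return None                                  # one side exhausted: no path

# a small undirected line graph A-B-C-D-E (illustrative)
line = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'],
        'D': ['C', 'E'], 'E': ['D']}
```

The dictionary membership tests play the role of the constant-time hash-table check mentioned above.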
Consider the question of what we mean by “the goal” in searching “backward from
the goal.” For the 8-puzzle and for finding a route in Romania, there is just one goal state, so
the backward search is very much like the forward search. If there are several explicitly listed
goal states—for example, the two dirt-free goal states in Figure 3.3—then we can construct a

new dummy goal state whose immediate predecessors are all the actual goal states. But if the
goal is an abstract description, such as the goal that “no queen attacks another queen” in the
n-queens problem, then bidirectional search is difficult to use.
3.4.7 Comparing uninformed search strategies
Figure 3.21 compares search strategies in terms of the four evaluation criteria. This
comparison is for tree-search versions. For graph searches, the main differences are that
depth-first search is complete for finite state spaces and that the space and time complexities
are bounded by the size of the state space.

SUMMARY

This chapter has introduced methods that an agent can use to select actions in environments
that are deterministic, observable, static, and completely known. In such cases, the agent can
construct sequences of actions that achieve its goals; this process is called search.
• Before an agent can start searching for solutions, a goal must be identified and a
well-defined problem must be formulated.
• A problem consists of five parts: the initial state, a set of actions, a transition model
describing the results of those actions, a goal test function, and a path cost function. The
environment of the problem is represented by a state space. A path through the state space
from the initial state to a goal state is a solution.
• Search algorithms treat states and actions as atomic: they do not consider any internal
structure they might possess.
• A general TREE-SEARCH algorithm considers all possible paths to find a solution,
whereas a GRAPH-SEARCH algorithm avoids consideration of redundant paths.

• Search algorithms are judged on the basis of completeness, optimality, time complexity,
and space complexity. Complexity depends on b, the branching factor in the state space, and
d, the depth of the shallowest solution.
• Uninformed search methods have access only to the problem definition. The basic
algorithms are as follows:
– Breadth-first search expands the shallowest nodes first; it is complete, optimal for unit
step costs, but has exponential space complexity.
– Uniform-cost search expands the node with lowest path cost, g(n), and is optimal for
general step costs.
– Depth-first search expands the deepest unexpanded node first. It is neither complete nor
optimal, but has linear space complexity. Depth-limited search adds a depth bound.
– Iterative deepening search calls depth-first search with increasing depth limits until a goal
is found. It is complete, optimal for unit step costs, has time complexity comparable to
breadth-first search, and has linear space complexity.
– Bidirectional search can enormously reduce time complexity, but it is not always
applicable and may require too much space.

EXERCISES
1. Explain why problem formulation must follow goal formulation.
Sol: Problem formulation is the process of clearly defining the issue or challenge that needs
to be addressed. It involves identifying the specific aspects of the situation that are causing
difficulty or concern. Goal formulation, on the other hand, is the process of establishing what
you want to achieve or accomplish.
It's essential for problem formulation to follow goal formulation because:
 Clarity of Purpose: Establishing clear goals provides a direction for problem
formulation. When you know what you want to achieve, it becomes easier to identify
the obstacles or problems that stand in the way of reaching those goals.
 Relevance: Problem formulation should be directly related to the goals you've set. By
formulating goals first, you ensure that the problems identified are relevant to the
objectives you're trying to accomplish. This prevents wasting time and resources on
solving issues that don't contribute to your desired outcomes.

 Focus: Goals provide a focus for problem-solving efforts. They help you prioritize
which problems are most important to address, allowing you to allocate resources
effectively and avoid getting sidetracked by less significant issues.
 Evaluation: Having clearly defined goals allows you to evaluate the effectiveness of
your problem formulation. By comparing the solutions generated against the desired
outcomes, you can assess whether the identified problems were accurately understood
and addressed.
 Alignment: When problem formulation follows goal formulation, it ensures alignment
between the problems being tackled and the overarching objectives of the endeavor.
This alignment increases the likelihood of success because all efforts are directed
towards fulfilling the intended purpose.

2. Your goal is to navigate a robot out of a maze. The robot starts in the centre of the maze
facing north. You can turn the robot to face north, east, south, or west. You can direct
the robot to move forward a certain distance, although it will stop before hitting a wall.
a. Formulate this problem. How large is the state space?
b. In navigating a maze, the only place we need to turn is at the intersection of two or
more corridors. Reformulate this problem using this observation. How large is the state
space now?
c. From each point in the maze, we can move in any of the four directions until we reach
a turning point, and this is the only action we need to do. Reformulate the problem
using these actions. Do we need to keep track of the robot’s orientation now?
d. In our initial description of the problem we already abstracted from the real world,
restricting actions and removing details. List three such simplifications we made.
Sol: a. Original Problem Formulation:
- State Space: In the original problem, the state space consists of the position of the robot
(x, y coordinates) and its orientation (north, east, south, or west). Because position and
movement distance are continuous, this state space is infinite.
b. Reformulated Problem with Intersection Observation:
- In this reformulation, we only need to consider turning at intersections.
- State Space: The state space now consists of the position of the robot (x, y coordinates)
and the direction it needs to turn at intersections. Since there are four directions (north, east,
south, west) and three possible actions (turn left, go straight, turn right) at each intersection,
the state space is reduced to the position of the robot and one of three actions to take.
c. Reformulated Problem with Simplified Actions:

- In this reformulation, we only consider moving until we reach a turning point.


- State Space: The state space consists solely of the position of the robot (x, y coordinates).
We no longer need to track the robot's orientation since it only moves until it reaches a
turning point, where it automatically adjusts its orientation.
We don't need to keep track of the robot's orientation now because at each point, it can only
move forward until it reaches a turning point. Upon reaching a turning point, the robot can
then choose a new direction without needing to remember its previous orientation.
d. Simplifications Made in the Problem:
1. Abstracted Movement: We simplified movement to just forward, abstracting away details
like the robot's velocity, acceleration, or precise steps taken.
2. Restricted Actions: We restricted the actions available to the robot to turning and moving
forward, omitting more complex actions or decisions like backtracking or side-stepping.
3. Removed Details: We abstracted away details of the maze itself, focusing solely on the
robot's navigation within it, omitting features like different types of corridors, obstacles, or
landmarks within the maze.
3. Show that the 8-puzzle states are divided into two disjoint sets, such that any state is
reachable from any other state in the same set, while no state is reachable from any
state in the other set. (Hint: See Berlekamp et al. (1982).) Devise a procedure to decide
which set a given state is in, and explain why this is useful for generating random states.

Solu: The 8-puzzle states can be divided into two disjoint sets based on their parity, namely
the "even" and "odd" states. This division is based on the number of inversions in the puzzle,
where an inversion is a pair of tiles in which the higher-numbered tile precedes the
lower-numbered one in row-major reading order (ignoring the blank). For instance, in the goal state:
123
456
780
If we represent the puzzle as a 1D array, an inversion happens if a tile with a higher value
appears before a tile with a lower value.
To demonstrate:
- In the puzzle state `[1, 2, 3, 4, 5, 6, 7, 0, 8]`, there are no inversions.
- In the puzzle state `[1, 2, 3, 4, 5, 6, 0, 8, 7]`, there is one inversion (8 precedes 7).
- In the puzzle state `[1, 2, 3, 4, 5, 0, 7, 8, 6]`, there are two inversions (7 precedes 6 and
8 precedes 6).

The property of the 8-puzzle is such that only states with an even number of inversions are
solvable. Thus, we can divide the states into "even" and "odd" sets based on this property.
The goal state has zero inversions, making it an "even" state.
To decide which set a given state is in:
1. Count the number of inversions in the puzzle state.
2. If the number of inversions is even, the state belongs to the "even" set; otherwise, it
belongs to the "odd" set.
This method is useful for generating random states because:
1. It ensures that any random state generated is solvable. Since only states in the same parity
set are reachable from one another, generating a random state within a particular set
guarantees its solvability within that set.
2. It simplifies the process of generating random states by limiting the search space. Instead
of searching randomly through all possible states, we can focus on generating states within a
specific parity set, reducing the computational complexity of the task.
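The parity procedure above is easy to implement; a minimal sketch (the function name is a choice made here):

```python
def solvable(state):
    """Decide which parity set a 3x3 8-puzzle state belongs to.

    `state` is a length-9 sequence in row-major order with 0 for the blank.
    For a board of odd width, a state is reachable from the standard goal
    iff its number of inversions is even."""
    tiles = [t for t in state if t != 0]            # the blank is ignored
    inversions = sum(1
                     for i in range(len(tiles))
                     for j in range(i + 1, len(tiles))
                     if tiles[i] > tiles[j])
    return inversions % 2 == 0
```

To generate a random solvable state, shuffle the nine values and re-shuffle until `solvable` returns True; every state it accepts is then reachable from the goal.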
4. Consider the n-queens problem using the “efficient” incremental formulation. Explain
why the state space has at least ∛n! states and estimate the largest n for which
exhaustive exploration is feasible. (Hint: Derive a lower bound on the branching factor
by considering the maximum number of squares that a queen can attack in any
column.)

Solu: The n-queens problem is a classic puzzle where you have to place n queens on an n×n
chessboard such that no two queens attack each other. In the "efficient" incremental
formulation, you place one queen in each column, ensuring that no two queens share the
same row or diagonal.
To understand why the state space has at least ∛n! states, let's break it down:
1. Branching Factor: In the incremental formulation, queens are placed one per column, and
we must pick a legal row for each new queen. A queen already on the board can attack at
most 3 squares in any other column: one on its own row and one on each of its two diagonals.
So after k queens have been placed, at least n − 3k rows are still legal in the next column.
2. State Space Size: Multiplying the number of choices column by column, the number of
reachable states is at least
n(n − 3)(n − 6) ···
3. Lower Bound: Group the factors of n! into consecutive triples:
n! = [n(n − 1)(n − 2)] · [(n − 3)(n − 4)(n − 5)] · ··· ≤ [n(n − 3)(n − 6) ···]^3 ,
since each triple is at most the cube of its largest factor, and those largest factors are exactly
the terms of the product above. Hence the number of states is at least (n!)^(1/3) = ∛n!.
4. Estimation of Largest n for Feasible Exploration: Exhaustive exploration is feasible only
while ∛n! is a manageable number of states, and n! grows faster than any exponential.

So, the largest n for which exhaustive exploration is feasible depends on the manageable limit
of states.
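As a rough gauge, the cube root of n! can be computed directly; the time budget used in the comments (about a million states per second, so roughly 10^11 states per day) is an assumption for illustration.

```python
import math

def cube_root_factorial(n):
    """Lower bound (the cube root of n!) on the number of states in the
    incremental n-queens formulation."""
    return math.factorial(n) ** (1 / 3)

# hypothetical budget: ~10^6 states/second for a day is ~10^11 states
feasible_25 = cube_root_factorial(25)      # about 2.5e8: within budget
hopeless_50 = cube_root_factorial(50)      # about 3e21: far beyond it
```

Under that assumed budget, exhaustive exploration stops being feasible somewhere around the mid-20s for n.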
5. Give a complete problem formulation for each of the following. Choose a formulation
that is precise enough to be implemented.

a. Using only four colors, you have to color a planar map in such a way that no two
adjacent regions have the same color.
b. A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot
ceiling. He would like to get the bananas. The room contains two stackable, movable,
climbable 3-foot-high crates.

c. You have a program that outputs the message “illegal input record” when fed a
certain file of input records. You know that processing of each record is independent of
the other records. You want to discover what record is illegal.
d. You have three jugs, measuring 12 gallons, 8 gallons, and 3 gallons, and a water
faucet. You can fill the jugs up or empty them out from one to another or onto the
ground. You need to measure out exactly one gallon.
Solu:
a. Problem Formulation for Coloring a Planar Map:
- Initial State:
- Planar map with regions and adjacency information.
- All regions uncolored.
- Actions:
- Choose a region.

- Assign one of the four colors to the chosen region, ensuring it doesn't conflict with
neighboring regions.
- Transition Model:
- Assigning a color to a region.
- Update the state by coloring the chosen region and checking if the coloring violates
adjacency constraints.
- Goal State:
- All regions are colored, and no two adjacent regions have the same color.
- Cost Function:
- Minimize the number of color changes or transitions between states.
- Constraints:
- The chosen colors must not violate the adjacency rule.
- The map must be planar.
b. Problem Formulation for the Monkey and Bananas Problem:
- Initial State:
- Monkey's position, banana's position, crate's position.
- Monkey is not holding anything.
- Actions:
- Move left.
- Move right.
- Climb up on crate.
- Climb down from crate.
- Grab banana (if reachable).
- Transition Model:
- Changing the monkey's position or state.
- Moving the crate's position.
- Updating the state based on the action taken.
- Goal State:
- Monkey successfully grabs the banana.
- Cost Function:
- Minimize the number of actions taken.
- Constraints:
- Monkey cannot reach bananas directly without climbing on crates.

- Crates are stackable and movable.


- Monkey cannot climb beyond the height of the crates or the ceiling.
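A toy version of this formulation can be solved by breadth-first search. The sketch below makes several simplifying assumptions of my own: three abstract locations A/B/C, the banana hanging at B, each crate adding one unit of height, both crates needed to reach the banana, and no climb-down action (BFS still finds the plan without it).

```python
from collections import deque

LOCS = ("A", "B", "C")
BANANA = "B"

def successors(state):
    """Yield (action, next_state) pairs; state is
    (monkey_loc, crate1_loc, crate2_loc, height)."""
    monkey, c1, c2, height = state
    if height == 0:                            # can only walk on the ground
        for loc in LOCS:
            if loc != monkey:
                yield f"walk to {loc}", (loc, c1, c2, 0)
                if c1 == monkey:               # push a crate while walking
                    yield f"push crate1 to {loc}", (loc, loc, c2, 0)
                if c2 == monkey:
                    yield f"push crate2 to {loc}", (loc, c1, loc, 0)
    crates_here = (c1 == monkey) + (c2 == monkey)
    if height < crates_here:                   # climb one crate higher
        yield "climb up", (monkey, c1, c2, height + 1)

def plan():
    """BFS for the shortest action sequence that grabs the banana."""
    start = ("A", "A", "C", 0)                 # monkey at A, crates at A and C
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        monkey, c1, c2, height = state
        if monkey == BANANA and height == 2:   # high enough to grab
            return path + ["grab banana"]
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(plan())
```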
c. Problem Formulation for Finding the Illegal Input Record:
- Initial State:
- The input file with multiple records.
- The program outputting "illegal input record" for the entire file.
- Actions:
- Select a record from the input file.
- Feed the selected record to the program.
- Transition Model:
- Running the program on a selected record.
- Observing the output of the program.
- Goal State:
- Identify the record that causes the program to output "illegal input record".
- Cost Function:
- Minimize the number of attempts or records tested.
- Constraints:
- Each record processing is independent of others.
- The goal is to isolate the specific record causing the error.
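Because each record is processed independently, the cost function can be driven well below one run per record: feed the program half of the file at a time and bisect, which isolates the bad record in O(log n) runs. In this sketch, `program_rejects` stands in for running the real program on a subset of records; its name and behaviour are assumptions.

```python
def find_illegal_record(records, program_rejects):
    """Return the index of the record that makes the program fail,
    using O(log n) runs instead of testing records one by one.
    Assumes exactly one record is illegal and that the program
    rejects any subset containing it (records are independent)."""
    lo, hi = 0, len(records)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if program_rejects(records[lo:mid]):  # error is in the first half
            hi = mid
        else:                                 # otherwise it is in the second
            lo = mid
    return lo

# Usage with a simulated program that rejects the record "BAD".
records = ["r0", "r1", "BAD", "r3", "r4"]
rejects = lambda subset: "BAD" in subset
print(find_illegal_record(records, rejects))  # → 2
```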
d. Problem Formulation for Measuring Exactly One Gallon:
- Initial State:
- Three jugs with capacities of 12 gallons, 8 gallons, and 3 gallons, respectively.
- All jugs are initially empty (the usual assumption for a precise formulation).
- Actions:
- Fill a jug from the faucet.
- Pour water from one jug to another.
- Empty a jug onto the ground.
- Transition Model:
- Changes in the water levels in the jugs due to filling, pouring, or emptying.
- Updates in the state based on the actions taken.
- Goal State:
- Exactly one gallon of water is measured in any of the jugs.
- Cost Function:

- Minimize the number of actions taken to measure one gallon.


- Constraints:
- Only operations allowed are filling, pouring, and emptying.
- The total amount of water is conserved during operations.
- The goal is to achieve exactly one gallon in any of the jugs.
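This formulation can be solved by breadth-first search over water-level states. The sketch below assumes the jugs start empty; states are tuples (a, b, c) of gallons in the 12-, 8-, and 3-gallon jugs, and all function and variable names are my own choices. (One shortest plan: fill the 12, pour it into the 8, pour into the 3, leaving 1 gallon in the 12-gallon jug.)

```python
from collections import deque

CAPS = (12, 8, 3)

def measure_one_gallon():
    """Return the shortest action sequence that leaves exactly one
    gallon in some jug, starting with all jugs empty."""
    start = (0, 0, 0)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if 1 in state:                       # goal: one gallon somewhere
            return path
        successors = []
        for i in range(3):
            # Fill jug i from the faucet, or empty it onto the ground.
            successors.append((f"fill {i}", state[:i] + (CAPS[i],) + state[i+1:]))
            successors.append((f"empty {i}", state[:i] + (0,) + state[i+1:]))
            # Pour jug i into jug j until i is empty or j is full.
            for j in range(3):
                if i != j:
                    amount = min(state[i], CAPS[j] - state[j])
                    s = list(state)
                    s[i] -= amount
                    s[j] += amount
                    successors.append((f"pour {i}->{j}", tuple(s)))
        for action, nxt in successors:
            if nxt not in seen:              # repeated-state check
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(measure_one_gallon())
```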
6. The missionaries and cannibals problem is usually stated as follows. Three
missionaries and three cannibals are on one side of a river, along with a boat that can
hold one or two people. Find a way to get everyone to the other side without ever leaving
a group of missionaries in one place outnumbered by the cannibals in that place. This
problem is famous in AI because it was the subject of the first paper that approached
problem formulation from an analytical viewpoint (Amarel, 1968).
a. Formulate the problem precisely, making only those distinctions necessary to ensure
a valid solution. Draw a diagram of the complete state space.
b. Implement and solve the problem optimally using an appropriate search algorithm.
Is it a good idea to check for repeated states?
c. Why do you think people have a hard time solving this puzzle, given that the state
space is so simple?
Solution:
a. Problem Formulation for Missionaries and Cannibals:
- Initial State:
- Three missionaries and three cannibals on one side of the river.
- The boat is on the same side as the missionaries and cannibals.
- The other side of the river is empty.
- Actions:
- Move one or two people (missionaries or cannibals) from one side to the other.
- A move is valid only if, after it, missionaries are not outnumbered by cannibals on
either bank (a bank with no missionaries is safe).
- Transition Model:
- Moving people from one side to the other side of the river.
- Updating the state based on the actions taken.
- Goal State:
- All missionaries and cannibals are on the other side of the river, with the boat.

- Cost Function:
- Minimize the number of moves (boat trips) required to reach the goal state.
- Constraints:
- Missionaries cannot be outnumbered by cannibals on either side of the river.
- The boat can only carry one or two people at a time.
Diagram of the Complete State Space:
Write each state as (M, C, B): missionaries on the starting bank, cannibals on the
starting bank, and the boat's bank (L = starting bank, R = far bank). Only states in
which missionaries are not outnumbered on either bank are legal. One optimal solution
path, using 11 crossings:
(3,3,L) initial state
(3,1,R) two cannibals cross
(3,2,L) one cannibal returns
(3,0,R) two cannibals cross
(3,1,L) one cannibal returns
(1,1,R) two missionaries cross
(2,2,L) one missionary and one cannibal return
(0,2,R) two missionaries cross
(0,3,L) one cannibal returns
(0,1,R) two cannibals cross
(0,2,L) one cannibal returns
(0,0,R) two cannibals cross (goal)

b. To solve the problem optimally, use breadth-first search (BFS): every crossing has
the same cost, so BFS is guaranteed to find a shortest solution (11 crossings). Checking
for repeated states is a good idea, and here it is essential: every boat trip is
reversible, so without a visited set the search would cycle between the same states
indefinitely. Because the state space contains at most 4 × 4 × 2 = 32 states, the check
costs almost nothing.
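The BFS approach with repeated-state checking can be sketched as follows. States are (missionaries on the starting bank, cannibals on the starting bank, boat on starting bank?); function and variable names are my own.

```python
from collections import deque

def safe(m, c):
    """A bank is safe if it has no missionaries, or at least as many
    missionaries as cannibals; check both banks (3 - m, 3 - c is the far bank)."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve():
    """BFS from (3,3,1) to (0,0,0), pruning repeated states."""
    start, goal = (3, 3, 1), (0, 0, 0)
    frontier = deque([(start, [start])])
    seen = {start}
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # possible boat loads
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == goal:
            return path
        d = -1 if b else 1              # boat leaves or returns to start bank
        for dm, dc in moves:
            nm, nc = m + d * dm, c + d * dc
            state = (nm, nc, 1 - b)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and state not in seen:
                seen.add(state)
                frontier.append((state, path + [state]))
    return None

path = solve()
print(len(path) - 1, "crossings")   # the optimal plan takes 11 crossings
```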

c. Despite the simplicity of the state space, people often find this puzzle challenging due to
the need to consider multiple constraints simultaneously. It requires careful planning and
systematic exploration of possible moves while ensuring that no group of missionaries is
outnumbered by cannibals on either side of the river. Additionally, the optimal solution might
not be immediately apparent, leading to trial and error for many people.
7. An action such as Go(Sibiu) really consists of a long sequence of finer-grained
actions: turn on the car, release the brake, accelerate forward, etc. Having composite
actions of this kind reduces the number of steps in a solution sequence, thereby
reducing the search time. Suppose we take this to the logical extreme, by making super-
composite actions out of every possible sequence of Go actions. Then every problem
instance is solved by a single supercomposite action, such as Go(Sibiu)Go(Rimnicu
Vilcea)Go(Pitesti)Go(Bucharest). Explain how search would work in this formulation.
Is this a practical approach for speeding up problem solving?
Solution:
In this extreme formulation where every possible sequence of "Go" actions is encapsulated
into a single super-composite action, the search process becomes significantly simplified.
Essentially, the search algorithm would treat each super-composite action as a single atomic
step. This means that instead of considering individual actions (like turning on the car,
releasing the brake, etc.), the algorithm directly evaluates the consequences of executing the
entire sequence of actions represented by the super-composite action.
Here's how the search would work in this formulation:
1. Initial State: The initial state of the problem.
2. Actions: The available super-composite actions, each representing a sequence of finer-
grained actions.
3. Transition Model: Executing a super-composite action results in a new state.
4. Goal State: The desired state that satisfies the problem's criteria.
5. Search Algorithm: The search algorithm, whether it's BFS, DFS, A*, etc., would operate
on these super-composite actions. It would explore the state space by applying these actions
and evaluating their consequences until a goal state is reached.

While this approach theoretically reduces the number of steps in the solution sequence,
making the search process more efficient, it has several practical limitations:
1. Complexity of Super-Composite Actions: Super-composite actions may encapsulate a
large number of finer-grained actions, making them complex and difficult to understand or
manage.
2. Loss of Flexibility: By predefining all possible sequences of actions, the system loses the
flexibility to adapt its behaviour based on changing circumstances or constraints.
3. Memory and Computational Resources: Storing and processing super-composite actions
for large state spaces can be memory-intensive and computationally expensive.
4. Optimality Concerns: Predefined super-composite actions may not always lead to the most
optimal solutions, especially in dynamic or uncertain environments.
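Limitation 3 is easy to make concrete: if each step offers b possible Go actions, the number of distinct action sequences of length up to k is b + b² + … + bᵏ, which grows exponentially. The branching factor and plan length below are illustrative assumptions, not figures from the Romania map.

```python
def supercomposite_count(branching, max_len):
    """Number of distinct action sequences (i.e., super-composite
    actions) of length 1..max_len when each step has `branching`
    choices: sum of branching**i."""
    return sum(branching ** i for i in range(1, max_len + 1))

# Even a modest domain (20 choices per step, plans up to 10 steps)
# yields an astronomically large set of super-composite actions:
print(supercomposite_count(20, 10))
```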
