Unit 2
Problem solving by searching
A solution to a problem is an action sequence that leads from the initial state to a goal state.
*Together, the initial state, actions, and transition model implicitly define the state space.
*The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions.
*A path in the state space is a sequence of states connected by a sequence of actions.
*Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
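As a concrete reference point, the components above can be sketched in Python roughly as follows; the class name and method signatures are illustrative assumptions, not a standard API.

```python
class Problem:
    """A search problem: initial state, actions, transition model,
    goal test, and step costs; together these define the state space."""

    def __init__(self, initial):
        self.initial = initial

    def actions(self, state):
        """The actions applicable in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing action in state."""
        raise NotImplementedError

    def goal_test(self, state):
        """Is this state a goal state?"""
        raise NotImplementedError

    def step_cost(self, state, action):
        """Cost of taking action in state (1 unless overridden)."""
        return 1


def path_cost(problem, actions_sequence):
    """Path cost of a solution: the sum of the step costs along the path."""
    total, state = 0, problem.initial
    for a in actions_sequence:
        total += problem.step_cost(state, a)
        state = problem.result(state, a)
    return total
```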
Example problems
Toy problem:
• usable by different researchers to compare the performance of algorithms
Real-world problem:
• solutions people actually care about
• Such problems tend not to have a single agreed-upon description, but we can give the general flavor of their formulations
Example of a toy problem: the vacuum world
1) States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2² = 8 possible world states. A larger environment with n locations has n · 2ⁿ states.
2) Initial state: Any state can be designated as the initial state.
3) Actions: In this simple environment, each state has just three
actions: Left, Right and Suck. Larger environments might also
include Up and Down.
4) Transition model: The actions have their expected effects,
except that moving Left in the leftmost square, moving Right in
the rightmost square, and Sucking in a clean square have no
effect.
5) Goal test: This checks whether all the squares are clean.
6) Path cost: Each step costs 1, so the path cost is the number of steps in the path.
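A minimal Python sketch of this vacuum-world formulation, assuming two squares named A and B and a state encoded as (agent location, set of dirty squares); the names and encoding are my own choices, not from the slides.

```python
from itertools import combinations

LOCATIONS = ("A", "B")                      # the two squares, left and right
ACTIONS = ("Left", "Right", "Suck")

def all_states():
    """Every (agent location, set of dirty squares) pair: 2 x 2^2 = 8 states."""
    states = []
    for loc in LOCATIONS:
        for k in range(len(LOCATIONS) + 1):
            for dirty in combinations(LOCATIONS, k):
                states.append((loc, frozenset(dirty)))
    return states

def actions(state):
    return ACTIONS                          # every state has the same three actions

def result(state, action):
    """Transition model: moving off the edge or sucking a clean square has no effect."""
    loc, dirty = state
    if action == "Left":
        return ("A", dirty)                 # Left in the leftmost square: no effect
    if action == "Right":
        return ("B", dirty)                 # Right in the rightmost square: no effect
    if action == "Suck":
        return (loc, dirty - {loc})         # Suck in a clean square: no effect
    raise ValueError(action)

def goal_test(state):
    return not state[1]                     # goal: all squares are clean

def step_cost(state, action):
    return 1                                # each step costs 1

print(len(all_states()))                                  # 8
print(result(("A", frozenset({"A", "B"})), "Suck"))       # ('A', frozenset({'B'}))
```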
State space for vacuum world
2) 8-puzzle (sliding block puzzle)
*consists of a 3×3 board with eight numbered tiles and a
blank space.
*A tile adjacent to the blank space can slide into the
space
*The object is to reach a specified goal state
1) States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
2) Initial state: Any state can be designated as the initial state.
3) Actions: Left, Right, Up, or Down (movements of the blank space).
4) Transition model: Given a state and action, this returns the resulting state. For example, if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
5) Goal test: This checks whether the state matches the goal configuration.
6) Path cost: Each step costs 1, so the path cost is the number of steps in the path.
The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved; puzzles of this family are often used as test problems for new search algorithms in AI.
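A rough sketch of this 8-puzzle formulation, assuming a state is a tuple of nine entries read row by row with 0 for the blank, and actions that move the blank; the encoding is an assumption for illustration.

```python
# A state is a tuple of 9 entries read row by row; 0 marks the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

# How each action shifts the index of the blank within the 3x3 grid.
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def actions(state):
    """Legal movements of the blank from this state."""
    i = state.index(0)                      # position of the blank
    legal = []
    if i % 3 > 0: legal.append("Left")
    if i % 3 < 2: legal.append("Right")
    if i >= 3:    legal.append("Up")
    if i < 6:     legal.append("Down")
    return legal

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)         # the start state of Figure 3.4
print(result(start, "Left"))                # the 5 and the blank are swapped
```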
3) 8-queens problem
The goal of the 8-queens problem is to place eight
queens on a chessboard such that no queen attacks any
other.
1) States: Any arrangement of 0 to 8 queens on the
board is a state.
2) Initial state: No queens on the board.
3) Actions: Add a queen to any empty square.
4) Transition model: Returns the board with a queen
added to the specified square.
5) Goal test: 8 queens are on the board, none attacked.
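A sketch of this incremental 8-queens formulation, assuming a state is represented as a frozenset of occupied (row, column) squares; the helper names are illustrative.

```python
from itertools import product

N = 8

def attacks(q1, q2):
    """True if two queens at (row, col) positions attack each other."""
    r1, c1 = q1
    r2, c2 = q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """Add a queen to any empty square."""
    return [sq for sq in product(range(N), repeat=2) if sq not in state]

def result(state, square):
    """Transition model: the board with a queen added to the chosen square."""
    return state | {square}

def goal_test(state):
    """8 queens on the board, none attacked."""
    return len(state) == N and all(
        not attacks(a, b) for a in state for b in state if a != b)

empty = frozenset()                         # initial state: no queens on the board
print(goal_test(result(empty, (0, 0))))     # False: only one queen so far
```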
Real-world problems
1) Route-finding problem: defined in terms of specified locations and transitions along links between them.
Route-finding algorithms are used in a variety of applications: Web sites and in-car systems that provide driving directions, military operations planning, and airline travel-planning systems.
Airline travel problems that must be solved by a
travel-planning Web site:
1) States: Each state obviously includes a location (an airport) and the current time.
2) Initial state: This is specified by the user’s query.
3) Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed.
4) Transition model: The state resulting from taking a flight will have the flight’s destination as the current location and the flight’s arrival time as the current time.
5) Goal test: Are we at the final destination specified by the user?
6) Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, and so on.
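A very rough sketch of how such a formulation might look in code; the Flight record, the tiny timetable, and the fixed one-hour transfer buffer are illustrative assumptions only, since a real travel-planning system would be far more elaborate.

```python
from collections import namedtuple

# One scheduled flight; times are minutes since midnight for simplicity.
Flight = namedtuple("Flight", "origin destination departs arrives")

TIMETABLE = [
    Flight("BLR", "DEL", departs=9 * 60, arrives=12 * 60),
    Flight("DEL", "JFK", departs=14 * 60, arrives=29 * 60),
]

TRANSFER = 60          # assumed minimum within-airport transfer time (minutes)

# A state is (current location, current time).
def actions(state):
    """Any flight from the current location leaving after the current time,
    with enough time left for the within-airport transfer."""
    loc, now = state
    return [f for f in TIMETABLE
            if f.origin == loc and f.departs >= now + TRANSFER]

def result(state, flight):
    """Transition model: we end up at the flight's destination at its arrival time."""
    return (flight.destination, flight.arrives)

def goal_test(state, destination="JFK"):
    """Are we at the final destination specified by the user?"""
    return state[0] == destination

state = ("BLR", 7 * 60)                     # user query: start at BLR at 07:00
for f in actions(state):
    print(f, "->", result(state, f))
```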
2) Rectangular grid: a particularly important example in computer games.
Each state has four successors, so a search tree of depth d that includes repeated states has 4ᵈ leaves.
Avoid exploring redundant paths
3) Search tree by graph search:
The way to avoid exploring redundant paths is to remember where one has been.
The TREE-SEARCH algorithm is augmented with a data structure called the explored set (or closed set), which remembers every expanded node.
Newly generated nodes that are already in the explored set can be discarded.
The frontier separates the state-space graph into the explored region and the unexplored region, so that every path from the initial state to an unexplored state has to pass through a state in the frontier.
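A minimal sketch of tree search augmented with an explored set, assuming the problem is supplied as the same four functions used in the earlier formulation sketches; it returns only the goal state (not the path) to keep the example short, and the FIFO frontier here makes the expansion order breadth-first.

```python
from collections import deque

def graph_search(initial, actions, result, goal_test):
    """Generic graph search: a frontier of states plus an explored (closed) set."""
    frontier = deque([initial])
    explored = set()
    while frontier:
        state = frontier.popleft()          # choose a node from the frontier
        if goal_test(state):
            return state
        explored.add(state)                 # remember every expanded node
        for a in actions(state):
            child = result(state, a)
            # discard children already expanded or already waiting on the frontier
            if child not in explored and child not in frontier:
                frontier.append(child)
    return None
```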
Uninformed search strategies / blind search
*The strategies have no additional information about states beyond that provided in the problem definition.
*All they can do is generate successors and distinguish a goal state from a non-goal state.
*Strategies that know whether one non-goal state is “more promising” than another are called informed search or heuristic search strategies.
1. Breadth-first search (BFS)
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening DFS
6. Bidirectional search
1. Breadth-first search
*The root node is expanded first.
*All the successors of the root node are expanded next, then their successors, and so on.
*In general, all the nodes at a given depth are expanded before any nodes at the next level.
*When all step costs are equal, breadth-first search is optimal because it always expands the shallowest unexpanded node.
*It is the general graph-search algorithm with a FIFO queue for the frontier: new nodes go to the back of the queue, and old nodes, which are shallower than the new nodes, get expanded first.
*The goal test is applied to each node when it is generated rather than when it is selected for expansion.
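A sketch of breadth-first search under the same assumed function-based problem interface; note the goal test on generation, as described above.

```python
from collections import deque

def breadth_first_search(initial, actions, result, goal_test):
    """BFS on a state space; returns the list of actions to a goal, or None.
    The goal test is applied when a node is generated, not when it is expanded."""
    if goal_test(initial):
        return []
    frontier = deque([(initial, [])])       # FIFO queue of (state, path-so-far)
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()    # shallowest node first
        for a in actions(state):
            child = result(state, a)
            if child not in explored:
                if goal_test(child):        # test on generation
                    return path + [a]
                explored.add(child)
                frontier.append((child, path + [a]))
    return None

# Example with the 8-puzzle functions sketched earlier (one move from the goal):
# print(breadth_first_search((1, 0, 2, 3, 4, 5, 6, 7, 8), actions, result, goal_test))
```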
2. Uniform-cost search
*Instead of always expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost g(n).
*The goal test is applied to a node when it is selected for expansion, rather than when it is first generated.
*A test is added in case a better path is found to a node currently on the frontier.
*Uniform-cost search does not care about the number of steps a path has, but only about their total cost.
*It will get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions (for example, a sequence of NoOp actions).
*Uniform-cost search is guided by path costs rather than depths.
*When all step costs are equal, uniform-cost search is similar to breadth-first search, except that the latter stops as soon as it generates a goal.
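A sketch of uniform-cost search using a priority queue keyed on g(n), under the same assumed interface; the best_g table plays the role of the test for a better path to a node already on the frontier.

```python
import heapq
from itertools import count

def uniform_cost_search(initial, actions, result, goal_test, step_cost):
    """Expand the frontier node with the lowest path cost g(n).
    The goal test is applied when a node is selected for expansion."""
    tie = count()                           # tie-breaker so states are never compared
    frontier = [(0, next(tie), initial, [])]
    best_g = {initial: 0}                   # cheapest cost found so far per state
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                        # stale entry: a cheaper path was found later
        if goal_test(state):                # goal test on expansion
            return g, path
        for a in actions(state):
            child = result(state, a)
            g2 = g + step_cost(state, a)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2          # better path to a node already seen
                heapq.heappush(frontier, (g2, next(tie), child, path + [a]))
    return None
```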
3. Depth-first search
*Depth-first search always expands the deepest node in the current frontier of the search tree.
*The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors.
*Depth-first search uses a LIFO queue (a stack): the most recently generated node is chosen for expansion.
*Backtracking search is a variant of depth-first search that uses even less memory.
*In an infinite state space, depth-first search can follow an infinite path and never terminate.
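A sketch of depth-first graph search with an explicit LIFO stack, under the same assumed interface.

```python
def depth_first_search(initial, actions, result, goal_test):
    """Depth-first graph search using an explicit LIFO stack:
    always expands the most recently generated (deepest) node."""
    frontier = [(initial, [])]              # LIFO stack of (state, path-so-far)
    explored = set()
    while frontier:
        state, path = frontier.pop()        # most recently pushed node comes off first
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for a in actions(state):
            child = result(state, a)
            if child not in explored:
                frontier.append((child, path + [a]))
    return None
```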
4. Depth-limited search
* To cope with infinite state spaces, depth-first search is given a depth limit l.
* That is, nodes at depth l are treated as if they have no successors.
* Depth limits can be based on knowledge of the problem.
* Example: on the map of Romania there are 20 cities. Therefore, we know that if there is a solution, it must be of length 19 at the longest, so l = 19 is a possible choice.
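A sketch of recursive depth-limited search under the same assumed interface; it returns a distinct 'cutoff' value when the depth limit was reached, so the caller can tell that apart from genuine failure.

```python
def depth_limited_search(state, actions, result, goal_test, limit):
    """Recursive depth-limited search.
    Returns a list of actions, 'cutoff' if the depth limit was hit, or None."""
    if goal_test(state):
        return []
    if limit == 0:
        return "cutoff"                     # nodes at depth l are treated as leaves
    cutoff_occurred = False
    for a in actions(state):
        child = result(state, a)
        outcome = depth_limited_search(child, actions, result, goal_test, limit - 1)
        if outcome == "cutoff":
            cutoff_occurred = True
        elif outcome is not None:
            return [a] + outcome
    return "cutoff" if cutoff_occurred else None
```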
5. Iterative deepening depth-first search
*A combination of depth-limited search and depth-first search that finds the best depth limit.
*It does this by gradually increasing the limit (first 0, then 1, then 2, and so on) until a goal is found.
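A sketch of iterative deepening, reusing the depth_limited_search function from the previous sketch.

```python
from itertools import count

def iterative_deepening_search(initial, actions, result, goal_test):
    """Run depth-limited search with limit 0, 1, 2, ... until a goal is found.
    Relies on the depth_limited_search sketch above."""
    for limit in count():
        outcome = depth_limited_search(initial, actions, result, goal_test, limit)
        if outcome != "cutoff":
            return outcome                  # a solution (list of actions) or None
```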
6. Bidirectional search
*Run two simultaneous searches: one forward from the initial state and the other backward from the goal, hoping that the two searches meet in the middle.
*Bidirectional search is implemented by replacing the
goal test with a check to see whether the frontiers of the
two searches intersect; if they do, a solution has been
found.
*The check can be done when each node is generated or
selected for expansion and, with a hash table, will take
constant time
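A sketch of bidirectional breadth-first search on a small undirected graph given as an adjacency dictionary (an assumption, so that the backward search can use the same successor function); the goal test is replaced by the frontier-intersection check described above.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Bidirectional breadth-first search on an undirected graph
    (a dict mapping each node to a list of neighbours)."""
    if start == goal:
        return [start]
    parents_f = {start: None}               # nodes reached by the forward search
    parents_b = {goal: None}                # nodes reached by the backward search
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        """Expand one node; return a meeting node if the frontiers intersect."""
        node = frontier.popleft()
        for nbr in graph[node]:
            if nbr not in parents:
                parents[nbr] = node
                if nbr in other_parents:    # the two searches have met
                    return nbr
                frontier.append(nbr)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # stitch the forward half-path and the backward half-path together
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()                  # now runs start ... meet
            n = parents_b[meet]
            while n is not None:
                path.append(n)              # continue meet ... goal
                n = parents_b[n]
            return path
    return None

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
         "D": ["B", "C", "E"], "E": ["D"]}
print(bidirectional_search(graph, "A", "E"))   # ['A', 'B', 'D', 'E']
```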