3.0 Search
A search problem is specified by an initial state, a goal state, and a set of actions.
What is the goal to be achieved?
Could describe a situation we want to achieve, a set
of properties that we want to hold, etc.
Requires defining a “goal test” so that we know
what it means to have achieved/satisfied our goal.
Certainly psychologists and motivational speakers
always stress the importance of people establishing
clear goals for themselves as the first step towards
solving a problem.
What are your goals???
What are the actions?
Characterize the primitive actions or events that
are available for making changes in the world in
order to achieve a goal.
Deterministic world: no uncertainty in an
action's effects. Given an action (a.k.a. operator
or move) and a description of the current world
state, the action completely specifies:
whether that action can be applied to the current world
(i.e., is it applicable and legal), and
what the exact state of the world will be after the action
is performed in the current world (i.e., no need for
"history" information to compute what the new world
looks like).
Representing actions
Note also that actions in this framework can all be
considered as discrete events that occur at an instant
of time.
For example, if “Mary is in class” and then performs the
action “go home,” then in the next situation she is “at
home.” There is no representation of a point in time where
she is neither in class nor at home (i.e., in the state of
“going home”).
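The Mary example can be sketched as a deterministic action: an applicability test plus an effect function. The dictionary-based state representation below is an assumption of this sketch, not something the slides prescribe.

```python
# Deterministic action sketch for the "go home" example: an applicability
# test plus an effect function (state representation is an assumption).
def applicable(state):
    return state["location"] == "class"

def effect(state):
    # The new state is computed from the current state alone: no history needed.
    return {**state, "location": "home"}

state = {"location": "class"}
if applicable(state):
    state = effect(state)
# The transition is instantaneous: there is no intermediate "going home" state.
```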
The number of actions / operators depends on the
representation used in describing a state.
In the 8-puzzle, we could specify 4 possible moves for each
of the 8 tiles, resulting in a total of 4*8=32 operators.
On the other hand, we could specify four moves for the
“blank” square and we would only need 4 operators.
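The 4-operator formulation can be sketched as moves of the blank square. Representing the board as a tuple of 9 entries with 0 marking the blank is an assumption of this sketch.

```python
# Sketch of the 4-operator 8-puzzle formulation: move the blank up/down/
# left/right. Board = tuple of 9 entries, 0 marking the blank (an assumption).
MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def apply_move(board, move):
    """Return the new board, or None if the move is not applicable."""
    i = board.index(0)                 # position of the blank
    if move == "up" and i < 3:         # blank already on the top row
        return None
    if move == "down" and i > 5:       # blank already on the bottom row
        return None
    if move == "left" and i % 3 == 0:  # blank already in the left column
        return None
    if move == "right" and i % 3 == 2: # blank already in the right column
        return None
    j = i + MOVES[move]
    b = list(board)
    b[i], b[j] = b[j], b[i]            # slide the neighbouring tile
    return tuple(b)
```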
Uninformed Search
Depth-first search
Breadth-first search
Iterative deepening search
Bi-directional search
[Figure: example tree with root S and nodes A through G, expanded left to right]
Select a child (convention: left-to-right).
Repeatedly go to the next child, as long as possible.
Return to left-over alternatives (higher up) only when needed.
Depth-first algorithm:
1. QUEUE <-- path only containing the root;
2. WHILE QUEUE is not empty
        AND goal is not reached
   DO remove the first path from the QUEUE;
      create new paths (to all children);
      reject the new paths with loops;
      add the new paths to the FRONT of QUEUE;
3. IF goal reached
   THEN success;
   ELSE failure;
[Figure: running example — a weighted graph with nodes S, A, B, C, D, E, F and goal G, with costs on the edges]
Trace of depth-first for the running example:
(S)                      S removed; (SA, SD) computed and added
(SA, SD)                 SA removed; (SAB, SAD, SAS) computed, (SAB, SAD) added
(SAB, SAD, SD)           SAB removed; (SABA, SABC, SABE) computed, (SABC, SABE) added
(SABC, SABE, SAD, SD)    SABC removed; (SABCB) computed, nothing added
(SABE, SAD, SD)          SABE removed; (SABEB, SABED, SABEF) computed, (SABED, SABEF) added
(SABED, SABEF, SAD, SD)  SABED removed; (SABEDS, SABEDA, SABEDE) computed, nothing added
(SABEF, SAD, SD)         SABEF removed; (SABEFE, SABEFG) computed, (SABEFG) added
(SABEFG, SAD, SD)        goal is reached: report success
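The trace above can be reproduced with a short path-based depth-first search. The adjacency lists below are reconstructed from the successors appearing in the trace:

```python
# Adjacency lists for the running example, reconstructed from the trace.
GRAPH = {
    "S": ["A", "D"],
    "A": ["B", "D", "S"],
    "B": ["A", "C", "E"],
    "C": ["B"],
    "D": ["A", "E", "S"],
    "E": ["B", "D", "F"],
    "F": ["E", "G"],
    "G": ["F"],
}

def depth_first(graph, start, goal):
    """Path-based DFS with loop checking, mirroring the slide trace."""
    queue = [[start]]                  # QUEUE holds whole paths
    while queue:
        path = queue.pop(0)            # remove the first path
        node = path[-1]
        if node == goal:
            return path                # success
        # create children, reject loops, add new paths to the FRONT
        new_paths = [path + [child] for child in graph[node]
                     if child not in path]
        queue = new_paths + queue
    return None                        # failure

print(depth_first(GRAPH, "S", "G"))    # the path S-A-B-E-F-G, as in the trace
```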
Evaluation criteria:
Completeness:
Does the algorithm always find a path to a goal (when one exists)?
Speed (worst time complexity):
What is the largest number of nodes that may need to be created?
Memory (worst space complexity):
What is the largest number of nodes that may need to be stored?
Expressed in terms of:
d = depth of the tree
b = (average) branching factor of the tree
m = depth of the shallowest solution
Repeated States problem
In the above discussion we have ignored an
important complication that often arises in search
processes – the possibility that we will waste time
by expanding states that have already been
expanded before somewhere else on the search
tree.
Avoiding Repeated States
1. Never return to the state you have just come
from
2. Never create search paths with cycles in them
3. Never generate states that have already been
generated before – store all generated states in
memory.
Note: approximations!
In our complexity analysis, we do not take the built-in loop detection into account.
The results only 'formally' apply to the variants of our algorithms WITHOUT loop checks.
Studying the effect of loop checking on the complexity is hard:
the overhead of the checking may or may not be compensated by the reduction of the searched space.
IMPORTANT: the traces in these notes do include loop checks; this is due to the integration of loop checking in this version of the algorithms.
[Figure: depth-first search tree for the running example, with path costs on the edges]
Speed (depth-first)
In the worst case, the (only) goal node may be on the right-most branch at depth d.
Time complexity: b^d + b^(d-1) + ... + 1 = (b^(d+1) - 1)/(b - 1)
Thus: O(b^d)
Memory (depth-first)
The largest number of nodes in the QUEUE is reached at the bottom left-most node: the current path plus (b - 1) left-over alternatives at each of the d levels.
Example: d = 3, b = 3: (3 - 1) * 3 + 1 = 7 paths.
In general: (b - 1) * d + 1, thus O(b * d).
Breadth-First Search
Expand the root node first; expand all nodes at level 1 before expanding level 2.
In general: expand all nodes at level d before expanding nodes at level d+1.
Breadth-first search:
[Figure: breadth-first search tree for the running example]
Move downwards, level by level, until the goal is reached.
Breadth-first algorithm:
1. QUEUE <-- path only containing the root;
2. WHILE QUEUE is not empty
        AND goal is not reached
   DO remove the first path from the QUEUE;
      create new paths (to all children);
      reject the new paths with loops;
      add the new paths to the BACK of QUEUE;
3. IF goal reached
   THEN success;
   ELSE failure;
End of the trace for the running example:
(SABED, SABEF, SADEB, SADEF, SDABC, SDABE, SDEBA, SDEBC, SDEFG) goal is reached: report success
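Breadth-first differs from depth-first only in where new paths are added: at the BACK of the QUEUE instead of the front. A sketch, using adjacency lists reconstructed from the traces:

```python
# Adjacency lists for the running example, reconstructed from the traces.
GRAPH = {
    "S": ["A", "D"],
    "A": ["B", "D", "S"],
    "B": ["A", "C", "E"],
    "C": ["B"],
    "D": ["A", "E", "S"],
    "E": ["B", "D", "F"],
    "F": ["E", "G"],
    "G": ["F"],
}

def breadth_first(graph, start, goal):
    """Path-based BFS: new paths go to the BACK of the queue."""
    queue = [[start]]
    while queue:
        path = queue.pop(0)
        node = path[-1]
        if node == goal:
            return path
        # reject paths with loops, append the rest to the back
        queue += [path + [child] for child in graph[node]
                  if child not in path]
    return None

print(breadth_first(GRAPH, "S", "G"))  # the shallowest goal path S-D-E-F-G
```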
Completeness (breadth-first)
Complete, even for infinite implicit nets!
Speed (breadth-first)
If the goal node is at depth m, in the worst case all nodes up to and including depth m are created:
b^m + b^(m-1) + ... + 1 = (b^(m+1) - 1)/(b - 1)
Thus: O(b^m)
Note: depth-first would also visit deeper nodes.
Memory (breadth-first)
The largest number of nodes in the QUEUE is reached on the level m of the goal node: in the worst case, all b^m nodes of that level are in the QUEUE.
Thus: O(b^m)
Solutions ??
Non-deterministic search
Iterative deepening
Non-deterministic search:
1. QUEUE <-- path only containing the root;
2. WHILE QUEUE is not empty
        AND goal is not reached
   DO remove the first path from the QUEUE;
      create new paths (to all children);
      reject the new paths with loops;
      add the new paths in RANDOM places in QUEUE;
3. IF goal reached
   THEN success;
   ELSE failure;
3. Iterative deepening search
1. DEPTH <-- 1
2. WHILE goal is not reached
   DO perform depth-limited search to depth DEPTH;
      DEPTH <-- DEPTH + 1;
The work spent at depths 1 through m-1 is:
b^(m-1) + b^(m-2) + ... + 1 = (b^m - 1)/(b - 1) = O(b^(m-1))
while the work spent at DEPTH = m itself is O(b^m).
In general: a VERY good trade-off.
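A sketch of iterative deepening on the example graph (adjacency lists reconstructed from the earlier traces): a depth-limited depth-first search is repeated with a growing depth bound, so it finds the shallowest goal like breadth-first but with depth-first memory.

```python
# Adjacency lists for the running example, reconstructed from the traces.
GRAPH = {
    "S": ["A", "D"],
    "A": ["B", "D", "S"],
    "B": ["A", "C", "E"],
    "C": ["B"],
    "D": ["A", "E", "S"],
    "E": ["B", "D", "F"],
    "F": ["E", "G"],
    "G": ["F"],
}

def depth_limited(graph, path, goal, limit):
    """Depth-first search that never extends a path beyond `limit` steps."""
    node = path[-1]
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph[node]:
        if child not in path:          # loop check
            found = depth_limited(graph, path + [child], goal, limit - 1)
            if found:
                return found
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    # DEPTH <-- 0, 1, 2, ...; max_depth bounds the loop for finite failure
    for depth in range(max_depth + 1):
        found = depth_limited(graph, [start], goal, depth)
        if found:
            return found
    return None
```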
4. Bi-directional Search
Compute the tree from the start node and from a
goal node, until these meet.
Bi-directional search
IF you are able to EXPLICITLY describe the GOAL
state, AND
you have BOTH rules for FORWARD reasoning AND
BACKWARD reasoning:
Start Goal
Bi-directional algorithm:
1. QUEUE1 <-- path only containing the root;
   QUEUE2 <-- path only containing the goal;
2. WHILE both QUEUEs are not empty
        AND QUEUE1 and QUEUE2 do NOT share a state
   DO remove the first paths from both QUEUEs;
      create their new paths (to all children);
      reject the new paths with loops;
      add the new paths to the BACK of their QUEUEs;
3. IF QUEUE1 and QUEUE2 share a state
   THEN success;
   ELSE failure;
INFORMED SEARCHES
Best-first search
Greedy search
A* search
A* variants
Informed search uses some kind of evaluation
function to tell us how far each expanded state is
from a goal state, and/or some kind of heuristic
function to help us decide which state is likely to
be the best one to expand next.
Goal:
To see how information about the state space can
prevent algorithms from stumbling in the dark
Heuristic Functions
A heuristic search may use a heuristic function,
which is a calculation to estimate how costly (in
terms of the path cost) a path from a state will be to
a goal state.
Heuristic functions can be derived in a number of ways:
Mathematically
Think them up as good ideas
Identify common elements in various search solutions, and convert these to useful heuristic functions
Use computer programs to derive heuristic functions, e.g. the ABSOLVER program
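Two classic 8-puzzle heuristics illustrate these ideas (standard textbook heuristics, not taken from these slides; the goal configuration below, with the blank first, is an assumption of this sketch):

```python
# Two classic 8-puzzle heuristics: misplaced tiles and Manhattan distance.
# GOAL is an assumed goal configuration; 0 marks the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def misplaced(board):
    """Number of tiles (blank excluded) not on their goal square."""
    return sum(1 for i, t in enumerate(board) if t != 0 and t != GOAL[i])

def manhattan(board):
    """Sum over tiles of horizontal + vertical distance to the goal square."""
    total = 0
    for i, t in enumerate(board):
        if t == 0:
            continue
        gi = GOAL.index(t)
        total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return total
```

Both never overestimate the number of moves still needed, so both are admissible; Manhattan distance dominates misplaced tiles and usually prunes more.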
Heuristic Search Strategies
Uniform Path Cost Search
Similar to BFS.
A uniform path cost search chooses which node to expand by looking at the path cost for each node: the node which is cheapest to get to is expanded first.
It is an optimal search strategy: it is guaranteed to find the least expensive goal (provided every step cost is positive).
Best-first search
= an instance of the TREE/GRAPH-SEARCH algorithm
Expand the most desirable unexpanded node.
Goal:
Minimize the total estimated solution cost
Evaluation function:
f(n) = g(n) + h(n)
g(n) = the cost to reach n
h(n) = the estimated cost from n to the goal node
f(n) = estimated cost of the cheapest solution through n
A* search: minimize g(n) + h(n)
A* = optimal & complete (with an admissible heuristic)
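A minimal A* sketch. The edge weights and heuristic values below are illustrative assumptions (not the numbers in the slides' figure), chosen so that the heuristic is admissible:

```python
import heapq

# Hypothetical weighted graph and heuristic values; illustrative only.
EDGES = {
    "S": {"A": 4, "D": 3},
    "A": {"B": 4, "D": 5},
    "B": {"C": 4, "E": 5},
    "C": {},
    "D": {"E": 2},
    "E": {"B": 5, "F": 4},
    "F": {"G": 3},
    "G": {},
}
H = {"S": 8, "A": 7, "B": 6, "C": 9, "D": 6, "E": 4, "F": 3, "G": 0}

def a_star(edges, h, start, goal):
    """A*: always expand the frontier node with minimal f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, [start])]       # (f, g, path)
    best_g = {start: 0}                       # cheapest known cost per state
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        for child, cost in edges[node].items():
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h[child], g2, path + [child]))
    return None, float("inf")

print(a_star(EDGES, H, "S", "G"))  # cheapest path S-D-E-F-G with cost 12
```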
Admissible Heuristic
A heuristic h(n) is admissible if it never overestimates the cost to reach the goal, i.e. it is optimistic:
for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
We also require h(n) ≥ 0, so h(G) = 0 for any goal G.
Types of Problems
There are essentially 4 types of problems:
1. Single-state problems: the world is fully observable and deterministic, so the agent always knows which state it is in and each action sequence is a sequence of one-step transitions.
Example: keep your eyes open while walking; with your senses awake, you know exactly where you are.
2. Multiple-state problems: the agent knows only which set of states it might be in; actions map sets of states to sets of states.
3. Contingency problems: the agent must sense during execution, so the solution is a contingency plan rather than a fixed action sequence.
4. Exploration Problems
The agent has no information about the effects of
its actions e.g. an intelligent agent in a strange city
must gradually discover what its actions do and
what states exist.
Important Terms
States
“Places” where the search can visit
Search space
The set of possible states
Search path
The states which the search agent actually visits
Solution
A state with a particular property that solves the problem (achieves
the task) at hand
There may exist more than one solution to a problem
Strategy
How to choose the next state in the path at any given state
Search Problem considerations
Consider the following when specifying a search problem:
1. Initial state
Inform the agent where the search begins e.g. a city
2. Operator set
Description of the possible actions available to the agent e.g.
functions that change one state to another
Specify how the agent can move around search space
Final search strategy boils down to choosing states &
operators
3. Goal test
Describes how the agent knows if the solution (goal)
has been achieved.
4. Path Cost
This is a function that assigns a numeric cost to each
path. The problem solving agent chooses a cost function
that reflects its own performance measure i.e. time,
distance etc.
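These four components can be bundled into a minimal problem definition. The class and its names below are illustrative, not from a specific library:

```python
# A minimal bundle of the four components of a search problem specification.
class SearchProblem:
    def __init__(self, initial, operators, goal_test, step_cost):
        self.initial = initial        # 1. initial state
        self.operators = operators    # 2. state -> list of successor states
        self.goal_test = goal_test    # 3. state -> bool
        self.step_cost = step_cost    # 4. (state, next_state) -> numeric cost

    def path_cost(self, path):
        # Total cost of a path = sum of its step costs.
        return sum(self.step_cost(a, b) for a, b in zip(path, path[1:]))

# Tiny route-planning instance over three hypothetical cities.
roads = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
problem = SearchProblem(
    initial="A",
    operators=lambda s: roads[s],
    goal_test=lambda s: s == "C",
    step_cost=lambda a, b: 1,
)
```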
Formulating Problems:
Example 1 - Chess
Initial state
As in the picture
Operators
Moving pieces
Goal test
Checkmate: the opponent's king is attacked and cannot move without being taken
Example 2 – Route Planning
Initial state
City the journey starts in
Operators
Driving from city to city
Goal test
Destination city
Properties of Search Algorithms
Consider the following when selecting an algorithm:
Completeness – Is a solution guaranteed to be found
if at least one solution exists?
Optimality – Is the solution found guaranteed to be
the best (or lowest cost) solution if there exists more
than one solution?
Cont. of Search Properties
Time Complexity – The upper bound on the time
required to find a solution, as a function of the
complexity of the problem.
- How long does it take to find the solution?
Space Complexity – The upper bound on the storage
space (memory) required at any point during the
search, as a function of the complexity of the
problem.
- what are the memory requirements?
TREES
These are state graphs that have no cycles in them.
Many AI searches can be represented as trees.
The root of the tree represents the search node corresponding to the initial state.
The other nodes correspond to states reachable from the initial state; the leaf nodes are states that have not yet been expanded (or have no successors).
Reading Assignment
Read on the IDA* search
MA* search (memory bounded A*)