Chapter 3: Problem Solving
What is a Problem?
Definition
• A problem is a gap between what actually is and what is desired.
• A problem exists when an individual becomes aware of the existence of
an obstacle which makes it difficult to achieve a desired goal or
objective.
Problems addressed in AI fall into two kinds:
•Toy problems: are problems that are useful to test and demonstrate
methodologies.
− Can be used by researchers to compare the performance of different
algorithms
− E.g. 8-puzzle, n-queens, vacuum cleaner world, towers of Hanoi, etc.
• Real-life problems: are problems that have much greater
commercial/economic impact if solved.
− Such problems are more difficult and complex to solve, and there is no
single agreed-upon description
− E.g. route finding, travelling salesperson, etc.
Solving a Problem
• Problem-solving agent: a type of goal-based agent
− Decide what to do by finding sequences of actions that lead to
desirable states =⇒ solution
− A solution is a sequence of world states in which the final state satisfies
the goal; equivalently, it is an action sequence whose last action
results in the goal state.
− The process of looking for such a sequence of actions is called
search
• Rational (problem-solving) agents in AI mostly use search strategies or
algorithms to solve a specific problem and provide the best result.
− Formalize the problem: Identify the collection of information that the
agent will use to decide what to do.
− Define states: States describe distinguishable stages during the
problem-solving process
Cont...
− Define the available operators/rules for getting from one state to the
next
• Operators cause an action that brings transitions from one state to
another by applying on a current state
− Suggest a suitable representation for the problem space/state space:
graph, table, list, set or a combination of them
State Space of a Problem
• The state space is the set of all relevant states reachable from
the initial state by any sequence of actions, i.e. through iterative
application of the operators
• A state space represents a problem in terms of states and
operators that change states.
• State space (also called search space/problem space) of the
problem includes the various states
− Initial state: defines where the agent starts or begins its task
− Goal state: describes the situation the agent attempts
to achieve
− Operators/actions: describe what the agent can execute
− Transition states: other states in between initial and goal
states
Examples for Defining State Space of a Problem
Coloring Problem
There are 3 rectangles, all initially white. The problem is to change
all the rectangles from white to black, changing the color of one
rectangle at a time. Define the state space for the coloring problem.
Examples for Defining State Space of a Problem
The 8 Puzzle Problem
Arrange the tiles so that all the tiles are in the correct positions. You do this
by moving tiles or space. You can move a tile/space up, down, left, or right,
as long as the following conditions are met:
• there’s no other tile blocking you in the direction of the
movement; and
• you’re not trying to move outside of the boundaries/edges.
Examples for Defining State Space of a Problem
Tower of Hanoi
Build the tower on the third peg (peg C), obeying the following rules:
• move one disk at a time,
• do not stack a bigger disk on a smaller one.
Exercises
1 Missionary-and-Cannibal Problem: Three missionaries and three
cannibals are on one side of a river that they wish to cross. There is a
boat that can hold one or two people. Find an action sequence that
brings everyone safely to the opposite bank (i.e. Cross the river). But
you must never leave a group of missionaries outnumbered by cannibals
on the same bank (in any place).
2 Water Jug Problem: We have one 3-liter jug, one 5-liter jug and an
unlimited supply of water. The goal is to get exactly one liter of
water in either jug. Either jug can be emptied, filled or poured
into the other.
For the above-mentioned problems:
• Identify the set of states and operators
• Show using suitable representation the state space of the
problem
Steps in Problem Solving
• Goal formulation
− is a step that specifies exactly what the agent is trying to achieve
− this step narrows down the scope that the agent has to look at
• Problem formulation
− is a step that puts down the actions and states that the agent has to
consider given a goal (avoiding any redundant states)
• Search
− is the process of looking for the various sequences of actions that lead to a
goal state, evaluating them and choosing the optimal sequence.
• Execute
− is the final step that the agent executes the chosen sequence of actions to
get it to the solution/goal
Example: Road Map of Ethiopia
Goal Test
• The agent applies the goal test to determine whether it has reached the goal state
− It is a function which determines whether a given state is a goal state
• Example:
− Route finding problem: Reach Gondar → IsGoal(x, Gondar)
− Coloring problem: All rectangles black → IsGoal(rectangle[], n)
− 8-puzzle ??
Well-defined Problems and Solutions
Path Cost Function
• A function that assigns a cost to a path (sequence of actions).
− Is often denoted by g. Usually, it is the sum of the costs of the
individual actions along the path (from one state to another state)
− Measures the path cost of a sequence of actions in the state space.
For example, we may prefer paths with fewer or less costly
actions
• Example:
− Route finding problem: Path cost from initial to goal state
− Coloring problem: One for each transition from state to state till
goal state is reached
− 8-puzzle?
e)
Problem-solving agents
Restricted form of general agent:
function Simple-Problem-Solving-Agent(percept) returns an action
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation

  state ← Update-State(state, percept)
  if seq is empty then
      goal ← Formulate-Goal(state)
      problem ← Formulate-Problem(state, goal)
      seq ← Search(problem)
  action ← Recommendation(seq, state)
  seq ← Remainder(seq, state)
  return action
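The agent loop above can be sketched in Python. This is only a sketch: the four Formulate/Search routines are placeholders supplied by the caller, not fixed implementations, and the demo stand-ins at the end are hypothetical.

```python
# A minimal Python sketch of the Simple-Problem-Solving-Agent pseudocode.
class SimpleProblemSolvingAgent:
    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.seq = []          # action sequence, initially empty
        self.state = None      # description of the current world state
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:       # plan only when the old plan is used up
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem) or []
        return self.seq.pop(0) if self.seq else None

# Demo with trivial stand-ins (hypothetical):
agent = SimpleProblemSolvingAgent(
    update_state=lambda state, percept: percept,
    formulate_goal=lambda state: 'goal',
    formulate_problem=lambda state, goal: (state, goal),
    search=lambda problem: ['a', 'b'],
)
print(agent('percept-1'), agent('percept-2'))  # a b
```

Note the design choice carried over from the pseudocode: the agent only re-plans (Formulate-Goal, Formulate-Problem, Search) once the previous action sequence has been fully executed.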
Example: Romania
[Figure: road map of Romania with driving distances in km between neighbouring cities — Oradea, Zerind, Arad, Sibiu, Timisoara, Lugoj, Mehadia, Dobreta, Craiova, Rimnicu Vilcea, Fagaras, Pitesti, Bucharest, Giurgiu, Urziceni, Hirsova, Eforie, Vaslui, Iasi, Neamt]
Selecting a state space
Real world is complex⇒ state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions e.g., “Arad →
Zerind” represents a complex set of possible routes, detours, rest
stops, etc. For guaranteed realizability, any real state “in Arad”
must get to some real state “in Zerind”
(Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be “easier” than the original problem!
Atomic representation
Example: Vacuum world state space graph
[Figure: state-space graph of the two-cell vacuum world; arcs labelled L (Left), R (Right), S (Suck)]
states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??: Left, Right, Suck, NoOp
transition model??: arcs in the digraph
goal test??: no dirt
path cost??: 1 per action (0 for NoOp)
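The formulation above maps directly to code. The sketch below encodes a state as (robot location, set of dirty cells); this encoding is an assumption for illustration.

```python
# A sketch of the two-cell vacuum world formulated above. A state is
# (robot_location, dirt), with locations 'A' and 'B' and dirt a frozenset
# of the dirty locations.

def result(state, action):
    """Transition model: the state reached by doing `action` in `state`."""
    loc, dirt = state
    if action == 'Left':
        return ('A', dirt)
    if action == 'Right':
        return ('B', dirt)
    if action == 'Suck':
        return (loc, dirt - {loc})    # cleaning removes dirt at the robot's cell
    return state                      # NoOp

def is_goal(state):
    return not state[1]               # goal test: no dirt anywhere

def step_cost(action):
    return 0 if action == 'NoOp' else 1

s = ('A', frozenset({'A', 'B'}))      # robot in A, both cells dirty
for a in ('Suck', 'Right', 'Suck'):   # one solution sequence
    s = result(s, a)
print(is_goal(s))  # True
```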
Example: The 8-puzzle
[Figure: 8-puzzle start and goal configurations — Start State: 2 8 3 / 1 6 4 / 7 _ 5; Goal State: 1 2 3 / 8 _ 4 / 7 6 5]
Example: vacuum world
Searching for Solution
• A solution to a problem is an action sequence that leads from the
initial state to a goal state.
• Solution quality is measured by the path cost function, and an
optimal solution has the lowest path cost among all solutions.
• An agent with several immediate options of unknown value can
decide what to do by first examining different possible sequences of
actions that lead to states of known value, and then choosing the best
sequence.
− This process of looking for such a sequence is called search
− This is done by a search through the state space.
• Examine different possible sequences of actions & states, and come
up with specific sequence of operators/actions that will take you
from the initial state to the goal state
Searching in State Space
• choose the min-cost/max-profit node/state in the state space,
• test the state to know whether we have reached the goal state or not,
• if not, expand the node further to identify its successors.
Search Tree
• The searching process is like building a search tree that is superimposed
over the state space
• A search tree is a representation in which nodes denote paths and branches
connect paths. The node with no parent is the root node; the nodes with
no children are called leaf nodes.
• Given a state S and the valid actions at S, the set of next states
generated by executing each action is called the successors of S
• The Successor-Function generates all the successor states together with the
action that moves the current state into each successor state
Example: Road Map of Ethiopia
Tree search example: Route finding Problem
Search Algorithms
• Search algorithms all share this basic structure; they vary primarily
according to how they choose which state to expand next—the so-
called search strategy.
Example Search Algorithm Implementation
Algorithm 2 Example for a Search Algorithm Implementation
1: Input: a given problem (Initial State + Goal state + transit states and oper- ators)
2: Output: returns optimal sequence of actions to reach the goal. 3:
function GeneralSearch (problem, strategy)
Uninformed Search
Breadth-first Search (BFS)
• Breadth-first search is a simple strategy in which the root
node is expanded first, then all the successors of the root
node are expanded next, then their successors, and so on.
• In general, all the nodes are expanded at a given depth in the
search tree before any nodes at the next level are expanded.
• Breadth-first search is an instance of the general search
algorithm in which the shallowest unexpanded node is chosen
for expansion.
− This is achieved very simply by using a FIFO queue for the
frontier.
• Thus, new nodes (which are always deeper than their parents)
go to the back of the queue, and old nodes, which are
shallower than the new nodes, get expanded first.
Breadth-first Search (BFS)
• Expand the shallowest unexpanded node, i.e. expand all nodes on a
given level of the search tree before moving to the next level
BFS Implementation
Use queue data structure to store the list:
• Fringe (open list) is a FIFO queue, i.e., new successors go at the end
• Expansion: put successors at the end of queue
• Pop nodes from the front of the queue
Properties of BFS
• Complete? Yes (if b is finite, which is true in most cases)
− if the shallowest goal node is at some finite depth d, breadth-first
search will eventually find it after expanding all shallower nodes
(provided the branching factor b is finite)
• Optimal? Yes (if cost = constant (k) per step)
− breadth-first search is optimal if the path cost is a non-decreasing
function of the depth of the node. For example, when all actions have
the same cost.
• Time? 1 + b + b^2 + b^3 + . . . + b^d = O(b^d)
− at depth i, there are b^i nodes, for i ≤ d
− If the algorithm applies the goal test to nodes when they are selected for
expansion, rather than when they are generated, the whole layer of nodes at
depth d is expanded before the goal is detected, and the time
complexity becomes O(b^(d+1))
− If the algorithm applies the goal test to nodes when they are generated,
rather than when they are selected for expansion, the layer of nodes at
depth d is not expanded and the time complexity is O(b^d)
Cont…
Algorithm 3 BFS
function BFS (problem) {
  open = (C_0);                    // put initial state C_0 in the list
  closed = {};                     // maintain list of nodes examined earlier
  while (not (empty (open))) {
    f = remove_first(open);
    if IsGoal (f) then return (f);
    closed = append (closed, f);
    l = not-in-closed (Successors (f), closed);
    open = merge (rest(open), l);  // append l to the end of the open list
  }
  return ('fail')
}
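The BFS pseudocode above can be made runnable in Python, keeping whole paths on a FIFO queue so the returned value is the solution path. The +1 / *2 toy problem at the end is a hypothetical example.

```python
from collections import deque

# Runnable Python sketch of the BFS pseudocode: `successors` maps a
# state to its successor states; `is_goal` is the goal-test predicate.

def bfs(start, successors, is_goal):
    frontier = deque([[start]])       # FIFO queue of paths
    closed = {start}                  # states generated so far
    while frontier:
        path = frontier.popleft()     # shallowest node first
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in closed:
                closed.add(nxt)
                frontier.append(path + [nxt])
    return None                       # 'fail'

# Tiny example: fewest operations from 0 to 5 using +1 and *2.
path = bfs(0, lambda n: [n + 1, n * 2], lambda n: n == 5)
print(path)  # [0, 1, 2, 4, 5]
```

Because the frontier is a FIFO queue, nodes are expanded in order of depth, which is exactly the "shallowest unexpanded node first" strategy described above.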
Uniform Cost Search (UCS)
• Uniform-cost search expands the node n with the lowest path cost g(n).
This is done by storing the frontier as a priority queue ordered by total
path cost.
• Nodes in the open list keep track of total path length from start to that
node
• Open list kept in priority queue ordered by path cost
• The goal of this technique is to find the shortest path to the goal in
terms of cost.
− It modifies the BFS by always expanding least-cost unexpanded node
Uniform Cost Search (UCS)
Implementation
• Nodes in list keep track of total path length from start to that node
• List kept in priority queue ordered by path cost
• Uniform-cost search does not care about the number of steps a path
has, but only about their total cost.
Uniform Cost Search (UCS)
Properties
It finds the cheapest solution if the cost of a path never decreases as we go
along the path, i.e. g(successor(n)) ≥ g(n) for every node n.
• Completeness is guaranteed provided the cost of every step exceeds
some small positive constant ϵ (all action costs > 0)
• It is optimal if all action costs are > 0.
− Nodes expanded in increasing order of g(n)
• Uniform-cost search is guided by path costs rather than depths, so its
complexity is not easily characterized in terms of b and d.
• Instead, let C∗ be the cost of the optimal solution, and assume that every
action costs at least ϵ.
Uniform Cost Search (UCS)
Properties
• Time? # of nodes with g ≤ cost of optimal solution
− In the worst case, how many nodes are there if every step costs ϵ,
the branching factor is b and the optimal path cost is C∗? The
resulting tree has depth ⌊C∗/ϵ⌋
− Hence the total number of nodes is b^(1+⌊C∗/ϵ⌋), so the time
complexity becomes O(b^(1+⌊C∗/ϵ⌋))
• Space? # of nodes with g ≤ cost of optimal solution, O(b^(1+⌊C∗/ϵ⌋))
• When all step costs are equal, b^(1+⌊C∗/ϵ⌋) is just b^(d+1).
Algorithm for Uniform Cost Search
Algorithm 4 UCS
function UCS (problem) {
  open = (C_0);                    // put initial state C_0 in the list
  g(C_0) = 0;
  closed = {};                     // maintain list of nodes examined earlier
  while (not (empty (open))) {
    f = remove_first(open);
    if IsGoal (f) then return (f);
    closed = append (closed, f);
    l = not-in-closed (Successors (f), closed);
    g(l_i) = g(f) + c(f, l_i);     // path cost of each successor l_i of f
    open = merge (rest(open), l);  // keep the open list sorted in ascending order of g
  }
  return ('fail')
}
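The UCS pseudocode above can be sketched in Python with a priority queue ordered by path cost g. The small weighted graph at the end is a hypothetical example.

```python
import heapq

# Python sketch of UCS: `successors` returns (next_state, step_cost)
# pairs; the frontier is a priority queue ordered by path cost g.

def ucs(start, successors, is_goal):
    frontier = [(0, [start])]               # entries are (g, path)
    explored = set()
    while frontier:
        g, path = heapq.heappop(frontier)   # cheapest path first
        state = path[-1]
        if is_goal(state):
            return g, path
        if state in explored:
            continue                        # a cheaper path got here already
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost, path + [nxt]))
    return None                             # 'fail'

graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 1)], 'B': [('G', 2)]}
print(ucs('S', lambda s: graph.get(s, []), lambda s: s == 'G'))
# (4, ['S', 'A', 'B', 'G'])
```

Note the goal test is applied when a node is selected for expansion, not when it is generated; this is what guarantees the first goal popped is reached by a cheapest path.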
Depth-first Search (DFS)
• It always expands the deepest node from the current
frontier (open list) of the search tree
• It uses a LIFO queue for the open list (frontier)
− the most recently generated node is chosen for expansion, which is
the deepest unexpanded node (it is deeper than its parent)
• Expands one of the nodes at the deepest level of the tree.
− Only when the search hits a non-goal dead end does the search go
back and expand nodes at shallower levels
Depth-first Search (DFS)
Algorithm 5 DFS
function DFS (problem) {
  open = (C_0);                    // put initial state C_0 in the list
  closed = {};                     // maintain list of nodes examined earlier
  while (not (empty (open))) {
    f = remove_first(open);
    if IsGoal (f) then return (f);
    closed = append (closed, f);
    l = not-in-set (Child-Nodes(f), closed);
    open = merge (l, rest(open));  // prepend l to the open list
  }
  return ('fail')
}
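In Python, the DFS pseudocode differs from BFS only in the open-list discipline: a stack instead of a FIFO queue. The tiny example tree is hypothetical.

```python
# Python sketch of DFS: identical in shape to BFS, except that the open
# list is a LIFO stack, so the most recently generated (deepest) node is
# expanded first.

def dfs(start, successors, is_goal):
    frontier = [[start]]              # stack of paths
    closed = set()
    while frontier:
        path = frontier.pop()         # deepest (most recent) node first
        state = path[-1]
        if is_goal(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for nxt in successors(state):
            if nxt not in closed:
                frontier.append(path + [nxt])
    return None                       # 'fail'

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F']}
print(dfs('A', lambda s: tree.get(s, []), lambda s: s == 'F'))
# ['A', 'C', 'F']
```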
Depth-limited Search
• Breadth-first search is complete, which is its advantage, though its space
complexity is the worst
• Depth-first search is best in terms of space complexity, even though it is
worse than breadth-first search in completeness and time complexity
• Hence, we look for an algorithm that incorporates the benefits of both and
avoids their limitations
• Such an algorithm is called depth-limited search, and its improved version
is called iterative deepening search.
• These two strategies are explored in the following sections
Depth-limited Search
Definition
Depth-first search with depth limit l truncates all nodes having depth
greater than l from the search space and applies depth-first search to
the rest of the structure
• It returns a solution if one exists within the depth limit
• It returns cutoff if l < m (the shallowest goal may lie beyond the limit), and failure otherwise
Properties
Complete? No (fails if all solutions exist at depth > l)
Time? O(b^l)
Space? O(bl)
Optimal? No
Depth-limited search
Recursive implementation:
function Depth-Limited-Search (problem, limit) returns soln/fail/cutoff
  return Recursive-DLS(Make-Node(Initial-State[problem]), problem, limit)
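The recursive structure can be sketched in Python, keeping the three-way soln/fail/cutoff distinction described above. The small example tree is hypothetical.

```python
# Recursive Python sketch of depth-limited search. Returns a solution
# path, the string 'cutoff' (a solution may lie beyond the limit), or
# None for outright failure.

def depth_limited_search(state, successors, is_goal, limit):
    if is_goal(state):
        return [state]
    if limit == 0:
        return 'cutoff'                   # depth limit reached
    cutoff_occurred = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, successors, is_goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return 'cutoff' if cutoff_occurred else None   # None = failure

tree = {'A': ['B', 'C'], 'B': ['D'], 'C': []}
goal = lambda s: s == 'D'
print(depth_limited_search('A', lambda s: tree.get(s, []), goal, 1))  # 'cutoff'
print(depth_limited_search('A', lambda s: tree.get(s, []), goal, 2))  # ['A', 'B', 'D']
```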
Iterative Deepening Search (IDS)
• IDS solves the issue of choosing the best depth limit by trying all
possible depth limits:
− Perform depth-first search to a bounded depth d, starting at
d = 1 and increasing it by 1 at each iteration.
• This search combines the benefits of DFS and BFS
− DFS is efficient in space, but has no path-length guarantee
− BFS finds min-step path towards the goal, but requires memory space
− IDS performs a sequence of DFS searches with increasing depth-
cutoff until goal is found
Iterative Deepening Search (IDS)
• It is the search strategy that combines the benefits of depth-first and
breadth-first search
• Like depth-first search, its memory requirement is O(bd) to be precise.
• Like breadth-first search, it is complete when the branching factor is
finite and optimal when the path cost is a non-decreasing function of
the depth of the node.
Iterative deepening search
Limit = 3
[Figure: progress of depth-limited search with Limit = 3 on a binary tree with nodes A–O, shown as a sequence of snapshots]
Algorithm for IDS
Algorithm 6 IDS
function IDS (problem) {
  open = (C_0);                    // put initial state C_0 in the list
  closed = {};                     // maintain list of nodes examined earlier
  while (not reached maxDepth) {
    while (not (empty (open))) {
      f = remove_first(open);
      if IsGoal (f) then return (f);
      closed = append (closed, f);
      l = not-in-set (Child-Nodes(f), closed);
      if (depth(l) < maxDepth) then
        open = merge (l, rest(open));  // prepend l to the open list
    }
  }
  return ('fail')
}
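IDS can also be written compactly as repeated depth-limited DFS with an increasing cutoff. This Python sketch uses a simple recursive depth-limited search; the example tree and the max_depth default are hypothetical.

```python
# IDS sketch: run depth-limited DFS with limit 0, 1, 2, ... until a
# solution is found.

def dls(state, successors, is_goal, limit):
    """Depth-limited DFS returning a solution path or None."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        result = dls(nxt, successors, is_goal, limit - 1)
        if result is not None:
            return [state] + result
    return None

def ids(start, successors, is_goal, max_depth=20):
    for limit in range(max_depth + 1):    # limit = 0, 1, 2, ...
        result = dls(start, successors, is_goal, limit)
        if result is not None:
            return result                 # shallowest solution found first
    return None                           # 'fail'

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F']}
print(ids('A', lambda s: tree.get(s, []), lambda s: s == 'F'))
# ['A', 'C', 'F']
```

Because each iteration is a full DFS with a small limit, the memory use stays linear in the current depth, while the increasing limits give BFS-like shallowest-first solutions.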
Properties of iterative deepening search
Complete?? Yes
Optimal?? Yes, if step cost = 1; can be modified to explore a uniform-cost tree
Time?? (d + 1)b^0 + db^1 + (d − 1)b^2 + . . . + b^d = O(b^d)
Space?? O(bd)
IDS does better because the other nodes at depth d are not expanded; BFS can
be modified to apply the goal test when a node is generated. Iterative
lengthening (the analogue with increasing path-cost limits) is not as
successful as IDS.
Exercise
1 [Figure: weighted search graph — start node S; intermediate nodes A, B, C, D, E, F with labelled edge costs; goal nodes G1, G2, G3]
Bidirectional Search
Difficulties in Bidirectional Search
Comparing Uninformed Search
• b is branching factor,
• d is depth of the shallowest solution,
• m is the maximum depth of the search tree,
Summary
• Before an agent can start searching for solutions, it must formulate a
goal and then use that goal to formulate a problem.
• A problem consists of five parts: The state space, initial
situation, actions, goal test, and path costs.
• A path from an initial state to a goal state is a solution.
• A general search algorithm can be used to solve any problem.
− Specific variants of the algorithm can use different search strategies.
• Search algorithms are judged on the basis of completeness,
optimality, time complexity, and space complexity.
• Iterative deepening in general is the preferred uninformed search method
when there is a large search space and the depth of the solution is not
known.
Informed Search
• Search efficiency would improve greatly if there is a way to order the
choices so that the most promising are explored first.
− This requires domain knowledge of the problem (i.e. heuristic) to
undertake focused search
• Informed search is a strategy that uses information about the cost that
may be incurred in reaching the goal state from the current state.
• The information may not be accurate, but it helps the agent make
better decisions
− This information is called heuristic information
• The generic name for this class of informed methods is best-first
search
Heuristic Function
Definition
• Most best-first algorithms include as a component of f a
heuristic function, denoted h(n):
− h(n)= estimated cost of the cheapest path from the state at node n to
a goal state
• Heuristic functions are the most common form in which additional
knowledge of the problem is imparted to the search algorithm.
• A heuristic function estimates the goodness of a node n, based on
domain-specific information that is computable from the current state
description.
Heuristic Function
• Heuristic functions for route finding and the 8-puzzle:
− Route finding: h(n): straight-line distance from n to the goal state
− 8-puzzle:
1 h1(n): number of mismatched tiles, and
2 h2(n): Manhattan or city-block distance (sum of the distances of each
tile from its goal position)
− h1(S) = 6
− h2(S) = 4 + 0 + 3 + 3 + 1 + 0 + 2 + 1 = 14
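Both heuristics are easy to compute. The sketch below encodes boards as 9-tuples (0 for the blank, index = row*3 + column); the start state used here is a small illustrative example, not the state S whose values are quoted above.

```python
# The two 8-puzzle heuristics: misplaced-tile count and Manhattan distance.

def h1(state, goal):
    """Number of mismatched (misplaced) tiles, ignoring the blank."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Manhattan (city-block) distance summed over all eight tiles."""
    total = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal = (1, 2, 3, 8, 0, 4, 7, 6, 5)     # 1 2 3 / 8 _ 4 / 7 6 5
start = (1, 2, 3, 0, 8, 4, 7, 6, 5)    # tile 8 one step from its place
print(h1(start, goal), h2(start, goal))  # 1 1
```

Both functions are admissible here: h1 never exceeds h2, and h2 never exceeds the true number of moves, since every move shifts one tile by one cell.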
Best First Search
• It is a generic name to the class of informed methods
• The two best first approaches to find the shortest path:
− Greedy search: minimizes estimated cost to reach a goal
− A∗-search: minimizes the total path cost
• When expanding a node n in the search tree, greedy search uses the
estimated cost to get from the current state to the goal state, defined as
h(n).
− In the route finding problem this is the straight-line distance to the goal
• We also have the cost to reach that node from the start state, defined as g(n).
− In the route finding problem this is the sum of the step costs along the
search path.
• For each node in the search tree, an evaluation function f (n) can be
defined as the sum of these functions:
f(n) = g(n) + h(n)
Admissibility
• Search algorithms (such as greedy and A∗ search) are admissible when
the heuristic function never overestimates the actual cost, so that
− the algorithm always terminates with an optimal path from the initial
state to a goal node if one exists.
• Check whether the estimated cost h(n) is overestimated
− h∗(n): actual cost of the shortest path from n to a goal node
− h(n) is said to be an admissible heuristic function if, for all n,
h(n) ≤ h∗(n)
− The closer the estimated cost is to the actual cost, the fewer extra nodes
will be expanded
− Using an admissible heuristic guarantees that the solution found by the
search algorithm is optimal
Greedy Search
[Figure: greedy search trace on the exercise graph, expanding the node with the smallest h at each step; heuristic table: h(C) = 4, h(D) = 6, h(E) = 5, h(F) = 6, h(G1) = h(G2) = h(G3) = 0]
A*-search Algorithm
• The problem with greedy search is that it does not take the cost incurred
so far into account.
− We need a search algorithm (like A∗ search) that takes this cost into
consideration, so that it avoids expanding paths that are already expensive
• It considers both estimated cost of getting from n to the goal node, h(n),
and cost of getting from initial node to node n, g(n)
• Apply three functions to every node
− g(n): Cost of path found so far from initial state to n
− h(n): Estimate cost of shortest path from n to goal state
− f(n): estimated cost of the cheapest solution through n
− Evaluation function f(n) = h(n) + g(n)
A*-search Algorithm
Implementation
• Expand the node for which the evaluation function f(n) is lowest
• Rank nodes by f(n), the estimated cost of the path that goes from the
start node to the goal node via the given node
• The algorithm is identical to UNIFORM-COST-SEARCH
except that A∗ uses h(n) + g(n) instead of g(n) alone
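Following the note above, A* in Python is UCS with the priority changed from g(n) to g(n) + h(n). This is a sketch; the weighted graph and the (admissible) heuristic table are hypothetical examples.

```python
import heapq

# A* sketch: `successors` yields (next_state, step_cost) pairs and `h`
# is the heuristic function. Priority is f = g + h instead of g alone.

def astar(start, successors, is_goal, h):
    frontier = [(h(start), 0, [start])]        # entries are (f, g, path)
    explored = set()
    while frontier:
        f, g, path = heapq.heappop(frontier)   # lowest f first
        state = path[-1]
        if is_goal(state):
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                g2 = g + cost
                heapq.heappush(frontier, (g2 + h(nxt), g2, path + [nxt]))
    return None

# Hypothetical graph with an admissible h (never overestimates).
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)]}
h = {'S': 4, 'A': 4, 'B': 1, 'G': 0}
print(astar('S', lambda s: graph.get(s, []), lambda s: s == 'G', h.get))
# (5, ['S', 'B', 'G'])
```

With h ≡ 0 this degenerates to uniform-cost search, which makes the "identical except for the priority" relationship concrete.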
A*-search Algorithm
Properties
A* search is complete, optimal, and optimally efficient for any given
admissible heuristic function
• Complete? Yes-It guarantees to reach to goal state
• Time? and Space?
− It keeps all generated nodes in memory
− Both time and space complexity are exponential
− Time and space complexity of heuristic algorithms depends on the
quality of the heuristic function.
• Worst case is exponential
• Optimal? If the given heuristic function is admissible, then A* search
is optimal.
Romania with step costs in km
[Figure: road map of Romania with step costs in km]
Straight-line distances to Bucharest:
Arad 366, Bucharest 0, Craiova 160, Dobreta 242, Eforie 161, Fagaras 178,
Giurgiu 77, Hirsova 151, Iasi 226, Lugoj 244, Mehadia 241, Neamt 234,
Oradea 380, Pitesti 98, Rimnicu Vilcea 193, Sibiu 253, Timisoara 329,
Urziceni 80, Vaslui 199, Zerind 374
A∗ search
A∗ search example
[Figure: A∗ search tree for the Romania route-finding problem, starting from Arad]
Exercise
[Figure: 8-puzzle initial and final states — Initial: 2 8 3 / 1 6 4 / 7 _ 5; Final: 1 2 3 / 8 _ 4 / 7 6 5]
[Figure: weighted search graph with goal nodes G1, G2, G3 and heuristic table: h(C) = 4, h(D) = 6, h(E) = 5, h(F) = 6, h(G1) = h(G2) = h(G3) = 0]
Constraint Satisfaction Problem (CSP)
Standard search formulation
States are defined by the values assigned so far
♦ Initial state: the empty assignment, { }
♦ Successor function: assign a value to an unassigned variable that does not conflict
with current assignment.
⇒ fail if there are no legal assignments (not fixable!)
♦ Goal test: the current assignment is complete
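The formulation above leads directly to backtracking search: assign one variable at a time, skip conflicting values, and undo on failure. A minimal Python sketch on a hypothetical three-region map-coloring instance:

```python
# Backtracking search over the CSP formulation above: start from the
# empty assignment, assign one variable at a time, and backtrack when a
# variable has no non-conflicting value left.

def backtrack(assignment, variables, domains, conflicts):
    if len(assignment) == len(variables):      # goal test: assignment complete
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if not conflicts(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, conflicts)
            if result is not None:
                return result
            del assignment[var]                # undo and try the next value
    return None                                # no legal assignment

# Hypothetical instance: three regions, neighbours must differ in colour.
neighbours = {'X': ['Y'], 'Y': ['X', 'Z'], 'Z': ['Y']}
domains = {v: ['red', 'green'] for v in neighbours}

def conflicts(var, value, assignment):
    return any(assignment.get(n) == value for n in neighbours[var])

print(backtrack({}, list(neighbours), domains, conflicts))
# {'X': 'red', 'Y': 'green', 'Z': 'red'}
```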
Backtracking search
Summary
Games As Search Problems