Module 1 - Searching 2024
Problem Solving and Search
Problem Solving: Introduction
Problem
• A problem is an issue that arises in a system and for which a solution is needed.
Problem Searching
• In general, searching refers to finding the information one needs.
• Searching is the most commonly used problem-solving technique in artificial intelligence.
• A search algorithm helps us find a solution to a particular problem.
Problem Formulation
•Initial state
•Action
•Transition model
•Path cost
•Goal state
Road map of Romania
• Initial state: the state that the agent starts in. For example, the initial state for our agent in Romania might be described as In(Arad).
• Actions: a description of the possible actions available to the agent, e.g. {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
• Transition model: RESULT(s, a) returns the state that results from doing action a in state s, e.g. RESULT(In(Arad), Go(Zerind)) = In(Zerind).
• Path cost: assigns a numeric cost to each path; in the present case, the distance in km.
• Goal test: determines whether a given state is a goal state.
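The same components can be written down directly. A minimal sketch, assuming a small fragment of the Romania road map (the listed distances are the commonly quoted ones, in km; treat the fragment as illustrative):

# A fragment of the Romania road map: state -> {neighbouring city: distance in km}.
ROAD_MAP = {
    "Arad":           {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Zerind":         {"Arad": 75, "Oradea": 71},
    "Timisoara":      {"Arad": 118, "Lugoj": 111},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
    "Oradea":         {"Zerind": 71},
    "Lugoj":          {"Timisoara": 111},
}

INITIAL_STATE = "Arad"                       # In(Arad)

def actions(state):
    """Possible actions: Go(<city>) for every road leaving the state."""
    return [("Go", city) for city in ROAD_MAP[state]]

def result(state, action):
    """Transition model: RESULT(s, Go(city)) = In(city)."""
    _, city = action
    return city

def step_cost(state, action, next_state):
    """Path-cost contribution of one step: the road distance in km."""
    return ROAD_MAP[state][next_state]

def goal_test(state):
    """Goal test: are we in Bucharest?"""
    return state == "Bucharest"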
Vacuum World Problem
8-Puzzle Problem
8-Queens Problem
Search Problem
• A search problem consists of:
• A state space: the set of all possible states you can be in.
• A start state: the state from which the search begins.
• A goal test: a function that looks at the current state and returns whether or not it is a goal state.
• Rules: describe the actions available in each state.
A Water Jug Problem
Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
• The frontier is maintained as a FIFO queue.
• The goal test is applied to each node when it is generated rather than when it is selected for expansion.
Breadth First Search
• In this strategy, the root node is expanded first, then all the nodes generated by the root node are expanded next, and then their successors, and so on.
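A minimal BFS sketch, assuming the graph is an adjacency dict mapping each state to its successors (the ROAD_MAP fragment above is one example); the frontier is a FIFO queue and the goal test is applied when a node is generated:

from collections import deque

def breadth_first_search(graph, start, goal_test):
    """BFS over an adjacency dict {state: iterable of successors}.
    Returns a list of states from start to a goal, or None."""
    if goal_test(start):                       # goal test on generation
        return [start]
    frontier = deque([start])                  # FIFO queue
    parents = {start: None}                    # also serves as the explored set
    while frontier:
        state = frontier.popleft()             # shallowest node first
        for succ in graph[state]:
            if succ in parents:                # skip repeated states
                continue
            parents[succ] = state
            if goal_test(succ):                # test when generated, not when expanded
                path = [succ]
                while parents[path[-1]] is not None:
                    path.append(parents[path[-1]])
                return path[::-1]
            frontier.append(succ)
    return None                                # no solution

For example, breadth_first_search(ROAD_MAP, "Arad", lambda s: s == "Bucharest") returns the path with the fewest road segments, which is not necessarily the shortest distance.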
BFS for 8 puzzle problem
Uniform-Cost Search
• Expands the node with the lowest path cost g(n) among the available nodes.
• It differs from BFS in two ways:
• First, the goal test is applied to a node when it is selected for expansion rather than when it is generated.
• Second, a test is added in case a better path is found to a node currently on the frontier.
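A minimal uniform-cost sketch, assuming a weighted adjacency dict such as the ROAD_MAP fragment above; the frontier is a priority queue ordered by the path cost g(n), the goal test is applied on expansion, and a cheaper path to a frontier node simply supersedes the old entry:

import heapq

def uniform_cost_search(graph, start, goal_test):
    """UCS over a weighted adjacency dict {state: {successor: step cost}}.
    Returns (cost, path) for a cheapest path, or None."""
    frontier = [(0, start, [start])]           # priority queue ordered by g(n)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                           # stale entry: a cheaper path was found later
        if goal_test(state):                   # goal test on expansion, not generation
            return g, path
        for succ, cost in graph[state].items():
            new_g = g + cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g           # better path to succ found
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None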
Depth-first search
• DFS always expands the deepest node in the current frontier of the search tree.
• The search proceeds to the deepest level, where the nodes have no successors.
• As those nodes are expanded they are dropped from the frontier, so the search then backs up to the next deepest node that still has unexplored successors.
• The frontier is a LIFO stack: the most recently generated node is chosen for expansion.
Depth-First Search
• We pursue a single branch of the tree until it yields a solution or until a decision is made to terminate the path.
• It makes sense to terminate a path if it reaches a dead end or produces a previously generated state.
• Backtracking: the search returns to the most recent state from which alternative moves are available and generates a new state; this is called chronological backtracking.
Algorithm
Push the root node onto stack
While (stack not empty)
Pop a node
If it is a goal node
Push all the children of the node in the stack
Return failure
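The same algorithm as a runnable sketch, assuming the adjacency-dict graph format used earlier; repeated states along the current path are skipped so the search cannot loop:

def depth_first_search(graph, start, goal_test):
    """DFS over an adjacency dict, using an explicit LIFO stack.
    Returns a list of states from start to a goal, or None on failure."""
    stack = [[start]]                          # each entry is the path to a frontier node
    while stack:
        path = stack.pop()                     # most recently generated node first
        state = path[-1]
        if goal_test(state):
            return path                        # goal found
        for succ in graph[state]:
            if succ not in path:               # avoid repeating states along the current path
                stack.append(path + [succ])
    return None                                # failure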
Performance: BFS, Uniform-Cost, DFS
• Strategies are evaluated along the following criteria:
– Completeness: does it always find a solution if one exists?
– Optimality: does it always find a least-cost solution?
– Time complexity: number of nodes generated
– Space complexity: maximum number of nodes in memory
• Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the optimal solution
– m: maximum length of any path in the state space (may be infinite)
Properties of breadth-first search
• Complete?
Yes (if the branching factor b is finite)
• Optimal?
Yes – if cost = 1 per step
• Time?
Number of nodes in a b-ary tree of depth d: O(b^d)
(d is the depth of the optimal solution)
• Space?
O(b^d)
Properties of depth-first search
• Complete?
Fails in infinite-depth spaces and in spaces with loops
Modify to avoid repeated states along the path
→ complete in finite spaces
• Optimal?
No – returns the first solution it finds
• Time?
Could be the time to reach a solution at maximum depth m: O(b^m)
Terrible if m is much larger than d
But if there are lots of solutions, it may be much faster than BFS
• Space?
O(bm), i.e. linear in the maximum depth m
Iterative deepening search
• Use DFS as a subroutine
1. Check the root
2. Do a DFS searching for a path of length 1
3. If there is no path of length 1, do a DFS searching for a path of
length 2
4. If there is no path of length 2, do a DFS searching for a path of
length 3…
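A minimal iterative-deepening sketch, assuming the adjacency-dict graph format used earlier. The inner routine is a depth-limited DFS that distinguishes a cutoff (depth limit reached) from a genuine failure, which is exactly the distinction used by depth-limited search on a later slide:

def depth_limited_search(graph, state, goal_test, limit, path=None):
    """Recursive depth-limited DFS.
    Returns a path to a goal, "cutoff" if the limit was hit, or None (failure)."""
    path = path or [state]
    if goal_test(state):
        return path
    if limit == 0:
        return "cutoff"                        # limit reached before a goal was found
    cutoff_occurred = False
    for succ in graph[state]:
        if succ in path:                       # avoid loops along the current path
            continue
        result = depth_limited_search(graph, succ, goal_test, limit - 1, path + [succ])
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None

def iterative_deepening_search(graph, start, goal_test, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal_test, limit)
        if result != "cutoff":
            return result                      # a path, or None if there is provably no solution
    return None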
Properties of iterative deepening search
• Complete?
Yes
• Optimal?
Yes, if step cost = 1
• Time?
(d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
• Space?
O(bd) – linear in the solution depth d
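As a quick check of the time bound above, take b = 10 and d = 5 (illustrative values):

(5+1)·10^0 + 5·10^1 + 4·10^2 + 3·10^3 + 2·10^4 + 1·10^5 = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456 nodes

Breadth-first search to the same depth generates 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110 nodes, so the repeated work of iterative deepening costs only about 11% extra in this case.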
Depth-Limited Search
• DFS is performed up to a predefined depth limit.
• It can terminate with two kinds of failure:
• The standard failure value indicates there is no solution.
• The cutoff value indicates there is no solution within the depth limit.
BFS vs DFS
• BFS traverses the tree level-wise; DFS traverses the tree depth-wise.
• BFS uses a queue (FIFO); DFS uses a stack (LIFO).
• BFS never gets trapped in infinite loops; DFS can get trapped in infinite loops.
Travelling Salesman Problem – Need of Heuristics: Informed Search
• A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.
• With N cities there are (N−1)! possible tours.
• Heuristic search no longer guarantees to find the best answer, but it will find a very good answer.
Definition :
Newell, Shaw, and Simon (1958):
✔ “A process that MAY solve a given problem, but offers no guarantees of doing so is called a heuristic
for that problem.”
Feigenbaum and Feldman (1963):
✔ “A heuristic (heuristic rule, heuristic method) is a rule of thumb, strategy, trick, simplification, or
any other kind of device which drastically limits search for solutions in large problem spaces.
Heuristics do not guarantee optimal solutions; in fact, they do not guarantee any solution at all. At
the most what can be said for a useful heuristic is that it offers solutions which are good enough most
of the time.”
General-Purpose Heuristic: Nearest Neighbour
Applying it to the travelling salesman problem we get the following procedure:
1. Arbitrarily select a starting city.
2. To select the next city, look at all cities not yet visited and select the one closest to the current city. Go to it.
3. Repeat step 2 until all cities have been visited.
With this heuristic the TSP can be solved in time proportional to N^2, instead of examining all (N−1)! tours.
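A minimal sketch of the nearest-neighbour procedure, assuming a symmetric distance table dist[a][b] for every pair of cities (the names and structure here are illustrative):

def nearest_neighbour_tour(dist, start):
    """Greedy TSP tour over a symmetric distance table dist[a][b].
    Returns (tour, total length); the tour starts and ends at `start`."""
    unvisited = set(dist) - {start}
    tour, current, total = [start], start, 0
    while unvisited:
        # Step 2: pick the closest not-yet-visited city and go to it.
        nxt = min(unvisited, key=lambda c: dist[current][c])
        total += dist[current][nxt]
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    total += dist[current][start]              # return to the starting city
    tour.append(start)
    return tour, total

Each step scans the remaining cities once, so the whole tour is built in time proportional to N^2; the result is usually good but not guaranteed to be the shortest tour, which matches the definition of a heuristic above.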
Best First Search
• The search starts from the root node.
• The node to be expanded next is selected on the basis of an evaluation function f(n).
• The node with the lowest value of f(n) is selected first.
• A low value of f(n) indicates that the goal is believed to be nearest from this node.
• It is implemented using a priority queue: the highest priority is given to the node with the lowest f(n) value.
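A minimal best-first sketch, assuming the adjacency-dict graph format used earlier and an evaluation function f supplied by the caller as a function of the state; with f = h this becomes the greedy best-first search described on the following slides:

import heapq

def best_first_search(graph, start, goal_test, f):
    """Best-first search over an adjacency dict {state: iterable of successors}.
    The frontier node with the lowest evaluation f(state) is expanded first."""
    frontier = [(f(start), start, [start])]    # priority queue keyed by f(n)
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state in explored:
            continue
        if goal_test(state):
            return path
        explored.add(state)
        for succ in graph[state]:
            if succ not in explored:
                heapq.heappush(frontier, (f(succ), succ, path + [succ]))
    return None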
Informed Search
Heuristic
• It is a guided search.
• A method that might not always find the best solution but will find a good solution in reasonable time.
• Helpful for problems that would otherwise take an impractically long time.
• It may sacrifice completeness but increases efficiency.
• Heuristic function:
• A heuristic function is a way to inform the search about the direction to a goal.
• It provides an informed way to guess which neighbour of a node will lead toward a goal.
• It estimates the cost of the cheapest path from node n to a goal node.
• It guides the search process along the most profitable path among all the available paths.
Greedy Best First Search
• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).
• For route-finding problems in Romania, we use the straight-line distance heuristic hSLD.
• If the goal is Bucharest, we need to know the straight-line distances to Bucharest, e.g. hSLD(In(Arad)) = 366.
• Notice that the values of hSLD cannot be computed from the problem description itself (they must be provided).
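A self-contained sketch of that idea, assuming a small fragment of the Romania map and the commonly quoted straight-line distances to Bucharest (e.g. hSLD(Arad) = 366); treat both tables as illustrative:

import heapq

# Illustrative straight-line distances to Bucharest: h(n) = hSLD(n).
H_SLD = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}

# Roads between those cities (a fragment of the full map).
ROADS = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Sibiu", "Bucharest"],
         "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
         "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
         "Timisoara": ["Arad"], "Zerind": ["Arad"], "Bucharest": []}

def greedy_best_first(start, goal):
    """Expand the node with the lowest h(n) = straight-line distance to the goal."""
    frontier = [(H_SLD[start], [start])]       # priority queue keyed by h(n) only
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for succ in ROADS[state]:
            if succ not in visited:
                heapq.heappush(frontier, (H_SLD[succ], path + [succ]))
    return None

On this fragment greedy_best_first("Arad", "Bucharest") expands Arad, Sibiu and Fagaras and reaches Bucharest quickly, but the route found is not the cheapest one (going via Rimnicu Vilcea and Pitesti is shorter), which illustrates that greedy best-first search is not optimal.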
Example
◻ The AO* method divides any given difficult problem into a smaller group of
problems that are then resolved using the AND-OR graph concept.
◻ AND OR graphs are specialized graphs that are used in problems that can be
divided into smaller problems.
◻ The AND side of the graph represents a set of tasks that must be completed to
achieve the main goal, while the OR side of the graph represents different
methods for accomplishing the same main goal.
Example of a simple AND-OR graph.
Working of AO* algorithm:
◻ Both A* and AO* are informed searches and work on given heuristic values.
◻ A* always gives the optimal solution, but AO* does not guarantee an optimal solution.
◻ Once AO* finds a solution it does not explore all possible paths, but A* explores all paths.
◻ When compared to the A* algorithm, the AO* algorithm uses less memory.
◻ Unlike the A* algorithm, the AO* algorithm cannot go into an endless loop.
Example
Example: Step 1
• In the example, the value given below each node is its heuristic value, i.e. h(n).
• Each edge length is taken to be 1, i.e. g(n).
• Nodes are evaluated with the evaluation function f(n) = g(n) + h(n).
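As a small illustration of that evaluation step (not the full AO* algorithm), here is a hedged sketch: it assumes each child's heuristic h is known, every edge costs 1, and a node's options are listed either as single successors (OR branches) or as groups of successors that must all be solved (AND branches). The node and values below are hypothetical.

def evaluate_options(h, options, edge_cost=1):
    """Score each option of an AND-OR node with f = g + h.
    `options` is a list of successor groups: a one-element group is an OR branch,
    a multi-element group is an AND branch (all of its nodes must be solved).
    Returns (f, group) pairs, cheapest option first."""
    scored = []
    for group in options:
        # f(option) = sum over the group of (edge cost g + heuristic h of the child)
        f = sum(edge_cost + h[child] for child in group)
        scored.append((f, group))
    return sorted(scored)

# Hypothetical node A that can be solved via B alone (OR) or via C and D together (AND).
h = {"B": 5, "C": 3, "D": 4}
print(evaluate_options(h, [["B"], ["C", "D"]]))   # [(6, ['B']), (9, ['C', 'D'])]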
Example: Step 2
◻ The AO* algorithm is not optimal because it stops as soon as it finds a solution
and does not explore all the paths.
◻ However, AO* is complete, meaning it finds a solution if one exists, and it does not fall into an infinite loop.
◻ Moreover, the AND feature of this algorithm reduces the demand for memory.
◻ The strength of this algorithm comes from its divide-and-conquer strategy.
Real-Life Applications of AO* algorithm:
◻ The vehicle routing problem is determining the shortest routes for a fleet of
vehicles to visit a set of customers and return to the depot, while minimizing the
total distance traveled and the total time taken. The AO* algorithm can be used to
find the optimal routes that satisfy both objectives.
◻ Portfolio Optimization:
Min-Max Algorithm:
● The working of the minimax algorithm can be easily described using an example. Below we have taken an example of a game tree which represents a two-player game.
● In this example there are two players: one is called Maximizer and the other is called Minimizer.
● Maximizer will try to get the maximum possible score, and Minimizer will try to get the minimum possible score.
● This algorithm applies DFS, so in this game tree we have to go all the way down through the leaves to reach the terminal nodes.
● At the terminal nodes the terminal values are given, so we compare those values and back them up the tree until the initial state is reached. Following are the main steps involved in solving the two-player game tree:
Step 1: In the first step, the algorithm generates the entire
game tree and applies the utility function to get the utility values
for the terminal states. In the tree diagram, let us take A as
the initial state of the tree. Suppose the Maximizer takes the first turn,
which has a worst-case initial value of -infinity, and the Minimizer takes
the next turn, which has a worst-case initial value of +infinity.
Min-Max Algorithm:
Step 2: Now we first find the utility values for the Maximizer. Its initial value is -∞, so we compare each terminal value with this initial value and determine the higher node values; the Maximizer takes the maximum among them all.
● For node D: max(-1, -∞) => max(-1, 4) = 4
● For node E: max(2, -∞) => max(2, 6) = 6
● For node F: max(-3, -∞) => max(-3, -5) = -3
● For node G: max(0, -∞) => max(0, 7) = 7
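A runnable minimax sketch, assuming the game tree is given explicitly as nested lists (a number is a terminal utility value, a list is a position whose entries are its children); the leaf values reproduce the worked example above:

def minimax(node, maximizing):
    """Return the backed-up utility of a game-tree node.
    A node is either a terminal utility (a number) or a list of child nodes."""
    if not isinstance(node, list):              # terminal node: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Game tree from the example: A is the root and the Maximizer moves first.
D, E, F, G = [-1, 4], [2, 6], [-3, -5], [0, 7]  # Maximizer nodes over the terminal values
B, C = [D, E], [F, G]                           # Minimizer nodes
A = [B, C]                                      # initial state
print(minimax(A, maximizing=True))              # backed-up value of the game: 4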
Min-Max Algorithm: