AI-UNIT-I-PPT
• Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data
structures. The algorithm starts at the root node (selecting some arbitrary node as the root
node in the case of a graph) and explores as far as possible along each branch before
backtracking.
Example:
• Question. Which solution would DFS find to move from node S to node G if run on the
graph below?
Solution. The equivalent search tree for the above graph is as follows. As DFS traverses
the tree “deepest node first”, it would always pick the deeper branch until it reaches the
solution (or it runs out of nodes, and goes to the next branch). The traversal is shown in
blue arrows.
Path: S -> A -> B -> C -> G
Let d = the depth of the search tree = the number of levels of the search tree, and n^i = the number of nodes in level i.
Time complexity: Equivalent to the number of nodes traversed in DFS:
T(n) = 1 + n^2 + n^3 + ... + n^d = O(n^d)
Space complexity: Equivalent to how large the fringe can get:
S(n) = O(n x d)
Completeness: DFS is complete if the search tree is finite, meaning for a
given finite search tree, DFS will come up with a solution if it exists.
Optimality: DFS is not optimal; the solution it finds may take many more steps, or incur a much higher cost, than the optimal solution.
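The DFS behaviour described above can be sketched in Python. Since the original figure is not reproduced here, the adjacency list below is a hypothetical reconstruction chosen to be consistent with the stated answer path S -> A -> B -> C -> G:

```python
# Hypothetical adjacency list standing in for the missing figure.
GRAPH = {
    "S": ["A", "D"],
    "A": ["B"],
    "B": ["C"],
    "C": ["G"],
    "D": ["G"],
    "G": [],
}

def dfs(graph, start, goal):
    """Iterative DFS: the fringe is a LIFO stack of partial paths,
    so the deepest branch is always extended first."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        # Push successors in reverse so the first-listed child is explored first.
        for child in reversed(graph[node]):
            if child not in path:      # avoid cycles along the current path
                stack.append(path + [child])
    return None

print(dfs(GRAPH, "S", "G"))  # ['S', 'A', 'B', 'C', 'G']
```

Note how DFS commits to the A-branch and follows it all the way down, even though a shallower route to G exists through D.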
BREADTH FIRST SEARCH
• Example: Question. Which solution would BFS find to move from node S to node
G if run on the graph below?
• Solution. The equivalent search tree for the above graph is as
follows. As BFS traverses the tree “shallowest node first”, it would
always pick the shallower branch until it reaches the solution (or it
runs out of nodes, and goes to the next branch). The traversal is
shown in blue arrows.
Path: S -> D -> G
Time complexity: Equivalent to the number of nodes traversed in BFS until the
shallowest solution.
Completeness: BFS is complete, meaning for a given search tree, BFS will come up with a solution if one exists.
Optimality: BFS is optimal as long as the costs of all edges are equal.
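The BFS answer can be checked with the same hypothetical graph used for the DFS sketch (reconstructed from the walkthrough, since the figure is not reproduced); only the fringe discipline changes, from a stack to a FIFO queue:

```python
from collections import deque

# Same hypothetical adjacency list as in the DFS sketch.
GRAPH = {
    "S": ["A", "D"],
    "A": ["B"],
    "B": ["C"],
    "C": ["G"],
    "D": ["G"],
    "G": [],
}

def bfs(graph, start, goal):
    """BFS: the fringe is a FIFO queue of partial paths,
    so the shallowest unexpanded node is always extended first."""
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph[node]:
            if child not in path:
                fringe.append(path + [child])
    return None

print(bfs(GRAPH, "S", "G"))  # ['S', 'D', 'G']
```

On the same graph, BFS finds the shorter path S -> D -> G, illustrating why BFS is optimal when all edge costs are equal.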
INFORMED SEARCH ALGORITHMS
The following are informed search algorithms:
1. Greedy Search
2. A* Tree Search
3. A* Graph Search
Search Heuristics: A heuristic is a function that estimates how close a state is to
the goal state.
Ex: Manhattan distance, Euclidean distance, etc. (Lesser the distance, closer the goal.)
Greedy Search: In greedy search, we expand the node closest to the goal node. The “closeness” is
estimated by a heuristic h(x).
Heuristic: A heuristic h is defined as
h(x) = Estimate of distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.
Ex: Q). Find the path from S to G using greedy search. The heuristic values
h of each node are given below the name of the node.
Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it has the
lower heuristic cost. Now from D, we can move to B(h=4) or E(h=3). We choose E with a
lower heuristic cost. Finally, from E, we go to G(h=0). This entire traversal is shown in the
search tree below, in blue.
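The greedy traversal above can be sketched as a best-first search ordered purely by h(x). The graph and the heuristic values for A, D, B, E, G come from the walkthrough; h(S) and the adjacency list are assumptions standing in for the missing figure:

```python
import heapq

# Heuristic values from the worked example; h(S) = 10 is assumed.
H = {"S": 10, "A": 9, "D": 5, "B": 4, "E": 3, "G": 0}
# Hypothetical adjacency list consistent with the walkthrough.
GRAPH = {"S": ["A", "D"], "A": [], "D": ["B", "E"], "B": [], "E": ["G"], "G": []}

def greedy_search(graph, h, start, goal):
    """Greedy best-first search: always expand the fringe node
    with the lowest heuristic value h(x)."""
    fringe = [(h[start], [start])]           # priority queue keyed on h only
    while fringe:
        _, path = heapq.heappop(fringe)
        node = path[-1]
        if node == goal:
            return path
        for child in graph[node]:
            if child not in path:
                heapq.heappush(fringe, (h[child], path + [child]))
    return None

print(greedy_search(GRAPH, H, "S", "G"))  # ['S', 'D', 'E', 'G']
```

The pops reproduce the narrative exactly: D (h=5) beats A (h=9), E (h=3) beats B (h=4), and G (h=0) ends the search.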
A* Tree Search: A* search combines greedy search and uniform-cost search. Each fringe node x is scored by f(x) = g(x) + h(x), where g(x) is the cost of the path from S to x and h(x) is the heuristic estimate from x to the goal.
Solution. Starting from S, the algorithm computes f(x) = g(x) + h(x) for all nodes in the fringe at
each step, choosing the node with the lowest sum.
When two paths have equal summed cost f(x), we expand both in the next step; the path with the lower cost on further expansion is the one chosen.
Path     h(x)   g(x)   f(x) = g(x) + h(x)
S -> A    9      3      12
S -> D    5      2      7
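A minimal A* tree search matching this scoring can be sketched as follows. The edge costs S->A (3) and S->D (2) come from the table; the remaining edge costs, h(S), and the overall graph shape are hypothetical, added only to make the example runnable:

```python
import heapq

# S->A = 3 and S->D = 2 match the table; other costs are hypothetical.
GRAPH = {
    "S": [("A", 3), ("D", 2)],
    "A": [],
    "D": [("B", 1), ("E", 4)],
    "B": [("E", 2)],
    "E": [("G", 3)],
    "G": [],
}
H = {"S": 10, "A": 9, "D": 5, "B": 4, "E": 3, "G": 0}  # h(S) assumed

def a_star(graph, h, start, goal):
    """A* tree search: expand the fringe node with the lowest f = g + h."""
    fringe = [(h[start], 0, [start])]        # (f, g, path)
    while fringe:
        f, g, path = heapq.heappop(fringe)
        node = path[-1]
        if node == goal:
            return path, g
        for child, cost in graph[node]:
            heapq.heappush(fringe, (g + cost + h[child], g + cost, path + [child]))
    return None, float("inf")

path, cost = a_star(GRAPH, H, "S", "G")
print(path, cost)  # ['S', 'D', 'B', 'E', 'G'] 8
```

With these assumed costs, S->D (f=7) is expanded before S->A (f=12), mirroring the table above.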
State-Space Diagram for Hill Climbing:
• On the Y-axis we take the function, which can be an objective function or a cost
function, with the state space on the X-axis. If the function on the Y-axis is cost, then the goal
of the search is to find the global minimum and local minimum. If the function on the Y-axis
is an objective function, then the goal of the search is to find the global maximum and
local maximum.
Different regions in the State Space Diagram:
• Local maximum: It is a state which is better than its neighboring states; however, there
exists a state which is better than it (the global maximum). This state is better because
the value of the objective function here is higher than at its neighbors.
• Global maximum: It is the best possible state in the state space diagram. This is because,
at this stage, the objective function has the highest value.
• Plateau/flat local maximum: It is a flat region of state space where neighboring states
have the same value.
• Ridge: It is a region that is higher than its neighbors but itself has a slope. It is a special
kind of local maximum.
• Current state: The region of the state space diagram where we are currently present
during the search.
• Shoulder: It is a plateau that has an uphill edge.
HILL CLIMBING PROCEDURE
Hill Climbing Algorithm:
We will assume we are trying to maximize a function. To find a point in the search
space that is better than all the others. And by "better" we mean that the evaluation
is higher. We might also say that the solution is of better quality than all the others.
The idea behind hill climbing is as follows.
1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
Algorithm:
Function HILL-CLIMBING(Problem) returns a solution state
  Inputs: Problem, a problem
  Local variables: Current, a node; Next, a node
  Current = MAKE-NODE(INITIAL-STATE[Problem])
  Loop do
    Next = a highest-valued successor of Current
    If VALUE[Next] < VALUE[Current] then return Current
    Current = Next
  End
If two neighbors have the same evaluation and they are both the best quality,
then the algorithm will choose between them at random.
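The procedure above, including the random tie-break between equally good neighbors, can be sketched as a short maximization routine. The integer function and the +/-1 neighborhood are illustrative choices, not part of the original slides; unlike the pseudocode, this sketch also stops on equal-valued neighbors so it terminates on a plateau:

```python
import random

def hill_climbing(f, neighbors, start, rng=random.Random(0)):
    """Steepest-ascent hill climbing, maximizing f.
    Ties between equally good best neighbors are broken at random."""
    current = start
    while True:
        candidates = neighbors(current)
        best_value = max(f(n) for n in candidates)
        if best_value <= f(current):
            return current            # no neighbor improves: local maximum
        best = [n for n in candidates if f(n) == best_value]
        current = rng.choice(best)    # random choice among equal-best neighbors

# Illustrative: maximize f(x) = -(x - 3)^2 over integers, stepping by +/- 1.
f = lambda x: -(x - 3) ** 2
print(hill_climbing(f, lambda x: [x - 1, x + 1], 0))  # 3
```

Starting from x = 0, every step moves toward x = 3, where both neighbors are worse and the loop returns the current state.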
Types of Hill Climbing Algorithm
1. Simple hill Climbing
2. Steepest-Ascent hill-climbing
3. Stochastic hill Climbing
1. Simple hill Climbing
Simple hill climbing examines the neighboring nodes one by one and selects the first neighboring node which improves on the current state as the next state.
Algorithm for Simple hill Climbing:
Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
Step 2: Loop Until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
a. If it is goal state, then return success and quit.
b. Else if it is better than the current state then assign new state as a current
state.
c. Else if it is not better than the current state, then go to Step 2.
Step 5: Exit.
2. Steepest-Ascent hill climbing
Steepest-ascent hill climbing examines all the neighboring nodes of the current state and
selects the one neighbor node which is closest to the goal state.
Algorithm for Steepest-Ascent hill climbing:
Step 1: Evaluate the initial state; if it is the goal state, then return success and stop; else make the
initial state the current state.
Step 2: Loop until a solution is found or the current state does not change.
Step 3: Let SUCC be a state such that any successor of the current state will be better than it.
Step 4: For each operator that applies to the current state:
a. Apply the new operator and generate a new state.
b. Evaluate the new state.
c. If it is the goal state, then return it and quit; else compare it to SUCC.
d. If it is better than SUCC, then set the new state as SUCC.
e. If SUCC is better than the current state, then set the current state to SUCC.
3. Stochastic hill climbing
Stochastic hill climbing does not examine all of its neighbors before moving.
Rather, this search algorithm selects one neighbor node at random and decides
whether to move to it or to examine another state.
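A minimal sketch of this variant, using the same illustrative integer function as before (the acceptance rule here is the simplest one: move only if the randomly chosen neighbor is strictly better):

```python
import random

def stochastic_hill_climbing(f, neighbors, start, max_steps=1000,
                             rng=random.Random(0)):
    """Pick ONE random neighbor per step; move only if it improves f
    (maximization). Stops after a fixed step budget."""
    current = start
    for _ in range(max_steps):
        candidate = rng.choice(neighbors(current))
        if f(candidate) > f(current):   # accept only uphill moves
            current = candidate
    return current

# Illustrative: maximize f(x) = -(x - 3)^2 over integers, stepping by +/- 1.
f = lambda x: -(x - 3) ** 2
print(stochastic_hill_climbing(f, lambda x: [x - 1, x + 1], 0))  # 3
```

Because only improving moves are accepted, the search drifts to x = 3 and then stays there for the remaining steps.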
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is
better than each of its neighboring states, but there is another state present which
is higher than the local maximum.
Solution: The backtracking technique can be a solution to the local maximum in the state-space
landscape. Create a list of promising paths so that the algorithm can backtrack in the
search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighboring states of
the current state have the same value; because of this, the algorithm cannot find a best
direction to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution for the plateau is to take big steps (or very small steps) while
searching. Randomly select a state far away from the current state, so it is
possible that the algorithm will find a non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It is an area which is
higher than its surrounding areas but itself has a slope, and it cannot be reached in a
single move.
Solution: We can improve on this problem by using bidirectional search, or by
moving in different directions.
SIMULATED ANNEALING SEARCH
Particle Swarm Optimization (PSO)
Parameters of problem:
Number of dimensions (d)
Lower bound (minx)
Upper bound (maxx)
Hyperparameters of the algorithm:
Number of particles (N)
Maximum number of iterations (max_iter)
Inertia (w)
Cognition of particle (C1)
Social influence of swarm (C2)
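The parameters and hyperparameters listed above fit together as in the basic PSO update rule: each particle's velocity blends its inertia (w), its own best-known position (cognition, C1), and the swarm's best-known position (social influence, C2). A minimal sketch, minimizing the standard sphere function as an illustrative objective:

```python
import random

def pso(f, d, minx, maxx, N=20, max_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm: minimize f over the box [minx, maxx]^d.
    N = number of particles, w = inertia, c1 = cognition, c2 = social."""
    rng = random.Random(seed)
    pos = [[rng.uniform(minx, maxx) for _ in range(d)] for _ in range(N)]
    vel = [[0.0] * d for _ in range(N)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    gbest = min(pbest, key=f)[:]                 # best position seen by swarm
    for _ in range(max_iter):
        for i in range(N):
            for k in range(d):
                r1, r2 = rng.random(), rng.random()
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])  # cognition
                             + c2 * r2 * (gbest[k] - pos[i][k]))    # social
                pos[i][k] = min(maxx, max(minx, pos[i][k] + vel[i][k]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)   # illustrative objective, minimum at 0
best = pso(sphere, d=2, minx=-10, maxx=10)
print(sphere(best))   # a value near 0
```

The specific values of w, c1, and c2 here are common illustrative defaults, not prescribed by the notes.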
Covariance matrix adaptation evolution strategy (CMA-ES) is a more
advanced type of evolutionary strategy that adaptively changes the mutation
distribution. In CMA-ES, the algorithm starts with an initial population of
candidate solutions, just like ES. The algorithm then uses a covariance matrix to
adaptively change the mutation distribution, which allows it to
explore the search space efficiently and find optimal solutions.