AI-UNIT-I-PPT


INTRODUCTION TO AI

• AI is increasingly used across industries. McKinsey estimates that AI could create about 600 billion dollars of value in retail, deliver roughly 50% more incremental value than other analytics in banking, and 89% more in transport and logistics.
• AI can automate repetitive tasks, allowing sales representatives to focus on high-value work. Implementing AI at scale leads to cost reduction and increased revenue, and allows companies to handle complex data while focusing on high-value tasks.
What Is AI?
• Artificial Intelligence (AI) is man-made thinking power that allows machines to function like humans, enabling them to learn, reason, and solve problems.
• It is used in fields such as self-driving cars, chess, music, and painting. AI systems can be programmed to act on their own intelligence, without every behaviour having to be pre-programmed.
INTELLIGENT AGENTS

• An AI system consists of an agent and its environment, which can include other agents.
• Agents perceive their environment through sensors and act upon it through effectors. Human agents use sensory organs as sensors, robotic agents use motors and actuators as effectors, and software agents use encoded bit strings as their percepts and actions.
Cont..
Agent Terminology:
 Performance Measure of Agent − the criteria that determine how successful an agent is.
 Behavior of Agent − the action that the agent performs after any given sequence of percepts.
 Percept − the agent’s perceptual input at a given instant.
 Percept Sequence − the history of everything the agent has perceived to date.
 Agent Function − a map from the percept sequence to an action.
The Structure of Intelligent Agents
Agent’s structure can be viewed as −
Agent = Architecture + Agent Program
Architecture = The machinery that an agent executes on.
Agent Program = An implementation of an agent
function.
1. Simple Reflex Agents
They choose actions only based on the current percept.
They are rational only if a correct decision can be made on the basis of the current percept alone.
Their environment is completely observable.
Condition-Action Rule − It is a rule that maps a state (condition) to an action. A minimal sketch of such an agent is given below.
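The condition-action idea can be written directly as a lookup from the current percept to an action. The following is a minimal illustrative sketch, not taken from the slides; the vacuum-world percepts and rules are assumptions used only for the example.

# Simple reflex agent sketch (illustrative vacuum-world example).
# Percepts, rules, and actions here are assumptions made for illustration.

def simple_reflex_agent(percept):
    """Map the current percept to an action via condition-action rules."""
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                   # rule: dirty square -> clean it
    if location == "A":
        return "MoveRight"              # rule: clean and at A -> go right
    return "MoveLeft"                   # rule: clean and at B -> go left

# The agent reacts only to the current percept and keeps no history.
print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_agent(("B", "Clean")))   # -> MoveLeft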
2. Model Based Reflex Agents
• They use a model of the world to choose their actions and maintain an internal state.
• Model − knowledge about “how things happen in the world”.
• Internal State − a representation of unobserved aspects of the current state, based on the percept history.
• Updating the state requires information about −
 How the world evolves.
 How the agent’s actions affect the world.

3. Goal Based Agents
• They choose their actions in order to achieve goals. The goal-based approach is more flexible than a reflex agent, since the knowledge supporting a decision is explicitly modeled and can therefore be modified.
• Goal − a description of a desirable situation.
4. Utility Based Agents
• They choose actions based on a preference (utility) for each state.
• Goals alone are inadequate when −
 There are conflicting goals, of which only a few can be achieved.
 Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of a goal.
PROBLEM-SOLVING AGENTS
• Problem-solving is a goal-driven process that involves defining
problems and their solutions. It is a part of artificial intelligence
and involves techniques like algorithms and heuristics.
• The process begins with goal formulation, organizing steps and
actions to achieve a goal. The most important step is problem
formulation, which includes initial state, actions, transition
model, goal test, and path cost.
• The optimal solution has the lowest path cost among all
solutions, ensuring the agent's performance measure is met.
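The components of a problem formulation named above (initial state, actions, transition model, goal test, path cost) map directly onto a small program skeleton. The sketch below is illustrative only; the class and method names are assumptions, not something given in the slides.

# Generic search-problem skeleton covering the five formulation components.
# Class and method names (SearchProblem, actions, result, ...) are illustrative assumptions.

class SearchProblem:
    def __init__(self, initial_state):
        self.initial_state = initial_state          # initial state

    def actions(self, state):
        raise NotImplementedError                   # actions applicable in a state

    def result(self, state, action):
        raise NotImplementedError                   # transition model

    def goal_test(self, state):
        raise NotImplementedError                   # goal test

    def step_cost(self, state, action, next_state):
        return 1                                    # path cost: one unit per step by default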
EXAMPLE PROBLEMS

Basically, there are two types of problem approaches:


• Toy Problem: It is a concise and exact description of the problem which is used by the researchers
to compare the performance of algorithms.
• Real-world Problem: These are real-world problems which require solutions. Unlike a toy problem, they do not depend on an exact description, but we can have a general formulation of the problem.

Some Toy Problems


8 Puzzle Problem: Here, we have a 3×3 grid with movable tiles numbered from 1 to 8 and a blank space. A tile adjacent to the blank space can slide into that space. The objective is to reach a specified goal state, as shown in the figure.
• In the figure, our task is to convert the current state into the goal state by sliding tiles into the blank space.
The problem formulation is as follows:
• States: A state describes the location of each numbered tile and the blank tile.
• Initial State: We can start from any state as the initial state.
• Actions: The actions of the blank space are defined, i.e., move Left, Right, Up or Down.
• Transition Model: It returns the resulting state for a given state and action.
• Goal test: It identifies whether we have reached the goal state.
• Path cost: The path cost is the number of steps in the path, where the cost of each step is 1. (A minimal sketch of this formulation is given below.)
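As a concrete illustration of the Actions and Transition Model above, the sketch below generates the legal moves of the blank tile and applies one of them. Storing a state as a tuple of 9 values with 0 for the blank is an assumption made for the example.

# 8-puzzle: legal actions of the blank tile and the transition model.
# The state representation (a tuple of 9 numbers, 0 = blank) is an illustrative choice.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    """Return the legal moves of the blank tile in the given state."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    legal = []
    if row > 0: legal.append("Up")
    if row < 2: legal.append("Down")
    if col > 0: legal.append("Left")
    if col < 2: legal.append("Right")
    return legal

def result(state, action):
    """Transition model: slide the blank tile and return the resulting state."""
    blank = state.index(0)
    target = blank + MOVES[action]
    tiles = list(state)
    tiles[blank], tiles[target] = tiles[target], tiles[blank]
    return tuple(tiles)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)
print(actions(start))            # ['Up', 'Down', 'Left', 'Right']
print(result(start, "Up"))       # (1, 0, 3, 4, 2, 5, 6, 7, 8)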
• 8-queens problem: The aim of this problem is to place eight queens on a chessboard so that no queen can attack another. A queen can attack other queens in the same row, column, or diagonal.
Cont..
For this problem, there are two main kinds of formulation:
• Incremental formulation: It starts from an empty board, and the operator adds a queen at each step.
The following steps are involved in this formulation:
• States: Any arrangement of 0 to 8 queens on the chessboard.
• Initial State: An empty chessboard.
• Actions: Add a queen to any empty square.
• Transition model: Returns the chessboard with the queen added in the chosen square.
• Goal test: Checks whether 8 queens are placed on the chessboard without any attack (a small sketch of this check is given below).
• Path cost: There is no need for a path cost, because only final states are counted.
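A hedged sketch of the goal test above: it checks that no two placed queens share a row, column, or diagonal. Representing the board as a list of (row, column) queen positions is an assumption made for the example.

# 8-queens goal test: no two queens share a row, column, or diagonal.
# The board representation (a list of (row, col) positions) is illustrative.

def attacks(q1, q2):
    r1, c1 = q1
    r2, c2 = q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(queens):
    """True if exactly 8 queens are placed and none attacks another."""
    if len(queens) != 8:
        return False
    return not any(attacks(q1, q2)
                   for i, q1 in enumerate(queens)
                   for q2 in queens[i + 1:])

# One valid placement, given as (row, col) pairs.
solution = [(0, 0), (1, 4), (2, 7), (3, 5), (4, 2), (5, 6), (6, 1), (7, 3)]
print(goal_test(solution))   # True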
SEARCHING FOR SOLUTIONS
Artificial Intelligence is the study of building agents that act rationally. Most of the time, these
agents perform some kind of search algorithm in the background in order to achieve their tasks.
A search problem consists of:
1. A State Space. Set of all possible states where you can be.
2. A Start State. The state from where the search begins.
3. A Goal Test. A function that looks at the current state and returns whether or not it is the goal state.
4. The Solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.
5. This plan is achieved through search algorithms.
Types of search algorithms:
• There are far too many powerful search algorithms to cover them all. Instead, this unit discusses six of the fundamental search algorithms, divided into two categories: uninformed and informed search.
UNINFORMED SEARCH ALGORITHMS
The search algorithms in this section have no additional information on the goal node other than the one
provided in the problem definition. The plans to reach the goal state from the start state differ only by the
order and/or length of actions. Uninformed search is also called Blind search.
The following uninformed search algorithms are discussed in this section.
1. Depth First Search
2. Breadth First Search
3. Uniform Cost Search
Each of these algorithms will have:
• A problem graph, containing the start node S and the goal node G.
• A strategy, describing the manner in which the graph will be traversed to get to G.
• A fringe, which is a data structure used to store all the possible states (nodes) that you can go to from the current states.
• A tree, that results while traversing to the goal node.
• A solution plan, which is the sequence of nodes from S to G.
DEPTH FIRST SEARCH

• Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data
structures. The algorithm starts at the root node (selecting some arbitrary node as the root
node in the case of a graph) and explores as far as possible along each branch before
backtracking.

Example:

• Question. Which solution would DFS find to move from node S to node G if run on the
graph below?
Solution. The equivalent search tree for the above graph is as follows. As DFS traverses
the tree “deepest node first”, it would always pick the deeper branch until it reaches the
solution (or it runs out of nodes, and goes to the next branch). The traversal is shown in
blue arrows.
Path: S -> A -> B -> C -> G
Let d = the depth of the search tree = the number of levels of the search tree, and n^i = the number of nodes in level i.
Time complexity: Equivalent to the number of nodes traversed in DFS, T(n) = 1 + n + n^2 + ... + n^d = O(n^d).
Space complexity: Equivalent to how large the fringe can get, S(n) = O(n × d).
Completeness: DFS is complete if the search tree is finite, meaning for a
given finite search tree, DFS will come up with a solution if it exists.
Optimality: DFS is not optimal, meaning the number of steps in reaching
the solution, or the cost spent in reaching it is high.
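A minimal iterative depth-first search sketch over a graph stored as an adjacency list. The example graph is an assumption standing in for the slide's figure, but it is chosen so that the search reproduces the path S -> A -> B -> C -> G quoted above.

# Iterative depth-first search (uninformed): expand the deepest node first.
# The example graph is an illustrative assumption, not the exact graph from the slides.

def dfs(graph, start, goal):
    """Return a path from start to goal found depth-first, or None."""
    fringe = [(start, [start])]            # stack (LIFO) of (node, path-so-far)
    visited = set()
    while fringe:
        node, path = fringe.pop()
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in reversed(graph.get(node, [])):
            fringe.append((neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C", "E"],
         "C": ["G"], "D": ["E", "G"], "E": ["G"]}
print(dfs(graph, "S", "G"))    # ['S', 'A', 'B', 'C', 'G']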
BREADTH FIRST SEARCH

• Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a ‘search key’), and explores all of the neighbour nodes at the present depth prior to moving on to the nodes at the next depth level.

• Example: Question. Which solution would BFS find to move from node S to node
G if run on the graph below?
• Solution. The equivalent search tree for the above graph is as
follows. As BFS traverses the tree “shallowest node first”, it would
always pick the shallower branch until it reaches the solution (or it
runs out of nodes, and goes to the next branch). The traversal is
shown in blue arrows.
Path: S -> D -> G
Let d = the depth of the shallowest solution, and n^i = the number of nodes in level i.
Time complexity: Equivalent to the number of nodes traversed in BFS until the shallowest solution, T(n) = 1 + n + n^2 + ... + n^d = O(n^d).
Space complexity: Equivalent to how large the fringe can get, S(n) = O(n^d).
Completeness: BFS is complete, meaning for a given search tree, BFS will come up with a solution if it exists.
Optimality: BFS is optimal as long as the costs of all edges are equal.
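A matching breadth-first sketch using a FIFO queue; the same assumed example graph as in the DFS sketch is reused, and it reproduces the shallowest path S -> D -> G quoted above.

from collections import deque

# Breadth-first search (uninformed): expand the shallowest node first.
# The example graph is an illustrative assumption shared with the DFS sketch.

def bfs(graph, start, goal):
    """Return the shallowest path from start to goal, or None."""
    fringe = deque([(start, [start])])     # FIFO queue of (node, path-so-far)
    visited = {start}
    while fringe:
        node, path = fringe.popleft()
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                fringe.append((neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C", "E"],
         "C": ["G"], "D": ["E", "G"], "E": ["G"]}
print(bfs(graph, "S", "G"))    # ['S', 'D', 'G']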
INFORMED SEARCH ALGORITHMS
The following search algorithms are discussed in this section:
1. Greedy Search
2. A* Tree Search
3. A* Graph Search
Search Heuristics: A heuristic is a function that estimates how close a state is to
the goal state.
Ex: Manhattan distance, Euclidean distance, etc. (The smaller the distance, the closer the goal.)

Greedy Search: In greedy search, we expand the node closest to the goal node. The “closeness” is
estimated by a heuristic h(x).
Heuristic: A heuristic h is defined as
h(x) = Estimate of distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.
Ex: Q). Find the path from S to G using greedy search. The heuristic value h of each node is shown below the name of the node.
Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it has the
lower heuristic cost. Now from D, we can move to B(h=4) or E(h=3). We choose E with a
lower heuristic cost. Finally, from E, we go to G(h=0). This entire traversal is shown in the
search tree below, in blue.

Path: S -> D -> E -> G


Advantage: Works well with informed search problems, with fewer steps to reach a goal.

Disadvantage: Can turn into unguided DFS in the worst case.
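A hedged sketch of greedy best-first search: the fringe is a priority queue ordered by h(x) alone. The heuristic values are those from the worked example above (h(S)=7, h(A)=9, h(D)=5, h(B)=4, h(E)=3, h(C)=2, h(G)=0); the edge structure is an assumption inferred from the A* table that follows.

import heapq

# Greedy best-first search: always expand the fringe node with the smallest h(x).
# Graph edges are inferred from the worked example; treat them as assumptions.

h = {"S": 7, "A": 9, "B": 4, "C": 2, "D": 5, "E": 3, "G": 0}
graph = {"S": {"A": 3, "D": 2}, "D": {"B": 1, "E": 4},
         "B": {"C": 2, "E": 1}, "C": {"G": 4}, "E": {"G": 3}}

def greedy_search(graph, h, start, goal):
    fringe = [(h[start], start, [start])]      # priority queue ordered by h only
    visited = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, {}):
            heapq.heappush(fringe, (h[neighbour], neighbour, path + [neighbour]))
    return None

print(greedy_search(graph, h, "S", "G"))   # ['S', 'D', 'E', 'G']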


A* Tree Search: A* combines the cost so far g(x) (the backward cost) with the heuristic h(x) (the forward cost) and always expands the fringe node with the lowest f(x) = g(x) + h(x).
Ex: Q). Find the path to reach from S to G using A* search.
Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in the fringe at each step, choosing the node with the lowest sum. Where two paths have an equal summed cost f(x), both are expanded in the next step, and the path with the lower cost on further expansion is the chosen path.

Path                    h(x)    g(x)      f(x)
S                        7       0         7
S -> A                   9       3         12
S -> D                   5       2         7
S -> D -> B              4       2+1=3     7
S -> D -> E              3       2+4=6     9
S -> D -> B -> C         2       3+2=5     7
S -> D -> B -> E         3       3+1=4     7
S -> D -> B -> C -> G    0       5+4=9     9
S -> D -> B -> E -> G    0       4+3=7     7

Path: S -> D -> B -> E -> G
Cost: 7
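The table above can be reproduced with a short A* tree-search sketch in which the fringe is ordered by f(x) = g(x) + h(x). Edge costs are read off the g(x) column; as before, this is an illustrative sketch rather than code from the slides.

import heapq

# A* tree search: expand the fringe path with the lowest f(x) = g(x) + h(x).
# Edge costs and h-values are taken from the worked example above.

h = {"S": 7, "A": 9, "B": 4, "C": 2, "D": 5, "E": 3, "G": 0}
graph = {"S": {"A": 3, "D": 2}, "D": {"B": 1, "E": 4},
         "B": {"C": 2, "E": 1}, "C": {"G": 4}, "E": {"G": 3}}

def astar_tree_search(graph, h, start, goal):
    fringe = [(h[start], 0, start, [start])]        # (f, g, node, path)
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        for neighbour, cost in graph.get(node, {}).items():
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None, float("inf")

print(astar_tree_search(graph, h, "S", "G"))   # (['S', 'D', 'B', 'E', 'G'], 7)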
A* Graph Search
• A* tree search works well, except that it takes time re-exploring branches it has already explored. If the same node is expanded twice in different branches of the search tree, A* search might explore both of those branches, thus wasting time.
• A* Graph Search, or simply Graph Search, removes this limitation by
adding this rule: do not expand the same node more than once.
• Heuristic. Graph search is optimal only when the forward cost between two
successive nodes A and B, given by h(A) – h(B), is less than or equal to the
backward cost between those two nodes g(A --> B). This property of the
graph search heuristic is called consistency.

Consistency: h(A)-h(B) ≤ g(A-->B)


Ex: Q). Use graph search to find a path from S to G in the following graph.
Solution. We keep track of the nodes already explored so that we do not re-explore them.


Path: S -> D -> B -> E -> G
Cost: 7
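A graph-search variant of the previous sketch only needs a closed set so that each node is expanded at most once. This is again an illustrative sketch, reusing the same assumed graph and heuristic.

import heapq

# A* graph search: same as tree search, but never expand the same node twice.
# Graph and heuristic are the assumed example data used in the sketches above.

h = {"S": 7, "A": 9, "B": 4, "C": 2, "D": 5, "E": 3, "G": 0}
graph = {"S": {"A": 3, "D": 2}, "D": {"B": 1, "E": 4},
         "B": {"C": 2, "E": 1}, "C": {"G": 4}, "E": {"G": 3}}

def astar_graph_search(graph, h, start, goal):
    fringe = [(h[start], 0, start, [start])]
    closed = set()                                  # nodes already expanded
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in closed:
            continue                                # do not expand the same node twice
        closed.add(node)
        for neighbour, cost in graph.get(node, {}).items():
            if neighbour not in closed:
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None, float("inf")

print(astar_graph_search(graph, h, "S", "G"))   # (['S', 'D', 'B', 'E', 'G'], 7)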
HILL-CLIMBING SEARCH
 Hill climbing is a local search algorithm which continuously moves in the direction of increasing value to find the peak of the mountain, i.e. the best solution to the problem. It terminates when it reaches a peak where no neighbour has a higher value.
 This technique is used for optimizing mathematical problems. An example is the Travelling Salesman Problem, in which we need to minimize the distance travelled by the salesman.
 A node of the hill climbing algorithm has two components: state and value.
 Hill climbing is mostly used when a good heuristic is available.
 In this algorithm, we do not need to maintain or handle a search tree or graph, as it keeps only a single current state.
FEATURES OF HILL CLIMBING
Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
Greedy approach: The hill-climbing search moves in the direction which optimizes the cost.
No backtracking: It does not backtrack in the search space, as it does not remember previous states.
STATE-SPACE DIAGRAM FOR HILL CLIMBING
o The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
o On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global (and local) minimum. If the function on the Y-axis is an objective function, then the goal of the search is to find the global (and local) maximum.
Different regions in the State Space Diagram:
• Local maximum: It is a state which is better than its neighbouring states; however, there exists another state which is better than it (the global maximum). This state is better because here the value of the objective function is higher than that of its neighbours.
• Global maximum: It is the best possible state in the state space diagram. This is because,
at this stage, the objective function has the highest value.
• Plateau/flat local maximum: It is a flat region of state space where neighboring states
have the same value.
• Ridge: It is a region that is higher than its neighbors but itself has a slope. It is a special
kind of local maximum.
• Current state: The region of the state space diagram where we are currently present
during the search.
• Shoulder: It is a plateau that has an uphill edge.
HILL CLIMBING PROCEDURE
Hill Climbing Algorithm:
We will assume we are trying to maximize a function, that is, to find a point in the search space that is better than all the others, where "better" means the evaluation is higher. We might also say that the solution is of better quality than all the others.
The idea behind hill climbing is as follows.
1. Pick a random point in the search space.
2. Consider all the neighbours of the current state.
3. Choose the neighbour with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighbouring states are of lower quality.
5. Return the current state as the solution state.
Algorithm:
Function HILL-CLIMBING(Problem) returns a solution state
  Inputs: Problem, a problem
  Local variables: Current, a node; Next, a node
  Current = MAKE-NODE(INITIAL-STATE[Problem])
  Loop do
    Next = a highest-valued successor of Current
    If VALUE[Next] < VALUE[Current] then return Current
    Current = Next
  End

If two neighbors have the same evaluation and they are both the best quality,
then the algorithm will choose between them at random.
Types of Hill Climbing Algorithm
1. Simple hill Climbing
2. Steepest-Ascent hill-climbing
3. Stochastic hill Climbing

1. Simple Hill Climbing:
It is the simplest way to implement a hill climbing algorithm. It evaluates one neighbour node state at a time and selects the first one that improves the current cost, setting it as the current state. It checks only one successor state, and if that successor is better than the current state, it moves there; otherwise it stays in the same state.
This algorithm has the following features:
o Less time consuming
o Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:

Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
Step 2: Loop until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else, if it is better than the current state, then assign the new state as the current state.
c. Else, if it is not better than the current state, then return to Step 2.

Step 5: Exit.
2. Steepest-Ascent hill climbing
It examines all the neighbouring nodes of the current state and selects the neighbour node which is closest to the goal state.
Algorithm for Steepest-Ascent hill climbing:
Step 1: Evaluate the initial state; if it is the goal state, then return success and stop, else make the current state the initial state.
Step 2: Loop until a solution is found or the current state does not change.
Step 3. Let SUCC be a state such that any successor of the current state will be better than it.
Step 4. For each operator that applies to the current state:
a. Apply the new operator and generate a new state.
b. Evaluate the new state.
c. If it is goal state, then return it and quit, else compare it to the SUCC.
d. If it is better than SUCC, then set new state as SUCC.
e. If the SUCC is better than the current state, then set current state to SUCC.
3. Stochastic hill climbing
 Stochastic hill climbing does not examine all of its neighbours before moving. Rather, this search algorithm selects one neighbour node at random and decides whether to move to it as the current state or to examine another state.
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighbouring states, but there is another state present which is higher than the local maximum.
Solution: The backtracking technique can be a solution to the local maximum in the state-space landscape. Create a list of the promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbouring states of the current state have the same value; because of this, the algorithm cannot find the best direction to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution to a plateau is to take big steps (or very small steps) while searching. Randomly select a state which is far away from the current state, so that it is possible for the algorithm to find a non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It is an area which is higher than its surrounding areas but itself has a slope, and it cannot be reached in a single move.
Solution: We can improve on this problem by using bidirectional search, or by moving in different directions.
SIMULATED ANNEALING SEARCH

 A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. If the algorithm instead applies a random walk, by moving to a random successor, then it may be complete but it is not efficient. Simulated annealing is an algorithm which yields both efficiency and completeness.
 In mechanical terms, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing, in which the algorithm picks a random move instead of picking the best move.
 If the random move improves the state, then the algorithm follows that path. Otherwise, the algorithm accepts the move only with a probability of less than 1; that is, it may move downhill and then choose another path.
Algorithm (the basic hill-climbing loop, for comparison):

def hill_climbing(f, x0):
    x = x0                                    # initial solution
    while True:
        neighbors = generate_neighbors(x)     # generate neighbors of x (helper assumed to be provided)
        # find the neighbor with the highest function value
        best_neighbor = max(neighbors, key=f)
        if f(best_neighbor) <= f(x):          # if the best neighbor is not better than x, stop
            return x
        x = best_neighbor                     # otherwise, continue with the best neighbor
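The code above is the plain hill-climbing loop; a hedged sketch of the simulated-annealing variant described in this section is given below. The exponential acceptance rule and the geometric cooling schedule are standard choices assumed here, not taken from the slides.

import math
import random

# Simulated annealing sketch: pick a random neighbour; always accept improvements,
# accept worse moves with probability exp(delta / T); gradually lower the temperature T.
# The cooling schedule and acceptance rule are standard assumptions, not from the slides.

def simulated_annealing(f, x0, neighbors, T0=1.0, cooling=0.95, T_min=1e-3):
    x = x0
    T = T0
    while T > T_min:
        candidate = random.choice(neighbors(x))   # a random move, not the best move
        delta = f(candidate) - f(x)               # positive if the move improves the state
        if delta > 0 or random.random() < math.exp(delta / T):
            x = candidate                         # accept the move
        T *= cooling                              # cool down gradually
    return x

# Example usage: maximize f(x) = -(x - 3)^2 over integer states with +/-1 neighbours.
f = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(simulated_annealing(f, x0=20, neighbors=neighbors, T0=5.0, cooling=0.99))  # typically a value at or near 3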
LOCAL SEARCH IN CONTINUOUS SPACES
Local search in continuous spaces is a technique used in AI and optimization to
find an optimal solution within an infinite search space.
It involves an objective function, local optimum, hill climbing, simulated
annealing, genetic algorithms, particle swarm optimization (PSO), and
Covariance Matrix Adaptation Evolution Strategy (CMA-ES). These methods
focus on exploring the neighborhood of a current solution and minimizing the
objective function.
Gradient descent is a common algorithm for minimizing an objective function, while hill climbing involves iteratively improving the current solution by moving to a neighbouring solution with a better objective function value. Genetic algorithms, Particle Swarm Optimization, and CMA-ES are population-based optimization techniques used to explore the solution space.
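As a small illustration of gradient descent in a continuous space, the sketch below minimizes a one-dimensional objective using a numerical gradient. The objective, learning rate, and stopping rule are assumptions made for the example.

# Gradient descent sketch in a continuous space (1-D, numerical gradient).
# Objective, learning rate, and stopping tolerance are illustrative assumptions.

def gradient_descent(f, x0, lr=0.1, tol=1e-6, max_iter=10000, eps=1e-6):
    x = x0
    for _ in range(max_iter):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)   # central-difference gradient
        step = lr * grad
        x -= step                                      # move against the gradient
        if abs(step) < tol:
            break
    return x

# Example: minimize f(x) = (x - 2)^2, whose minimum is at x = 2.
print(gradient_descent(lambda x: (x - 2) ** 2, x0=10.0))   # approximately 2.0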
 The Particle Swarm Optimization (PSO) algorithm is a computational
technique inspired by the collective behavior of natural organisms, such as birds
or fish, that move together to achieve a common goal. In PSO, a group of
particles navigates through a problem’s solution space to find the best possible
solution.

Parameters of problem:
Number of dimensions (d)
Lower bound (minx)
Upper bound (maxx)
Hyperparameters of the algorithm:
Number of particles (N)
Maximum number of iterations (max_iter)
Inertia (w)
Cognition of particle (C1)
Social influence of swarm (C2)
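A hedged sketch of the PSO update loop using the parameters and hyperparameters listed above (d, minx, maxx, N, max_iter, w, C1, C2). The velocity and position update rule is the standard one; the sphere objective used in the demo is an assumption.

import random

# Particle Swarm Optimization sketch using the parameters listed above.
# The standard velocity/position update is assumed; the objective is illustrative.

def pso(f, d, minx, maxx, N=20, max_iter=100, w=0.7, C1=1.5, C2=1.5):
    pos = [[random.uniform(minx, maxx) for _ in range(d)] for _ in range(N)]
    vel = [[0.0] * d for _ in range(N)]
    pbest = [p[:] for p in pos]                        # each particle's best position so far
    gbest = min(pbest, key=f)[:]                       # best position found by the whole swarm
    for _ in range(max_iter):
        for i in range(N):
            for k in range(d):
                r1, r2 = random.random(), random.random()
                vel[i][k] = (w * vel[i][k]                         # inertia
                             + C1 * r1 * (pbest[i][k] - pos[i][k]) # cognition of particle
                             + C2 * r2 * (gbest[k] - pos[i][k]))   # social influence of swarm
                pos[i][k] = min(max(pos[i][k] + vel[i][k], minx), maxx)
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Example: minimize the sphere function in 2 dimensions over [-10, 10].
sphere = lambda x: sum(v * v for v in x)
print(pso(sphere, d=2, minx=-10, maxx=10))   # a point near [0, 0]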
Covariance matrix adaptation evolution strategies (CMA-ES) are a more
advanced type of evolutionary strategy that adaptively change the mutation
distribution. In CMA-ES, the algorithm starts with an initial population of
candidate solutions, just like ES. The algorithm then uses a covariance matrix to
adaptively change the mutation distribution. This allows the algorithm to
efficiently explore the search space and find optimal solutions.
