AI Module 1 & 2 Notes (7th Semester, Updated)
Module 1 & 2
Artificial intelligence or AI refers to the simulation of human intelligence in machines that are
programmed to think and act like humans... The term was coined by John McCarthy in
1956. AI is the ability to acquire, understand and apply knowledge to achieve goals in
the world.
AI is unique, sharing borders with Mathematics, Computer Science,
Philosophy, Psychology, Biology, Cognitive Science and many
others.
Although there is no single, agreed definition of AI or even of intelligence, it can be described as an
attempt to build machines that, like humans, can think and act, and that are able to learn and use
knowledge to solve problems on their own.
1.1.2 Applications of AI:
AI algorithms have attracted close attention from researchers and have been applied successfully
to solve problems in engineering. Nevertheless, for large and complex problems, AI algorithms can
consume considerable computation time owing to the stochastic nature of their search approaches.
Application areas include, among others:
• Information retrieval
• Space shuttle scheduling
1.1.3 Building AI Systems:
1) Perception
Intelligent biological systems are physically embodied in the world and experience the
world through their sensors (senses). For an autonomous vehicle, input might be images
from a camera and range information from a rangefinder. For a medical diagnosis
system, perception is the set of symptoms and test results that have been obtained and
input to this system manually.
2) Reasoning
Inference, decision-making, and classification from what is sensed and from the internal "model" of
the world. This might be a neural network, a logical deduction system, Hidden Markov Model
induction, heuristic search of a problem space, Bayes network inference, genetic algorithms, etc.
Includes the areas of knowledge representation, problem solving, decision theory, planning,
game theory, machine learning, uncertainty reasoning, etc.
3) Action
Biological systems interact within their environment by actuation, speech, etc. All behavior is
centered around actions in the world. Examples include controlling the steering of a Mars rover or
autonomous vehicle, or suggesting tests and making diagnoses for a medical diagnosis system.
Includes areas of robot actuation, natural language generation, and speech synthesis.
1.1.4 The definitions of AI:
a) "The
"Theexciting new effort
automation to make
of] activities that we b) "The
"Thestudy of mental
study of the faculties
computations
c) "The art of creating machines that d) "A field of study that seeks to explain
99 1 )
The definitions on the top, (a) and (b), are concerned with reasoning, whereas those on the
bottom, (c) and (d), address behaviour. The definitions on the left, (a) and (c), measure success
in terms of human performance, while those on the right, (b) and (d), measure success against the ideal
concept of intelligence called rationality.
1.1.5 Intelligent Systems:
In order to design intelligent systems, it is important to categorize them into four
categories (Luger and Stubblefield, 1993; Russell and Norvig, 2003):
1. Systems that think like humans
2. Systems that think rationally
3. Systems that behave like humans
4. Systems that behave rationally
These four categories are often arranged in a 2×2 grid, with one dimension contrasting human-like with rational performance and the other contrasting thinking with acting.
1. Systems that think like humans:
b. Focus is not just on behaviour and I/O, but on the reasoning process itself.
c. Goal is not just to produce human-like behaviour, but to produce a sequence of steps of
the reasoning process similar to the steps followed by a human in solving the same task.
2. Systems that think rationally:
a. The study of mental faculties through the use of computational models; that is,
the study of the computations that make it possible to perceive, reason, and act.
b. Focus is on inference mechanisms that are provably correct and guarantee an optimal solution.
c. Goal is to formalize the reasoning process as a system of logical rules and procedures
of inference.
3. Systems that behave like humans:
a. The art of creating machines that perform functions requiring intelligence when
performed by people; that is, the study of how to make computers do things which, at
the moment, people do better.
b. Focus is on action, and not on intelligent behaviour centred around the representation of the world.
1.2.5 Psychology
• How do humans and animals think and act?
Cognitive psychology, which views the brain as an information-processing device, can be traced back
at least to the works of William James (1842–1910). Helmholtz also insisted that perception involved
a form of unconscious logical inference.
1.2.6 Computer engineering
• How can we build an efficient computer?
For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been
the artifact of choice. The modern digital electronic computer was invented independently and almost
simultaneously by scientists in three countries embattled in World War II.
1.2.7 Control theory and cybernetics
• How can artifacts operate under their own control?
Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of
systems that maximize an objective function over time.
1.2.8 Linguistics
• How does language relate to thought? In 1957, B. F. Skinner published Verbal Behavior. This was a
comprehensive, detailed account of the behaviourist approach to language learning, written by the foremost expert in the field.
Modern linguistics and AI, then, were "born" at about the same time and grew up together, intersecting in a
hybrid field called computational linguistics or natural language processing.
The problem of understanding language soon turned out to be considerably more complex than it seemed in
1957.
In recent years, approaches based on hidden Markov models (HMMs) have come to dominate the area.
1.3.9 The emergence of intelligent agents (1995–present)
Perhaps encouraged by the progress in solving the subproblems of AI, researchers have also started to look at the
"whole agent" problem again. The work of Allen Newell, John Laird, and Paul Rosenbloom on SOAR (Newell,
1990; Laird et al., 1987) is the best-known example of a complete agent architecture.
1.3.10 The availability of very large data sets (2001–present)
Some recent work in AI suggests that for many problems, it makes more sense to worry about the data and be
less picky about which algorithm to apply. This is true because of the increasing availability of very large data
sources: for example, trillions of words of English and billions of images from the Web.
Agent:
An Agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
A human agent has eyes, ears, and other organs for sensors, and hands, legs,
mouth, and other body parts for actuators.
A robotic agent might have cameras and infrared range finders for sensors
and various motors for actuators.
A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen, writing
files, and sending network packets.
Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever perceived.
Agent function:
Mathematically speaking, we say that an agent's behavior is described by the agent
function that maps any given percept sequence to an action.
Agent program
Internally, the agent function for an artificial agent will be implemented by an agent
program. It is important to keep these two ideas distinct. The agent function is an
abstract mathematical description; the agent program is a concrete implementation, running within some physical system.
Agent function (partial tabulation for the vacuum-cleaner world):
Percept Sequence    Action
[A, Clean]          Right
[A, Dirty]          Suck
Fig 1.4.6: Partial tabulation of a simple agent function for the vacuum-cleaner world.
Fig 1.4.6(i): The REFLEX-VACUUM-AGENT program is invoked for each new percept
(location, status) and returns an action each time.
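Below is a minimal Python sketch of the reflex vacuum agent just described; the location/status strings and the function name are illustrative assumptions rather than notation from the notes.

# Minimal sketch of the reflex vacuum agent (names are illustrative).
def reflex_vacuum_agent(percept):
    """Return an action for the current percept (location, status)."""
    location, status = percept
    if status == "Dirty":
        return "Suck"      # clean the current square first
    elif location == "A":
        return "Right"     # move toward the other square
    else:
        return "Left"

# The two percepts tabulated in Fig 1.4.6:
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck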
A rational agent is one that does the right thing. We say that the right action is the one that
will cause the agent to be most successful. That leaves us with the problem of deciding how
and when to evaluate the agent's success.
We use the term performance measure for the how: the criteria that determine how
successful an agent is.
Example: an agent cleaning a dirty floor.
Performance measure: amount of dirt collected.
When to measure: weekly, for better results.
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
This leads to a definition of a rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
ENVIRONMENTS:
The performance measure, the environment, and the agent's actuators and sensors come under
the heading of the task environment. We also call this PEAS (Performance, Environment, Actuators,
Sensors).
Goal-based agents:
PROBLEM DEFINITION
To build a system to solve a particular problem, we need to do four things:
(i) Define the problem precisely. This definition must include a specification of the initial situations and
also the final situations that constitute an acceptable solution to the problem.
(ii) Analyze the problem, i.e., identify the important features that have an immense impact on the appropriateness
of various techniques for solving the problem.
(iii) Isolate and represent the knowledge needed to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the particular problem.
Goal Formulation: It is the first and simplest step in problem-solving. It organizes the
steps/sequence required to formulate one goal out of multiple goals as well as actions to achieve that goal.
Goal formulation is based on the current situation and the agent’s performance measure (discussed below).
Problem Formulation: It is the most important step of problem-solving; it decides what actions
should be taken to achieve the formulated goal. The following five components are involved in problem
formulation:
Initial State: It is the starting state or initial step of the agent towards its goal.
Actions: It is the description of the possible actions available to the agent.
Transition Model: It describes what each action does.
Goal Test: It determines if the given state is a goal state.
Path cost: It assigns a numeric cost to each path. The problem-solving agent selects a cost function,
which reflects its performance measure. Remember, an optimal solution has the lowest path cost among
all solutions.
Note: The initial state, actions, and transition model together define the state space of the problem implicitly.
The state space of a problem is the set of all states that can be reached from the initial state by any sequence of
actions. The state space forms a directed map or graph where the nodes are states, the links between nodes are
actions, and a path is a sequence of states connected by a sequence of actions.
Search: It identifies the best possible sequence of actions to reach the goal state from the current state. It
takes a problem as input and returns a solution as output.
Solution: The best sequence of actions, chosen from those found by the search algorithm, ideally an optimal one.
Execution: It executes the chosen solution to move the agent from the current state to
the goal state.
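The five components above map naturally onto a small data structure that the later search algorithms can consume. The following Python sketch is only illustrative; the class and method names are assumptions, not part of the notes.

# Minimal sketch of a search problem definition (names are illustrative).
class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def actions(self, state):
        """Actions: the actions available to the agent in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing 'action' in 'state'."""
        raise NotImplementedError

    def goal_test(self, state):
        """Goal test: is this state a goal state?"""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """One-step cost c(s, a, s'); 1 by default."""
        return 1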
1.6 Example Problems
Toy problem: A concise and exact description of a problem, used by researchers to
compare the performance of algorithms.
Real-world problem: A problem drawn from the real world that requires a solution. Unlike a toy problem, it does
not depend on a precise description, but we can give a general formulation of the problem.
8-puzzle problem: Here, we have a 3×3 matrix with movable tiles numbered 1 to 8 and one
blank space. A tile adjacent to the blank space can slide into that space. The objective is to convert
the current (start) state into a specified goal state by sliding digits into the blank space, as shown in
the figure below.
Note: The 8-puzzle problem is a type of sliding-block problem, which is used for testing new search
algorithms in artificial intelligence.
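As a small illustration, the sketch below encodes an 8-puzzle state as a tuple of nine numbers (0 for the blank) and generates the states reachable by one slide; the representation and function names are assumptions chosen for illustration only.

# Minimal sketch of the 8-puzzle transition model (representation is illustrative).
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 marks the blank space

def moves(state):
    """Yield the states reachable by sliding one adjacent tile into the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            swap = 3 * r + c
            new = list(state)
            new[blank], new[swap] = new[swap], new[blank]
            yield tuple(new)

def is_goal(state):
    return state == GOAL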
8-queens problem: The aim of this problem is to place eight queens on a chessboard so that
no queen may attack another. A queen can attack other queens in the same row, in the same column,
or along a diagonal.
From the following figure, we can understand the problem as well as a correct solution: each queen is
placed on the chessboard in a position where no other queen lies in the same row, in the same column,
or on a diagonal with it. It is therefore one valid solution to the 8-queens problem.
1. Incremental formulation: It starts from an empty state, and the operator adds a queen at each step.
2. Complete-state formulation: It starts with all 8 queens on the chessboard and moves them around to remove
the attacks.
States: Arrangements of the 8 queens, one per column, with no queen attacking another.
Actions: Move a queen to a location where it is safe from attack.
This formulation is better than the incremental formulation, as it reduces the state space from 1.8 × 10^14 to 2,057, and solutions become easy to find.
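To make the "no queen attacks another" condition concrete, the sketch below represents a partial arrangement as a list where the index is the column and the value is the row, and checks whether a new queen can be placed safely; the function name and representation are illustrative assumptions.

# Minimal sketch: is a queen at (row, col) safe from queens already placed
# in earlier columns?  state[i] = row of the queen in column i.
def is_safe(state, row, col):
    for c, r in enumerate(state[:col]):
        if r == row:                       # same row
            return False
        if abs(r - row) == abs(c - col):   # same diagonal
            return False
    return True                            # columns differ by construction

# Example: with queens at rows [0, 4] in columns 0 and 1, placing a queen at
# row 2, column 2 is unsafe (diagonal with the queen in column 0).
print(is_safe([0, 4], 2, 2))  # -> False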
Traveling salesperson problem (TSP): It is a touring problem in which the salesman may visit each city
only once. The objective is to find the shortest tour while selling the goods in each city.
VLSI layout problem: In this problem, millions of components and connections must be positioned on
a chip so as to minimize the area, circuit delays, and stray capacitances, and to maximize the
manufacturing yield.
The layout problem is split into two parts:
Cell layout: Here, the primitive components of the circuit are grouped into cells, each performing a
specific function. Each cell has a fixed shape and size. The task is to place the cells on the
chip without their overlapping each other.
Channel routing: It finds a specific route for each wire through the gaps between the cells.
ROBOT NAVIGATION: It is a generalization of the route-finding problem described earlier. Rather than following
a discrete set of routes, a robot can move in a continuous space with (in principle) an infinite set of possible actions
and states.
Automatic assembly sequencing: In assembly problems, the aim is to find an order in which to
assemble the parts of some object. If the wrong order is chosen, there will be no way to add some part later in
the sequence without undoing some of the work already done.
Protein design: The objective is to find a sequence of amino acids that will fold into a three-dimensional
protein with the property of curing some disease.
We have seen many problems. Now, there is a need to search for solutions to solve them. In this section, we
will understand how searching can be used by the agent to solve a problem.
For solving different kinds of problems, an agent makes use of different strategies to reach the goal by
searching for the best possible algorithm. This process of searching is known as the search strategy.
Search algorithms require a data structure to keep track of the search tree that is being constructed. For
each node n of the tree, we have a structure that contains four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as
indicated by the parent pointers.
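Below is a minimal Python sketch of this node structure, of how a child node is built from its parent, and of how a solution path is recovered via the parent pointers; it assumes the illustrative Problem interface sketched earlier.

# Minimal sketch of a search-tree node with the four components listed above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, g(n)

def child_node(problem, parent, action):
    """Build the child node reached by applying 'action' to the parent's state."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action, state)
    return Node(state, parent, action, cost)

def solution(node):
    """Follow parent pointers back to the root and return the list of actions."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))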
Together, the initial state, actions, and transition model implicitly define the state space of the problem.
State space: the set of all states reachable from the initial state by any sequence of actions.
The goal test determines whether the current state is a goal state. Here, the goal state is
{In: Bucharest}.
The path cost function determines the cost of each path, which reflects the performance measure.
We define the cost function as c(s, a, s′), where s is the current state and a is the action performed by
the agent to reach state s′.
Which search algorithm one should use will generally depend on the problem domain.
There are four important factors to consider:
• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
State Spaces versus Search Trees:
State Space
o Set of valid states for a problem
o Linked by operators
o e.g., 20 valid states (cities) in the Romanian travel problem
Search Tree
– Root node = initial state
– Child nodes = states that can be visited from parent
– Note that the depth of the tree can be infinite
• E.g., via repeated states
– Partial search tree
• Portion of tree that has been expanded so far
– Fringe
• Leaves of partial search tree, candidates for expansion
Search tree = data structure used to search the state space
AI&ML, VIT 2023-2024 LAVANYA.K
AI&ML
Searching
Many traditional search algorithms are used in AI applications. For complex problems, the traditional
algorithms are unable to find a solution within practical time and space limits. Consequently,
many special techniques have been developed that use heuristic functions. The algorithms that use heuristic
functions are called heuristic algorithms. Heuristic algorithms are not really intelligent; they appear to
be intelligent because they achieve better performance. Heuristic algorithms are more efficient
because they take advantage of feedback from the data to direct the search path.
Uninformed search
Also called blind, exhaustive or brute-force search, uses no information about the problem to guide
the search and therefore may not be very efficient.
Informed search:
Also called heuristic or intelligent search, it uses information about the problem to guide the search, usually
by estimating the distance to a goal state, and is therefore more efficient; however, such guidance may not
always be available.
One simple search strategy is a breadth-first search. In this strategy, the root node is
expanded first, then all the nodes generated by the root node are expanded next, and
then their successors, and so on.
In general, all the nodes at depth d in the search tree are expanded before the nodes at depth d
+ 1.
Breadth-first search is an instance of the general graph-search algorithm in which the shallowest
unexpanded node is chosen for expansion. This is achieved very simply by using a FIFO queue
for the frontier. Thus new nodes (which are always deeper than their parents) go to the back of
the queue, and old nodes, which are shallower than the new nodes, get expanded first. There is
one slight tweak on the general graph-search algorithm: the goal test is applied to
each node when it is generated rather than when it is selected for expansion.
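A minimal Python sketch of breadth-first graph search with a FIFO frontier and the early goal test described above; it reuses the illustrative Problem/Node helpers sketched earlier and is only one possible implementation.

from collections import deque

def breadth_first_search(problem):
    """BFS sketch: FIFO frontier, explored set, goal test at generation time."""
    node = Node(problem.initial_state)
    if problem.goal_test(node.state):
        return solution(node)
    frontier = deque([node])      # FIFO queue: shallowest nodes leave first
    explored = set()
    while frontier:
        node = frontier.popleft()
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            already_seen = child.state in explored or any(
                n.state == child.state for n in frontier)
            if not already_seen:
                if problem.goal_test(child.state):   # test when generated
                    return solution(child)
                frontier.append(child)
    return None                   # no solution exists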
Illustrated:
Step 1: Initially the frontier contains only one node, corresponding to the source state A.
Frontier: A
Step 2: A is removed from the frontier. The node is expanded, and its children B and C are generated.
They are placed at the back of the frontier.
Frontier: B C
Step 3: Node B is removed from the frontier and is expanded. Its children D and E are generated and
placed at the back of the frontier.
Frontier: C D E
Step 4: Node C is removed from the frontier and is expanded. Its children D and G are added to
the back of the frontier.
Frontier: D E D G
Step 5: Node D is removed from the frontier. Its children C and F are generated and added to the
back of the frontier.
Frontier: E D G C F
Step 6: Node E is removed from the frontier and expanded; it adds no new nodes.
Frontier: D G C F
Step 7: Node D is removed from the frontier. Its children B and F are added to the back of the frontier.
Frontier: G C F B F
Step 8: G is selected for expansion. It is found to be a goal node, so the algorithm returns
the path A C G by following the parent pointers of the node corresponding to G. The
algorithm terminates.
Advantages:
One of the simplest search strategies
Complete. If there is a solution, BFS is guaranteed to find it.
If there are multiple solutions, then a minimal solution will be found
The algorithm is optimal (i.e., admissible) if all operators have the same cost.
Otherwise, breadth-first search finds a solution with the shortest path length.
Time complexity: O(b^d)
Space complexity: O(b^d)
Optimality: Yes
where b is the branching factor (the maximum number of successors of any node), d is the depth of
the shallowest goal node, and m is the maximum length of any path in the search space.
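To see how quickly O(b^d) grows, the short sketch below counts the nodes generated up to depth d for a branching factor b (the numbers are purely illustrative).

# Rough illustration of the O(b^d) growth of breadth-first search.
def bfs_nodes_generated(b, d):
    """Total nodes generated up to depth d with branching factor b."""
    return sum(b ** i for i in range(1, d + 1))

print(bfs_nodes_generated(10, 2))   # 110
print(bfs_nodes_generated(10, 6))   # 1111110 -- grows exponentially with d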
Disadvantages:
Requires the generation and storage of a tree whose size is exponential in the depth of
the shallowest goal node.
The breadth-first search algorithm cannot be used effectively unless the search space
is quite small.
Applications of Breadth-First Search Algorithm
GPS navigation systems: Breadth-First Search is one of the best algorithms for finding neighbouring
locations using the GPS system.
Broadcasting: Networking makes use of packets for communication. These packets follow a traversal
method to reach the various network nodes. One of the most commonly used traversal methods is
Breadth-First Search; it is used as the algorithm for broadcasting packets to all the nodes in a network.
DFS illustrated:
Step 1: Initially the fringe contains only the node corresponding to the source state A.
FRINGE: A
Step 2: A is removed from the fringe. A is expanded and its children B and C are put at the front of
the fringe.
FRINGE: B C
Step 3: Node B is removed from the fringe, and its children D and E are pushed to the front of the fringe.
FRINGE: D E C
Step 4: Node D is removed from the fringe. C and F are pushed to the front of the fringe.
FRINGE: C F E C
Step 5: Node C is removed from the fringe. Its child G is pushed to the front of the fringe.
FRINGE: G F E C
Step 6: Node G is expanded and found to be a goal node.
Note that the time taken by the algorithm is related to the maximum depth of the search tree. If the search
tree has infinite depth, the algorithm may not terminate. This can happen if the search space is infinite. It
can also happen if the search space contains cycles. The latter case can be handled by checking for cycles
in the algorithm. Thus, Depth-First Search is not complete.
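For comparison with the BFS sketch given earlier, below is a minimal depth-first search in Python that uses a LIFO stack for the fringe and an explored set to guard against the cycles mentioned above; it again assumes the illustrative Problem/Node helpers and does not handle infinite state spaces.

def depth_first_search(problem):
    """DFS sketch: LIFO fringe, cycle check via an explored set."""
    fringe = [Node(problem.initial_state)]   # a list used as a stack
    explored = set()
    while fringe:
        node = fringe.pop()                  # take the deepest (most recent) node
        if problem.goal_test(node.state):
            return solution(node)
        if node.state in explored:
            continue                         # skip states already expanded
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.state not in explored:
                fringe.append(child)
    return None                              # no solution found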