
AI&ML

Module 1&2

Introduction, Problem Solving


1. Introduction:

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to think and act like humans. The term was coined by John McCarthy in
1956. AI is the ability to acquire, understand and apply knowledge to achieve goals in
the world.
 AI is unique, sharing borders with Mathematics, Computer Science,
Philosophy, Psychology, Biology, Cognitive Science and many others.
 Although there is no single agreed definition of AI, or even of intelligence, it can be described
as an attempt to build machines that, like humans, can think and act, and that are able to learn
and use knowledge to solve problems on their own.
1.1.2 Applications of AI:
AI algorithms have attracted the close attention of researchers and have also been applied
successfully to solve problems in engineering. Nevertheless, for large and complex problems,
AI algorithms consume considerable computation time due to the stochastic nature of their
search approaches.

1. Business: financial strategies
2. Engineering: checking designs, offering suggestions to create new products, expert systems for
all engineering problems
3. Manufacturing: assembly, inspection and maintenance
4. Medicine: monitoring, diagnosing
5. Education: in teaching
6. Fraud detection
7. Object identification

8. Information retrieval
9. Space shuttle scheduling
1.1.3 Building AI Systems:

1) Perception
Intelligent biological systems are physically embodied in the world and experience the
world through their sensors (senses). For an autonomous vehicle, input might be images
from a camera and range information from a rangefinder. For a medical diagnosis
system, perception is the set of symptoms and test results that have been obtained and
input to this system manually.

2) Reasoning
Inference, decision-making, classification from what is sensed and what the internal "model" is
of the world. Might be a neural network, logical deduction system, Hidden Markov Model
induction, heuristic searching a problem space, Bayes Network inference, genetic algorithms,
etc. Includes areas of knowledge representation, problem solving, decision theory, planning,
game theory, machine learning, uncertainty reasoning, etc.
3) Action
Biological systems interact within their environment by actuation, speech, etc. All behavior is
centered around actions in the world. Examples include controlling the steering of a Mars rover or
autonomous vehicle, or suggesting tests and making diagnoses for a medical diagnosis system.
Includes areas of robot actuation, natural language generation, and speech synthesis.
1.1.4 The definitions of AI:

a) "The
"Theexciting new effort
automation to make
of] activities that we b) "The
"Thestudy of mental
study of the faculties
computations

AI&ML, VIT 2023-2024 LAVANYA.K


AI&ML

c) "The art of creating machines that d) "A field of study that seeks to explain

perform functions that require and emulate intelligent behavior in

intelligence when performed by people" terms of computational processes"

(Kurzweil, 1990) (Schalkoff, 1 990)

"The branch of computer science that


"The study of how to make is concerned with the automation of
Computers do things at which, at
intelligent behavior"
the moment, people are better"
(Luger and Stubblefield, 1993)
(Rich and Knight, 1

99 1 )

The definitions on the top, (a) and (b) are concerned with reasoning, whereas those on the
bottom, (c) and (d) address behavior. The definitions on the left, (a) and (c) measure success
interms of human performance, and those on the right, (b) and (d) measure the ideal concept of
intelligence called rationality
1.1.5 Intelligent Systems:
In order to design intelligent systems, it is important to categorize them into four
categories (Luger and Stubblefield, 1993; Russell and Norvig, 2003):
1. Systems that think like humans
2. Systems that think rationally
3. Systems that behave like humans
4. Systems that behave rationally

Think:
 Human-Like: Cognitive Science approach ("Machines that think like humans")
 Rationally: Laws of Thought approach ("Machines that think rationally")

Act:
 Human-Like: Turing Test approach ("Machines that behave like humans")
 Rationally: Rational Agent approach ("Machines that behave rationally")


Cognitive Science: Think Human-Like

a. Requires a model of human cognition; precise enough models allow simulation by computers.

b. Focus is not just on behavior and I/O, but also on the reasoning process itself.

c. Goal is not just to produce human-like behavior but to produce a sequence of steps of
the reasoning process, similar to the steps followed by a human in solving the same task.

Laws of thought: Think Rationally

a. The study of mental faculties through the use of computational models; that is,
the study of the computations that make it possible to perceive, reason, and act.

b. Focus is on inference mechanisms that are provably correct and guarantee an optimal solution.

c. Goal is to formalize the reasoning process as a system of logical rules and procedures
of inference.

d. Develop systems of representation that allow inferences such as:

"Socrates is a man. All men are mortal. Therefore Socrates is mortal."


Turing Test: Act Human-Like

a. The art of creating machines that perform functions requiring intelligence when
performed by people; that is, the study of how to make computers do things which, at
the moment, people do better.

b. Focus is on action, not on intelligent behavior centered around a representation of the world.

c. Example: the Turing Test.

Three rooms contain a person, a computer and an interrogator.

o The interrogator can communicate with the other two only by teletype (so that
the machine cannot imitate the appearance or voice of the person).

o The interrogator tries to determine which is the person and which is the machine.

o The machine tries to fool the interrogator into believing that it is the
human, and the person also tries to convince the interrogator that he or she is
the human.

o If the machine succeeds in fooling the interrogator, then we conclude that
the machine is intelligent.

Rational agent: Act Rationally

a. Tries to explain and emulate intelligent behavior in terms of computational processes;
that is, it is concerned with the automation of intelligent behavior.

b. Focus is on systems that act sufficiently well, if not optimally, in all situations.

c. Goal is to develop systems that are rational and sufficient.

1.2 The Foundations of Artificial Intelligence


1.2.1 Philosophy

• Can formal rules be used to draw valid conclusions?


• How does the mind arise from a physical brain?
• Where does knowledge come from? • How does knowledge lead to action?
i) Dualism
ii) Rationalism
iii) Materialism
iv) Empiricism
v) Induction
1.2.2 Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
i) Algorithm
ii) Incompleteness
iii) Tractability
1.2.3 Economics

• How should we make decisions so as to maximize payoff?


• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
1.2.4 Neuroscience
• How do brains process information?
Neuroscience is the study of the nervous system, particularly the brain. Although the exact way in which the
brain enables thought is one of the great mysteries of science, the fact that it does enable thought has been
appreciated for thousands of years because of the evidence that strong blows to the head can lead to mental
incapacitation.
1.2.5 Psychology
• How do humans and animals think and act?
Cognitive psychology, which views the brain as an information-processing device, can be traced back at least
to the works of William James (1842–1910). Helmholtz also insisted that perception involved a form of
unconscious logical inference.

1.2.6 Computer engineering


• How can we build an efficient computer?

For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been
the artifact of choice. The modern digital electronic computer was invented independently and almost
simultaneously by scientists in three countries embattled in World War II.
1.2.7 Control theory and cybernetics
• How can artifacts operate under their own control?
Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of
systems that maximize an objective function over time.
1.2.8 Linguistics
• How does language relate to thought?
In 1957, B. F. Skinner published Verbal Behavior, a comprehensive, detailed account of the behaviorist
approach to language learning, written by the foremost expert in the field. Modern linguistics and AI, then,
were "born" at about the same time, and grew up together, intersecting in a hybrid field called computational
linguistics or natural language processing. The problem of understanding language soon turned out to be
considerably more complex than it seemed in 1957.

1.3 The History of Artificial Intelligence

1.3.1 The gestation of artificial intelligence (1943–1955)


The first work that is now generally recognized as AI was done by Warren McCulloch and Walter Pitts (1943).
They drew on three sources: knowledge of the basic physiology and function of neurons in the brain; a formal
analysis of propositional logic due to Russell and Whitehead; and Turing’s theory of computation. They
proposed a model of artificial neurons in which each neuron is characterized as being “on” or “off,” with a
switch to “on” occurring in response to stimulation by a sufficient number of neighboring neurons.
1.3.2 The birth of artificial intelligence (1956)
Princeton was home to another influential figure in AI, John McCarthy. After receiving his PhD there in 1951
and working for two years as an instructor, McCarthy moved to Stanford and then to Dartmouth College, which
was to become the official birthplace of the field.


1.3.3 Early enthusiasm, great expectations (1952–1969)


GPS was probably the first program to embody the "thinking humanly" approach. The success of GPS and
subsequent programs as models of cognition led Newell and Simon (1976) to formulate the famous physical
symbol system hypothesis, which states that "a physical symbol system has the necessary and sufficient means
for general intelligent action."
1.3.4 A dose of reality (1966–1973)
From the beginning, AI researchers were not shy about making predictions of their coming successes. A
statement made by Herbert Simon in 1957 is often quoted; terms such as "visible future" can be interpreted
in various ways, but Simon also made more concrete predictions: that within 10 years a computer would be
chess champion, and that a significant mathematical theorem would be proved by machine.
1.3.5 Knowledge-based systems: The key to power? (1969–1979)
The picture of problem solving that had arisen during the first decade of AI research was of a general-purpose
search mechanism trying to string together elementary reasoning steps to find complete solutions. Such
approaches have been called weak methods because, although general, they do not scale up to large or difficult
problem instances. The alternative to weak methods is to use more powerful, domain-specific knowledge that
allows larger reasoning steps and can more easily handle typically occurring cases in narrow areas of expertise.
1.3.6 AI becomes an industry (1980–present)
The first successful commercial expert system, R1, began operation at the Digital Equipment Corporation
(McDermott, 1982). The program helped configure orders for new computer systems; by 1986, it was saving the
company an estimated $40 million a year.
1.3.7 The return of neural networks (1986–present)
In the mid-1980s at least four different groups reinvented the back-propagation learning algorithm first found
in 1969 by Bryson and Ho. The algorithm was applied to many learning problems in computer science and
psychology, and the widespread dissemination of the results in the collection Parallel Distributed Processing
(Rumelhart and McClelland, 1986) caused great excitement. These so-called connectionist models of intelligent
systems were seen by some as direct competitors both to the symbolic models promoted by Newell and Simon
and to the logicist approach of McCarthy and others.
1.3.8 AI adopts the scientific method (1987–present)


In recent years, approaches based on hidden Markov models (HMMs) have come to dominate areas such as speech recognition.
1.3.9 The emergence of intelligent agents (1995–present)
Perhaps encouraged by the progress in solving the subproblems of AI, researchers have also started to look at
the "whole agent" problem again. The work of Allen Newell, John Laird, and Paul Rosenbloom on SOAR (Newell,
1990; Laird et al., 1987) is the best-known example of a complete agent architecture.
1.3.10 The availability of very large data sets (2001–present)
Some recent work in AI suggests that for many problems, it makes more sense to worry about the data and be
less picky about what algorithm to apply. This is true because of the increasing availability of very large data
sources: for example, trillions of words of English and billions of images from the Web.

1.4 Agents and Environments:

Fig 1.1: Agents and Environments

Agent:
An Agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.

 A human agent has eyes, ears, and other organs for sensors, and hands, legs,
mouth, and other body parts for actuators.
 A robotic agent might have cameras and infrared range finders for sensors,
and various motors for actuators.
 A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen, writing
files, and sending network packets.


Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.

Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever perceived.

Agent function:
Mathematically speaking, we say that an agent's behavior is described by the agent
function that maps any given percept sequence to an action.

Agent program
Internally, the agent function for an artificial agent will be implemented by an agent
program. It is important to keep these two ideas distinct. The agent function is an
abstract mathematical description; the agent program is a concrete implementation,
running on the agent architecture.
To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in
Fig 1.4.5. This particular world has just two locations: squares A and B. The vacuum agent
perceives which square it is in and whether there is dirt in the square. It can choose to move
left, move right, suck up the dirt, or do nothing. One very simple agent function is the
following: if the current square is dirty, then suck; otherwise, move to the other square. A partial
tabulation of this agent function is shown in Fig 1.4.6.

Fig 1.4.5: A vacuum-cleaner world with just two locations.

Agent function
Percept Sequence              Action
[A, Clean]                    Right
[A, Dirty]                    Suck
[B, Clean]                    Left
[B, Dirty]                    Suck
[A, Clean], [A, Clean]        Right
[A, Clean], [A, Dirty]        Suck

Fig 1.4.6: Partial tabulation of a simple agent function for the vacuum-cleaner world shown in Fig 1.4.5.

function REFLEX-VACUUM-AGENT([location, status]) returns an action

  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

Fig 1.4.6(i): The REFLEX-VACUUM-AGENT program is invoked for each new percept
(location, status) and returns an action each time.
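As an illustration, the tabulation in Fig 1.4.6 and the pseudocode above can be written as a few lines of Python. This is only a sketch added for clarity; the function name reflex_vacuum_agent and the string encodings of percepts and actions are assumptions made here, not part of the original notes.

# Minimal sketch of the reflex vacuum agent (names and encodings are illustrative).
def reflex_vacuum_agent(percept):
    """Map a single percept (location, status) to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

# A few percepts from the tabulation in Fig 1.4.6:
print(reflex_vacuum_agent(("A", "Clean")))   # Right
print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("B", "Clean")))   # Left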

 A rational agent is one that does the right thing. We say that the right action is the one that
will cause the agent to be most successful. That leaves us with the problem of deciding how
and when to evaluate the agent's success.
We use the term performance measure for the how: the criteria that determine how
successful an agent is.
 Example: an agent cleaning a dirty floor
 Performance measure: amount of dirt collected
 When to measure: weekly, for better results
 What is rational at any given time depends on four things:
 • The performance measure that defines the criterion of success.
 • The agent’s prior knowledge of the environment.
 • The actions that the agent can perform.
 • The agent’s percept sequence to date.
 This leads to a definition of a rational agent: for each possible percept sequence, a rational
agent should select an action that is expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in knowledge the agent has.

ENVIRONMENTS:
The performance measure, the environment, and the agent’s actuators and sensors come under
the heading of the task environment. We also call this PEAS (Performance, Environment, Actuators,
Sensors).

Goal-based agents:

 A goal-based agent has an agenda.
 It operates based on a goal in front of it and makes decisions based on how best to reach that goal.
 A goal-based agent operates as a search and planning function, meaning it targets the goal ahead
and finds the right action in order to reach it.

1.5 Problem Solving Agents:



 A problem-solving agent is a goal-based agent.

 Problem-solving agents decide what to do by finding a sequence of actions that leads to desirable states.
Goal Formulation:
It organizes the steps required to formulate/prepare one goal out of the multiple goals available.
Problem Formulation:
It is the process of deciding what actions and states to consider, and follows goal
formulation. The process of looking for the best sequence of actions to achieve a goal is called
search.
A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
Once the solution is found, the actions it recommends can be carried out. This is called the execution
phase.

PROBLEM DEFINITION
To build a system to solve a particular problem, we need to do four things:
(i) Define the problem precisely. This definition must include a specification of the initial situations and
also the final situations that constitute acceptable solutions to the problem.
(ii) Analyze the problem, i.e., identify the important features that have an immense impact on the
appropriateness of various techniques for solving the problem.
(iii) Isolate and represent the knowledge needed to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the particular problem.

Steps performed by Problem-solving agent:

 Goal Formulation: It is the first and simplest step in problem-solving. It organizes the
steps/sequence required to formulate one goal out of multiple goals, as well as actions to achieve that goal.
Goal formulation is based on the current situation and the agent’s performance measure (discussed below).
 Problem Formulation: It is the most important step of problem-solving, which decides what actions
should be taken to achieve the formulated goal. The following five components are involved in problem
formulation (a minimal code sketch of these components appears after this list):
 Initial State: It is the starting state or initial step of the agent towards its goal.
 Actions: It is the description of the possible actions available to the agent.
 Transition Model: It describes what each action does.
 Goal Test: It determines if the given state is a goal state.

 Path cost: It assigns a numeric cost to each path. The problem-solving agent
selects a cost function which reflects its performance measure. Remember, an optimal solution has the
lowest path cost among all the solutions.

Note: The initial state, actions, and transition model together define the state space of the problem implicitly.
The state space of a problem is the set of all states which can be reached from the initial state by any sequence
of actions. The state space forms a directed map or graph where the nodes are states, the links between nodes
are actions, and a path is a sequence of states connected by a sequence of actions.

 Search: It identifies the best possible sequence of actions to reach the goal state from the current state. It
takes a problem as an input and returns a solution as its output.
 Solution: Among the action sequences found by search, the one with the lowest path cost may be chosen
as the (optimal) solution.
 Execution: It executes the chosen solution to reach the goal state from the current state.
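The five components above can be captured in a small Python skeleton (the sketch referred to in the problem-formulation step). It is only an illustrative sketch; the class and method names below are assumptions, not from the notes, and concrete problems would fill in the methods.

# Illustrative skeleton of a search problem with the five components:
# initial state, actions, transition model, goal test, and path cost.
class Problem:
    def __init__(self, initial_state, goal_state=None):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def actions(self, state):
        """Return the actions available in the given state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from doing the action in the state."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if the given state is a goal state."""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """Cost of one step; the path cost is the sum of step costs (1 per step by default)."""
        return 1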
1.6 Example Problems

Basically, there are two types of problem approaches:

 Toy Problem: It is a concise and exact description of the problem which is used by the researchers to
Compare the performance of algorithms.
 Real-world Problem: It is real-world based problems which require solutions. Unlike a toy problem, it does
Not depend on descriptions, but we can have a general formulation of the problem.

Some Toy Problems


 8-Puzzle Problem: Here, we have a 3×3 matrix with movable tiles numbered from 1 to 8 and a
blank space. The tile adjacent to the blank space can slide into that space. The objective is to reach a
specified goal state, as shown in the figure below.
 In the figure, our task is to convert the current (start) state into the goal state by sliding digits into the
blank space.

The problem formulation is as follows:


 States: It describes the location of each numbered tile and the blank tile.
 Initial State: We can start from any state as the initial state.
 Actions: Here, the actions of the blank space are defined, i.e., move left, right, up or down.
 Transition Model: It returns the resulting state as per the given state and action.
 Goal test: It identifies whether we have reached the correct goal state.
 Path cost: The path cost is the number of steps in the path, where the cost of each step is 1.

 Note: The 8-puzzle problem is a type of sliding-block problem which is used for testing new search
algorithms in artificial intelligence.
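A minimal sketch of this 8-puzzle formulation in Python is given below. The encoding of states as 9-tuples read row by row, with 0 standing for the blank, is an assumption made here for concreteness; it is not prescribed by the notes.

# Illustrative 8-puzzle formulation: states are 9-tuples read row by row, 0 is the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def actions(state):
    """Actions of the blank: Left, Right, Up, Down, restricted at the borders."""
    moves = ["Left", "Right", "Up", "Down"]
    blank = state.index(0)
    if blank % 3 == 0: moves.remove("Left")
    if blank % 3 == 2: moves.remove("Right")
    if blank < 3:      moves.remove("Up")
    if blank > 5:      moves.remove("Down")
    return moves

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile in the chosen direction."""
    blank = state.index(0)
    delta = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    new_state = list(state)
    new_state[blank], new_state[blank + delta] = new_state[blank + delta], new_state[blank]
    return tuple(new_state)

def goal_test(state):
    return state == GOAL

# Each step has cost 1, so the path cost of a solution is simply the number of moves.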

 8-Queens Problem: The aim of this problem is to place eight queens on a chessboard in an order
where no queen may attack another. A queen can attack other queens either diagonally or in the same row
or column.
From the following figure, we can understand the problem as well as its correct solution.


It is noticed from the above figure that each queen is set on the chessboard in a position where no other
queen lies on the same diagonal, row, or column. Therefore, it is one correct solution to the 8-queens problem.

For this problem, there are two main kinds of formulation:

1. Incremental formulation: It starts from an empty state, and the operator adds a queen at each step.

The following components are involved in this formulation:

 States: Arrangement of any 0 to 8 queens on the chessboard.
 Initial State: An empty chessboard.
 Actions: Add a queen to any empty square.
 Transition model: Returns the chessboard with the queen added to a square.
 Goal test: Checks whether 8 queens are placed on the chessboard without any attack (see the attack-check
sketch after the complete-state formulation below).
 Path cost: There is no need for a path cost because only final states are counted. In this formulation,
there are approximately 1.8 × 10^14 possible sequences to investigate.

2. Complete-state formulation: It starts with all 8 queens on the chessboard and moves them around to
eliminate the attacks.

The following components are involved in this formulation:

 States: Arrangement of all 8 queens, one per column, with no queen attacking another.
 Actions: Move a queen to a location where it is safe from attacks.
This formulation is better than the incremental formulation, as it reduces the state space from 1.8 × 10^14 to
2,057, and it is easy to find solutions.
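As mentioned in the goal test above, the attack check for the 8-queens problem can be sketched in a few lines of Python. The representation of a state as a list of row indices, one per occupied column, is an assumption made here for illustration.

# Illustrative attack check and goal test for the incremental 8-queens formulation.
# A state is assumed to be a list of row indices, one entry per occupied column.
def no_attacks(rows):
    """True if no two queens share a row or a diagonal (columns are distinct by construction)."""
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            same_row = rows[i] == rows[j]
            same_diagonal = abs(rows[i] - rows[j]) == abs(i - j)
            if same_row or same_diagonal:
                return False
    return True

def goal_test(rows):
    """Goal reached when 8 queens are placed and none attacks another."""
    return len(rows) == 8 and no_attacks(rows)

print(goal_test([0, 4, 7, 5, 2, 6, 1, 3]))   # True: one known solution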

Some Real-world problems


 Traveling salesperson problem (TSP): It is a touring problem where the salesman can visit each city
only once. The objective is to find the shortest tour while selling the goods in each city.
 VLSI layout problem: In this problem, millions of components and connections are positioned on
a chip in order to minimize the area, circuit delays, and stray capacitances, and to maximize the
manufacturing yield.
The layout problem is split into two parts:
 Cell layout: Here, the primitive components of the circuit are grouped into cells, each performing its
specific function. Each cell has a fixed shape and size. The task is to place the cells on the
chip without overlapping each other.
 Channel routing: It finds a specific route for each wire through the gaps between the cells.


 Robot navigation: It is a generalization of the route-finding problem described earlier. Rather than following
a discrete set of routes, a robot can move in a continuous space with (in principle) an infinite set of possible
actions and states.

 Automatic assembly sequencing: In assembly problems, the aim is to find an order in which to
assemble the parts of some object. If the wrong order is chosen, there will be no way to add some part later in
the sequence without undoing some of the work already done.

 Protein design: The objective is to find a sequence of amino acids that will fold into a three-dimensional
protein with the right properties to cure some disease.

1.7 Searching for solutions

We have seen many problems. Now, there is a need to search for solutions to solve them. In this section, we
will understand how searching can be used by the agent to solve a problem.

For solving different kinds of problems, an agent makes use of different strategies to reach the goal by
searching with the best possible algorithm. This process of searching is known as the search strategy.

1. Infrastructure for search algorithms (data structures)



Search algorithms require a data structure to keep track of the search tree that is being constructed. For
each node n of the tree, we have a structure that contains four components:

• n.STATE: the state in the state space to which the node corresponds;

• n.PARENT: the node in the search tree that generated this node;

• n.ACTION: the action that was applied to the parent to generate the node;

• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as
indicated by the parent pointers.
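The four node components above map directly onto a small Python class. This is an illustrative sketch; the helper methods child_node and solution are assumptions added for clarity (they use a problem object with result and step_cost methods, as in the Problem skeleton sketched earlier), not part of the original notes.

# Illustrative search-tree node with the four components described above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # n.STATE
        self.parent = parent          # n.PARENT
        self.action = action          # n.ACTION
        self.path_cost = path_cost    # n.PATH-COST, traditionally g(n)

    def child_node(self, problem, action):
        """Expand this node with one action, using the problem's transition model."""
        next_state = problem.result(self.state, action)
        cost = self.path_cost + problem.step_cost(self.state, action, next_state)
        return Node(next_state, parent=self, action=action, path_cost=cost)

    def solution(self):
        """Follow parent pointers back to the root and return the sequence of actions taken."""
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))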

2.Well Defined problems and solutions:

A problem can be defined formally by 4 components:


 The initial state of the agent is the state where the agent starts in. In this case, the initial state can be
Described as In: Arad
 The possible actions available to the agent, corresponding to each of the state the agent
residesin.
For example, ACTIONS (In: Arad) = {Go: Sibiu, Go: Timisoara, Go:
Zerind}. Actions are also known as operations.
 A description of what each action does. The formal name for this is Transition model, Specified
by thefunction Result(s,a) that returns the state that results from the action an in states.
We also use the term Successor to refer to any state reachable from a given state by a single
action.
For EX: Result (In (Arad), GO (Zerind)) =In (Zerind)


Together, the initial state, actions and transition model implicitly define the state space of the
problem. State space: the set of all states reachable from the initial state by any sequence of actions.
 The goal test, determining whether the current state is a goal state. Here, the goal state is
{In(Bucharest)}.
 The path cost function, which determines the cost of each path, reflecting the agent's
performance measure.
We define the cost function as c(s, a, s'), where s is the current state and a is the action performed by
the agent to reach state s'.
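As a concrete illustration of these components, a small fragment of the Romania route-finding map can be written down in Python and plugged into the definitions above. The dictionary below lists only a few roads (the usual map distances are assumed) and is purely illustrative.

# Illustrative fragment of the Romania route-finding problem.
romania_roads = {
    "Arad":      {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind":    {"Arad": 75, "Oradea": 71},
    "Sibiu":     {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Timisoara": {"Arad": 118, "Lugoj": 111},
}

initial_state = "Arad"        # In(Arad)
goal_state = "Bucharest"      # goal test: In(Bucharest)

def actions(state):
    """ACTIONS(In(s)) = the set of Go(city) moves available from s."""
    return [("Go", city) for city in romania_roads.get(state, {})]

def result(state, action):
    """Transition model: RESULT(In(s), Go(c)) = In(c)."""
    return action[1]

def step_cost(state, action, next_state):
    """c(s, a, s') = the road distance between s and s'."""
    return romania_roads[state][next_state]

print(actions("Arad"))                      # [('Go', 'Zerind'), ('Go', 'Sibiu'), ('Go', 'Timisoara')]
print(result("Arad", ("Go", "Zerind")))     # Zerind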

State Space Search/Problem Space Search:


The state space representation forms the basis of most of the AI methods.
 Formulate a problem as a state space search by showing the legal problem states, the
legal operators, and the initial and goal states.
 A state is defined by the specification of the values of all attributes of interest in the world.
 An operator changes one state into another; it has a precondition, which is the value of
certain attributes prior to the application of the operator, and a set of effects, which are
the attributes altered by the operator.
 The initial state is where you start.
 The goal state is the partial description of the solution.

Formal Description of the problem:


1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify one or more states within that space that describe possible situations from
which the problem solving process may start ( initial state)
3. Specify one or more states that would be acceptable as solutions to the problem. ( goal states)
Specify a set of rules that describe the actions (operations) available

State-Space Problem Formulation:

Example: A problem is defined by four items:


1. initial state, e.g., "at Arad"
2. actions or successor function: S(x) = set of action–state pairs,
e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. goal test (or set of goal states),
e.g., x = "at Bucharest", Checkmate(x)
4. path cost (additive),
e.g., sum of distances, number of actions executed, etc.;
c(x, a, y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state.


1.8 Search strategies:


Search: Searching is a step by step procedure to solve a search-problem in a given search space. A
search problem can have three main factors:
Search Space: Search space represents a set of possible solutions, which a system may have.
Start State: It is a state from where agent begins the search.
Goal test: It is a function which observes the current state and returns whether the goal state has been
achieved or not.

Properties of Search Algorithms/Measuring problem-solving performance

Which search algorithm one should use will generally depend on the problem domain.
There are four important factors to consider:
• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
State Spaces versus Search Trees:
 State Space
o Set of valid states for a problem
o Linked by operators
o e.g., 20 valid states (cities) in the Romanian travel problem
 Search Tree
– Root node = initial state
– Child nodes = states that can be visited from parent
– Note that the depth of the tree can be infinite
• E.g., via repeated states
– Partial search tree
• Portion of tree that has been expanded so far
– Fringe
• Leaves of partial search tree, candidates for expansion
Search trees = data structure to search the state space

Searching
Many traditional search algorithms are used in AI applications. For complex problems, the traditional
algorithms are unable to find the solution within practical time and space limits. Consequently,
many special techniques have been developed that use heuristic functions. The algorithms that use heuristic
functions are called heuristic algorithms. Heuristic algorithms are not really intelligent; they appear to
be intelligent because they achieve better performance. Heuristic algorithms are more efficient
because they take advantage of feedback from the data to direct the search path.

Uninformed search

Also called blind, exhaustive or brute-force search, uses no information about the problem to guide
the search and therefore may not be very efficient.

Informed Search:

Also called heuristic or intelligent search, it uses information about the problem to guide the search, usually
a guess of the distance to a goal state, and is therefore more efficient; however, such guidance may not always
be available.

1.9 Uninformed Search (Blind searches):

1. Breadth First Search:

 One simple search strategy is a breadth-first search. In this strategy, the root node is
expanded first, then all the nodes generated by the root node are expanded next, and
then their successors, and so on.
 In general, all the nodes at depth d in the search tree are expanded before the nodes at depth d
+ 1.
Breadth-first search is an instance of the general graph-search algorithm in which the shallowest
unexpanded node is chosen for expansion. This is achieved very simply by using a FIFO queue
for the frontier. Thus, new nodes (which are always deeper than their parents) go to the back of
the queue, and old nodes, which are shallower than the new nodes, get expanded first. There is
one slight tweak on the general graph-search algorithm: the goal test is applied to
each node when it is generated rather than when it is selected for expansion.
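A minimal Python sketch of this procedure is given below: a FIFO queue holds the frontier, and the goal test is applied when a node is generated. The problem object is assumed to expose initial_state, actions, result and goal_test, as in the earlier sketches; none of these names come from the notes themselves.

from collections import deque

# Minimal breadth-first (graph) search sketch: FIFO frontier, goal test at generation time.
def breadth_first_search(problem):
    node = {"state": problem.initial_state, "parent": None, "action": None}
    if problem.goal_test(node["state"]):
        return node
    frontier = deque([node])          # FIFO queue: new nodes go to the back
    explored = set()                  # states that have already been expanded
    while frontier:
        node = frontier.popleft()     # shallowest unexpanded node
        explored.add(node["state"])
        for action in problem.actions(node["state"]):
            child = {"state": problem.result(node["state"], action),
                     "parent": node, "action": action}
            if child["state"] not in explored and \
               all(child["state"] != n["state"] for n in frontier):
                if problem.goal_test(child["state"]):   # test when generated
                    return child
                frontier.append(child)
    return None                       # failure: no solution found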


Illustrated:
Step 1: Initially frontier contains only one node corresponding to the source state A.

Figure 1
Frontier: A

Step 2: A is removed from fringe. The node is expanded, and its children B and C are generated.
They are placed at the back of fringe.


Figure 2
Frontier: B C

Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and
put at the back of fringe.

Figure 3

Frontier: C D E


Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to
the back of fringe.

Figure 4
Frontier: D E D G

Step 5: Node D is removed from fringe. Its children C and F are generated and added to the
back of fringe.

Figure 5
Frontier: E D G C F

Step 6: Node E is removed from fringe. It has no children.

Figure 6
Frontier: D G C F

Step 7: D is expanded; B and F are put at the back of fringe (the OPEN list).


Figure 7
Frontier: G C F B F

Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns
the path A C G by following the parent pointers of the node corresponding to G. The
algorithm terminates.

Breadth-first search is:

Advantages:
 One of the simplest search strategies.
 Complete: if there is a solution, BFS is guaranteed to find it.
 If there are multiple solutions, then a minimal solution will be found.
 The algorithm is optimal (i.e., admissible) if all operators have the same cost.
Otherwise, breadth-first search finds a solution with the shortest path length.
 Time complexity: O(b^d)
 Space complexity: O(b^d)
 Optimality: Yes
where b is the branching factor (the maximum number of successors of any node), d is the depth of the
shallowest goal node, and m is the maximum length of any path in the search space.

 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal
solution, which requires the least number of steps.

Disadvantages:
 Requires the generation and storage of a tree whose size is exponential in the depth of
the shallowest goal node.
 The breadth-first search algorithm cannot be used effectively unless the search space
is quite small.
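To make the exponential growth mentioned above concrete, the number of nodes generated in the worst case is 1 + b + b^2 + ... + b^d. A quick, purely illustrative calculation for a branching factor of b = 10:

# Worked example of O(b^d) growth: nodes generated by BFS in the worst case.
b = 10                                       # branching factor (illustrative value)
for d in range(2, 7):
    nodes = sum(b**i for i in range(d + 1))  # 1 + b + b^2 + ... + b^d
    print(f"d = {d}: {nodes:,} nodes")
# d = 2 gives 111 nodes; d = 6 already gives 1,111,111 nodes. Since the whole frontier
# must be kept in memory, storage rather than time often becomes the limiting factor.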
Applications of the Breadth-First Search Algorithm
GPS navigation systems: Breadth-First Search is one of the best algorithms for finding neighboring
locations using a GPS system.
Broadcasting: Networking makes use of packets for communication. These packets
follow a traversal method to reach the various network nodes. One of the most commonly used traversal
methods is Breadth-First Search. It is used as an algorithm to communicate broadcast packets across all
the nodes in a network.

1.10 Depth-First Search

Depth-first search always expands the deepest node in the current frontier of the search tree.
The progress of the search is illustrated below. The search proceeds
immediately to the deepest level of the search tree, where the nodes have no successors. As those
nodes are expanded, they are dropped from the frontier, so then the search "backs up" to the next
deepest node that still has unexplored successors. The depth-first search algorithm is an instance
of the graph-search algorithm; whereas breadth-first search uses a FIFO queue, depth-
first search uses a LIFO queue. A LIFO queue means that the most recently generated node is
chosen for expansion. This must be the deepest unexpanded node because it is one deeper than
its parent, which, in turn, was the deepest unexpanded node when it was selected.
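The only change from breadth-first search is the queue discipline. Below is a minimal sketch using a LIFO stack for the frontier; as before, the problem object with initial_state, actions, result and goal_test is an assumption carried over from the earlier sketches.

# Minimal depth-first (tree) search sketch: LIFO frontier, so the most recently
# generated (deepest) node is expanded next. On infinite or cyclic search spaces
# it may not terminate -- see the completeness note further below.
def depth_first_search(problem):
    frontier = [{"state": problem.initial_state, "parent": None, "action": None}]  # stack
    while frontier:
        node = frontier.pop()                  # most recently generated node
        if problem.goal_test(node["state"]):
            return node
        for action in problem.actions(node["state"]):
            child = {"state": problem.result(node["state"], action),
                     "parent": node, "action": action}
            frontier.append(child)
    return None                                # failure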


DFS illustrated:

A State Space Graph

Step 1: Initially fringe contains only the node for A.

Figure 1
FRINGE: A

Step 2: A is removed from fringe. A is expanded and its children B and C are put in front of
fringe.

Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe, and its children D and E are pushed in front of fringe.


FRINGE: D E C

Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.

Figure 4
FRINGE: C F E C

Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.

Figure 5

FRINGE: G F E C

Step 6: Node G is expanded and found to be a goal node.


The solution path A-B-D-C-G is returned and the algorithm terminates.

Depth-first search:

1. Takes exponential time.
2. If N is the maximum depth of a node in the search space, in the worst case the algorithm will
take time O(b^N).
3. The space taken is linear in the depth of the search tree, O(bN).

Note that the time taken by the algorithm is related to the maximum depth of the search tree. If the search
tree has infinite depth, the algorithm may not terminate. This can happen if the search space is infinite. It
can also happen if the search space contains cycles. The latter case can be handled by checking for cycles
in the algorithm. Thus depth-first search is not complete.
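One way to realize the cycle check mentioned above is to keep an explored set, turning the procedure into a depth-first graph search. The sketch below is illustrative and reuses the assumed problem interface from the earlier examples; note that storing all visited states gives up the linear-space advantage of plain depth-first search.

# Illustrative depth-first graph search: an explored set prevents revisiting states,
# which handles cycles, at the cost of storing every state seen so far.
def depth_first_graph_search(problem):
    frontier = [{"state": problem.initial_state, "parent": None, "action": None}]
    explored = set()
    while frontier:
        node = frontier.pop()
        if problem.goal_test(node["state"]):
            return node
        if node["state"] in explored:
            continue                           # already expanded: skip (cycle check)
        explored.add(node["state"])
        for action in problem.actions(node["state"]):
            child_state = problem.result(node["state"], action)
            if child_state not in explored:
                frontier.append({"state": child_state, "parent": node, "action": action})
    return None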
