
AI Unit-1


ITT63 ARTIFICIAL INTELLIGENCE UNIT 1

UNIT-I

• Introduction: Introduction to Artificial Intelligence - History of AI - AI Techniques - Data Acquisition and
Learning Aspects in AI - Typical Intelligent Agents - General Search Algorithm - BFS - A* Search - AO*
Search - Memory Bounded Heuristic Search

INTRODUCTION

Q.WHAT IS ARTIFICIAL INTELLIGENCE?

 "The science and engineering of making intelligent machines, especially intelligent computer
programs." - John McCarthy

 John McCarthy was a pioneer in the AI field and is known as the father of Artificial Intelligence;
he also coined the term "Artificial Intelligence."

 Artificial Intelligence is an approach to making a computer, a robot, or a product think the way a
smart human thinks.

 AI is the study of how the human brain thinks, learns, decides, and works when it tries to solve problems.

 This study ultimately produces intelligent software systems.

 The aim of AI is to improve computer functions which are related to human knowledge, for
example, reasoning, learning, and problem-solving.

Artificial Intelligence is a branch of computer science that deals with the creation of computer
programs that can provide solutions to problems that humans would otherwise have to solve.

Artificial Intelligence definitions are given on the basis of:

(i) thought process and reasoning;

(ii) behaviour;

(iii) human performance;

(iv) rationality.

Intelligence is intangible. It is composed of:


 Reasoning
 Learning
 Problem Solving
 Perception
III YR /VI SEM Page 1

 Linguistic Intelligence

Q. Different categories and approaches of AI

Views of AI fall into four categories:

Thinking humanly: "The automation of activities that we associate with human thinking,
activities such as decision-making, problem solving, learning…"

Thinking rationally: "The study of mental faculties through the use of computational models."

Acting humanly: "The study of how to make computers do things at which, at the moment,
people are better."

Acting rationally: "Computational intelligence is the study of the design of intelligent agents."

Acting humanly: Turing Test


• The Turing Test, proposed by Alan Turing (1950) in his paper "Computing Machinery and
Intelligence," was designed to provide a satisfactory operational definition of intelligence.


• "Can machines think?" → "Can machines behave intelligently?"


• Operational test for intelligent behavior: The Imitation Game

• Predicted that by 2000, a machine might have a 30% chance of fooling a person for 5 minutes
• Anticipated all major arguments against AI in the following 50 years
• Suggested major components of AI: knowledge, reasoning, language understanding, learning.

Turing test
• Three rooms contain a person, a computer, and an interrogator.

• The interrogator can communicate with the other two by teleprinter.

• The interrogator tries to determine which is the person and which is the machine.

• The machine tries to fool the interrogator into believing that it is the person.

• If the machine succeeds, then we conclude that the machine can think.

The computer would need to possess the following capabilities:

 Natural language processing to enable it to communicate successfully in English;
 Knowledge representation to store what it knows or hears;
 Automated reasoning to use the stored information to answer questions and to draw new
conclusions;
 Machine learning to adapt to new circumstances and to detect and extrapolate patterns;
 Computer vision to perceive objects;
 Robotics to manipulate objects and move about.

Thinking humanly: cognitive modeling


• In the 1960s "cognitive revolution": information-processing psychology
• Requires scientific theories of internal activities of the brain

• How to validate? Requires

1) Predicting and testing behaviour of human subjects (top-down)

2) Direct identification from neurological data (bottom-up)


• Both approaches (roughly, Cognitive Science and Cognitive Neuroscience) are now distinct from
Artificial Intelligence
Try to understand how the mind works. How do we think?

Two possible routes to find answers:

 By introspection: We figure it out ourselves!


 By experiment: Draw upon techniques of psychology to conduct controlled experiments. (Rat
in a box!)

Thinking rationally: "laws of thought"


• Aristotle: what are correct arguments/thought processes?
• Several Greek schools developed various forms of logic: notation and rules of derivation for
thoughts; may or may not have proceeded to the idea of mechanization
• Direct line through mathematics and philosophy to modern AI
• Problems:
– Not all intelligent behaviour is mediated by logical deliberation
– What is the purpose of thinking? What thoughts should I have?
 Trying to understand how we actually think is one route to AI. But another is to ask how we should
think.
 Use logic to capture the laws of rational thought as symbols.
 Reasoning involves shifting symbols according to well-defined rules (like algebra).
 Result is idealized reasoning.

Acting rationally: rational agent


• Rational behaviour: doing the right thing
• The right thing: that which is expected to maximize goal achievement, given the available
information

• Doesn't necessarily involve thinking – e.g., blinking reflex – but thinking should be in the service
of rational action

Rational agents
• An agent is an entity that perceives and acts
• This course is about designing rational agents
• Abstractly, an agent is a function from percept histories to actions:

f: P* → A

• For any given class of environments and tasks, we seek the agent (or class of agents) with the best
performance.
• Caveat: computational limitations make perfect rationality unachievable.
 Design best program for given machine resources.
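The abstract mapping f: P* → A above can be sketched as a table-lookup agent. This is a minimal illustrative Python sketch; the percepts, table entries, and the "NoOp" fallback are assumptions for the example, not from the text:

```python
# A (necessarily partial) lookup table from entire percept sequences to
# actions, using a two-location vacuum world as the example domain.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

def table_lookup_agent(percept_history):
    # The agent function is indexed by the complete percept sequence so far;
    # unknown histories fall back to doing nothing.
    return table.get(tuple(percept_history), "NoOp")

print(table_lookup_agent([("A", "Dirty")]))  # -> Suck
```

The table's size grows with the length of the percept histories considered, which is exactly the drawback of table-lookup agents discussed later in these notes.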

Q. FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

PHILOSOPHY

Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational
part of the mind. He developed an informal system of syllogisms for proper reasoning, which in
principle allowed one to generate conclusions mechanically, given initial premises.
Thomas Hobbes (1588-1679) proposed that reasoning was like numerical computation, that "we add and
subtract in our silent thoughts." The automation of computation itself was already well under way; the
first known calculating machine was constructed around 1623 by the German scientist Wilhelm Schickard.
Given the idea of a set of rules that can describe the formal, rational part of the mind, the next step is to
consider the mind as a physical system.
MATHEMATICS
Philosophers staked out most of the important ideas of AI, but the leap to a formal science
required a level of mathematical formalization in three fundamental areas: logic, computation, and
probability. The idea of formal logic can be traced back to the philosophers of ancient Greece, but its
mathematical development really began with the work of George Boole, who worked out the details of
propositional, or Boolean, logic.


 Algorithm
The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common
divisors. A famous later question, posed by David Hilbert, asks whether there is an algorithm for deciding
the truth of any logical proposition involving the natural numbers: the famous Entscheidungsproblem, or
decision problem.
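The text calls Euclid's procedure the first nontrivial algorithm; a minimal iterative version, as a sketch:

```python
def gcd(a, b):
    # Repeatedly replace (a, b) by (b, a mod b) until the remainder is 0;
    # the surviving value is the greatest common divisor.
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```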
 Incompleteness Theorem
Gödel's incompleteness theorem showed that in any language expressive enough to describe the
properties of the natural numbers, there are true statements that are undecidable, in the sense that their
truth cannot be established by any algorithm.
 Intractability
Although undecidability and noncomputability are important to an understanding of computation, the
notion of intractability has had a much greater impact. Roughly speaking, a problem is called
intractable if the time required to solve instances of the problem grows exponentially with the size of the
instances; the key distinction is between polynomial and exponential growth in complexity.
 NP-completeness
The theory of NP-completeness identifies large classes of canonical combinatorial search and reasoning
problems that are NP-complete. Any problem class to which the class of NP-complete problems can be
reduced is likely to be intractable. These results contrast with the optimism with which the popular press
greeted the first computers, "Electronic Super-Brains" that were "Faster than Einstein!" Despite the
increasing speed of computers, careful use of resources will characterize intelligent systems.
Probability
Besides logic and computation, the third great contribution of mathematics to AI is the theory of
probability. Gerolamo Cardano first framed the idea of probability, describing it in terms of the possible
outcomes of gambling events.
Probability quickly became an invaluable part of all the quantitative sciences, helping to deal with
uncertain measurements and incomplete theories.
Economics
The science of economics got its start in 1776, when Scottish philosopher Adam Smith (1723-
1790) published An Inquiry into the Nature and Causes of the Wealth of Nations.
 Decision Theory
Decision theory, which combines probability theory with utility theory, provides a formal and complete
framework for decisions (economic or otherwise) made under uncertainty, that is, in cases where


probabilistic descriptions appropriately capture the decision-maker's environment. This is suitable for
"large" economies where each agent need pay no attention to the actions of other agents as individuals.
For "small" economies, the situation is much more like a game: the actions of one player can
significantly affect the utility of another.
Neuroscience
Neuroscience is the study of the nervous system, particularly the brain. The exact way in which
the brain enables thought is one of the great mysteries of science. It has been appreciated for thousands
of years that the brain is somehow involved in thought, because of the evidence that strong blows to the
head can lead to mental incapacitation.

The parts of a nerve cell or neuron. Each neuron consists of a cell body, or soma, that contains a
cell nucleus. Branching out from the cell body are a number of fibers called dendrites and a single long
fiber called the axon. The axon stretches out for a long distance, much longer than the scale in this
diagram indicates. Typically they are 1 cm long (100 times the diameter of the cell body), but can reach
up to 1 meter. A neuron makes connections with 10 to 100,000 other neurons at junctions called
synapses. Signals are propagated from neuron to neuron by a complicated electrochemical reaction. The
signals control brain activity in the short term, and also enable long-term changes in the position and
connectivity of neurons. These mechanisms are thought to form the basis for learning in the brain. Most
information processing goes on in the cerebral cortex, the outer layer of the brain. The basic
organizational unit appears to be a column of tissue about 0.5 mm in diameter, extending the full depth
of the cortex, which is about 4 mm in humans. A column contains about 20,000 neurons.

Psychology

The view of the brain as an information-processing device is a principal characteristic of
cognitive psychology.
Computer engineering
For artificial intelligence to succeed, we need two things: intelligence and an artifact. The
computer has been the artifact of choice. The modern digital electronic computer was invented
independently and almost simultaneously by scientists.
Control theory
 Design systems that maximize an objective function over time.
Linguistics
 Knowledge representation, grammar.

Q. HISTORY OF ARTIFICIAL INTELLIGENCE

 The gestation of artificial intelligence


Warren McCulloch and Walter Pitts drew on three sources: knowledge of the basic physiology and
function of neurons in the brain; a formal analysis of propositional logic; and Turing's theory of
computation. They proposed a model of artificial neurons in which each neuron is characterized as being
"on" or "off," with a switch to "on" occurring in response to stimulation by a sufficient number of
neighboring neurons. Donald Hebb later proposed a simple updating rule for modifying the connection
strengths between neurons, now called Hebbian learning.
The birth of artificial intelligence (1956)
 U.S. researchers interested in automata theory, neural nets, and the study of intelligence. They
organized a two-month workshop at Dartmouth in the summer of 1956.
 The others had ideas and in some cases programs for particular applications such as checkers.
 Newell and Simon claimed to have invented "a computer program capable of thinking
non-numerically," and thereby to have solved the venerable mind-body problem.
Early enthusiasm, great expectations (1952-1969)
 Given the primitive computers and programming tools of the time, and the fact that only a few
years earlier computers were seen as things that could do arithmetic and no more.
 Newell and Simon's early success was followed up with the General Problem Solver (GPS).
 Unlike Logic Theorist, this program was designed from the start to imitate human problem-
solving protocols. Within the limited class of puzzles it could handle, it turned out that the order


in which the program considered subgoals and possible actions was similar to that in which
humans approached the same problems
A dose of reality (1966-1973)
 The first kind of difficulty arose because most early programs contained little or no knowledge of
their subject matter; they succeeded by means of simple syntactic manipulations.
 The second kind of difficulty was the intractability of many of the problems that AI was
attempting to solve. Most of the early AI programs solved problems by trying out different
combinations of steps until the solution was found.
 The illusion of unlimited computational power was not confined to problem-solving programs.
 A third difficulty arose because of some fundamental limitations on the basic structures
being used to generate intelligent behavior.
Knowledge-based systems:
 The early systems used a general-purpose search mechanism to string together elementary reasoning
steps to find complete solutions. Such approaches have been called weak methods because, although general,
they do not scale up to large or difficult problem instances. The alternative to weak methods is to
use more powerful, domain-specific knowledge that allows larger reasoning steps and can more
easily handle typically occurring cases in narrow areas of expertise.
AI becomes an industry:
 In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build
intelligent computers running Prolog. In response the United States formed the Microelectronics
and Computer Technology Corporation (MCC) as a research consortium designed to assure
national competitiveness. In both cases, AI was part of a broad effort, including chip design and
human-interface research. However, the AI components of MCC and the Fifth Generation
projects never met their ambitious goals.
 Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988.
The return of neural networks (1986-present)
 Although computer science had largely abandoned the field of neural networks in the late 1970s,
work continued in other fields.
 The back-propagation learning algorithm was applied to many learning problems in computer science
and psychology, and the results were widely disseminated in the collection Parallel Distributed Processing.
AI becomes a science (1987-present)


 Recent years have seen a revolution in both the content and the methodology of work in artificial
intelligence.
 It is now more common to build on existing theories than to propose brand new ones, to base
claims on rigorous theorems or hard experimental evidence rather than on intuition, and to show
relevance to real-world applications rather than toy examples.
The emergence of intelligent agents (1995-present)
 In AI, researchers have also started to look at the "whole agent" problem again.
 One of the most important environments for intelligent agents is the Internet. AI systems have
become so common in web-based applications that the "-bot" suffix has entered everyday
language. Moreover, AI technologies underlie many Internet tools, such as search engines,
recommender systems, and Web site construction systems.
 A second major consequence of the agent perspective is that AI has been drawn into much closer
contact with other fields, such as control theory and economics, that also deal with agents.
Q. AI techniques
 An AI technique is a method that exploits knowledge. There are three main AI techniques:
 Search
 Use of knowledge
 Abstraction
 1. Search:-
 Search provides a way of solving problems for which no more direct approach is available, as well as a
framework into which any direct techniques that are available can be embedded. A search program finds
a solution for a problem by trying various sequences of actions or operators until a solution is found.
 Advantages
 It is the best way found so far, since no better general method is known for such problems.
 To solve a problem using search, it is only necessary to code the operators that can be used; the search
will find the sequence of actions that will produce the desired result.
 Disadvantages
 Most problems have search spaces so large that it is impractical to search the whole space.
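The search technique described above can be sketched with breadth-first search (BFS), one of the algorithms named in this unit's outline. This is a minimal illustrative version; the example graph and state names are assumptions for the demo:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Return a shortest path from start to goal as a list of states,
    or None if the goal is unreachable. `neighbors` maps each state to
    its successor states."""
    frontier = deque([start])
    parent = {start: None}          # also serves as the "visited" set
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []               # reconstruct the path via parents
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors.get(state, []):
            if nxt not in parent:   # skip states already reached
                parent[nxt] = state
                frontier.append(nxt)
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs("A", "D", graph))  # -> ['A', 'B', 'D']
```

The search-space blow-up mentioned under Disadvantages shows up here directly: the `parent` dictionary can grow as large as the whole state space.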


 2. Use of knowledge:-
 The use of knowledge provides a way of solving complex problems by exploiting the structures of the
objects that are involved.
 Knowledge used by an AI technique should be represented in such a way that:
 It captures generalization: situations that share important properties are grouped together rather than
each situation being represented separately; without such an arrangement, an unreasonable amount of
memory and updating would be required. Anything without this property is called data rather than
knowledge.
 It can be understood by the people who must provide it. For many programs, the bulk of the data can be
acquired automatically from instrument readings, but in many AI areas most of the knowledge a
program has must be provided by people in terms that they understand.
 It can easily be modified to correct errors and to reflect changes in the world.
 It can be used to overcome its own sheer volume by helping to restrict the range of possibilities that
must usually be considered.
 It can be used in different situations even though it may not be entirely complete or accurate.
3. Abstraction:-
 Abstraction provides a way of separating important features and variations from the many unimportant
ones that would otherwise overwhelm any process.

Q. Data acquisition and learning aspects of AI
 I) Data acquisition is the process of sampling signals that measure real-world physical conditions and
converting the resulting samples into digital numeric values that a computer can manipulate.
 Data acquisition systems (DAS or DAQ) convert the physical conditions of analog waveforms into digital
values for further storage, analysis, and processing.
 Data Acquisition is composed of two words: Data and Acquisition.

 Data refers to raw facts and figures, which may be structured or unstructured.
 Acquisition means acquiring data for the given task at hand.
 Data acquisition thus means collecting data from relevant sources before it can be stored, cleaned, pre-
processed, and used for further mechanisms.
 It is the process of retrieving relevant business information, transforming the data into the required
business form, and loading it into the designated system.
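The sampling-and-conversion step described above can be sketched as a simple quantizer that maps an analog voltage sample to a digital integer code. The parameters (8-bit resolution over a 0-5 V range) are assumptions for the example, not from the text:

```python
def quantize(volts, v_max=5.0, bits=8):
    """Convert one analog voltage sample to a digital code.
    Clamps to the input range, then scales to 2**bits - 1 levels."""
    levels = 2 ** bits - 1                          # 255 codes for 8 bits
    clamped = max(0.0, min(volts, v_max))           # stay inside the range
    return round(clamped / v_max * levels)

print(quantize(0.0))   # -> 0
print(quantize(5.0))   # -> 255
```

A real DAQ system would apply this per sample at a fixed sampling rate; here only the amplitude-to-code conversion is shown.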
II) Learning Aspects in AI
 Based on their representation of knowledge, AI learning models can be classified into two main types:
inductive and deductive.
 — Inductive Learning: This type of AI learning model is based on inferring a general rule from
datasets of input-output pairs. Algorithms such as Knowledge-Based Inductive Learning (KBIL) are a
good example of this type of AI learning technique. KBIL focuses on finding inductive hypotheses on
a dataset with the help of background information.
 — Deductive Learning: This type of AI learning technique starts with a series of rules and infers
new rules that are more efficient in the context of a specific AI algorithm. Explanation-Based
Learning (EBL) and Relevance-Based Learning (RBL) are examples of deductive
techniques. EBL extracts general rules from examples by "generalizing" the explanation. RBL focuses
on identifying attributes and deductive generalizations from simple examples.
 Based on the feedback characteristics, AI learning models can be classified as supervised,
unsupervised, semi-supervised or reinforced.
 — Unsupervised Learning: Unsupervised models focus on learning a pattern in the input data
without any external feedback. Clustering is a classic example of unsupervised learning models.
 — Supervised Learning: Supervised learning models use external feedback to learn functions that
map inputs to output observations. In those models the external environment acts as a "teacher" of the
AI algorithms.
 — Semi-supervised Learning: Semi-supervised learning uses a set of curated, labeled data and tries
to infer new labels/attributes on new data sets. Semi-supervised learning models are a solid
middle ground between supervised and unsupervised models.
 — Reinforcement Learning: Reinforcement learning models use opposing signals, such as rewards
and punishments, to "reinforce" different types of knowledge. This type of learning technique is


becoming really popular in modern AI solutions.
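The supervised-learning idea above, where a "teacher" supplies input-output pairs and the model fits a function to them, can be sketched with a closed-form least-squares fit of a line y = w·x + b. The fitting method and data are illustrative choices, not from the text:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = w*x + b to the supervised pairs (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return w, b

# "Teacher" data generated by y = 2x + 1
w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(w, b)  # -> 2.0 1.0
```

An unsupervised model, by contrast, would receive only the xs and have to find structure (e.g., clusters) without any target ys.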




Q. Applications of AI

Autonomous planning and scheduling

1. Route planning 2. Automated scheduling of actions in spacecrafts


Game playing

 IBM's Deep Blue defeated Garry Kasparov (the human world champion) in 1997.
 The program FRITZ, running on an ordinary PC, drew with Vladimir Kramnik (the human
world champion) in 2002.
Autonomous control

 Automated car steering and the Mars mission.


Diagnosis

 Medical diagnosis programs based on probabilistic analysis have been able to perform at the level
of an expert physician in several areas of medicine.
 Literature describes a case where a leading expert was convinced by a computer's diagnosis.
Logistics planning


 The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than
paid back DARPA's 30-year investment in AI.
Robotics
 Microsurgery and RoboCup. The RoboCup goal is, by the year 2050, to develop a team of fully
autonomous humanoid robots that can win against the human world soccer champion team.

Q.INTELLIGENT AGENTS

Agents and environments


 An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
 Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts
for actuators
 Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
 A software agent receives keystrokes, file contents, and network packets as sensory inputs and
acts on the environment by displaying on the screen, writing files, and sending network packets.
 We use the term percept to refer to the agent's perceptual inputs at any given instant. An agent's percept
sequence is the complete history of everything the agent has ever perceived.

 We can imagine tabulating the agent function that describes any given agent; for most agents, this
would be a very large table (infinite, in fact, unless we place a bound on the length of the percept
sequences we want to consider).

 Construct this table by trying out all possible percept sequences and recording which actions the
agent does in response.
Agent Program:
Internally, the agent function for an artificial agent will be implemented by an agent program. It
is important to keep these two ideas distinct. The agent function is an abstract mathematical description;
the agent program is a concrete implementation, running on the agent architecture.

 This world is so simple that we can describe everything that happens; it is also a made-up world,
so we can invent many variations. This particular world has just two locations: squares A and B.
The vacuum agent perceives which square it is in and whether there is dirt in the square. It can
choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function
is the following: if the current square is dirty, then suck; otherwise, move to the other square.
Good Behaviour: The concept of Rationality

Q. What is a rational agent?


 An agent should strive to "do the right thing", based on what it can perceive and the actions it can
perform. The right action is the one that will cause the agent to be most successful.
 Performance measure: An objective criterion for success of an agent's behavior
 E.g., performance measure of a vacuum-cleaner agent could be amount of dirt cleaned up,
amount of time taken, amount of electricity consumed, amount of noise generated, etc.
 Rational Agent: For each possible percept sequence, a rational agent should select an action that
is expected to maximize its performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
 Rationality is distinct from omniscience (all-knowing with infinite knowledge)
 Agents can perform actions in order to modify future percepts so as to obtain useful information
(information gathering, exploration)
 An agent is autonomous if its behavior is determined by its own experience (with ability to learn
and adapt)

Q. What is PEAS?
 PEAS: Performance measure, Environment, Actuators, Sensors
 Must first specify the setting for intelligent agent design.
 Consider, e.g., the task of designing an automated taxi driver:
– Performance measure: Safe, fast, legal, comfortable trip, maximize profits
– Environment: Roads, other traffic, pedestrians, customers
– Actuators: Steering wheel, accelerator, brake, signal, horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
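The PEAS breakdown above can be recorded as a simple structure. This is an illustrative sketch; the field names and list representation are my choices, not from the text:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """One PEAS specification: the setting for an intelligent agent design."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# The automated-taxi values mirror the list in the notes.
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
```

Writing the four lists down explicitly, as here, is the first step before choosing an agent program for the task.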

Example :
Agent: Medical diagnosis system
 Performance measure: Healthy patient, minimize costs, lawsuits
 Environment: Patient, hospital, staff
 Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
 Sensors: Keyboard (entry of symptoms, findings, patient's answers)
 Agent: Part-picking robot
 Performance measure: Percentage of parts in correct bins
 Environment: Conveyor belt with parts, bins
 Actuators: Jointed arm and hand
 Sensors: Camera, joint angle sensors
 Agent: Interactive English tutor
 Performance measure: Maximize student's score on test
 Environment: Set of students
 Actuators: Screen display (exercises, suggestions, corrections)
 Sensors: Keyboard (typed words)

Environment types
 Fully observable (vs. partially observable): An agent's sensors give it access to the complete
state of the environment at each point in time.
 Deterministic (vs. stochastic): The next state of the environment is completely determined by
the current state and the action executed by the agent. (If the environment is deterministic except
for the actions of other agents, then the environment is strategic)
 Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode
consists of the agent perceiving and then performing a single action), and the choice of action in
each episode depends only on the episode itself.
 Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The
environment is semi dynamic if the environment itself does not change with the passage of time
but the agent's performance score does)
 Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.
 Single agent (vs. multiagent): An agent operating by itself in an environment.


                   Chess with a clock   Chess without a clock   Taxi driving

Fully observable   Yes                  Yes                     No
Deterministic      Strategic            Strategic               No
Episodic           No                   No                      No
Static             Semi                 Yes                     No
Discrete           Yes                  Yes                     No
Single agent       No                   No                      No

 The environment type largely determines the agent design.


 The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous,
multi-agent.
Q. THE STRUCTURE OF AGENTS

5.1 Agent functions and programs


• An agent is completely specified by the agent function mapping percept sequences to actions.
• One agent function (or a small equivalence class) is rational.
• Aim: find a way to implement the rational agent function concisely.

Table-lookup agent and its Drawbacks:

– Huge table
– Take a long time to build the table
– No autonomy
– Even with learning, need a long time to learn the table entries

5.2 Q. Types of agents

• Four basic types in order of increasing generality:


 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents


1. Simple reflex agents

The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of
the current percept, ignoring the rest of the percept history.

Example: Agent function for vacuum agent

Percept sequence                          Action

[A, Clean]                                Right
[A, Dirty]                                Suck
[B, Clean]                                Left
[B, Dirty]                                Suck
[A, Clean], [A, Clean]                    Right
[A, Clean], [A, Dirty]                    Suck
[A, Clean], [A, Clean], [A, Clean]        Right
[A, Clean], [A, Clean], [A, Dirty]        Suck

An agent program for this agent is:

function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

The vacuum agent program is very small. In more complex agents, some processing is done on the sensory
input to establish the condition-action rule, e.g., if car-in-front-is-braking then initiate-braking. The
following figure shows how condition-action rules allow the agent to make the connection from percept to action.

Rectangles: Current internal state of the agent’s decision process


Ovals: Background information used in the process

The agent program is given below:

Function SIMPLE-REFLEX-AGENT (percept) returns an action

    Static: rules, a set of condition-action rules

    state <- INTERPRET-INPUT (percept)
    rule <- RULE-MATCH (state, rules)
    action <- RULE-ACTION [rule]
    return action

INTERPRET-INPUT: generates an abstracted description of the current state from the percept.

RULE-MATCH: returns the first rule in the set of rules that matches the given state description.

This agent will work only if the correct decision can be made on the basis of only the current percept.
i.e. only if the environment is fully observable.
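The REFLEX-VACUUM-AGENT pseudocode above can be sketched in Python as follows; the location names "A"/"B" and status values "Clean"/"Dirty" follow the percept table in these notes.

```python
def reflex_vacuum_agent(percept):
    """Select an action from the current percept only, ignoring history."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note that the function never consults any stored state, which is exactly why it only works when the environment is fully observable.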

2. Model-based reflex agents


To handle partial observability, the agent should maintain some sort of internal state that depends
on the percept history and thereby reflects at least some of the unobserved aspects of the current state.

Updating this internal state information requires two kinds of knowledge to be encoded in the
agent program.

o How the world evolves independently of the agent


o How the agent’s actions affect the world.


This knowledge about how the world works is called a model of the world; it can be implemented in
anything from simple Boolean circuits to complete scientific theories. An agent that uses such a model is
called a model-based agent.

The following figure shows the structure of the reflex agent with internal state, showing how the current
percept is combined with the old internal state to generate the updated description of the current state.

A model-based, reflex agent

The agent program is shown below:

Function REFLEX-AGENT-WITH-STATE (percept) returns an action

    Static: state, a description of the current world state
            rules, a set of condition-action rules
            action, the most recent action, initially none

    state <- UPDATE-STATE (state, action, percept)
    rule <- RULE-MATCH (state, rules)
    action <- RULE-ACTION [rule]
    return action

UPDATE-STATE: creates the new internal state description.
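The pseudocode above can be sketched in Python. The internal model here is deliberately tiny and is an assumption made for illustration: the agent simply remembers the last known status of each vacuum-world square, so it can stop once both squares are known clean.

```python
def make_model_based_vacuum_agent():
    # Internal state: last known status of each square (the "model").
    state = {"A": "Unknown", "B": "Unknown"}

    def update_state(state, percept):
        location, status = percept
        state[location] = status          # fold the new percept into the model
        return state

    def agent(percept):
        nonlocal state
        state = update_state(state, percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        # Use the model: if the other square is known clean, do nothing.
        other = "B" if location == "A" else "A"
        if state[other] == "Clean":
            return "NoOp"
        return "Right" if location == "A" else "Left"

    return agent

agent = make_model_based_vacuum_agent()
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right  (B's status is still unknown)
```

Unlike the simple reflex agent, this agent's behaviour depends on the percept history: the same percept ("A", "Clean") yields "NoOp" once the model records B as clean.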

3. Goal-based agents
Here, along with current-state description, the agent needs some sort of goal information that describes
situations that are desirable – for eg, being at the passenger’s destination.


The goal-based agent structure is shown below:

A model-based, goal-based agent.

 Knowing about the current state of the environment is not always enough to decide what
to do.
 For example, at a road junction, the taxi can turn left, turn right, or go straight on. The
correct decision depends on where the taxi is trying to get to. In other words, as well as a
current state description, the agent needs some sort of goal information that describes
situations that are desirable-for example, being at the passenger's destination. The agent
program can combine this with information about the results of possible actions (the same
information as was used to update internal state in the reflex agent) in order to choose
actions that achieve the goal.
 Sometimes goal-based action selection is straightforward, when goal satisfaction results
immediately from a single action. Sometimes it will be more tricky, when the agent has to
consider long sequences of twists and turns to find a way to achieve the goal.
4. Utility-based agents
Goals alone are not enough to generate high-quality behaviour in most environments. A more
general performance measure should allow a comparison of different world states according to exactly
how happy they would make the agent if they could be achieved. A utility function maps a state onto a
real number, which describes the associated degree of happiness. The utility-based agent structure
appears in the following figure.


5. Learning agents
It allows the agent to operate in initially unknown environments and to become more competent
than its initial knowledge alone might allow. A learning agent can be divided into four conceptual
components, as shown in figure:

Learning element: responsible for making improvement.

Performance element: responsible for selecting external actions

The learning element uses feedback from the critic on how the agent is doing and determines how
the performance element should be modified to do better in the future. The critic tells the learning
element how well the agent is doing with respect to a fixed performance standard. The critic is
necessary because the percepts themselves provide no indication of the agent’s success. The last
component of the learning agent is the problem generator. It is responsible for suggesting actions that
will lead to new and informative experiences.


Q. Problem spaces and search

Steps to solve a problem

To build a system to solve a particular problem we need four things.


1. Define a problem
2. Analyze the problem
3. Isolate & represent task knowledge to solve the problem
4. Choose the best problem solving technique and apply to the particular problem.
Problem solving involves:
• The process of solving a problem consists of five steps. These are:

1. Defining the Problem: The problem must be defined precisely. The definition should
contain the possible initial as well as final situations that should result in an
acceptable solution.
2. Analyzing the Problem: The problem and its requirements must be analyzed, as a
few features can have an immense impact on the resulting solution.
3. Identification of Solutions: This phase generates a reasonable number of solutions to
the given problem within a particular range.
4. Choosing a Solution: From all the identified solutions, the best solution is chosen based
on the results produced by the respective solutions.
5. Implementation: After choosing the best solution, it is implemented.

In order to solve the problem of playing a game, which here is restricted to two-person table or board games, we
require the rules of the game and the targets for winning, as well as a means of representing positions in
the game. The opening position can be defined as the initial state and a winning position as a goal state
(there can be more than one). Legal moves allow for transfer from the initial state to other states leading to the
goal state. However, the rules are far too copious in most games, especially chess, where the number of
possible positions exceeds the number of particles in the universe. Thus the rules cannot in general be supplied
accurately, and computer programs cannot easily handle them. Storage also presents a problem, but searching
can be aided by hashing.

The number of rules that are used must be minimized and the set can be produced by expressing each
rule in as general a form as possible. The representation of games in this way leads to a state space
representation and it is natural for well organized games with some structure. This representation allows
for the formal definition of a problem which necessitates the movement from a set of initial positions to
one of a set of target positions. It means that the solution involves using known techniques and a
systematic search. This is quite a common method in AI.

 Well organized problems (e.g. games) can be described as a set of rules.


 Rules can be generalized and represented as a state space representation:
o formal definition.
o move from initial states to one of a set of target positions.
o move is achieved via a systematic search.
A search problem can have three main factors:
III YR /VI SEM Page 24
ITT63 ARTIFICIAL INTELLIGENCE UNIT 1

o Search Space: Search space represents a set of possible solutions, which a system may
have.
o Start State: It is a state from where agent begins the search.
o Goal test: It is a function which observes the current state and returns whether the goal state is achieved or not.
Problem-solving agents:

In Artificial Intelligence, Search techniques are universal problem-solving methods. Rational agents or Problem-
solving agents in AI mostly used these search strategies or algorithms to solve a specific problem and provide the
best result. Problem-solving agents are the goal-based agents and use atomic representation. In this topic, we will
learn various problem-solving search algorithms.

Q. Search Algorithms in Artificial Intelligence

Search Algorithm Terminologies:

 Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem
can have three main factors:
1. Search Space: Search space represents the set of possible solutions which a system may have.
2. Start State: It is the state from where the agent begins the search.
3. Goal test: It is a function which observes the current state and returns whether the goal state is achieved or
not.
 Search tree: A tree representation of search problem is called Search tree. The root of the search tree is the root node
which is corresponding to the initial state.
 Actions: It gives the description of all the available actions to the agent.
 Transition model: A description of what each action does; this can be represented as a transition model.
 Path Cost: It is a function which assigns a numeric cost to each path.
 Solution: It is an action sequence which leads from the start node to the goal node.
 Optimal Solution: A solution that has the lowest cost among all solutions.

Properties of Search Algorithms:

Following are the four essential properties of search algorithms to compare the efficiency of these algorithms:

Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one
solution exists for any input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all
other solutions, then such a solution is said to be an optimal solution.

Time Complexity: Time complexity is a measure of time for an algorithm to complete its task.

Space Complexity: It is the maximum storage space required at any point during the search, measured in terms of
the complexity of the problem.

Q. Types of search algorithms

Based on the search problems we can classify the search algorithms into uninformed (Blind search) search
and informed search (Heuristic search) algorithms.

 Uninformed/Blind Search:

The uninformed search does not use any domain knowledge, such as the closeness or location of the goal. It
operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf
and goal nodes. Uninformed search searches the tree without any information about the search space, such as the
initial state, operators, and test for the goal, so it is also called blind search. It examines each node of the tree until
it reaches the goal node.

It can be divided into five main types:

 Breadth-first search
 Uniform cost search
 Depth-first search
 Iterative deepening depth-first search
 Bidirectional Search

 Informed Search

Informed search algorithms use domain knowledge. In an informed search, problem information is available which
can guide the search. Informed search strategies can find a solution more efficiently than an uninformed search
strategy. Informed search is also called a Heuristic search.

A heuristic is a technique that is not always guaranteed to find the best solution, but is guaranteed to find a good
solution in reasonable time.

Informed search can solve much more complex problems that could not be solved in another way.

A classic problem for informed search algorithms is the traveling salesman problem.

1. Greedy Search
2. A* Search
3. AO* Search

Heuristic search

A heuristic is a technique that improves the efficiency of a search process, possibly by sacrificing claims of completeness.
Heuristics are like tour guides. They are good to the extent that they point in generally interesting directions; they are bad to the
extent that they miss points of interest to particular individuals.

Heuristic function

A heuristic function is a function that maps from problem state descriptions to measures of desirability, usually represented as
numbers. Well-designed heuristic functions can play an important part in efficiently guiding a search process towards a
solution.


Uninformed/Blind Search:

 Breadth-first Search:

 Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches
breadthwise in a tree or graph, so it is called breadth-first search.
 The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level
before moving to nodes of the next level.
 Breadth-first search is implemented using a FIFO queue data structure.

Advantages:

 BFS will provide a solution if any solution exists.


 If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one which
requires the least number of steps.

Disadvantages:

 It requires lots of memory since each level of the tree must be saved into memory to expand the next level.
 BFS needs lots of time if the solution is far away from the root node.

Algorithm: BFS

1. Place the starting node S on the queue.

2. If the queue is empty, return failure and stop

3. If the first element on the queue is a goal node g, return success and stop. Otherwise

4. Remove and expand the first element from the queue and place all the children at the end of the queue in any order.

5. Return to step 2.

Example:

In the below tree structure, we have shown the traversing of the tree using BFS algorithm from the root node S to
goal node K. BFS search algorithm traverse in layers, so it will follow the path which is shown by the dotted arrow,
and the traversed path will be:

S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K


Time Complexity: The time complexity of the BFS algorithm can be obtained from the number of nodes traversed in BFS
until the shallowest node, where d = depth of the shallowest solution and b = branching factor (maximum number of
successors of any node):

T(b) = 1 + b + b^2 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will
find a solution.

Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
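The BFS procedure above can be sketched in Python with a FIFO queue. The adjacency list below is an assumption chosen so the layer-by-layer visiting order matches the S ... K example in the notes; the figure itself is not reproduced here.

```python
from collections import deque

# Assumed graph: level order of discovery is S, A, B, C, D, G, H, E, F, I, K.
graph = {
    "S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
    "C": ["E", "F"], "D": ["I"], "G": ["K"],
    "H": [], "E": [], "F": [], "I": [], "K": [],
}

def bfs(graph, start, goal):
    """FIFO-queue breadth-first search; returns the path start..goal or None."""
    frontier = deque([[start]])            # queue of paths (step 1)
    visited = {start}
    while frontier:
        path = frontier.popleft()          # remove the first element (step 4)
        node = path[-1]
        if node == goal:                   # goal test (step 3)
            return path
        for child in graph[node]:          # expand; children go to the rear
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None                            # empty queue means failure (step 2)

print(bfs(graph, "S", "K"))  # ['S', 'B', 'G', 'K']
```

Because nodes are dequeued level by level, the first path found to K is also the shallowest one.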

Depth-first Search

 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
 It is called the depth-first search because it starts from the root node and follows each path to its greatest depth node
before moving to the next path.
 DFS always expands the deepest node in the current fringe of the search tree.
 The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As
those nodes are expanded, they are dropped from the fringe. The search then "backs up" to the next shallowest node
that still has unexplored successors.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to the BFS algorithm.

Advantage:

 DFS requires much less memory, as it only needs to store a stack of the nodes on the path from the root node to the
current node.
 It takes less time than the BFS algorithm to reach the goal node (if it traverses the right path).

Disadvantage:

 There is the possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
 The DFS algorithm goes deep down in its search, and sometimes it may enter an infinite loop.

Example:

In the below search tree, we have shown the flow of depth-first search, and it will follow the order as:

Root node--->Left node ----> right node.


It will start searching from root node S, and traverse A, then B, then D and E, after traversing E, it will backtrack
the tree as E has no other successor and still goal node is not found. After backtracking it will traverse node C and
then G, and here it will terminate as it found goal node.

Completeness: DFS search algorithm is complete within finite state space as it will expand every node within a
limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given
by:

T(n) = 1 + b + b^2 + ... + b^m = O(b^m)

where m = maximum depth of any node, which can be much larger than d (the shallowest solution depth).

Space Complexity: The DFS algorithm needs to store only a single path from the root node (plus the unexpanded siblings
along it), hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

Optimal: DFS search algorithm is non-optimal, as it may generate a large number of steps or high cost to reach to the goal
node.
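An iterative DFS using an explicit stack can be sketched as follows. The adjacency list is an assumption chosen to reproduce the traversal order S, A, B, D, E, C, G described in the example above.

```python
# Assumed graph: DFS visits S, A, B, D, E, then backtracks to C and finds G.
graph = {
    "S": ["A", "C"], "A": ["B"], "B": ["D", "E"],
    "D": [], "E": [], "C": ["G"], "G": [],
}

def dfs(graph, start, goal):
    """Stack-based (LIFO) depth-first search; returns a path or None."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()                 # deepest (most recently added) node
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push children in reverse so the leftmost child is expanded first.
        for child in reversed(graph[node]):
            stack.append(path + [child])
    return None

print(dfs(graph, "S", "G"))  # ['S', 'C', 'G']
```

Note that the returned path ['S', 'C', 'G'] records only the branch to the goal; the dead-end branch through A, B, D, E is explored and then abandoned by backtracking.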

Uniform-cost Search Algorithm:

The primary goal of the uniform-cost search is to find a path to the goal node which has the lowest cumulative cost.
Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any
graph/tree where the optimal cost is in demand. A uniform-cost search algorithm is implemented by the priority
queue. It gives maximum priority to the lowest cumulative cost. Uniform cost search is equivalent to BFS
algorithm if the path cost of all edges is the same.

Advantages:

 Uniform cost search is optimal because at every state the path with the least cost is chosen.

Disadvantages:

 It does not care about the number of steps involved in searching; it is only concerned with path cost. Due to
this, the algorithm may get stuck in an infinite loop.

Example:


Completeness:

Uniform-cost search is complete, such as if there is a solution, UCS will find it.

Time Complexity:

Let C* be the cost of the optimal solution, and ε the smallest step cost towards the goal node. Then the number of steps
is C*/ε + 1. We add +1 because we start from state 0 and end at C*/ε.

Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌈C*/ε⌉)).

Space Complexity:

The same logic applies to space, so the worst-case space complexity of uniform-cost search is O(b^(1 + ⌈C*/ε⌉)).

Optimal:

Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
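The priority-queue implementation described above can be sketched with Python's `heapq`. The graph and edge costs below are assumptions for illustration; the heap is keyed on the cumulative path cost g(n), so the cheapest frontier path is always expanded first.

```python
import heapq

# Assumed weighted graph: the direct-ish route S->B->G costs 7,
# but the cheapest route is S->A->B->G with total cost 5.
graph = {
    "S": [("A", 1), ("B", 5)],
    "A": [("B", 2), ("G", 9)],
    "B": [("G", 2)],
    "G": [],
}

def uniform_cost_search(graph, start, goal):
    """Expand nodes in order of cumulative path cost g(n)."""
    frontier = [(0, start, [start])]       # min-heap of (g, node, path)
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                   # goal test on expansion => optimal
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for child, cost in graph[node]:
            if child not in explored:
                heapq.heappush(frontier, (g + cost, child, path + [child]))
    return None

print(uniform_cost_search(graph, "S", "G"))  # (5, ['S', 'A', 'B', 'G'])
```

Testing the goal only when a node is popped (not when it is generated) is what makes UCS optimal: a cheaper path to the goal cannot still be waiting in the heap.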

The informed search algorithm is more useful for large search space. Informed search algorithm uses the idea of
heuristic, so it is also called Heuristic search.

Heuristic function: A heuristic is a function used in informed search that finds the most promising
path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the
goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good
solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n),
and it estimates the cost of an optimal path between the pair of states. The value of the heuristic function is always
positive.

Q. A* Search Algorithm

A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach
node n from the start state, g(n). It combines features of UCS and greedy best-first search, by which it solves
problems efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic
function. This search algorithm expands a smaller search tree and provides an optimal result faster. The A* algorithm is
similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both
costs as f(n) = g(n) + h(n), and this sum is called the fitness number.


Algorithm of A* search:

Step1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not, if the list is empty then return failure and stops.

Step 3: Select the node from the OPEN list which has the smallest value of evaluation function (g+h), if node n is
goal node then return success and stop, otherwise

Step 4: Expand node n and generate all of its successors, and put n into the closed list. For each successor n', check
whether n' is already in the OPEN or CLOSED list, if not then compute evaluation function for n' and place into
Open list.

Step 5: Else, if node n' is already in OPEN or CLOSED, then attach it to the back pointer which
reflects the lowest g(n') value.

Step 6: Return to Step 2.

Advantages:

 The A* search algorithm performs better than other search algorithms.


 A* search algorithm is optimal and complete.
 This algorithm can solve very complex problems.

Disadvantages:

 It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
 A* search algorithm has some complexity issues.
 The main drawback of A* is memory requirement as it keeps all generated nodes in the memory, so it is not practical
for various large-scale problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is given in the
below table so we will calculate the f(n) of each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to reach any
node from start state.
Here we will use OPEN and CLOSED list.


Solution:

• Initialization: {(S, 5)}
• Iteration 1: {(S---> A, 4), (S--->G, 10)}
• Iteration 2: {(S---> A-->C, 4), (S---> A-->B, 7), (S--->G, 10)}
• Iteration 3: {(S---> A-->C--->G, 6), (S---> A-->C--->D, 11), (S---> A-->B, 7), (S--->G, 10)}
• Iteration 4 gives the final result: S--->A--->C--->G provides the optimal path with cost 6.

Points to remember:
• The A* algorithm returns the path which is found first, and it does not search all remaining paths.
• The efficiency of the A* algorithm depends on the quality of the heuristic.
• The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.
• Complete: The A* algorithm is complete as long as:
• the branching factor is finite, and
• the cost of every action is fixed.
• Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:
• Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic
for A* tree search. An admissible heuristic is optimistic in nature.
• Consistency: the second required condition is consistency, needed only for A* graph search.
• If the heuristic function is admissible, then A* tree search will always find the least-cost path.
• Time Complexity: The time complexity of A* search algorithm depends on heuristic function, and the
number of nodes expanded is exponential to the depth of solution d. So the time complexity is O(b^d),
where b is the branching factor.
• Space Complexity: The space complexity of A* search algorithm is O(b^d)
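The worked example can be run in Python with a priority queue ordered on f(n) = g(n) + h(n). The figure and heuristic table are not reproduced in these notes, so the edge costs and h-values below are reconstructions (assumptions) chosen to match the iteration trace above: f(S) = 5, f(S→A) = 4, f(S→G) = 10, and f(S→A→C→G) = 6.

```python
import heapq

# Reconstructed (assumed) graph and heuristic consistent with the trace.
graph = {
    "S": [("A", 1), ("G", 10)],
    "A": [("B", 2), ("C", 1)],
    "B": [],
    "C": [("D", 3), ("G", 4)],
    "D": [], "G": [],
}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}

def a_star(graph, h, start, goal):
    """A*: expand the node with the smallest f = g + h."""
    frontier = [(h[start], 0, start, [start])]   # min-heap of (f, g, node, path)
    closed = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for child, cost in graph[node]:
            if child not in closed:
                g2 = g + cost
                heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None

print(a_star(graph, h, "S", "G"))  # (6, ['S', 'A', 'C', 'G'])
```

With h set to zero everywhere this reduces to uniform-cost search, which is the sense in which A* is "UCS plus a heuristic".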
Q. AO* search or AND-OR Graph:

• The AO* algorithm is basically based on problem decomposition (breaking the problem down into small pieces).
• When a problem can be divided into a set of sub-problems, where each sub-problem can be solved
separately and a combination of these will be a solution, AND-OR graphs or AND-OR trees are used for
representing the solution.
• The decomposition of the problem, or problem reduction, generates AND arcs.
• The figure shows an AND-OR graph.


• To pass any exam, we have two options: either cheating or hard work.
• In this graph we are given two choices: first, do cheating (the red line), or work hard and (the arc) pass.
• When we have more than one choice and have to pick one, we apply an OR condition to choose
one (that is what we did here).
• Basically, the arc here denotes an AND condition.
• Here we have drawn the arc between work hard and pass because, by doing hard work, the
possibility of passing the exam is greater than by cheating.
Example:
• Let's try to understand it with the following diagram
• The algorithm always moves towards a lower cost value.

Memory bounded heuristic search:

• To reduce memory, iterative deepening is applied to heuristic search.
• Two memory-bounded algorithms:
1) RBFS (recursive best-first search)
2) MA* (memory-bounded A*) and SMA* (simplified memory-bounded A*)

RBFS:
• It attempts to mimic the operation of standard best-first search.
• It replaces the f-value of each node along the path with the best f-value of its children.
• It suffers from using too little memory.
• Even if more memory were available, RBFS has no way to make use of it.

SMA*:
• Proceeds like A*, expanding the best leaf until memory is full.
• It cannot add a new node without dropping an old one (it always drops the worst one).
• It expands the best leaf and deletes the worst leaf.
• If all leaves have the same f-value, it selects the same node for both expansion and deletion.
• SMA* is complete if any solution is reachable.

SMA* (Simplified Memory-Bounded A*)
• It is a shortest-path algorithm that is based on the A* algorithm.


• The difference between SMA* and A* is that SMA* uses a bounded memory, while the A* algorithm
might need exponential memory.
• Like the A*, it expands the most promising branches according to the heuristic. What sets SMA* apart is
that it prunes nodes whose expansion has revealed less promising than expected.
• SMA*, just like A* evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get
from the node to the goal: f(n) = g(n) + h(n).
• Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest
path from n to the goal, we have f(n) = estimated cost of the cheapest solution through n. The lower the f
value is, the higher priority the node will have.
• The difference from A* is that the f value of the parent node will be updated to reflect changes to this
estimate when its children are expanded. A fully expanded node will have an f value at least as high as that
of its successors.
• In addition, the node stores the f value of the best forgotten successor (or best forgotten child). This value is
restored if the forgotten successor is revealed to be the most promising successor.
• Properties:
• It makes full use of the allocated memory.
• It avoids repeated states.
• It is complete if the available memory is sufficient to store the shallowest solution path.
• It is optimal if the optimal solution is reachable with the available memory.
• When enough memory is available for the whole search, it is optimally efficient.

• IDA* (Iterative-deepening A*)
• Uses the f-cost (g + h) as the cutoff.
• At each iteration, the cutoff value is the smallest f-cost of any node that
exceeded the cutoff on the previous iteration.
• Recursive best-first search (RBFS)
• Best-first search with only linear space.
• Keeps track of the f-value of the best alternative path.
• As the recursion unwinds, it forgets the sub-tree and backs up the f-value of
the best leaf as its parent's f-value.
• SMA* proceeds like A*
• Expands the best leaf until memory is full.
• Then drops the worst leaf node and backs up the value of the forgotten node to its
parent.
• Complete if there is any reachable solution.
• Optimal if any optimal solution is reachable.
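The IDA* cutoff mechanism described above can be sketched compactly: a depth-first search bounded by f = g + h, where each new bound is the smallest f-value that exceeded the previous one. The small graph and h-values below are assumptions in the same style as the A* example.

```python
import math

# Assumed graph and heuristic for illustration.
graph = {
    "S": [("A", 1), ("G", 10)],
    "A": [("C", 1)],
    "C": [("G", 4)],
    "G": [],
}
h = {"S": 5, "A": 3, "C": 2, "G": 0}

def ida_star(graph, h, start, goal):
    """Iterative-deepening A*: DFS with an increasing f-cost cutoff."""
    bound = h[start]

    def search(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return f                      # report the f-value that overflowed
        if node == goal:
            return path
        minimum = math.inf
        for child, cost in graph[node]:
            if child not in path:         # avoid cycles along the current path
                result = search(child, g + cost, bound, path + [child])
                if isinstance(result, list):
                    return result         # found the goal: bubble the path up
                minimum = min(minimum, result)
        return minimum

    while True:
        result = search(start, 0, bound, [start])
        if isinstance(result, list):
            return result
        if result == math.inf:
            return None                   # no node left to relax the bound
        bound = result                    # next cutoff: smallest exceeding f

print(ida_star(graph, h, "S", "G"))  # ['S', 'A', 'C', 'G']
```

Like RBFS, this uses only linear memory in the depth of the current path, trading repeated node expansions for the exponential frontier memory that A* would need.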
