
AIT 307

ARTIFICIAL INTELLIGENCE

MARY LISA LEENUSE


Assistant Professor
Computer Science and
Engineering
SYLLABUS
Module – 1 (Introduction)
Introduction – What is Artificial Intelligence (AI)? The Foundations of AI,
History of AI, Applications of AI. Intelligent Agents – Agents and
Environments, Good behavior: The concept of rationality, Nature of
Environments - Specifying the task environment, Properties of task
environments. Structure of Agents - Agent programs, Basic kinds of agent
programs.
Module – 2 (Problem Solving)
Solving Problems by searching-Problem solving Agents, Example problems,
Searching for solutions, Uninformed search strategies, Informed search
strategies, Heuristic functions.
SYLLABUS
Module - 3 (Search in Complex environments)
Adversarial search - Games, Optimal decisions in games, The Minimax
algorithm, Alpha-Beta pruning. Constraint Satisfaction Problems – Defining
CSP, Example Problems, Constraint Propagation- inference in CSPs,
Backtracking search for CSPs, Structure of CSP problems.

Module - 4 (Knowledge Representation and Reasoning)


Logical Agents – Knowledge based agents, Logic, Propositional Logic,
Propositional Theorem proving, Agents based on Propositional Logic. First
Order Predicate Logic - Syntax and Semantics of First Order Logic, Using First
Order Logic, Knowledge representation in First Order Logic. Inference in First
Order Logic – Propositional Vs First Order inference, Unification and Lifting,
Forward chaining, Backward chaining, Resolution. Classical Planning -
Algorithms for planning state space search, Planning Graphs.
SYLLABUS
Module - 5 (Machine Learning)
Learning from Examples – Forms of Learning, Supervised Learning. Learning
Decision Trees – The decision tree representation, Inducing decision trees from
examples, Choosing attribute tests, Generalization and overfitting. Evaluating
and choosing the best hypothesis, Regression and classification with Linear
models.
Mark Distribution
Textbook
Stuart Russell and Peter Norvig. Artificial
Intelligence: A Modern Approach, 3rd Edition.
Prentice Hall.
INTRODUCTION
➢Father of Artificial Intelligence:- JOHN McCARTHY
➢According to John McCarthy, artificial intelligence is “the science
and engineering of making intelligent machines, especially intelligent
computer programs”.
INTRODUCTION
➢Artificial Intelligence is a way of making a computer, a computer-
controlled robot, or a piece of software think intelligently, in a manner
similar to the way intelligent humans think.
➢The term “ artificial intelligence” is used to describe machines that mimic
“cognitive” functions that humans associate with other human minds such
as “learning” and “problem-solving”.
➢Artificial Intelligence (AI) refers to the simulation of human intelligence
in machines that are programmed to think and act like humans. It involves
the development of algorithms and computer programs that can perform
tasks that typically require human intelligence such as visual perception,
speech recognition, decision-making, and language translation.
INTRODUCTION
➢AI seeks to enable machines to perceive their environment, learn from
experience, reason, and make decisions autonomously.
➢From virtual personal assistants and recommendation systems to
autonomous vehicles and healthcare diagnostics, AI has become
increasingly integrated into various aspects of our lives, revolutionizing
industries and reshaping the way we interact with technology.
➢Artificial intelligence is a wide-ranging branch of computer science
concerned with building smart machines capable of performing tasks that
typically require human intelligence
INTRODUCTION
➢Artificial intelligence allows machines to replicate the capabilities of the
human mind.
➢Artificial intelligence allows machines to replicate the capabilities of the
human mind. From the development of self-driving cars to the
development of smart assistants like Siri and Alexa, AI is a growing part
of everyday life.
INTRODUCTION
❖AI has three different levels:
➢Narrow AI:- Artificial intelligence is said to be narrow when the machine
can perform a specific task better than a human. Current AI research is at
this level.
➢General AI :- Artificial intelligence reaches the general state when it can
perform any intellectual task with the same accuracy level as a human
would.
➢Strong AI:- Artificial intelligence is strong when it can beat humans in
many tasks.
INTRODUCTION
❖Goals of AI:-
1. To Create Expert Systems – systems that exhibit intelligent behavior:
they learn, demonstrate, explain, and advise their users.
2. To Implement Human Intelligence in Machines – creating systems that
understand, think, learn, and behave like humans.
3. General-purpose AI:- Like the robots of science fiction; this is incredibly
hard. Human brains appear to have many special and general functions,
integrated in some amazing way that we really do not understand at all.
4. Special-purpose AI:- More doable, though non-trivial. Eg:- Chess/poker
playing programs, logistics planning, automated translation, voice
recognition, web search, data mining, medical diagnosis, etc.
INTRODUCTION
• Four Approaches Of AI
Thinking humanly: The cognitive Modeling Approach

➢If we are going to say that a given program thinks like a human, we must
have some way of determining how humans think.
➢We need to get inside the actual working of human minds.
➢There are three ways to do this:
1. Introspection:- Trying to catch our own thoughts as they go by.
2. Psychological Experiments:- Observing a person’s actions
3. Brain Imaging:- Observing the action of the brain.
Thinking humanly: The cognitive Modeling Approach

➢Once we have a sufficiently precise theory of the mind, it becomes possible
to express the theory as a computer program.
➢If the program’s input-output behavior matches the corresponding human
behavior, that is evidence that the program may think humanly.
➢Cognitive science brings together computer models from AI and
experimental techniques from psychology to construct precise and testable
theories of the human mind.
Acting humanly: The Turing Test approach
• The Turing test was proposed by Alan Turing in 1950.
• It was designed to provide a satisfactory operational definition of
intelligence.
• A human interrogator communicates with a human and a machine over a
text-only channel.
• Both the human and the machine try to act like a human.
• The interrogator tries to tell which is which.
Acting humanly: The Turing Test approach
• A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer.
• The computer would need to possess the following capabilities:
1. Natural language processing to enable it to communicate successfully in English
2. Knowledge representation to store what it knows or hears
3. Automated reasoning to use the stored information to answer questions and
to draw new conclusions
4. Machine learning to adapt to new circumstances and to detect and extrapolate
patterns
5. TOTAL TURING TEST- To pass the total Turing Test, the computer will need
❖computer vision to perceive objects
❖robotics to manipulate objects and move about
What is the Turing Test in Artificial Intelligence?

• NLP to communicate successfully.


• Knowledge Representation to act as its memory.
• Automated Reasoning to use the stored information to answer questions
and draw new conclusions.
• Machine Learning to detect patterns and adapt to new circumstances.
Thinking rationally: The “laws of thought” approach
SYLLOGISM: an instance of a form of reasoning in which a conclusion is
drawn from two given or assumed propositions, e.g., “Socrates is a man; all
men are mortal; therefore, Socrates is mortal.”
These laws of thought were supposed to govern the operation of the mind;
this resulted in the field called logic.
There are two main obstacles to this approach:
1. It is not easy to take informal knowledge and state it in the formal terms
required by logical notation
2. There is a big difference between solving a problem “in principle” and
solving it in practice.
Acting rationally: The rational agent approach

• An agent is just something that acts. All computer programs do
something, but computer agents are expected to do more.
• Computer agents operate autonomously, perceive their environment,
persist over a prolonged time period, adapt to change and create and pursue
goals.
• A rational agent acts to achieve the best outcome or, when there is
uncertainty, the best expected outcome
Acting rationally: The rational agent approach

• Making correct inferences, is sometimes a part of being a rational agent


because one way to act rationally is to reason logically to the conclusion.
• There is also ways of acting rationally that cannot be said to involve
inference.
How do we measure if Artificial Intelligence is
acting like a human?

➢Turing Test
➢The Cognitive Modelling Approach
➢The Law of Thought Approach
➢The Rational Agent Approach
Fields in AI
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

Philosophers made AI conceivable by considering the ideas that:
✓The mind is in some ways like a machine.
✓The mind operates on knowledge encoded in some internal representation.
✓Thought can be used to choose what actions to take.
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

• Philosophy:-
• The history of philosophy can be organized around the following series of
questions.
❑Can formal rules be used to draw valid conclusions?
❑How does the mind arise from a physical brain?
❑Where does knowledge come from?
❑How does knowledge lead to action?
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

• Philosophy:-
Philosophical terms that are important for AI:-
• Rationalism: power of reasoning in understanding the world
• Dualism: there is a part of the human mind (or soul or spirit) that is outside
of nature, exempt from physical laws
• Materialism: brain’s operation according to the laws of physics constitutes
the mind
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

• Philosophy:-
• Induction: general rules are acquired by exposure to repeated associations
between their elements
• Logical positivism: doctrine that holds that all knowledge can be
characterized by logical theories connected, ultimately, to observation
sentences that correspond to sensory inputs; thus logical positivism
combines rationalism and empiricism
• Confirmation theory: attempted to analyze the acquisition of knowledge
from experience
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

• Mathematics
• Mathematics provided the tools to manipulate statements of logical certainty
as well as uncertain, probabilistic statements.
• They also laid the groundwork for understanding computation and reasoning
about algorithms.
•What are the formal rules to draw valid conclusions?
•What can be computed?
• How do we reason with uncertain information?
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

• Mathematics
•Three fundamental areas:
1. logic
2. computation
3. probability.
• George Boole: worked out the details of propositional, or Boolean logic
• Gottlob Frege: creating the first-order logic that is used today
• Alan Turing: characterize exactly which functions are computable. Turing machine
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

• Economics
• Economics formulated the problem of making decisions that maximize the
expected outcome for the decision maker.
✓ How should we make decisions so as to maximize payoff?
✓ How should we do this when others may not go along?
✓ How should we do this when the payoff may be far in the future?
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

➢Neuroscience
✓Neuroscience is the study of the nervous system, particularly the brain.
✓How does the brain process information?
✓Neuroscientists discovered some facts about how the brain works and the
ways in which it is similar to and different from computers.
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

➢Neuroscience
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

➢Psychology
✓ How do humans and animals think and act?
✓ Psychologists adopted the idea that humans and animals can be considered
information processing machines.
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

➢Computer Engineering
✓How can we build an efficient computer?
✓Computer engineers provided the ever more powerful machines that make
AI applications possible.
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

➢Control theory and cybernetics


✓ How can artifacts operate under their own control?
✓Control theory deals with designing devices that act optimally on the basis
of feedback from the environment.
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

➢Linguistics
✓ How does language relate to thought?
✓ Modern linguistics and AI were “born” at about the same time, and grew up
together, intersecting in a hybrid field called Computational Linguistics or
Natural Language Processing.
Gestation of Artificial Intelligence (1943- 1955)
•Year 1943: The first work that is now recognized as AI was done by Warren
McCulloch and Walter Pitts in 1943. They proposed a model of artificial
neurons.
•Year 1949: Donald Hebb demonstrated an updating rule for modifying the
connection strength between neurons. His rule is now called Hebbian learning.
•Year 1950: Alan Turing, an English mathematician and a pioneer of machine
learning, published "Computing Machinery and Intelligence" in 1950, in which
he proposed a test. The test checks a machine's ability to exhibit intelligent
behavior equivalent to human intelligence, and is called the Turing test.
•Year 1955: Allen Newell and Herbert A. Simon created the first artificial
intelligence program, named "Logic Theorist". This program proved 38 of 52
mathematics theorems and found new and more elegant proofs for some
theorems.
Gestation of Artificial Intelligence (1943- 1955)

• The word "Artificial Intelligence" first adopted by American Computer scientist John
McCarthy at the Dartmouth Conference. For the first time, AI coined as an academic field.
• The golden years-Early enthusiasm (1956-1974)
•Year 1966:
• The researchers emphasized developing algorithms which can solve mathematical
problems. Joseph Weizenbaum created the first chatbot in 1966, which was named as ELIZA.
•Year 1972:
• The first intelligent humanoid robot, named WABOT-1, was built in Japan.
History of AI- Tutorial 1
INTELLIGENT
AGENTS
• An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators
AGENTS
• An agent runs in the cycle of perceiving, thinking, and acting. An agent
can be:-
1. Human Agent:- A human agent has eyes, ears, and other organs that
work for sensors, and the hand, legs, and vocal tract work for actuators.
2. Robotic Agent:- A robotic agent can have cameras and infrared range
finders for sensors and various motors for actuators.
3. Software Agent:- A software agent can receive keystrokes and file contents
as sensory input and display output on the screen.
AGENTS
• The term percept refers to the agent’s perceptual inputs at any given
instant.
• An agent’s percept sequence is the complete history of everything the
agent has ever perceived
• An agent’s choice of action at any given instant can depend on the entire
percept sequence observed to date, but not on anything it hasn’t perceived.
• An agent’s behavior is described by the agent function that maps any given
percept sequence to an action.
• agent = architecture + program
• The agent function is an abstract mathematical description; the agent
program is a concrete implementation, running within some physical
system.
INTELLIGENT AGENTS
The Vacuum Cleaner World
• This particular world has just two locations: squares A and B.
• The vacuum agent perceives which square it is in and whether there is
dirt in the square. It can choose to move left, move right, suck up the dirt,
or do nothing.
• One very simple agent function is the following: if the current square is
dirty, then suck; otherwise, move to the other square.
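As a sketch, this agent function can be written directly as a short program (a minimal illustration in Python; the function name is ours, not from the textbook):

```python
# A minimal sketch of the vacuum world's simple agent function.
# The percept is a (location, status) pair, e.g. ("A", "Dirty").

def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("A", "Clean")))  # Right
```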
The Vacuum Cleaner World

• Perception: The vacuum cleaner needs to sense its environment
(dirty/clean) to decide what action to take.
• Action: The agent has a limited set of actions (move, suck, etc.) that it
can perform.
• States and Transitions: The environment can be in different states
(rooms clean/dirty) and the agent's actions transition it between these
states.
• Goals: The vacuum cleaner has a clear goal (clean all rooms).
• Search and Planning: Different algorithms can be explored to help the
agent find the most efficient way to achieve its goal (cleaning all rooms).
The Vacuum Cleaner World

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
Agent’s function → look-up table
For many agents this would be a very large table
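A minimal sketch of such a table-driven agent, assuming the vacuum-world percepts above (only a few illustrative table entries are shown; a real table would need one entry per possible percept sequence):

```python
# Sketch of a table-driven agent: the agent function stored as an
# explicit look-up table indexed by the entire percept sequence.

table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

percepts = []  # the percept sequence observed so far

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Dirty")))  # Suck
print(table_driven_agent(("A", "Clean")))  # Right
```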
The Vacuum Cleaner World
Good Behaviour: Concept of Rationality
• A rational agent is one that does the right thing
• AI is concerned with creating rational agents, using ideas from game
theory and decision theory in various real-world scenarios.
• A rational agent is one that has clear preferences, models uncertainty, and
acts in a way that maximizes its performance measure over all possible
actions.
• For an AI agent, rational action is most important because in
reinforcement learning, the agent gets a positive reward for each best
possible action and a negative reward for each wrong action.
Example- Vacuum Cleaner Revisited
• We might propose to measure performance by the amount of dirt cleaned up in
a single eight-hour shift.
• With a rational agent, of course, what you ask for is what you get.
• A rational agent can maximize this performance measure by cleaning up the
dirt, then dumping it all on the floor, then cleaning it up again, and so on.
• A more suitable performance measure would reward the agent for having a clean
floor.
• For example, one point could be awarded for each clean square at each time step
(perhaps with a penalty for electricity consumed and noise generated).
•As a general rule, it is better to design performance measures according to what
one wants in the environment, rather than according to how one thinks the agent
should behave.
Performance measure
• Performance measure: An objective criterion for success of an agent's behavior.

• Performance measures of a vacuum-cleaner agent: amount of dirt cleaned up,
amount of time taken, amount of electricity consumed, level of noise
generated, etc.

• Performance measures of a self-driving car: time to reach destination
(minimize), safety, predictability of behavior for other agents, reliability,
etc.

• Performance measure of a game-playing agent: win/loss percentage
(maximize), robustness, unpredictability (to “confuse” the opponent), etc.
Rationality
The rationality of an agent depends on:
• The performance measure that defines the criterion of success
• The agent’s prior knowledge of the environment
• The actions that the agent can perform
• The agent’s percept sequence to date

•Rational Agent: For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance measure,
given the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
Omniscience, learning, and autonomy

• Omniscient agent: knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality.
• Rationality is not the same as perfection.
• Rationality maximizes expected performance, while perfection maximizes
actual performance.
• Exploration: doing actions in order to modify future percepts (information
gathering) is an important part of rationality
• A rational agent should learn as much as possible from what it perceives
• A rational agent should be autonomous: it should learn what it can to
compensate for partial or incorrect prior knowledge
Task Environment
• PEAS (Performance, Environment, Actuators, Sensors)
• In designing an agent, the first step must always be to specify the task environment
as fully as possible.
• Example: The task of designing a self-driving car.
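A PEAS description can be recorded as a simple structure. As a sketch, the entries below follow Russell and Norvig's well-known automated-taxi example:

```python
# PEAS description of the automated-taxi task environment
# (entries follow Russell & Norvig's textbook example).

taxi_peas = {
    "Performance measure": ["safe", "fast", "legal", "comfortable trip",
                            "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn",
                  "display"],
    "Sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer",
                "accelerometer", "engine sensors", "keyboard"],
}

for component, entries in taxi_peas.items():
    print(f"{component}: {', '.join(entries)}")
```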
Task Environment Types

•Fully observable vs. partially observable


•Single agent vs. Multi agent
•Deterministic vs. stochastic
•Episodic vs. sequential
•Static vs. dynamic
•Discrete vs. continuous
•Known vs. unknown
Fully observable vs. partially observable
• If an agent can sense or access the complete state of the environment at
any given time, the environment is said to be fully observable.
• A task environment is effectively fully observable if the sensors detect all
aspects that are relevant to the choice of action; relevance, in turn, depends
on the performance measure.
• Fully observable environments are convenient because the agent
need not maintain any internal state to keep track of the world.
• An environment might be partially observable because of noisy and
inaccurate sensors or because parts of the state are simply missing from
the sensor data
• If the agent has no sensors at all then the environment is unobservable
Single agent vs. multiagent

• If only one agent is involved in an environment, operating by itself, then
such an environment is called a single-agent environment.
• However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
• Chess is a competitive multiagent environment.
• The game of football is multiagent as it involves 11 players in each team.
Deterministic vs. stochastic

• If the next state of the environment is completely determined by the
current state and the action executed by the agent, then we say the
environment is deterministic; otherwise, it is stochastic.
• An agent need not worry about uncertainty in a fully observable,
deterministic environment.
• If the environment is partially observable, however, then it could
appear to be stochastic.
• Stochastic means the introduction of randomness or probability into
the algorithm and models.
• Environment is uncertain if it is not fully observable or not deterministic.
Deterministic vs. stochastic

• “stochastic” generally implies that uncertainty about outcomes is quantified in terms of probabilities
• a nondeterministic environment is one in which actions are characterized
by their possible outcomes, but no probabilities are attached to them
Episodic vs. sequential
• Episodic task environment:
◦ the agent’s experience is divided into atomic episodes.
◦ In each episode the agent receives a percept and then performs a single
action.
◦ Crucially, the next episode does not depend on the actions taken in
previous episodes.
◦ Many classification tasks are episodic.
• Sequential environments
◦ the current decision could affect all future decisions
◦ Chess and taxi driving are sequential: in both cases, short-term actions can
have long- term consequences.
◦ Episodic environments are much simpler than sequential environments
because the agent does not need to think ahead.
Static vs. dynamic
• If the environment can change while an agent is deliberating, then we
say the environment is dynamic for that agent; otherwise, it is static
• A static environment does not change while the agent is thinking.
• The passage of time as an agent deliberates is irrelevant.
• Dynamic environments, on the other hand, are continuously asking the
agent what it wants to do;
• if it hasn’t decided yet, that counts as deciding to do nothing.
• If the environment itself does not change with the passage of time but the
agent’s performance score does, then we say the environment is
semi-dynamic.
Discrete vs. continuous
• If the number of distinct percepts and actions is limited, the environment is
discrete, otherwise it is continuous.
• The chess environment has a finite number of distinct states (excluding the clock).
• Chess also has a discrete set of percepts and actions.
• Taxi driving is a continuous-state and continuous-time problem: the speed and location
of the taxi and of the other vehicles sweep through a range of continuous values and do
so smoothly over time
• Taxi-driving actions are also continuous (steering angles, etc.).
• Input from digital cameras is discrete, strictly speaking, but is typically treated as
representing continuously varying intensities and locations.
Known vs. unknown
• In a known environment, the outcomes (or outcome probabilities if the environment is
stochastic) for all actions are given.
• If the environment is unknown, the agent will have to learn how it works
in order to make good decisions
• a known environment can be partially observable
◦ for example, in solitaire card games, I know the rules but am still unable to see the
cards that have not yet been turned over.
• An unknown environment can be fully observable
◦ in a new video game, the screen may show the entire game state, but I
still don’t know what the buttons do until I try them.
Structure of Agent
• The job of AI is to design an agent program that implements the
agent function: the mapping from percepts to actions.
• This program will run on some sort of computing device with physical
sensors and actuators, called the architecture.
• agent = architecture + program
• Architecture makes the percepts from the sensors available to the
program, runs the program, and feeds the program’s action choices to
the actuators as they are generated
Agent programs
• Agent program: use current percept as input from the sensors and
return an action to the actuators
• Agent function: takes the entire percept history
Agent programs
• Let P be the set of possible percepts and let T be the lifetime of the
agent (the total number of percepts it will receive)
• The lookup table will contain Σ_{t=1}^{T} |P|^t entries.
• Consider the automated taxi: the visual input from a single camera
comes in at the rate of roughly 27 megabytes per second (30 frames per
second, 640 × 480 pixels with 24 bits of color information). This gives a
lookup table with over 10^250,000,000,000 entries for an hour’s driving.
• Even the lookup table for chess, a tiny, well-behaved fragment of the
real world, would have at least
Agent programs

• 10^150 entries, which means that:
a) no physical agent in this universe will have the space to store the table,
b) the designer would not have time to create the table,
c) no agent could ever learn all the right table entries from its
experience, and
d) even if the environment is simple enough to yield a feasible table
size, the designer still has no guidance about how to fill in the table
entries.
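A tiny computation shows how quickly the table size Σ_{t=1}^{T} |P|^t explodes; the numbers below are toy values, chosen purely for illustration:

```python
# Number of look-up table entries: sum over t = 1..T of |P|**t, where
# P is the set of possible percepts and T is the agent's lifetime.

def table_entries(num_percepts, lifetime):
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

# Just 10 possible percepts and a lifetime of 20 time steps already
# require more than 10**20 table entries.
print(table_entries(10, 20))  # 111111111111111111110
```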
Types of Agent Programs
• Four basic kinds of agent programs that embody the principles underlying
almost all intelligent systems:
1. Simple reflex agents;
2. Model-based reflex agents;
3. Goal-based agents; and
4. Utility-based agents
Simple reflex agents
• Select actions on the basis of the current percept, ignoring the rest of
the percept history
• Agents do not have memory of past world states or percepts.
• So, actions depend solely on current percept.
• Action becomes a “reflex.”
Simple reflex agents
• Agents select actions on the basis of the current percept, ignoring the
rest of the percept history.
• For example, the vacuum agent is a simple reflex agent, because its
decision is based only on the current location and on whether that
location contains dirt. An agent program for this agent is shown in
Figure.
Simple reflex agents

• Simple reflex behaviors occur even in more complex environments.
Imagine yourself as the driver of the automated taxi. If the car in front
brakes and its brake lights come on, then you should notice this and
initiate braking. In other words, some processing is done on the visual
input to establish the condition we call “The car in front is braking.”
Then, this triggers some established connection in the agent program
to the action “initiate braking.”
• We call such a connection a condition–action rule, written as: if car-in-
front-is-braking then initiate-braking
Simple reflex agents

• The INTERPRET-INPUT function generates an abstracted description of
the current state from the percept.
• The RULE-MATCH function returns the first rule in the set of rules
that matches the given state description.
• This will work only if the correct decision can be made on the basis of
only the current percept that is, only if the environment is fully
observable.
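A minimal Python sketch of this agent program; interpret_input and rule_match below are illustrative stand-ins for the textbook's INTERPRET-INPUT and RULE-MATCH:

```python
# Skeleton of a simple reflex agent: condition-action rules matched
# against an abstracted description of the current percept only.

rules = {
    "car-in-front-is-braking": "initiate-braking",
    "square-is-dirty": "Suck",
}

def interpret_input(percept):
    # Abstract the raw percept into a state description; here the
    # percept is assumed to already be a condition string.
    return percept

def rule_match(state, rules):
    # Return the action of the rule matching the state, if any.
    return rules.get(state, "NoOp")

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return rule_match(state, rules)

print(simple_reflex_agent("car-in-front-is-braking"))  # initiate-braking
```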
Simple reflex agents

• Even a little bit of unobservability can cause serious trouble. For
example, the braking rule given earlier assumes that the condition car-
in-front-is-braking can be determined from the current percept—a
single frame of video. This works if the car in front has a centrally
mounted brake light.
• Infinite loops are often unavoidable for simple reflex agents operating
in partially observable environments. Escape from infinite loops is
possible if the agent can randomize its actions.
Model-based reflex agents
• It works by finding a rule whose condition matches the current situation
Key difference (with respect to simple reflex agents):
• Agents have internal state, which is used to keep track of past states of
the world.
• Agents have the ability to represent change in the World.
• The current state is stored inside the agent which maintains some kind of
structure describing the part of the world which cannot be seen.
• Updating the internal state information as time goes by requires two
kinds of knowledge to be encoded in the agent program:
• we need some information about how the world evolves independently
of the agent
Model-based reflex agents
• we need some information about how the agent’s own actions affect the
world. Knowledge about “how the world works” is called a model of the
world.
• An agent that uses such a model is called a model-based agent.
• Figure below gives the structure of the model-based reflex agent with
internal state, showing how the current percept is combined with the
old internal state to generate the updated description of the current
state, based on the agent’s model of how the world works
Model-based reflex agents
Model-based reflex agents
• The agent program is shown in the figure below; UPDATE-STATE is
responsible for creating the new internal state description.
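A sketch of the same skeleton extended with internal state; update_state below is an illustrative stand-in for the textbook's UPDATE-STATE, and the rules and model are toy placeholders:

```python
# Skeleton of a model-based reflex agent: internal state is updated
# from the old state, the last action, the new percept, and a model
# of how the world works.

state = {}          # the agent's current conception of the world state
last_action = None  # most recent action
model = None        # knowledge of "how the world works" would go here

rules = [
    (lambda s: s.get("status") == "Dirty", "Suck"),
    (lambda s: s.get("location") == "A", "Right"),
    (lambda s: True, "Left"),
]

def update_state(state, action, percept, model):
    # Illustrative: fold the new percept into the state description.
    new_state = dict(state)
    new_state.update(percept)
    return new_state

def rule_match(state, rules):
    for condition, action in rules:
        if condition(state):
            return action

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept, model)
    last_action = rule_match(state, rules)
    return last_action

print(model_based_reflex_agent({"location": "A", "status": "Dirty"}))  # Suck
```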
Goal-based agents
• Knowing something about the current state of the environment is not
always enough to decide what to do.
• For example, at a road junction, the taxi can turn left, turn right, or go
straight on.
• The correct decision depends on where the taxi is trying to get to. In other
words, as well as a current state description, the agent needs some sort of
goal information that describes situations that are desirable—for example,
being at the passenger’s destination.
• The agent program can combine this with the model (the same
information as was used in the model based reflex agent) to choose
actions that achieve the goal.
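A minimal sketch of goal-based action selection: simulate each action with the model and pick one whose predicted state satisfies the goal. The junction example and all names here are illustrative:

```python
# Sketch of goal-based selection: use a model to predict the result
# of each action and choose one that reaches a goal state.

actions = ["turn-left", "turn-right", "go-straight"]

def model(state, action):
    # Illustrative transition model for a road junction.
    outcomes = {"turn-left": "market",
                "turn-right": "station",
                "go-straight": "airport"}
    return outcomes[action]

def goal_based_agent(state, goal):
    for action in actions:
        if model(state, action) == goal:
            return action
    return None  # no single action reaches the goal; search/planning needed

print(goal_based_agent("junction", "airport"))  # go-straight
```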
Goal-based agents
• Knowledge of the current state of the environment is not always
sufficient for an agent to decide what to do.
• Sometimes goal-based action selection is straightforward—for
example, when goal satisfaction results immediately from a single
action.
• Sometimes it will be trickier—for example, when the agent has to
consider long sequences of twists and turns in order to find a way to
achieve the goal.
• Search and planning are the subfields of AI devoted to finding action
sequences that achieve the agent’s goals.
Utility-based agents
• These agents are similar to the goal-based agent but provide an extra
component of utility measurement which makes them different by
providing a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best
way to achieve the goal.
• The Utility-based agent is useful when there are multiple possible
alternatives, and an agent has to choose in order to perform the best
action.
• The utility function maps each state to a real number to check how
efficiently each action achieves the goals.
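A sketch of this idea: the utility function maps each predicted state to a real number, and the agent picks the action with the highest-utility outcome. All states and values below are invented for illustration:

```python
# Sketch of utility-based selection: choose the action leading to the
# state with the highest utility, not merely one satisfying a goal.

def model(state, action):
    outcomes = {"highway": "fast-but-far", "shortcut": "slow-but-near"}
    return outcomes[action]

def utility(state):
    # Maps each state to a real number (illustrative values).
    return {"fast-but-far": 0.7, "slow-but-near": 0.4}[state]

def utility_based_agent(state, actions):
    return max(actions, key=lambda a: utility(model(state, a)))

print(utility_based_agent("junction", ["highway", "shortcut"]))  # highway
```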
Utility-based agents

• Goals alone are not enough to generate high-quality behavior in most
environments. Goals just provide a crude binary distinction between
“happy” and “unhappy” states. Because “happy” does not sound very
scientific, economists and computer scientists use the term utility
instead
• An agent’s utility function is essentially an internalization of the
performance measure. If the internal utility function and the external
performance measure are in agreement, then an agent that chooses
actions to maximize its utility will be rational according to the external
performance measure.
Learning agent
• A learning agent in AI is the type of agent which can learn from its past
experiences, or it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt
automatically through learning.
• A learning agent has mainly four conceptual components, which are:
1. Learning element: It is responsible for making improvements by
learning from the environment.
2. Critic: The learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed performance
standard.
Learning agent
3. Performance element: It is responsible for selecting external action.
4. Problem generator: This component is responsible for suggesting actions
that will lead to new and informative experiences.
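How the four components might fit together, as a minimal sketch (all class, method, and argument names here are illustrative, not from the textbook):

```python
# Sketch of the four components of a learning agent wired together.

class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects actions
        self.learning_element = learning_element        # improves the agent
        self.critic = critic                            # scores behavior
        self.problem_generator = problem_generator      # suggests experiments

    def step(self, percept):
        feedback = self.critic(percept)   # how well are we doing?
        self.learning_element(feedback)   # improve from the feedback
        exploratory = self.problem_generator(percept)
        return exploratory or self.performance_element(percept)

# Trivial stand-ins so the sketch runs end to end.
agent = LearningAgent(
    performance_element=lambda p: "Suck" if p["status"] == "Dirty" else "Right",
    learning_element=lambda feedback: None,
    critic=lambda p: 0.0,
    problem_generator=lambda p: None,
)
print(agent.step({"location": "A", "status": "Dirty"}))  # Suck
```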
