Lecture - 3 - 6


Introduction to Agents

Dr. Ashish Kumar


Associate Professor-CSE
Manipal University Jaipur
Agents
• An agent is anything that can perceive its environment through sensors and act upon that
environment through actuators/effectors.
• The agent analyses the complete history of its percepts using an agent function (or an agent program),
that maps the sequence of percepts to an action.
• Agents interact with the environment in two main ways: perception and action.
• Perception in Artificial Intelligence is the process of interpreting vision, sounds, smell, and touch
through sensors.
• Action is when the agent changes the environment through active interaction, guided by past and
current percepts. So we can say that the agent transforms information from one format to
another: perceptual data is transformed into action.
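The percept-to-action mapping described above can be sketched as a tiny program. The percept names and the policy below are invented for illustration:

```python
# Minimal sketch of the percept -> action cycle: the agent function maps
# the sequence of percepts seen so far to an action.
# Percept names ("path-clear", "obstacle") and actions are hypothetical.

class Agent:
    """Maps the sequence of percepts seen so far to an action."""

    def __init__(self):
        self.percept_history = []

    def perceive(self, percept):
        # Sensors add each new percept to the history.
        self.percept_history.append(percept)

    def agent_function(self):
        # Trivial policy for illustration: act on the latest percept only.
        latest = self.percept_history[-1]
        return "move" if latest == "path-clear" else "stop"

agent = Agent()
agent.perceive("path-clear")
print(agent.agent_function())  # -> move
agent.perceive("obstacle")
print(agent.agent_function())  # -> stop
```

A real agent program would replace the trivial policy with rules, search, or learning, as the later slides describe.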
Agents
• An AI system is composed of an agent and its environment.
• The agents act in their environment. The environment may contain other agents.
• An agent is anything that can be viewed as:
• Perceiving its environment through sensors and
• Acting upon that environment through actuators.
• We need to build intelligent agents that work in an environment.
• In simple terms, we can think of the Agent as a player and
the Environment as the playground.

Humans are Agents???
Intelligent Agents
• An Intelligent Agent is a computational entity or software program that possesses the ability to perceive
its environment, make decisions, and take actions in pursuit of specific goals or objectives.
• These agents operate autonomously, meaning they can act independently and make decisions without
direct human intervention.
• Intelligent agents are characterized by their capacity to exhibit intelligent behavior, which includes
reasoning, learning, problem-solving, and adapting to changing circumstances.
• AI researchers and developers aim to create systems that can perform complex tasks, make decisions in
real-time, and effectively interact with and adapt to dynamic environments.
Key Characteristics of Intelligent Agents
• Autonomy: Intelligent agents are autonomous entities capable of acting independently without direct
human control or intervention. They have the ability to make decisions and take actions based on their
internal mechanisms and perceived information from the environment.
• Perception: Intelligent agents are equipped with sensors or other means of perception that allow them to
gather information from their environment. They can perceive and interpret data from various sources,
such as cameras, microphones, or other sensors, to understand the state of the world around them.
• Decision-Making: Intelligent agents possess decision-making capabilities, which involve processing the
information gathered from the environment and selecting appropriate actions to achieve their goals.
Decision-making can be rule-based, heuristic-based, or learned from data through machine learning
algorithms.
• Goal-Oriented Behavior: Intelligent agents are designed to pursue specific goals or objectives. They have
predefined goals or tasks to accomplish and are driven by their internal mechanisms to achieve those
goals efficiently.
Key Characteristics of Intelligent Agents
• Adaptability: Intelligent agents are adaptable and can adjust their behavior in response to changes in
the environment or unexpected situations. They can learn from past experiences and modify their
decision-making process to improve performance.
• Proactiveness: Intelligent agents are proactive in their actions. They can anticipate future events and
take preemptive actions to achieve their goals more effectively.
• Reactive Capability: Reactive agents are a type of intelligent agent that responds to immediate stimuli
from the environment without maintaining internal models or planning. They react based on
predefined rules or behaviors.
• Communication: Intelligent agents can often communicate with other agents or humans to exchange
information, coordinate actions, or collaborate on tasks.
• Learning and Improvement: Intelligent agents can learn from their experiences and feedback from the
environment. They can improve their performance over time through reinforcement learning,
supervised learning, unsupervised learning, or a combination of these techniques.
Type of Intelligent Agents
• Agents can be classified into different types based on their characteristics, such as whether they are
reactive or proactive, whether they have a fixed or dynamic environment, and whether they are single or
multi-agent systems.
• Reactive agents are those that respond to immediate stimuli from their environment and take actions
based on those stimuli. Proactive agents, on the other hand, take initiative and plan ahead to achieve
their goals. Also known as Deliberative agents.
• Hybrid agents combine the characteristics of reactive and deliberative agents. They can respond to
immediate stimuli from their environment, but they also have a model of the world and a plan for
achieving their goals.
• The environment in which an agent operates can also be fixed or dynamic. Fixed environments have a
static set of rules that do not change, while dynamic environments are constantly changing and require
agents to adapt to new situations.
• Multi-agent systems involve multiple agents working together to achieve a common goal. These agents
may have to coordinate their actions and communicate with each other to achieve their objectives.
Rational Agents
• Artificial intelligence is defined as the study of rational agents.
• Agents have inherent goals that they want to achieve (e.g. survive, reproduce).
• A rational agent acts in a way to maximize the achievement of its goals.
• A rational agent could be anything that makes decisions, such as a person, firm, machine, or software.
• It carries out an action with the best outcome after considering past and current percepts (agent’s
perceptual inputs at a given instance).
• True maximization of goals requires omniscience and unlimited computational abilities.
• Limited rationality involves maximizing goals within the computational and other resources available.
• Rational Agent at any given time depends on:
• The performance measure that defines degree of success.
• Everything that the agent has perceived so far (percept sequence).
• What the agent knows about the environment.
• The actions that the agent can perform.
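The idea that a rational agent picks the action with the best outcome under its performance measure can be sketched as follows; the actions and their scores are hypothetical numbers, not from the slides:

```python
# Hedged sketch: a rational agent selects the action that maximizes
# a given performance measure. Actions and scores are invented.

def rational_choice(actions, performance_measure):
    """Return the action that maximizes the performance measure."""
    return max(actions, key=performance_measure)

# Hypothetical vacuum agent: score = cleanliness gained minus movement cost.
scores = {"suck": 10 - 1, "move-left": 0 - 1, "no-op": 0}
action = rational_choice(list(scores), lambda a: scores[a])
print(action)  # -> suck
```

Limited rationality, as noted above, means the maximization is done within the agent's available compute, not over all possible futures.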
Example of Rational Agents

Human???
Types of Environments
1. Fully observable vs Partially Observable: A fully observable environment is one in which an agent’s sensors
can perceive or access the complete state of the environment at any given time; otherwise, it is a partially
observable environment.
• When the agent has no sensors at all, the environment is said to be unobservable.
• It’s simple to maintain a completely observable environment because there’s no need to keep track of the
environment’s past history.
• Example:
• Chess — the board is fully observable, so are opponent moves.
• Driving — the environment is partially observable because you never know what’s around the corner.
Types of Environments
2. Deterministic vs Stochastic: A deterministic environment is one in which an agent’s current state and
chosen action totally determine the next state of the environment.
• Unlike deterministic environments, stochastic environments are random in nature and cannot be
totally predicted by an agent.
• Example:
• Chess has only a few possible movements for pieces in their current state, and these moves can be predicted.
• Self-Driving Cars – The activities of a self-driving car are not consistent; they change over time.
3. Competitive vs Collaborative: When an agent competes with another agent to optimize output, it is said
to be in a competitive environment. Ex: Chess is a competitive game in which the agents compete with one
another to win the game, which is the output.
• When multiple agents collaborate to produce the desired output, an agent is said to be in a
collaborative environment. When multiple self-driving cars are detected on the road, they cooperate
together to prevent crashes and reach their destination, which is the desired output.
Types of Environments
4. Static vs Dynamic: A static environment is one in which there is no change in its state.
• When an agent enters a vacant house, there is no change in the surroundings.
• A dynamic environment is one that is always changing when the agent is performing some action.
• A roller coaster ride is dynamic since it is in motion and the surroundings change all the time.

5. Discrete vs Continuous: A discrete environment is one that has a finite number of actions that can be
deliberated in the environment to produce the output.
• Chess is a discrete game since it has a limited or finite number of moves. The number of moves varies
from game to game, but it is always finite.
• The environment in which actions cannot be numbered, i.e. is not discrete, is referred to as continuous.
• Because their actions cannot be counted, self-driving cars are examples of continuous environments.
Types of Environments
6. Single-agent vs Multi-agent: A single-agent environment is defined as one in which only one agent
participates.
• A person left alone in a maze or forest or playing a crossword puzzle is an example of the single-agent
system.
• A multi-agent environment is one in which more than one agent exists.
• Football is a multi-agent game since each team has 11 players.

7. Episodic vs sequential: The episodic environment is also called the non-sequential environment. In an
episodic environment, an agent’s current state or action will not affect a future state or action. Whereas, in
a non-episodic environment, an agent’s current state or action will affect a future action and is also called
the sequential environment. That is, the agent performs the independent tasks in the episodic
environment, whereas in the non-episodic environment all agents’ actions are related.
Types of Environments
8. Known vs Unknown: These two refer to the agent’s (or designer’s) state of knowledge rather than to
the environment itself. In a known environment, the outcomes for all actions are given. If the environment
is unknown, the agent will have to learn how it works in order to make good decisions. These settings are
good examples of Exploitation (Known Environment) and Exploration (Unknown Environment), which
come up in Reinforcement Learning.

9. Accessible vs Inaccessible: If an agent can acquire complete and accurate knowledge about the state of
the environment, that environment is said to be accessible; otherwise, it is referred to as inaccessible.
• An example of an accessible environment is an empty room whose state can be fully described by its temperature.
• An example of an inaccessible environment is information about an event on Earth.
Examples of Task Environments
Types of Agents -1
• Simple Reflex Agents ignore the rest of the percept history and act only on the basis of the current
percept. Percept history is the history of all that an agent has perceived to date. The agent function is
based on the condition-action rule. A condition-action rule is a rule that maps a state i.e., a condition to
an action. If the condition is true, then the action is taken, else not. This agent function only succeeds
when the environment is fully observable. For simple reflex agents operating in partially observable
environments, infinite loops are often unavoidable. It may be possible to escape from infinite loops if the
agent can randomize its actions.
Problems with simple reflex agents:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The condition-action rule table is usually too big to generate and store.
• If any change occurs in the environment,
then the collection of rules needs to be updated.
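A simple reflex agent's condition-action rules can be sketched with the classic two-square vacuum world; the rule set below is a minimal textbook-style illustration, not a complete design:

```python
# Simple reflex agent for a two-square vacuum world (squares "A" and "B").
# It acts on the current percept only -- no percept history is kept.

def simple_reflex_vacuum(percept):
    location, status = percept
    # Condition-action rules: condition on the left, action on the right.
    if status == "Dirty":
        return "Suck"       # if current square is dirty -> clean it
    if location == "A":
        return "Right"      # if in A and clean -> move to B
    return "Left"           # if in B and clean -> move to A

print(simple_reflex_vacuum(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum(("A", "Clean")))  # -> Right
print(simple_reflex_vacuum(("B", "Clean")))  # -> Left
```

Note the agent can only work because this tiny world is fully observable; in a partially observable world the same rules could loop forever, as the slide warns.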
Types of Agents -2
• Model-Based Reflex Agents work by finding a rule whose condition matches the current situation. A
model-based agent can handle partially observable environments by the use of a model about the
world. The agent has to keep track of the internal state which is adjusted by each percept and that
depends on the percept history. The current state is stored inside the agent which maintains some kind
of structure describing the part of the world which cannot be seen.
Updating the state requires information about:
• How the world evolves independently of the agent.
• How the agent’s actions affect the world.
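A minimal sketch of a model-based reflex agent follows, keeping an internal state about squares it believes are clean so it can act under partial observability; the world model and rules are invented for illustration:

```python
# Model-based reflex agent for the two-square vacuum world: the agent sees
# only its current square, so it keeps an internal model of the rest.

class ModelBasedVacuum:
    def __init__(self):
        self.believed_clean = set()  # internal model of the unseen world

    def act(self, percept):
        location, status = percept
        # 1. Update the internal state from the current percept.
        if status == "Clean":
            self.believed_clean.add(location)
        else:
            self.believed_clean.discard(location)
        # 2. Match a rule against the updated state.
        if status == "Dirty":
            return "Suck"
        for square in ("A", "B"):
            if square not in self.believed_clean:
                return f"GoTo-{square}"
        return "NoOp"  # model says everything is clean

agent = ModelBasedVacuum()
print(agent.act(("A", "Dirty")))  # -> Suck
print(agent.act(("A", "Clean")))  # -> GoTo-B
print(agent.act(("B", "Clean")))  # -> NoOp
```

The `believed_clean` set is exactly the "structure describing the part of the world which cannot be seen" mentioned above.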
Types of Agents -3
• Goal-Based Agents take decisions based on how far they are currently from their goal(description of
desirable situations). Their every action is intended to reduce their distance from the goal. This allows
the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state. The
knowledge that supports its decisions is represented explicitly and can be modified, which makes these
agents more flexible.
• They usually require search and planning.
• The goal-based agent’s behavior can easily be changed.
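The search-and-planning side of a goal-based agent can be sketched with a breadth-first search toward the goal state; the room graph below is a toy assumption:

```python
# Goal-based agent sketch: choose actions by searching for a path that
# reaches the goal state. The map is an invented toy example.
from collections import deque

def plan_to_goal(start, goal, neighbors):
    """Breadth-first search returning the state sequence to the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # goal unreachable

# Hypothetical map: rooms connected in a line A - B - C - D.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(plan_to_goal("A", "D", lambda s: graph[s]))  # -> ['B', 'C', 'D']
```

Because the goal is explicit data rather than hard-wired rules, changing the agent's behavior is as simple as passing a different `goal`, which is the flexibility the slide highlights.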
Types of Agents -4
• Utility-Based Agents choose actions based on a preference (utility) for each state. When there are
multiple possible alternatives, a utility-based agent is used to decide which one is best.
Sometimes achieving the desired goal is not enough: we may look for a quicker, safer, cheaper trip to
reach a destination. Agent happiness should be taken into consideration, and utility describes how “happy”
the agent is.
• Because of the uncertainty in the world, a utility agent
chooses the action that maximizes the expected utility.
• A utility function maps a state onto a real number
which describes the associated degree of happiness.
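Maximizing expected utility under uncertainty can be sketched as follows; the routes, outcome probabilities, and utility values are invented numbers:

```python
# Utility-based agent sketch: pick the action whose outcomes, weighted by
# their probabilities, give the highest expected utility.
# Routes, probabilities, and utilities below are invented.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "highway":  [(0.9, 10), (0.1, -20)],  # usually fast, small jam risk
    "backroad": [(1.0, 6)],               # slower but certain
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> highway  (EU 7.0 vs 6.0)
```

Here the utility function is the slide's mapping from a state to a real number describing the associated degree of happiness.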
Types of Agents -5
• A Learning Agent in AI is an agent that can learn from its past experiences, i.e. it has learning
capabilities. It starts acting with basic knowledge and is then able to act and adapt automatically
through learning. A learning agent has four main conceptual components:
1) Learning element: It is responsible for making improvements by learning from the environment.
2) Critic: The learning element takes feedback from the critic,
which describes how well the agent is doing with respect to
a fixed performance standard.
3) Performance element: It is responsible for selecting
external action.
4) Problem Generator: This component is responsible for
suggesting actions that will lead to new and informative
experiences.
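The four components above can be wired together in a toy sketch; the task (moving toward a target on a number line) and the update rule are invented for illustration:

```python
# Toy learning agent showing the four components: performance element,
# critic, learning element, and problem generator. Task and rules invented.

class LearningAgent:
    def __init__(self):
        self.step_size = 1  # knowledge that the learning element improves

    def performance_element(self, position, target):
        """Selects the external action: step toward the target."""
        direction = 1 if target > position else -1
        return direction * self.step_size

    def critic(self, old_dist, new_dist):
        """Feedback against a fixed standard: did the agent get closer?"""
        return new_dist < old_dist

    def learning_element(self, doing_well):
        """Improves the performance element from the critic's feedback."""
        if doing_well:
            self.step_size += 1                       # be bolder
        else:
            self.step_size = max(1, self.step_size - 1)  # be cautious

    def problem_generator(self, position):
        """Suggests an exploratory move to gain new experience."""
        return position + 2 * self.step_size

agent = LearningAgent()
pos, target = 0, 10
new_pos = pos + agent.performance_element(pos, target)
agent.learning_element(agent.critic(abs(target - pos), abs(target - new_pos)))
print(agent.step_size)  # -> 2 (the critic reported progress)
```

Each method maps one-to-one onto the numbered components on the slide, which is the only point of this sketch.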
Types of Agents -6
• Multi-Agent Systems : These agents interact with other agents to achieve a common goal. They may have
to coordinate their actions and communicate with each other to achieve their objective.
• A multi-agent system (MAS) is a system composed of multiple interacting agents that are designed to
work together to achieve a common goal. These agents may be autonomous or semi-autonomous and
are capable of perceiving their environment, making decisions, and taking action to achieve the
common objective.
• In a homogeneous MAS, all the agents have the same capabilities, goals, and behaviors.
• In contrast, in a heterogeneous MAS, the agents have different capabilities, goals, and behaviors.
• Cooperative MAS involves agents working together to achieve a common goal, while competitive MAS
involves agents working against each other to achieve their own goals. In some cases, MAS can also
involve both cooperative and competitive behavior (Hybrid MAS), where agents must balance their own
interests with the interests of the group.
Types of Agents -7
• Hierarchical Agents are organized into a hierarchy, with high-level agents overseeing the behavior of
lower-level agents. The high-level agents provide goals and constraints, while the low-level agents carry
out specific tasks.
• These goals and constraints are typically based on the overall objective of the system. For example, in a
manufacturing system, the high-level agents might set production targets for the lower-level agents
based on customer demand.
• These subtasks may be relatively simple or more complex, depending on the specific application. For
example, in a transportation system, low-level agents might be responsible for managing traffic flow at
specific intersections.
• Hierarchical agents are useful in complex environments with many tasks and sub-tasks.
• One advantage of hierarchical agents is that they allow for more efficient use of resources. By organizing
agents into a hierarchy, it is possible to allocate tasks to the agents that are best suited to carry them
out, while avoiding duplication of effort. This can lead to faster, more efficient decision-making and
better overall performance of the system.
“Thank you for being such an
engaged audience during my
presentation.”
- Dr. Ashish Kumar
