AI
1)
Artificial intelligence is defined as the study of rational agents. A rational agent can be anything that makes decisions, such as a person, firm, machine, or piece of software. It carries out the action with the best outcome after considering past and current percepts (the agent's perceptual inputs at a given instant). An AI system is composed of an agent and its environment. Agents act in their environment, and the environment may contain other agents.
To understand the structure of intelligent agents, we should be familiar with architecture and agent programs. The architecture is the machinery that the agent executes on: a device with sensors and actuators, for example a robotic car, a camera, or a PC. The agent program is an implementation of an agent function. An agent function is a map from the percept sequence (the history of everything the agent has perceived to date) to an action.
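As a rough Python sketch of this distinction (all names here are hypothetical, not from the text): the agent function maps the whole percept sequence to an action, while the agent program is what the architecture calls with one percept at a time.

from typing import Callable, List

Percept = str
Action = str

def make_agent_program() -> Callable[[Percept], Action]:
    """Agent program: invoked once per step; it accumulates the percept sequence."""
    percept_sequence: List[Percept] = []

    def agent_function(percepts: List[Percept]) -> Action:
        # Agent function: maps the full percept history to an action (toy mapping).
        return ("act-on-" + percepts[-1]) if percepts else "NoOp"

    def program(percept: Percept) -> Action:
        percept_sequence.append(percept)         # record everything perceived so far
        return agent_function(percept_sequence)  # decide using the whole history
    return program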
Examples of agents:
A software agent has keystrokes, file contents, and received network packets as sensors, and displays on the screen, written files, and sent network packets as actuators.
A human agent has eyes, ears, and other organs as sensors, and hands, legs, mouth, and other body parts as actuators.
A robotic agent has cameras and infrared range finders as sensors and various motors as actuators.
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
Learning Agents
Simple reflex agents
Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. (Percept history is the history of all that an agent has perceived to date.) The agent function is based on condition-action rules. A condition-action rule maps a state, i.e. a condition, to an action: the agent works by finding a rule whose condition matches the current situation; if the condition is true, the action is taken, otherwise not. This agent function only succeeds when the environment is fully observable. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable, although it may be possible to escape them if the agent can randomize its actions. If there is any change in the environment, the collection of rules needs to be updated.
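A minimal sketch of a simple reflex agent, assuming the classic two-square vacuum world (the locations A and B and the rule table are illustrative, not from the text); the optional randomization is one way such an agent might escape loops in a partially observable environment.

import random

# Condition-action rules keyed only on the current percept (location, status).
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept, randomize=False):
    """Find the rule whose condition matches the current percept and return its action."""
    if randomize and random.random() < 0.1:
        # Occasionally act at random, which can help break out of infinite loops.
        return random.choice(["Left", "Right", "Suck"])
    return RULES.get(percept, "NoOp")

For example, simple_reflex_agent(("A", "Dirty")) returns "Suck" regardless of anything perceived earlier.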
Model-based reflex agents
A model-based agent can handle a partially observable environment by using a model of the world. The agent keeps track of an internal state, adjusted by each percept, that depends on the percept history. The current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen.
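One possible sketch of a model-based reflex agent, assuming a user-supplied update_state callable plays the role of the world model (the names are illustrative):

class ModelBasedReflexAgent:
    """Keeps an internal state so it can act sensibly in a partially observable world."""

    def __init__(self, rules, update_state):
        self.rules = rules                 # condition-action rules over internal states
        self.update_state = update_state   # model: (state, last action, percept) -> new state
        self.state = None                  # internal description of the unseen parts of the world
        self.last_action = None

    def __call__(self, percept):
        # Adjust the internal state with the model, the last action, and the new percept.
        self.state = self.update_state(self.state, self.last_action, percept)
        # Then act like a reflex agent, but matching rules against the richer internal state.
        self.last_action = self.rules.get(self.state, "NoOp")
        return self.last_action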
Goal-based agents
These agents make decisions based on how far they currently are from their goal (a description of a desirable situation). Every action is intended to reduce the agent's distance from the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning, and a goal-based agent's behavior can easily be changed.
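As a sketch of the kind of search such agents rely on (goal_test and successors are assumed, hypothetical callables), a breadth-first search can pick the first action on a path that reaches a goal state:

from collections import deque

def goal_based_action(start, goal_test, successors):
    """Return the first action of a shortest action sequence that reaches a goal state."""
    frontier = deque([(start, [])])   # (state, actions taken to reach it)
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions[0] if actions else "NoOp"   # already at the goal
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return "NoOp"   # no way to reach the goal was found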
Utility-based agents
Utility-based agents are developed with their end uses as building blocks: when there are multiple possible alternatives, a utility-based agent decides which one is best by choosing actions based on a preference (utility) for each state. Sometimes
achieving the desired goal is not enough. We may look for a quicker, safer, cheaper trip to reach a
destination. Agent happiness should be taken into consideration. Utility describes how “happy” the
agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes
the expected utility. A utility function maps a state onto a real number which describes the
associated degree of happiness.
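A minimal sketch of expected-utility maximization, assuming a transition_model that returns (outcome, probability) pairs and a utility function that maps a state to a real number (both hypothetical):

def expected_utility(action, state, transition_model, utility):
    """Probability-weighted sum of the utility of each possible outcome."""
    return sum(p * utility(outcome) for outcome, p in transition_model(state, action))

def utility_based_action(state, actions, transition_model, utility):
    """Choose the action that maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(a, state, transition_model, utility))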
Learning Agents
A learning agent in AI is an agent that can learn from its past experiences, i.e. it has learning capabilities. It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has the following conceptual components:
Learning element: It is responsible for making improvements by learning from the environment.
Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
Performance element: It is responsible for selecting external actions.
Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
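A sketch of how these components might be wired together (all names are illustrative; the performance element is the part that actually selects external actions):

class LearningAgent:
    """Wires together the components of a learning agent."""

    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # makes improvements from feedback
        self.critic = critic                            # feedback vs. a fixed performance standard
        self.problem_generator = problem_generator      # suggests new, informative experiences

    def step(self, percept):
        feedback = self.critic(percept)                             # how well are we doing?
        self.learning_element(self.performance_element, feedback)   # improve future behaviour
        exploratory = self.problem_generator(percept)
        # Occasionally try an exploratory action instead of the current best one.
        return exploratory if exploratory is not None else self.performance_element(percept)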