Rationality
Rationality is the state of being reasonable, sensible, and having good judgment.
Rationality is concerned with the expected actions and results, given what the agent has perceived. Performing actions with the aim of obtaining useful information is an important part of rationality.
Types of Agents
Agents can be grouped into four classes based on their degree of perceived intelligence and capability:
1. Simple Reflex Agents
Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. (Percept history is the record of everything the agent has perceived to date.) The agent function is based on the condition-action rule: a rule that maps a state, i.e. a condition, to an action. If the condition is true, the action is taken; otherwise it is not. This agent function succeeds only when the environment is fully observable. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable, though it may be possible to escape them if the agent can randomize its actions. Problems with simple reflex agents (a minimal code sketch follows the list) are:
Very limited intelligence.
No knowledge of non-perceptual parts of the state.
The condition-action table is usually too big to generate and store.
If any change occurs in the environment, the collection of rules needs to be updated.
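As a concrete illustration, here is a minimal Python sketch of the table-based condition-action mapping; the percept and action names are hypothetical, chosen only for illustration:

```python
# Minimal sketch of a simple reflex agent driven by a condition-action table.
# The percepts and actions below are hypothetical illustrations.

CONDITION_ACTION_RULES = {
    "obstacle-ahead": "turn-left",
    "path-clear": "move-forward",
    "low-battery": "return-to-dock",
}

def simple_reflex_agent(percept: str) -> str:
    """Map the current percept directly to an action, ignoring all history."""
    # Fall back to a default action when no rule matches the percept.
    return CONDITION_ACTION_RULES.get(percept, "wait")

print(simple_reflex_agent("obstacle-ahead"))  # turn-left
print(simple_reflex_agent("path-clear"))      # move-forward
```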
Properties of Environment
The environment has the following properties; a short sketch encoding them appears after the list −
Discrete / Continuous − If there is a limited number of distinct, clearly defined states of the environment, the environment is discrete (for example, chess); otherwise it is continuous (for example, driving).
Observable / Partially Observable − If it is possible to determine the complete state of the environment at each time point from the percepts, it is observable; otherwise it is only partially observable.
Static / Dynamic − If the environment does not change while an agent is acting, then it is static; otherwise it is dynamic.
Single agent / Multiple agents − The environment may contain other agents, which may be of the same kind as the agent or of a different kind.
Accessible / Inaccessible − If the agent's sensory apparatus can access the complete state of the environment, then the environment is accessible to that agent; otherwise it is inaccessible.
Deterministic / Non-deterministic − If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is deterministic; otherwise it is non-deterministic.
Episodic / Non-episodic − In an episodic environment, each episode consists of the agent perceiving and then acting, and the quality of its action depends only on the episode itself. Subsequent episodes do not depend on the actions taken in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.
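To make these distinctions concrete, the sketch below is one possible way to record the properties in code and classify the two running examples, chess and driving; the EnvironmentProfile structure is an assumption for illustration, not a standard API:

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProfile:
    """Illustrative record of the environment properties listed above."""
    discrete: bool
    fully_observable: bool
    static: bool
    single_agent: bool
    deterministic: bool
    episodic: bool

# Chess: discrete states, fully observable board, static between moves,
# two agents, deterministic moves, non-episodic (earlier moves matter).
chess = EnvironmentProfile(
    discrete=True, fully_observable=True, static=True,
    single_agent=False, deterministic=True, episodic=False,
)

# Driving: continuous, partially observable, dynamic, multi-agent,
# non-deterministic, and non-episodic.
driving = EnvironmentProfile(
    discrete=False, fully_observable=False, static=False,
    single_agent=False, deterministic=False, episodic=False,
)

print(chess)
print(driving)
```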
A simple reflex agent takes an action based only on the current environmental situation; it maps the current percept into a proper action, ignoring the history of percepts. The mapping could be a simple lookup table or any rule-based matching algorithm. An example of this class is a robotic vacuum cleaner that operates in an infinite loop: each percept contains the state of the current location, [clean] or [dirty], and accordingly the agent decides whether to [suck] or [continue-moving].
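A minimal sketch of that vacuum-cleaner example, assuming a two-location world for illustration:

```python
# Sketch of the robotic vacuum cleaner as a simple reflex agent.
# Percept: (location, status) where status is "clean" or "dirty".

def vacuum_reflex_agent(percept):
    location, status = percept
    if status == "dirty":
        return "suck"
    return "continue-moving"

# One pass over a tiny two-location world (an illustrative assumption).
world = {"A": "dirty", "B": "clean"}
for location, status in world.items():
    action = vacuum_reflex_agent((location, status))
    print(location, status, "->", action)
```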
A model-based reflex agent needs memory for storing the percept history; it uses the percept history to help reveal the currently unobservable aspects of the environment. An example of this agent class is the vision module of a self-steering (self-driving) vehicle, where it is necessary to check the percept history to fully understand how the world is evolving.
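A minimal sketch of the model-based idea, again in the vacuum-cleaner setting: the agent builds an internal model from the percept history and uses it to act on what it cannot currently observe. The two-location world and method names are illustrative assumptions:

```python
class ModelBasedVacuumAgent:
    """Vacuum agent that remembers which locations it has already cleaned."""

    LOCATIONS = ["A", "B"]  # illustrative two-location world

    def __init__(self):
        self.believed_clean = set()  # internal model built from percepts

    def act(self, percept):
        location, status = percept
        # Update the model from the current percept, then act.
        self.believed_clean.add(location)  # clean now, or clean after sucking
        if status == "dirty":
            return "suck"
        # Use the model: head for a location not yet known to be clean.
        for loc in self.LOCATIONS:
            if loc not in self.believed_clean:
                return f"move-to-{loc}"
        return "stop"  # the model says everything is clean

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "dirty")))   # suck
print(agent.act(("A", "clean")))   # move-to-B (B's status is still unknown)
print(agent.act(("B", "clean")))   # stop: memory says all locations are clean
```

Note that the final "stop" decision is only possible because of the stored model; a pure reflex agent, seeing just [B, clean], could not know that A had already been cleaned.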
A goal-based agent has a goal and a strategy to reach that goal. All actions are taken to reach this goal. More precisely, from a set of possible actions, it selects the one that improves progress towards the goal (not necessarily the best one). An example of this agent class is any searching robot that has an initial location and wants to reach a destination.
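A minimal sketch of a goal-based agent as such a searching robot, using breadth-first search over a small hypothetical map; the locations and connections are assumptions for illustration:

```python
from collections import deque

# Hypothetical map: which locations are reachable from which (illustrative).
MAP = {
    "start": ["hall"],
    "hall": ["start", "kitchen", "office"],
    "kitchen": ["hall"],
    "office": ["hall", "goal"],
    "goal": ["office"],
}

def plan_route(start, goal):
    """Breadth-first search: returns a shortest list of moves to the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in MAP[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

# The goal-based agent selects its next action from the planned route.
route = plan_route("start", "goal")
print(route)     # ['start', 'hall', 'office', 'goal']
print(route[1])  # 'hall': the next location to move to
```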
A utility-based agent is like the goal-based agent, but with a measure of "how happy" an action would make it rather than the goal-based binary feedback ['happy', 'unhappy']. This kind of agent provides the best solution. An example is a route recommendation system, which finds the 'best' route to a destination.
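A minimal sketch of the utility-based idea applied to that route example: instead of a binary goal test, each candidate route receives a numeric utility and the agent picks the maximum. The routes and the utility weighting are illustrative assumptions:

```python
# Hypothetical candidate routes with travel time (minutes) and toll cost.
routes = [
    {"name": "highway", "minutes": 30, "toll": 5.0},
    {"name": "backroads", "minutes": 45, "toll": 0.0},
    {"name": "express", "minutes": 25, "toll": 12.0},
]

def utility(route):
    """Higher is better: penalize both travel time and toll cost."""
    return -(route["minutes"] + 2.0 * route["toll"])

best = max(routes, key=utility)
print(best["name"])  # highway: the smallest combined time-plus-toll penalty
```

With these weights the agent prefers the highway, since a goal-based agent would regard all three routes as equally "happy" (each reaches the destination), while the utility function ranks them.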