
Sub Name: ARTIFICIAL INTELLIGENCE Sub Code: BCS515B

Table of Contents
Module-1
1. Introduction
I. What is AI?
2. Intelligent Agents:
II. Agents and environment
III. Concept of Rationality
IV. The nature of environment
V. The structure of agents


Introduction:
The foundations of AI were laid with Boolean logic, developed by the mathematician George
Boole, and by other early researchers. Since the invention of the computer in 1943, AI has been
of interest to researchers, who have always aimed to make machines more intelligent than humans.

Intelligence is a property of the mind that encompasses many related mental abilities, such as
the capabilities to:
• reason
• plan
• solve problems
• think abstractly
• comprehend ideas and language, and
• learn

I. What is AI?
John McCarthy coined the term "Artificial Intelligence" in the mid-1950s and defined it as "the
science and engineering of making intelligent machines." AI is about teaching machines to learn,
to act, and to think as humans would.

We can organize definitions of AI into four categories, conventionally laid out in a two-by-two
grid (thinking vs. acting in the rows, human vs. rational in the columns):

• The definitions on top are concerned with thought processes and reasoning, whereas
the ones on the bottom address behaviour.
• The definitions on the left measure success in terms of conformity to human
performance, whereas the ones on the right measure against an ideal performance
measure called rationality.
• A system is rational if it does the "right thing," given what it knows.
• Historically, all four approaches to AI have been followed, each by different people
with different methods.
• A human-centred approach must be in part an empirical science, involving
observations and hypotheses about human behaviour.
• A rationalist approach involves a combination of mathematics and engineering. The
various groups have both disparaged and helped each other.


Let us look at the four approaches in more detail.

Thinking humanly: The cognitive modelling approach
If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think. We need to get inside the actual workings of human minds.
There are three ways to do this:
1. through introspection - trying to catch our own thoughts as they go by;
2. through psychological experiments - observing a person in action; and
3. through brain imaging - observing the brain in action.
Acting humanly: The Turing Test approach
• The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence.
• A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer.
• This test is used to evaluate whether a computer is acting humanly.
• To pass the test as currently posed, the computer would need to possess the following
capabilities:
  o natural language processing to enable it to communicate successfully in English;
  o knowledge representation to store what it knows or hears;
  o automated reasoning to use the stored information to answer questions and to draw new
conclusions;
  o machine learning to adapt to new circumstances and to detect and extrapolate patterns.


• The Total Turing Test includes a video signal so that the interrogator can test the subject's
perceptual abilities, as well as the opportunity for the interrogator to pass physical objects
through the hatch.
• To pass the Total Turing Test, the computer will also need computer vision to perceive
objects, and robotics to manipulate objects and move about.
These six disciplines compose most of AI.

The Turing test setup (Turing, 1950):
• The human interrogator is connected to either another human or a machine in another room.
• The interrogator asks text questions via a computer terminal and receives typed answers from
the other room.
• If the human interrogator cannot distinguish the human from the machine, the machine is said
to be intelligent.
Thinking rationally:
The "laws of thought" approach
Aristotle was one of the first to attempt to codify "right thinking", that is, irrefutable
reasoning processes. His syllogisms provided patterns for argument structures that always
yielded correct conclusions when given correct premises.
Example:
Socrates is a man;
all men are mortal;
therefore, Socrates is mortal. (This kind of reasoning gave rise to the field of logic.)
There are two main obstacles to this approach.
1. It is not easy to take informal knowledge and state it in the formal terms required by
logical notation, particularly when the knowledge is less than 100% certain.
2. There is a big difference between solving a problem "in principle" and solving it in
practice.
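To make the idea of mechanized reasoning concrete, here is a minimal sketch of the syllogism
above as forward chaining over facts and rules; the fact/rule encoding and function name are
illustrative, not taken from the notes.

```python
# Minimal illustration of Aristotle's syllogism as mechanical inference.
# The fact and rule encoding here is illustrative only.

facts = {("Man", "Socrates")}           # Socrates is a man
rules = [(("Man",), "Mortal")]          # all men are mortal: Man(x) => Mortal(x)

def forward_chain(facts, rules):
    """Repeatedly apply single-premise rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# contains ('Mortal', 'Socrates'): Socrates is mortal
```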


Acting rationally:
The rational agent approach:
• An agent is just something that acts.
• All computer programs do something, but computer agents are expected to do more:
operate autonomously, perceive their environment, persist over a prolonged time period,
adapt to change, and create and pursue goals.
• A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
• In the "laws of thought" approach to AI, the emphasis was on correct inference.
• However, correct inference is not all of rationality; in some situations there is no
provably correct thing to do, but something must still be done.
• For example, recoiling from a hot stove is a reflex action that is usually more successful
than a slower action taken after careful deliberation.
• Acting rationally therefore goes beyond correct inference and must also accommodate reflexes
and uncertain reasoning in AI systems.


Intelligent Agents:

An Intelligent Agent perceives its environment via sensors and acts rationally upon that
environment with its effectors (actuators). Hence, an agent receives percepts one at a time and
maps this percept sequence to actions.
II. Agents and environment

Agent: An Agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.

• A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth,
and other body parts for actuators.
• A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
• A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.

This simple idea is illustrated in Figure 2.1.

Percept: We use the term percept to refer to the agent's perceptual inputs at any given
instant.
Percept sequence: An agent's percept sequence is the complete history of everything the agent
has ever perceived.
Agent function: Mathematically speaking, we say that an agent's behaviour is described by the
agent function that maps any given percept sequence to an action.
Agent program: Internally, the agent function for an artificial agent will be implemented by
an agent program. It is important to keep these two ideas distinct. The agent function is an
abstract mathematical description; the agent program is a concrete implementation, running
on the agent architecture.
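To make the distinction concrete, here is a minimal Python sketch (all names are illustrative):
the agent function is treated as an abstract mapping over the whole percept sequence, while the
agent program is the concrete piece of code that receives one percept at a time.

```python
# Illustrative sketch (not from the notes): agent function vs. agent program.

# Agent FUNCTION: an abstract mapping from a complete percept sequence to an action.
# It can be written out directly only for tiny examples like this one.
def agent_function(percept_sequence):
    """Maps the entire percept history to an action."""
    last_percept = percept_sequence[-1]
    return "Suck" if last_percept == "Dirty" else "Move"

# Agent PROGRAM: a concrete implementation that receives one percept at a time
# and keeps whatever internal state it needs (here, the full history).
class AgentProgram:
    def __init__(self):
        self.history = []

    def __call__(self, percept):
        self.history.append(percept)
        return agent_function(self.history)

program = AgentProgram()
print(program("Dirty"))   # Suck
print(program("Clean"))   # Move
```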

To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown
in Fig 2.2. This particular world has just two locations: squares A and B. The vacuum agent
perceives which square it is in and whether there is dirt in the square. It can choose to move
left, move right, suck up the dirt, or do nothing. One very simple agent function is the
following: if the current square is dirty, then suck, otherwise move to the other square. A
partial tabulation of this agent function is shown in Fig 2.3.
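A plausible rendering of that agent function, together with a partial tabulation of it, is
sketched below; the percept format and action names are assumptions, not copied from
Figures 2.2 and 2.3.

```python
# Sketch of the simple vacuum-world agent function described above.
# Percepts are (location, status) pairs, e.g. ("A", "Dirty").

def vacuum_agent_function(percept_sequence):
    location, status = percept_sequence[-1]   # only the latest percept matters here
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Partial tabulation of the agent function (percept sequence -> action).
for sequence in [
    [("A", "Clean")],
    [("A", "Dirty")],
    [("B", "Clean")],
    [("B", "Dirty")],
    [("A", "Clean"), ("A", "Dirty")],
]:
    print(sequence, "->", vacuum_agent_function(sequence))
```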

III. Concept of Rationality
A rational agent is one that does the right thing. We say that the right action is the one that
will cause the agent to be most successful. That leaves us with the problem of deciding how
and when to evaluate the agent's success.
We use the term performance measure for the how—the criteria that determine how
successful an agent is.
✓ Ex-Agent cleaning the dirty floor
✓ Performance Measure-Amount of dirt collected
✓ When to measure-Weekly for better results
What is rational at any given time depends on four things:
• The performance measure defining the criterion of success
• The agent’s prior knowledge of the environment
• The actions that the agent can perform
• The agent’s percept sequence up to now
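As an illustration of a performance measure for the vacuum world, one common convention
(assumed here, not stated in the notes) is to award one point for every clean square at every
time step:

```python
# Illustrative performance measure for the two-square vacuum world:
# one point per clean square per time step (an assumed convention).

def performance(history):
    """history is a list of world states, one per time step.
    Each state maps square name -> "Clean" or "Dirty"."""
    return sum(1 for state in history
                 for status in state.values()
                 if status == "Clean")

history = [
    {"A": "Dirty", "B": "Dirty"},
    {"A": "Clean", "B": "Dirty"},
    {"A": "Clean", "B": "Clean"},
]
print(performance(history))   # 3
```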

Omniscience, Learning and Autonomy:


➢ We need to distinguish between rationality and omniscience. An omniscient agent knows
the actual outcome of its actions and can act accordingly, but omniscience is impossible in
reality.
➢ A rational agent not only gathers information but also learns as much as possible from what
it perceives.
➢ If an agent just relies on the prior knowledge of its designer rather than its own percepts,
then the agent lacks autonomy.
➢ A system is autonomous to the extent that its behaviour is determined by its own experience.
➢ A rational agent should be autonomous.
E.g., a clock (lacks autonomy):
➢ No input (percepts)
➢ Runs only its own built-in algorithm (prior knowledge)
➢ No learning, no experience, etc.

IV. The nature of environment


Specifying the task environment:
Environments: The performance measure, the environment, and the agent's actuators and
sensors together come under the heading of the task environment. This is also called the PEAS
description (Performance, Environment, Actuators, Sensors).
Figure 2.4 summarizes the PEAS description for an automated taxi's task environment.
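Since Figure 2.4 is not reproduced here, the sketch below paraphrases the usual PEAS description
of an automated taxi; the specific entries are recalled from the standard treatment and should be
taken as illustrative.

```python
# Illustrative PEAS description for an automated taxi (paraphrased, not Figure 2.4 verbatim).
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
}
for component, items in taxi_peas.items():
    print(f"{component:12s}: {', '.join(items)}")
```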


Properties of task environments:


The range of task environments that might arise in AI is obviously vast. We can, however,
identify a fairly small number of dimensions along which task environments can be
categorized. These dimensions determine, to a large extent, the appropriate agent design and
the applicability of each of the principal families of techniques for agent implementation.
First, we list the dimensions, then we analyse several task environments to illustrate the ideas.

Environment-Types:
1. Accessible vs. inaccessible or Fully observable vs Partially Observable: If an agent
sensor can sense or access the complete state of an environment at each point of time then it
is a fully observable environment, else it is partially observable.
2. Deterministic vs. Stochastic: If the next state of the environment is completely
determined by the current state and the actions selected by the agents, then we say the
environment is deterministic; otherwise, it is stochastic.
3. Episodic vs. nonepisodic:


➢ The agent's experience is divided into "episodes." Each episode consists of the agent
perceiving and then acting. The quality of its action depends just on the episode itself,
because subsequent episodes do not depend on what actions occur in previous episodes.
➢ Episodic environments are much simpler because the agent does not need to think ahead.
4. Static vs. dynamic: If the environment can change while an agent is deliberating, then we
say the environment is dynamic for that agent; otherwise it is static.
5. Discrete vs. continuous: If there are a limited number of distinct, clearly defined percepts
and actions, we say that the environment is discrete. Otherwise, it is continuous.
Figure 2.6 lists the properties of a number of familiar environments.
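As a small, paraphrased sample in the spirit of Figure 2.6 (the classifications follow the usual
textbook analysis and are stated from memory, so treat them as illustrative):

```python
# Paraphrased sample of environment properties (in the spirit of Figure 2.6).
environments = {
    "Crossword puzzle": {"Observable": "Fully",     "Deterministic": "Deterministic",
                         "Episodic": "Sequential",  "Static": "Static",      "Discrete": "Discrete"},
    "Taxi driving":     {"Observable": "Partially", "Deterministic": "Stochastic",
                         "Episodic": "Sequential",  "Static": "Dynamic",     "Discrete": "Continuous"},
    "Image analysis":   {"Observable": "Fully",     "Deterministic": "Deterministic",
                         "Episodic": "Episodic",    "Static": "Semidynamic", "Discrete": "Continuous"},
}
for name, properties in environments.items():
    print(name, properties)
```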

V. The structure of agents:


➢ The job of AI is to design the agent program: a function that implements the agent
mapping from percepts to actions. We assume this program will run on some sort of
computing device, which we will call the architecture.
➢ The architecture might be a plain computer, or it might include special-purpose
hardware for certain tasks, such as processing camera images or filtering audio input. It might
also include software that provides a degree of insulation between the raw computer and the
agent program, so that we can program at a higher level. In general, the architecture makes
the percepts from the sensors available to the program, runs the program, and feeds the
program's action choices to the effectors as they are generated.
➢ The relationship among agents, architectures, and programs can be summed up as
follows: agent = architecture + program


Agent programs:
➢ Intelligent agents accept percepts from an environment and generate actions. The
early versions of agent programs have a very simple form (Figure 2.7).
➢ Each uses some internal data structures that are updated as new percepts arrive.
➢ These data structures are operated on by the agent's decision-making procedures to
generate an action choice, which is then passed to the architecture to be executed.
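A minimal sketch of such an early, simple agent program, in the style of the table-driven agent
usually shown at this point; the lookup table and percept names are invented for illustration.

```python
# Sketch of a table-driven agent program: append each percept to the history
# and look the whole history up in a table of actions (illustrative table).

table = {
    (("A", "Clean"),):                     "Right",
    (("A", "Dirty"),):                     "Suck",
    (("A", "Dirty"), ("A", "Clean")):      "Right",
}

percepts = []          # internal data structure updated as new percepts arrive

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")   # unknown histories default to NoOp

print(table_driven_agent(("A", "Dirty")))   # Suck
print(table_driven_agent(("A", "Clean")))   # Right
```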

Types of agents: Agents can be grouped into four classes based on their degree of perceived
intelligence and capability:
➢ Simple Reflex Agents
➢ Model-Based Reflex Agents
➢ Goal-Based Agents
➢ Utility-Based Agents
Simple reflex agents:
➢ Simple reflex agents ignore the rest of the percept history and act only on the basis
of the current percept.
➢ The agent function is based on the condition-action rule.
➢ If the condition is true, then the action is taken, else not. This agent function only
succeeds when the environment is fully observable.
The program in Figure 2.8 is specific to one particular vacuum environment. A more general
and flexible approach is first to build a general-purpose interpreter for condition– action rules
and then to create rule sets for specific task environments. Figure 2.9 gives the structure of
this general program in schematic form, showing how the condition–action rules allow the
agent to make the connection from percept to action. The agent program, which is also very
simple, is shown in Figure 2.10. The agent in Figure 2.10 will work only if the correct
decision can be made on the basis of the current percept alone; that is, only if the
environment is fully observable.
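Below is a hedged sketch of both ideas in Python: a vacuum-specific reflex agent (in the spirit
of Figure 2.8) and a generic condition-action rule interpreter (in the spirit of Figures 2.9 and
2.10). The rule format and names are assumptions, not the book's exact programs.

```python
# Sketch of a simple reflex agent for the vacuum world (spirit of Figure 2.8).
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Sketch of a general condition-action rule interpreter (spirit of Figures 2.9-2.10).
# Rules are (condition, action) pairs, where condition is a predicate on the percept.
rules = [
    (lambda p: p[1] == "Dirty", "Suck"),
    (lambda p: p[0] == "A",     "Right"),
    (lambda p: p[0] == "B",     "Left"),
]

def simple_reflex_agent(percept, rules):
    for condition, action in rules:      # the first matching rule fires
        if condition(percept):
            return action
    return "NoOp"

print(reflex_vacuum_agent(("A", "Dirty")))          # Suck
print(simple_reflex_agent(("B", "Clean"), rules))   # Left
```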

Model-based reflex agents:


➢ The Model-based agent can work in a partially observable environment, and track
the situation.
➢ A model-based agent has two important factors:


• Model: knowledge about "how things happen in the world"; this is why it is called
a model-based agent.
• Internal state: a representation of the current state based on the percept history.
Figure 2.11 gives the structure of the model-based reflex agent with internal state,
showing how the current percept is combined with the old internal state to generate the
updated description of the current state, based on the agent's model of how the world works.
The agent program is shown in Figure 2.12. The interesting part is the function UPDATE-STATE,
which is responsible for creating the new internal state description.
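A minimal sketch of this structure for the vacuum world follows; the internal state, the model,
and the UPDATE-STATE logic are placeholders rather than the book's exact program.

```python
# Sketch of a model-based reflex agent for the vacuum world (spirit of Figures 2.11-2.12).

class ModelBasedReflexAgent:
    def __init__(self):
        # Internal state: last known status of each square (the agent's picture
        # of the parts of the world it cannot currently see).
        self.state = {"A": "Unknown", "B": "Unknown"}

    def update_state(self, percept):
        """UPDATE-STATE: fold the current percept into the internal state."""
        location, status = percept
        self.state[location] = status
        return location

    def __call__(self, percept):
        location = self.update_state(percept)
        if self.state[location] == "Dirty":
            return "Suck"
        if all(status == "Clean" for status in self.state.values()):
            return "NoOp"                    # internal state says both squares are clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedReflexAgent()
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right (B's status still unknown)
print(agent(("B", "Clean")))   # NoOp  (model now says both squares are clean)
```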

Goal-based agents:
➢ A goal-based agent has an agenda.
➢ It operates based on a goal in front of it and makes decisions based on how best to
reach that goal.


➢ A goal-based agent operates as a search and planning function, meaning it targets
the goal ahead and finds the right action in order to reach it.
➢ It is an expansion of the model-based agent.
Figure 2.13 shows the goal-based agent’s structure.
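A tiny sketch of goal-directed action selection: the agent uses a model to predict what each
action would lead to and picks one whose predicted result satisfies the goal. The model and goal
below are invented for illustration.

```python
# Sketch of goal-based action selection: choose an action whose predicted
# outcome satisfies the goal. The model and goal here are illustrative.

def goal_based_agent(state, actions, model, goal_test):
    for action in actions:
        if goal_test(model(state, action)):
            return action
    return "NoOp"

# Toy example: reach position 3 on a line by moving Left or Right.
model = lambda s, a: s + 1 if a == "Right" else s - 1
print(goal_based_agent(2, ["Left", "Right"], model, lambda s: s == 3))   # Right
```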

Utility-based agents:
➢ A utility-based agent is an agent that acts based not only on what the goal is, but also
on the best way to reach that goal.
➢ The utility-based agent is useful when there are multiple possible alternatives and the
agent has to choose the best action among them.
➢ The term utility can be used to describe how "happy" the agent is.
The utility-based agent structure appears in Figure 2.14.
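The same idea with a utility function instead of a binary goal test: the agent scores each
predicted outcome and chooses the action with the highest utility. The model and utility values
below are invented.

```python
# Sketch of utility-based action selection: pick the action whose predicted
# outcome has the highest utility. The utility function is invented here.

def utility_based_agent(state, actions, model, utility):
    return max(actions, key=lambda a: utility(model(state, a)))

model   = lambda s, a: s + 1 if a == "Right" else s - 1
utility = lambda s: -abs(s - 3)        # "happier" the closer we are to position 3
print(utility_based_agent(2, ["Left", "Right"], model, utility))   # Right
```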


Learning agents:
We have described agent programs with various methods for selecting actions. We have not,
so far, explained how the agent programs come into being. In his famous early paper, Turing
(1950) considers the idea of actually programming his intelligent machines by hand. He
estimates how much work this might take and concludes “Some more expeditious method
seems desirable.” The method he proposes is to build learning machines and then to teach
them. In many areas of AI, this is now the preferred method for creating state-of-the-art
systems.
Learning has another advantage, as we noted earlier:
It allows the agent to operate in initially unknown environments and to become more
competent than its initial knowledge alone might allow.
A learning agent can be divided into four conceptual components, as shown in Figure 2.15.


Learning Element:
The most important distinction is between the learning element, which is responsible for
making improvements, and the performance element, which is responsible for selecting
external actions.
Performance Element:
The performance element is what we have previously considered to be the entire agent: it
takes in percepts and decides on actions.
Critic:
The learning element uses feedback from the critic on how the agent is doing and determines
how the performance element should be modified to do better in the future. The design of the
learning element depends very much on the design of the performance element. When trying
to design an agent that learns a certain capability, the first question is not “How am I going to
get it to learn this?” but “What kind of performance element will my agent need to do this
once it has learned how?” Given an agent design, learning mechanisms can be constructed to
improve every part of the agent. The critic tells the learning element how well the agent is
doing with respect to a fixed performance standard.
The critic is necessary because the percepts themselves provide no indication of the agent’s
success. For example, a chess program could receive a percept indicating that it has
checkmated its opponent, but it needs a performance standard to know that this is a good
thing; the percept itself does not say so. It is important that the performance standard be
fixed. Conceptually, one should think of it as being outside the agent altogether because the
agent must not modify it to fit its own behavior.
The last component of the learning agent is the problem generator.
Problem Generator:
It is responsible for suggesting actions that will lead to new and informative experiences. The
point is that if the performance element had its way, it would keep doing the actions that are
best, given what it knows. But if the agent is willing to explore a little and do some perhaps
suboptimal actions in the short run, it might discover much better actions for the long run.
The problem generator’s job is to suggest these exploratory actions. This is what scientists do
when they carry out experiments. Galileo did not think that dropping rocks from the top of a
tower in Pisa was valuable in itself. He was not trying to break the rocks or to modify the
brains of unfortunate passers-by. His aim was to modify his own brain by identifying a better
theory of the motion of objects.


To make the overall design more concrete, let us return to the automated taxi example. The
performance element consists of whatever collection of knowledge and procedures the taxi
has for selecting its driving actions. The taxi goes out on the road and drives, using this
performance element. The critic observes the world and passes information along to the
learning element. For example, after the taxi makes a quick left turn across three lanes of
traffic, the critic observes the shocking language used by other drivers. From this experience,
the learning element is able to formulate a rule saying this was a bad action, and the
performance element is modified by installation of the new rule. The problem generator
might identify certain areas of behavior in need of improvement and suggest experiments,
such as trying out the brakes on different road surfaces under different conditions.
The simplest cases involve learning directly from the percept sequence. Observation of pairs
of successive states of the environment can allow the agent to learn “How the world evolves,”
and observation of the results of its actions can allow the agent to learn “What my actions
do.” For example, if the taxi exerts a certain braking pressure when driving on a wet road,
then it will soon find out how much deceleration is actually achieved. Clearly, these two
learning tasks are more difficult if the environment is only partially observable.
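For instance, learning "what my actions do" can be sketched as simply counting observed
(state, action, next state) transitions; the state and action names below are invented.

```python
from collections import Counter, defaultdict

# Naive sketch of learning "what my actions do" from observed transitions.
transition_counts = defaultdict(Counter)

def observe(state, action, next_state):
    transition_counts[(state, action)][next_state] += 1

def predicted_outcome(state, action):
    """Most frequently observed result of taking `action` in `state`."""
    counts = transition_counts[(state, action)]
    return counts.most_common(1)[0][0] if counts else None

observe("wet_road", "brake_hard", "skid")
observe("wet_road", "brake_hard", "skid")
observe("wet_road", "brake_hard", "slow_down")
print(predicted_outcome("wet_road", "brake_hard"))   # skid
```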
The forms of learning described above do not need to access the external
performance standard—in a sense, the standard is the universal one of making predictions
that agree with experiment. The situation is slightly more complex for a utility-based agent
that wishes to learn utility information. For example, suppose the taxi-driving agent receives
no tips from passengers who have been thoroughly shaken up during the trip. The external
performance standard must inform the agent that the loss of tips is a negative contribution to
its overall performance; then the agent might be able to learn that violent maneuvers do not
contribute to its own utility. In a sense, the performance standard distinguishes part of the
incoming percept as a reward (or penalty) that provides direct feedback on the quality of the
agent’s behavior. Hard-wired performance standards such as pain and hunger in animals can
be understood in this way.
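The following toy loop wires the four components together for the vacuum world. Everything in it
is invented for illustration: the random "environment", the reward used to stand in for the
critic's fixed performance standard, and the simple averaging learning rule.

```python
import random

# Toy loop connecting the four learning-agent components (all details invented).

knowledge = {}                              # what the performance element has learned

def performance_element(percept):
    """Select an external action, greedily using the learned knowledge."""
    actions = ["Right", "Left", "Suck"]
    return max(actions, key=lambda a: knowledge.get((percept, a), 0.0))

def problem_generator(percept, greedy_action):
    """Occasionally suggest an exploratory action instead of the greedy one."""
    if random.random() < 0.2:
        return random.choice(["Right", "Left", "Suck"])
    return greedy_action

def critic(percept, action):
    """Score the action against a fixed standard (invented: reward sucking up dirt)."""
    _location, status = percept
    return 1.0 if status == "Dirty" and action == "Suck" else 0.0

def learning_element(percept, action, reward):
    """Use the critic's feedback to improve the performance element's knowledge."""
    key = (percept, action)
    knowledge[key] = knowledge.get(key, 0.0) + 0.5 * (reward - knowledge.get(key, 0.0))

for _ in range(200):                        # run the agent for a while
    percept = (random.choice("AB"), random.choice(["Clean", "Dirty"]))
    action = problem_generator(percept, performance_element(percept))
    learning_element(percept, action, critic(percept, action))

print(performance_element(("A", "Dirty")))  # typically "Suck" once learned
```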
How the components of agent programs work:
We have described agent programs (in very high-level terms) as consisting of various
components, whose function it is to answer questions such as: “What is the world like now?”
“What action should I do now?” “What do my actions do?” The next question for a student of
AI is, “How on earth do these components work?” It takes about a thousand pages to begin to
answer that question properly, but here we want to draw the reader’s attention to some basic


distinctions among the various ways that the components can represent the environment that
the agent inhabits.
Roughly speaking, we can place the representations along an axis of increasing complexity
and expressive power—atomic, factored, and structured. To illustrate these ideas, it helps to
consider a particular agent component, such as the one that deals with “What my actions do.”
This component describes the changes that might occur in the environment as the result of
taking an action, and Figure 2.16 provides schematic depictions of how those transitions
might be represented.

Atomic Representation:
In an atomic representation, each state of the world is indivisible; it has no internal structure.
Consider the problem of finding a driving route from one end of a country to the other via
some sequence of cities. For the purposes of solving this problem, it may suffice to reduce the
state of the world to just the name of the city we are in: a single atom of knowledge, a "black
box" whose only discernible property is that of being identical to or different from another
black box.
Factored Representation:
A factored representation splits up each state into a fixed set of variables or attributes, each of
which can have a value. While two different atomic states have nothing in common—they are
just different black boxes—two different factored states can share some attributes (such as
being at some particular GPS location) and not others (such as having lots of gas or having no
gas); this makes it much easier to work out how to turn one state into another.


With factored representations, we can also represent uncertainty; for example, ignorance
about the amount of gas in the tank can be represented by leaving that attribute blank.
For many purposes, we need to understand the world as having things in it that are related to
each other, not just variables with values. For example, we might notice that a large truck
ahead of us is reversing into the driveway of a dairy farm but a cow has got loose and is
blocking the truck’s path. A factored representation is unlikely to be pre-equipped with
the attribute TruckAheadBackingIntoDairyFarmDrivewayBlockedByLooseCow with value
true or false. Instead, we would need a structured representation, in which objects such as
cows and trucks and their various and varying relationships can be described explicitly. (See
Figure 2.16(c).) In fact, almost everything that humans express in natural language concerns
objects and their relationships. As we mentioned earlier, the axis along which atomic,
factored, and structured representations lie is the axis of increasing expressiveness. Often,
the more expressive language is much more concise; for example, the rules of chess can be
written in a page or two of a structured-representation language such as first-order logic but
require thousands of pages when written in a factored-representation language such as
propositional logic. On the other hand, reasoning and learning become more complex as
the expressive power of the representation increases. To gain the benefits of expressive
representations while avoiding their drawbacks, intelligent systems for the real world may
need to operate at all points along the axis simultaneously.
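To make the three levels concrete, here is one invented driving-domain state written in each
style; the attribute and relation names are illustrative only.

```python
# One (invented) driving-domain state written at the three levels of representation.

# Atomic: the state is an indivisible "black box" -- here, just a city name.
atomic_state = "Bucharest"

# Factored: a fixed set of attributes, each with a value (some may be unknown).
factored_state = {"city": "Bucharest", "gps": (44.43, 26.10), "fuel": None}  # fuel unknown

# Structured: objects and the relations between them, described explicitly.
structured_state = {
    "objects":   ["truck1", "cow1", "driveway1"],
    "relations": [("BackingInto", "truck1", "driveway1"),
                  ("Blocking", "cow1", "truck1")],
}

print(atomic_state)
print(factored_state)
print(structured_state)
```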

END OF MODULE-1
