Module 1
Introduction to Artificial Intelligence
• Homo sapiens: the name is Latin for “wise man”.
• Philosophy of AI - “Can a machine think and behave like humans do?”
• In simple words, Artificial Intelligence is a way of making a computer, a computer-
controlled robot, or software think intelligently, in a manner similar to how
intelligent humans think.
• Artificial intelligence (AI) is an area of computer science that emphasizes the
creation of intelligent machines that work and react like humans.
• AI is accomplished by studying how the human brain thinks and how humans learn,
decide, and work while trying to solve a problem, and then using the outcomes of this
study as a basis for developing intelligent software and systems.
1. What is AI?
Views of AI fall into four categories:
i. Thinking humanly
ii. Thinking rationally
iii. Acting humanly
iv. Acting rationally
Thinking Humanly:
“The exciting new effort to make computers think ... machines with minds, in the full and literal sense.” (Haugeland, 1985)
“[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ...” (Bellman, 1978)
Thinking Rationally:
“The study of mental faculties through the use of computational models.” (Charniak and McDermott, 1985)
“The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992)
Acting Humanly:
“The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
“The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)
Acting Rationally:
“Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
“AI ... is concerned with intelligent behavior in artifacts.” (Nilsson, 1998)
Figure 1.1 Some definitions of artificial intelligence, organized into four categories.
Computer engineering:
How can we build an efficient computer?
• For artificial intelligence to succeed, we need two things: intelligence and an artifact.
The computer has been the artifact (object) of choice.
• The first operational computer was the electromechanical Heath Robinson, built in
1940 by Alan Turing's team for a single purpose: deciphering German messages.
• The first operational programmable computer was the Z-3, the invention of
Konrad Zuse in Germany in 1941.
• The first electronic computer, the ABC, was assembled by John Atanasoff and his
student Clifford Berry between 1940 and 1942 at Iowa State University.
• The first programmable machine was a loom, devised in 1805 by Joseph Marie
Jacquard (1752-1834) that used punched cards to store instructions for the pattern to
be woven.
• Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water
clock with a regulator that maintained a constant flow rate. This invention changed
the definition of what an artifact could do.
• Modern control theory, especially the branch known as stochastic optimal control, has
as its goal the design of systems that maximize an objective function over time. This
roughly matches our view of AI: designing systems that behave optimally.
• Calculus and matrix algebra are the tools of control theory.
• The tools of logical inference and computation allowed AI researchers to consider
problems such as language, vision, and planning that fell completely outside the
control theorist’s purview.
Linguistics:
How does language relate to thought?
1. The gestation of artificial intelligence (1943–1955)
The gestation of artificial intelligence (AI) during the period from 1943 to 1955 marked
the early theoretical and conceptual groundwork for the field. This period laid the
foundation for the subsequent development of AI.
2. The birth of artificial intelligence (1956)
The birth of artificial intelligence (AI) in 1956 is commonly associated with the
Dartmouth Conference, a seminal event that took place at Dartmouth College in
Hanover, New Hampshire.
3. Early enthusiasm, great expectations (1952–1969)
The period from 1952 to 1969 in the history of artificial intelligence (AI) was
characterized by early enthusiasm and great expectations. Researchers during this time
were optimistic about the potential of AI and believed that significant progress could be
made in creating machines with human-like intelligence.
4. A dose of reality (1966–1973)
The period from 1966 to 1973 in the history of artificial intelligence (AI) is often
referred to as "A Dose of Reality." During this time, researchers faced challenges and
setbacks that led to a reevaluation of the initial optimism and expectations surrounding
AI.
5. Knowledge-based systems: The key to power? (1969–1979)
The period from 1969 to 1979 in the history of artificial intelligence (AI) is
characterized by a focus on knowledge-based systems, with researchers exploring the
use of symbolic representation of knowledge to address challenges in AI. This era saw
efforts to build expert systems, which were designed to emulate human expertise in
specific domains.
6. AI becomes an industry (1980–present)
The period from 1980 to the present marks the evolution of artificial intelligence (AI)
into an industry, witnessing significant advancements, increased commercialization,
and widespread applications across various domains.
7. The return of neural networks (1986–present)
The period from 1986 to the present is characterized by the resurgence and dominance
of neural networks in the field of artificial intelligence (AI). This era is marked by
significant advancements in the development of neural network architectures, training
algorithms, and the widespread adoption of deep learning techniques.
8. AI adopts the scientific method (1987–present)
The period from 1987 to the present has seen the adoption of the scientific method in
the field of artificial intelligence (AI), reflecting a more rigorous and empirical
approach to research. This shift has involved the application of experimental
methodologies, reproducibility, and a greater emphasis on evidence-based practices.
9. The emergence of intelligent agents (1995–present)
The period from 1995 to the present has been marked by the emergence and evolution
of intelligent agents in the field of artificial intelligence (AI). Intelligent agents are
autonomous entities that perceive their environment, make decisions, and take actions
to achieve goals.
10. The availability of very large data sets (2001–present)
The period from 2001 to the present has been characterized by the availability and
utilization of very large datasets in the field of artificial intelligence (AI). This era has
witnessed an unprecedented growth in the volume and diversity of data, providing a
foundation for training and enhancing increasingly sophisticated AI models.
Intelligent Agents
1. Agents and environment
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators. This simple idea is illustrated in
Figure 2.1.
Figure 2.3 Partial tabulation of a simple agent function for the vacuum-cleaner world
shown in Figure 2.2.
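To make this concrete, the tabulated agent function can be written down directly in code. The following is a minimal Python sketch of a table-driven agent for the two-square vacuum world; the table entries and names are illustrative assumptions, not copied from Figure 2.3:
```python
# A minimal sketch (illustrative entries, not copied from Figure 2.3) of a
# table-driven agent for the two-square vacuum world. A percept is a
# (location, status) pair; the table maps percept sequences to actions.

AGENT_TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... the full table continues for longer percept sequences
}

def table_driven_agent(percepts, table=AGENT_TABLE):
    """Return the action tabulated for the percept sequence seen so far."""
    return table.get(tuple(percepts), "NoOp")  # NoOp if sequence not tabulated

print(table_driven_agent([("A", "Dirty")]))  # -> Suck
```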
2. Concept of Rationality
A rational agent is one that does the right thing—conceptually speaking, every entry in
the table for the agent function is filled out correctly.
Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
A definition of a rational agent: For each possible percept sequence, a rational
agent should select an action that is expected to maximize its performance measure, given
the evidence provided by the percept sequence and whatever built-in knowledge the agent
has.
Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the
other square if not; this is the agent function tabulated in Figure 2.3. Is this a rational agent?
That depends! First, we need to say what the performance measure is, what is known about
the environment, and what sensors and actuators the agent has. Let us assume the following:
• The performance measure awards one point for each clean square at each time step,
over a “lifetime” of 1000 time steps.
• The “geography” of the environment is known a priori (Figure 2.2) but the dirt
distribution and the initial location of the agent are not. Clean squares stay clean and
sucking cleans the current square. The Left and Right actions move the agent left and
right except when this would take the agent outside the environment, in which case the
agent remains where it is.
• The only available actions are Left , Right , and Suck .
• The agent correctly perceives its location and whether that location contains dirt.
We claim that under these circumstances the agent is indeed rational.
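To make the claim concrete, here is a small Python simulation sketch of the reflex vacuum agent under the performance measure above; the world setup and helper names are assumptions for illustration:
```python
import random

# A small simulation sketch (world setup and names are assumptions) of the
# simple reflex vacuum agent under the performance measure above: one point
# per clean square at each time step, over a lifetime of 1000 steps.

def reflex_vacuum_agent(location, status):
    """Suck if the current square is dirty; otherwise move to the other square."""
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(lifetime=1000):
    dirt = {"A": random.choice([True, False]), "B": random.choice([True, False])}
    location = random.choice(["A", "B"])
    score = 0
    for _ in range(lifetime):
        status = "Dirty" if dirt[location] else "Clean"
        action = reflex_vacuum_agent(location, status)
        if action == "Suck":
            dirt[location] = False   # sucking cleans the current square
        elif action == "Right":
            location = "B"           # walls keep the agent inside the two squares
        elif action == "Left":
            location = "A"
        score += sum(1 for square in dirt if not dirt[square])
    return score

print(run())  # close to the maximum of 2000 once both squares are clean
```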
What is the performance measure to which we would like our automated driver to aspire?
Desirable qualities include getting to the correct destination; minimizing fuel consumption
and wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws and
disturbances to other drivers; maximizing safety and passenger comfort; maximizing
profits. Obviously, some of these goals conflict, so tradeoffs will be required.
What is the driving environment that the taxi will face? Any taxi driver must deal with a
variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways. The roads
contain other traffic, pedestrians, stray animals, road works, police cars, puddles, and
potholes. The taxi must also interact with potential and actual passengers.
Note:
The simplest environment is fully observable, single-agent, deterministic, episodic,
static, and discrete. Ex: the simple vacuum-cleaner world.
The agent function can be represented explicitly as a table. Ex: the simple vacuum
cleaner (Figure 2.3).
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability. All these agents can improve their performance and generate better actions over
time. These are given below:
• Simple Reflex Agent
• Model-based reflex agent
• Goal-based agents
• Utility-based agent
• Learning agent
Model-based reflex agents
For the braking problem, the internal state is not too extensive: just the previous
frame from the camera, allowing the agent to detect when two red lights at the edge of
the vehicle go on or off simultaneously.
For other driving tasks such as changing lanes, the agent needs to keep track of where
the other cars are if it can’t see them all at once. And for any driving to be possible at
all, the agent needs to keep track of where its keys are.
Updating this internal state information as time goes by requires two kinds of
knowledge to be encoded in the agent program.
First, we need some information about how the world evolves independently of the
agent—for example, that an overtaking car generally will be closer behind than it was
a moment ago.
Second, we need some information about how the agent’s own actions affect the
world—for example, that when the agent turns the steering wheel clockwise, the car
turns to the right, or that after driving for five minutes northbound on the freeway, one
is usually about five miles north of where one was five minutes ago.
This knowledge about “how the world works”—whether implemented in simple
Boolean circuits or in complete scientific theories—is called a model of the world. An
agent that uses such a model is called a model-based agent.
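The following is a minimal Python skeleton of this model-based reflex agent structure; the component names and helpers are assumptions based on the generic description above, not a specific implementation:
```python
# A minimal skeleton sketch of a model-based reflex agent; the component
# names and helpers are assumed placeholders, not a specific implementation.

class ModelBasedReflexAgent:
    def __init__(self, transition_model, sensor_model, rules):
        self.state = None                         # internal picture of the world
        self.last_action = None
        self.transition_model = transition_model  # how the world evolves and reacts to actions
        self.sensor_model = sensor_model          # how percepts reflect the world
        self.rules = rules                        # list of (condition, action) pairs

    def program(self, percept):
        # Update the internal state using both kinds of knowledge described
        # above: how the world evolves, and how the agent's actions affect it.
        predicted = self.transition_model(self.state, self.last_action)
        self.state = self.sensor_model(predicted, percept)
        # Then act as a reflex agent on the updated state.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"
```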
Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
• The agent needs to know its goal which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
• They choose an action, so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved or not. Such consideration of different scenarios
is called searching and planning, which makes an agent proactive.
• Sometimes goal-based action selection is straightforward: for example, when goal
satisfaction results immediately from a single action.
• Sometimes it will be trickier: for example, when the agent has to consider long
sequences of twists and turns to find a way to achieve the goal.
• Search and planning are the subfields of AI devoted to finding action sequences that
achieve the agent’s goals.
Example: The reflex agent brakes when it sees brake lights. A goal-based agent, in
principle, could reason that if the car in front has its brake lights on, it will slow down.
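As a sketch of how goal-based action selection could look in code (assuming a simple one-step lookahead; real goal-based agents search or plan over longer action sequences):
```python
# A sketch of goal-based action selection with one-step lookahead; `result`
# (the transition model) and `goal_test` are assumed placeholders.

def goal_based_agent(state, actions, result, goal_test):
    """Pick an action whose predicted outcome satisfies the goal, if any.

    result:    result(state, action) -> predicted next state
    goal_test: goal_test(state) -> True if the goal is achieved
    """
    for action in actions:
        if goal_test(result(state, action)):
            return action
    # No single action achieves the goal: a real agent would fall back to
    # searching or planning over longer action sequences here.
    return None
```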
Utility-based agents
• These agents are similar to goal-based agents but add an extra component of
utility measurement, which provides a measure of success at a given state.
• Utility-based agents act based not only on goals but also on the best way to achieve the goal.
• The utility-based agent is useful when there are multiple possible alternatives and the
agent has to choose the best action to perform.
• The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
Advantages of utility-based agents over goal-based agents:
• With conflicting goals, utility specifies an appropriate tradeoff.
• With several goals, none of which can be achieved with certainty, utility selects a proper
tradeoff between the importance of the goals and the likelihood of success.
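A minimal Python sketch of expected-utility action selection follows; the probability model and utility function are illustrative assumptions:
```python
# A sketch of utility-based action selection: choose the action with the
# highest expected utility. `outcomes` and `utility` are assumed placeholders.

def utility_based_agent(state, actions, outcomes, utility):
    """Pick the action that maximizes expected utility.

    outcomes: outcomes(state, action) -> list of (probability, next_state) pairs
    utility:  utility(state) -> real number measuring how good a state is
    """
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(state, action))

    return max(actions, key=expected_utility)
```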
Learning Agents
Problem: The previous agent programs describe methods for selecting actions, but how
are these agent programs programmed? Programming them by hand is inefficient and
ineffective!
Solution: Build learning machines and then teach them (rather than instructing them).
Advantage: Robustness of the agent program toward initially unknown environments.
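As a rough sketch of this idea, the standard learning-agent decomposition (a performance element that selects actions, a learning element that improves it, and a critic that provides feedback) could be wired together as follows; all three components here are assumed placeholders:
```python
# A rough sketch wiring together the standard learning-agent components;
# all three components here are assumed placeholder callables.

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic):
        self.performance_element = performance_element  # selects actions from percepts
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # scores how well the agent is doing

    def program(self, percept):
        feedback = self.critic(percept)
        # Teach rather than instruct: adjust the action-selection component
        # based on feedback instead of hand-coding its behavior.
        self.learning_element(self.performance_element, feedback)
        return self.performance_element(percept)
```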