
COURSE CODE : MCA303_EL1

COURSE TITLE : ARTIFICIAL INTELLIGENCE


TOTAL CREDITS : 4

Text Books and References

1. Stuart Russell and Peter Norvig:
   Artificial Intelligence – A Modern Approach, 2nd Edition,
   Pearson Education
2. Elaine Rich and Kevin Knight:
   Artificial Intelligence, 2nd Edition, Tata McGraw Hill
3. Dan W. Patterson:
   Introduction to Artificial Intelligence and Expert Systems
Module I AI - Introduction and History
• Defining AI
• Acting Humanly (Turing Test Approach)
• Thinking Humanly (Cognitive Modeling
Approach)
• Thinking Rationally (Laws of Thought Approach)
• Acting Rationally (Rational Agent Approach)
• Foundations of Artificial Intelligence
• History of AI
Intelligence

Intelligence – the ability to learn, understand,
remember, think, and reason.

Artificial Intelligence (unofficial definition) –
the field of computer science that studies "how to make
computers do things which, at the moment, people do
better".
What is Artificial Intelligence?

• The concept of AI is based on understanding how natural
(human) intelligence works.
What is Artificial Intelligence?

• According to the father of Artificial Intelligence, John
McCarthy, it is "the science and engineering of making
intelligent machines, especially intelligent computer
programs".
• Artificial Intelligence is a way of making a computer, a
computer-controlled robot, or a piece of software think
intelligently, in a manner similar to how intelligent
humans think.
• AI is accomplished by studying how the human brain thinks,
and how humans learn, decide, and work while trying to
solve a problem, and then using the outcomes of this
study as a basis for developing intelligent software and
systems.
Four Approaches – in defining AI

Thinking Humanly (Cognitive Modeling Approach)

Acting Humanly (Turing Test Approach)

Thinking Rationally (Laws of Thought Approach)

Acting Rationally (Rational Agent Approach)
Definition of Artificial Intelligence
(AI)
• The above definitions vary along two main
dimensions.
• The definitions on top are concerned with
thought processes and reasoning, whereas
the ones on the bottom address behaviour.
• The definitions on the left measure success in
terms of fidelity to human performance,
whereas the ones on the right measure
against an ideal performance measure,
called rationality.
• A system is rational if it does the “right
thing,” given what it knows.

A human-centered approach must be in part an
empirical science involving observations and
hypotheses about human behavior.

A rationalist approach involves a combination of
mathematics and engineering.


Empirical science - one which is built up out of the
elements of experience.
Acting Humanly: The Turing Test
Approach

This is a problem that has greatly troubled AI
researchers for years. They ask the question
“when can we count a machine as being
intelligent?”

The most famous response is attributed to Alan
Turing, a British mathematician and computing
pioneer. The famous “Turing Test” was named
after him
• The Turing Test, proposed by Alan Turing
(1950), was designed as a thought
experiment that would sidestep the
philosophical vagueness of the question
"Can a machine think?".
Acting Humanly: The Turing
Test approach

• A computer passes the test if a human


interrogator, after posing some written
questions, cannot tell whether the written
responses come from a person or from a
computer.
• Programming a computer to pass a rigorously
applied test provides plenty to work on.
Acting Humanly: The Turing
Test approach
[Diagram: Turing Test]
Acting Humanly: The Turing
Test Approach
• For a computer to pass this test, it
should possess the following capabilities:
• natural language processing to
communicate successfully in a human
language;
• knowledge representation (knowledge
base) to store what it knows or hears;
• automated reasoning to answer
questions and draw new conclusions;
• machine learning to adapt to new
circumstances and to detect and extrapolate
patterns.
Acting Humanly: The Turing
Test Approach

• Other researchers have proposed the
Total Turing Test, which requires interaction
with objects and people in the real world.
• To pass the Total Turing Test, the computer
will need computer vision and speech
recognition to perceive the world, and
robotics to manipulate objects and move
about.

Thinking Humanly (Cognitive Modeling Approach)

• To make a program think like humans, one
should know how humans think.
• For this, we need to get inside the actual workings
of human minds.

Thinking Humanly (Cognitive Modeling Approach)

• There are three ways to do this:


• Introspection
• It is the examination or observation of one’s own
mental or emotional process.
• trying to catch our own thoughts as they go
by.
• Psychological experiments
• It is a scientific procedure undertaken to
make a discovery, test a hypothesis or
demonstrate a known fact.
• observing a person in action;
• Brain imaging
• observing the brain in action.
Thinking Humanly (Cognitive
Modeling Approach)

Once we have a sufficiently precise theory of the mind, it becomes
possible to express the theory as a computer program.

Allen Newell and Herbert Simon developed the General
Problem Solver (GPS) program to model human thinking
and check whether it could solve problems like a person by
following the same reasoning steps as a human.
Thinking Humanly (Cognitive Modeling Approach)

• Its intent is not just to solve the problem correctly,
but to go through the same series of steps as a
human brain would to solve it.
• Cognitive science brings together computer models
from AI and experimental techniques from
psychology to construct precise and testable
theories of the human mind.
Thinking Humanly (Cognitive Modeling Approach)

What is Cognitive Science?

Cognitive science is the interdisciplinary study


of mind and intelligence, embracing philosophy,
psychology, artificial intelligence, neuroscience,
linguistics, and anthropology.
Thinking Rationally (laws of thought approach)

• If someone always reasons correctly in a given set of
circumstances with a given amount of information, we
call this the laws of thought approach.
• The Greek philosopher Aristotle was one of the
first to attempt to codify “right thinking”.
• His syllogisms provided patterns for argument
structures that always yielded correct
conclusions when given correct premises.
Thinking Rationally (laws of thought approach)

• E.g.
• "Socrates is a man";
• "All men are mortal";
• therefore, "Socrates is mortal".
• These laws of thought were supposed to govern the
operation of the mind; their study initiated the field
called logic.
• Two main obstacles to implementing this approach are:
• It requires complete (100%) knowledge of the domain.
• It requires too much computation.
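The syllogism above can be sketched in code. This is a minimal illustration (not from the slides, and not how production logic systems work): premises are encoded as facts and a rule, and forward chaining derives new conclusions until nothing new follows. All names here are invented for the example.

```python
# Premise 1: Socrates is a man.
facts = {("man", "Socrates")}
# Premise 2 (rule): for all x, man(x) -> mortal(x).
rules = [(("man",), "mortal")]

def forward_chain(facts, rules):
    """Repeatedly apply rules to the known facts until a fixed point
    is reached, i.e. no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for pred, subject in list(derived):
                if pred in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))  # new conclusion
                    changed = True
    return derived

# Conclusion: Socrates is mortal.
print(forward_chain(facts, rules))
```

Even this toy example hints at the second obstacle on the slide: with many rules and facts, the number of derivable conclusions grows quickly.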
Acting rationally (The rational agent
Approach)
• An agent is anything that perceives its environment
through sensors and acts upon that environment
through effectors or actuators.
• Acting rationally means acting to achieve one’s
goals, given one’s beliefs.
• An agent is just something that perceives and acts.
• In this approach, AI is viewed as the study and
construction of rational agents.
Acting rationally (The rational agent
Approach)
• A rational agent is one that acts so as to achieve the
best outcome or, when there is uncertainty, the
best expected outcome.
• For e.g.
• To build a rational agent for maintaining the
temperature of a particular room, it should have
a temperature sensor to detect the temperature;
if the room temperature is higher than the
specified temperature, the agent has to take
action through its effectors to reduce the
temperature.
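The room-temperature example above can be sketched as a percept-to-action mapping. This is a hypothetical illustration, not from the slides: the sensed temperature is the percept, the returned string stands in for an effector command, and the setpoint value is invented.

```python
SETPOINT = 24.0  # assumed desired room temperature (degrees Celsius)

def thermostat_agent(sensed_temp):
    """Simple reflex agent: map the current percept (temperature)
    directly to an action for the effector."""
    if sensed_temp > SETPOINT:
        return "cool"   # temperature above setpoint: act to reduce it
    return "idle"       # otherwise, do nothing

print(thermostat_agent(27.0))  # hot room -> "cool"
print(thermostat_agent(22.0))  # cool room -> "idle"
```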
Acting rationally (The rational agent
Approach)
• Different types of rational agents are
• Human agent
• Robotic agent
• Software agent
Acting rationally (The rational agent
Approach)
• E.g., an agent that is designed to play a game
should make moves that increase its chances of
winning the game.
• The rational agent approach to AI has two
advantages:
• First, it is more general than the "laws of thought"
approach, because correct inference is only a useful
mechanism for achieving rationality and not a
necessary one.
• Second, it is more amenable to scientific development
than approaches based on human behaviour or
human thought, because the standard of rationality
is clearly defined and completely general.
• AI has focused on the study and construction of
agents that do the right thing.
Foundations of Artificial
Intelligence

• The foundations section briefs the various disciplines that
contributed ideas, viewpoints, and techniques to AI.

• However, the list is not limited to the following,
because contributions from many fields laid the
foundation of AI.
INTELLIGENT AGENTS –
DIFFERENT TYPES - EXAMPLES
Foundations of Artificial
Intelligence

❖ Philosophy (428 B.C. - Present)


❖ Mathematics (c. 800 - Present)
❖ Economics (1776 - Present)
❖ Neuroscience (1861 - Present)
❖ Psychology (1879 - Present)
❖ Computer Engineering (1940 - Present)
❖ Control theory and Cybernetics (1948 -
Present)
❖ Linguistics (1957 - Present)
Philosophy

• Can formal rules be used to draw valid


conclusions?
• How does the mind arise from a
physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
• Logic, methods of reasoning
• Mind as a physical system
• Foundations of learning, language, and
rationality
Mathematics

• What are the formal rules to draw valid
conclusions?
• What can be computed?
• How do we reason with uncertain information?
• Areas developed:
• Formal logic
• Probability
• Statistics
• Algorithms
Economics

• How should we make decisions so as to
maximize payoff?
• How should we do this when others may not
go along?
• How should we do this when the payoff may
be far in the future?
• Decision theory, which combines probability
theory with utility theory, provides a formal
and complete framework for decisions made
under uncertainty.
• If payoffs are not immediate but instead result
from several actions taken in sequence, the
field of operations research applies.
Neuroscience

• How do brains process information?


• Neuroscience is the study of the
nervous system, particularly the
brain
• The brain consists of nerve cells, or neurons
Psychology

• How do humans and animals think and act?


• Behaviourism studies only objective
measures of the percepts (or stimuli) given
to an animal and its resulting actions (or
responses).
• Cognitive psychology views the brain
as an information-processing device.
• Intelligence augmentation holds that
computers should augment human abilities
rather than automate away human tasks.
Computer Engineering

• How can we build an efficient computer?


• For artificial intelligence to succeed, we
need two things:
• intelligence and an artifact.
• The computer has been the artifact of
choice.
• Each generation of computer hardware
has brought an increase in speed and
capacity and a decrease in price
Control Theory and
Cybernetics
• How can artifacts operate under their own
control?
• The objective of control theory is to develop a
model or algorithm governing the application
of system inputs to drive the system to a
desired state, while minimizing any delay,
overshoot, or steady-state error and ensuring
a level of control stability, often with the aim
of achieving a degree of optimality.
• Cybernetics is associated with models in
which a monitor compares what is happening
to a system at various sampling times with
some standard of what should be happening,
and a controller adjusts the system’s
behaviour accordingly.
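The monitor-and-controller loop described above can be sketched as a proportional controller, the simplest feedback scheme: compare the system's state with the desired state, and apply a correction proportional to the error. The plant model (state changes directly by the correction) and the gain value are invented for illustration.

```python
def simulate(setpoint, state, gain=0.5, steps=20):
    """Drive `state` toward `setpoint` with proportional feedback."""
    for _ in range(steps):
        error = setpoint - state  # monitor: compare with the standard
        state += gain * error     # controller: adjust the behaviour
    return state

# Starting far from the desired state, the error shrinks each step
# (here it halves, since gain = 0.5), so the system converges.
final = simulate(setpoint=100.0, state=0.0)
print(round(final, 3))
```

Real controllers add integral and derivative terms (PID) to remove steady-state error and damp overshoot, the two failure modes the slide mentions.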
Linguistics
• How does language relate to thought?
• Modern linguistics and AI intersect
in a hybrid field called
• computational linguistics or natural
language processing
• Natural Language Processing is a field of
AI that focuses on the interaction
between humans and computers using
natural language
• Used in a wide range of applications,
from virtual assistants to sentiment
analysis
• Computational Linguistics is a field of
linguistics that focuses on the study of
language from a computational
perspective
• Used in a wide range of applications,
from machine translation to speech
recognition
The History of AI

❖The gestation of AI (1943-55)


❖The birth of AI (1956)
❖Early enthusiasm, great expectations (1952-69)
❖A dose of reality (1966-73)
❖Knowledge-based systems (1969-79)
❖AI becomes an industry (1980 - present)
❖The return of neural networks (1986 - present)
❖AI becomes a science (1980- present)
❖The emerging intelligent agents (1995 - present)
History of AI- Milestones

• The inception of Artificial Intelligence (1943-1952)

• 1943:
• The first work which is now recognized as AI was
done by Warren McCulloch and Walter Pitts in
1943.
• They proposed a model of artificial neurons.
• It was not then referred to as a work in artificial
intelligence.
• 1949:
• Donald Hebb demonstrated an
updating rule for modifying the
connection strength between neurons.
• His rule is now called Hebbian learning.
History of AI- Milestones

• 1950:
• Alan Turing publishes "Computing
Machinery and Intelligence" in which he
proposed a test.
• The test can check the machine’s
ability to exhibit intelligent behaviour
equivalent to human intelligence, called a
Turing test.
History of AI- Milestones
• Early enthusiasm, great expectations
(1952-1969)
• 1955:
• Allen Newell and Herbert A. Simon
created the "first artificial
intelligence program", named "Logic
Theorist".
• This program proved 38 of 52 mathematics
theorems, and found new and more
elegant proofs for some theorems.
• 1956:
• The term "Artificial Intelligence"
was first adopted by American
computer scientist John McCarthy at the
Dartmouth Conference.
• For the first time, AI was coined as an
academic field.
History of AI- Milestones

• A dose of reality (1966-1973)


• 1966:
• The researchers emphasized developing algorithms which
can solve mathematical problems. Joseph Weizenbaum
created the first chatbot in 1966, which was named ELIZA.
• 1972:
• The first intelligent humanoid robot was built in Japan,
which was named WABOT-1.
History of AI- Milestones

• Expert Systems (1969-1986)

• The duration between the years 1974 and 1980 was
the first AI winter, a time period when
computer scientists dealt with a severe shortage of
funding from the government for AI research.
• During AI winters, public interest in
artificial intelligence decreased.
• 1980:
• After the AI winter, AI came back with the "Expert
System".
• Expert systems are programs that emulate the
decision-making ability of a human expert.
History of AI- Milestones

• The return of neural networks (1986 - present)


• In the mid-1980s, at least four different
groups reinvented the back-propagation learning
algorithm first found in 1969.
• As occurred with the separation of AI and
cognitive science, modern neural network research
has bifurcated into two fields:
• One concerned with creating effective
network architectures and algorithms and
understanding their mathematical properties.
• The other concerned with careful modeling of the
empirical properties of actual neurons and
ensembles of neurons.
History of AI- Milestones
• Probabilistic reasoning and machine learning (1987
- present)
• It is now more common to build on existing theories than
to propose brand-new ones, to base claims on rigorous
theorems or hard experimental evidence rather than on
intuition, and to show relevance to real-world
applications rather than toy examples.
• In recent years, approaches based on hidden Markov
models (HMMs) have come to dominate the area.
• Two aspects of HMMs are relevant:
• First, they are based on a rigorous mathematical theory,
which has allowed speech researchers to build on several
decades of mathematical results developed in other fields.
• Second, they are generated by a process of training on a
large corpus of real speech data.
History of AI- Milestones

• Probabilistic reasoning and machine


learning (1987 - present)
• 1990
• The Bayesian network formalism was
invented to allow efficient representation of,
and rigorous reasoning with, uncertain
knowledge
History of AI- Milestones

• Big Data (2001 - present)


• Remarkable advances in computing power
and the creation of the World Wide Web have
facilitated the creation of very large data sets
known as big data
• Deep Learning (2011 - present)
• It refers to machine learning using multiple
layers of simple, adjustable computing
elements
• The gestation of AI
• (1943 - 1956):
- 1943: McCulloch & Pitts: Boolean circuit
model of brain.
- 1950: Turing’s “Computing Machinery and
Intelligence”.
- 1956: McCarthy’s name “Artificial
Intelligence” adopted.
• Early enthusiasm, great expectations (1952 -
1969):
- Early successful AI programs: Samuel's
checkers, Newell & Simon's Logic Theorist,
Gelernter's Geometry Theorem Prover.
- Robinson's complete algorithm for logical
reasoning.
• A dose of reality (1966 - 1974):
- AI discovered computational complexity.
- Neural network research almost disappeared
after Minsky & Papert’s book in 1969.

• Knowledge-based systems (1969 - 1979):


- 1969: DENDRAL by Buchanan et al.
- 1976: MYCIN by Shortliffe.
- 1979: PROSPECTOR by Duda et al.
• AI becomes an industry (1980 - 1988):
- Expert systems industry booms.
- 1981: Japan’s 10-year Fifth Generation project.

• The return of NNs and novel AI (1986 - present):


- Mid-1980s: Back-propagation learning algorithm
reinvented.
- Expert systems industry busts.
- 1988: Resurgence of probability.
- 1988: Novel AI (ALife, GAs, Soft Computing,
…).
- 1995: Agents everywhere.
- 2003: Human-level AI back on the agenda.
