AI Unit 1


18CSC305J ARTIFICIAL INTELLIGENCE

Introduction

1
What is AI?

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 2


Acting humanly: Turing Test
• Turing (1950) "Computing machinery and intelligence":
• "Can machines think?"  "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game

The computer would need to possess the following capabilities:


• natural language processing to enable it to communicate successfully in English (or some
other human language);
• knowledge representation to store information provided before or during the interrogation;
• automated reasoning to use the stored information to answer questions and to draw new
conclusions;
• machine learning to adapt to new circumstances and to detect and extrapolate patterns.
To pass the total Turing Test, the computer will need
• computer vision to perceive objects
• robotics to move them about.

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 3


Thinking humanly: cognitive modeling
Determining how humans think
• through introspection—trying to catch our own thoughts as they go by
• through psychological experiments

Express the theory as a computer program


• if the program's input/output and timing behavior matches the corresponding human behavior,
that is evidence that some of the program's mechanisms could also be operating in humans

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 4


Thinking rationally: "laws of thought"
• Aristotle: what are correct arguments/thought processes?
• Several Greek schools developed various forms of logic: notation and rules of derivation for thoughts;
may or may not have proceeded to the idea of mechanization
• Direct line through mathematics and philosophy to modern AI
• Problems:
1. Not all intelligent behavior is mediated by logical deliberation
2. What is the purpose of thinking? What thoughts should I have?

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 5


Acting rationally: rational agent
• Rational behavior: doing the right thing
• The right thing: that which is expected to maximize goal achievement, given the available
information
• An agent is just something that perceives and acts
• Doesn't necessarily involve thinking – but thinking should be in the service of rational action

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 6


Rational agents
• An agent is an entity that perceives and acts
• Abstractly, an agent is a function from percept histories to actions:
f : P* → A

• For any given class of environments and tasks, we seek the agent (or class of agents) with the best
performance
• computational limitations make perfect rationality unachievable
→ design best program for given machine resources

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 7


History of AI
• AI has roots in a number of scientific disciplines
– computer science and engineering (hardware and software)
– philosophy (rules of reasoning)
– mathematics (logic, algorithms, optimization)
– cognitive science and psychology (modeling high level
human/animal thinking)
– neural science (model low level human/animal brain activity)
– linguistics
• The birth of AI (1943 – 1956)
– McCulloch and Pitts (1943): simplified mathematical model of
neurons (resting/firing states) can realize all propositional logic
primitives (can compute all Turing computable functions)
– Alan Turing: Turing machine and Turing test (1950)
– Claude Shannon: information theory; possibility of chess playing
computers
– Boole, Aristotle, Euclid (logics, syllogisms)

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 8


History of AI
• Early enthusiasm (1952 – 1969)
– 1956 Dartmouth conference
John McCarthy (Lisp);
Marvin Minsky (first neural network machine);
Alan Newell and Herbert Simon (GPS);
– Emphasis on intelligent general problem solving
GPS (means-ends analysis);
Lisp (AI programming language);
Resolution by John Robinson (basis for automatic theorem
proving);
heuristic search (A*, AO*, game tree search)
• Emphasis on knowledge (1966 – 1974)
– domain specific knowledge is the key to overcome existing
difficulties
– knowledge representation (KR) paradigms
– declarative vs. procedural representation

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 9


History of AI
• Knowledge-based systems (1969 – 1999)
– DENDRAL: the first knowledge intensive system (determining 3D structures
of complex chemical compounds)
– MYCIN: first rule-based expert system (containing 450 rules for diagnosing
blood infectious diseases)
EMYCIN: an ES shell
– PROSPECTOR: first knowledge-based system that made significant profit
(geological ES for mineral deposits)
• AI became an industry (1980 – 1989)
– wide applications in various domains
– commercially available tools
– AI winter
• Current trends (1990 – present)
– more realistic goals
– more practical (application oriented)
– distributed AI and intelligent software agents
– resurgence of natural computation - neural networks and emergence of
genetic algorithms – many applications
– dominance of machine learning (big apps)
Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 10
State of the art
• HITECH became the first computer program to defeat a grandmaster (Arnold Denker) in a game
of chess.
• A speech understanding program named PEGASUS produced a confirmed reservation that saved
the traveller $894 over the regular coach fare.
• MARVEL, a real-time expert system, monitors the massive stream of data transmitted by a
spacecraft, handling routine tasks and alerting the analysts to more serious problems.

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 11


Advantages of Artificial Intelligence
• more powerful and more useful computers
• new and improved interfaces
• solving new problems
• better handling of information
• relieves information overload
• conversion of information into knowledge

12
The Disadvantages
• increased costs
• difficulty with software development - slow and expensive
• few experienced programmers
• few practical products have reached the market as yet.

13
AI Technique
• AI deals with a large spectrum of problems
• Applications spread across domains, from medicine to manufacturing, each with its own
complexities
• AI deals with
• various day-to-day problems
• different identification and authentication problems (in security)
• classification problems in decision-making systems
• interdependent and cross-domain problems (such as cyber-physical systems)
• The problems AI faces are hard to resolve and are also computationally intensive

14
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
AI Technique
• Intelligence requires knowledge
(less desirable properties)
– voluminous
– hard to characterize accurately
– constantly changing
– differs from data by being organized in a way that
corresponds to the ways it will be used

Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems 15


Knowledge Representation
• Captures generalizations – defines shared properties
• Can be understood by the people who provide it – e.g., taking readings
• Can be easily modified – to correct errors and reflect changes
• Can be used in a great many situations, even if it is not accurate or complete
• Can be used to reduce the range of possibilities that must be considered (narrowing the bulk)
Categories of problems
• Structured problems – goal state defined
• Unstructured problems – goal state not known
• Linear problems – based on a linear relationship between the dependent and independent variables
• Non-linear problems – no linear dependency between the variables

Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems 16


Problem
• In AI, we formally define a problem as
• a space of all possible configurations where each configuration is called a state
• The state-space is the configuration of the possible states and how they connect
to each other e.g. the legal moves between states.
• an initial state
• one or more goal states
• a set of rules/operators which move the problem from one state to the next
• In some cases, we may enumerate all possible states
• but usually, such an enumeration will be overwhelmingly large so we only
generate a portion of the state space, the portion we are currently examining
• we need to search the state-space to find an optimal path from a start state to a
goal state.
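The formulation above maps directly onto code. Below is a minimal illustrative Python sketch (the names Problem, successors and is_goal are our own, not from the textbook) that bundles an initial state, goal states and operators, which is exactly what a search routine needs.

# A minimal problem definition: initial state, goal test, successor function.
# Names (Problem, successors, is_goal) are illustrative, not from any library.

class Problem:
    def __init__(self, initial_state, goal_states, operators):
        self.initial_state = initial_state
        self.goal_states = set(goal_states)
        self.operators = operators          # list of functions: state -> state or None

    def is_goal(self, state):
        return state in self.goal_states

    def successors(self, state):
        """Apply every operator; keep the legal (non-None) results."""
        for op in self.operators:
            nxt = op(state)
            if nxt is not None:
                yield nxt

# Tiny example: states are integers, operators add 1 or double the state.
if __name__ == "__main__":
    p = Problem(3, [10], [lambda s: s + 1, lambda s: s * 2])
    print(list(p.successors(3)))   # [4, 6]
    print(p.is_goal(10))           # True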

17
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
State space: Tic-Tac-Toe

Goal: arrange three of your marks in a horizontal, vertical or diagonal line to win


18
State space: 8 Puzzle

The 8-puzzle search space consists of 9! = 362,880 tile arrangements (of which half, 181,440, are
reachable from any given start state)


19
Search
• Search is a general algorithm that helps in finding the path in state space
• The path may lead to the solution or dead end.
• Control strategies- overall rules and approach towards searching
i) forward search(data directed)
Starts search from initial state towards goal state.
Ex: locating a city from current location
ii) backward search(goal directed)
Search starts from the goal state towards a solvable initial state.
Ex: start from target city

20
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Search
• Strategies to explore the states
• Informed search – No guarantee for solution but high probability of getting solution
-heuristic approach is used to control the flow of solution path
-heuristic approach is a technique based on common sense, rule of
thumb, educated guesses or intuitive judgment
• Uninformed search – generates all possible states in the state space and checks for
the goal state.
- time consuming due to large state space
- used where error in the algorithm has severe consequences

• Parameters for search evaluation


i) completeness: Guaranteed to find a solution within finite time
ii) space and time complexity: memory required and time factor needed
iii) optimality and admissibility: correctness of the solution
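As a concrete illustration of an uninformed strategy, here is a short breadth-first search sketch (the toy graph, function names and start/goal states are made up for the example): it generates states level by level and stops as soon as the goal test succeeds, which makes it complete on finite spaces but potentially memory-hungry.

from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Uninformed (blind) search: explore states level by level.
    Returns a list of states from start to a goal, or None."""
    frontier = deque([[start]])          # queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:       # avoid revisiting states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # no solution in a finite space

# Toy state space given as an adjacency map (illustrative only).
graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"], "G": []}
print(breadth_first_search("S", lambda s: s == "G", lambda s: graph[s]))
# -> ['S', 'B', 'G']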

21
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Problem solving with AI
Well-structured problems – yield a right answer when an algorithm is applied
Ex:
• Quadratic equation – find the value of x
• Speed of a ball when it reaches the batsman
• Network flow analysis
Ill-structured problems – do not yield a particular answer
Ex:
• How to dispose of wet waste safely
• Security threats in a big social gathering
Unstructured problems – exact goal state not known (many goal states)
Ex: improve the life expectancy of human beings
Linear problems – either have a solution or have none
Non-linear problems – the relationship between input and output is not linear

22
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
23
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
AI Applications
• Credit granting
• Information management and retrieval
• AI and expert systems embedded in products
• Plant layout
• Help desk and assistance
• Employee performance evaluation
• Shipping
• Marketing
• Warehouse optimization
• In-space workstation maintenance
• Satellite controls
• Network developments
• Nuclear management

24
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
AI Models

Semiotic models – based on sign processes, signification or communication


Statistical models – representation and formalization of relationships through statistical techniques
- use a history of data for decision making
- use probabilistic approaches
25
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Data Acquisition and Learning Aspects in AI
• Knowledge discovery – data mining and machine learning:
data – recorded facts
information – patterns underlying the data
data mining or knowledge discovery – extraction of meaningful information
machine learning – algorithms that improve performance with experience
• Computational Learning Theory (COLT) – formal mathematical models that define
the complexity of computation, prediction and feasibility
pattern analysis – Probably Approximately Correct (PAC) learning of hypotheses,
mistake bounds on the target function
• Neural and evolutionary computation – speed up the mining of data
evolutionary computing – inspired by biological properties;
used for decision making and optimization
neural computing – models the neural behaviour of the human brain;
used for pattern recognition and classification
26
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Data Acquisition and Learning Aspects in AI
• Intelligent agents and multi-agent systems – decision making in complex scenarios
intelligent agents – act based on knowledge, available resources and
perspectives
multi-agent systems – a combination of more than one intelligent agent, each contributing its
own percepts
• Multi-perspective integrated intelligence – utilizing and exploiting knowledge from
different perspectives

27
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Problem Solving: Definition
A problem exists when you want to get from “here” (a knowledge state) to “there” (another
knowledge state) and the path is not immediately obvious.
Problem solving makes optimal use of knowledge and information to select a set of actions for reaching the
goal.
What are problems?
Everyday experiences
How to get to the airport?
How to study for a quiz, complete a paper, and finish a lab before
recitation?
Domain specific problems
Physics or math problems
Puzzles/games
Crossword, anagrams, chess
Categories of problem solving
• General purpose: means-ends analysis
the present situation is compared with the goal to detect the difference;
an action that reduces the difference is selected
Ex: selecting the mode of transport
• Special purpose – modelled for a specific problem, with specific features
Ex: classifying legal documents with reference to a particular case
28
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Problem solving process
• Problem solving-process of generating solutions for the given situation
• Problem is defined,
1. in a context
2. has well defined objective
3. solution has set of activities
4. uses previous knowledge and domain knowledge
Primary objective-problem identification

29
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Problem solving process
• Problem solving technique involves
1. problem definition
2. problem analysis and representation
3. planning
4. execution
5. evaluating solution
6. consolidating gains
• A search algorithm takes a problem as input and returns a solution in the form of
an action sequence.
• execution phase-Once a solution is found, the actions it recommends can be
carried out.

30
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Formulating problems
• Problem formulation is the process of deciding what actions and states to consider, and follows
goal formulation.
• Goal formulation-the agent may wish to decide on some other factors that affect the desirability
of different ways of achieving the goal.
• Knowledge and problem types
single-state problem-Agent knows exactly what each of its actions does and it can
calculate exactly which state it will be in after any sequence of actions.
multiple-state problem-when the world is not fully accessible, the agent must reason
about sets of states that it might get to, rather than single states.
contingency problem-the agent may need to calculate a whole tree of actions, rather than a
single action sequence; each branch of the tree deals with a possible contingency that might
arise.
exploration problem-the agent learns a "map" of the environment, which it can then use
to solve subsequent problems.

31
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Formulating problems
• Well-defined problems and solutions
A problem is really a collection of information that the agent will use to decide what to do.
Elements of a problem:
1. The initial state that the agent knows itself to be in.
2. The set of possible actions available to the agent.
operator is used to denote the description of an action to
reach a state.
state space-the set of all states reachable from the initial state by
any sequence of actions.
A path in the state space is simply any sequence of actions
leading from one state to another.
3. The goal test, which the agent can apply to a single state description to determine if it is
a goal state.
4. A path cost function is a function that assigns a cost to a path.
The output of a search algorithm is a solution, that is, a path from the initial state to a state that
satisfies the goal test.

32
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Formulating problems
• Measuring problem-solving performance
Is a solution obtained at all?
Is the obtained solution a good one (i.e., does it have a low path cost)?
Search cost-associated with the time and memory required to find a solution.
total cost of the search is the sum of the path cost and the search cost
• Choosing states and actions
To decide a better solution, determine the measurement of path cost function
The process of removing detail from a representation is called abstraction

33
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Problem Formulation and Representation
• Example: water jug problem – given a 5-gallon jug and a 2-gallon jug, both initially full, find a way to
empty the 2-gallon jug and leave exactly 1 gallon of water in the 5-gallon jug
states: amount of water in the jugs
actions: 1. empty the big jug
2. empty the small jug
3. pour water from the small jug into the big jug
4. pour water from the big jug into the small jug
Goal: 1 gallon of water in the big jug and the small jug empty
path cost: number of actions (minimum number of actions -> better solution)
Representation: jugs(b, s), where b = amount of water in the bigger jug, s = amount of water in the
smaller jug
initial state: (5,2)
goal state: (1,0)
operators: i) empty a jug
ii) fill a jug (by pouring from the other jug)
34
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Problem Formulation and Representation
• Solution:
initial state: (5,2)
goal state: (1,0)
operators:
i) empty big (remove water from the big jug)
ii) empty small (remove water from the small jug)
iii) pour small into big (pour water from the small jug into the big jug)
iv) pour big into small (pour water from the big jug into the small jug)
action sequence: 2, 4, 2, 4, 2
i.e., (5,2) → (5,0) → (3,2) → (3,0) → (1,2) → (1,0)
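The operator sequence can be checked mechanically. The short sketch below simulates the listed actions from the initial state (5, 2) and confirms that the goal (1, 0) is reached; the jug capacities and operator numbering follow the slide, while the function name apply_op is our own.

BIG_CAP, SMALL_CAP = 5, 2   # jug capacities in gallons (from the slide)

def apply_op(state, op):
    """Operators numbered as on the slide: 1 empty big, 2 empty small,
    3 pour small into big, 4 pour big into small."""
    big, small = state
    if op == 1:
        return (0, small)
    if op == 2:
        return (big, 0)
    if op == 3:
        poured = min(small, BIG_CAP - big)
        return (big + poured, small - poured)
    if op == 4:
        poured = min(big, SMALL_CAP - small)
        return (big - poured, small + poured)
    raise ValueError("unknown operator")

state = (5, 2)                       # initial state
for op in [2, 4, 2, 4, 2]:           # action sequence from the slide
    state = apply_op(state, op)
    print(op, "->", state)
print("goal reached:", state == (1, 0))   # goal state (1, 0)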

35
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
Toy problems
Intended to illustrate or exercise various problem-solving methods
The 8-puzzle
• 3x3 board with eight numbered tiles and a blank space.
• A tile adjacent to the blank space can slide into the space.
• objective – to reach a specified goal configuration.
Problem formulation:
• States: a state description specifies the location of each of the eight tiles in one of the nine
squares. For efficiency, it is useful to include the location of the blank.
• Operators: blank moves left, right, up, or down.
• Goal test: state matches the specified goal configuration.
• Path cost: each step costs 1, so the path cost is just the length of the path.
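The operator set "blank moves left, right, up or down" translates into a small successor function. In the sketch below, a state is a 9-tuple read row by row with 0 standing for the blank; this encoding, the function names, and the particular goal layout are our own choices for illustration.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # one common goal layout; any fixed goal works

def successors(state):
    """Yield states reachable by sliding the blank (0) left, right, up or down."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = {"up": -3, "down": 3, "left": -1, "right": 1}
    for name, delta in moves.items():
        if name == "up" and row == 0: continue
        if name == "down" and row == 2: continue
        if name == "left" and col == 0: continue
        if name == "right" and col == 2: continue
        j = i + delta
        s = list(state)
        s[i], s[j] = s[j], s[i]      # slide the adjacent tile into the blank
        yield name, tuple(s)

def is_goal(state):
    return state == GOAL             # goal test: matches the goal configuration

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)  # blank one move away from the goal
for move, nxt in successors(start):
    print(move, nxt, "goal" if is_goal(nxt) else "")

Each generated move has path cost 1, so the path cost of a solution is simply its length, as stated above.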

36
Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach
Toy problems
The 8-queens problem
• Place eight queens on a chessboard such that no queen attacks any other.
• There are two main kinds of formulation
The incremental formulation involves placing queens one by one
the complete-state formulation starts with all 8 queens on the board and moves
them around.
• Goal test: 8 queens on board, none attacked.
• Path cost: zero.
There are also different possible states and operators.
Consider the following for incremental formulation:
• States: any arrangement of 0 to 8 queens on board.
• Operators: add a queen to any square.
Consider the following for complete state formulation:
• States: arrangements of 8 queens, one in each column.
• Operators: move any attacked queen to another square in the same column.
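The goal test "8 queens on board, none attacked" reduces to a pairwise conflict check. The sketch below uses the complete-state formulation of one queen per column; the tuple representation (entry i is the row of the queen in column i) and the function names are our own.

from itertools import combinations
import random

def attacked(board):
    """board[i] = row of the queen in column i (complete-state formulation)."""
    for (c1, r1), (c2, r2) in combinations(enumerate(board), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):   # same row or same diagonal
            return True
    return False

def goal_test(board):
    return len(board) == 8 and not attacked(board)

def move_queen(board, col, new_row):
    """Operator: move the queen in `col` to another square in the same column."""
    b = list(board)
    b[col] = new_row
    return tuple(b)

start = tuple(random.randrange(8) for _ in range(8))   # 8 queens, one per column
print("start:", start, "conflict-free:", goal_test(start))
print("after move:", goal_test(move_queen(start, 0, (start[0] + 1) % 8)))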

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 37


Toy problems
Cryptarithmetic
• Letters stand for digits
• The aim is to find a substitution of digits for letters such that the resulting sum is arithmetically
correct.
• Each letter must stand for a different digit.
Problem formulation:
• States: a cryptarithmetic puzzle with some letters replaced by digits.
• Operators: replace all occurrences of a letter with a digit not already appearing in the puzzle.
• Goal test: puzzle contains only digits, and represents a correct sum.
• Path cost: zero. All solutions equally valid.

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 38


Toy problems
The vacuum world
• Assume that the agent knows its location and the locations of all the pieces of dirt, and the
suction is still in good working order.
• States: one of the eight states
• Operators: move left, move right, suck.
• Goal test: no dirt left in any square.
• Path cost: each action costs 1.

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 39


Real world problems
More difficult problems, whose solutions people actually care about
Route finding
• routing in
computer networks
automated travel advisory systems
airline travel planning systems.
• the actions in the problem do not have completely known outcomes:
flights can be late or overbooked
connections can be missed
fog or emergency maintenance can cause delays.
• Other real-world problems (refer to Artificial Intelligence: A Modern Approach by Stuart J. Russell
and Peter Norvig, page 69)
Touring and travelling salesman problem
VLSI layout
Robot navigation
Assembly sequencing
Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 40
Problem types and Characteristics

Problem types:

• single-state problem – the agent knows exactly what each of its actions does and can calculate
exactly which state it will be in after any sequence of actions
• multiple-state problem – when the world is not fully accessible, the agent must reason about sets
of states that it might get to, rather than single states
• contingency problem – the agent may need to calculate a whole tree of actions rather than a
single action sequence; each branch of the tree deals with a possible contingency that might arise
• exploration problem – the agent learns a "map" of the environment, which it can then use to
solve subsequent problems

Problem characteristics (to choose an appropriate method for a particular problem):

• Is the problem decomposable?
• Can solution steps be ignored or undone?
• Is the universe predictable?
• Is a good solution absolute or relative?
• Is the solution a state or a path?
• What is the role of knowledge?
• Does the task require human interaction?

Kevin Knight and Elaine Rich, Nair B., “Artificial Intelligence (SIE)”, McGraw Hill, 2008. 41
Problem Characteristics

1.Is the problem decomposable?


• Can the problem be broken down to smaller
problems to be solved independently?
• Decomposable problem can be solved easily.
Ex 1: ∫ (x² + 3x + sin²x · cos²x) dx
This can be done by breaking it into three smaller problems and solving each
by applying specific rules. Adding the results, the complete solution is
obtained.
Ex2: blocks world problem

Kevin Knight and Elaine Rich, Nair B., “Artificial Intelligence (SIE)”, McGraw Hill, 2008. 42
Problem Characteristics

2. Can solution steps be ignored or undone?


1. Ignorable problems can be solved using a simple control structure that never
backtracks.
Ex: theorem proving – solution steps can be ignored (e.g., a lemma that turns out not to be
needed is simply left unused).

2. Recoverable problems can be solved using backtracking.

Ex: 8-puzzle – solution steps can be undone (backtracking and rollback).

3. Irrecoverable problems must be solved by recoverable-style methods via planning.

Ex: chess – solution steps cannot be undone (moves cannot be retracted).

Kevin Knight and Elaine Rich, Nair B., “Artificial Intelligence (SIE)”, McGraw Hill, 2008. 43
Problem Characteristics

3.Is the universe predictable?


• Certain outcome – 8-puzzle
Every time we make a move, we know exactly what will happen.
• Uncertain outcome – playing bridge
We cannot know exactly where all the cards are or what the other players will do on their turns.

Kevin Knight and Elaine Rich, Nair B., “Artificial Intelligence (SIE)”, McGraw Hill, 2008. 44
Problem Characteristics
4.Is a good solution absolute or relative?
Eg: Marcus was a man.
Marcus was a Pompeian.
Marcus was born in 40 A.D.
All men are mortal.
All Pompeians died when the volcano erupted in 79 A.D.
No mortal lives longer than 150 years.
It is now 2004 A.D.
Is Marcus alive? (relative)
Different reasoning paths lead to the answer.
It does not matter which path we follow.
The Travelling Salesman Problem (absolute)
We have to try all paths to find the shortest one.
• Any‐path problems can be solved using heuristics that suggest good paths to
explore.
• For best‐path problems, much more exhaustive search will be performed.
Kevin Knight and Elaine Rich, Nair B., “Artificial Intelligence (SIE)”, McGraw Hill, 2008. 45
Problem Characteristics

5.Is the solution a state or a path?


Finding a consistent interpretation
“The bank president ate a dish of pasta salad with the fork”.
– Does “bank” refer to a financial institution or to the side of a river?
– Was the “dish” or the “pasta salad” eaten?
– Does “pasta salad” contain pasta? (By contrast, “dog food” does not
contain “dog”.)
– Which part of the sentence does “with the fork” modify?
• A path‐solution problem can be reformulated as a
state‐solution problem by describing a state as a partial path to
a solution.
• The question is whether that is natural or not.
Kevin Knight and Elaine Rich, Nair B., “Artificial Intelligence (SIE)”, McGraw Hill, 2008. 46
Problem Characteristics

6.What is the role of knowledge?


• Playing chess
Knowledge is important only to constrain the search for a solution.
• Reading a newspaper
Knowledge is required even to be able to recognize a solution.

Kevin Knight and Elaine Rich, Nair B., “Artificial Intelligence (SIE)”, McGraw Hill, 2008. 47
Problem Characteristics

7.Does the task require human‐interaction?


• Solitary problem, in which there is no intermediate
communication and no demand for an explanation of
the reasoning process.
Eg: mathematical theorems
• Conversational problem, in which intermediate
communication is to provide either additional
assistance to the computer or additional information
to the user.
Eg: medical diagnosis

Kevin Knight and Elaine Rich, Nair B., “Artificial Intelligence (SIE)”, McGraw Hill, 2008. 48
Agents
• An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators

• Human agent:
– eyes, ears, and other organs for sensors;
– hands, legs, mouth, and other body parts for actuators

• Robotic agent:
– cameras and infrared range finders for sensors
– various motors for actuators

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 49


Agents and environments

• The agent function maps from percept histories to actions:


f : P* → A

• The agent program runs on the physical architecture to produce f


• agent = architecture + program

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 50


Vacuum-cleaner world

• Percepts: location and contents, e.g., [A,Dirty]


• Actions: Left, Right, Suck, NoOp
• Agent’s function → look-up table
• For many agents this is a very large table
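A simple condition-action rule can replace the huge lookup table. Below is a minimal sketch of such a reflex agent for this two-square world, assuming the percept format (location, status) and the action names Left, Right, Suck and NoOp listed above; the function name is our own.

def reflex_vacuum_agent(percept):
    """Percept is (location, status), e.g. ('A', 'Dirty'); returns an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"
    return "NoOp"

# A few example percepts and the actions the agent chooses.
for p in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(p, "->", reflex_vacuum_agent(p))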

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 51


Rational agents
• Rationality
– Performance measuring success
– Agent’s prior knowledge of the environment
– Actions that agent can perform
– Agent’s percept sequence to date

• Ideal Rational Agent: For each possible percept sequence, an ideal rational agent
should select an action that is expected to maximize its performance measure,
given the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 52


Rationality
• Rationality is different from omniscience
• Percepts may not supply all relevant information
• E.g., in card game, don’t know cards of others.

• Rationality is different from being perfect


• Rationality maximizes expected outcome while perfection maximizes actual
outcome.

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 53


Performance measure
• The criteria that determine how successful an agent is.
• One fixed measure is not suitable for all agents
• Performance measures: Safe, fast, legal, comfortable trip, maximize profits
• Example:
• Agent: Interactive English tutor
• Performance measure: Maximize student's score on test
• Environment: Set of students
• Actuators: Screen display (exercises, suggestions, corrections)
• Sensors: Keyboard

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 54


Flexibility
• Autonomy means “Independence”
• A system is autonomous if its behavior is determined by its own
experience
• An alarm that goes off at a prespecified time is not autonomous
• An alarm that goes off when smoke is sensed is autonomous
• A system without autonomy lacks flexibility

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 55


Autonomy in Agents
• The autonomy of an agent is the extent to which its behaviour is
determined by its own experience, rather than the knowledge of its
designer.
• Extremes
• No autonomy – ignores environment/data
• Complete autonomy – must act randomly/no program
• Example: baby learning to crawl
• Ideal: design agents to have some autonomy
• Possibly become more autonomous with experience

Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach 56


Intelligent Agent

• Agent – an entity that can perceive information and act on that information to achieve the
desired outcome
• Intelligent agent – capable of making decisions and acting in the most logical way
• An agent interacts with its environment through sensors and actuators
Intelligent Agent

• An entity that works without assistance, interprets inputs, senses the environment, makes
choices and acts to achieve its goal
• Decisions are based on a set of rules (if-then)
• Intelligent agents are autonomous in nature and good observers of the environment
Intelligent Agent - Cont'd

• Percept sequence – the complete history of everything the agent has ever perceived
• The agent's choice of an action depends on the percept sequence
• The agent function maps percept sequences to actions
• The agent function is generally implemented by an internal program called the agent program
• The mapping of the agent function is

  f : P* → A

• An agent consists of architecture and program:

  Sensors → Agent Program → Actuators

• The operation of the agent depends on this function
Rationality and Rational Agent

• A rational agent is an agent that behaves logically and does the right thing
• Rationality is a normative concept that stands for acting based on logical reasoning
• A rational agent is expected to select an action that maximizes its performance with respect to
the percept sequence and its built-in knowledge
Rationality and Rational Agent - cont'd

Parameters for consideration:
• Priorities and preferences of the agent
• Environment information available to the agent
• Possible actions for the agent
• Estimated or actual benefits and chances of success of the actions
Agents in AI

• Performance measures are used to track the performance of an agent
• The action selected by the agent changes the state of the environment
• If the sequence of selected actions is desirable, the agent did well
• The performance measure is based on what one actually wants from the environment
Rationality and Performance

• Rationality is judged with respect to the knowledge available to the agent
• Knowledge is derived from the percept sequence to date
• A learning component, which is part of a rational agent, lets it succeed in a variety of environments
• A rational agent should possess the following properties:
  • Ability to gather information
  • Ability to learn from experience
  • Ability to perform knowledge augmentation
  • Autonomy
Flexibility and Intelligent Agent

• Intelligence demands flexibility, and agents need it
• Agents need to be:
  • Responsive (react in a timely fashion)
  • Pro-active (opportunistic, goal-directed behaviour)
  • Social (interaction with other agents)
  • Mobile (to accumulate knowledge)
  • Veracious (truthful)
  • Benevolent (avoiding conflict)
  • Rational (act to maximize expected performance)
  • Learning (more learning increases performance)
• For designing an intelligent agent, the PEAS description (P - Performance, E - Environment,
A - Actuators, S - Sensors) is necessary
Types of Agents

• Agent types are based on the complexity and functionality (expected intelligence) of the agent
• The complexity of the environment contributes to the complexity of the agent architecture
• The types of agents are:
  • Table-driven agents
  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
Table-driven Agents

• The agent is based on a simple lookup table whose entries map each percept sequence to an action
• Drawbacks:
  • Size of the table
  • Time to build the table
  • No autonomy
  • Time required for learning
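A minimal sketch of the idea, with an illustrative table fragment for the two-square vacuum world (the class name, the table entries and the "NoOp" fallback are our own): the agent appends each percept to its history and looks the whole sequence up, which is exactly why table size and construction time become drawbacks.

class TableDrivenAgent:
    """Keeps the full percept sequence and looks it up in a table."""
    def __init__(self, table):
        self.table = table            # maps percept-sequence tuples to actions
        self.percepts = []

    def act(self, percept):
        self.percepts.append(percept)
        # The table must contain an entry for every possible percept sequence,
        # which is why it grows explosively with the sequence length.
        return self.table.get(tuple(self.percepts), "NoOp")

# Illustrative table fragment for the vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
print(agent.act(("A", "Clean")))      # Right
print(agent.act(("B", "Dirty")))      # Suck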
Simple Reflex Agents

• The agent acts only on the current percept
• The action is decided based on condition-action rules
• Such agents have limited intelligence
• They cannot handle complex scenarios
• Simple reflex agents may get stuck in infinite loops if some sensors are unavailable
• Randomised simple reflex agents can solve this problem (to some extent)
• Randomization may impact rational behaviour

Model-based Reflex Agents

• Percept history is maintained (using internal state)
• The internal state reflects some of the unobserved aspects of the current state
• The knowledge incorporated to determine the unobserved aspects is called the model
• The model keeps track of the past sequence of percepts to determine the unseen parts of the
world and the impact of actions on the environment
• Model-based reflex agents have wider applications compared to simple reflex agents

Goal-based Agents

• The goal of decision-making is considered
• Goal-based action is complicated and needs a sequence of actions
• Decision-making checks the impact of actions with reference to the goal
• Less efficient but more flexible
• More time is required for the reasoning process
• Resembles model-based agents
• Goal-based agents:
  • Require search and planning to reach the goal
  • Need to be more flexible, as knowledge supports action

Utility-Based Agents

• Action selection is described by the utility function
• The utility function most often maps a state to a real number representing a degree of happiness
• A complete specification of the utility function helps in rational decision-making
• Utility-based agents help in decision-making in the following cases:
  • When goals conflict and only some of them can be achieved with respect to some performance
measure
  • When goals are uncertain, the utility function weighs the likelihood of success against the
importance of each goal
  • Problems related to route selection and modification

Utility-based agents are useful when:

• Not only goals, but the means are also important
• Some series of actions are safer, quicker or more reliable than others
• Happy and unhappy states need to be distinguished
Learning Agent

• A learning agent responds to new or unknown scenarios in a logical way
• Agents learn from experience
• Two important elements in learning agents:
  • Performance element – selection of external actions
  • Learning element – responsible for making improvements
• Feedback is provided for learning, which drives improvements in the form of a penalty or reward
• The problem generator (task generator) suggests actions that generate new experiences
• Learning is the process of modifying each component of the agent to bring it closer to the
expected behaviour

Constraint Satisfaction Problems (CSP)

• The search space is constrained by a set of conditions and dependencies
• The class of problems where the search space is constrained in this way is called CSP
• For example, the timetable-scheduling problem for lecturers
  • Constraint: two lecturers cannot be assigned to the same class at the same time
• To solve a CSP, the problem is decomposed and its structure analysed
• Constraints are typically mathematical or logical relationships
CSP

• Any problem in the world can be mathematically represented as a CSP
• A purely hypothetical problem has no constraints; a constraint restricts movement, arrangement,
possibilities and solutions
• Example: only currency notes of 2, 5 and 10 rupees are available, and we need to give a certain
amount of money, say Rs. 111, to a salesman, with the total number of notes between 40 and 50
• Expressed as: let x1, x2, x3 be the numbers of notes of denominations c1 = 2, c2 = 5, c3 = 10,
and n = x1 + x2 + x3 the total number of notes; then

  40 < n < 50   and   c1·x1 + c2·x2 + c3·x3 = 111
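A brute-force check of this small CSP is easy to write. The sketch below (our own code; variable names follow the formulation above) enumerates the numbers of 2, 5 and 10 rupee notes and keeps the combinations that satisfy both the sum constraint and the bound on the total number of notes.

AMOUNT = 111
# denominations c1, c2, c3 from the slide: 2, 5 and 10 rupees

solutions = []
for x1 in range(AMOUNT // 2 + 1):
    for x2 in range(AMOUNT // 5 + 1):
        rest = AMOUNT - 2 * x1 - 5 * x2
        if rest < 0 or rest % 10:
            continue                 # remaining amount must be payable in 10s
        x3 = rest // 10
        n = x1 + x2 + x3
        if 40 < n < 50:              # constraint on the total number of notes
            solutions.append((x1, x2, x3, n))

for x1, x2, x3, n in solutions:
    print(f"{x1} x Rs.2 + {x2} x Rs.5 + {x3} x Rs.10 = {AMOUNT} using {n} notes")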
CSP

• Types of constraints:
  • Unary constraints – single variable
  • Binary constraints – two variables
  • Higher-order constraints – more than two variables
• A CSP can be represented as a search problem
• The initial state is the empty assignment, while the successor function assigns a non-conflicting
value to an unassigned variable
• The goal test checks whether the current assignment is complete, and the path cost is the cost of
the path to reach the goal state
• A CSP solution is a final, complete assignment that violates no constraint
Crypt-arithmetic Puzzles

• Cryptarithmetic puzzles can also be represented as CSPs
• Example:
    MIKE +
    JACK =
    JOHN
• Replace every letter in the puzzle with a single digit (the same digit must not be used for two
different letters)
• The domain is { 0, 1, ..., 9 }
• Often treated as a ten-variable constraint problem (one variable per distinct letter), where the
constraints are:
  • All the variables must have different values
  • The sum must work out (see the brute-force sketch below)
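A brute-force CSP solver for this kind of puzzle can simply try digit permutations. The sketch below is our own code, not from the textbook: the helper names are ours, the all-different constraint is enforced by the permutation itself, and the no-leading-zero rule is a common convention we add that is not stated on the slide. It reports any assignments it finds, or none if the puzzle has no solution under these rules.

from itertools import permutations

def word_value(word, assignment):
    return int("".join(str(assignment[ch]) for ch in word))

def solve(w1, w2, total):
    letters = sorted(set(w1 + w2 + total))
    assert len(letters) <= 10, "more than ten distinct letters"
    leading = {w1[0], w2[0], total[0]}          # no leading zeros (common convention)
    found = []
    # Exhaustive search; with ten distinct letters this can take a little while.
    for digits in permutations(range(10), len(letters)):
        assignment = dict(zip(letters, digits)) # all-different by construction
        if any(assignment[ch] == 0 for ch in leading):
            continue
        if word_value(w1, assignment) + word_value(w2, assignment) == word_value(total, assignment):
            found.append(assignment)
    return found

solutions = solve("MIKE", "JACK", "JOHN")
print(f"{len(solutions)} assignment(s) found")
for s in solutions[:3]:
    print(s)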
Cryptarithmetic Puzzles

• The sum constraint is:
  M*1000 + I*100 + K*10 + E + J*1000 + A*100 + C*10 + K = J*1000 + O*100 + H*10 + N
• The constraint domain is represented by a five-tuple,
  D = { var, f, O, dv, rg }
  where var is the set of variables, f the set of functions, O the set of legitimate operators to be
used, dv the domain of the variables and rg the range of the functions in the constraint
• A constraint without a conjunction is referred to as a primitive constraint (e.g., x < 9)
• A constraint with a conjunction is called a non-primitive or generic constraint (e.g., x < 9 and x > 2)
Room Coloring Problem

• Let K stand for Kitchen, D for Dining Room, H for Hall, B2 and B3 for bedrooms 2 and 3,
MB1 for the master bedroom, SR for the Store Room, GR for the Guest Room and Lib for the Library
• Constraints:
  • Not all bedrooms may be colored red; only one can be
  • No two adjacent rooms can have the same color
  • The colors available are red, blue, green and violet
  • The kitchen should not be colored green
  • It is recommended to color the kitchen blue
  • The dining room should not be violet
Room Coloring Problem

[Figure: layout of the rooms and their adjacencies]

Representation as a search problem

• Lines – adjacent rooms
• Dotted lines – no adjacency

Room Coloring Problem

• Soft constraints are cost-oriented or preferred choices (e.g., the recommendation to color the
kitchen blue)
• Not all paths in the search tree can be accepted, because some violate the constraints

Backtracking Search for CSP

• Assigning a value to an additional variable, within the constraints, generates a legal state
(leading to a successor state in the search tree)
• A branch backtracks when no options are available at its current node
Backtracking Search for Room Coloring

[Figure: backtracking search for the room coloring problem]

Algorithm for Backtracking


Pick initial state
R = set of all possible states
Select a state with a variable assignment
Add it to the search space
Check constraint satisfaction
If satisfied
    Continue
Else
    Go to last decision point (DP)
    Prune the search sub-space from DP
    Continue with next decision option
If state = goal state
    Return solution
Else
    Continue
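The algorithm above maps naturally onto a recursive sketch. In the Python below, the constraint checks follow the room coloring slide, but the adjacency map ADJ is only an illustrative guess (the real floor plan is in the slide's figure), and the soft "kitchen should be blue" preference is ignored; rooms are assigned one at a time and the search backtracks as soon as a constraint is violated.

COLORS = ["R", "B", "G", "V"]            # red, blue, green, violet
ROOMS = ["K", "D", "H", "B2", "B3", "MB1", "SR", "GR", "Lib"]

# Illustrative adjacency only -- the actual floor plan is in the slide's figure.
ADJ = {
    "K": {"D", "H"}, "D": {"K", "H", "B2"}, "H": {"K", "D", "MB1", "GR", "Lib"},
    "B2": {"D", "B3"}, "B3": {"B2", "MB1"}, "MB1": {"H", "B3", "SR"},
    "SR": {"MB1"}, "GR": {"H"}, "Lib": {"H"},
}

def consistent(room, color, assignment):
    if room == "K" and color == "G":                  # kitchen not green
        return False
    if room == "D" and color == "V":                  # dining room not violet
        return False
    if color == "R" and room in ("B2", "B3", "MB1"):  # at most one red bedroom
        if any(assignment.get(b) == "R" for b in ("B2", "B3", "MB1")):
            return False
    return all(assignment.get(nb) != color for nb in ADJ[room])

def backtrack(assignment):
    if len(assignment) == len(ROOMS):
        return assignment                             # complete assignment = solution
    room = next(r for r in ROOMS if r not in assignment)
    for color in COLORS:
        if consistent(room, color, assignment):
            assignment[room] = color
            result = backtrack(assignment)
            if result:
                return result
            del assignment[room]                      # undo and try the next color
    return None                                       # triggers backtracking above

print(backtrack({}))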
Backtracking Search for CSP

• Backtracking allows the search to return to a previous decision-making node and eliminate the
search space that is invalid with respect to the constraints
• Heuristics play a very important role here
• If we can determine which variable should be assigned next, backtracking can be improved
• Heuristics help in deciding the initial state as well as subsequently selected states
• Selecting the variable with the minimum number of possible values helps to simplify the search
• This is called the Minimum Remaining Values (MRV) heuristic, or the most-constrained-variable
heuristic
• It also avoids wasted search that keeps failing on the same variable (which would make the
backtracking ineffective)
Heuristics

• MRV does not help with the initial selection, when every variable still has its full domain
• The degree heuristic selects the node involved in the maximum number of constraints on other
unassigned variables
• The degree heuristic thereby attempts to reduce the branching factor on future choices
• These heuristics consider which variable to select, not which value to give it; the order in which
the values of a particular variable are tried is tackled by the least-constraining-value heuristic

Forward Checking

• To understand forward checking, consider the 4-queens problem
• If placing queen x on the board leaves no consistent position for queen x+1, forward checking
ensures that queen x is not placed at the selected position and a new position is tried

Forward Checking

• When Q1 and Q2 are placed in rows 1 and 2 in the left sub-tree, the search is halted, since no
positions are left for Q3 and Q4
• Forward checking keeps track of the moves that remain available for the unassigned variables
• The search is terminated when no legal move is available for some unassigned variable
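A compact forward-checking sketch for the 4-queens problem (our own code; columns are assigned left to right and each unassigned column keeps a domain of still-legal rows): placing a queen prunes the neighbouring domains, and a branch is abandoned as soon as some domain becomes empty.

N = 4

def prune(domains, col, row):
    """Forward check: remove rows attacked by the queen placed at (col, row)."""
    new = {}
    for c, rows in domains.items():
        if c == col:
            continue
        keep = {r for r in rows if r != row and abs(r - row) != abs(c - col)}
        if not keep:
            return None              # empty domain -> this branch cannot succeed
        new[c] = keep
    return new

def solve(assignment, domains):
    if len(assignment) == N:
        return assignment
    col = min(domains)               # next unassigned column (left to right)
    for row in sorted(domains[col]):
        pruned = prune(domains, col, row)
        if pruned is not None:       # only continue if no domain was wiped out
            result = solve({**assignment, col: row}, pruned)
            if result:
                return result
    return None

start_domains = {c: set(range(N)) for c in range(N)}
print(solve({}, start_domains))      # e.g. {0: 1, 1: 3, 2: 0, 3: 2}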
Forward Checking

• For the room coloring problem, considering all the constraints, the mapping can proceed as follows:
• First, B2 is assigned Red (R); accordingly, R is deleted from the domains of the adjacent nodes
• The Kitchen is assigned Blue (B), so B is deleted from the domains of the adjacent nodes
• Furthermore, once MB1 is assigned Green, no color is left for D
Constraint Propagation

• Forward checking propagates information about each decision, but it gives no early detection of
failures that will occur further ahead
• The constraints themselves, rather than just the information about assignments, should be
propagated
Constraint Propagation

• Step 2 shows consistency being propagated from D to B2
• Since D can take only the value G, and B2 is adjacent to it, the arc is drawn
• It is written as D → B2, or mathematically:

  A → B is consistent ⟺ for every legal value a ∈ A there exists a non-conflicting value b ∈ B

• Failure detection can take place at an early stage
Constraint Propagation

• The algorithm for propagation during an assignment is:
  • Let X be the variable that is being assigned at a given instance
  • X takes some value from its domain D
  • For each unassigned variable X′ that is adjacent to X:
    1. Perform a forward check (remove from the domain of X′ the values that conflict with the
current assignment)
    2. For every other variable X″ that is adjacent or connected to X′:
       i. Remove from the domain of X″ the values that can no longer be taken by the remaining
unassigned variables
       ii. Repeat step 2 until no more values can be removed or discarded
• Inconsistency is detected and constraints are propagated in step (2)
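The propagation step described above is essentially arc consistency. Below is a generic, simplified sketch in the AC-3 style (our own code; the domains and neighbors structures and the consistent predicate are assumptions, not the textbook's notation) that repeatedly revises domains until nothing more can be removed; an empty domain signals failure early.

from collections import deque

def ac3(domains, neighbors, consistent):
    """Make every arc X -> Y consistent: for each value of X there must be
    some non-conflicting value of Y. Returns False if a domain empties."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        removed = {vx for vx in domains[x]
                   if not any(consistent(x, vx, y, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False                    # early failure detection
            for z in neighbors[x] - {y}:
                queue.append((z, x))            # re-check arcs into x
    return True

# Tiny coloring example: A-B adjacent, B-C adjacent; "consistent" means different colors.
domains = {"A": {"R"}, "B": {"R", "G"}, "C": {"G"}}
neighbors = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
ok = ac3(domains, neighbors, lambda x, vx, y, vy: vx != vy)
print(ok, domains)   # False -- B's domain empties, so failure is detected early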
Intelligent Backtracking

• A conflict set is maintained for each variable, using forward checking
• Considering the 4-queens problem, a conflict needs to be detected by the use of the conflict set
so that a backtrack can occur
• Backtracking with respect to the conflict set is called conflict-directed backjumping
• The backjumping approach cannot, however, correct mistakes committed earlier in other branches
References
• Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems, 1st ed., PHI Learning, 2015
• Stuart J. Russell, Peter Norvig, Artificial Intelligence – A Modern Approach, 3rd ed., Pearson Education, 2016
• Kevin Knight, Elaine Rich, Nair B., Artificial Intelligence (SIE), McGraw Hill, 2008

57
