
AI Unit 1 Notes


UNIT 1

INTRODUCTION

Introduction – Definition – Future of Artificial Intelligence – Characteristics
of Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach
to Typical AI problems.

1.1 INTRODUCTION TO AI
Artificial Intelligence (AI) is the study of how to make computers do things which, at the
moment, people do better. Artificial intelligence (AI) is also defined as the simulation of human
intelligence processes by machines, especially computer systems.

Merriam-Webster defines artificial intelligence this way:


1) A branch of computer science dealing with the simulation of intelligent behavior
in computers.
2) The capability of a machine to imitate intelligent human behavior.

1.2 DEFINITION OF AI
AI definitions can be categorized into four groups, as follows:

Systems that think like humans     Systems that think rationally
Systems that act like humans       Systems that act rationally

Acting Humanly: The Turing Test


A Turing Test is a method of inquiry in artificial intelligence (AI) for
determining whether or not a computer is capable of thinking like a human
being. The test is named after Alan Turing, an English computer scientist,
cryptanalyst, mathematician and theoretical biologist. Turing proposed that a
computer can be said to possess artificial intelligence if it can mimic human
responses under specific conditions. Figure 1 depicts the Turing Test.
1.2 Artificial Intelligence

Figure 1: The Turing Test

Thinking Humanly: The Cognitive Modelling Approach


For a program to think like a human, we must have some way of determining how
humans think. There are two ways to get at the actual workings of the human
mind:
• Through Introspection (Trying to catch one’s own thoughts)
• Through Psychological Experiments.

Thinking Rationally: The Laws of Thought Approach


This approach focuses on "right thinking", that is, irrefutable reasoning processes.

Acting Rationally: The Rational Agent Approach


An agent is something that perceives and acts. AI is looked upon as the study
of the construction of rational agents. Forming the right inferences is
sometimes part of being a rational agent. One way to act rationally is to
reason logically to the conclusion that might achieve the goal, and then to
act on that conclusion.

1.3 FUTURE OF ARTIFICIAL INTELLIGENCE


Five ways AI might affect us in the future:
1) Automated Transportation – Driverless cars (present); driverless buses and
trains (future).
2) Cyborg Technology – the possibility of making our brains communicate with
robotic parts attached to our bodies.
3) Taking over dangerous jobs – using AI-equipped robots for dangerous or
toxic work such as welding, or for identifying and defusing bombs on
battlefields.
4) There will be more smart cities, as vehicles, phones, and home appliances
will be run by AI.
5) 'Home robots' will help elderly people with their day-to-day work.

Advantages of AI
1) Minimum Fault
2) Everyday utilization
3) Digital Assistant
4) Time saver
5) Efficient Work
Disadvantages of AI
1) Dependent on AI
2) Smarter than humans
3) Reduction of Decision making capacity

What is an Agent?
An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.

An agent program runs in cycles of:


• Perceive,
• Think,
• Act.
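The perceive–think–act cycle can be sketched as a simple loop. The environment and agent below are hypothetical toys invented for illustration, not part of any standard library:

```python
class ToyEnvironment:
    """A made-up environment: a single counter the agent can increment."""
    def __init__(self):
        self.state = 0

    def percept(self):
        return self.state

    def apply(self, action):
        if action == "increment":
            self.state += 1

class CounterAgent:
    """Thinks: keep incrementing until the counter reaches 3."""
    def think(self, percept):
        return "increment" if percept < 3 else "noop"

def run(agent, env, steps):
    for _ in range(steps):
        percept = env.percept()        # Perceive
        action = agent.think(percept)  # Think
        env.apply(action)              # Act
    return env.state

result = run(CounterAgent(), ToyEnvironment(), 5)  # counter stops at 3
```

Each agent design discussed below can be seen as a different implementation of the "think" step.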

Types of Agents:
• Human agent: – Sensors: eyes, ears, and other organs. – Actuators: hands, legs,
mouth, and other body parts.
• Robotic agent: – Sensors: Cameras and infrared range finders. – Actuators: various
motors.
• Examples of agent – Thermostat – Cell phone – Vacuum cleaner – Robot – Alexa
Echo – Self-driving car – Human – etc.

Intelligent Agents
An agent interacts with its environment: it perceives (senses) the state of
the environment through its sensors and affects that state through its
actuators. An intelligent agent is depicted in Figure 2.

Intelligent agents may also learn or use knowledge to achieve their goals.

They may be very simple or very complex.

A reflex machine, such as a thermostat, is considered an example of an intelligent agent.

Figure 2: An agent interacting with its environment through sensors and effectors
1.4 AGENT’S BASIC CHARACTERISTICS
• Situatedness
The agent receives some form of sensory input from its environment, and it
performs some action that changes its environment in some way. Examples of
environments: the physical world and the Internet.
• Autonomy
The agent can act without direct intervention by humans or other agents and that it
has control over its own actions and internal state.
• Adaptivity
The agent is capable of:
◦ Reacting flexibly to changes in its environment;
◦ Taking goal-directed initiative (i.e., being pro-active), when appropriate;
◦ Learning from its own experience, its environment, and interactions with
others.
• Sociability
The agent is capable of interacting in a peer-to-peer manner with other agents or
humans.

1.4.1 Internal characteristics are


Learning/reasoning: an agent has the ability to learn from previous experience and to
successively adapt its own behavior to the environment.

Reactivity: an agent must be capable of reacting appropriately to influences or


information from its environment.

Autonomy: an agent must have control over both its actions and its internal
state. The degree of the agent's autonomy can be specified; intervention from
the user may be needed only for important decisions.

Goal-oriented: an agent has well-defined goals and gradually influences its
environment so as to achieve its own goals.

1.4.2 External characteristics are


Communication: an agent often requires interaction with its environment to
fulfill its tasks – with humans, other agents, and arbitrary information
sources.

Cooperation: cooperation of several agents permits faster and better solutions for
complex tasks that exceed the capabilities of a single agent.

Mobility: an agent may navigate within electronic communication networks.


Character: like a human, an agent may demonstrate an external behavior with
as many human characteristics as possible.

There are four main aspects that need to be taken into consideration when designing an
intelligent agent.
• Percepts - This is the information that the agent receives
• Actions - This is what the agent needs to do or can do to achieve its objectives.
• Goals - This is the factor that the agent is trying to achieve
• Environment - The final aspect is the environment in which the agent will
be working. The environment is probably the most important aspect to be
considered, as it affects the outcome of the percepts, actions and goals.

Examples of Agents
Agent Type            Percepts           Actions            Goals              Environment
Bin-Picking Robot     Images             Grasp objects;     Parts in           Conveyor belt
                                         sort into bins     correct bins
Medical Diagnosis     Patient            Tests and          Healthy patient    Patient & hospital
                      symptoms, tests    treatments
Excite's Jango        Web pages          Navigate web,      Find best price    Internet
product finder                           gather relevant    for a product
                                         products
Webcrawler Softbot    Web pages          Follow links,      Collect info       Internet
                                         pattern matching   on a subject
Financial             Financial data     Gather data on     Pick stocks to     Stock market,
forecasting software                     companies          buy & sell         company reports

Area of Impact of Intelligent Agents


1.5 TYPICAL INTELLIGENT AGENT
Rational Agent
A rational agent is an agent that has clear preferences, models uncertainty via expected
values of variables or functions of variables, and always chooses to perform the action with
the optimal expected outcome for itself from among all feasible actions. For each possible percept
sequence, a rational agent should select an action that is expected to maximize its Performance
Measures.

A rational agent can be anything that makes decisions, typically a person, firm, machine,
or software.

Rational agents in AI are closely related to intelligent agents, autonomous software


programs that display intelligence. An ideal rational agent is one that is
capable of taking the expected actions to maximize its performance measure.

A rational agent depends on the following four things:


• Performance measure, which defines the degree of success.
• Percept sequence – The knowledge that the agent has acquired so far.
• Knowledge about the environment, which the agent interacts with.
• Actions that the agents can perform.

What is Performance Measure?


An objective criterion for the success of an agent's behavior, e.g. the
Vacuum Cleaner World:
Percepts: Location and Status
Actions: Left, Right, Suck, NoOp

function Vacuum-Agent([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
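As a sketch, the Vacuum-Agent pseudocode above translates directly into Python; the string encodings of percepts and actions are assumptions made for illustration:

```python
def vacuum_agent(location, status):
    """Condition-action rules mirroring the Vacuum-Agent pseudocode."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
    return "NoOp"

action = vacuum_agent("A", "Dirty")  # a dirty square is always sucked first
```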
Structure of Agent
Agent’s structure can be viewed as
• Agent = Architecture + Agent Program
• Architecture = the machinery that an agent executes on.
• Agent Program = an implementation of an agent function.
Agent Program is a function that implements the agent mapping from percepts to actions.
There exists a variety of basic agent program designs, reflecting the kind of information made
explicit and used in the decision process. The designs vary in efficiency, compactness, and
flexibility. The appropriate design of the agent program depends on the nature of the environment.
Architecture is the computing device used to run the agent program. To
perform the mapping task, there are four types of agent programs:
1) Simple reflex agents
2) Model-based reflex agents
3) Goal-based agents
4) Utility-based agents.

(1) Simple Reflex Agent


The action is chosen based on the current perception of the world, not on
past percepts. In other words: if a condition is perceived, then act
(condition-action rule). Consider a Mars Lander as an example: it collects a
rock as soon as it senses one, without considering past percepts. When it
senses another rock, it collects it again, without knowing whether a rock of
this type has already been collected.

(2) Model-Based Reflex Agents (Agents that keep track of the world)
In this type of agent, there is an internal state which maintains important
information from previous percepts. This internal state helps the agent
choose an action. Consider the Mars Lander again: it senses a rock and checks
its previous state before performing the collect action, to see whether this
type of rock has already been collected.

Condition-action rule ---> if rock, check previous state, then collect
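A minimal sketch of the model-based Mars Lander, assuming hypothetical rock-type labels as percepts; the internal state records which rock types have already been collected:

```python
class MarsLander:
    """Model-based reflex agent: keeps an internal state across percepts."""
    def __init__(self):
        self.collected = set()  # internal state: rock types collected so far

    def act(self, rock_type):
        # condition-action rule: if rock and not previously collected, collect
        if rock_type not in self.collected:
            self.collected.add(rock_type)
            return "collect"
        return "ignore"

lander = MarsLander()
actions = [lander.act(r) for r in ["basalt", "shale", "basalt"]]
```

A simple reflex agent would return "collect" all three times; the internal state is what lets the agent ignore the duplicate rock type.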
(3) Goal-Based Agents
An agent knows the description of the current state and also needs some sort
of goal information that describes situations that are desirable. The action
matches with the current state, and its selection depends on the goal state.
The goal-based agent is also more flexible when there is more than one
destination: after one destination is reached and a new destination is
specified, the goal-based agent comes up with a new behavior. Search and
Planning are the subfields of AI devoted to finding action sequences that
achieve the agent's goals.
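A goal-based agent can be sketched with one-step lookahead on a toy grid; the grid, the goal position, and the move names are assumptions made for illustration:

```python
GOAL = (2, 0)  # hypothetical goal position on a grid

def predict(state, action):
    """Transition model: predict the next state for each move."""
    x, y = state
    moves = {"up": (x, y + 1), "down": (x, y - 1),
             "left": (x - 1, y), "right": (x + 1, y)}
    return moves[action]

def goal_based_agent(state, goal=GOAL):
    """Pick the action whose predicted state is closest to the goal."""
    def distance_after(action):
        nx, ny = predict(state, action)
        return abs(nx - goal[0]) + abs(ny - goal[1])
    return min(["up", "down", "left", "right"], key=distance_after)

action = goal_based_agent((0, 0))  # moving right approaches the goal (2, 0)
```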

(4) Utility-Based Agents (Utility refers to the quality of being useful)


A utility function maps each state after each action to a real number representing how
efficiently each action achieves the goal. This is useful when we either have many actions all
solving the same goal or when we have many goals that can be satisfied and we need to choose an
action to perform.

Consider the Mars Lander on the surface of Mars with an obstacle in its way.
With a goal-based agent it is uncertain which path will be taken, and some
paths are clearly less efficient than others; with a utility-based agent, the
best path is the one with the best output from the utility function, and that
path will be chosen.
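The difference can be sketched as follows: where a goal-based agent would accept any path around the obstacle, a utility-based agent ranks them. The path names, their lengths, and the choice of utility = negative path length are all assumptions for illustration:

```python
# Hypothetical candidate paths around the obstacle and their lengths.
path_lengths = {"around_left": 12, "around_right": 7, "over_top": 9}

def utility(path):
    """Map each outcome to a real number; here shorter paths score higher."""
    return -path_lengths[path]

def choose_path(paths):
    # a utility-based agent picks the action with the highest utility
    return max(paths, key=utility)

best = choose_path(path_lengths)  # the shortest path wins
```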
Learning Agents
Learning allows the agent to operate in initially unknown environments and to
become more competent than its initial knowledge alone would allow.

A learning agent can be divided into four conceptual components.


• Learning element – This is responsible for making improvements. It uses the
feedback from the critic on how the agent is doing and determines how the
performance element should be modified to do better in the future.
• Performance element – responsible for selecting external actions; it is
equivalent to the agent proper: it takes in percepts and decides on actions.
• Critic – It tells the learning element how well the agent is doing with respect to a
fixed performance standard.
• Problem generator – It is responsible for suggesting actions that will lead to new and
informative experiences.

1.6 PROBLEM DEFINITION AND PROBLEM FORMULATION


1.6.1 Problem Definition
Four important aspects of solving a problem:
(i) Define the problem precisely; the definition must include the
specification of the initial situations and also the final situations
that constitute acceptable solutions to the problem.

(ii) Analyze the problem; a few important features can have an immense
impact on the appropriateness of various techniques for solving the
problem.

(iii) Isolate and represent the knowledge to be used to solve the problem.

(iv) Choose the best problem-solving technique and apply it to the problem.
Define the Problem as State Space Search
Consider the problem of "Playing Chess". To build a program that could play
chess, we have to specify the starting position of the chess board, the rules
that define legal moves, and the board positions that represent a win. The
goal of winning the game, if possible, must be made explicit.

The starting position can be described by an 8 x 8 array in which each
element Square(x, y) (x varying from 1 to 8 and y varying from 1 to 8)
describes the board position of an appropriate chess piece. The goal is any
board position in which the opponent does not have a legal move and his or
her king is under attack. The legal moves provide the way of getting from the
initial state to the final state.

The legal moves can be described as a set of rules consisting of two parts: A left side that
gives the current position and the right side that describes the change to be made to the board
position. An example is shown in the following figure.

Current Position:
Pawn at Square(5, 2), AND Square(5, 3) is empty, AND Square(5, 4) is empty.

Changing Board Position:
Move pawn from Square(5, 2) to Square(5, 4).
The current position of a coin on the board is its STATE and the set of all possible
STATES is STATE SPACE. One or more states where the problem terminates is FINAL
STATE or GOAL STATE. The state space representation forms the basis of most of the AI
methods.
It allows for a formal definition of the problem as the need to convert some
given situation into some desired situation using a set of permissible
operations. It permits the problem to be solved with the help of known
techniques and control strategies, moving through the problem space until a
goal state is found. Search is the basic process underlying any problem
solving.

1.6.2 Problem Formulation


Problem formulation involves deciding what actions and states to consider, when the
description about the goal is provided. It is composed of:
• Initial State - start state
• Possible actions that can be taken
• Transition model – describes what each action does
• Goal test – checks whether current state is goal state
• Path cost – cost function used to determine the cost of each path.
◦ The initial state, actions and the transition model constitutes state space of the
problem - the set of all states reachable by any sequence of actions.
◦ A path in the state space is a sequence of states connected by a sequence of
actions.
◦ The solution to the given problem is defined as the sequence of actions from the
initial state to the goal states.
◦ The quality of the solution is measured by the cost function of the
path, and an optimal solution is the one with the lowest path cost among
all the solutions.
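The five components above can be sketched as a small Python class, instantiated here for the two-room vacuum world; the state encoding and names are assumptions made for illustration:

```python
class Problem:
    """Sketch of the five problem-formulation components."""
    def __init__(self, initial, actions, transition, goal_test, step_cost):
        self.initial = initial        # initial state
        self.actions = actions        # actions available in a state
        self.transition = transition  # transition model: (state, action) -> state
        self.goal_test = goal_test    # is this state a goal state?
        self.step_cost = step_cost    # cost of taking an action in a state

def vacuum_transition(state, action):
    # state = (agent location, frozenset of dirty rooms)
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})
    return state

problem = Problem(
    initial=("A", frozenset({"A", "B"})),
    actions=lambda state: ["Left", "Right", "Suck"],
    transition=vacuum_transition,
    goal_test=lambda state: not state[1],   # goal: no dirty rooms left
    step_cost=lambda state, action: 1,      # every action costs 1
)

# The sequence Suck, Right, Suck cleans both rooms from the initial state.
state = problem.initial
for action in ["Suck", "Right", "Suck"]:
    state = problem.transition(state, action)
goal_reached = problem.goal_test(state)
```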
Problem Types
Not all problems are created equal
• single-state problem
• multiple-state problem
• contingency problem
• exploration problem

Single-state problem
• Exact prediction is possible
• state - is known exactly after any sequence of actions
• accessibility - all essential information can be obtained through sensors
• consequences of actions are known to the agent
• goal - for each known initial state, there is a unique goal state that is guaranteed to be
reachable via an action sequence
• simplest case, but severely restricted
Example: Vacuum world.
Limitations: Can't deal with incomplete accessibility, incomplete knowledge
about consequences, changes in the world, or indeterminism in the world or in
actions.

Multiple-state problem
• semi-exact prediction is possible
• state is not known exactly, but limited to a set of possible states after each action
• accessibility- not all essential information can be obtained through sensors
• reasoning can be used to determine the set of possible states
• consequences of actions are not always or completely known to the agent; actions or
the environment might exhibit randomness
• goal - due to ignorance, there may be no fixed action sequence that leads to the goal
• less restricted, but more complex
Example: Vacuum world, but the agent has no sensors. The action sequence
Right, Suck, Left, Suck is guaranteed to reach the goal state from any
initial state.
Limitations: Can’t deal with changes in the world during execution (“contingencies”)
Contingency Problem
• Exact prediction is impossible
• state unknown in advance, may depend on the outcome of actions and changes in the
environment
• Accessibility of the world - some essential information may be obtained
through sensors only at execution time
• Consequences of actions may not be known at planning time
• Goal - instead of single action sequences, there are trees of actions; a
contingency is a branching point in the tree of actions
• Agent design differs from the previous two cases: the agent must act on
incomplete plans
• Search and execution phases are interleaved
Example: Vacuum world. The effect of a Suck action is random. There is no
action sequence that can be calculated at planning time and guaranteed to
reach the goal state.
Limitations: Can't deal with situations in which the environment or the
effects of actions are unknown

Exploration Problem
• Effects of actions are unknown
• State - the set of possible states may be unknown
• Accessibility - some essential information may be obtained through sensors
only at execution time
• Consequences of actions may not be known at planning time
• Goal - can't be completely formulated in advance, because states and
consequences may not be known at planning time
• Discovery - what states exist
• Experimentation - what are the outcomes of actions
• Learning - remember and evaluate experiments
• Agent design differs from the previous cases: the agent must experiment
• Search - requires search in the real world, not in an abstract model
• Realistic problems, very hard
Example - Vacuum cleaner

Consider a Vacuum cleaner world


Imagine that our intelligent agent is a robot vacuum cleaner. Let's suppose that the
world has just two rooms. The robot can be in either room and there can be dirt in zero,
one, or two rooms.

Goal formulation: intuitively, we want all the dirt cleaned up. Formally, the goal is
{state 7, state 8 }.

Problem formulation (Actions): Left, Right, Suck, NoOp

State Space Graph:

Measuring performance
With any intelligent agent, we want it to find a (good) solution and not spend forever
doing it. The interesting quantities are, therefore, the search cost--how long the agent takes to
come up with the solution to the problem, and the path cost--how expensive the actions in the
solution are. The total cost of the solution is the sum of the above two quantities.
Example - Water Jug Problem:
Problem:
You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any
measuring marks on it. There is a pump that can be used to fill the jugs with
water. How can you get exactly 2 gallons of water into the 4-gallon jug?

Solution:
The state space for this problem can be described as the set of ordered pairs
of integers (x, y), where x represents the quantity of water in the 4-gallon
jug (x = 0, 1, 2, 3, 4) and y represents the quantity of water in the
3-gallon jug (y = 0, 1, 2, 3).
Start State: (0,0)
Goal State: (2,0)

Generate production rules for the water jug problem

Production Rules:
Rule  State                          Process
1     (X,Y | X<4) -> (4,Y)           {Fill 4-gallon jug}
2     (X,Y | Y<3) -> (X,3)           {Fill 3-gallon jug}
3     (X,Y | X>0) -> (0,Y)           {Empty 4-gallon jug}
4     (X,Y | Y>0) -> (X,0)           {Empty 3-gallon jug}
5     (X,Y | X+Y>=4 ^ Y>0) -> (4,Y-(4-X))
                                     {Pour water from 3-gallon jug into
                                      4-gallon jug until it is full}
6     (X,Y | X+Y>=3 ^ X>0) -> (X-(3-Y),3)
                                     {Pour water from 4-gallon jug into
                                      3-gallon jug until it is full}
7     (X,Y | X+Y<=4 ^ Y>0) -> (X+Y,0)
                                     {Pour all water from 3-gallon jug into
                                      4-gallon jug}
8     (X,Y | X+Y<=3 ^ X>0) -> (0,X+Y)
                                     {Pour all water from 4-gallon jug into
                                      3-gallon jug}
9     (0,2) -> (2,0)                 {Pour the 2 gallons from the 3-gallon jug
                                      into the 4-gallon jug}
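The production rules above can be checked with a short breadth-first search over the (x, y) state space, where x is the 4-gallon jug and y the 3-gallon jug; the hand trace that follows walks through one particular rule sequence:

```python
from collections import deque

def successors(x, y):
    """Apply every production rule to state (x, y)."""
    return {
        (4, y), (x, 3),   # fill a jug (rules 1-2)
        (0, y), (x, 0),   # empty a jug (rules 3-4)
        (min(4, x + y), max(0, x + y - 4)),  # pour 3-gal into 4-gal (rules 5, 7)
        (max(0, x + y - 3), min(3, x + y)),  # pour 4-gal into 3-gal (rules 6, 8)
    }

def solve(start=(0, 0), goal_x=2):
    """Breadth-first search for a state with 2 gallons in the 4-gallon jug."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == goal_x:
            return path
        for nxt in successors(x, y):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

path = solve()  # a shortest sequence of states from (0,0) to x = 2
```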

Initialization:
Start State: (0,0)
Apply Rule 2:
(X,Y | Y<3) -> (X,3) {Fill 3-gallon jug}
Now the state is (0,3)
Iteration 1:
Current State: (0,3)

Apply Rule 7:
(X,Y | X+Y<=4 ^ Y>0) -> (X+Y,0) {Pour all water from 3-gallon jug into
4-gallon jug}

Now the state is (3,0)

Iteration 2:
Current State : (3,0)

Apply Rule 2:

(X,Y | Y<3) -> (X,3) {Fill 3-gallon jug}


Now the state is (3,3)

Iteration 3:
Current State:(3,3)
Apply Rule 5:

(X,Y | X+Y>=4 ^ Y>0) -> (4,Y-(4-X)) {Pour water from 3-gallon jug into
4-gallon jug until the 4-gallon jug is full}
Now the state is (4,2)
Iteration 4:
Current State : (4,2)
Apply Rule 3:
(X,Y | X>0) -> (0,Y) {Empty 4-gallon jug}

Now state is (0,2)

Iteration 5:
Current State : (0,2)
Apply Rule 9:
(0,2) -> (2,0) {Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug}
Now the state is (2,0)
Goal Achieved.
State Space Tree:

1.6.3 Production System


A production system consists of four basic components:
1) A set of rules of the form Ci → Ai where Ci is the condition part and Ai is the action
part. The condition determines when a given rule is applied, and the action determines
what happens when it is applied.
2) One or more knowledge databases that contain whatever information is
relevant for the given problem. Some parts of the database may be permanent,
while others may be temporary, existing only during the solution of the
current problem. The information in the databases may be structured in any
appropriate manner.
3) A control strategy that determines the order in which the rules are applied to the
database, and provides a way of resolving any conflicts that can arise when several
rules match at once.
4) A rule applier which is the computational system that implements the control strategy
and applies the rules.
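The four components can be sketched as a forward-chaining loop over a set-of-facts database. The rules and facts below are invented for illustration, and the control strategy here is simply "first rule that changes the database wins":

```python
# Rules of the form Ci -> Ai: (condition predicate, action on the database).
rules = [
    (lambda db: "dirty" in db, lambda db: db - {"dirty"} | {"clean"}),
    (lambda db: "clean" in db, lambda db: db | {"done"}),
]

def run_production_system(database, rules, max_steps=10):
    """Rule applier: repeatedly fire the first rule that changes the database."""
    for _ in range(max_steps):
        for condition, action in rules:   # control strategy: rule order
            if condition(database):
                new_db = action(database)
                if new_db != database:
                    database = new_db
                    break
        else:
            break                          # no rule changed anything: stop
    return database

result = run_production_system(frozenset({"dirty"}), rules)
```

Conflict resolution here is trivial (fixed rule order); a real control strategy might instead rank matching rules by specificity or recency.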

Figure 3: Architecture of Production System


Production systems play an important role in AI; among other things, they
provide a powerful knowledge representation scheme.

A production system not only represents knowledge but also action. It acts as
a bridge between AI and expert systems. Production systems provide a language
in which the representation of expert knowledge is very natural. In a
production system, knowledge is represented as a set of rules of the form
IF (condition) THEN (action), along with a control system and a database.

1.6.4 Control Strategies


The control system serves as a rule interpreter and sequencer. The database
acts as a context buffer, which records the conditions evaluated by the rules
and the information on which the rules act. Figure 3 shows the working
mechanism of a production system. The production rules are also known as
condition-action, antecedent-consequent, pattern-action, situation-response,
or feedback-result pairs.
• Helps us decide which rule to apply next.
• What to do when more than one rule matches?
• A good control strategy should:
◦ Cause motion
◦ Be systematic

1.6.5 Problem Characteristics


Heuristic search is applicable to both large and small problems. The correct
method for solving a particular problem is selected by analyzing the
following dimensions:
• Is the problem decomposable into a set of independent smaller or easier sub problems?
• Can solution steps be ignored or at least undone if they prove unwise?
• Is the problem’s universe predictable?
• Is a good solution to the problem obvious without comparison to all other possible
solutions?
• Is the desired solution a state of the world or a path to a state?
• Is a large amount of knowledge absolutely required to solve the problem, or is
knowledge important only to constrain the search?
• Can a computer that is simply given the problem return the solution, or will the
solution of the problem require interaction between the computer and a person?
1.6.6 Production System Characteristics
Production system is a good way to describe the operations that can search for a solution
to a problem.
Can production systems, like problems, be described by a set of
characteristics that shed some light on how they can be implemented?
Solution: Yes.
• A monotonic production system is one in which the application of a rule
never prevents the later application of another rule that could also have
been applied at the time the first rule was selected.
• A non-monotonic production system is one in which this is not the case.
• A partially commutative production system is one with the property that if
the application of a particular sequence of rules transforms state x into
state y, then any allowable permutation of those rules also transforms state
x into state y.
• A commutative production system is both monotonic and partially commutative.
The importance of these categories of production systems lies in the
relationship between the categories and appropriate implementation strategies.

What relationships are there between problem types and the types of production systems
best suited to solving the problems?
Solution:
There are an infinite number of production systems that describe ways to find
solutions. A commutative production system can solve any problem, but is
useless in practice: it merely represents sequences of applications of the
rules of a non-commutative system.

In a formal sense, there is no relationship between kinds of problems and
kinds of production systems, since all problems can be solved by all kinds of
systems.

In a practical sense, there is a relationship between kinds of problems and
the kinds of systems that lend themselves naturally to describing those
problems.

1.6.7 Features of Production System


Some of the main features of production systems are:
• Expressiveness and intuitiveness: In the real world, situations often take
the form "if this happens, this has to be done" or "if this is the case, then
this should happen". The production system is the one that informs us what to
do in any given situation.
• Simplicity: The structure of any production system is unique and uniform,
as the rules use the "if-then" format. This provides simplicity in knowledge
representation and improves the readability of production rules.
• Modifiability: This means the facility of modifying rules. The skeletal
form is developed first and is then refined to suit a specific application.
• Modularity: This means knowledge is available in discrete chunks.
Information is a collection of independent facts which can be added to or
deleted from the system with no side effects.
• Knowledge intensive: The knowledge base of a production system stores pure
knowledge; there is no control or programming information in this part. Each
production rule is normally written as an English sentence, so the problem of
semantics is solved by the structured representation.
