AI Unit 1 Notes
UNIT 1: INTRODUCTION
1.1 INTRODUCTION TO AI
Artificial Intelligence (AI) is the study of how to make computers do things which, at the
moment, people do better. Artificial intelligence (AI) is also defined as the simulation of human
intelligence processes by machines, especially computer systems.
1.2 DEFINITION OF AI
AI definitions can be categorized into four groups: systems that think like humans, systems that think rationally, systems that act like humans, and systems that act rationally.
Advantages of AI
1) Minimum Fault
2) Everyday utilization
3) Digital Assistant
4) Time saver
5) Efficient Work
Disadvantages of AI
1) Dependent on AI
2) Smarter than humans
3) Reduction of Decision making capacity
What is an Agent?
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Types of Agents:
• Human agent: Sensors are eyes, ears, and other organs; actuators are hands, legs, mouth, and other body parts.
• Robotic agent: Sensors are cameras and infrared range finders; actuators are various motors.
• Examples of agents: thermostat, cell phone, vacuum cleaner, robot, Alexa Echo, self-driving car, human, etc.
Intelligent Agents
An agent interacts with its environment: it perceives (senses) the state of the environment through its sensors, and it affects that state through its actuators. An intelligent agent (IA) is depicted in figure 2.
Intelligent agents may also learn or use knowledge to achieve their goals.
Figure 2: Depicts Agent interacting with environment through sensors and effectors
1.4 AGENT’S BASIC CHARACTERISTICS
• Situatedness
The agent receives some form of sensory input from its environment, and it
performs some action that changes its environment in some way. Examples of
environments: the physical world and the Internet.
• Autonomy
The agent can act without direct intervention by humans or other agents, and it has control over its own actions and internal state.
• Adaptivity
The agent is capable of
Reacting flexibly to changes in its environment;
Taking goal-directed initiative (i.e., is pro-active), when appropriate;
Learning from its own experience, its environment, and interactions with
others.
• Sociability
The agent is capable of interacting in a peer-to-peer manner with other agents or
humans.
Autonomy: an agent must have control over both its actions and its internal state. The degree of the agent's autonomy can be specified; intervention from the user may be needed only for important decisions.
Goal-oriented: an agent has well-defined goals and gradually influences its environment so as to achieve those goals.
Cooperation: cooperation of several agents permits faster and better solutions for
complex tasks that exceed the capabilities of a single agent.
There are four main aspects that need to be taken into consideration when designing an
intelligent agent.
• Percepts - This is the information that the agent receives
• Actions - This is what the agent needs to do or can do to achieve its objectives.
• Goals - This is the factor that the agent is trying to achieve
• Environment - The final aspect is the environment in which the agent will be working. The environment in which the agent performs is probably the most important aspect to consider, as it affects the outcome of the percepts, actions, and goals.
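The four design aspects above can be sketched as a minimal agent skeleton. The class names, the vacuum-world percept format, and the action strings below are illustrative assumptions, not part of the notes:

```python
class Agent:
    """Minimal agent skeleton: maps a percept to an action in pursuit of a goal."""

    def __init__(self, goal):
        self.goal = goal  # the factor the agent is trying to achieve

    def program(self, percept):
        # Each concrete agent supplies its own percept-to-action mapping.
        raise NotImplementedError


class VacuumAgent(Agent):
    """Toy example: the percept is (location, status); the environment has
    two squares, "A" and "B"."""

    def program(self, percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"


agent = VacuumAgent(goal="all squares clean")
print(agent.program(("A", "Dirty")))  # Suck
print(agent.program(("A", "Clean")))  # Right
```

The environment is represented implicitly here by whatever supplies the percepts; a fuller simulation would loop, feeding each chosen action back into an environment model.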
Examples of Agents
Agent Type | Percepts | Actions | Goals | Environment
Bin-Picking Robot | Images | Grasp objects; sort into bins | Parts in correct bins | Conveyor belt
Medical Diagnosis | Patient symptoms, tests | Tests and treatments | Healthy patient | Patient & hospital
Excite's Jango product finder | Web pages | Navigate web, gather relevant products | Find best price for a product | Internet
Webcrawler Softbot | Web pages | Follow links, pattern matching | Collect info on a subject | Internet
Financial forecasting software | Financial data | Gather data on companies | Pick stocks to buy & sell | Stock market, company reports
A rational agent can be anything that makes decisions, typically a person, firm, machine,
or software.
(2) Model-Based Reflex Agents (Agents that keep track of the world)
In this type of agent, an internal state maintains important information from previous percepts, and this internal state helps the agent choose an action. Consider the Mars Lander again: when it senses a rock, it checks its previous state before performing the collection action, to see whether this type of rock has already been collected.
Condition-action rule: if rock, check previous state; if not already collected, collect.
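The condition-action rule with internal state can be sketched as follows; the percept format and rock names are illustrative assumptions:

```python
class ModelBasedReflexAgent:
    """Keeps an internal state (rock types already collected) built up
    from previous percepts, in the spirit of the Mars Lander example."""

    def __init__(self):
        self.collected = set()  # internal state maintained across percepts

    def program(self, percept):
        rock = percept.get("rock")
        # Condition-action rule: if rock and not previously collected, collect it.
        if rock is not None and rock not in self.collected:
            self.collected.add(rock)
            return "collect"
        return "ignore"


lander = ModelBasedReflexAgent()
print(lander.program({"rock": "basalt"}))  # collect
print(lander.program({"rock": "basalt"}))  # ignore (already in internal state)
```

A simple reflex agent with the same rule but no `collected` set would re-collect the same rock type every time it was sensed.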
(3) Goal-Based Agents
An agent knows a description of the current state and also needs some sort of goal information that describes situations that are desirable. Actions are matched against the current state, and selection depends on the goal state. A goal-based agent is also more flexible when there is more than one destination: after one destination is reached and a new destination is specified, the goal-based agent comes up with new behavior. Search and Planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
Consider the Mars Lander on the surface of Mars with an obstacle in its way. In a goal-based agent it is uncertain which path the agent will take, and some paths are clearly less efficient than others; in a utility-based agent, the best path is the one with the best output from the utility function, and that path is chosen.
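The contrast can be sketched in code. The candidate paths, their costs, and the utility weights below are invented for illustration; only the selection logic reflects the distinction drawn above:

```python
# Hypothetical paths around the obstacle, each of which reaches the goal.
paths = {
    "left_detour":  {"length": 12, "energy": 8},
    "right_detour": {"length": 9,  "energy": 11},
    "reverse_loop": {"length": 20, "energy": 5},
}

def goal_based_choice(paths):
    # A goal-based agent only checks that a path reaches the goal;
    # all three qualify, so which one it takes is uncertain.
    return next(iter(paths))

def utility(p):
    # Assumed utility function: shorter and cheaper is better
    # (the 0.5 weight on energy is arbitrary).
    return -(p["length"] + 0.5 * p["energy"])

def utility_based_choice(paths):
    # A utility-based agent picks the path that maximizes utility.
    return max(paths, key=lambda name: utility(paths[name]))

print(utility_based_choice(paths))  # right_detour scores best here
```

With these particular weights the utilities are -16.0, -14.5, and -22.5, so the utility-based agent always picks "right_detour", while the goal-based agent has no basis for preferring it.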
Learning Agents
Learning allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone would allow.
(i) Define the problem precisely, including specifications of the initial situation and the final (goal) situation.
(ii) Analyze the problem; a few important features can have an immense impact on the appropriateness of various techniques for solving the problem.
(iii) Isolate and represent the task knowledge to be used to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the problem.
Define the Problem as State Space Search
Consider the problem of playing chess. To build a program that can play chess, we have to specify the starting position of the chess board, the rules that define legal moves, and the board positions that represent a win. The goal of winning the game, if possible, must be made explicit.
The starting position can be described by an 8 x 8 array in which each element Square(x, y) (x varying from 1 to 8 and y varying from 1 to 8) describes the board position of an appropriate chess piece. The goal is any board position in which the opponent does not have a legal move and his or her king is under attack. The legal moves provide the way of getting from the initial state to a final state.
The legal moves can be described as a set of rules consisting of two parts: A left side that
gives the current position and the right side that describes the change to be made to the board
position. An example is shown in the following figure.
Current Position: pawn at Square(5, 2), AND Square(5, 3) is empty, AND Square(5, 4) is empty.
Changing Board Position: move pawn from Square(5, 2) to Square(5, 4).
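This two-part rule can be sketched as a condition function plus a transformation. The board representation (a dictionary mapping occupied squares to piece names) is an assumption for illustration:

```python
# Board: dict mapping (column, row) -> piece name; empty squares are absent.

def rule_applies(board):
    # Left side of the rule: pawn at (5,2), and (5,3) and (5,4) are empty.
    return (board.get((5, 2)) == "pawn"
            and (5, 3) not in board
            and (5, 4) not in board)

def apply_rule(board):
    # Right side of the rule: move the pawn from (5,2) to (5,4).
    board = dict(board)  # don't mutate the caller's state
    del board[(5, 2)]
    board[(5, 4)] = "pawn"
    return board


start = {(5, 2): "pawn"}
if rule_applies(start):
    print(apply_rule(start))  # {(5, 4): 'pawn'}
```

A full chess program would hold one such condition/transformation pair for every class of legal move, and search would consist of repeatedly choosing which applicable rule to fire.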
Single-state problem
• Exact prediction is possible
• state - is known exactly after any sequence of actions
• accessibility - all essential information can be obtained through sensors
• consequences of actions are known to the agent
• goal - for each known initial state, there is a unique goal state that is guaranteed to be
reachable via an action sequence
• simplest case, but severely restricted
Example: vacuum world.
Limitations: can't deal with incomplete accessibility, incomplete knowledge about consequences, changes in the world, or indeterminism in the world or in actions.
Multiple-state problem
• semi-exact prediction is possible
• state is not known exactly, but limited to a set of possible states after each action
• accessibility- not all essential information can be obtained through sensors
• reasoning can be used to determine the set of possible states
• consequences of actions are not always or completely known to the agent; actions or
the environment might exhibit randomness
• goal - due to ignorance, there may be no fixed action sequence that leads to the goal
• less restricted, but more complex
Example: Vacuum world, but the agent has no sensors. The action sequence right,
suck, left, suck is guaranteed to reach the goal state from any initial state
Limitations: Can’t deal with changes in the world during execution (“contingencies”)
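The sensorless vacuum-world claim above can be checked by simulation: whatever the initial state, the fixed sequence Right, Suck, Left, Suck ends with both squares clean. The state encoding below is an assumption:

```python
# State: (location, dirt_in_A, dirt_in_B); location is "A" or "B".

def step(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":  # cleans only the square the agent is on
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    return state

def run(state, actions):
    for action in actions:
        state = step(state, action)
    return state

# All 8 possible states; with no sensors the agent can't tell which it is in.
all_states = [(loc, a, b) for loc in "AB"
              for a in (True, False) for b in (True, False)]

finals = {run(s, ["Right", "Suck", "Left", "Suck"]) for s in all_states}
print(finals)  # {('A', False, False)}: both squares clean from any start
```

The set of possible states (the agent's belief state) collapses to a single fully clean state, which is exactly why the fixed sequence works despite the missing sensors.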
Contingency Problem
• Exact prediction is impossible
• state unknown in advance, may depend on the outcome of actions and changes in the
environment
• Accessibility: some essential information may be obtained through sensors only at execution time
• Consequences of actions may not be known at planning time
• Goal: instead of single action sequences, there are trees of actions; a contingency is a branching point in the tree of actions
• Agent design: different from the previous two cases; the agent must act on incomplete plans
• Search and execution phases are interleaved
Example: Vacuum world, The effect of a suck action is random. There is no action
sequence that can be calculated at planning time and is guaranteed to reach
the goal state.
Limitations: Can’t deal with situations in which the environment or effects of action
are unknown
Exploration Problem
• Effects of actions are unknown
• State: the set of possible states may be unknown
• Accessibility: some essential information may be obtained through sensors only at execution time
• Consequences of actions may not be known at planning time
• Goal: can't be completely formulated in advance, because states and consequences may not be known at planning time
• Discovery: what states exist
• Experimentation: what are the outcomes of actions
• Learning: remember and evaluate experiments
• Agent design: different from the previous cases; the agent must experiment
• Search: requires search in the real world, not in an abstract model
• Realistic problems, very hard
Example - Vacuum cleaner
Goal formulation: intuitively, we want all the dirt cleaned up. Formally, the goal is
{state 7, state 8 }.
Measuring performance
With any intelligent agent, we want it to find a (good) solution and not spend forever
doing it. The interesting quantities are, therefore, the search cost--how long the agent takes to
come up with the solution to the problem, and the path cost--how expensive the actions in the
solution are. The total cost of the solution is the sum of the above two quantities.
Example - Water Jug Problem:
Problem:
You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marks on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?
Solution:
The state space for this problem can be described as the set of ordered pairs of integers (X, Y), where X represents the quantity of water in the 4-gallon jug (X = 0, 1, 2, 3, 4) and Y represents the quantity of water in the 3-gallon jug (Y = 0, 1, 2, 3).
Start State: (0,0)
Goal State: (2,0)
Production Rules (the rules used in the solution below):
Rule 2: (X,Y | Y<3) -> (X,3) {Fill the 3-gallon jug}
Rule 3: (X,Y | X>0) -> (0,Y) {Empty the 4-gallon jug}
Rule 5: (X,Y | X+Y>=4 ^ Y>0) -> (4, Y-(4-X)) {Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full}
Rule 7: (X,Y | X+Y<=4 ^ Y>0) -> (X+Y, 0) {Pour all water from the 3-gallon jug into the 4-gallon jug}
Rule 9: (0,2) -> (2,0) {Pour the 2 gallons of water from the 3-gallon jug into the 4-gallon jug}
Initialization:
Start State: (0,0)
Apply Rule 2:
(X,Y | Y<3) -> (X,3) {Fill the 3-gallon jug}
Now the state is (0,3)
Iteration 1:
Current State: (0,3)
Apply Rule 7:
(X,Y | X+Y<=4 ^ Y>0) -> (X+Y, 0) {Pour all water from the 3-gallon jug into the 4-gallon jug}
Now the state is (3,0)
Iteration 2:
Current State : (3,0)
Apply Rule 2:
(X,Y | Y<3) -> (X,3) {Fill the 3-gallon jug}
Now the state is (3,3)
Iteration 3:
Current State:(3,3)
Apply Rule 5:
(X,Y | X+Y>=4 ^ Y>0) -> (4, Y-(4-X)) {Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full}
Now the state is (4,2)
Iteration 4:
Current State : (4,2)
Apply Rule 3:
(X,Y | X>0) -> (0,Y) {Empty the 4-gallon jug}
Now the state is (0,2)
Iteration 5:
Current State : (0,2)
Apply Rule 9:
(0,2) -> (2,0) {Pour the 2 gallons of water from the 3-gallon jug into the 4-gallon jug}
Now the state is (2,0)
Goal Achieved.
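The hand-traced solution above can be verified with a breadth-first search over the (X, Y) state space. This is a sketch, not part of the notes; the successor function enumerates the fill, empty, and pour moves directly rather than numbering them as production rules:

```python
from collections import deque

CAP_X, CAP_Y = 4, 3  # capacities of the 4-gallon and 3-gallon jugs

def successors(state):
    x, y = state
    yield (CAP_X, y)  # fill the 4-gallon jug
    yield (x, CAP_Y)  # fill the 3-gallon jug
    yield (0, y)      # empty the 4-gallon jug
    yield (x, 0)      # empty the 3-gallon jug
    pour = min(y, CAP_X - x)
    yield (x + pour, y - pour)  # pour from the 3-gallon into the 4-gallon jug
    pour = min(x, CAP_Y - y)
    yield (x - pour, y + pour)  # pour from the 4-gallon into the 3-gallon jug

def bfs(start, goal):
    """Return a shortest sequence of states from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs((0, 0), (2, 0)))
```

The search returns a six-move solution, matching the iteration count of the rule-by-rule trace above.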
State Space Tree: (figure showing the state space tree for the water jug problem)
A production system not only represents knowledge but also action. It acts as a bridge between AI and expert systems. A production system provides a language in which the representation of expert knowledge is very natural. In a production system, knowledge is represented as a set of rules of the form IF (condition) THEN (action), along with a control system and a database.
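The three components just named can be sketched as a tiny interpreter: a database (working memory), a set of IF-THEN rules, and a control system that decides which applicable rule fires. The set-based database and the specific facts below are illustrative assumptions:

```python
def run_production_system(database, rules, max_steps=10):
    """Repeatedly fire the first applicable rule until none applies."""
    for _ in range(max_steps):
        for condition, action in rules:     # control system: scan rules in order
            if condition(database):
                database = action(database)  # fire the rule, updating the database
                break
        else:
            break  # no rule's condition matched: halt
    return database


# Illustrative rules over a set of facts.
rules = [
    # IF a rock is seen and not yet collected THEN record it as collected.
    (lambda db: "rock_seen" in db and "rock_collected" not in db,
     lambda db: db | {"rock_collected"}),
    # IF a rock is collected and not yet stored THEN record it as stored.
    (lambda db: "rock_collected" in db and "stored" not in db,
     lambda db: db | {"stored"}),
]

print(run_production_system({"rock_seen"}, rules))
```

Here the control strategy is the simplest possible (first match wins, in rule order); real production systems differ mainly in how this conflict-resolution step is designed.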
What relationships are there between problem types and the types of production systems
best suited to solving the problems?
Solution:
There are infinite number of production system which describe ways to find solutions
commutative one solves any problem but is useless. It represents sequence of applications of
rules of a non-commutative system.
In a practical sense, relationship between kinds of problems and the kinds of systems that
lend themselves to describing those problems.