Artificial Intelligence Unit 1 Notes
Definition: Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means
"man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."
• Systems that think like humans
• Systems that act like humans
• Systems that think rationally
• Systems that act rationally
Definition: Artificial Intelligence is the study of how to make computers do things, which, at the moment, people
do better.
According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of making
intelligent machines, especially intelligent computer programs”.
Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently,
in a manner similar to how intelligent humans think.
AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to
solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.
It has gained prominence recently due, in part, to big data, or the increase in speed, size and variety of data
businesses are now collecting. AI can perform tasks such as identifying patterns in the data more efficiently than
humans, enabling businesses to gain more insight out of their data.
From a business perspective AI is a set of very powerful tools, and methodologies for using those tools to solve
business problems.
From a programming perspective, AI includes the study of symbolic programming, problem solving, and search.
AI Vocabulary
Intelligence relates to tasks involving higher mental processes, e.g. creativity, solving problems, pattern
recognition, classification, learning, induction, deduction, building analogies, optimization, language processing,
knowledge and many more. Intelligence is the computational part of the ability to achieve goals.
Intelligent behaviour is depicted by perceiving one’s environment, acting in complex environments, learning and
understanding from experience, reasoning to solve problems and discover hidden knowledge, applying knowledge
successfully in new situations, thinking abstractly, using analogies, communicating with others and more.
Goals of AI
Science based goals of AI pertain to developing concepts, mechanisms and understanding biological intelligent
behaviour. The emphasis is on understanding intelligent behaviour.
Engineering based goals of AI relate to developing concepts, theory and practice of building intelligent machines.
The emphasis is on system building.
AI Techniques depict how we represent, manipulate and reason with knowledge in order to solve problems.
Knowledge is a collection of ‘facts’. To manipulate these facts by a program, a suitable representation is required.
A good representation facilitates problem solving.
Learning means that programs improve on the basis of the facts or behaviours they can represent. Learning denotes changes in the
system that are adaptive; in other words, it enables the system to do the same task(s) more efficiently the next time.
Applications of AI include problem solving, search and control strategies, speech recognition, natural language
understanding, computer vision, expert systems, etc.
1.1 The Problems of AI:
Intelligence does not imply perfect understanding; every intelligent being has limited perception, memory and
computation.
Many points on the spectrum of intelligence versus cost are viable, from insects to humans.
AI seeks to understand the computations required from intelligent behaviour and to produce computer systems that
exhibit intelligence.
Aspects of intelligence studied by AI include perception, communication using human languages, reasoning,
planning, learning and memory.
The following questions are to be considered before we can step forward:
1. What are the underlying assumptions about intelligence?
2. What kinds of techniques will be useful for solving AI problems?
3. At what level can human intelligence be modelled?
4. How will we know when an intelligent program has been built?
A physical symbol system consists of a set of entities, called symbols, which are physical patterns that can occur
as components of another type of entity called an expression (or symbol structure). Thus, a symbol structure is
composed of a number of instances (or tokens) of symbols related in some physical way (such as one token being
next to another). At any instant of time the system will contain a collection of these symbol structures. Besides
these structures, the system also contains a collection of processes that operate on expressions to produce other
expressions: processes of creation, modification, reproduction and destruction. A physical symbol system is a
machine that produces through time an evolving collection of symbol structures. Such a system exists in a world
of objects wider than just these symbolic expressions themselves.
The operating mechanism can even be thrown into action independently of any object to operate upon (although
of course no result could then be developed). Again, it might act upon other things besides numbers, were objects
found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and
which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine.
Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical
composition were susceptible of such expression and adaptations, the engine might compose elaborate and
scientific pieces of music of any degree of complexity or extent. [Lovelace, 1961]
As it has become increasingly easy to build computing machines, so it has become increasingly possible to conduct
empirical investigations of the physical symbol system hypothesis. In each such investigation, a particular task that
might be regarded as requiring intelligence is selected. A program to perform the task is proposed and then tested.
Although we have not been completely successful at creating programs that perform all the selected tasks, the results of these investigations so far lend support to the physical symbol system hypothesis.
1.3 AI Technique:
Artificial Intelligence research during the last three decades has concluded that Intelligence requires knowledge.
To compensate for its one overpowering asset, indispensability, knowledge possesses some less desirable properties:
A. It is huge.
B. It is difficult to characterize correctly/accurately.
C. It is constantly varying/changing.
D. It differs from data by being organized in a way that corresponds to its application.
E. It is complicated.
The objective is to write a computer program in such a way that the computer wins most of the time.
Three approaches are presented to play this game, which increase in:
Complexity
Use of generalization
Clarity of their knowledge
Extensibility of their approach
These approaches will move towards being representations of what we will call AI techniques.
Tic Tac Toe Board- (or Noughts and crosses, Xs and Os)
It is a two-player game in which players X and O take turns marking the spaces in a 3×3 grid.
The player who succeeds in placing three of their marks in a horizontal, vertical, or diagonal row wins the game.
Approach 1
Data Structure
Consider the board as a nine-element vector.
Each element will contain
● 0 for blank
● 1 indicating X player move
● 2 indicating O player move
The computer may play as the X or O player.
Whoever plays first always plays X.
Move Table MT
MT is a vector of 3^9 elements, each element of which is a nine-element vector representing a board position.
There are a total of 3^9 = 19,683 elements in MT.
Algorithm
To make a move, do the following:
i. View the vector (board) as a ternary number and convert it to its corresponding decimal number.
ii. Use the computed number as an index into the MT and access the vector stored there.
iii. The selected vector in step 2 represents the way the board will look after the move. Set board equal to that
vector.
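A minimal Python sketch of step (i) of this algorithm is given below: it converts the nine-element board vector (0 = blank, 1 = X, 2 = O) into a ternary index. The move table itself, which would map this index to a successor position, is not reproduced here.

# Minimal sketch of Approach 1's indexing step (Python, illustrative only).
# The board is a nine-element vector with 0 = blank, 1 = X, 2 = O, as above.
# Viewing it as a ternary number gives an index into the 3^9-entry move table.

def board_to_index(board):
    """Convert a nine-element board vector into its ternary (base-3) index."""
    index = 0
    for cell in board:          # most significant digit first
        index = index * 3 + cell
    return index

# Example: an empty board maps to index 0; X in the last cell maps to 1.
print(board_to_index([0] * 9))         # 0
print(board_to_index([0] * 8 + [1]))   # 1

# To make a move, one would look up move_table[board_to_index(board)]
# and replace the board with the stored successor position (table not shown).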
Comments
▪ Very efficient in terms of time but has several disadvantages.
Lot of space to store the move table.
Lot of work to specify all the entries in move table.
Highly error prone as the data is voluminous.
Poor extensibility: If we want to extend to three-dimensional tic-tac-toe, 3^27 board positions would have to be stored, which
would overwhelm present computer memories.
Not intelligent at all.
PossWin(P)
▪ Returns 0 if player P cannot win on its next move,
▪ otherwise returns the number of the square that constitutes a winning move for P.
Rule
If PossWin(P) = 0 {P cannot win}, then find whether the opponent can win. If so, block it.
Strategy used by PossWin
PossWin examines the rows, columns and diagonals one at a time, using the product of the three values in each line
(with the board coding assumed for this version: 2 for blank, 3 for X, 5 for O):
If the product is 3 * 3 * 2 = 18, then player X can win in that line;
else if the product is 5 * 5 * 2 = 50, then player O can win in that line.
These procedures are used in the algorithm shown below.
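Before the turn-by-turn algorithm, here is a minimal Python sketch of PossWin, assuming the board coding mentioned above (2 = blank, 3 = X, 5 = O), which is what makes the products 18 and 50; the board is indexed 1 to 9 and the Go/Make2 procedures are not shown.

# Sketch of PossWin (Python, illustrative). Assumed coding: 2 = blank, 3 = X,
# 5 = O, so a line with two X's and a blank has product 3*3*2 = 18, and a line
# with two O's and a blank has product 5*5*2 = 50.

LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),      # rows
         (1, 4, 7), (2, 5, 8), (3, 6, 9),      # columns
         (1, 5, 9), (3, 5, 7)]                 # diagonals

def poss_win(board, player):
    """Return the square that wins for player ('X' or 'O') next move, else 0.
    board is indexed 1..9; board[0] is unused."""
    target = 18 if player == 'X' else 50
    for line in LINES:
        product = board[line[0]] * board[line[1]] * board[line[2]]
        if product == target:
            for square in line:
                if board[square] == 2:         # the blank square completes the line
                    return square
    return 0

# Example: X (3) on squares 1 and 2, everything else blank (2): X can win at 3.
board = [None, 3, 3, 2, 2, 2, 2, 2, 2, 2]
print(poss_win(board, 'X'))                    # 3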
Algorithm:
1. Turn = 1: (X moves)
Go(1) //make a move at the left-top cell
2. Turn = 2: (O moves)
IF board[5] is blank THEN
Go(5)
ELSE
Go(1)
3. Turn = 3: (X moves)
IF board[9] is blank THEN
Go(9)
ELSE
Go(3).
4. Turn = 4: (O moves)
IF Posswin (X) is not 0 THEN
Go (Posswin (X))
//Prevent the opponent from winning
ELSE Go (Make2)
5. Turn = 5: (X moves)
IF Posswin(X) is not 0 THEN
Go(Posswin(X)) //Win for X.
ELSE IF Posswin(O) is not 0 THEN
Go(Posswin(O)) //Prevent the opponent from winning
ELSE IF board[7] is blank THEN
Go(7)
ELSE Go(3)
6. Turn = 6: (O moves)
IF Posswin(O) is not 0 THEN
Go(Posswin(O)) //Win for O.
ELSE IF Posswin(X) is not 0 THEN
Go(Posswin(X)) //Prevent the opponent from winning
ELSE Go(Make2)
7. Turn=7: (X moves)
IF Posswin(X) is not 0 THEN
Go(Posswin(X))
ELSE IF Posswin(O) is not 0 THEN
Go(Posswin(O))
ELSE go anywhere that is blank
8. Turn =8:( O moves)
IF Posswin(O) is not 0 THEN
Go(Posswin(O))
ELSE IF Posswin(X) is not 0 THEN
Go(Posswin(X))
ELSE go anywhere that is blank
9. Same as the turn 7
Comments
▪ Not as efficient as first one in terms of time.
▪ Several conditions are checked before each move.
▪ It is memory efficient.
▪ Easier to understand & complete strategy has been determined in advance
▪ Still cannot generalize to 3-D tic-tac-toe.
Approach 3
Same as approach 2 except for one change in the representation of the board.
The board is considered to be a 3 x 3 magic square, with the 9 squares numbered as in the magic
square.
This representation makes the process of checking for a possible win simpler.
Board Layout – Magic Square
The board layout is a magic square: each row, column and diagonal adds to 15.
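A minimal Python sketch of this simplified win check is given below. It assumes one common numbering of the 3 x 3 magic square (top row 8 3 4, middle row 1 5 9, bottom row 6 7 2); any pair of a player's squares whose sum, subtracted from 15, gives a blank square yields a winning move.

# Sketch of the magic-square win check (Python, illustrative).
# Assumed magic square:
#     8 3 4
#     1 5 9
#     6 7 2
# Every row, column and diagonal sums to 15, so a player can win iff two of
# their squares sum to s and the square numbered 15 - s is blank.

from itertools import combinations

def winning_square(my_squares, blank_squares):
    """Return a magic-square number that completes 15 for the player, else None."""
    for a, b in combinations(my_squares, 2):
        third = 15 - (a + b)
        if 1 <= third <= 9 and third not in (a, b) and third in blank_squares:
            return third
    return None

# Example: X holds 8 and 3 (the top row); 4 completes 15 and wins.
print(winning_square({8, 3}, {4, 9, 2, 7, 6, 1}))   # 4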
Comments
▪ This program will require more time than the other two, as it has to search a tree representing all possible move sequences before making each move.
▪ This approach is extensible to handle 3-dimensional tic-tac-toe and games more complicated than tic-tac-toe.
iii) Example
The system used here will be the slot and filler system.
Take, for example, the sentence: 'She found a red one she really liked'.
Event1:
  instance: finding
  tense: past
  agent: Rani
  object: Thing1
Thing1:
  instance: coat
  colour: red
Event2:
  instance: liking
  tense: past
  modifier: much
  object: Thing1
Figure 1.2 A structured representation of a sentence
The question is stored in two forms: as input and in the above form.
ii) Algorithm
Convert the question to a structured form using knowledge of English, then use a marker to indicate the substring
(like 'who' or 'what') of the structure that should be returned as an answer.
If a slot and filler system is used a special marker can be placed in more than one slot.
The answer appears by matching this structured form against the structured text.
The structured form is matched against the text and the requested segments of the question are returned.
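A minimal Python sketch of this matching step is given below. It assumes frames are represented as dictionaries and that the marker '?' labels the slot whose filler should be returned; it is illustrative only, not a full slot-and-filler system.

# Minimal sketch of matching a structured question against structured text
# (Python, illustrative). Frames are dicts; the special marker '?' marks the
# slot whose filler should be returned as the answer.

def match(question, fact):
    """Return the filler bound to '?' if the fact matches the question, else None."""
    answer = None
    for slot, value in question.items():
        if value == '?':
            answer = fact.get(slot)        # requested segment of the answer
        elif fact.get(slot) != value:
            return None                    # a required slot does not match
    return answer

# Structured text for "She found a red one she really liked" (Event1 above).
fact = {'instance': 'finding', 'tense': 'past', 'agent': 'Rani', 'object': 'Thing1'}

# Structured form of a question like "What did Rani find?", with '?' as marker.
question = {'instance': 'finding', 'agent': 'Rani', 'object': '?'}
print(match(question, fact))               # Thing1 (i.e. the red coat)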
iii) Examples
Both questions 1 and 2 generate answers via a new coat and a red coat respectively.
Question 3 cannot be answered, because there is no direct response.
iv) Comments
• This approach is more meaningful than the previous one and so is more effective.
• The extra power given must be paid for by additional search time in the knowledge bases.
• A warning must be given here: generating an unambiguous English knowledge base is a complex task and must be
left until later in the course.
• The problems of handling pronouns are difficult.
For example:
Rani walked up to the salesperson: she asked where the toy department was.
iii) Example
• Both questions 1 and 2 generate answers, as in the previous program.
• Question 3 can now be answered.
• The shopping script is instantiated and from the last sentence the path through step 14 is the one used to form
the representation. ‘M’ is bound to the red coat-got home.
• ‘Rani buys a red coat’ comes from step 10 and the integrated text generates that she bought a red coat.
iv) Comments
• This program is more powerful than both the previous programs because it has more knowledge.
• Thus, like the last game program it is exploiting AI techniques.
• However, we are not yet in a position to handle any English question.
• The major omission is that of a general reasoning mechanism known as inference to be used when the
required answer is not explicitly given in the input text.
• But this approach can handle, with some modifications, questions of the following form with the answer—
Saturday morning Rani went shopping. Her brother tried to call her but she did not answer.
Question: Why couldn’t Rani’s brother reach her?
Answer: Because she was not in.
This answer is derived because we have supplied an additional fact that a person cannot be in two places at once.
This patch is not sufficiently general so as to work in all cases and does not provide the type of solution we are
really looking for.
A problem is defined by its ‘elements’ and their ‘relations’. To provide a formal description of a problem, we
need to do the following:
a. Define a state space that contains all the possible configurations of the relevant objects, including some
impossible ones.
The problem can then be solved by using the rules, in combination with an appropriate control strategy, to move
through the problem space until a path from an initial state to a goal state is found. This process is known as
‘search’. Thus:
Search is fundamental to the problem-solving process.
Search is a general mechanism that can be used when a more direct method is not known.
Search provides the framework into which more direct methods for solving subparts of a problem can be
embedded. A very large number of AI problems are formulated as search problems.
Summary: In order to provide a formal description of a problem, it is necessary to do the following things:
✓ Define a state space that contains all the possible configurations of the relevant objects.
✓ Specify one or more states within that space that describe possible situations from which the problem-
solving process may start. These states are called initial states.
✓ Specify one or more states that would be acceptable as solutions to the problem called goal states.
✓ Specify a set of rules that describe the actions. Order of application of the rules is called control strategy.
Control strategy should cause motion towards a solution.
• The problem is then solved by using the production rules in combination with an appropriate control strategy, moving
through the problem space until a path from an initial state to a goal state is found.
• In this problem-solving process, search is the fundamental concept.
• For simple problems it is easier to achieve this goal by hand but there will be cases where this is far too
difficult.
• Algorithm:
1. If the initial state is a goal state, quit and return success.
2. Otherwise, do the following until success or failure is signaled:
• Generate a successor, E, of initial state. If there are no more successors, signal failure.
• Call Depth-First Search, with E as the initial state
• If success is returned, signal success. Otherwise continue in this loop.
✓ Here we pursue a single branch of the tree until it yields a solution or some pre-specified depth has been reached.
✓ If a solution is not found, then go back to the immediately previous node and explore other branches in depth-first fashion.
✓ Let us see the tree formation for water jug problem using DFS.
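A minimal Python sketch of depth-first search on the water jug problem is given below, assuming the usual formulation (a 4-gallon and a 3-gallon jug, with the goal of exactly 2 gallons in the 4-gallon jug); the state is the pair (x, y) of gallons in the two jugs.

# Depth-first search sketch for the water jug problem (Python, illustrative).
# Assumed formulation: a 4-gallon jug and a 3-gallon jug, goal = 2 gallons in
# the 4-gallon jug. A state is the pair (x, y) of gallons in each jug.

def successors(state):
    x, y = state
    return {
        (4, y), (x, 3),                      # fill either jug
        (0, y), (x, 0),                      # empty either jug
        (min(4, x + y), max(0, x + y - 4)),  # pour 3-gallon jug into 4-gallon jug
        (max(0, x + y - 3), min(3, x + y)),  # pour 4-gallon jug into 3-gallon jug
    }

def dfs(state, goal_test, visited=None):
    """Return a path (list of states) from state to a goal, or None."""
    if visited is None:
        visited = set()
    if goal_test(state):
        return [state]
    visited.add(state)
    for nxt in successors(state):
        if nxt not in visited:
            path = dfs(nxt, goal_test, visited)
            if path is not None:
                return [state] + path
    return None

# One branch of the tree is pursued until the goal (2 gallons in the big jug).
print(dfs((0, 0), lambda s: s[0] == 2))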
Advantages
1. Low storage requirement: linear with tree depth.
2. Easily programmed: the function call stack does most of the work of maintaining the state of the search.
Disadvantages
1. May find a sub-optimal solution (one that is deeper or more costly than the best solution).
2. Incomplete: without a depth bound, may not find a solution even if one exists.
This example can be solved by the operator sequence UP, RIGHT, UP, LEFT, DOWN.
There are several components of this sentence, each of which, in isolation, may have more than one interpretation.
But the components must form a coherent whole, and so they constrain each other's interpretations. Some of the
sources of ambiguity in this sentence are the following:
• The word "bank" may refer either to a financial institution or to a side of a river. But only one of these
may have a president.
• The word "dish" is the object of the verb "eat." It is possible that a dish was eaten. But it is more likely that
the pasta salad in the dish was eaten.
• Pasta salad is a salad containing pasta. But there are other ways meanings can be formed from pairs of
nouns. For example, dog food does not normally contain dogs.
• The phrase "with the fork" could modify several parts of the sentence. In this case, it modifies the verb
"eat." But, if the phrase had been "with vegetables," then the modification structure would be different.
And if the phrase had been "with her friends," the structure would be different still.
The algorithms that use heuristic functions are called heuristic algorithms.
Heuristic algorithms are not really intelligent; they appear to be intelligent because they achieve better
performance.
Heuristic algorithms are more efficient because they take advantage of feedback from the data to direct the
search path.
There are some more algorithms. They are either improvements or combinations of these.
• Hierarchical Representation of Search Algorithms: A Hierarchical representation of most search algorithms is
illustrated below. The representation begins with two types of search:
• Uninformed Search: Also called blind, exhaustive or brute-force search, it uses no information about the
problem to guide the search and therefore may not be very efficient.
Uninformed search algorithms, or brute-force algorithms, search through all possible candidates in the search
space, checking whether each candidate satisfies the problem's statement.
• Informed Search: Also called heuristic or intelligent search, this uses information about the problem to guide
the search (usually an estimate of the distance to a goal state) and is therefore more efficient, but such a
search may not always be possible.
Heuristic search
To find a solution in proper time rather than a complete solution in unlimited time we use heuristics. ‘A
heuristic function is a function that maps from problem state descriptions to measures of desirability, usually
represented as numbers’.
Heuristic search methods use knowledge about the problem domain and choose promising operators first. These
heuristic search methods use heuristic functions to evaluate the next state towards the goal state.
For finding a solution, by using the heuristic technique, one should carry out the following steps:
1. Add domain-specific information to select the best path along which to continue searching.
2. Define a heuristic function h(n) that estimates the ‘goodness’ of a node n.
Specifically, h(n) = estimated cost(or distance) of minimal cost path from n to a goal state.
3. The term, heuristic means ‘serving to aid discovery’ and is an estimate, based on domain specific
information that is computable from the current state description of how close we are to a goal.
Finding a route from one city to another city is an example of a search problem in which different search orders
and the use of heuristic knowledge are easily understood.
1. State: The current city in which the traveller is located.
2. Operators: Roads linking the current city to other cities.
3. Cost Metric: The cost of taking a given road between cities.
4. Heuristic information: The search could be guided by the direction of the goal city from the current city, or
we could use airline distance as an estimate of the distance to the goal.
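A small Python sketch of such a heuristic is given below: the straight-line (airline) distance from a city to the goal city. The city coordinates are hypothetical, chosen only for demonstration.

# Sketch of the airline-distance heuristic for route finding (Python,
# illustrative). City coordinates below are hypothetical.
import math

coords = {'A': (0, 0), 'B': (3, 4), 'Goal': (6, 8)}   # hypothetical map positions

def h(city, goal='Goal'):
    """Estimated 'goodness' of a node: straight-line distance to the goal."""
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(h('A'))   # 10.0 -- farther from the goal than B
print(h('B'))   # 5.0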
Uninformed (blind) search:
• Can only search what it already has knowledge about.
• Has no knowledge about how far a node is from the goal state.
Informed (heuristic) search:
• Estimates the 'distance' to the goal state through explored nodes.
• Guides the search process toward the goal.
• Prefers states (nodes) that lead close to, and not away from, the goal state.
Systematic Generate-And-Test
✓ While generating complete solutions and generating random solutions are the two extremes there exists
another approach that lies in between.
✓ The approach is that the search process proceeds systematically, but some paths that are unlikely to lead to the
solution are not considered.
✓ This evaluation is performed by a heuristic function.
✓ Depth-first search tree with backtracking can be used to implement systematic generate-and-test
procedure.
✓ As per this procedure, if some intermediate states are likely to appear often in the tree, it is better
to modify the procedure to traverse a graph rather than a tree.
3.2.2. Steepest-Ascent Hill Climbing: It first examines all the neighbouring nodes and then selects the node
closest to the solution state as the next node.
Step 1: Evaluate the initial state. If it is a goal state, then exit; else make the current state the initial state.
Step 2: Repeat these steps until a solution is found or the current state does not change:
i. Let ‘target’ be a state such that any successor of the current state will be better than it;
ii. for each operator that applies to the current state
a. apply the new operator and create a new state
b. evaluate the new state
c. if this state is goal state then quit else compare with ‘target’
d. if this state is better than ‘target’, set this state as ‘target’
e. if target is better than current state set current state to Target
Step 3 : Exit
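A minimal Python sketch of steepest-ascent hill climbing as described in Steps 1 to 3 is given below, assuming an objective function to be maximised and a neighbour generator; the one-dimensional example at the end is hypothetical.

# Steepest-ascent hill climbing sketch (Python, illustrative). Assumes an
# objective function `value` to maximise and a `neighbours` generator.

def steepest_ascent(start, value, neighbours):
    current = start
    while True:
        # Examine all neighbouring nodes and pick the best one ('target').
        succs = list(neighbours(current))
        if not succs:
            return current
        target = max(succs, key=value)
        if value(target) <= value(current):   # no neighbour is better: stop
            return current
        current = target                      # move to the best neighbour

# Hypothetical example: maximise -(x - 3)^2 over integer states.
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(steepest_ascent(0, value, neighbours))   # climbs from 0 up to 3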
3.3.3. Stochastic Hill Climbing: It does not examine all the neighbouring nodes before deciding which node to
select. It just selects a neighbouring node at random and decides (based on the amount of improvement in that
neighbour) whether to move to that neighbour or to examine another.
State Space diagram for Hill Climbing
A state space diagram is a graphical representation of the set of states our search algorithm can
reach versus the value of our objective function (the function we wish to maximize).
X-axis: denotes the state space, i.e. the states or configurations our algorithm may reach.
Y-axis: denotes the value of the objective function corresponding to a particular state.
The best solution will be the state at which the objective function has its maximum value (the global maximum).
Different regions in the State Space Diagram
1. Local maximum : It is a state which is better than its neighbouring states; however, there exists another state which is
better than it (the global maximum). This state is better because here the value of the objective function is higher than in its
neighbours.
2. Global maximum : It is the best possible state in the state space diagram. This is because at this state the objective
function has its highest value.
3. Plateau/flat local maximum : It is a flat region of the state space where neighbouring states have the same value.
OPEN - nodes that have been generated and have had the heuristic function applied to them, but which have not been
examined yet.
It is a priority queue of nodes that have been evaluated by the heuristic function but which have not yet been
expanded into successors. The most promising nodes are at the front.
CLOSED - nodes that have already been examined. We need to keep these nodes in memory if we want to
search a graph rather than a tree, since whenever a new node is generated we need to check whether it has been
generated before.
Algorithm: Best first search (OR graph)
1. Start with OPEN holding the initial state
2. Until a goal is found or there are no nodes left on OPEN, do:
Pick the best node on OPEN
Generate its successors
For each successor Do
• If it has not been generated before, evaluate it, add it to OPEN and record its parent
• If it has been generated before change the parent if this new path is better and, in that case, update the cost
of getting to any successor nodes.
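A simplified Python sketch of the best-first search above is given below, with OPEN as a priority queue ordered by the heuristic value and CLOSED as a set of examined nodes. The graph and the h values are hypothetical, and for brevity the sketch does not re-parent a node when a better path to it is found later, unlike the full algorithm.

# Best-first search sketch (Python, illustrative). OPEN is a priority queue
# ordered by the heuristic value h; CLOSED records nodes already examined.
import heapq

def best_first_search(start, goal, graph, h):
    open_list = [(h[start], start, [start])]      # (h value, node, path so far)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)  # pick the best node on OPEN
        if node == goal:
            return path
        if node in closed:
            continue                              # already examined
        closed.add(node)
        for succ in graph.get(node, []):          # generate its successors
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

# Hypothetical graph and heuristic estimates.
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['C', 'G'], 'C': ['G']}
h = {'S': 6, 'A': 4, 'B': 3, 'C': 2, 'G': 0}
print(best_first_search('S', 'G', graph, h))      # ['S', 'B', 'G']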
Observations
In hill climbing, sorting is done on the successor nodes, whereas in best-first search sorting is done on
the entire list.
It is not guaranteed to find an optimal solution, but it normally finds some solution faster than other
methods.
The performance varies directly with the accuracy of the heuristic evaluation function.
Termination Condition: Instead of terminating when a path is found, terminate when the shortest incomplete
path is longer than the shortest complete path.
Heuristic function: A heuristic is a function used in informed search to find the most promising path. It takes
the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method
might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. A heuristic function
estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the
current state and the goal state. The value of the heuristic function is always non-negative.
h(n) <= h*(n)
Here h(n) is the heuristic (estimated) cost, and h*(n) is the actual (optimal) cost from n to the goal. Hence the heuristic
cost should be less than or equal to the actual cost.
In this search example, we use two lists, OPEN and CLOSED. Following are the iterations for
traversing the above example.
Expand the nodes of S and put in the CLOSED list
Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [A,E,F], Closed [S, B]
: Open [ A,E], Closed [S, B, F]
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum
depth of the search space.
Complete: Greedy best-first search is also incomplete, even if the given state space is finite.
Optimal: Greedy best first search algorithm is not optimal.
3.3.2 A* search
The A* ("A-star") method (Hart, 1972) is a combination of branch-and-bound and best-first search, combined with the
dynamic programming principle.
• The function g is a measure of the cost of getting from the Start node (initial state) to the current node.
• It is sum of costs of applying the rules that were applied along the best path to the current node.
• The function h is an estimate of additional cost of getting from the current node to the Goal node (final state).
• Here knowledge about the problem domain is exploited.
• The A* algorithm is called an OR graph / tree search algorithm.
Algorithm (A*)
Step 1: Initialization: place the starting node in the OPEN list; CLOSED = ∅; g = 0, f = h, Found = false.
Step 2: Check if the OPEN list is empty or not, if the list is empty then return failure and stops.
Step 3: Select the node from the OPEN list which has the smallest value of evaluation function (g+h), if node
n is goal node then return success and stop, otherwise
Step 4: Expand node n and generate all of its successors, and put n into the closed list. For each successor n',
check whether n' is already in the OPEN or CLOSED list, if not then compute evaluation function for n' and
place into Open list.
Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back pointer which
reflects the lowest g(n') value.
Step 6: Return to Step 2.
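A minimal Python sketch of Steps 1 to 6 is given below, with f(n) = g(n) + h(n) and OPEN as a priority queue, keeping the lowest g found for each node. The small graph, edge costs and heuristic values are assumptions, chosen so that the result matches the underestimation example that follows (path S->A->C->G with cost 6, direct arc S->G with cost 10).

# A* search sketch (Python, illustrative). f(n) = g(n) + h(n); OPEN is a
# priority queue ordered by f. Edge costs and heuristic values are hypothetical.
import heapq

def a_star(start, goal, graph, h):
    open_list = [(h[start], 0, start, [start])]      # (f, g, node, path)
    best_g = {start: 0}
    while open_list:                                 # Step 2: fail if OPEN empty
        f, g, node, path = heapq.heappop(open_list)  # Step 3: smallest g + h
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):       # Step 4: expand node
            new_g = g + cost
            if new_g < best_g.get(succ, float('inf')):   # keep the lowest g(n')
                best_g[succ] = new_g
                heapq.heappush(open_list, (new_g + h[succ], new_g, succ, path + [succ]))
    return None, float('inf')

# Hypothetical graph: successors listed as (node, cost) pairs.
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)], 'C': [('D', 3), ('G', 4)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star('S', 'G', graph, h))                    # (['S', 'A', 'C', 'G'], 6)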
Behavior of A* Algorithm
Underestimation
If we can guarantee that h never overestimates the actual cost from the current node to the goal, then the A* algorithm is guaranteed
to find an optimal path to a goal, if one exists.
Example (underestimation, f = g + h; here h is underestimated):
{(S->A->C->G, 6), (S->A->C->D, 11), (S->A->B, 7), (S->G, 10)}
Iteration 4 gives the final result: S->A->C->G is the optimal path, with cost 6.
Admissibility of A*:
A search algorithm is admissible, if for any graph, it always terminates in an optimal path from initial state to
goal state, if path exists.
If the heuristic function h is an underestimate of the actual cost from the current state to the goal state, then it is called an
admissible function.
Alternatively we can say that A* always terminates with the optimal path in case h(x) is an admissible heuristic
function.
Advantages:
A* search algorithm performs better than other search algorithms.
A* search algorithm is optimal and complete.
This algorithm can solve very complex problems.
Disadvantages:
It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
A* search algorithm has some complexity issues.
The main drawback of A* is memory requirement as it keeps all generated nodes in the memory, so it is not
practical for various large-scale problems.
The propagation of revised cost estimates backward up the tree is needed here, although it was not necessary in the A*
algorithm. This is because in the AO* algorithm expanded nodes are re-examined so that the current best path
can be selected.
Algorithm: AO*
Example 1
• A is the only node; it is at the end of the current best path.
• It is expanded, yielding nodes B, C and D. The arc to D is labeled as the most promising one emerging from A,
since it costs 6 compared to the AND arc to B and C, which costs 9.
• Node D is chosen for expansion. This process produces one new arc, the AND arc to E and F, with a combined
cost estimate of 10, so we update the f' value of D to 10.
• Going back one more level, we see that this makes the AND arc to B and C better than the arc to D, so it is
labeled as the current best path.
Example 2
• Crypt-Arithmetic puzzles.
• Many design tasks can also be viewed as constrained satisfaction problems.
• N-Queen: Given the condition that no two queens on the same row/column/diagonal attack each other.
• Map colouring: Given a map, colour three regions in blue, red and black, such that no two
neighbouring regions have the same colour.
• Such problems do not require a new search method.
Algorithm:
1. Propagate available constraints. To do this first set OPEN to set of all objects that must have values assigned
to them in a complete solution. Then do until an inconsistency is detected or until OPEN is empty:
a. Select an object OB from OPEN. Strengthen as much as possible the set of constraints that apply to OB.
b. If this set is different from the set that was assigned the last time OB was examined or if this is the first
time OB has been examined, then add to OPEN all objects that share any constraints with OB.
2. If the union of the constraints discovered above defines a solution, then quit and report the solution.
3. If the union of the constraints discovered above defines a contradiction, then return the failure.
4. If neither of the above occurs, then it is necessary to make a guess at something in order to proceed. To
do this loop until a solution is found or all possible solutions have been eliminated:
a. Select an object whose value is not yet determined and select a way of strengthening the constraints
on that object.
b. Recursively invoke constraint satisfaction with the current set of constraints augmented by the constraint
selected in step (a).
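A minimal Python sketch of the guess-and-recurse part of this algorithm (steps 4a and 4b) is given below, applied to the map colouring problem mentioned earlier; the adjacency map and the three colours are hypothetical, and the constraint propagation of step 1 is omitted for brevity.

# Backtracking sketch for a constraint satisfaction problem (Python,
# illustrative): map colouring with three colours. Adjacency is hypothetical;
# constraint propagation (step 1 of the algorithm) is omitted.

COLOURS = ['blue', 'red', 'black']

def consistent(region, colour, assignment, neighbours):
    return all(assignment.get(n) != colour for n in neighbours[region])

def colour_map(regions, neighbours, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(regions):
        return assignment                          # every region coloured
    region = next(r for r in regions if r not in assignment)
    for colour in COLOURS:                         # guess a value
        if consistent(region, colour, assignment, neighbours):
            result = colour_map(regions, neighbours, {**assignment, region: colour})
            if result:                             # recursive call succeeded
                return result
    return None                                    # contradiction: backtrack

# Hypothetical map: A borders B and C; B borders C.
neighbours = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
print(colour_map(['A', 'B', 'C'], neighbours))
# {'A': 'blue', 'B': 'red', 'C': 'black'}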
2.6.1.Crypt-Arithmetic puzzle
Problem Statement:
Solve the following puzzle by assigning a digit (0-9) to each letter in such a way that each letter gets a unique
digit and the addition is satisfied.
Constraints : No two letters have the same value. (The constraints of arithmetic).
Carries : C4 = ? ; C3 = ? ; C2 = ? ; C1 = ?
CONSTRAINTS:-
1. No two letters can be assigned the same digit; only a single digit can be assigned to a letter.
2. Assumption can be made at various levels such that they do not contradict each other.
3. The problem can be decomposed into a set of constraints. A constraint satisfaction approach may be used.
4. Any of search techniques may be used.
5. Backtracking may be performed as applicable to the applied search technique.
6. Rule of arithmetic may be followed.
Goal State: digits must be assigned to the letters in such a manner that the sum is satisfied.
Constraint Equation
We can easily see that M has to be a non-zero digit, so the value of C4 = 1.
1. M = C4 => M = 1
2. O = S + M + C3 - 10*C4, i.e. S + M + C3 = O + 10 (since C4 = 1).
If C3 = 0, then S = 9; else if C3 = 1, then S = 8 or 9.
C3 = 0 or 1.
If S + M + C3 = 11, then O would have to be assigned the digit 1, but 1 is already assigned to M, so this is not possible.
Therefore the only remaining choice is S + M + C3 = 10, which implies that O is assigned the digit 0 (zero).
Therefore M = 1, O = 0.
3. N = E + O + C2 - 10*C3. If C3 were 1, N would have to be 0, which is already assigned to O; hence C3 = 0, and the column E + O + C2 produces no carry.
• As O = 0, N = E + C2.
• Since N ≠ E, therefore C2 = 1 (and N = E + 1),
with S = 9 (because C3 = 0 forces S = 9 in equation 2).
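The constraint equations above (M = 1, O = 0, S = 9, carries C1 to C4) correspond to the classic SEND + MORE = MONEY puzzle; assuming that is the puzzle intended here, a brute-force Python check that confirms the unique assignment is sketched below.

# Brute-force check of the crypt-arithmetic puzzle (Python, illustrative),
# assuming it is SEND + MORE = MONEY, as suggested by the constraint
# equations above (M = 1, O = 0, S = 9, ...).
from itertools import permutations

letters = 'SENDMORY'
for digits in permutations(range(10), len(letters)):
    value = dict(zip(letters, digits))
    if value['S'] == 0 or value['M'] == 0:          # leading digits are non-zero
        continue
    send = value['S'] * 1000 + value['E'] * 100 + value['N'] * 10 + value['D']
    more = value['M'] * 1000 + value['O'] * 100 + value['R'] * 10 + value['E']
    money = (value['M'] * 10000 + value['O'] * 1000 + value['N'] * 100
             + value['E'] * 10 + value['Y'])
    if send + more == money:
        print(send, '+', more, '=', money)
        # 9567 + 1085 = 10652, i.e. S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2
        break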