
AI Unit 2 Algorithms


Artificial Intelligence

Class Notes Feb 19, 2009


A question about unification
• Unify:
– Likes(y, y) with Likes(x, John)
Can {x/John, y/John} and {x/John, y/x} both be MGUs?

First step: Θ = {y/x}


Second step: unify-var(y, John, Θ)

Adding {x/John} to Θ yields what?


Push(pair, theta) → {x/John, y/x}
Compose(pair, theta): add pair to theta and,
for each element e that is a rhs of an element in Θ,
replace e with subst(pair, e) → {x/John, y/John}
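As a rough illustration of the difference, here is a minimal sketch in Python, treating a substitution as a dict from variable names to terms. The names push and compose come from the discussion above; everything else is illustrative and only handles atomic right-hand sides.

def push(pair, theta):
    # just add the new binding; existing right-hand sides are left untouched
    var, term = pair
    theta[var] = term
    return theta                 # {'y': 'x'} + ('x', 'John') -> {'y': 'x', 'x': 'John'}

def compose(pair, theta):
    # add the new binding AND apply it to every right-hand side already in theta
    # (a full implementation would call subst(pair, t) on compound terms)
    var, term = pair
    theta = {v: (term if t == var else t) for v, t in theta.items()}
    theta[var] = term
    return theta                 # {'y': 'x'} + ('x', 'John') -> {'y': 'John', 'x': 'John'}

With Θ = {y/x} and the new pair x/John, push produces {x/John, y/x} while compose produces {x/John, y/John}, matching the two results shown above.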
The unification algorithm

• For compound expressions: check Op(x) = Op(y), then unify the argument lists


The unification algorithm

Using compose
Assignment 4b. Now Due Feb 23, 10am
In one file:
standardize(FOLexp) returns a FOLexp object with its variables
replaced by new, uniquely named variables.
unify(FOLexp1, FOLexp2, [])
returns None or a substitution, which is a list of variable/term
pairs (2 element lists)
Note: when using the “Compose” method you must also implement
subst(theta, FOLexp) and apply it correctly.

These programs must be accompanied by GOOD test/demo datasets,
and by programs testStandardize() and testUnify() that run them.
These programs must be implemented using recursion for full
credit.
FOLexp interface:
members:
kind: var, const, compound
name (for var or const)
op (for compound)
args (for compound) - a list of FOLexps
methods: isVar, isConst, isCompound -- return True or False
firstArg -- returns an FOLexp or None
restArgs -- returns a list of FOLexp or None
FOLexp(stringExp) -- the constructor requires an argument.
The argument is a list of strings representing a tokenized logical
expression. Any string beginning with a lowercase letter is a
variable; all other symbols must begin with an uppercase letter
(following your textbook).
Here are some examples of input strings:
x -- a variable
John -- a constant
Likes[x, John] -- a compound
rprint -- prettyprints the FOLexp with indentation
def testFOLexp(filename):
    f = open(filename)
    for aLine in f.readlines():
        exp = FOLexp(convert(aLine))
        exp.rprint("")

def convert(sExp):
    # returns a list of strings representing the logical expr.
    # add blanks for tokenizing
    sExp = sExp.replace('[', ' [ ')
    sExp = sExp.replace(']', ' ] ')
    sExp = sExp.replace(',', ' , ')
    # now convert string to a list of token strings
    return sExp.split()
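For example, convert turns the bracketed syntax into a flat token list:

convert("Likes[x, John]")
# -> ['Likes', '[', 'x', ',', 'John', ']']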

Contents of test file:


x
John
Likes[x, John]
Likes[Mother[Father[John]], Brother[Sally], x]
class FOLexp:
    kind = 'unknown'
    name = ''
    op = ''

    def __init__(self, e):
        # recursive constructor to build the structure
        # e is a list of token strings that needs to be parsed into a FOLexp
        self.args = []    # per-instance list; a class-level args = [] would be shared by all objects
        if len(e) == 1:
            self.name = e[0]
            if self.name[0].islower():
                self.kind = 'var'
            else:
                self.kind = 'const'
        else:
            # that was the easy case, now we have to find the arguments
            # and use slices to recursively build the component objects
            self.kind = 'compound'
            self.op = e[0]
            position = 2                      # skip the op and the opening '['
            # get ready to parse the argument list
            while position < len(e) - 1:      # skip the final ']'
                # get end point of the next argument
                endpos = position + self.getNext(e[position:])
                self.args.append(FOLexp(e[position:endpos]))
                position = endpos + 1         # step over the ',' (or the closing ']')

    def getNext(self, tokens):
        # helper assumed by __init__ (not shown in the notes): returns the number of
        # tokens making up the next argument, i.e. the offset of the first ',' or ']'
        # at bracket depth zero
        depth = 0
        for i, tok in enumerate(tokens):
            if tok == '[':
                depth += 1
            elif tok == ']':
                if depth == 0:
                    return i
                depth -= 1
            elif tok == ',' and depth == 0:
                return i
        return len(tokens)
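For context (not the required solution), here is one hedged sketch of unify against the FOLexp interface above. Substitutions are lists of [variableName, FOLexp] pairs as the assignment asks; lookup and unify_var are assumed helper names, there is no occur check, and the argument lists are walked with a loop rather than the firstArg/restArgs recursion expected for full credit.

def lookup(varName, theta):
    # return the term bound to varName in theta, or None if unbound
    for v, term in theta:
        if v == varName:
            return term
    return None

def unify_var(var, x, theta):
    val = lookup(var.name, theta)
    if val is not None:
        return unify(val, x, theta)
    if x.isVar() and lookup(x.name, theta) is not None:
        return unify(var, lookup(x.name, theta), theta)
    return theta + [[var.name, x]]       # no occur check in this sketch

def unify(x, y, theta):
    if theta is None:                    # an earlier step already failed
        return None
    if not x.isCompound() and not y.isCompound() and x.kind == y.kind and x.name == y.name:
        return theta                     # identical variables or constants
    if x.isVar():
        return unify_var(x, y, theta)
    if y.isVar():
        return unify_var(y, x, theta)
    if x.isCompound() and y.isCompound() and x.op == y.op and len(x.args) == len(y.args):
        for a, b in zip(x.args, y.args):
            theta = unify(a, b, theta)
        return theta
    return None

On the class example, unify(FOLexp(convert('Likes[y, y]')), FOLexp(convert('Likes[x, John]')), []) returns bindings equivalent to {y/x, x/John} with this sketch; composing them gives {x/John, y/John}.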
Uninformed search strategies (cont.)

• Uninformed search strategies use only the


information available in the problem definition
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search



Explore the problem space by generating and
searching a tree
• Root = start state
• Each node represents a state; its children are all the
next states
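A minimal breadth-first tree search in that spirit might look like this; start, goal_test, and successors(state) are assumed to be supplied by the problem and are not part of the notes:

from collections import deque

def breadth_first_tree_search(start, goal_test, successors):
    # expand the shallowest unexpanded node first; no repeated-state check
    frontier = deque([[start]])            # each frontier entry is a path from the root
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for child in successors(state):
            frontier.append(path + [child])
    return None

Depth-first search is the same loop with frontier.pop() (a stack) instead of popleft(), and uniform-cost search replaces the queue with a priority queue ordered by path cost.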
Depth-limited search

= depth-first search with depth limit l,


i.e., nodes at depth l have no successors

• Recursive implementation:
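The recursive pseudocode shown on this slide is not reproduced in the notes; a minimal sketch of the idea, simplified to return a path or None instead of distinguishing cutoff from failure, could be:

def depth_limited_search(state, goal_test, successors, limit):
    # depth-first search that treats nodes at depth `limit` as having no successors
    if goal_test(state):
        return [state]
    if limit == 0:
        return None
    for child in successors(state):
        result = depth_limited_search(child, goal_test, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

Iterative deepening (next slides) simply calls this with limit = 0, 1, 2, … until a solution is found.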



Iterative deepening search: example search trees for depth limits l = 0, 1, 2, 3


Iterative deepening search
• Number of nodes generated in a depth-limited search to
depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d

• Number of nodes generated in an iterative deepening search
to depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d

• For b = 10, d = 5,



– N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111

– N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456

• Overhead = (123,456 - 111,111)/111,111 ≈ 11%
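A quick Python check of the arithmetic above:

b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                   # 111111
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))     # 123456
print(n_dls, n_ids, (n_ids - n_dls) / n_dls)              # overhead ≈ 0.111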


Properties of iterative deepening search

• Complete? Yes

• Time? (d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d)

• Space? O(b·d)

• Optimal? Yes, if step cost = 1



Summary of algorithms



Repeated states

• Failure to detect repeated states can turn a linear


problem into an exponential one!



Graph search

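The GRAPH-SEARCH pseudocode belonging to this slide is not reproduced here. Its essential change over tree search is an explored set, so each repeated state is expanded at most once; a minimal sketch (assumed names, hashable states) is:

from collections import deque

def breadth_first_graph_search(start, goal_test, successors):
    frontier = deque([[start]])
    explored = set()
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        if state in explored:
            continue                       # repeated state: skip it
        explored.add(state)
        for child in successors(state):
            frontier.append(path + [child])
    return None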


Informed search algorithms

Outline

• Heuristics
• Best-first search
• Greedy best-first search
• A* search

A search strategy is defined by picking the


order of node expansion
Best-first search

• Idea: use an evaluation function f(n) for each node


– estimate of "desirability"

⇒ Expand the most desirable unexpanded node

• Implementation:
Order the nodes in the fringe in decreasing order of desirability (a sketch follows the list below)

• Special cases:


– greedy best-first search
– A* search
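The fringe-ordering pseudocode itself is not included in these notes; the sketch below is a minimal, illustrative version. It assumes start, goal_test, and successors(state) yielding (step_cost, next_state) pairs, and takes the evaluation function f(g, state) as a parameter so that greedy search and A* (below) are just different choices of f.

import heapq, itertools

def best_first_search(start, goal_test, successors, f):
    counter = itertools.count()            # tie-breaker so states are never compared
    frontier = [(f(0, start), next(counter), 0, start, [start])]   # (f, tie, g, state, path)
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)             # pop the most desirable node
        if goal_test(state):
            return path
        for step_cost, child in successors(state):
            g2 = g + step_cost
            heapq.heappush(frontier, (f(g2, child), next(counter), g2, child, path + [child]))
    return None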
Romania with step costs in km
Greedy best-first search

• Evaluation function f(n) = h(n) (heuristic)


• = estimate of cost from n to goal

• e.g., hSLD(n) = straight-line distance from n to


Bucharest

• Greedy best-first search expands the node that


appears to be closest to goal
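In terms of the hypothetical best_first_search sketched earlier, greedy best-first search just ignores the cost so far:

# h(n) is a heuristic such as straight-line distance to Bucharest
path = best_first_search(start, goal_test, successors, f=lambda g, n: h(n))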
Greedy best-first search example
Properties of greedy best-first search

• Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …

• Time? O(b^m), but a good heuristic can give dramatic improvement

• Space? O(b^m) -- keeps all nodes in memory

• Optimal? No
A* search

• Idea: avoid expanding paths that are already


expensive

• Evaluation function f(n) = g(n) + h(n)


• g(n) = cost so far to reach n

• h(n) = estimated cost from n to goal

• f(n) = estimated total cost of path through n to goal
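With the same hypothetical best_first_search sketch, A* only changes the evaluation function to add the cost so far:

# f(n) = g(n) + h(n)
path = best_first_search(start, goal_test, successors, f=lambda g, n: g + h(n))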


A* search example
Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the
goal state from n.

• An admissible heuristic never overestimates the cost


to reach the goal, i.e., it is optimistic

• Example: hSLD(n) (never overestimates the actual


road distance)

• Theorem: If h(n) is admissible, A* using TREE-


SEARCH is optimal
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is
in the fringe. Let n be an unexpanded node in the fringe such
that n is on a shortest path to an optimal goal G.

• f(G2) = g(G2) since h(G2) = 0, and g(G2) > g(G) since G2 is suboptimal,
so f(G2) > f(G) = g(G)

• h(n) ≤ h*(n) since h is admissible
• g(n) + h(n) ≤ g(n) + h*(n) = g(G), since n lies on an optimal path to G
• f(n) ≤ f(G)
Hence f(G2) > f(n), and A* will never select G2 for expansion
Consistent heuristics
• A heuristic is consistent if for every node n, every successor n'
of n generated by any action a,

h(n) ≤ c(n,a,n') + h(n')

• If h is consistent, we have
f(n') = g(n') + h(n')
= g(n) + c(n,a,n') + h(n')
≥ g(n) + h(n)
= f(n)

• i.e., f(n) is non-decreasing along any path.

• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is


optimal
Properties of A*

• Complete? Yes (unless there are infinitely many


nodes with f ≤ f(G) )

• Time? Exponential

• Space? Keeps all nodes in memory

• Optimal? Yes

Admissible heuristics

E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles


• h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)

• h1(S) = ?
• h2(S) = ?

Admissible heuristics

E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles


• h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)

• h1(S) = 8
• h2(S) = 3+1+2+2+2+3+3+2 = 18
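The puzzle figure for S is not reproduced in the notes. As an illustration, both heuristics can be computed for a state given as a 9-tuple (0 for the blank); the start and goal below are the textbook's example (assumed to be the one on the slide) and reproduce h1 = 8 and h2 = 18:

def misplaced_tiles(state, goal):
    # h1: tiles (not the blank) that are not on their goal square
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    # h2: sum of row + column distances of each tile from its goal square
    goal_pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = i // 3, i % 3
            gr, gc = goal_pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(misplaced_tiles(start, goal), manhattan_distance(start, goal))   # 8 18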
