AI WriteUps
Experiment No.: 01
Aim: Give State-space formulation and PEAS representation for AI application.
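As an illustration, here is a sketch of a PEAS description and a state-space formulation in Python. The choice of application (the classic two-room vacuum-cleaner world) and all names in it are assumptions for illustration only.

```python
# PEAS description and state-space formulation for the two-room
# vacuum-cleaner world (an assumed example application).

from itertools import product

# PEAS representation
peas = {
    "Performance": "amount of dirt cleaned, time taken, power consumed",
    "Environment": "two rooms A and B, each either Dirty or Clean",
    "Actuators":   "wheels (move Left/Right), suction unit",
    "Sensors":     "location sensor, dirt sensor",
}

# State-space formulation: a state is (agent location, status of A, status of B).
LOCATIONS = ("A", "B")
STATUS = ("Dirty", "Clean")

states = list(product(LOCATIONS, STATUS, STATUS))            # 8 states in total
initial_state = ("A", "Dirty", "Dirty")
goal_states = [s for s in states if s[1:] == ("Clean", "Clean")]

def successors(state):
    """Return (action, next state) pairs for the actions Left, Right, Suck."""
    loc, a, b = state
    result = [("Left", ("A", a, b)), ("Right", ("B", a, b))]
    result.append(("Suck", ("A", "Clean", b)) if loc == "A"
                  else ("Suck", ("B", a, "Clean")))
    return result

print(successors(initial_state))
```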
Experiment No.: 02
Aim: Implement the following uninformed search algorithms to reach the goal state. (BFS, DFS)
2. Explain the difference between Depth First Search and Breadth First Search
Definition: BFS stands for Breadth First Search. DFS stands for Depth First Search.
Data structure: BFS uses a queue and, in an unweighted graph, finds the shortest path. DFS uses a stack and may return a longer path.
Source: BFS is better when the target is closer to the source. DFS is better when the target is far from the source.
Suitability for decision trees: BFS considers all neighbours first, so it is not well suited to the decision trees used in puzzle games. DFS is more suitable for decision trees: after making one decision, we traverse deeper to follow it up, and if we reach a winning conclusion, we are done.
Time complexity: The time complexity of BFS is O(V + E), where V is the number of vertices and E is the number of edges. The time complexity of DFS is also O(V + E).
Memory: BFS requires more memory. DFS requires less memory.
Trapping in loops: In BFS there is no problem of being trapped in infinite loops. In DFS we may be trapped in infinite loops.
Principle: BFS is implemented using the FIFO (First In First Out) principle. DFS is implemented using the LIFO (Last In First Out) principle.
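The queue-versus-stack difference is easiest to see in code. Below is a minimal Python sketch of both searches over a small adjacency-list graph; the graph and the start/goal nodes are illustrative assumptions.

```python
from collections import deque

# An assumed example graph as an adjacency list.
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [],
    "E": ["F"],
    "F": [],
}

def bfs(start, goal):
    """Breadth First Search: FIFO queue, explores level by level."""
    frontier = deque([[start]])              # queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()            # FIFO: shallowest path first
        node = path[-1]
        if node == goal:
            return path                      # shortest path (unweighted graph)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

def dfs(start, goal):
    """Depth First Search: LIFO stack, follows one branch as deep as possible."""
    frontier = [[start]]                     # stack of paths
    visited = set()
    while frontier:
        path = frontier.pop()                # LIFO: deepest path first
        node = path[-1]
        if node == goal:
            return path                      # a path, not necessarily shortest
        if node not in visited:
            visited.add(node)
            for nbr in graph[node]:
                frontier.append(path + [nbr])
    return None

print("BFS:", bfs("A", "F"))                 # BFS: ['A', 'C', 'F']
print("DFS:", dfs("A", "F"))
```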
Experiment No.: 03
Aim: Implement the following informed search algorithms to reach the goal state.
(A* search)
A*:
A* avoids expanding paths that are already expensive. It evaluates a node n by f(n) = g(n) + h(n), where g(n) is the cost of the path from the start node to n and h(n) is a heuristic estimate of the cheapest cost from n to the goal.
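A minimal Python sketch of A* on a small weighted graph follows. The graph, the edge costs and the heuristic table h are illustrative assumptions (h is chosen to be admissible, i.e. it never overestimates the remaining cost).

```python
import heapq

# Assumed example graph: node -> list of (neighbour, edge cost).
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("C", 5), ("G", 12)],
    "B": [("C", 2)],
    "C": [("G", 3)],
    "G": [],
}
h = {"S": 7, "A": 6, "B": 4, "C": 2, "G": 0}   # assumed admissible heuristic

def a_star(start, goal):
    """Always expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):   # cheaper path to nbr found
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("S", "G"))   # (['S', 'A', 'B', 'C', 'G'], 8)
```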
Experiment No.: 04
Aim: Implement max-min with α-β pruning for the following game. (Tic-Tac-Toe)
1. Apply alpha-beta pruning on the following example by considering the root
node as max.
Steps 1-4 (tree diagrams): the bounds α = 3, β = 3, α = 5, β = 1 and α = 1 are recorded at the nodes as the tree is searched left to right; whenever α ≥ β holds at a node, its remaining children are pruned.
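A minimal Python sketch of minimax with α-β pruning for Tic-Tac-Toe follows (this is the fail-hard formulation; the board encoding and the sample position are assumptions). X is the MAX player and O is the MIN player.

```python
import math

# Board: a list of 9 cells, each 'X', 'O' or ' '.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
             (0, 4, 8), (2, 4, 6)]                # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def alphabeta(board, player, alpha, beta):
    """Minimax value of `board` with `player` to move, pruning when alpha >= beta."""
    w = winner(board)
    if w == "X":
        return 1            # MAX wins
    if w == "O":
        return -1           # MIN wins
    if " " not in board:
        return 0            # draw
    for i in range(9):
        if board[i] != " ":
            continue
        board[i] = player
        val = alphabeta(board, "O" if player == "X" else "X", alpha, beta)
        board[i] = " "      # undo the move
        if player == "X":
            alpha = max(alpha, val)    # MAX raises alpha
        else:
            beta = min(beta, val)      # MIN lowers beta
        if alpha >= beta:
            break           # prune: remaining moves cannot change the result
    return alpha if player == "X" else beta

board = list("X O  O  X")   # an assumed mid-game position, X to move
print(alphabeta(board, "X", -math.inf, math.inf))
```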
Experiment No.: 05
Aim: Implement first-order logic (resolution) for a particular problem in Prolog.
(Example problems: family tree, LCM and GCD)
Given sentences 1, 2 and 3, prove 4:
1. Anyone who can read is literate.
2. Birds are not literate.
3. Some birds are intelligent.
4. (Goal) Some intelligent beings cannot read.
The predicates used are:
• B(x): x is a bird.
• L(x): x is literate.
• I(x): x is intelligent.
• R(x): x can read.
The above sentences can be written in first-order logic as:
1. ∀x R(x) ⇒ L(x)
2. ∀x B(x) ⇒ ¬L(x)
3. ∃x B(x) ∧ I(x)
4. ∃x I(x) ∧ ¬R(x)
Converting every sentence to clause form, we get:
1. ¬R(x) ∨ L(x)
2. ¬B(x) ∨ ¬L(x)
3. Skolemizing the existential variable x with a new constant b1:
a. B(b1)
b. I(b1)
4. ¬I(x) ∨ R(x) (the goal is negated for proof by refutation)
We can prove sentence 4 by refutation as follows:
1. Resolve ¬I(x) ∨ R(x) with I(b1) under {x/b1}, giving R(b1).
2. Resolve R(b1) with ¬R(x) ∨ L(x) under {x/b1}, giving L(b1).
3. Resolve L(b1) with ¬B(x) ∨ ¬L(x) under {x/b1}, giving ¬B(b1).
4. Resolve ¬B(b1) with B(b1), giving the empty clause.
Deriving the empty clause is a contradiction, so the negated goal is unsatisfiable and sentence 4 is proved.
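After the substitution x/b1 every literal in the proof is ground, so the refutation can be checked mechanically. Below is a minimal Python sketch of propositional resolution over these ground clauses; the encoding is illustrative (the experiment itself targets Prolog).

```python
from itertools import combinations

# Each clause is a frozenset of signed atoms:
# ("R", True) stands for R(b1), ("R", False) for ¬R(b1).
clauses = {
    frozenset({("R", False), ("L", True)}),    # 1. ¬R(b1) ∨ L(b1)
    frozenset({("B", False), ("L", False)}),   # 2. ¬B(b1) ∨ ¬L(b1)
    frozenset({("B", True)}),                  # 3a. B(b1)
    frozenset({("I", True)}),                  # 3b. I(b1)
    frozenset({("I", False), ("R", True)}),    # 4. negated goal: ¬I(b1) ∨ R(b1)
}

def resolve(c1, c2):
    """All resolvents of two clauses (cancel one complementary literal pair)."""
    out = []
    for atom, sign in c1:
        if (atom, not sign) in c2:
            out.append((c1 - {(atom, sign)}) | (c2 - {(atom, not sign)}))
    return out

def refutes(clauses):
    """Saturate by resolution; True if the empty clause is derived."""
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True        # empty clause: contradiction found
                new.add(frozenset(r))
        if new <= clauses:
            return False               # saturated with no contradiction
        clauses = clauses | new

print("Sentence 4 proved:", refutes(clauses))   # Sentence 4 proved: True
```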
Experiment No.: 06
Aim: Implement any one of the following planning problems in Prolog.
1. Give the partial order plan for the following Blocks World Problem
The list of predicates that can be used to define the states is:
On(A, B): block A is on block B
Ontable(A, P): block A is on the table at position P
Clear(A): nothing is on top of A
Holding(A): the arm is holding A
Armempty: the arm is holding nothing
Free(P): position P on the table is free
Initial State:
On(B, A) ∧ On(C, B) ∧ Ontable(A, P) ∧ Clear(C) ∧ Armempty ∧ Free(Q) ∧ Free(R)
Goal State:
On(C, A) ∧ Ontable(A, P) ∧ Ontable(B, Q) ∧ Clear(B) ∧ Clear(C) ∧ Armempty ∧ Free(R)
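A minimal STRIPS-style Python sketch of one plan achieving this goal follows. The operator definitions and this particular linearization are assumptions; in the partial-order plan, only a few ordering constraints (unstack C before unstacking B, stack C onto A last) are essential, and the steps below are one consistent linearization.

```python
# STRIPS-style simulation of one linearization of the plan.
# Operator definitions (preconditions / add list / delete list) are
# assumptions consistent with the predicates defined above.

def apply(state, pre, add, delete):
    assert pre <= state, f"unsatisfied preconditions: {pre - state}"
    return (state - delete) | add

# Reconstructed initial state (three table positions P, Q, R).
state = {"On(B,A)", "On(C,B)", "Ontable(A,P)", "Clear(C)",
         "Armempty", "Free(Q)", "Free(R)"}

plan = [  # (name, preconditions, add list, delete list)
    ("Unstack(C,B)", {"On(C,B)", "Clear(C)", "Armempty"},
     {"Holding(C)", "Clear(B)"}, {"On(C,B)", "Armempty"}),
    ("Putdown(C,R)", {"Holding(C)", "Free(R)"},
     {"Ontable(C,R)", "Armempty"}, {"Holding(C)", "Free(R)"}),
    ("Unstack(B,A)", {"On(B,A)", "Clear(B)", "Armempty"},
     {"Holding(B)", "Clear(A)"}, {"On(B,A)", "Armempty"}),
    ("Putdown(B,Q)", {"Holding(B)", "Free(Q)"},
     {"Ontable(B,Q)", "Armempty"}, {"Holding(B)", "Free(Q)"}),
    ("Pickup(C,R)", {"Ontable(C,R)", "Clear(C)", "Armempty"},
     {"Holding(C)", "Free(R)"}, {"Ontable(C,R)", "Armempty"}),
    ("Stack(C,A)", {"Holding(C)", "Clear(A)"},
     {"On(C,A)", "Armempty"}, {"Holding(C)", "Clear(A)"}),
]

for name, pre, add, delete in plan:
    state = apply(state, pre, add, delete)

goal = {"On(C,A)", "Ontable(A,P)", "Ontable(B,Q)", "Clear(B)",
        "Clear(C)", "Armempty", "Free(R)"}
print("Goal achieved:", goal <= state)   # Goal achieved: True
```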
Experiment No.: 07
Aim: Design a Bayesian Belief Network and perform inference for a particular example.
1. The gauge reading at a nuclear power station shows a high value if the
temperature of the core goes very high. The gauge also shows a high value if
the gauge is faulty. A high reading on the gauge sets an alarm off. The alarm
can also go off if it is faulty. The probability of faulty instruments is low
in a nuclear power plant.
i) Draw the Bayesian Belief Network for the above situation
ii) Associate a conditional probability table for each node.
i. The Bayesian Belief Network for the given situation is:
CoreTemp → Gauge ← GaugeFaulty
Gauge → Alarm ← AlarmFaulty
ii. The conditional probability table (CPT) for each node is given below.
Node: CoreTemp
CPT:
Temperature    P(Temperature)
Low            0.95
High           0.05
Node: Gauge
CPT:
Node: AlarmFaulty
CPT:
AlarmFaulty    P(AlarmFaulty)
True           0.001
False          0.999
Node: Alarm
CPT:
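A minimal Python sketch of inference by enumeration over this network follows. P(CoreTemp = High) = 0.05 and P(AlarmFaulty = True) = 0.001 come from the tables above; every other number (including P(GaugeFaulty) and the Gauge and Alarm conditional entries) is an assumption purely for illustration, since those tables are not filled in here.

```python
from itertools import product

# Priors. P_TEMP_HIGH and P_AFAULTY are from the tables above;
# P_GFAULTY is assumed ("the probability of faulty instruments is low").
P_TEMP_HIGH = 0.05
P_GFAULTY = 0.001
P_AFAULTY = 0.001

def p_gauge_high(temp_high, gfaulty):
    # Assumed CPT: a faulty gauge tends to read high regardless of the core.
    if gfaulty:
        return 0.8
    return 0.95 if temp_high else 0.05

def p_alarm_on(gauge_high, afaulty):
    # Assumed CPT: a faulty alarm can go off on its own.
    if afaulty:
        return 0.7
    return 0.9 if gauge_high else 0.01

def joint(t, gf, g, af, a):
    """P(CoreTemp=t, GaugeFaulty=gf, Gauge=g, AlarmFaulty=af, Alarm=a)."""
    p = P_TEMP_HIGH if t else 1 - P_TEMP_HIGH
    p *= P_GFAULTY if gf else 1 - P_GFAULTY
    p *= p_gauge_high(t, gf) if g else 1 - p_gauge_high(t, gf)
    p *= P_AFAULTY if af else 1 - P_AFAULTY
    p *= p_alarm_on(g, af) if a else 1 - p_alarm_on(g, af)
    return p

# Example query by enumeration: P(CoreTemp = High | Alarm = on).
num = sum(joint(True, gf, g, af, True)
          for gf, g, af in product([False, True], repeat=3))
den = sum(joint(t, gf, g, af, True)
          for t, gf, g, af in product([False, True], repeat=4))
print("P(CoreTemp=High | Alarm=on) =", round(num / den, 4))
```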
Experiment No.: 08
Aim: Implement the following unsupervised learning algorithm (Self-Organizing Maps)
1. Consider input data of size (m, n), where m is the number of training examples and n is the number of features in each example. First, the algorithm initializes weights of size (n, C), where C is the number of clusters. Then, iterating over the input data, for each training example it updates the winning vector (the weight vector with the shortest distance, e.g. Euclidean distance, from that training example). The weight update rule is given by:
w_ij = w_ij(old) + α(t) * (x_ik − w_ij(old))
where α(t) is the learning rate at time t, j denotes the winning vector, i denotes the i-th feature of the training example, and k denotes the k-th training example in the input data. After training the SOM network, the learned weights are used to cluster new examples: a new example falls into the cluster of its winning vector.
Algorithm
Training:
Step 1: Initialize the weights w_ij to small random values. Initialize the learning rate α.
Step 2: For each cluster unit j, calculate the squared Euclidean distance:
D(j) = Σ (w_ij − x_i)^2, where i = 1 to n and j = 1 to C
Step 3: Find the index J for which D(j) is minimum; J is the winning unit.
Step 4: For each unit j within a specified neighborhood of J, and for all i, calculate the new weight:
w_ij(new) = w_ij(old) + α[x_i − w_ij(old)]
Step 5: Update the learning rate:
α(t + 1) = 0.5 * α(t)
Step 6: Test the stopping condition.
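A minimal NumPy sketch of these training steps follows, using a winner-take-all (zero-radius) neighborhood in Step 4; the data, the number of clusters C and the stopping threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((6, 2))        # input data of size (m, n): 6 examples, 2 features
C = 2                         # number of clusters

# Step 1: random initial weights of size (n, C) and an initial learning rate.
W = rng.random((2, C))
alpha = 0.5

for t in range(20):
    for x in X:
        # Step 2: squared Euclidean distance D(j) for every cluster unit j.
        D = ((W - x[:, None]) ** 2).sum(axis=0)
        # Step 3: winning index J, where D(j) is minimum.
        J = int(np.argmin(D))
        # Step 4: w_iJ(new) = w_iJ(old) + alpha * (x_i - w_iJ(old)).
        W[:, J] += alpha * (x - W[:, J])
    # Step 5: decay the learning rate, alpha(t+1) = 0.5 * alpha(t).
    alpha *= 0.5
    # Step 6: stopping condition -- the rate has become negligible.
    if alpha < 1e-3:
        break

# After training, a new example falls into the cluster of its winning vector.
x_new = np.array([0.2, 0.7])
print("cluster:", int(np.argmin(((W - x_new[:, None]) ** 2).sum(axis=0))))
```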