
1. Aim: Implementation of TSP using a heuristic approach in Java/LISP/Prolog.

Description:
Heuristic Search:
A heuristic is a technique used to solve a problem faster than classic methods, or to find
an approximate solution when classical methods cannot find an exact one in reasonable
time. Heuristics are problem-solving techniques that yield practical, quick solutions.
They are strategies derived from past experience with similar problems, using practical
methods and shortcuts to produce solutions that may or may not be optimal, but are
sufficient within a given limited timeframe.
Problem statement:
A traveller S wants to visit each of n cities exactly once and return to the starting point
with minimum total distance/mileage. A formulation of the Travelling Salesman Problem is
as follows: number the cities from 1 to n, and let city 1 be the base city of the
salesman. Also let c(i, j) be the cost of travelling from city i to city j.
Example:

Output:
Minimum weight Hamiltonian Cycle: E-A-C-B-D-E = 32
Java Code:
import java.util.*;

class Main {

    static int V = 4;

    // Exhaustive TSP: try every permutation of the non-source vertices
    // and keep the minimum-weight Hamiltonian cycle.
    static int travellingSalesmanProblem(int graph[][], int s)
    {
        // All vertices except the source s
        ArrayList<Integer> vertex = new ArrayList<Integer>();
        for (int i = 0; i < V; i++)
            if (i != s)
                vertex.add(i);

        int min_path = Integer.MAX_VALUE;

        do
        {
            // Weight of the cycle for the current permutation
            int current_pathweight = 0;
            int k = s;
            for (int i = 0; i < vertex.size(); i++)
            {
                current_pathweight += graph[k][vertex.get(i)];
                k = vertex.get(i);
            }
            current_pathweight += graph[k][s]; // return to the source
            min_path = Math.min(min_path, current_pathweight);

        } while (findNextPermutation(vertex));

        return min_path;
    }

    // Swap the elements at positions left and right
    public static ArrayList<Integer> swap(ArrayList<Integer> data, int left, int right)
    {
        int temp = data.get(left);
        data.set(left, data.get(right));
        data.set(right, temp);
        return data;
    }

    // Reverse the sublist between positions left and right (inclusive)
    public static ArrayList<Integer> reverse(ArrayList<Integer> data, int left, int right)
    {
        while (left < right)
        {
            int temp = data.get(left);
            data.set(left++, data.get(right));
            data.set(right--, temp);
        }
        return data;
    }

    // Rearrange data into its next lexicographic permutation;
    // return false when data is already the last permutation.
    public static boolean findNextPermutation(ArrayList<Integer> data)
    {
        if (data.size() <= 1)
            return false;

        // Find the rightmost element smaller than its successor
        int last = data.size() - 2;
        while (last >= 0)
        {
            if (data.get(last) < data.get(last + 1))
                break;
            last--;
        }
        if (last < 0)
            return false;

        // Find the rightmost element greater than data[last]
        int nextGreater = data.size() - 1;
        for (int i = data.size() - 1; i > last; i--) {
            if (data.get(i) > data.get(last))
            {
                nextGreater = i;
                break;
            }
        }

        data = swap(data, nextGreater, last);
        data = reverse(data, last + 1, data.size() - 1);
        return true;
    }

    public static void main(String args[]) {
        int graph[][] = {{0, 10, 15, 20},
                         {10, 0, 35, 25},
                         {15, 35, 0, 30},
                         {20, 25, 30, 0}};
        int s = 0;
        System.out.println(travellingSalesmanProblem(graph, s));
    }
}

Output:
80
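The exhaustive search above guarantees the optimum but runs in O(n!) time. Since the aim calls for a heuristic approach, here is a minimal sketch of one common heuristic, the nearest-neighbour rule, written in Python on the same 4-city graph (this sketch is an illustration, not part of the original program):

```python
# Nearest-neighbour heuristic for TSP: from the current city, always
# travel to the closest unvisited city, then return to the start.
# Runs in O(n^2) but the tour it returns may be suboptimal.

def nearest_neighbour_tsp(graph, s):
    n = len(graph)
    unvisited = set(range(n)) - {s}
    tour, total, k = [s], 0, s
    while unvisited:
        nxt = min(unvisited, key=lambda j: graph[k][j])  # greedy choice
        total += graph[k][nxt]
        unvisited.remove(nxt)
        tour.append(nxt)
        k = nxt
    total += graph[k][s]   # close the cycle back to the start
    tour.append(s)
    return tour, total

graph = [[0, 10, 15, 20],
         [10, 0, 35, 25],
         [15, 35, 0, 30],
         [20, 25, 30, 0]]
print(nearest_neighbour_tsp(graph, 0))   # ([0, 1, 3, 2, 0], 80)
```

On this small instance the greedy tour happens to match the optimum (80); in general the nearest-neighbour tour can be noticeably worse.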
2. Aim: Implementation of Simulated Annealing Algorithm using Python.
Description:
Problem Statement:
Given a cost function f: R^n -> R, find an n-tuple that minimizes the value of f. Note that
minimizing a function is algorithmically equivalent to maximizing its negation.
With a background in calculus/analysis, one is likely familiar with simple optimization of
single-variable functions. For instance, the function f(x) = x^2 + 2x can be optimized by
setting the first derivative equal to zero, obtaining the solution x = -1 and the minimum
value f(-1) = -1. This technique suffices for simple functions of few variables. However,
researchers are often interested in optimizing functions of several variables, in which
case the solution can only be obtained computationally.
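The single-variable example above can also be checked numerically. A minimal sketch using gradient descent (the step size 0.1 and starting point are arbitrary choices for this illustration, not from the text):

```python
# Numerically confirming the calculus example: f(x) = x^2 + 2x has its
# minimum where f'(x) = 2x + 2 = 0, i.e. at x = -1 with f(-1) = -1.

def f(x):
    return x * x + 2 * x

def f_prime(x):
    return 2 * x + 2

x = 5.0                      # arbitrary starting point
for _ in range(1000):
    x -= 0.1 * f_prime(x)    # step against the gradient

print(round(x, 6), round(f(x), 6))   # -1.0 -1.0
```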

One excellent example of a difficult optimization task is the chip floor planning problem.
Imagine you're working at Intel and you're tasked with designing the layout for an integrated
circuit. You have a set of modules of different shapes/sizes and a fixed area on which the
modules can be placed. There are a number of objectives you want to achieve: maximizing
the ability of wires to connect components, minimizing net area, minimizing chip cost, etc.
With these in mind, you create a cost function, taking all, say, 1000 variable configurations
and returning a single real value representing the 'cost' of the input configuration. We call
this the objective function, since the goal is to minimize its value.
A naive algorithm would be a complete space search: we try all possible configurations
until we find the minimum. This may suffice for functions of few variables, but for the
problem we have in mind such a brute-force algorithm would run in O(n!) time.

Due to the computational intractability of problems like these, and other NP-hard problems,
many optimization heuristics have been developed in an attempt to yield a good, albeit
potentially suboptimal, value. In our case, we don’t necessarily need to find a strictly optimal
value — finding a near-optimal value would satisfy our goal. One widely used technique is
simulated annealing, by which we introduce a degree of stochasticity, potentially shifting
from a better solution to a worse one, in an attempt to escape local minima and converge to a
value closer to the global optimum. 

Simulated annealing is based on metallurgical practices by which a material is heated to a
high temperature and cooled. At high temperatures, atoms may shift unpredictably, often
eliminating impurities as the material cools into a pure crystal. This is replicated via the
simulated annealing optimization algorithm, with the energy state corresponding to the
current solution.
In this algorithm, we define an initial temperature, often set to 1, and a minimum
temperature, on the order of 10^-4. The current temperature is repeatedly multiplied by
some fraction alpha until it reaches the minimum temperature. For each distinct
temperature value, we run the core optimization routine a fixed number of times. The
optimization routine consists of finding a neighbouring solution and accepting it with
probability e^((f(c) - f(n))/T), where c is the current solution, n is the neighbouring
solution, and T is the current temperature. A neighbouring solution is found by applying a
slight perturbation to the current solution. This randomness is useful for escaping the
common pitfall of optimization heuristics: getting trapped in local minima. By potentially
accepting a solution worse than the current one, with probability decreasing in the
increase in cost, the algorithm is more likely to converge near the global optimum.
Designing a neighbour function is quite tricky and must be done on a case-by-case basis,
but below are some ideas for finding neighbours in locational optimization problems.

• Move all points 0 or 1 units in a random direction
• Shift input elements randomly
• Swap random elements in the input sequence
• Permute the input sequence
• Partition the input sequence into a random number of segments and permute the segments
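As an illustration, the "swap random elements" idea above can be sketched as a small Python function (the name `swap_neighbour` is made up for this sketch):

```python
import random

# Neighbour function sketch: swap two randomly chosen elements of the
# input sequence to obtain a "nearby" candidate solution.

def swap_neighbour(seq, rng=random):
    nbr = list(seq)                        # copy; do not mutate the input
    i, j = rng.sample(range(len(nbr)), 2)  # two distinct positions
    nbr[i], nbr[j] = nbr[j], nbr[i]
    return nbr

current = [1, 2, 3, 4, 5]
print(swap_neighbour(current))   # e.g. [1, 5, 3, 4, 2]
```

The result always contains the same elements as the input, just in a slightly different order, which is exactly the "slight perturbation" the algorithm needs.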

Python Code:

import random
import math

class Solution:
    def __init__(self, CVRMSE, configuration):
        self.CVRMSE = CVRMSE          # objective value of this solution
        self.config = configuration   # the configuration itself

# Annealing schedule parameters
T = 1                  # initial temperature
Tmin = 0.0001          # stop once the temperature falls below this
alpha = 0.9            # cooling fraction applied after each temperature step
numIterations = 100    # optimization runs per temperature value

# Stub: generate a random starting solution (fixed here for demonstration)
def genRandSol():
    a = [1, 2, 3, 4, 5]
    return Solution(-1.0, a)

# Stub: perturb the current solution to obtain a neighbouring one
def neighbor(currentSol):
    return currentSol

# Stub: evaluate the cost of a configuration
def cost(inputConfiguration):
    return -1.0

# Map a flat index to [row, column] coordinates on the M x N grid
def indexToPoints(index):
    points = [index % M, index // M]
    return points

M = 5
N = 5
sourceArray = [['X' for i in range(N)] for j in range(M)]
best = Solution(float('inf'), None)   # best solution seen so far
currentSol = genRandSol()

while T > Tmin:
    for i in range(numIterations):
        # Track the best solution found so far
        if currentSol.CVRMSE < best.CVRMSE:
            best = currentSol
        # Accept a neighbour with probability e^((f(c) - f(n)) / T)
        newSol = neighbor(currentSol)
        ap = math.exp((currentSol.CVRMSE - newSol.CVRMSE) / T)
        if ap > random.uniform(0, 1):
            currentSol = newSol
    T *= alpha   # cool down

print(best.CVRMSE, "\n\n")

# Mark the best configuration's points on the grid and print it
for i in range(M):
    for j in range(N):
        sourceArray[i][j] = "X"

for obj in best.config:
    coord = indexToPoints(obj)
    sourceArray[coord[0]][coord[1]] = "-"

for i in range(M):
    row = ""
    for j in range(N):
        row += sourceArray[i][j] + " "
    print(row)

Output:

-1.0

[X, -, X, X, X]

[-, X, X, X, X]

[-, X, X, X, X]

[-, X, X, X, X]

[-, X, X, X, X]
3. Aim: Implementation of Hill Climbing to solve the 8-Puzzle Problem.

Description:
Hill Climbing:
• The hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or the best
solution to the problem. It terminates when it reaches a peak value where no neighbour
has a higher value.
• Hill climbing is a technique used for optimizing mathematical problems. One of the
widely discussed examples of the hill climbing algorithm is the Travelling Salesman
Problem, in which we need to minimize the distance travelled by the salesman.
• It is also called greedy local search, as it only looks to its good immediate neighbour
state and not beyond that.
• A node of the hill climbing algorithm has two components: state and value.
• Hill climbing is mostly used when a good heuristic is available.
• In this algorithm, we don't need to maintain and handle a search tree or graph, as it
only keeps a single current state.
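The loop described in the bullets above can be sketched abstractly in Python; `hill_climb`, `neighbours`, and `value` are placeholder names invented for this sketch, and the toy example below stands in for any real problem:

```python
# Minimal hill climbing: keep moving to the best-valued neighbour until
# no neighbour improves on the current state (a peak is reached).

def hill_climb(start, neighbours, value):
    current = start
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):
            return current          # no neighbour is better: stop here
        current = best

# Toy example: climb towards the maximum of -(x - 3)^2 over the integers,
# where each state x has neighbours x - 1 and x + 1.
print(hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))  # 3
```

Note that this greedy loop stops at the first peak it reaches, which may be only a local maximum; that limitation is exactly what simulated annealing (experiment 2) is designed to mitigate.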
State Space Tree:

8 Puzzle Problem:
A 3 by 3 board with 8 tiles (each tile has a number from 1 to 8) and a single empty space is
provided. The goal is to use the vacant space to arrange the numbers on the tiles so that
they match the final arrangement. The four neighbouring (left, right, above, and below)
tiles can be moved into the available space.
State Space Tree:
Python Code:
import copy
from heapq import heappush, heappop

n = 3   # board dimension

# Row/column offsets for the four moves of the empty tile:
# down, left, up, right
rows = [ 1, 0, -1, 0 ]
cols = [ 0, -1, 0, 1 ]

# Min-heap wrapper: always expand the lowest-cost node first
class priorityQueue:
    def __init__(self):
        self.heap = []

    def push(self, key):
        heappush(self.heap, key)

    def pop(self):
        return heappop(self.heap)

    def empty(self):
        return not self.heap

# Search-tree node: a board state plus bookkeeping
class nodes:

    def __init__(self, parent, mats, empty_tile_posi,
                costs, levels):
        self.parent = parent                    # predecessor (for path printing)
        self.mats = mats                        # current board matrix
        self.empty_tile_posi = empty_tile_posi  # [row, col] of the empty tile
        self.costs = costs                      # heuristic: misplaced tiles
        self.levels = levels                    # depth in the search tree

    def __lt__(self, nxt):
        return self.costs < nxt.costs

# Heuristic: count tiles that are not in their final position
def calculateCosts(mats, final) -> int:
    count = 0
    for i in range(n):
        for j in range(n):
            if ((mats[i][j]) and
                (mats[i][j] != final[i][j])):
                count += 1
    return count

# Build the child node obtained by sliding a tile into the empty space
def newNodes(mats, empty_tile_posi, new_empty_tile_posi,
            levels, parent, final) -> nodes:
    new_mats = copy.deepcopy(mats)
    x1 = empty_tile_posi[0]
    y1 = empty_tile_posi[1]
    x2 = new_empty_tile_posi[0]
    y2 = new_empty_tile_posi[1]
    new_mats[x1][y1], new_mats[x2][y2] = new_mats[x2][y2], new_mats[x1][y1]
    costs = calculateCosts(new_mats, final)
    new_nodes = nodes(parent, new_mats, new_empty_tile_posi,
                    costs, levels)
    return new_nodes

def printMatrix(mats):
    for i in range(n):
        for j in range(n):
            print("%d " % (mats[i][j]), end = " ")
        print()

# Is (x, y) a valid board position?
def isSafe(x, y):
    return x >= 0 and x < n and y >= 0 and y < n

# Print the sequence of boards from the root down to this node
def printPath(root):
    if root == None:
        return
    printPath(root.parent)
    printMatrix(root.mats)
    print()

def solve(initial, empty_tile_posi, final):
    pq = priorityQueue()
    costs = calculateCosts(initial, final)
    root = nodes(None, initial,
                empty_tile_posi, costs, 0)
    pq.push(root)
    # Repeatedly expand the node with the fewest misplaced tiles
    while not pq.empty():
        minimum = pq.pop()
        if minimum.costs == 0:      # goal configuration reached
            printPath(minimum)
            return
        # Try all four moves of the empty tile
        for i in range(4):
            new_tile_posi = [
                minimum.empty_tile_posi[0] + rows[i],
                minimum.empty_tile_posi[1] + cols[i], ]
            if isSafe(new_tile_posi[0], new_tile_posi[1]):
                child = newNodes(minimum.mats,
                                minimum.empty_tile_posi,
                                new_tile_posi,
                                minimum.levels + 1,
                                minimum, final)
                pq.push(child)
# Initial configuration (0 marks the empty tile)
initial = [ [ 1, 2, 3 ],
            [ 5, 6, 0 ],
            [ 7, 8, 4 ] ]

# Goal configuration
final = [ [ 1, 2, 3 ],
          [ 5, 8, 6 ],
          [ 0, 7, 4 ] ]

# Position [row, column] of the empty tile in the initial configuration
empty_tile_posi = [ 1, 2 ]

# Method call for solving the puzzle
solve(initial, empty_tile_posi, final)

Output:
123
560
784

123
506
784

123
586
704

123
586
074
