WB - Algorithms - 2017-18 (1) 1056HRS
(AUTONOMOUS)
Cheeryal (V), Keesara (M), Medchal District – 501 301 (T.S)
ALGORITHMS
LABORATORY
(16CS22L3)
2018-2019
CERTIFICATE
This is to certify that Mr./ Miss ___________________________________
Faculty In charge                                        Head, Dept. of CSE
ALGORITHMS LAB
1 Sort a given set of elements using the Quick sort method and determine the time
required to sort the elements. Repeat the experiment for different values of n, the
number of elements in the list to be sorted and plot a graph of the time taken versus n.
The elements can be read from a file or can be generated using the random number
generator.
2 Using OpenMP, implement a parallelized Merge Sort algorithm to sort a given set of
elements and determine the time required to sort the elements. Repeat the experiment
for different values of n, the number of elements in the list to be sorted and plot a
graph of the time taken versus n. The elements can be read from a file or can be
generated using the random number generator.
3 Implement Binary tree traversal techniques using recursion and without recursion.
Identify the best method. Justify your answer.
4 a. Print all the nodes reachable from a given starting node in a digraph using BFS
method.
b. Check whether a given graph is connected or not using DFS method.
5 Write and implement an algorithm determining articulation points and the biconnected
components in the given graph
6 Implement an algorithm to find the minimum cost spanning tree using
a) Prim's algorithm
b) Kruskal's algorithm
7 From a given vertex in a weighted connected graph, find shortest paths to other
vertices using Dijkstra’s algorithm.
8 Implement Job Sequencing with Deadlines algorithm and Fast Job Sequencing with
Deadlines
9 Implement Matrix Chain multiplication algorithm. Parallelize this algorithm,
implement it using OpenMP and determine the speed-up achieved.
11 Implement an algorithm to find the optimal binary search tree for the given list of
identifiers.
15 Implement the solution for TSP problem using Branch & Bound technique
Additional Programs
PO’S:1,3,4,12
PSO’S:1
To impart adequate fundamental knowledge in all basic science and engineering, technical and
inter-personal skills to students.
To bring out creativity in students that would promote innovation, research and
entrepreneurship
To preserve and promote cultural heritage, humanistic and spiritual values, promoting peace
and harmony in society
PSO 1: To identify and define the computing requirements of a problem and its solution under
given constraints.
PSO 2: To follow the best practices namely SEI-CMM levels and six sigma which varies from
time to time for software development project using open ended programming environment to
produce software deliverables as per customer needs.
Course Objectives:
Develop ability to
1. Realize the asymptotic performance of algorithms.
2. Understand the behavior of Greedy strategy, Divide and Conquer approach, Dynamic
Programming and branch and bound theory for several problem solving techniques.
3. Understand how the choice of data structures and algorithm design methods impact the
performance of programs.
4. Distinguish deterministic and non-deterministic algorithms and their computational complexities.
Course Outcomes:
16CS22L3.1 Analyze algorithms and estimate their best-case, worst-case and average-case behavior in
terms of time and space and execute the same through programming.
16CS22L3.2 Identify suitable problem solving technique for a given problem and design algorithms using
greedy strategy, divide and conquer approach, dynamic programming, and branch and bound theory
accordingly and execute the same through programming.
16CS22L3.3 Implement algorithm design method into appropriate data structures using programming.
16CS22L3.4 Design deterministic and non-deterministic algorithms for tractable and intractable
problems and categorize them as P Class/ NP Class/ NP-Hard/ NP-complete problems accordingly.
Subject
Course Code PEO’S PO’S & PSO’S
16CS22L3 PEO1, PO1,PO2,PO3,PO4,PO6,PO7,
Algorithms
PEO2, PO9,PO10,PO11,PO12,PSO1,
Laboratory
PEO3 PSO2
5. Observation book and lab records submitted for the lab work are to be checked and signed
before the next lab session.
6. Students should be instructed to switch ON the power supply after the connections are
checked by the lab assistant / teacher.
7. Promptness of submission should be strictly enforced by awarding marks accordingly.
8. Ask viva questions at the end of the experiment.
9. Do not allow students who come late to the lab class.
10. Encourage the students to do the experiments innovatively.
11. Fill continuous Evaluation sheet, on regular basis.
12. Ensure that the students are dressed in formals
Sort a given set of elements using the Quick sort method and determine the time required to
sort the elements. Repeat the experiment for different values of n, the number of elements in
the list to be sorted and plot a graph of the time taken versus n. The elements can be read
from a file or can be generated using the random number generator.
Objective
The student will be able to understand how much time the quick sort algorithm requires to sort
the elements for different values of n.
Outcome
Student gains the ability to understand how an algorithm works by using divide and conquer
approach, partition function and analyse its running time under different data conditions
Method:
Quick Sort divides the array according to the value of elements. It rearranges elements of a given
array A[0..n-1] to achieve its partition, where the elements before position s are smaller than or
equal to A[s] and all the elements after position s are greater than or equal to A[s].
A[0] … A[s-1]   |   A[s]   |   A[s+1] … A[n-1]
(all ≤ A[s])                  (all ≥ A[s])
Algorithm Partition(A, l, r)
// Partitions A[l..r] about the pivot v = A[l] and returns the pivot's final position.
{
v := A[l]; i := l; j := r+1;
repeat
{
repeat i := i+1; until A[i] >= v;
repeat j := j-1; until A[j] <= v;
if (i < j) then swap(A[i], A[j]);
} until i >= j;
swap(A[l], A[j]);
return j;
}
Program
Viva Questions
1. Explain partitioning in Quick sort.
3. Explain the best case, worst case and average case time complexity of Quick sort.
5. What is the output of quick sort after the 3rd iteration given the following sequence of
numbers: 65 70 75 80 85 60 55 50 45
Using OpenMP, implement a parallelized Merge Sort algorithm to sort a given set of
elements and determine the time required to sort the elements. Repeat the experiment for
different values of n, the number of elements in the list to be sorted and plot a graph of the
time taken versus n. The elements can be read from a file or can be generated using the
random number generator.
Objective
The student will be able to understand OpenMP (parallel programming) applied to merge sort and
how much time the merge sort algorithm requires to sort the elements for different values of n.
Outcome
Student gains the ability to understand how an algorithm works by using divide and conquer
approach, recursion and analyse its running time under different data conditions
Complexity:
All cases have same efficiency: Θ (n log n)
Number of comparisons is close to the theoretical minimum for comparison-based sorting:
⌈log2 n!⌉ ≈ n lg n - 1.44n
Space requirement: Θ (n) (NOT in-place)
Program
Viva Questions
1. Write and solve the recurrence relation giving the time complexity of merge sort
4. Given two sorted lists of size m, n. What is the number of comparisons needed in the worst
case by the merge sort algorithm?
5. Given the initial sequence: 3, 41, 52, 26, 38, 57, 9, and 49 .Write the sequences to be merged at
the last step in the merge sort.
Algorithms
Recursive Algorithms for In Order, Pre Order, and Post Order Binary tree traversals
Algorithm InOrder(t)
// t is a binary tree. Each node of t has three fields: lchild, data, and rchild.
{
if t≠ 0 then
{
InOrder(t→lchild);
Visit(t);
InOrder(t→rchild);
}
}
Algorithm PreOrder(t)
// t is a binary tree. Each node of t has three fields: lchild, data, and rchild.
{
if t≠ 0 then
{
Visit(t);
PreOrder(t→lchild);
PreOrder(t→rchild);
}
}
Algorithm PostOrder(t)
// t is a binary tree. Each node of t has three fields: lchild, data, and rchild.
{
if t≠ 0 then
{
PostOrder(t→lchild);
PostOrder(t→rchild);
Visit(t);
}
}
In Order:
1) Create an empty stack S.
2) Initialize current node as root
3) Push the current node to S and set current = current->left until current is NULL
4) If current is NULL and stack is not empty then
a) Pop the top item from stack.
b) Print the popped item, set current = popped_item->right
c) Go to step 3.
5) If current is NULL and stack is empty then we are done.
Pre Order:
1) Create an empty stack S and push root node to stack.
2) Do following while S is not empty.
a) Pop an item from stack and print it.
b) Push right child of popped item to stack
c) Push left child of popped item to stack
Post Order:
1) Create two empty stacks S1 and S2, and push root node to S1.
2) Do following while S1 is not empty.
a) Pop an item from S1 and push it to S2.
b) Push left child of popped item to S1.
c) Push right child of popped item to S1.
3) Pop items from S2 one by one and print them.
3. Give a complete binary tree with 12 elements and write its inorder, preorder, and postorder
traversals.
4. A binary search tree contains the numbers 1, 2, 3, 4, 5, 6, 7, 8. When the tree is traversed in
pre-order and the values in each node printed out, the sequence of values obtained is 5, 3, 1, 2, 4,
6, 8, and 7. Write its post order traversal.
5. The array representation of a complete binary tree contains the data in sorted order. Which
traversal of the tree will produce the data in sorted form?
Objective
The student will be able to understand BFS graph search algorithm that can be used for a variety
of different purposes.
Outcome
Student gains the ability to understand how a BFS algorithm works to identify all nodes that are
reachable from a given starting node.
Algorithm: BFS(v)
// A breadth-first search of G is carried out beginning at vertex v. For any node i, visited[i] =1 if i has
// already been visited. The graph G and array visited [] are global; visited [] is initialized to zero
{
u := v; // q is a queue of unexplored vertices.
visited[v] := 1;
repeat
{
for all vertices w adjacent from u do
{
if(visited[w] = 0) then
{
Add w to q; //w is unexplored.
visited[w] := 1;
}
}
if q is empty then return; // No unexplored vertex.
Delete the next element u from q; //Get first unexplored vertex.
} until (false);
}
Complexity:
BFS has the same efficiency as DFS:
it is Θ(n²) for the adjacency matrix representation and Θ(n+e) for the adjacency list representation.
Algorithm DFS(v)
// Given a graph G = (V, E) with n vertices and an array visited[] initialized to zero, this
// algorithm visits all vertices reachable from v. G and visited[] are global.
{
visited[v] := 1;
for each vertex w adjacent from v do
{
if (visited[w] = 0) then DFS(w);
}
}
Program:
2. Explain the various ways of representing a graph and give representations of the following
graph.
Write and implement an algorithm determining articulation points and the biconnected
components in the given graph
Algorithm
Objective
The student will be able to understand the concept of articulation point and how sub graphs are
formed after removing articulation point
Outcome
Student gains the ability to understand how an algorithm works to find articulation points and
biconnected components in the given graph
Algorithm BiComp(u, v)
// u is a start vertex for depth first search; v is its parent (if any) in the depth first
// spanning tree. The global array dfn[] is assumed initialized to zero and the global
// variable num to 1. n is the number of vertices in G.
{
dfn[u] := num; L[u] := num; num := num+1;
for each vertex w adjacent from u do
{
if (( v≠ w) and ( dfn[w] < dfn[u])) then
add (u, w) to the top of a stack s;
if (dfn[w] = 0) then
{
BiComp(w, u);
L[u] := min(L[u], L[w]);
if ( L[w] ≥ dfn [u] ) then
{
write (“New Bicomponent”);
repeat
{
Delete an edge from the top of stack s;
Let this edge be (x,y);
write (x,y);
} until (((x, y) = (u, w)) or (x, y) = (w, u)));
}
else if ( w ≠ v) then L[u] := min(L[u], dfn[w]);
}
}
Program:
Viva Questions:
3. Identify the articulation points and associated biconnected components in the following graph.
Prim’s algorithm finds the minimum spanning tree for a weighted connected graph G= (V, E) to
get an acyclic sub graph with |V|-1 edges for which the sum of edge weights is the smallest.
Consequently the algorithm constructs the minimum spanning tree as expanding sub-trees. The
initial sub tree in such a sequence consists of a single vertex selected arbitrarily from the set V of
the graph’s vertices. On each iteration, we expand the current tree in the greedy manner by simply
attaching to it the nearest vertex not in that tree. The algorithm stops after all the graph’s vertices
have been included in the tree being constructed.
Objective
The student will be able to understand the concept of Prim’s algorithm in order to find the
minimum cost spanning tree
Outcome
Student gains the ability to understand how Prim's algorithm works to find a minimum cost
spanning tree
Program:
b. Kruskal's algorithm:
Kruskal’s algorithm finds the minimum spanning tree for a weighted connected graph G= (V, E)
to get an acyclic sub graph with |V|-1 edges for which the sum of edge weights is the smallest.
Consequently the algorithm constructs the minimum spanning tree as an expanding sequence of
sub graphs, which are always acyclic but are not necessarily connected on the intermediate stages
of algorithm. The algorithm begins by sorting the graph’s edges in non decreasing order of their
weights. Then starting with the empty sub graph, it scans the sorted list adding the next edge on
the list to the current sub graph if such an inclusion does not create a cycle and simply skipping
the edge otherwise.
Objective
The student will be able to understand the concept of Kruskal’s algorithm in order to find the
minimum cost spanning tree
Outcome
Student gains the ability to understand how Kruskal's algorithm works to find a minimum cost
spanning tree
Algorithm kruskal(E, cost, n, t)
//E is the set of edges in G; G has 'n' vertices.
//cost[u,v] is the cost of edge (u,v). t is the set of edges in the minimum cost spanning tree.
// The final cost is returned.
{
Construct a heap out of the edge costs;
for i := 1 to n do parent[i] := -1; // Each vertex is in a different set.
i := 0; mincost := 0.0;
while ((i < n-1) and (heap not empty)) do
{
Delete a minimum cost edge (u, v) from the heap and reheapify;
j := Find(u); k := Find(v);
if (j ≠ k) then
{
i := i+1;
t[i, 1] := u; t[i, 2] := v;
mincost := mincost + cost[u, v];
Union(j, k);
}
}
if (i ≠ n-1) then write ("No spanning tree");
else return mincost;
}
Complexity: With an efficient sorting algorithm, the time efficiency of kruskal’s algorithm will
be in O(|E| log |E|).
Program:
Viva Questions:
1. What are the applications of spanning tree?
4. Give the MST for the following graph using Prim's and Kruskal's algorithms.
a. From a given vertex in a weighted connected graph, find shortest paths to other vertices
using Dijkstra’s algorithm.
Single Source Shortest Paths Problem:
For a given vertex called the source in a weighted connected graph, find the shortest paths to all
its other vertices. Dijkstra’s algorithm is the best known algorithm for the single source shortest
paths problem. This algorithm is applicable to graphs with nonnegative weights only and finds the
shortest paths to a graph’s vertices in order of their distance from a given source. It finds the
shortest path from the source to a vertex nearest to it, then to a second nearest, and so on. It is
applicable to both undirected and directed graphs
Objective
The student will be able to understand the concept of Dijkstra’s algorithm for finding the shortest
paths between nodes in a graph
Outcome
Student gains the ability to understand how Dijkstra's algorithm works for finding the shortest
paths between nodes in a graph.
Objective
The student will be able to understand the concept of Job Sequencing with Deadlines algorithm
for finding a sequence of jobs, which is completed within their deadlines and gives maximum
profit.
Outcome
Student gains the ability to understand how a Job Sequencing with Deadlines algorithm works for
finding a sequence of jobs, which is completed within their deadlines and gives maximum profit.
Algorithm:
Algorithm JS(d, J, n)
//d[i] ≥ 1, 1 ≤ i ≤ n, are the deadlines, n ≥ 1. The jobs are ordered such that p[1] ≥ p[2] ≥ … ≥ p[n].
//J[i] is the ith job in the optimal solution, 1 ≤ i ≤ k. At termination d[J[i]] ≤ d[J[i+1]], 1 ≤ i < k.
{
d[0] := J[0] := 0; // Initialize.
J[1] := 1; k := 1; // Include job 1.
for i := 2 to n do
{ // Consider jobs in nonincreasing order of p[i]; find a position for i and check feasibility.
r := k;
while ((d[J[r]] > d[i]) and (d[J[r]] ≠ r)) do r := r-1;
if ((d[J[r]] ≤ d[i]) and (d[i] > r)) then
{ // Insert i into J[].
for q := k to (r+1) step -1 do J[q+1] := J[q];
J[r+1] := i; k := k+1;
}
}
return k;
}
Program:
Viva Questions:
3. Write all feasible solutions for following instance of JSD: n=4 p[1:4]=(100,10,15,27) and
d[1:4]=(2,1,2,1). Select the optimal solution.
4. Using greedy method write the Optimal Solution for following instance of JSD: n=5,
p[1:5]=(20, 15,10,5,11) and d[1:5]=(2,2,1,3,3).
Algorithm Matrix_chain(p,n)
{
for i := 1 to n do
m[i, i] :=0;
for l := 2 to n do
for i := 1 to n-l+1 do
{
j := i + l -1;
m[i, j] := ∞;
for k := i to j-1 do
{
q := m[i, k] + m[k + 1, j] + p[i -1] * p[k] *p[j];
if q < m[i, j] then
{
m[i, j] := q;
S[i, j] := k;
}
}
}
return m and S;
}
Program:
Viva Questions:
2. Write the various ways of performing the following chain of matrix multiplication: ABCD.
4. State the equation holding the principle of optimality for a chain of matrix multiplication.
5. Given a chain of four matrices A1, A2, A3, and A4, with p0 = 5, p1 = 4, p2 = 6, p3 = 2, p4 =7. Find
m[1, 4].
Algorithm:
PW= record {float p; float w }
Algorithm DKnap(p, w, x, n, m)
{
// pair [] is an array of PW’s
b[0] :=1; pair[1].p :=pair[1].w := 0.0; //S0
t :=1; h :=1; //start and end of S0
b[1] :=next :=2; //Next free spot in pair[]
for i := 1 to n-1 do
{ // Generate S^i.
k :=t;
u := Largest(pair, w, t, h, i, m);
for j :=t to u do
{ // Generate S1^(i-1) and merge.
pp := pair[j].p + p[i];
ww := pair[j].w + w[i]; // (pp, ww) is the next element in S1^(i-1).
while ((k ≤ h) and (pair[k].w ≤ ww )) do
{
pair[next].p :=pair[k].p;
pair[next].w :=pair[k].w;
next :=next + 1; k :=k+1;
}
if ((k ≤ h) and (pair[k].w = ww )) then
{
if pp < pair[k].p then pp :=pair[k].p;
k := k +1;
}
if pp > pair[next – 1].p then
{
pair[next].p := pp; pair[next].w := ww;
next :=next + 1;
Viva Questions:
1. Differentiate fractional Knapsack and 0/1 Knapsack.
3. Write and explain the equation for 0/1 Knapsack holding principle of optimality.
5. What is the time complexity of 0/1 Knapsack using dynamic programming? Is any
improvement possible? If so, explain.
Objective
The student will be able to understand the concept of dynamic programming and the objective is
to fill the knapsack with items such that we have a maximum profit without crossing the weight
limit of the knapsack
Outcome
Student gains the ability to understand how a dynamic programming technique evolved in 0/1
knapsack problem algorithm to fill the knapsack with items such that we have a maximum profit
without crossing the weight limit of the knapsack
Algorithm Find(c, r, i, j)
// Returns the root index l, i < l ≤ j, that minimizes c[i, l-1] + c[l, j].
{
min := ∞;
for m := r[i, j-1] to r[i+1, j] do
if ((c[i, m-1] + c[m, j]) < min) then
{
min := c[i, m-1] + c[m, j]; l := m;
}
return l;
}
Viva Questions:
1. Define binary tree and binary search tree.
4. What is the significance of internal nodes and external nodes in a binary tree?
5. What is the time complexity of OBST algorithm? Is any improvement possible? If so, explain.
Sum of Subsets
Subset-Sum Problem is to find a subset of a given set S = {s1, s2, …, sn} of n positive integers
whose sum is equal to a given positive integer d. It is assumed that the set’s elements are sorted in
increasing order. The state-space tree can then be constructed as a binary tree and applying
backtracking algorithm, the solutions could be obtained. Some instances of the problem may have
no solutions
Algorithm SumOfSub(s, k, r)
//Find all subsets of w[1…n] that sum to m. The values of x[j], 1 ≤ j < k, have already been
//determined. s = w[1]*x[1] + … + w[k-1]*x[k-1] and r = w[k] + … + w[n]. The w[j]'s are in
//ascending order. It is assumed that w[1] ≤ m and w[1] + … + w[n] ≥ m.
{
// Generate left child. Note: s+w[k] ≤ m since Bk-1 is true.
x[k] := 1
if (s+w[k] = m) then write (x[1 : k]) //subset found
// There is no recursive call here as w[j]>0, 1 ≤ j ≤ n.
else if ( s + w[k]+w[k+1] ≤ m) then
SumOfSub( s + w[k], k+1, r-w[k])
//Generate right child and evaluate Bk.
if( (s + r - w[k] >= m) and (s + w[k+1] <= m) )
{
x[k] := 0
SumOfSub( s, k+1, r-w[k] )
}
}
Complexity:
The subset sum problem solved using backtracking generates at most two new subtrees at each step,
and the running time of the bounding function is linear, so the running time is O(2^n).
Program:
Viva Questions:
1. Compare brute force approach and backtracking.
4. Draw fixed and variable tuple solution space tree for following instance of Sum-of-Subsets
problem: n=4, w[1:4]=(11,13,24,7) and m=31.
Complexity:
The set of all possible solutions of the n-queens problem has size n!, and the bounding function
takes a linear amount of time to evaluate, so the running time of the n-queens backtracking
algorithm is O(n!).
Viva:
1. Explain N-Queens problem.
3. Number the nodes in above tree as in depth first search, breadth first search and D-Search.
4. Write and explain the condition to determine if two queens are on the same diagonal.
Algorithm NextValue(k)
// x[1: k-1] is a path of k-1 distinct vertices. If x[k] = 0, then no vertex has as yet been assigned to
//x[k]. After execution, x[k] is assigned to the next highest numbered vertex which does not already
//appear in x[1 : k-1] and is connected by an edge to x[k-1]. Otherwise x[k] = 0. If k = n, then in
//addition x[k] is connected to x[1].
{
repeat
{
x[k] := (x[k] +1) mod (n+1);
if (x[k] = 0) then return;
if (G[x[k-1], x[k]] ≠ 0) then
{ // Is there an edge?
for j := 1 to k-1 do if (x[j] = x[k]) then break; // Check for distinctness.
if (j = k) then // If true, then the vertex is distinct.
if ((k<n) or ((k = n) and G[x[n], x[1]] ≠ 0))
then return;
}
} until (false);
}
Program:
Viva Questions:
1. What is Hamiltonian cycle?
3. Draw the portion of state space tree for Hamiltonian cycle for the above graph.
Viva:
1. Explain Traveling Salesperson problem (TSP).
4. Explain the procedure in obtaining a reduced cost matrix if tree edge (R, S) corresponding to
edge (i, j) in the graph.
5. What is the time complexity of the Branch-and-Bound solution to TSP? How is it better than that
of dynamic programming solution?
Definitions
One of the goals the designers had for OpenMP is for programs to execute and yield the same
results whether they use one thread or many threads; such a program is called sequentially
equivalent. Incremental parallelism
refers to a programming practice (which is not always possible) in which a sequential program
evolves into a parallel program. That is, the programmer starts working with a sequential program
from the top down, block by block, and finds pieces of code that are better off executing in
parallel. Thus, parallelism is added incrementally. Having said that, OpenMP is a collection of
compiler directives, library routines, and environmental variables that specify shared-memory
concurrency in FORTRAN, C, and (soon) C++ programs. Note that in the Windows OS, any
memory that can be shared is shared. OpenMP directives demarcate code that can be executed in
parallel (called parallel regions), and control how code is assigned to threads. The threads in
OpenMP code operate under the fork-join model. When the main thread encounters a parallel
region while executing an application, a team of threads is forked off, and these threads begin to
execute the code within the parallel region. At the end of the parallel region, the threads within
the team wait until all other threads in the team have finished before being joined. The main
thread resumes serial execution with the statement following the parallel region. The implicit
barrier at the end of all parallel regions preserves sequential consistency. More to the point, an
executing OpenMP program starts a single thread. At points in the program where parallel
execution is desired, the program forks additional threads to form a team of threads. The threads
execute in parallel across a region of code called the parallel region. At the end of the parallel
region, the threads wait until the full team arrives, and then they join back together. At that point,
the original or master thread continues until the next parallel region (or end of the program).
All OpenMP pragmas have the same prefix of #pragma omp. This is followed by an OpenMP
directive construct and one or more optional clauses to modify the construct. OpenMP is an
explicitly parallel programming language. The compiler doesn't guess how to exploit
concurrency. Any parallelism expressed in a program is there because the programmer directed
the compiler to put it there. To create threads in OpenMP, the programmer designates blocks of
code that are to run in parallel. This is done in C and C++ with the pragma used to define a
parallel region within an application: the parallel construct, #pragma omp parallel.
Now, we will take a few simple examples. When compiled, this code is meant to print a string to
standard output console:
#include <stdio.h>
int main()
{
printf("E is the second vowel\n");
}
Now, we add the compiler directive to define a parallel region in this simple program:
#include <stdio.h>
#include "omp.h"
int main()
{
#pragma omp parallel
{
printf("E is the second vowel\n");
}
}
#include <stdio.h>
#include "omp.h"
int main()
{
int i=5;
#pragma omp parallel
{
printf("E is equal to %d\n",i);
}
}
OpenMP is a shared-memory programming model. A good rule that holds in most cases is that a
variable allocated prior to the parallel region is shared between the threads. So, with two threads, the program prints:
E is equal to 5
E is equal to 5
#include <stdio.h>
#include <omp.h>
int main()
{
int i= 256; // a shared variable
#pragma omp parallel
{
int x; // a variable local or private to each thread
x = omp_get_thread_num();
printf("x = %d, i = %d\n",x,i);
}
}
Output
x = 0, i = 256
x = 1, i = 256
//note the value of x differs for each thread, while the value of i remains the same
Synchronization
Synchronization is all about timing. Threads running within a process must sometimes access shared
resources; the container process maintains a handle table through which the threads can access
resources by a handle identification number. A resource could be a Registry key, a TCP port, a
file, or any other type of system resource. It is obviously important for those threads to access
those resources in an orderly fashion. It is also obvious that two threads cannot execute
simultaneously in the same CRITICAL_REGION. For example, if one thread writes some data to
a message queue, and then another thread writes over that data, then we have data corruption.
More to the point, we have a race condition: two threads race to execute at a single instance
because they (think) appear to be scheduled that way. A race condition results in a serious system
crash. So, how does OpenMP handle these issues?
OpenMP has synchronization constructs that ensure mutual exclusion to your critical regions. Use
these when variables must remain shared by all threads, but updates must be performed on those
variables in parallel regions. The critical construct acts like a lock around a critical region. Only
one thread may execute within a protected critical region at a time; other threads wishing to
enter must wait until the thread currently inside has left the region.
Much of OpenMP is expressed in compiler directives, but there are certain language features
that can only be handled by runtime library functions. Here are a few of them:
And, here is code that uses some OpenMP API functions to extract information about the
environment:
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
int main()
{
/* Query the OpenMP execution environment. */
printf("Number of processors = %d\n", omp_get_num_procs());
printf("Max threads = %d\n", omp_get_max_threads());
printf("In parallel region? = %d\n", omp_in_parallel());
printf("Dynamic threads enabled? = %d\n", omp_get_dynamic());
}
Output
In certain cases, a large number of independent operations are found in loops. Using the loop
work sharing construct in OpenMP, you can split up these loop iterations and assign them to
threads for concurrent execution. The parallel for construct will initiate a new parallel region
around the single for loop following the pragma, and divide the loop iterations among the threads
of the team. Upon completion of the assigned iterations, threads sit at the implicit barrier at the
end of the parallel region, waiting to join with the other threads. It is possible to split up the
combined parallel for construct into two pragmas: a parallel construct and the for construct, which
must be lexically contained within a parallel region. Here is an example of the latter:
#include <stdlib.h>
#include <stdio.h>
#include <omp.h>
#define CHUNKSIZE 10
#define N 100
int main()
{
int i, chunk, tid;
float a[N], b[N], c[N];
/* Some initializations */
for (i=0; i < N; i++)
a[i] = b[i] = i * 1.0;
chunk = CHUNKSIZE;
#pragma omp parallel shared(a,b,c,chunk) private(i,tid)
{
tid = omp_get_thread_num();
if (tid == 0)
printf("Number of threads = %d\n", omp_get_num_threads());
printf("Thread %d starting...\n", tid);
#pragma omp for schedule(dynamic, chunk)
for (i=0; i < N; i++)
{
c[i] = a[i] + b[i];
printf("Thread %d: c[%d]= %f\n", tid, i, c[i]);
}
}
return 0;
}
Number of threads = 2
Thread 1 starting...
Thread 0 starting...
Thread 0: c[10]= 20.000000
Thread 1: c[0]= 0.000000
Thread 0: c[11]= 22.000000
Thread 1: c[1]= 2.000000
Thread 0: c[12]= 24.000000
Thread 1: c[2]= 4.000000
Thread 0: c[13]= 26.000000
Thread 1: c[3]= 6.000000
Thread 0: c[14]= 28.000000
Thread 1: c[4]= 8.000000
Thread 0: c[15]= 30.000000
Thread 1: c[5]= 10.000000
Thread 0: c[16]= 32.000000
Thread 1: c[6]= 12.000000
Thread 0: c[17]= 34.000000
Thread 1: c[7]= 14.000000
. . . . . . .
We begin by defining the terms we will use to describe the data environment in OpenMP. In a
program, a variable is a container (or more concretely, a storage location in memory) bound to a
name and holding a value. Variables can be read and written as the program runs (as opposed to
constants that can only be read). In OpenMP, the variable that is bound to a given name depends
on whether the name appears prior to a parallel region, inside a parallel region, or following a
parallel region. When the variable is declared prior to a parallel region, it is by default shared, as in the following fragment:
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
#define N 50
#define CHUNKSIZE 5
/* Some initializations */
for (i=0; i < N; i++)
a[i] = b[i] = i * 1.0;
chunk = CHUNKSIZE;
first_time = 'y';
. . . . . and so on