
AIR UNIVERSITY

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

EXPERIMENT NO 14

Lab Title: __Parallel and Distributed Computing: MP Programs____________

Student Name: Muhammad Burhan Ahmed Reg. No: 210287 ___


Objective: Implement and analyze various MPI and OpenMP programs.
LAB ASSESSMENT:
Attributes                                                         Excellent (5)   Good (4)   Average (3)   Satisfactory (2)   Unsatisfactory (1)

Ability to Conduct Experiment

Ability to assimilate the results

Effective use of lab equipment and follows the lab safety rules

Total Marks:                                Obtained Marks:

LAB REPORT ASSESSMENT:


Attributes                                                         Excellent (5)   Good (4)   Average (3)   Satisfactory (2)   Unsatisfactory (1)

Data presentation

Experimental results

Conclusion

Total Marks:                                Obtained Marks:

Date: _______12/31/2024______ Signature:


Air University

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

LAB REPORT 14

SUBMITTED TO: Miss Sidrish Ehsan

SUBMITTED BY: Muhammad Burhan Ahmed

Date: 12/31/2024
LAB TASKS

Lab Task 01:

Parallel Matrix Multiplication with Block Distribution: Write an MPI
program that implements parallel matrix multiplication where the
matrices are divided into blocks and distributed across multiple
processes. Each process computes a block of the result matrix, and the
final matrix is gathered back at the root process. Use MPI_Send and
MPI_Recv for point-to-point communication and optimize data
distribution and collection.

Setup:

Code:
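Below is a minimal sketch of the kind of program this task asks for, assuming square N x N matrices with N divisible by the number of processes and a row-block distribution of A; the matrix size, initial values, and variable names are illustrative, not taken from the submitted code.

/* Sketch: block-distributed matrix multiplication with MPI point-to-point calls.
 * Assumptions (illustrative, not from the submitted code): square N x N matrices,
 * N divisible by the process count, row-block distribution of A, full copy of B
 * on every process. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 8   /* matrix dimension; assumed divisible by the number of processes */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                 /* rows of A and C owned by each process */
    double *A = NULL, *C = NULL;
    double *B      = malloc(N * N * sizeof(double));
    double *Ablock = malloc(rows * N * sizeof(double));
    double *Cblock = malloc(rows * N * sizeof(double));

    if (rank == 0) {
        A = malloc(N * N * sizeof(double));
        C = malloc(N * N * sizeof(double));
        for (int i = 0; i < N * N; i++) { A[i] = i % 7; B[i] = i % 5; }

        /* Send each worker its row block of A and a full copy of B. */
        for (int p = 1; p < size; p++) {
            MPI_Send(A + p * rows * N, rows * N, MPI_DOUBLE, p, 0, MPI_COMM_WORLD);
            MPI_Send(B, N * N, MPI_DOUBLE, p, 1, MPI_COMM_WORLD);
        }
        /* Root keeps the first block for itself. */
        for (int i = 0; i < rows * N; i++) Ablock[i] = A[i];
    } else {
        MPI_Recv(Ablock, rows * N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(B, N * N, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* Local computation: each process multiplies its block of rows by B. */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += Ablock[i * N + k] * B[k * N + j];
            Cblock[i * N + j] = sum;
        }

    /* Gather the result blocks back at the root with point-to-point calls. */
    if (rank == 0) {
        for (int i = 0; i < rows * N; i++) C[i] = Cblock[i];
        for (int p = 1; p < size; p++)
            MPI_Recv(C + p * rows * N, rows * N, MPI_DOUBLE, p, 2,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("C[0][0] = %.1f, C[N-1][N-1] = %.1f\n", C[0], C[N * N - 1]);
        free(A); free(C);
    } else {
        MPI_Send(Cblock, rows * N, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD);
    }

    free(B); free(Ablock); free(Cblock);
    MPI_Finalize();
    return 0;
}

With Open MPI or MPICH, a program like this would typically be compiled with mpicc and launched with mpirun -np <processes>, where <processes> divides N.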

Output:

Lab Task 02:

Count Prime Numbers: Write an OpenMP program to count how many
prime numbers exist between 1 and 10000. Parallelize the loop that
checks for primes.

Code:
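Below is a minimal sketch of the prime-counting task, assuming a simple trial-division primality test; the helper name is_prime and the dynamic schedule are illustrative choices, not taken from the submitted code.

/* Sketch: count primes between 1 and 10000 with an OpenMP parallel loop. */
#include <stdio.h>
#include <omp.h>

/* Trial division: returns 1 if n is prime, 0 otherwise. */
static int is_prime(int n)
{
    if (n < 2) return 0;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(void)
{
    const int limit = 10000;
    int count = 0;

    /* Parallelize the prime-checking loop; the reduction combines the
       per-thread partial counts into a single total. */
    #pragma omp parallel for reduction(+:count) schedule(dynamic)
    for (int n = 2; n <= limit; n++)
        if (is_prime(n))
            count++;

    printf("Primes between 1 and %d: %d\n", limit, count);
    return 0;
}

With GCC this would typically be compiled with gcc -fopenmp; the reduction clause gives each thread a private counter that is summed once the loop finishes, avoiding a shared-variable race.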

Output:

Conclusion:
In this open-ended lab, we focused on the fundamentals of parallel and distributed
computing using MPI and OpenMP, and gained practical insight into the benefits and
challenges of parallel computing. We used MPI point-to-point primitives such as MPI_Send
and MPI_Recv to distribute data and gather results efficiently, ensuring effective
collaboration between processes and avoiding redundant work. These skills are essential
for solving real-world problems in high-performance computing and distributed systems.

