Parallel & Distributed Computing
Jun Zhang
Department of Computer Science
University of Kentucky
Lexington, KY 40506
1.1a: von Neumann Architecture
1.1b: A More Detailed Architecture Based on the von Neumann Model
1.1c: Old von Neumann Computer
1.1d: CISC von Neumann Computer
1.1f: John von Neumann
1.2a: Motivations for Parallel Computing
1.2b: Fundamental Limits – Cycle Speed
[Figure: Intel processor price/performance trend]
1.2d: Moore’s Law
Moore’s observation in 1965: the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented
Moore’s revised observation in 1975: the pace had slowed somewhat, with data density doubling approximately every 18 months
What about the future? (Does the price of computing power fall by half every 18 months?)
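As a back-of-the-envelope check of the 18-month figure (an illustrative calculation, not from the slides), a fixed doubling period implies

    N(t) = N(0) x 2^(t/18),  t in months

so density grows by about 2^(12/18) = 1.6x per year, and by about 2^(120/18) = 100x per decade.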
1.2e: Moore’s Law – Held for Now
1.3: Power Wall Effect in Computer Architecture
Too many transistors in a given chip die area
Tremendous increase in power density
Increased chip temperature
High temperature slows down the transistor switching rate and the overall speed of the computer
Chip may melt down if not cooled properly
Efficient cooling systems are expensive
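A first-order sketch of why this happens (the standard CMOS dynamic-power relation; the symbols are textbook conventions, not from the slides):

    P_dyn ≈ α · C · V² · f

where α is the switching activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. Since sustaining a higher f generally requires a higher V, single-core power grows roughly as f³, while two cores at the original frequency deliver about twice the throughput for only about twice the power. This is the arithmetic behind the multi-core solution on the next slide.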
1.3: Cooling Computer Chips
1.3: Solutions
Use multiple inexpensive processors
A processor with multiple cores
1.3: A Multi-core Processor
1.3a: CPU and Memory Speeds
1.3b: Memory Access and CPU Speed
1.3b: CPU, Memory, and Disk Speed
1.3c: Possible Solutions
1.3f: Multilevel Hierarchical Cache
1.4a: Distributed Data Communications
1.4b: Distributed Data Communications
1.4c: Move Computation to Data
(CS626: Large Scale Data Science)
1.5a: Why Use Parallel Computing
1.5b: Other Reasons for Parallel Computing
Taking advantage of non-local resources – using computing resources on a wide-area network, or even the Internet (grid or cloud computing)
Cost savings – using multiple “cheap” computing resources instead of a single high-end CPU
Overcoming memory constraints – for large problems, combining the memories of multiple computers can overcome the memory limit of any single machine (see the sketch after this list)
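As an illustration of the last point (a hedged sketch: the problem size N, the file name, and the harmonic-sum workload are invented for this example), a 1-D array too large for one machine can be block-partitioned so each of P processes stores only about N/P elements:

/* block_partition.c - illustrative sketch (not from the slides):
 * overcome a single-node memory limit by block-partitioning a
 * large 1-D array across MPI processes.
 * Build with any MPI (e.g., MS-MPI); run: mpiexec -n 4 block_partition
 */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    const long N = 100000000L;  /* 10^8 doubles, ~800 MB in total */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each process allocates and touches only its own block */
    long chunk = N / size;
    long lo = (long)rank * chunk;
    long hi = (rank == size - 1) ? N : lo + chunk;  /* last rank takes the remainder */
    double *block = malloc((size_t)(hi - lo) * sizeof *block);

    double local = 0.0, total = 0.0;
    for (long i = lo; i < hi; i++) {
        block[i - lo] = 1.0 / (double)(i + 1);  /* made-up workload */
        local += block[i - lo];
    }

    /* combine the per-process partial sums on rank 0 */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("harmonic sum ~= %.6f\n", total);

    free(block);
    MPI_Finalize();
    return 0;
}

With 4 processes, each one holds roughly 200 MB rather than the full 800 MB, which is the essence of the memory argument.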
1.6a: Need for Large Scale Modeling
1.6b: Semiconductor Diffusion Process
1.6c: Drug Design
1.6d: Computer Aided Drug Design
1.7: Issues in Parallel Computing
1.10 Cloud Computing
Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided over the Internet
Users need not have knowledge of, expertise in, or control over the technology infrastructure in the “cloud” that supports them
Compare with grid computing (a cluster of networked, loosely coupled computers), utility computing (the packaging of computing resources as a metered service), and autonomic computing (computer systems capable of self-management)
1.11 Cloud Computing
1.12 Cloud Computing
1.14 Cloud Computing
1.15 Cloud Computing
1.16 Top Players in Cloud Computing
Amazon.com (IaaS, most comprehensive)
VMware (vCloud, software to build clouds)
Microsoft (PaaS, IaaS)
Salesforce.com (SaaS)
Google (IaaS, PaaS, Google App Engine)
Rackspace (IaaS)
IBM (OpenStack)
Citrix (competes with VMware; free cloud operating system)
Joyent (competes with VMware, OpenStack, and Citrix)
SoftLayer (web hosting service provider)
1.17 Parallel and Distributed Computing (CS621)
This class is about parallel and distributed computing algorithms and applications:
Algorithms for communication between processors
Algorithms for solving scientific computing problems
Hands-on experience with the Message Passing Interface (MPI), writing parallel programs to solve some problems (a minimal example follows below)
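To give a flavor of the MPI programming covered later (a minimal sketch, not a program from the slides), here is the canonical “hello world” in C:

/* hello_mpi.c - minimal MPI sketch (illustrative).
 * Build with any MPI implementation, e.g., MS-MPI on Windows
 * or MPICH/Open MPI elsewhere.
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id, 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}

With MS-MPI installed (see the next slide), a four-process run is launched as: mpiexec -n 4 hello_mpi.exe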
*** Hands-on Experience
You can download and install a copy of Microsoft MPI (Message Passing Interface) at
https://www.microsoft.com/en-us/download/details.aspx?id=57467
https://visualstudio.microsoft.com/
*** Hands-on Experience
Here is a video demonstrating how to set up MS Visual Studio for MS MPI programming:
https://www.youtube.com/watch?v=IW3SKDM_yEs&t=330s
*** Hands-on Experience
https://www.ccs.uky.edu/