Week 14 Applications of Parallel and Distributed Computing
Parallel and Distributed Computing
Parallel and distributed computing involve harnessing the power of multiple
processors and computers to solve complex problems. This field is crucial for
tackling modern challenges in various domains, including scientific
simulations, data analytics, and artificial intelligence.
by DARWIN VARGAS
Definitions and Characteristics
Parallel computing utilizes multiple processors within a single system to execute tasks simultaneously, while distributed computing
involves distributing tasks across multiple independent machines connected through a network.
Parallel computing: characterized by shared memory, tightly coupled processors, and high communication speed.
Distributed computing: features independent machines, distributed memory, and communication over a network.
Motivations and Driving Factors
The increasing complexity of problems, coupled with the demand for faster
processing and analysis, drives the adoption of parallel and distributed computing.
Cost-Effectiveness
Utilizing existing hardware resources effectively can reduce costs associated
with purchasing new, high-performance systems.
Hardware Architectures
Different hardware architectures are employed in parallel and distributed systems, each with its strengths and weaknesses.
OpenMP
A directive-based API for shared-memory parallelism.
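As a minimal sketch of the idea, the C program below (assuming it is compiled with OpenMP support, e.g. gcc -fopenmp) parallelizes a loop that sums an array; the array size and fill value are arbitrary choices for illustration:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        enum { N = 1000000 };
        static double a[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 0.5;                      /* fill with sample data */

        /* The directive splits loop iterations across threads; the
           reduction clause safely combines each thread's partial sum. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
        return 0;
    }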
MapReduce
A programming model for distributed data processing.
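Real MapReduce jobs run on a distributed framework (such as Hadoop) across many machines, but the two phases can be sketched in a single C process: a map step that emits a partial result per input chunk, and a reduce step that merges the partial results. The chunking, data values, and function names below are illustrative only:

    #include <stdio.h>

    #define CHUNKS 4
    #define CHUNK_LEN 5

    /* "Map": each mapper counts even numbers in its own chunk of the input. */
    static int map_count_evens(const int *chunk, int len) {
        int count = 0;
        for (int i = 0; i < len; i++)
            if (chunk[i] % 2 == 0)
                count++;
        return count;
    }

    /* "Reduce": merge the partial results emitted by the mappers. */
    static int reduce_sum(const int *partials, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += partials[i];
        return total;
    }

    int main(void) {
        int data[CHUNKS][CHUNK_LEN] = {
            {1, 2, 3, 4, 5}, {6, 7, 8, 9, 10},
            {11, 12, 13, 14, 15}, {16, 17, 18, 19, 20}
        };
        int partial[CHUNKS];

        /* In a real framework, each map task would run on a different node. */
        for (int c = 0; c < CHUNKS; c++)
            partial[c] = map_count_evens(data[c], CHUNK_LEN);

        printf("even numbers: %d\n", reduce_sum(partial, CHUNKS));
        return 0;
    }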
Parallelism: Task, Data, and Pipeline
Parallelism can be achieved in various ways, each suited for different types of
problems and data.
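For instance, OpenMP (introduced above) can express two of these styles in a few lines: a parallel for loop gives data parallelism, while the sections construct runs independent tasks at the same time. The load_data and build_index functions below are placeholders for illustration:

    #include <omp.h>
    #include <stdio.h>

    /* Placeholder tasks; in practice these would be unrelated units of work. */
    static void load_data(void)   { printf("loading data\n"); }
    static void build_index(void) { printf("building index\n"); }

    int main(void) {
        int x[8];

        /* Data parallelism: the same operation applied to different elements. */
        #pragma omp parallel for
        for (int i = 0; i < 8; i++)
            x[i] = i * i;

        /* Task parallelism: different operations executed concurrently. */
        #pragma omp parallel sections
        {
            #pragma omp section
            load_data();
            #pragma omp section
            build_index();
        }

        printf("x[7] = %d\n", x[7]);
        return 0;
    }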
Client-Server: A central server provides services to multiple clients.
Peer-to-Peer: Nodes act as both clients and servers, communicating directly with each other.
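As a rough illustration of the client-server pattern, the single-file POSIX sockets sketch below runs as a server when started with the argument "server" and as a client otherwise; the port number, loopback address, and message strings are arbitrary, and error handling is omitted for brevity. A peer-to-peer node would combine both roles in one process.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define PORT 5000   /* illustrative port number */

    /* Server: waits for one client, reads a request, sends a reply. */
    static void run_server(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(PORT);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 1);

        int cfd = accept(lfd, NULL, NULL);
        char buf[64] = {0};
        read(cfd, buf, sizeof buf - 1);
        printf("server got: %s\n", buf);
        write(cfd, "hello from server", strlen("hello from server"));
        close(cfd);
        close(lfd);
    }

    /* Client: connects to the server and exchanges one message. */
    static void run_client(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        connect(fd, (struct sockaddr *)&addr, sizeof addr);

        write(fd, "request from client", strlen("request from client"));
        char buf[64] = {0};
        read(fd, buf, sizeof buf - 1);
        printf("client got: %s\n", buf);
        close(fd);
    }

    int main(int argc, char **argv) {
        if (argc > 1 && strcmp(argv[1], "server") == 0)
            run_server();
        else
            run_client();
        return 0;
    }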
Synchronization and Coordination
Synchronization techniques ensure that parallel and distributed processes access shared
resources in a controlled and orderly manner.
Locks
Exclusive access to shared resources.
Semaphores
Limited access to shared resources.
Barriers
Synchronization points for processes.
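A minimal POSIX threads sketch in C (assuming a platform with POSIX barriers, e.g. Linux, compiled with -pthread) shows a lock protecting a shared counter and a barrier where all threads wait for each other; the thread and iteration counts are arbitrary. Semaphores (sem_t from <semaphore.h>) generalize the lock by allowing a bounded number of concurrent holders.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_barrier_t barrier;
    static long counter = 0;

    static void *worker(void *arg) {
        (void)arg;

        /* Lock: only one thread at a time may update the shared counter. */
        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }

        /* Barrier: no thread proceeds until all have finished counting. */
        pthread_barrier_wait(&barrier);
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        pthread_barrier_init(&barrier, NULL, NTHREADS);

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);

        pthread_barrier_destroy(&barrier);
        printf("counter = %ld\n", counter);   /* expected: 4000 */
        return 0;
    }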
Load Balancing and Resource Management
Load balancing and resource management strategies aim to distribute workloads evenly
across available resources, maximizing efficiency and performance.
Resource Allocation
Efficiently distributing resources like CPU, memory, and network
bandwidth.
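One place where load balancing shows up directly in code is the loop schedule: with OpenMP, a static schedule gives each thread a fixed block of iterations, while a dynamic schedule hands out small chunks as threads become free, which evens out irregular work. The work() function below simulates uneven iteration costs and is purely illustrative:

    #include <omp.h>
    #include <stdio.h>

    /* Simulated uneven workload: later iterations cost more. */
    static double work(int i) {
        double s = 0.0;
        for (int k = 0; k < i * 1000; k++)
            s += k * 1e-9;
        return s;
    }

    int main(void) {
        double total = 0.0;

        /* schedule(dynamic, 8): an idle thread grabs the next chunk of 8
           iterations, so expensive iterations do not pile up on one thread. */
        #pragma omp parallel for schedule(dynamic, 8) reduction(+:total)
        for (int i = 0; i < 1000; i++)
            total += work(i);

        printf("total = %f\n", total);
        return 0;
    }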
Fault Tolerance and Reliability
Fault tolerance ensures the continued operation of a system even in the face
of failures, by employing redundancy and error handling mechanisms.
Checkpointing
Periodic saving of system state for restoration in case of failure.
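A toy checkpointing sketch in C: the loop state is written to a file every few iterations, and on restart the program resumes from the last saved iteration. The file name, checkpoint interval, and "computation" are arbitrary choices for illustration.

    #include <stdio.h>

    #define CKPT_FILE "state.ckpt"   /* illustrative checkpoint file name */
    #define TOTAL_STEPS 100
    #define CKPT_EVERY 10

    int main(void) {
        int start = 0;
        double sum = 0.0;

        /* On restart, try to restore the last saved state. */
        FILE *f = fopen(CKPT_FILE, "r");
        if (f) {
            if (fscanf(f, "%d %lf", &start, &sum) == 2)
                printf("resuming from step %d\n", start);
            fclose(f);
        }

        for (int step = start; step < TOTAL_STEPS; step++) {
            sum += step;                     /* the "computation" */

            if ((step + 1) % CKPT_EVERY == 0) {
                /* Periodically save progress so a crash loses little work. */
                f = fopen(CKPT_FILE, "w");
                if (f) {
                    fprintf(f, "%d %f\n", step + 1, sum);
                    fclose(f);
                }
            }
        }

        printf("done: sum = %f\n", sum);
        return 0;
    }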