This document discusses parallel processing and multithreading in computer architecture. It defines threads and processes, and describes different approaches to multithreading including interleaved, blocked, and simultaneous multithreading. It also covers multiprocessor architectures like chip multiprocessing, and compares scalar, superscalar, and VLIW processor approaches to multithreading. Finally, it discusses parallelization techniques, cluster computing, NUMA architectures, and trends in processor design.
4. Multithreading and Chip Multiprocessors
Instruction stream divided into smaller streams (threads)
Executed in parallel
Wide variety of multithreading designs
5. Definitions of Threads and Processes
A thread in a multithreaded processor may or may not be the same as a software thread
Process:
An instance of a program running on a computer
Resource ownership
Virtual address space to hold process image
Scheduling/execution
Process switch: transfers the processor from one process to another
Thread: dispatchable unit of work within a process
Includes processor context (program counter and stack pointer) and its own data area for a stack
Thread executes sequentially
Interruptible: processor can turn to another thread
Thread switch
Switching processor between threads within same process
Typically less costly than process switch
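To make the thread/process distinction concrete, here is a minimal C++ sketch (names invented for illustration): two threads run inside one process, sharing its address space through a global counter, while each keeps its own stack for locals.

```cpp
// Two threads inside one process: they share the process's address
// space (the global counter) but each has its own stack (the local i).
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> shared_counter{0};  // shared: lives in the process image

void worker(int id) {
    for (int i = 0; i < 1000; ++i)   // i lives on this thread's own stack
        ++shared_counter;
    std::cout << "thread " << id << " done\n";
}

int main() {
    std::thread t1(worker, 1);       // two dispatchable units of work
    std::thread t2(worker, 2);
    t1.join();
    t2.join();
    std::cout << "counter = " << shared_counter << "\n";  // prints 2000
}
```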
6. Implicit and Explicit Multithreading
All commercial processors and most experimental ones use explicit multithreading
Concurrently execute instructions from different explicit threads
Interleave instructions from different threads on shared pipelines, or execute them in parallel on parallel pipelines
Implicit multithreading is the concurrent execution of multiple threads extracted from a single sequential program
Implicit threads are defined statically by the compiler or dynamically by the hardware
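As an illustration of compiler-extracted implicit threads, the loop below is ordinary sequential code; an auto-parallelizing compiler (for example GCC with its -ftree-parallelize-loops option, named here as one such tool) can split its independent iterations across hardware threads without any source changes.

```cpp
// Ordinary sequential loop with independent iterations: the kind of
// code an auto-parallelizing compiler can turn into implicit threads
// without any change to the source.
#include <cstddef>
#include <vector>

int main() {
    std::vector<double> a(1000000, 1.0), b(1000000, 2.0), c(1000000);
    for (std::size_t i = 0; i < c.size(); ++i) {
        // No iteration reads another's result, so disjoint ranges of i
        // can be handed to different hardware threads.
        c[i] = a[i] * 2.0 + b[i];
    }
    return static_cast<int>(c[0]);   // use the result so it is not elided
}
```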
7. Approaches to Explicit Multithreading
Interleaved
Fine-grained
Processor deals with two or more thread contexts at a time
Switches threads at each clock cycle
If a thread is blocked it is skipped
Blocked
Coarse-grained
Thread executed until event causes delay
e.g. cache miss
Effective on in-order processor
Avoids pipeline stall
Simultaneous (SMT)
Instructions simultaneously issued from multiple threads to execution units of superscalar processor
Chip multiprocessing
Processor is replicated on a single chip
Each processor handles separate threads
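A toy cycle-by-cycle sketch of the interleaved policy, with invented structures: the scheduler rotates to a different thread context every clock cycle and skips any context that is blocked, e.g. on a cache miss.

```cpp
// Toy model of interleaved (fine-grained) multithreading: rotate to a
// different thread context every clock cycle, skipping blocked ones.
#include <array>
#include <cstddef>
#include <iostream>

struct ThreadCtx {
    int pc = 0;      // per-thread program counter
    int stall = 0;   // cycles until this thread is ready again
};

int main() {
    std::array<ThreadCtx, 3> ctx{};
    ctx[1].stall = 2;                       // thread 1 starts out blocked

    std::size_t next = 0;
    for (int cycle = 0; cycle < 6; ++cycle) {
        for (auto& t : ctx)                 // pending stalls drain by one
            if (t.stall > 0) --t.stall;

        // Round-robin over contexts, skipping any that is still blocked.
        for (std::size_t tried = 0; tried < ctx.size(); ++tried) {
            std::size_t id = (next + tried) % ctx.size();
            if (ctx[id].stall > 0) continue;
            ++ctx[id].pc;                   // issue one instruction
            std::cout << "cycle " << cycle << ": thread " << id << " issues\n";
            next = id + 1;                  // resume rotation past this thread
            break;
        }
    }
}
```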
8. Scalar Processor Approaches
Single-threaded scalar
Simple pipeline
No multithreading
Interleaved multithreaded scalar
Easiest multithreading to implement
Switch threads at each clock cycle
Pipeline stages kept close to fully occupied
Hardware needs to switch thread context between cycles
Blocked multithreaded scalar
Thread executed until a latency event occurs that would stall the pipeline
Processor then switches to another thread
10. Multiple Instruction Issue Processors (1)
Superscalar
No multithreading
Interleaved multithreading superscalar:
Each cycle, as many instructions as possible issued from single thread
Delays due to thread switches eliminated
Number of instructions issued in cycle limited by dependencies
Blocked multithreaded superscalar
Instructions from one thread
Blocked multithreading used
12. Multiple Instruction Issue Processors (2)
Very long instruction word (VLIW)
E.g. IA-64
Multiple instructions in single word
Typically constructed by compiler
Operations that may be executed in parallel in same word
May pad with no-ops
Interleaved multithreading VLIW
Similar efficiencies to interleaved multithreading on superscalar architecture
Blocked multithreaded VLIW
Similar efficiencies to blocked multithreading on superscalar architecture
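A minimal sketch of the VLIW idea, using an invented three-slot word format (not IA-64's actual encoding): the compiler places independent operations in the slots and pads the rest with no-ops.

```cpp
// Sketch of a VLIW word with an invented three-slot format: the
// compiler fills slots with operations that can execute in parallel
// and pads unused slots with no-ops.
#include <cstdio>

enum class Op { NOP, ADD, MUL, LOAD };

struct VliwWord {
    Op alu0 = Op::NOP;   // first integer-unit slot
    Op alu1 = Op::NOP;   // second integer-unit slot
    Op mem  = Op::NOP;   // memory-unit slot
};

int main() {
    // The compiler found an independent ADD and LOAD for this cycle;
    // the second ALU slot carries a padding no-op.
    VliwWord w{Op::ADD, Op::NOP, Op::LOAD};
    std::printf("slots: %d %d %d\n", static_cast<int>(w.alu0),
                static_cast<int>(w.alu1), static_cast<int>(w.mem));
}
```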
14. Parallel, Simultaneous Execution of Multiple Threads
Simultaneous multithreading
Issue multiple instructions at a time
One thread may fill all horizontal slots
Instructions from two or more threads may be issued
With enough threads, can issue maximum number of instructions on each cycle
Chip multiprocessor
Multiple processors
Each has two-issue superscalar processor
Each processor is assigned thread
Can issue up to two instructions per cycle per thread
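A toy sketch of SMT issue, with made-up numbers: in a single cycle, the four issue slots of a hypothetical 4-wide superscalar core are filled from whichever threads have ready instructions, so slots one thread cannot fill are used by another.

```cpp
// Toy model of SMT issue: in one cycle, issue slots are filled from
// multiple threads rather than a single one.
#include <array>
#include <cstddef>
#include <iostream>

int main() {
    constexpr int kIssueWidth = 4;
    std::array<int, 3> ready = {1, 3, 2};   // ready instructions per thread

    int slot = 0;
    for (std::size_t t = 0; t < ready.size() && slot < kIssueWidth; ++t) {
        while (ready[t] > 0 && slot < kIssueWidth) {
            std::cout << "slot " << slot++ << " <- thread " << t << "\n";
            --ready[t];
        }
    }
    // Thread 0 fills one slot, thread 1 the next three; all four
    // slots are used in the same cycle.
}
```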
16. Clusters
Alternative to SMP
High performance
High availability
Server applications
A group of interconnected whole computers
Working together as unified resource
Illusion of being one machine
Each computer called a node
17. Cluster Benefits
Absolute scalability
Incremental scalability
High availability
Superior price/performance
20. Operating Systems Design Issues
Failure Management
High availability
Fault tolerant
Failover
Switching applications & data from failed system to alternative within cluster
Failback
Restoration of applications and data to original system after the problem is fixed
Load balancing
Incremental scalability
Automatically include new computers in scheduling
Middleware needs to recognise that processes may switch between machines
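A minimal sketch of the heartbeat mechanism that typically underlies failover, with invented node names and timings: a monitor treats a node whose heartbeat is overdue as failed and switches its load elsewhere.

```cpp
// Heartbeat sketch: a node whose heartbeat is overdue is treated as
// failed, and its applications and data are switched to another node.
#include <chrono>
#include <iostream>
#include <map>
#include <string>

using Clock = std::chrono::steady_clock;

int main() {
    const auto timeout = std::chrono::seconds(5);
    std::map<std::string, Clock::time_point> last_heartbeat;
    last_heartbeat["node-a"] = Clock::now();                            // fresh
    last_heartbeat["node-b"] = Clock::now() - std::chrono::seconds(30); // stale

    for (const auto& [node, seen] : last_heartbeat) {
        if (Clock::now() - seen > timeout)
            std::cout << node << " failed: switch its applications and data\n";
        else
            std::cout << node << " healthy\n";
    }
}
```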
21. Parallelizing
Single application executing in parallel on a number of machines in cluster
Compiler
Determines at compile time which parts can be executed in parallel
Split off for different computers
Application
Application written from scratch to be parallel
Message passing to move data between nodes
Hard to program
Best end result
Parametric computing
If a problem is repeated execution of an algorithm on different sets of data
e.g. simulation using different scenarios
Needs effective tools to organize and run
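A small sketch of parametric computing, with an invented simulate() standing in for the repeated algorithm: the same computation runs over different parameter sets, one task per scenario; on a real cluster each task would be dispatched to a node rather than a local thread.

```cpp
// Parametric computing sketch: run the same algorithm over many
// parameter sets, one task per scenario.
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

double simulate(double rate) {              // placeholder for the real model
    double value = 100.0;
    for (int year = 0; year < 10; ++year) value *= 1.0 + rate;
    return value;
}

int main() {
    std::vector<double> scenarios = {0.01, 0.03, 0.05, 0.08};
    std::vector<double> results(scenarios.size());
    std::vector<std::thread> workers;

    for (std::size_t i = 0; i < scenarios.size(); ++i)
        workers.emplace_back([&, i] { results[i] = simulate(scenarios[i]); });
    for (auto& w : workers) w.join();

    for (std::size_t i = 0; i < results.size(); ++i)
        std::cout << "rate " << scenarios[i] << " -> " << results[i] << "\n";
}
```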
22. Cluster Middleware
Unified image to user
Single system image
Single point of entry
Single file hierarchy
Single control point
Single virtual networking
Single memory space
Single job management system
Single user interface
Single I/O space
Single process space
Checkpointing
Process migration
23. Blade Servers
Common implementation of cluster
Server houses multiple server modules (blades) in a single chassis
Save space
Improve system management
Chassis provides power supply
Each blade has processor, memory, disk
24. Cluster vs SMP
Both provide multiprocessor support to high demand applications.
Both available commercially
SMP has been available longer
SMP:
Easier to manage and control
Closer to single processor systems
Scheduling is the main difference from single-processor systems
Less physical space
Lower power consumption
Clustering:
Superior incremental & absolute scalability
Superior availability
Redundancy
25. Nonuniform Memory Access (NUMA)
Alternative to SMP & clustering
Uniform memory access
All processors have access to all parts of memory
Using load & store
Access time to all regions of memory is the same
Access time is the same for all processors
As used by SMP
Nonuniform memory access
All processors have access to all parts of memory
Using load & store
Access time of processor differs depending on region of memory
Different processors access different regions of memory at different speeds
Cache coherent NUMA
Cache coherence is maintained among the caches of the various processors
Significantly different from SMP and clusters
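As one concrete way to see the local/remote distinction, here is a sketch using Linux's libnuma (link with -lnuma); placing the buffer on node 0 is an assumption about the machine's topology.

```cpp
// NUMA-aware allocation sketch with Linux libnuma: back a buffer with
// memory on a chosen node so that node's processors get local access.
#include <numa.h>
#include <cstddef>
#include <cstdio>

int main() {
    if (numa_available() < 0) {
        std::puts("no NUMA support on this system");
        return 1;
    }
    const std::size_t size = 1 << 20;        // 1 MiB
    void* buf = numa_alloc_onnode(size, 0);  // place pages on node 0
    if (buf) {
        // Processors on node 0 now reach buf at local speed; processors
        // on other nodes pay the remote-access penalty.
        numa_free(buf, size);
    }
    return 0;
}
```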
27. CC-NUMA Operation
Each processor has own L1 and L2 cache
Each node has own main memory
Nodes connected by some networking facility
Each processor sees single addressable memory space
Memory request order:
L1 cache (local to processor)
L2 cache (local to processor)
Main memory (local to node)
Remote memory
Delivered to requesting (local to processor) cache
Automatic and transparent
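The toy walk below mirrors that request order, with invented latency numbers: each level is checked in turn and the data is delivered from the first level that hits.

```cpp
// Toy walk of the CC-NUMA lookup order: try each level in turn and
// deliver the data from the first level that holds it.
#include <iostream>
#include <string>

int main() {
    struct Level { std::string name; int latency_cycles; bool hit; };
    const Level levels[] = {
        {"L1 cache (local to processor)", 4, false},
        {"L2 cache (local to processor)", 12, false},
        {"main memory (local to node)", 100, true},
        {"remote memory (another node)", 400, true},
    };

    for (const Level& l : levels) {
        std::cout << "check " << l.name << "\n";
        if (l.hit) {
            std::cout << "hit after ~" << l.latency_cycles << " cycles; "
                         "data delivered to the requesting cache\n";
            break;
        }
    }
}
```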
28. Memory Access Sequence
Each node maintains a directory of the location of portions of memory and cache status
e.g. node 2 processor 3 (P2-3) requests location 798, which is in the memory of node 1
P2-3 issues read request on snoopy bus of node 2
Directory on node 2 recognises location is on node 1
Node 2 directory requests node 1’s directory
Node 1 directory requests contents of 798
Node 1 memory puts data on (node 1 local) bus
Node 1 directory gets data from (node 1 local) bus
Data transferred to node 2’s directory
Node 2 directory puts data on (node 2 local) bus
Data picked up, put in P2-3’s cache and delivered to processor
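A sketch of the directory transaction above, with invented data structures: node 2's request for location 798 reaches the home directory on node 1, which returns the value and records node 2 as a sharer (the bookkeeping the next slide relies on).

```cpp
// Directory-based remote read sketch: the home directory serves the
// value and notes which node now holds a copy.
#include <iostream>
#include <map>
#include <set>

struct Directory {
    std::map<int, int> memory;             // address -> value in home memory
    std::map<int, std::set<int>> sharers;  // address -> nodes holding a copy
};

int remote_read(Directory& home, int address, int requesting_node) {
    int value = home.memory.at(address);           // home memory puts data on bus
    home.sharers[address].insert(requesting_node); // note who has a copy
    return value;                                  // transferred to requester
}

int main() {
    Directory node1;
    node1.memory[798] = 42;                        // location 798 homed on node 1
    int v = remote_read(node1, 798, /*requesting_node=*/2);
    std::cout << "P2-3 receives " << v
              << "; node 1 records node 2 as a sharer\n";
}
```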
29. Cache Coherence
Node 1 directory keeps note that node 2 has copy of data
If data modified in cache, this is broadcast to other nodes
Local directories monitor and purge local cache if necessary
Local directory monitors changes to local data in remote caches and marks memory invalid until writeback
Local directory forces writeback if memory location is requested by another processor
30. NUMA Pros & Cons
Effective performance at higher levels of parallelism than SMP
No major software changes required
Performance can break down if there is too much access to remote memory
Can be avoided by:
L1 & L2 cache design reducing all memory accesses
Requires good temporal and spatial locality in software
Virtual memory management moving pages to nodes that are using them most
Not transparent
Page allocation, process allocation and load balancing changes needed
Availability?
32. Chaining
Cray Supercomputers
Vector operation may start as soon as the first element of the operand vector is available and the functional unit is free
Result from one functional unit is fed immediately into another
If vector registers used, intermediate results do not have to be stored in memory
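A scalar illustration of chaining, fusing two vector operations the way chained functional units would: each product element feeds the add as soon as it is produced, so no intermediate result vector is written to memory.

```cpp
// Chaining illustration: the multiply's result for element i is
// consumed by the add immediately, rather than storing the whole
// product vector first.
#include <array>
#include <cstddef>
#include <iostream>

int main() {
    std::array<double, 4> a{1, 2, 3, 4}, b{5, 6, 7, 8}, c{9, 9, 9, 9}, r{};

    for (std::size_t i = 0; i < r.size(); ++i) {
        double product = a[i] * b[i];  // multiply unit produces element i
        r[i] = product + c[i];         // add unit consumes it immediately
    }
    for (double x : r) std::cout << x << ' ';
    std::cout << "\n";
}
```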