Multiprocessing Wiki 20150330
Contents

1 Multiprocessing
1.1 Pre-history
1.2 Key topics
1.2.1 Processor symmetry
1.2.2 Instruction and data streams
1.2.3 Processor coupling
1.2.4 Multiprocessor Communication Architecture
1.3 Flynn’s taxonomy
1.3.1 SISD multiprocessing
1.3.2 SIMD multiprocessing
1.3.3 MISD multiprocessing
1.3.4 MIMD multiprocessing
1.4 See also
1.5 References

2 Computer multitasking
2.1 Multiprogramming
2.2 Cooperative multitasking
2.3 Preemptive multitasking
2.4 Real time
2.5 Multithreading
2.6 Memory protection
2.7 Memory swapping
2.8 Programming
2.9 See also
2.10 References

3 Symmetric multiprocessing
3.1 Design
3.2 History
3.3 Uses
3.4 Programming
3.5 Performance
3.6 Systems
3.6.1 Entry-level systems
3.6.2 Mid-level systems
3.7 Alternatives
3.8 See also
3.9 References
3.10 External links

4 Asymmetric multiprocessing
4.1 Background and history
4.2 Burroughs B5000 and B5500
4.3 CDC 6500 and 6700
4.4 DECsystem-1055
4.5 PDP-11/74
4.6 VAX-11/782
4.7 Univac 1108-II
4.8 IBM System/370 model 168
4.9 See also
4.10 Notes
4.11 References
4.12 External links

6 Multi-core processor
6.1 Terminology
6.2 Development
6.2.1 Commercial incentives
6.2.2 Technical factors
6.2.3 Advantages
6.2.4 Disadvantages
6.3 Hardware
6.3.1 Trends
6.3.2 Architecture
6.4 Software effects
6.4.1 Licensing
6.5 Embedded applications
6.6 Hardware examples
6.6.1 Commercial
6.6.2 Free
6.6.3 Academic
6.7 Benchmarks
6.8 Notes
6.9 See also
6.10 References
6.11 External links

8 Intel Core
8.1 Overview
8.2 Enhanced Pentium M based
8.2.1 Core Duo
8.2.2 Core Solo
8.3 64-bit Core microarchitecture based
8.3.1 Core 2 Solo
8.3.2 Core 2 Duo
8.3.3 Core 2 Quad
8.3.4 Core 2 Extreme
8.4 Nehalem microarchitecture based
8.4.1 Core i3
8.4.2 Core i5
8.4.3 Core i7
8.5 Sandy Bridge microarchitecture based
8.5.1 Core i3
8.5.2 Core i5
8.5.3 Core i7
8.6 Ivy Bridge microarchitecture based
8.6.1 Core i3
8.6.2 Core i5
8.6.3 Core i7
8.7 Haswell microarchitecture based
8.7.1 Core i3
8.7.2 Core i5
8.7.3 Core i7
8.8 Broadwell microarchitecture based
8.8.1 Core i3
8.8.2 Core i5
8.8.3 Core i7
8.8.4 Core M
8.9 See also
8.10 References
8.11 External links

10 Pentium Dual-Core
10.1 Processor cores
10.1.1 Yonah
10.1.2 Allendale
10.1.3 Merom-2M
10.1.4 Wolfdale-3M
10.1.5 Penryn-3M
10.2 Rebranding
10.3 Comparison to the Pentium D
10.4 See also
10.5 References
10.6 External links

11 Xeon
11.1 Overview
11.2 P6-based Xeon
11.2.1 Pentium II Xeon
11.2.2 Pentium III Xeon
11.3 Netburst-based Xeon
11.3.1 Xeon (DP) & Xeon MP (32-bit)
11.3.2 “Gallatin”
11.3.3 Xeon (DP) & Xeon MP (64-bit)
11.3.4 Dual-Core Xeon
11.4 Pentium M (Yonah) based Xeon
11.4.1 LV (ULV), “Sossaman”
11.5 Core-based Xeon
11.5.1 Dual-Core
11.5.2 Quad-Core and Multi-Core Xeon
11.6 Nehalem-based Xeon
11.6.1 3400-series “Lynnfield”
11.6.2 3400-series “Clarkdale”
11.6.3 3500-series “Bloomfield”
11.6.4 5500-series “Gainestown”
11.6.5 C3500/C5500-series “Jasper Forest”
11.6.6 3600/5600-series “Gulftown” & “Westmere-EP”
11.6.7 6500/7500-series “Beckton”
11.6.8 E7-x8xx-series “Westmere-EX”
11.7 Sandy Bridge– and Ivy Bridge–based Xeon
11.7.1 E3-12xx-series “Sandy Bridge”
11.7.2 E3-12xx v2-series “Ivy Bridge”
11.7.3 E5-14xx/24xx series “Sandy Bridge-EN” and E5-16xx/26xx/46xx-series “Sandy Bridge-EP”
11.7.4 E5-14xx v2/24xx v2 series “Ivy Bridge-EN” and E5-16xx v2/26xx v2/46xx v2 series “Ivy Bridge-EP”
11.7.5 E7-28xx v2/48xx v2/88xx v2 series “Ivy Bridge-EX”
11.8 Haswell-based Xeon
11.8.1 E3-12xx v3-series “Haswell”

12 Distributed computing
12.1 Introduction
12.1.1 Architecture
12.2 Parallel and distributed computing
12.3 History
12.4 Applications
12.5 Examples
12.6 Theoretical foundations
12.6.1 Models
12.6.2 An example
12.6.3 Complexity measures
12.6.4 Other problems
12.6.5 Properties of distributed systems
12.7 Coordinator election
12.7.1 Bully algorithm
12.7.2 Chang and Roberts algorithm
12.8 Architectures
12.9 See also
12.10 Notes
12.11 References
12.12 Further reading
12.13 External links

13 Service-oriented architecture
13.1 Definitions
13.2 Overview
13.3 SOA framework
13.4 Design concept
13.5 Principles
13.5.1 Service architecture
13.5.2 Service composition architecture
13.5.3 Service inventory architecture
13.5.4 Service-oriented enterprise architecture
13.6 Web services approach
13.7 Web service protocols
13.8 Other SOA concepts
Chapter 1
Multiprocessing
or MISD, used for redundancy in fail-safe systems and sometimes applied to describe pipelined processors or hyper-threading), or multiple sequences of instructions in multiple contexts (multiple-instruction, multiple-data or MIMD).

1.2.4 Multiprocessor Communication Architecture

Message passing

Separate address space for each processor.

Additionally, programs must be carefully and specially written to take maximum advantage of the architecture, and often special optimizing compilers designed to produce code specifically for this environment must be used. Some compilers in this category provide special constructs or extensions to allow programmers to directly specify operations to be performed in parallel (e.g., DO FOR ALL statements in the version of FORTRAN used on the ILLIAC IV, which was a SIMD multiprocessing supercomputer).

SIMD multiprocessing finds wide use in certain domains such as computer simulation, but is of little use in general-purpose desktop and business computing environments.

1.3.3 MISD multiprocessing

Main article: MISD

1.3.4 MIMD multiprocessing

Main article: MIMD

MIMD multiprocessing architecture is suitable for a wide variety of tasks in which completely independent and parallel execution of instructions touching different sets of data can be put to productive use. For this reason, and because it is easy to implement, MIMD predominates in multiprocessing.

Processing is divided into multiple threads, each with its own hardware processor state, within a single software-defined process or within multiple processes. Insofar as a system has multiple threads awaiting dispatch (either system or user threads), this architecture makes good use of hardware resources.

MIMD does raise issues of deadlock and resource contention, however, since threads may collide in their access to resources in an unpredictable way that is difficult to manage efficiently. MIMD requires special coding in the operating system of a computer but does not require application changes unless the programs themselves use multiple threads (MIMD is transparent to single-threaded programs under most operating systems, if the programs do not voluntarily relinquish control to the OS). Both system and user software may need to use software constructs such as semaphores (also called locks or gates) to prevent one thread from interfering with another if they should happen to cross paths in referencing the same data. This gating or locking process increases code complexity, lowers performance, and greatly increases the amount of testing required, although not usually enough to negate the advantages of multiprocessing.

Similar conflicts can arise at the hardware level between processors (cache contention and corruption, for example), and must usually be resolved in hardware, or with a combination of software and hardware (e.g., cache-clear instructions).

1.5 References

[1] Raj Rajagopal (1999). Introduction to Microsoft Windows NT Cluster Server: Programming and Administration. CRC Press. p. 4. ISBN 978-1-4200-7548-9.

[2] Mike Ebbers; John Kettner; Wayne O'Brien; Bill Ogden; IBM Redbooks (2012). Introduction to the New Mainframe: z/OS Basics. IBM Redbooks. p. 96. ISBN 978-0-7384-3534-3.

[3] Chip multiprocessing

[4] http://www.yourdictionary.com/multiprocessor

[5] http://www.thefreedictionary.com/multiprocessor

[6] Irv Englander (2009). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (4th ed.). Wiley. p. 265.

[7] Deborah Morley; Charles Parker (13 February 2012). Understanding Computers: Today and Tomorrow, Comprehensive. Cengage Learning. p. 183. ISBN 1-133-19024-3.
Chapter 2
Computer multitasking

For other uses, see Multitasking (disambiguation).

In computing, multitasking is a method where multiple

executed at once (physically, one per CPU or core), multitasking allows many more tasks to be run than there are CPUs. The term multitasking has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Danish and Norwegian.

Operating systems may adopt one of many different scheduling strategies, which generally fall into the following categories:
the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running.

The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, non-existent and invisible to them.

Multiprogramming doesn't give any guarantee that a program will run in a timely manner. Indeed, the very first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed.

2.2 Cooperative multitasking

See also: Nonpreemptive multitasking

The expression “time sharing” usually designated computers shared by interactive users at terminals, such as IBM’s TSO, and VM/CMS. The term “time-sharing” is no longer commonly used, having been replaced by “multitasking”, following the advent of personal computers and workstations rather than shared interactive systems.

Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the scheduling scheme employed by Microsoft Windows (prior to Windows 95 and Windows NT) and Mac OS (prior to OS X) in order to enable multiple applications to be run simultaneously. Windows 9x also used cooperative multitasking, but only for 16-bit legacy applications, much the same way as pre-Leopard PowerPC versions of Mac OS X used it for Classic applications. The network operating system NetWare used cooperative multitasking up to NetWare 6.5. Cooperative multitasking is still used today on RISC OS systems.[1]

As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile.
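To make the idea concrete, the following is a minimal sketch in C (not drawn from any of the systems named above) of the cooperative scheme just described: a round-robin dispatcher gives each task one turn, and every task must voluntarily return control. The task names and the amount of work per turn are invented for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

/* A cooperative "task" does a small unit of work and then returns,
 * voluntarily yielding control. Returning false means it is finished. */
typedef bool (*task_fn)(void *state);

static bool count_task(void *state) {
    int *n = state;
    printf("count task: step %d\n", (*n)++);
    return *n < 3;                     /* done after three turns */
}

static bool greet_task(void *state) {
    int *n = state;
    printf("greet task: hello #%d\n", (*n)++);
    return *n < 2;                     /* done after two turns */
}

int main(void) {
    int a = 0, b = 0;
    task_fn tasks[] = { count_task, greet_task };
    void *states[]  = { &a, &b };
    bool alive[]    = { true, true };
    int remaining   = 2;

    /* Round-robin dispatcher: each live task gets one turn per pass. */
    while (remaining > 0) {
        for (int i = 0; i < 2; i++) {
            if (alive[i] && !tasks[i](states[i])) {
                alive[i] = false;
                remaining--;
            }
        }
    }
    return 0;
}
```

Because the dispatcher only regains control when a task returns, a task that looped forever would freeze every other task, which is exactly the fragility the paragraph above describes. Preemptive multitasking, discussed next, removes this reliance on well-behaved programs.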
2.3 Preemptive multitasking

Main article: Preemption (computing)

Preemptive multitasking allows the computer system to more reliably guarantee each process a regular “slice” of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking was supported on DEC’s PDP-8 computers, and implemented in OS/360 MFT in 1967, in MULTICS (1964), and Unix (1969); it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives.[2]

At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busywait", while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be “blocked”, or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.

The earliest preemptive multitasking OS available to home users was Sinclair QDOS on the Sinclair QL, released in 1984, but very few people bought the machine. Commodore’s powerful Amiga, released the following year, was the first commercially successful home computer to use the technology, and its multimedia abilities make it a clear ancestor of contemporary multitasking personal computers. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. It was later adopted on the Apple Macintosh by Mac OS X that, as a Unix-like operating system, uses preemptive multitasking for all native applications. A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively, and legacy 16-bit Windows 3.x programs are multitasked cooperatively within a single process, although in the NT family it is possible to force a 16-bit application to run as a separate preemptively multitasked process.[3] 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer provide support for legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications.
2.4 Real time

Another reason for multitasking was in the design of real-time computing systems, where a number of possibly unrelated external activities need to be controlled by a single processor system. In such systems a hierarchical interrupt system is coupled with process prioritization to ensure that key activities were given a greater share of available process time.

2.5 Multithreading

As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g., one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data.

Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as lightweight processes because switching between threads does not involve changing the memory context.[4][5][6]

While threads are scheduled preemptively, some operating systems provide a variant to threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors.

Some systems directly support multithreading in hardware.
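As an illustration of threads running in the same memory context as their parent process, the following hedged sketch uses POSIX threads (assumed to be available; compile with -pthread). Two threads update the same global counter, and a mutex serializes the updates so that they do not interfere with one another.

```c
#include <pthread.h>
#include <stdio.h>

/* Both threads run inside the same process, so they share this data. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* serialize access to shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Prints 200000: both threads updated the very same memory. */
    printf("counter = %ld\n", counter);
    return 0;
}
```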
2.6 Memory protection

Main article: Memory protection

Essential to any multitasking system is to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside of the process’s address space. This is done for the purpose of general system stability and data integrity, as well as data security.

In general, memory access management is the operating system kernel’s responsibility, in combination with hardware mechanisms (such as the memory management unit (MMU)) that provide supporting functionalities. If a process attempts to access a memory location outside of its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and the specific error in question, the user may receive an access violation error message such as “segmentation fault”.

In a well designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL.

Inadequate memory protection mechanisms, either due to flaws in their design or poor implementations, allow for security vulnerabilities that may be potentially exploited by malicious software.
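The System V shared-memory mechanism mentioned above can be seen in a few lines of C. This is only a sketch for a Unix-like system (most error handling is omitted): the kernel allocates a segment, a parent and a forked child both attach it, and a write by one process is visible to the other.

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Ask the kernel for a 4 KiB segment that several processes may attach. */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    char *mem = shmat(shmid, NULL, 0);   /* map it into this address space */

    if (fork() == 0) {                   /* child inherits the attachment */
        strcpy(mem, "written by the child process");
        shmdt(mem);
        _exit(0);
    }

    wait(NULL);                          /* let the child finish first */
    printf("parent reads: %s\n", mem);   /* same kernel-allocated memory */

    shmdt(mem);
    shmctl(shmid, IPC_RMID, NULL);       /* remove the segment */
    return 0;
}
```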
2.7 Memory swapping

Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage.

2.8 Programming

Processes that are entirely independent are not much trouble to program in a multitasking environment. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of co-operating tasks.

Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource.

Bigger systems were sometimes built with one or more central processors and some number of I/O processors, a kind of asymmetric multiprocessing.

Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.
2.10 References
[1] “Preemptive multitasking”. riscos.info. 2009-11-03. Retrieved 2014-07-27.
Chapter 3
Symmetric multiprocessing
I/O handler) and peripherals could generally be attached to either processor.[5]

The MTS supervisor (UMMPS) ran on either or both CPUs of the IBM System/360 model 67-2. Supervisor locks were small and were used to protect individual common data structures that might be accessed simultaneously from either CPU.[6]

Digital Equipment Corporation's first multi-processor VAX system, the VAX-11/782, was asymmetric,[7] but later VAX multiprocessor systems were SMP.[8]

The first commercial Unix SMP implementation was the NUMA based Honeywell Information Systems Italy XPS-100 designed by Dan Gielan of VAST Corporation in 1985. Its design supported up to 14 processors although due to electrical limitations the largest marketed version was a dual processor system. The operating system was derived and ported by VAST Corporation from AT&T 3B20 Unix SysVr3 code used internally within AT&T.

3.3 Uses

Time-sharing and server systems can often use SMP without changes to applications, as they may have multiple processes running in parallel, and a system with more than one process running can run different processes on different processors.

On personal computers, SMP is less useful for applications that have not been modified. If the system rarely runs more than one process at a time, SMP is useful only for applications that have been modified for multithreaded (multitasked) processing. Custom-programmed software can be written or modified to use multiple threads, so that it can make use of multiple processors. However, most consumer products such as word processors and computer games are written in such a manner that they cannot gain large benefits from concurrent systems. For games this is usually because writing a program to increase performance on SMP systems can produce a performance loss on uniprocessor systems. Recently, however, multi-core chips are becoming more common in new computers, and the balance between installed uni- and multi-core computers may change in the coming years.

Multithreaded programs can also be used in time-sharing and server systems that support multithreading, allowing them to make more use of multiple processors.

3.4 Programming

Uniprocessor and SMP systems require different programming methods to achieve maximum performance. Programs running on SMP systems may experience a performance increase even when they have been written for uniprocessor systems. This is because hardware interrupts that usually suspend program execution while the kernel handles them can execute on an idle processor instead. The effect in most applications (e.g. games) is not so much a performance increase as the appearance that the program is running much more smoothly. Some applications, particularly compilers and some distributed computing projects, run faster by a factor of (nearly) the number of additional processors.

Systems programmers must build support for SMP into the operating system: otherwise, the additional processors remain idle and the system functions as a uniprocessor system.

SMP systems can also lead to more complexity regarding instruction sets. A homogeneous processor system typically requires extra registers for “special instructions” such as SIMD (MMX, SSE, etc.), while a heterogeneous system can implement different types of hardware for different instructions/uses.
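A common way to apply the advice above, writing software with multiple threads so that it can use multiple processors, is to ask how many processors are online and start that many workers. The sketch below assumes a POSIX-style system where sysconf(_SC_NPROCESSORS_ONLN) and pthreads are available (compile with -pthread); the worker itself is only a placeholder.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_CPUS 64

static void *worker(void *arg) {
    long *id = arg;
    printf("worker %ld: doing one share of the work\n", *id);
    return NULL;
}

int main(void) {
    /* How many processors are currently online? */
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpu < 1) ncpu = 1;
    if (ncpu > MAX_CPUS) ncpu = MAX_CPUS;

    pthread_t threads[MAX_CPUS];
    long ids[MAX_CPUS];

    /* One worker per processor; an SMP kernel is free to schedule
     * each of them on a different CPU at the same time. */
    for (long i = 0; i < ncpu; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (long i = 0; i < ncpu; i++)
        pthread_join(threads[i], NULL);

    printf("ran %ld workers on %ld online processor(s)\n", ncpu, ncpu);
    return 0;
}
```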
3.5 Performance

When more than one program executes at the same time, an SMP system has considerably better performance than a uni-processor, because different programs can run on different CPUs simultaneously.

In cases where an SMP environment processes many jobs, administrators often experience a loss of hardware efficiency. Software programs have been developed to schedule jobs so that the processor utilization reaches its maximum potential. Good software packages can achieve this maximum potential by scheduling each CPU separately, as well as being able to integrate multiple SMP machines and clusters.

Access to RAM is serialized; this and cache coherency issues cause performance to lag slightly behind the number of additional processors in the system.

3.6 Systems

3.6.1 Entry-level systems

Before about 2006, entry-level servers and workstations with two processors dominated the SMP market. With the introduction of dual-core devices, SMP is found in most new desktop machines and in many laptop machines. The most popular entry-level SMP systems use the x86 instruction set architecture and are based on Intel’s Xeon, Pentium D, Core Duo, and Core 2 Duo based processors or AMD’s Athlon64 X2, Quad FX or Opteron 200 and 2000 series processors. Servers use those processors and other readily available non-x86 processor choices, including the Sun Microsystems UltraSPARC, Fujitsu SPARC64 III and later, SGI MIPS, Intel Itanium, Hewlett Packard PA-RISC, Hewlett-Packard (merged with Compaq, which acquired first Digital Equipment Corporation) DEC Alpha, IBM POWER and PowerPC (specifically G4 and G5 series, as well as earlier PowerPC 604 and 604e series) processors. In all cases, these systems are available in uniprocessor versions as well.
Earlier SMP systems used motherboards that have two or more CPU sockets. More recently, microprocessor manufacturers introduced CPU devices with two or more processors in one device, for example, the Itanium, POWER, UltraSPARC, Opteron, Athlon, Core 2, and Xeon all have multi-core variants. Athlon and Core 2 Duo multiprocessors are socket-compatible with uniprocessor variants, so an expensive dual socket motherboard is no longer needed to implement an entry-level SMP machine. It should also be noted that dual socket Opteron designs are technically ccNUMA designs, though they can be programmed as SMP for a slight loss in performance.

Software based SMP systems can be created by linking smaller systems together. An example of this is the software developed by ScaleMP.

With the introduction of ARM Cortex-A9 multi-core SoCs, low-cost symmetric multiprocessing embedded systems began to flourish in the form of smartphones and tablet computers with a multi-core processor.

3.6.2 Mid-level systems

The Burroughs D825 first implemented SMP in 1962.[9][10] It was implemented later on other mainframes. Mid-level servers, using between four and eight processors, can be found using the Intel Xeon MP, AMD Opteron 800 and 8000 series and the above-mentioned UltraSPARC, SPARC64, MIPS, Itanium, PA-RISC, Alpha and POWER processors. High-end systems, with sixteen or more processors, are also available with all of the above processors.

Sequent Computer Systems built large SMP machines using Intel 80386 (and later 80486) processors. Some smaller 80486 systems existed, but the major x86 SMP market began with the Intel Pentium technology supporting up to two processors. The Intel Pentium Pro expanded SMP support with up to four processors natively. Later, the Intel Pentium II, and Intel Pentium III processors allowed dual CPU systems, except for the respective Celerons. This was followed by the Intel Pentium II Xeon and Intel Pentium III Xeon processors, which could be used with up to four processors in a system natively. In 2001 AMD released their Athlon MP, or MultiProcessor CPU, together with the 760MP motherboard chipset as their first offering in the dual processor marketplace. Although several much larger systems were built, they were all limited by the physical memory addressing limitation of 64 GiB. With the introduction of 64-bit memory addressing on the AMD64 Opteron in 2003 and Intel 64 (EM64T) Xeon in 2005, systems are able to address much larger amounts of memory; their addressable limitation of 16 EiB is not expected to be reached in the foreseeable future.

3.7 Alternatives

Figure: Diagram of a typical SMP system. Three processors are connected to the same memory module through a system bus or crossbar switch.

SMP using a single shared system bus represents one of the earliest styles of multiprocessor machine architectures, typically used for building smaller computers with up to 8 processors.

Larger computer systems might use newer architectures such as NUMA (Non-Uniform Memory Access), which dedicates different memory banks to different processors. In a NUMA architecture, processors may access local memory quickly and remote memory more slowly. This can dramatically improve memory throughput as long as the data are localized to specific processes (and thus processors). On the downside, NUMA makes the cost of moving data from one processor to another, as in workload balancing, more expensive. The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users.

Finally, there is computer clustered multiprocessing (such as Beowulf), in which not all memory is available to all processors. Clustering techniques are used fairly extensively to build very large supercomputers.
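To show what keeping data local means in practice on a NUMA machine, here is a hedged sketch that uses Linux's libnuma library (assumed to be installed; link with -lnuma) to place a buffer on a chosen memory node. Threads scheduled on that node's CPUs then access it at local-memory speed, while CPUs on other nodes would pay the remote-access penalty described above.

```c
#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "this kernel exposes no NUMA support\n");
        return 1;
    }

    int nodes = numa_max_node() + 1;
    printf("system reports %d NUMA node(s)\n", nodes);

    /* Allocate 16 MiB on node 0, i.e. in one specific memory bank. */
    size_t len = 16UL * 1024 * 1024;
    char *buf = numa_alloc_onnode(len, 0);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    for (size_t i = 0; i < len; i++)   /* touch the pages so they are placed */
        buf[i] = 0;

    numa_free(buf, len);
    return 0;
}
```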
3.8 See also

• Asymmetric multiprocessing
• Binary Modular Dataflow Machine
• Locale (computer hardware)
• Massively parallel
• Non-Uniform Memory Access
• Sequent Computer Systems
• Software lockout
• Xeon Phi
3.9 References
[1] Lina J. Karam, Ismail AlKamal, Alan Gatherer, Gene A. Frantz, David V. Anderson, Brian L. Evans (2009). “Trends in Multi-core DSP Platforms”. IEEE Signal Processing Magazine, Special Issue on Signal Processing on Platforms with Multiple Cores.

[7] VAX Product Sales Guide, pages 1-23 and 1-24: the VAX-11/782 is described as an asymmetric multiprocessing system in 1982.

[9] 1962
• AMD
Chapter 4
Asymmetric multiprocessing
Asymmetric multiprocessing (AMP) was a software stopgap for handling multiple CPUs before symmetric multiprocessing (SMP) was available. It has also been used to provide less expensive options[1] on systems where SMP was available. In an asymmetric multiprocessing system, not all CPUs are treated equally; for example, a system might only allow (either at the hardware or operating system level) one CPU to execute operating system code or might only allow one CPU to perform I/O operations. Other AMP systems would allow any CPU to execute operating system code and perform I/O operations, so that they were symmetric with regard to processor roles, but attached some or all peripherals to particular CPUs, so that they were asymmetric with regard to peripheral attachment.

Multiprocessing is the use of more than one CPU in a computer system. The CPU is the arithmetic and logic engine that executes user applications; an I/O interface such as a GPU, even if it is implemented using an embedded processor, does not constitute a CPU because it does not run the user’s application program. With multiple CPUs, more than one set of program instructions can be executed at the same time. All of the CPUs have the same user-mode instruction set, so a running job can be rescheduled from one CPU to another.[2]

4.1 Background and history

For the room-size computers of the 1960s and 1970s, a cost-effective way to increase compute power was to add a second CPU. Since these computers were already close to the fastest available (near the peak of the price:performance ratio), two standard-speed CPUs were much less expensive than a CPU that ran twice as fast. Also, adding a second CPU was less expensive than a second complete computer, which would need its own peripherals, thus requiring much more floor space and an increased operations staff.

Notable early offerings by computer manufacturers were the Burroughs B5000, the DECsystem-1055, and the IBM System/360 model 65MP. There were also dual-CPU machines built at universities.[3]

The problem with adding a second CPU to a computer system was that the operating system had been developed for single-CPU systems, and extending it to handle multiple CPUs efficiently and reliably took a long time. To fill the gap, operating systems intended for single CPUs were initially extended to provide minimal support for a second CPU. In this minimal support, the operating system ran on the “boot” processor, with the other only allowed to run user programs. In the case of the Burroughs B5000, the second processor’s hardware was not capable of running “control state” code.[4]

Other systems allowed the operating system to run on all processors, but either attached all the peripherals to one processor or attached particular peripherals to particular processors.

4.2 Burroughs B5000 and B5500

An option on the Burroughs B5000 was “Processor B”. This second processor, unlike “Processor A” had no connection to the peripherals, though the two processors shared main memory, and Processor B could not run in Control State.[4] The operating system ran only on Processor A. When there was a user job to be executed, it might be run on Processor B, but when that job tried to access the operating system the processor halted and signaled Processor A. The requested operating system service was then run on Processor A.

On the B5500, either Processor A or Processor B could be designated as Processor 1 by a switch on the engineer’s panel, with the other processor being Processor 2; both processors shared main memory and had hardware access to the I/O processors hence the peripherals, but only Pro-
4.4 DECsystem-1055

Digital Equipment Corporation (DEC) offered a dual-processor version of its DECsystem-1050 which used two KA10 processors.[8][9] This offering was extended to later processors in the PDP-10 line.

4.5 PDP-11/74

Digital Equipment Corporation developed, but never released, a multiprocessor PDP-11, the PDP-11/74,[10] running a multiprocessor version of RSX-11M.[11] In that system, either processor could run operating system code, and could perform I/O, but not all peripherals were accessible to all processors; most peripherals were attached to one or the other of the CPUs, so that a processor to which a peripheral wasn't attached would, when it needed to perform an I/O operation on that peripheral, request the processor to which the peripheral was attached to perform the operation.[11]

4.9 See also

• Multi-core (computing)
• Software lockout
• Giant lock
• Symmetric multiprocessing
• Heterogeneous computing
• big.LITTLE
• Tegra 3

4.10 Notes

[1] IBM (December 1976). IBM System/370 System Summary. Seventh Edition. pp. 6–12, 6–15–6.16.1. GA22·7001·6.

[2] Introduction to Multiprocessing: distinguishes “symmetric” from “master/slave”

[3] Early Computers at Stanford: the dual processor computer at the AI lab

[4] “Operational Characteristics of the Processors for the Burroughs B5000”. Burroughs.

[12] VAX Product Sales Guide, pages 1-23 and 1-24: the VAX-11/782 is described as an asymmetric multiprocessing system in 1982
4.11 References
• Bell, C. Gordon; Mudge, J. Craig; McNamara, John E. “The PDP-10 Family” (1979). Part V of Computer Engineering: A DEC View of Hardware Systems Design. Digital Equipment Corp.
Chapter 5
Non-uniform memory access
include additional hardware or software to move data between memory banks. This operation slows the processors attached to those banks, so the overall speed increase due to NUMA depends heavily on the nature of the running tasks.[3]

Intel announced NUMA compatibility for its x86 and Itanium servers in late 2007 with its Nehalem and Tukwila CPUs.[5] Both CPU families share a common chipset; the interconnection is called Intel Quick Path Interconnect (QPI).[6] AMD implemented NUMA with its Opteron processor (2003), using HyperTransport. Freescale’s NUMA for PowerPC is called CoreNet.

As of 2011, ccNUMA systems are multiprocessor systems based on the AMD Opteron processor, which can be implemented without external logic, and the Intel Itanium processor, which requires the chipset to support NUMA. Examples of ccNUMA-enabled chipsets are the SGI Shub (Super hub), the Intel E8870, the HP sx2000 (used in the Integrity and Superdome servers), and those found in NEC Itanium-based systems. Earlier ccNUMA systems such as those from Silicon Graphics were based on MIPS processors and the DEC Alpha 21364 (EV7) processor.

• Symmetric multiprocessing (SMP)
• Cache only memory architecture (COMA)
• Scratchpad memory (SPM)
• Supercomputer
• Silicon Graphics (SGI)

[3] Zoltan Majo; Thomas R. Gross (2011). “Memory System Performance in a NUMA Multicore Multiprocessor” (PDF). ACM. Retrieved 2014-01-27.

[4] “Intel Dual-Channel DDR Memory Architecture White Paper” (PDF, 1021 KB) (Rev. 1.0 ed.). Infineon Technologies North America and Kingston Technology. September 2003. Archived from the original on 2011-09-29. Retrieved 2007-09-06.

[5] Intel Corp. (2008). Intel QuickPath Architecture [White paper]. Retrieved from http://www.intel.com/pressroom/archive/reference/whitepaper_QuickPath.pdf

[6] Intel Corporation. (September 18th, 2007). Gelsinger Speaks To Intel And High-Tech Industry’s Rapid Technology Caden[Press release]. Retrieved from http://www.intel.com/pressroom/archive/releases/2007/20070918corp_b.htm

[7] “ccNUMA: Cache Coherent Non-Uniform Memory Access”. slideshare.net. 2014. Retrieved 2014-01-27.

[12] Java HotSpot™ Virtual Machine Performance Enhancements

[13] “Linux Scalability Effort: NUMA Group Homepage”. sourceforge.net. 2002-11-20. Retrieved 2014-02-06.

[14] “Linux kernel 3.8, Section 1.8. Automatic NUMA balancing”. kernelnewbies.org. 2013-02-08. Retrieved 2014-02-06.

5.7 External links

• NUMA FAQ
• Page-based distributed shared memory
• OpenSolaris NUMA Project
• Introduction video for the Alpha EV7 system architecture
• More videos related to EV7 systems: CPU, IO, etc
• NUMA optimization in Windows Applications
• NUMA Support in Linux at SGI
• Intel Tukwila
• Intel QPI (CSI) explained
• current Itanium NUMA systems
Chapter 6
Multi-core processor
diminished gains in processor performance from in- (FSB). In terms of competing technologies for the avail-
creasing the operating frequency. This is due to able silicon die area, multi-core design can make use of
three primary factors: proven CPU core library designs and produce a product
with lower risk of design error than devising a new wider
1. The memory wall; the increasing gap between core-design. Also, adding more cache suffers from di-
processor and memory speeds. This, in effect, minishing returns.
pushes for cache sizes to be larger in order to
Multi-core chips also allow higher performance at lower
mask the latency of memory. This helps only
energy. This can be a big factor in mobile devices that op-
to the extent that memory bandwidth is not the
erate on batteries. Since each and every core in multi-core
bottleneck in performance.
is generally more energy-efficient, the chip becomes more
2. The ILP wall; the increasing difficulty of find- efficient than having a single large monolithic core. This
ing enough parallelism in a single instruction allows higher performance with less energy. The chal-
stream to keep a high-performance single-core processor busy.

3. The power wall; the trend of consuming exponentially increasing power with each further increase of operating frequency. This increase can be mitigated by “shrinking” the processor by using smaller traces for the same logic. The power wall poses manufacturing, system design and deployment problems that have not been justified in the face of the diminished gains in performance due to the memory wall and ILP wall.

In order to continue delivering regular performance improvements for general-purpose processors, manufacturers such as Intel and AMD have turned to multi-core designs, sacrificing lower manufacturing costs for higher performance in some applications and systems. Multi-core architectures are being developed, but so are the alternatives. An especially strong contender for established markets is the further integration of peripheral functions into the chip.

6.2.3 Advantages

The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to operate at a much higher clock rate than is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (alternative: bus snooping) operations. Put simply, this means that signals between different CPUs travel shorter distances, and therefore those signals degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.

Assuming that the die can physically fit into the package, multi-core CPU designs require much less printed circuit board (PCB) space than do multi-chip SMP designs. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because of the decreased power required to drive signals external to the chip. Furthermore, the cores share some circuitry, like the L2 cache and the interface to the front side bus (FSB); however, the challenge of writing parallel code clearly offsets this benefit.[9]

6.2.4 Disadvantages

Maximizing the usage of the computing resources provided by multi-core processors requires adjustments both to the operating system (OS) support and to existing application software. Also, the ability of multi-core processors to increase application performance depends on the use of multiple threads within applications.

Integration of a multi-core chip drives chip production yields down, and multi-core chips are more difficult to manage thermally than lower-density single-core designs. Intel has partially countered this first problem by creating its quad-core designs by combining two dual-core dies in a single package with a unified cache, hence any two working dual-core dies can be used, as opposed to producing four cores on a single die and requiring all four to work to produce a quad-core. From an architectural point of view, ultimately, single CPU designs may make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture may carry the risk of obsolescence. Finally, raw processing power is not the only constraint on system performance: two processing cores sharing the same system bus and memory bandwidth limit the real-world performance advantage. It has been claimed that if a single core is close to being memory-bandwidth limited, then going to dual-core might give a 30% to 70% improvement; if memory bandwidth is not a problem, then a 90% improvement can be expected; however, Amdahl’s law makes this claim dubious.[10] It would be possible for an application that used two CPUs to end up running faster on one dual-core if communication between the CPUs was the limiting factor, which would count as more than 100% improvement.
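To see why Amdahl’s law makes blanket percentage claims dubious, it helps to plug in numbers. The following C++ sketch is an illustration added here (not code from the article’s sources); it evaluates the textbook formula speedup = 1 / ((1 − p) + p/n), where p is the parallelizable fraction of the work and n is the number of cores:

```cpp
#include <cstdio>

// Amdahl's law: the best-case speedup on n cores when only a
// fraction p of the total work can be parallelized.
double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const int cores = 2;                        // the dual-core case discussed above
    const double fractions[] = {0.5, 0.8, 0.95};
    for (double p : fractions) {
        std::printf("parallel fraction %.2f -> at most %.2fx on %d cores\n",
                    p, amdahl_speedup(p, cores), cores);
    }
    return 0;
}
```

Even with 95% of the work parallelized, two cores deliver at most about a 1.9x speedup, and a 50% parallel fraction caps the gain at roughly 1.33x, which is why the 30%–90% figures quoted above should be read as rough, workload-dependent estimates.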
6.3 Hardware

6.3.1 Trends

The general trend in processor development has moved from dual-, tri-, quad-, hex-, oct-core chips to ones with tens or even thousands of cores. In addition, multi-core chips mixed with simultaneous multithreading, memory-on-chip, and special-purpose “heterogeneous” cores promise further performance and efficiency gains, especially in processing multimedia, recognition and networking applications. There is also a trend of improving energy efficiency by focusing on performance-per-watt with advanced fine-grain or ultra fine-grain power management and dynamic voltage and frequency scaling (e.g. in laptop computers and portable media players).

6.3.2 Architecture

The composition and balance of the cores in multi-core architecture show great variety. Some architectures use one core design repeated consistently (“homogeneous”), while others use a mixture of different cores, each optimized for a different, “heterogeneous” role.

The article “CPU designers debate multi-core future” by Rick Merritt, EE Times 2008,[11] includes these comments:

Chuck Moore [...] suggested computers should be more like cellphones, using a variety of specialty cores to run modular software scheduled by a high-level applications programming interface.

[...] Atsushi Hasegawa, a senior chief engineer at Renesas, generally agreed. He suggested the cellphone’s use of many specialty cores working in concert is a good model for future multi-core designs.

[...] Anant Agarwal, founder and chief executive of startup Tilera, took the opposing view. He said multi-core chips need to be homogeneous collections of general-purpose cores to keep the software model simple.
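Whether the cores are homogeneous or heterogeneous, portable software usually starts by asking the platform how much hardware parallelism it exposes. The short C++ sketch below is an illustrative addition (not from the article’s sources) showing the standard-library way to do this; note that the value counts hardware threads, so a chip with simultaneous multithreading reports more threads than physical cores, and the call may return 0 if the count is unknown:

```cpp
#include <iostream>
#include <thread>

int main() {
    // Number of concurrent hardware threads the implementation reports;
    // on an SMT-capable chip this is typically cores x threads-per-core.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) {
        std::cout << "Hardware thread count unknown on this platform\n";
    } else {
        std::cout << "This machine exposes " << n << " hardware threads\n";
    }
    return 0;
}
```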
6.4 Software effects

An outdated version of an anti-virus application may create a new thread for the scan process, while its GUI thread waits for commands from the user (e.g. to cancel the scan). In such cases, a multi-core architecture is of little benefit for the application itself, due to the single thread doing all the heavy lifting and the inability to balance the work evenly across multiple cores. Programming truly multithreaded code often requires complex co-ordination of threads and can easily introduce subtle and difficult-to-find bugs due to the interweaving of processing on data shared between threads (thread safety). Consequently, such code is much more difficult to debug than single-threaded code when it breaks. There has been a perceived lack of motivation for writing consumer-level threaded applications because of the relative rarity of consumer-level demand for maximum use of computer hardware. Although threaded applications incur little additional performance penalty on single-processor machines, the extra overhead of development has been difficult to justify due to the preponderance of single-processor machines. Also, serial tasks like decoding the entropy encoding algorithms used in video codecs are impossible to parallelize, because each result generated is used to help create the next result of the entropy decoding algorithm.

Given the increasing emphasis on multi-core chip design, stemming from the grave thermal and power consumption problems posed by any further significant increase in processor clock speeds, the extent to which software can be multithreaded to take advantage of these new chips is likely to be the single greatest constraint on computer performance in the future. If developers are unable to design software to fully exploit the resources provided by multiple cores, then they will ultimately reach an insurmountable performance ceiling.

The telecommunications market had been one of the first that needed a new design of parallel datapath packet processing, because there was a very quick adoption of these multiple-core processors for the datapath and the control plane. These MPUs are going to replace[12] the traditional network processors that were based on proprietary micro- or pico-code.

Parallel programming techniques can benefit from multiple cores directly. Some existing parallel programming models such as Cilk Plus, OpenMP, OpenHMPP, FastFlow, Skandium, MPI, and Erlang can be used on multi-core platforms. Intel introduced a new abstraction for C++ parallelism called TBB. Other research efforts include the Codeplay Sieve System, Cray’s Chapel, Sun’s Fortress, and IBM’s X10.

Multi-core processing has also affected modern computational software development. Developers programming in newer languages might find that their modern languages do not support multi-core functionality. This then requires the use of numerical libraries to access code written in languages like C and Fortran, which perform math computations faster than newer languages like C#. Intel’s MKL and AMD’s ACML are written in these native languages and take advantage of multi-core processing. Balancing the application workload across processors can be problematic, especially if they have different performance characteristics. There are different conceptual models to deal with the problem, for example using a coordination language and program building blocks (programming libraries or higher-order functions). Each block can have a different native implementation for each processor type. Users simply program using these abstractions and an intelligent compiler chooses the best implementation based on the context.[13]

Managing concurrency acquires a central role in developing parallel applications.
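As a concrete illustration of the shared-memory models named above, here is a minimal OpenMP sketch (an illustrative addition, not code from the article’s sources) that spreads a summation over however many cores the runtime provides; the reduction clause gives each thread a private partial sum, avoiding the shared-data races described earlier:

```cpp
#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    std::vector<double> data(1000000, 1.0);
    double sum = 0.0;

    // Each thread accumulates its own partial sum; OpenMP combines them
    // at the end of the loop, so no explicit locking is required.
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < static_cast<long>(data.size()); ++i) {
        sum += data[i];
    }

    std::printf("sum = %.1f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```

Built with an OpenMP-enabled compiler (for example g++ -fopenmp), the same source runs unchanged on a single-core or a many-core machine; only the degree of parallelism differs.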
• Microsoft has stated that it would treat a socket as a single processor.[14][15]

• Oracle Corporation counts an AMD X2 or an Intel dual-core CPU as a single processor but uses other metrics for other types, especially for processors with more than two cores.[16]

• Aeroflex Gaisler LEON3, a multi-core SPARC that also exists in a fault-tolerant version.

• Ageia PhysX, a multi-core physics processing unit.

• Ambric Am2045, a 336-core Massively Parallel Processor Array (MPPA)

• AMD
  • A-Series, dual-, triple-, and quad-core Accelerated Processing Units (APU).
  • Athlon 64, Athlon 64 FX and Athlon 64 X2 family, dual-core desktop processors.
  • Athlon II, dual-, triple-, and quad-core desktop processors.
  • FX-Series, quad-, 6-, and 8-core desktop processors.
  • Opteron, dual-, quad-, 6-, 8-, 12-, and 16-core server/workstation processors.
  • Phenom, dual-, triple-, and quad-core processors.
  • Phenom II, dual-, triple-, quad-, and 6-core desktop processors.
  • Sempron X2, dual-core entry-level processors.

• Intel
  • Xeon, dual-, quad-, 6-, 8-, 10- and 15-core processors.[19]
  • Xeon Phi, 57-core, 60-core and 61-core processors.

• IntellaSys
  • SEAforth 40C18, a 40-core processor[20]
  • SEAforth24, a 24-core processor designed by Charles H. Moore

• NetLogic Microsystems
  • XLP, a 32-core, quad-threaded MIPS64 processor
  • XLR, an eight-core, quad-threaded MIPS64 processor
  • XLS, an eight-core, quad-threaded MIPS64 processor

• Nvidia

• IBM
  • POWER4, a dual-core processor, released in 2001.
  • POWER5, a dual-core processor, released in 2004.
  • POWER6, a dual-core processor, released in 2007.
  • POWER7, a 4-, 6-, or 8-core processor, released in 2010.
  • POWER8, a 12-core processor, released in 2013.
  • PowerPC 970MP, a dual-core processor, used in the Apple Power Mac G5.
  • Xenon, a triple-core, SMT-capable, PowerPC microprocessor used in the Microsoft Xbox 360 game console.

• Oracle
  • SPARC T5, a sixteen-core, 128-concurrent-thread processor.

• Texas Instruments
  • TMS320C80 MVP, a five-core multimedia video processor.
  • TMS320C66, a 2-, 4-, or 8-core DSP.

• Tilera
  • TILE64, a 64-core 32-bit processor
  • TILE-Gx, a 72-core 64-bit processor

• XMOS Software Defined Silicon quad-core XS1-G4

6.6.2 Free

• OpenSPARC

1. ^ [...] multi-core DSPs with very large numbers of processors.

2. ^ Two types of operating systems are able to use a dual-CPU multiprocessor: partitioned multiprocessing and symmetric multiprocessing (SMP). In a partitioned architecture, each CPU boots into a separate segment of physical memory and operates independently; in an SMP OS, processors work in a shared space, executing threads within the OS independently.

• Multicore Association
• Hyper-threading
• Multitasking
• PureMVC MultiCore – a modular programming framework
• XMTC
• Parallel Random Access Machine
• Partitioned global address space (PGAS)
• Thread
• CPU shielding
• GPGPU
• CUDA
• OpenCL (Open Computing Language) – a framework for heterogeneous execution
• Ateji PX – an extension of the Java language for parallelism

6.10 References

[1] Margaret Rouse (March 27, 2007). “Definition: multi-core processor”. TechTarget. Retrieved March 6, 2013.
[2] CSA Organization
[3] “Rockwell R65C00/21 Dual CMOS Microcomputer and R65C29 Dual CMOS Microprocessor”. Rockwell International. October 1984.
[4] “Rockwell 1985 Data Book”. Rockwell International Semiconductor Products Division. January 1985.
[5] Aater Suleman (May 20, 2011). “What makes parallel programming hard?”. FutureChips. Retrieved March 6, 2013.
[6] Programming Many-Core Chips. By András Vajda, page 3.
[7] Ryan Shrout (December 2, 2009). “Intel Shows 48-core x86 Processor as Single-chip Cloud Computer”. Retrieved March 6, 2013.
[8] “Intel unveils 48-core cloud computing silicon chip”. BBC. December 3, 2009. Retrieved March 6, 2013.
[11] Rick Merritt (February 6, 2008). “CPU designers debate multi-core future”. EE Times. Retrieved March 6, 2013.
[12] Multicore packet processing Forum.
[13] John Darlinton, Moustafa Ghanem, Yike Guo, Hing Wing To (1996), “Guided Resource Organisation in Heterogeneous Parallel Computing”, Journal of High Performance Computing 4 (1): 13–23.
[14] Multicore Processor Licensing.
[15] Compare: “Multi-Core Processor Licensing”. download.microsoft.com. Microsoft Corporation. 2004-10-19. p. 1. Retrieved 2015-03-05. On October 19, 2004, Microsoft announced that our server software that is currently licensed on a per-processor model will continue to be licensed on a per-processor, and not on a per-core, model.
[16] Compare: “The Licensing Of Oracle Technology Products”. OMT-CO Operations Management Technology Consulting GmbH. Retrieved 2014-03-04.
[17] Maximizing network stack performance.
[18] 80-core prototype from Intel.
[19] 15-core Xeon.

6.11 External links

• What Is A Processor Core?
• Embedded moves to multicore
• Multicore News blog
• IEEE: Multicore Is Bad News For Supercomputers
Chapter 7

Intel Atom

This article is about the netbook and MID version of Atom. It is not to be confused with the Atom (system on chip) for smartphones and tablets.

Intel Atom is the brand name for a line of ultra-low-voltage IA-32 and x86-64 CPUs (or microprocessors) from Intel, originally designed in 45 nm complementary metal–oxide–semiconductor (CMOS) technology, with subsequent models, codenamed Cedar, using a 32 nm process.[2]

Atom is mainly used in netbooks, nettops, embedded applications ranging from health care to advanced robotics, and mobile Internet devices (MIDs).

Atom processors are based on the Bonnell microarchitecture.[3][4] On 21 December 2009, Intel announced the Pine Trail platform, including the new Atom processor code-named Pineview (Atom N450), with total kit power consumption down 20%.[5] On 28 December 2011, Intel updated the Atom line with the Cedar processors.[2]

In December 2012, Intel launched the 64-bit Centerton family of Atom CPUs, designed specifically for use in servers.[6] Centerton adds features previously unavailable in Atom processors, such as Intel VT virtualization technology and support for ECC memory.[7] On 4 September 2013 Intel launched a 22 nm successor to Centerton, codenamed Avoton.[8]

In 2012, Intel announced a new system on chip (SoC) platform designed for smartphones and tablets which would use the Atom line of CPUs.[9] It is a continuation of the partnership announced by Intel and Google on 13 September 2011 to provide support for the Android operating system on Intel x86 processors.[10] This range competes with existing SoCs developed for the smartphone and tablet market by companies like Texas Instruments, Nvidia, Qualcomm and Samsung.[11]

Intel Atom is a direct successor of the Intel A100 and A110 low-power microprocessors (code-named Stealey), which were built on a 90 nm process, had 512 kB L2 cache and ran at 600 MHz/800 MHz with a 3 W TDP (thermal design power). Prior to the Silverthorne announcement, outside sources had speculated that Atom would compete with AMD's Geode system-on-a-chip processors, used by the One Laptop per Child (OLPC) project, and other cost- and power-sensitive applications for x86 processors. However, Intel revealed on 15 October 2007 that it was developing another new mobile processor, codenamed Diamondville, for OLPC-type devices.[12]

“Atom” was the name under which Silverthorne would be sold, while the supporting chipset formerly code-named Menlow was called Centrino Atom.[13]

At the Spring Intel Developer Forum (IDF) 2008 in Shanghai, Intel officially announced that Silverthorne and Diamondville are based on the same microarchitecture. Silverthorne would be called the Atom Z5xx series and Diamondville would be called the Atom N2xx series. The more expensive lower-power Silverthorne parts would be used in Intel mobile Internet devices (MIDs), whereas Diamondville would be used in low-cost desktops and notebooks. Several Mini-ITX motherboard samples have also been revealed.[14] Intel and Lenovo also jointly announced an Atom-powered MID called the IdeaPad U8.[15]

In April 2008, a MID development kit was announced by Sophia Systems,[16] and the first board, called CoreExpress-ECO, was revealed by the German company LiPPERT Embedded Computers, GmbH.[17] Intel offers Atom-based motherboards.[18][19]

In December 2012, Intel released Atom for servers, the S1200 series. The primary difference between these processors and all prior versions is that ECC memory support has been added, enabling the use of the Atom in mission-critical server environments that demand redundancy and memory failure protection.

7.2.1 32-bit and 64-bit hardware support

All Atom processors implement the x86 (IA-32) instruction set; however, support for the Intel 64 instruction set was not added until the desktop Diamondville and desktop
and mobile Pineview cores. The Atom N2xx and Z5xx series Atom models cannot run x86-64 code.[21] The Centerton server processors will support the Intel 64 instruction set.[7]

7.2.2 Intel 64 software support

Intel states the Atom supports 64-bit operation only “with a processor, chipset, BIOS” that all support Intel 64. Those Atom systems not supporting all of these cannot enable Intel 64.[22] As a result, the ability of an Atom-based system to run 64-bit versions of operating systems such as Ubuntu or Debian GNU/Linux may vary from one motherboard to another. Online retailer mini-itx.com has tested Atom-based motherboards made by Intel and Jetway, and while it was able to install 64-bit versions of Linux on Intel-branded motherboards with D2700 (Cedarview) processors, Intel 64 support was not enabled on a Jetway-branded motherboard with a D2550 (Cedarview) processor.[23]

Even among Atom-based systems which have Intel 64 enabled, not all are able to run 64-bit versions of Microsoft Windows. For those Pineview processors which support 64-bit operation, Intel Download Center currently provides 64-bit Windows 7 and Windows Vista drivers for the Intel GMA 3150 graphics found in Pineview processors.[24] However, no 64-bit Windows drivers are available for Intel Atom Cedarview processors, released in Q3 2011.[25] Intel’s Bay Trail-M processors, built on the Silvermont microarchitecture and released in the second half of 2013, regain 64-bit support, although driver support for Linux and Windows 7 was limited at launch.[26]

The lack of 64-bit Windows support for Cedarview processors appears to be due to a driver issue. A member of the Intel Enthusiast Team has stated in a series of posts on the enthusiast site Tom’s Hardware that while the Atom D2700 (Cedarview) was designed with Intel 64 support, due to a “limitation of the board” Intel had pulled its previously available 64-bit drivers for Windows 7 and would not provide any further 64-bit support.[27] Some system manufacturers have similarly stated that their motherboards with Atom Cedarview processors lack 64-bit support due to a “lack of Intel® 64-bit VGA driver support”.[28] Because all Cedarview processors use the same Intel GMA 3600 or 3650 graphics as the D2700, this indicates that Atom Cedarview systems will remain unable to run 64-bit versions of Windows, even those which have Intel 64 enabled and are able to run 64-bit versions of Linux.
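On Linux, one quick way to check the CPU side of this requirement is to look for the lm (long mode) flag that the kernel reports in /proc/cpuinfo. The C++ sketch below is an illustrative addition (not from the cited sources); a positive result only covers the processor, and, as described above, the chipset, BIOS and drivers must also cooperate before a 64-bit OS will actually run:

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Reports whether the CPU advertises the x86-64 "long mode" flag ("lm")
// in /proc/cpuinfo. A 64-bit-capable CPU alone is not sufficient for
// Intel 64 operation; chipset and BIOS support are also required.
int main() {
    std::ifstream cpuinfo("/proc/cpuinfo");
    std::string line;
    while (std::getline(cpuinfo, line)) {
        if (line.rfind("flags", 0) == 0) {          // the "flags : ..." line
            std::istringstream flags(line);
            std::string token;
            while (flags >> token) {
                if (token == "lm") {
                    std::cout << "CPU reports 64-bit (long mode) support\n";
                    return 0;
                }
            }
        }
    }
    std::cout << "No 64-bit (lm) flag reported\n";
    return 1;
}
```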
7.3 Availability

Atom processors became available to system manufacturers in 2008. Because they are soldered, like northbridges and southbridges, onto a mainboard, Atom processors are not available to home users or system builders as separate processors, although they may be obtained preinstalled on some ITX motherboards. The Diamondville and Pineview[29] Atom is used in the HP Mini Series, aigo MID, ASUS N10, Lenovo IdeaPad S10, Acer Aspire One and Packard Bell’s “dot” (ZG5), recent ASUS Eee PC systems, Sony VAIO M-series, AMtek Elego, Dell Inspiron Mini Series, Gigabyte M912, LG X Series, Samsung NC10, Sylvania g Netbook Meso, Toshiba NB series (100, 200, 205, 255, 300, 500, 505), MSI Wind PC netbooks, RedFox Wizbook 1020i, Sony Vaio X Series, Zenith Z-Book, a range of Aleutia desktops, Magic W3 and the Archos. The Pineview line is also used in multiple AAC devices for disabled individuals who are unable to speak; the AAC device assists the user in everyday communication with dedicated speech software.

7.4 Performance

The performance of a single-core Atom is about half that of a Pentium M of the same clock rate. For example, the Atom N270 (1.60 GHz) found in many netbooks such as the Eee PC can deliver around 3300 MIPS and 2.1 GFLOPS in standard benchmarks,[30] compared to 7400 MIPS and 3.9 GFLOPS for the similarly clocked (1.73 GHz) Pentium M 740.[31]

The Pineview platform has proven to be only slightly faster than the previous Diamondville platform. This is because the Pineview platform uses the same Bonnell execution core as Diamondville and is connected to the memory controller via the FSB, hence memory latency and performance in CPU-intensive applications are minimally improved.[32]

7.5 Bonnell microarchitecture

Main article: Bonnell (microarchitecture)

Intel Atom processors are based on the Bonnell microarchitecture,[3][4] which can execute up to two instructions per cycle. Like many other x86 microprocessors, it translates x86 instructions (CISC instructions) into simpler internal operations (sometimes referred to as micro-ops, i.e., effectively RISC-style instructions) prior to execution. The majority of instructions produce one micro-op when translated, with around 4% of instructions used in typical programs producing multiple micro-ops. The number of instructions that produce more than one micro-op is significantly fewer than in the P6 and NetBurst microarchitectures. In the Bonnell microarchitecture, internal micro-ops can contain both a memory load and a memory store in connection with an ALU operation, thus being more similar to the x86 level and more
powerful than the micro-ops used in previous designs.[33] This enables relatively good performance with only two integer ALUs, and without any instruction reordering, speculative execution, or register renaming. The Bonnell microarchitecture therefore represents a partial revival of the principles used in earlier Intel designs such as P5 and the i486, with the sole purpose of enhancing the performance-per-watt ratio. However, Hyper-Threading is implemented in an easy (i.e., low-power) way to employ the whole pipeline efficiently by avoiding the typical single-thread dependencies.[33]

7.6 Collaborations

In March 2009, Intel announced that it would be collaborating with TSMC for the production of the Atom processors.[34] The deal was put on hold due to lack of demand in 2010.

On 13 September 2011 Intel and Google held a joint announcement of a partnership to provide support in Google’s Android operating system for Intel processors (beginning with the Atom). This would allow Intel to supply chips for the growing smartphone and tablet market.[35]

7.7 Competition

Embedded processors based on the ARM version 7 instruction set architecture (such as Nvidia's Tegra 3 series, TI’s OMAP 4 series and Freescale’s i.MX51 based on the Cortex-A8 core, or the Qualcomm Snapdragon and Marvell Armada 500/600 based on custom ARMv7 implementations) offer similar performance to the low-end Atom chipsets but at roughly one quarter the power consumption, and (like most ARM systems) as a single integrated system on a chip, rather than a two-chip solution like the current Atom line. Although the second-generation Atom codenamed “Pineview” should greatly increase its competitiveness in performance/watt, ARM plans to counter the threat with the multi-core capable Cortex-A9 core as used in Nvidia’s Tegra 2/3, TI’s OMAP 4 series, and Qualcomm's next-generation Snapdragon series, among others.

The Nano and Nano Dual-Core series from VIA is slightly above the average thermal envelope of the Atom, but offers hardware AES support, random number generators, and out-of-order execution. Performance comparisons of the Intel Atom against the VIA Nano indicate that a single-core Intel Atom is easily outperformed by the VIA Nano, which is in turn outperformed by a dual-core Intel Atom 330 in tests where multithreading is used. The Core 2 Duo SU7300 outperforms the dual-core Nano.[36][37][38][39][40][41][42][43]

The Xcore86 (also known as the PMX 1000) is an x586-based System on Chip (SoC) that offers a below-average thermal envelope compared to the Atom.

Kenton Williston of EE Times said that while Atom will not displace ARM from its current markets, the ability to apply the PC architecture into smaller, cheaper and lower-power form factors will open up new markets for Intel.[44]

ARM has found that Intel’s Atom processors offer less compatibility and lower performance than their chips when running Android, and higher power consumption and less battery life for the same tasks under both Android and Windows.[45]

AMD also competes in this space with the Mullins brand, based on the Puma microarchitecture, which offers better compute and even better graphics performance within a similar thermal envelope.

7.8 See also

• List of Intel Atom microprocessors
• Intel Edison
• Intel Quark

7.9 Notes

[1] “Intel® Atom™ Processor Z520”. Intel. Archived from the original on 2011-07-04.
[2] Anand Lal Shimpi. “Intel’s Atom N2600, N2800 & D2700: Cedar Trail, The Heart of the 2012 Netbook”. Archived from the original on 2014-04-29. Retrieved 28 December 2011.
[3] Jeff Moriarty (1 April 2008). “'Atom 101' – Deciphering the Intel codewords around MIDs”. Archived from the original on 2012-03-27. Retrieved 4 August 2010.
[4] Anand Lal Shimpi (27 January 2010). “Why Pine Trail Isn't Much Faster Than the First Atom”. Archived from the original on 2014-01-04. Retrieved 4 August 2010.
[5] “Intel Announces Next-Generation Atom Platform”. Intel. Archived from the original on 2013-06-06.
[6] “Products (Formerly Centerton)”. Archived from the original on 2013-10-14. Retrieved 22 March 2013.
[7] Ryan Smith (11 December 2012). “Intel Launches Centerton Atom S1200 Family, First Atom for Servers”. Archived from the original on 2014-05-02. Retrieved 22 March 2013.
[8] Inside Intel’s Atom C2000-series 'Avoton' processors. Archived February 9, 2014 at the Wayback Machine.
[9] Intel Raises Bar on Smartphones, Tablets and Ultrabook™ Devices.
[10] Antara News: Intel, Google announce partnership for Android smartphones.
[11] Sadauskas, Andrew (30 April 2012). “Intel battles ARM [30] “SiSoft Sandra : Atom Benchmarked: 4W Of Perfor-
with new handset”. smartcompany.com.au. Retrieved 29 mance”. Tomshardware.com. 29 July 2008. Retrieved
May 2012. 4 April 2010.
[12] “Intel to unveil OLPC chips in Shanghai next April”. [31] “Intel Pentium M 740 PCSTATS Review - Benchmarks:
InfoWorld. 15 October 2007. Archived from the origi- Office Productivity, SiSoft Sandra 2005”. PCstats.com.
nal on 2012-03-11. Archived from the original on 2013-10-29.
[13] “Intel Announces Atom Brand for Silverthorne, Menlow”. [32] “Why Pine Trail Isn't Much Faster Than the First Atom”.
PC World. Archived from the original on 2008-07-09. AnandTech. Archived from the original on 2010-02-01.
Retrieved 4 April 2010.
[14] “Intel Developer Forum Spring 2008: Day 1 – Hardware
Upgrade”. Hwupgrade.it. 30 July 2005. Archived from [33] “Intel’s Atom Architecture: The Journey Begins”.
the original on 2012-01-12. Retrieved 4 April 2010. AnandTech. Archived from the original on 2009-05-31.
Retrieved 4 April 2010.
[15] “Lenovo exhibits Atom based MID Ideapad U8 at IDF
2008 : Specs, reviews and prices”. Archived from the [34] “TSMC To Build Intel’s Atom-Based Chips”. Forbes. 2
original on 2012-02-23. March 2009. Archived from the original on 2012-10-27.
Retrieved 3 March 2009.
[16] “MID dev kit sports Centrino Atom chipset”. Archived
from the original on 2009-03-02. Retrieved 29 January [35] “Intel, Google announce partnership for Android smart-
2011. phones”. 14 September 2011. Archived from the original
on 2013-12-04.
[17] “Tiny Centrino Atom-based module unveiled”. Archived
from the original on 2009-04-27. Retrieved 29 January [36] “Intel Atom vs. VIA Nano Platform Comparo Introduc-
2011. tion”. TweakTown. 11 August 2008. Archived from the
original on 2014-04-13. Retrieved 4 April 2010.
[18] “Intel Desktop Board D945GCLF – Overview”. Archived
from the original on 2008-08-21. Retrieved 29 January [37] “VIA Nano Dual Core Preview”. 26 December 2010.
2011. Archived from the original on 2014-04-13. Retrieved 26
December 2010.
[19] “Intel offers $80 “Little Falls” Atom mobo”. Archived
from the original on 2009-02-16. Retrieved 29 January [38] Kyle Bennett. “Introduction & Power - Intel Atom vs.VIA
2011. Nano”. Hardocp.com. Archived from the original on
2012-02-19. Retrieved 4 April 2010.
[20] “Products: SPECIFICATIONS: Intel® Atom™ Proces-
sor”. [39] “VIA Nano vs Intel Atom”. TrustedReviews. Archived
from the original on 2009-09-05. Retrieved 4 April 2010.
[21] “Intel Atom Processor Specifications”. Intel.com.
Archived from the original on 2011-03-17. Retrieved 4 [40] “VIA Nano Outperforms Intel Atom in Actual Industry
April 2010. Performance Benchmarking tests”. Mydigitallife.info. 31
July 2008. Archived from the original on 2010-01-02.
[22] “Intel N2600 : Atom Benchmarked: 4W Of Perfor- Retrieved 4 April 2010.
mance”. Intel.com. 28 August 2012. Archived from the
original on 2014-04-21. Retrieved 28 August 2012. [41] “Intel Atom Initial Benchmarking Data vs. Pentium and
Celeron M Processors Before Official Release”. Mydigi-
[23] “mini-itx.com - store - Intel Atom Mini-ITX boards”. tallife.info. 8 March 2008. Archived from the original on
mini-itx.com. Archived from the original on 2013-06-13. 2011-04-08. Retrieved 4 April 2010.
Retrieved 4 March 2013.
[42] “EEE PC vs MSI Wind - Atom vs Celeron CPU Perfor-
[24] “Download Center”. Intel.com. Archived from the origi- mance Benchmark: Netbooks, EEE PC, MSI Wind, As-
nal on 2014-03-18. Retrieved 4 March 2013. pire One and Akoya Resources”. Eeejournal.com. 11
[25] “Logic Supply Cedar View”. logicsupply.com. Archived May 2008. Archived from the original on 2014-04-13.
from the original on 2013-10-26. Retrieved 4 March Retrieved 4 April 2010.
2013. [43] “Intel Atom 230/330/VIA Nano performances con-
[26] “Logic Supply Bay Trail Offers Performance Boost”. log- trasted”. En.hardspell.com. 25 September 2008.
icsupply.com. Archived from the original on 2014-03-17. Archived from the original on 2008-12-20. Retrieved 4
Retrieved 17 March 2013. April 2010.
[27] "[Solved] Atom D2700 (Cedar Trail) 32 bit?". tomshard- [44] “Analysis: The real scoop on Atom-ARM rivalry”.
ware.com. 10 February 2012. Retrieved 4 March 2013. Archived from the original on 2014-02-15. Retrieved 1
January 2012.
[28] “ASRock > AD2700B-ITX”. asrock.com. Retrieved 4
March 2013. [45] Myslewski, Rik (2 May 2014). “ARM tests: Intel
flops on Android compatibility, Windows power”. www.
[29] “HP Mini 210-2072cl PC Product Specifications”. theregister.co.uk (The Register). Archived from the origi-
Archived from the original on 2014-01-08. nal on 2014-05-03. Retrieved 2 May 2014.
7.10 References
• “Intel cranks 45nm ultramobile CPU”. EE Times.
18 April 2007. Retrieved 28 October 2007.
• “Intel reaches back in time for its ultralow power
chips”. 28 January 2008. Retrieved 29 January
2008.
Chapter 8

Intel Core

This article is about the Intel processor brand name. For the Intel microarchitecture that is the basis for the Core 2 processor family, see Intel Core (microarchitecture).

Intel Core is a brand name that Intel uses for various mid-range to high-end consumer and business microprocessors. These processors replaced the then-current mid- to high-end Pentium processors, making them entry level, and bumping the Celeron series of processors to the low end. Similarly, identical or more capable versions of Core processors are also sold as Xeon processors for the server and workstation market.

As of 2015 the current lineup of Core processors included the latest Intel Core i7, Intel Core i5, and Intel Core i3.[1]

8.1 Overview

Clock speeds range from 1.2 GHz at the slowest to 4.0 GHz at the fastest (Intel Core i7-4790K; up to 4.4 GHz via Intel Turbo Boost Technology).[3]

8.2 Enhanced Pentium M based

The layout of these processors closely resembled two interconnected Pentium M branded CPUs packaged as a single silicon die (IC). Hence, the 32-bit microarchitecture of Core branded CPUs – contrary to its name – had more in common with Pentium M branded CPUs than with the subsequent 64-bit Core microarchitecture of Core 2 branded CPUs. Despite a major rebranding effort by Intel starting January 2006, some companies continued to market computers with the Yonah core marked as Pentium M.

The Core series is also the first Intel processor used as the main CPU in an Apple Macintosh computer. The Core Duo was the CPU for the first-generation MacBook Pro, while the Core Solo appeared in Apple’s Mac mini line. Core Duo signified the beginning of Apple’s shift to Intel processors across their entire line.

In 2007, Intel began branding the Yonah core CPUs intended for mainstream mobile computers as Pentium Dual-Core, not to be confused with the desktop 64-bit Core microarchitecture CPUs also branded as Pentium Dual-Core.

September 2007 and January 4, 2008 marked the discontinuation of a number of Core branded CPUs, including several Core Solo, Core Duo, Celeron and one Core 2 Quad chip.[4][5]
ufactured as 486DX CPUs but with the FPU disabled. tively slow ultra-low-power Uxxxx (10 W) and low-power
Lxxxx (17 W) versions, to the more performance ori-
ented Pxxxx (25 W) and Txxxx (35 W) mobile versions
8.3 64-bit Core microarchitecture and the Exxxx (65 W) desktop models. The mobile Core
2 Duo processors with an 'S' prefix in the name are pro-
based duced in a smaller µFC-BGA 956 package, which allows
building more compact laptops.
Main article: Core (microarchitecture) Within each line, a higher number usually refers to a
better performance, which depends largely on core and
The successor to Core is the mobile version of the Intel front-side bus clock frequency and amount of second level
Core 2 line of processors using cores based upon the Intel cache, which are model-specific. Core 2 Duo processors
Core microarchitecture,[8] released on July 27, 2006. typically use the full L2 cache of 2, 3, 4, or 6 MB avail-
The release of the mobile version of Intel Core 2 marks able in the specific stepping of the chip, while versions
the reunification of Intel’s desktop and mobile product with the amount of cache reduced during manufacturing
lines as Core 2 processors were released for desktops and are sold for the low-end consumer market as Celeron or
notebooks, unlike the first Intel Core CPUs that were tar- Pentium Dual-Core processors. Like those processors,
geted only for notebooks (although some small form fac- some low-end Core 2 Duo models disable features such
tor and all-in-one desktops, like the iMac and the Mac as Intel Virtualization Technology.
Mini, also used Core processors).
Unlike the Intel Core, Intel Core 2 is a 64-bit processor, 8.3.3 Core 2 Quad
supporting Intel 64. Another difference between the orig-
inal Core Duo and the new Core 2 Duo is an increase in Core 2 Quad[12][13] processors are multi-chip modules
the amount of Level 2 cache. The new Core 2 Duo has consisting of two dies similar to those used in Core 2
tripled the amount of on-board cache to 6 MB. Core 2 Duo, forming a quad-core processor. This allows twice
also introduced a quad-core performance variant to the the performance of a dual-core processors at the same
single- and dual-core chips, branded Core 2 Quad, as well clock frequency in ideal conditions.
as an enthusiast variant, Core 2 Extreme. All three chips
are manufactured at a 65 nm lithography, and in 2008, Initially, all Core 2 Quad models were versions of Core
a 45 nm lithography and support Front Side Bus speeds 2 Duo desktop processors, Kentsfield derived from Con-
ranging from 533 MHz to 1600 MHz. In addition, the 45 roe and Yorkfield from Wolfdale, but later Penryn-QC
nm die shrink of the Core microarchitecture adds SSE4.1 was added as a high-end version of the mobile dual-core
support to all Core 2 microprocessors manufactured at a Penryn.
45 nm lithography, therefore increasing the calculation The Xeon 32xx and 33xx processors are mostly identical
rate of the processors. versions of the desktop Core 2 Quad processors and can
be used interchangeably.
8.3.1 Core 2 Solo
8.3.4 Core 2 Extreme
The Core 2 Solo,[9] introduced in September 2007, is
the successor to the Core Solo and is available only as Core 2 Extreme processors[14][15] are enthusiast versions
an ultra-low-power mobile processor with 5.5 Watt ther- of Core 2 Duo and Core 2 Quad processors, usually with a
mal design power. The original U2xxx series “Merom-L” higher clock frequency and an unlocked clock multiplier,
used a special version of the Merom chip with CPUID which makes them especially attractive for overclocking.
number 10661 (model 22, stepping A1) that only had a This is similar to earlier Pentium processors labeled as
single core and was also used in some Celeron processors. Extreme Edition. Core 2 Extreme processors were re-
The later SU3xxx are part of Intel’s CULV range of pro- leased at a much higher price than their regular version,
cessors in a smaller µFC-BGA 956 package but contain often $999 or more.
the same Penryn chip as the dual-core variants, with one
of the cores disabled during manufacturing.
8.4 Nehalem microarchitecture
8.3.2 Core 2 Duo based
The majority of the desktop and mobile Core 2 proces-
sor variants are Core 2 Duo[10][11] with two processor Main article: Nehalem (microarchitecture)
cores on a single Merom, Conroe, Allendale, Penryn, or
Wolfdale chip. These come in a wide range of perfor- With the release of the Nehalem microarchitecture
mance and power consumption, starting with the rela- in November 2008,[16] Intel introduced a new naming
scheme for its Core processors. There are three vari- L3 cache, a DMI bus running at 2.5 GT/s and support for
ants, Core i3, Core i5 and Core i7, but the names no dual-channel DDR3-800/1066/1333 memory and have
longer correspond to specific technical features like the Hyper-threading disabled. The same processors with
number of cores. Instead, the brand is now divided from different sets of features (Hyper-Threading and other
low-level (i3), through mid-range (i5) to high-end per- clock frequencies) enabled are sold as Core i7-8xx and
formance (i7),[17] which correspond to three, four and Xeon 3400-series processors, which should not be con-
five stars in Intel’s Intel Processor Rating[18] following on fused with high-end Core i7-9xx and Xeon 3500-series
from the entry-level Celeron (one star) and Pentium (two processors based on Bloomfield. A new feature called
stars) processors.[19] Common features of all Nehalem Turbo Boost Technology was introduced which maxi-
based processors include an integrated DDR3 memory mizes speed for demanding applications, dynamically ac-
controller as well as QuickPath Interconnect or PCI Ex- celerating performance to match the workload.
press and Direct Media Interface on the processor replac- The Core i5-5xx mobile processors are named
ing the aging quad-pumped Front Side Bus used in all ear-
Arrandale and based on the 32 nm Westmere shrink of
lier Core processors. All these processors have 256 KB the Nehalem microarchitecture. Arrandale processors
L2 cache per core, plus up to 12 MB shared L3 cache.
have integrated graphics capability but only two proces-
Because of the new I/O interconnect, chipsets and main- sor cores. They were released in January 2010, together
boards from previous generations can no longer be used with Core i7-6xx and Core i3-3xx processors based on
with Nehalem based processors. the same chip. The L3 cache in Core i5-5xx processors
is reduced to 3 MB, while the Core i5-6xx uses the full
cache and the Core i3-3xx does not support for Turbo
8.4.1 Core i3 Boost.[31] Clarkdale, the desktop version of Arrandale,
is sold as Core i5-6xx, along with related Core i3 and
Intel intended the Core i3 as the new low end of the per- Pentium brands. It has Hyper-Threading enabled and
formance processor line from Intel, following the retire- the full 4 MB L3 cache.[32]
ment of the Core 2 brand.[20][21]
According to Intel “Core i5 desktop processors and desk-
The first Core i3 processors were launched on January 7, top boards typically do not support ECC memory”,[33] but
2010.[22] information on limited ECC support in the Core i3 sec-
The first Nehalem based Core i3 was Clarkdale-based, tion also applies to Core i5 and i7.
with an integrated GPU and two cores.[23] The same pro-
cessor is also available as Core i5 and Pentium, with
slightly different configurations. 8.4.3 Core i7
The Core i3-3xxM processors are based on Arrandale, Intel Core i7 as an Intel brand name applies to several
the mobile version of the Clarkdale desktop processor. families of desktop and laptop 64-bit x86-64 processors
They are similar to the Core i5-4xx series but running using the Nehalem, Westmere, Sandy Bridge, Ivy Bridge
at lower clock speeds and without Turbo Boost.[24] Ac- and Haswell microarchitectures. The Core i7 brand tar-
cording to an Intel FAQ they do not support Error Cor- gets the business and high-end consumer markets for both
rection Code (ECC) memory.[25] According to moth- desktop and laptop computers,[35] and is distinguished
erboard manufacturer Supermicro, if a Core i3 pro- from the Core i3 (entry-level consumer), Core i5 (main-
cessor is used with a server chipset platform such as stream consumer), and Xeon (server and workstation)
Intel 3400/3420/3450, the CPU supports ECC with brands.
UDIMM.[26] When asked, Intel confirmed that, although
the Intel 5 series chipset supports non-ECC memory only Intel introduced the Core i7 name with the Nehalem-
with the Core i5 or i3 processors, using those processors based Bloomfield Quad-core processor in late
on a motherboard with 3400 series chipsets it supports the 2008.[36][37][38][39] In 2009 new Core i7 models based
ECC function of ECC memory.[27] A limited number of on the Lynnfield (Nehalem-based) desktop quad-core
motherboards by other companies also support ECC with processor and the Clarksfield (Nehalem-based) quad-
Intel Core ix processors; the Asus P8B WS is an exam- core mobile were added,[40] and models based on the
ple, but it does not support ECC memory under Windows Arrandale dual-core mobile processor (also Nehalem-
non-server operating systems.[28] based) were added in January 2010. The first six-core
processor in the Core lineup is the Nehalem-based
Gulftown, which was launched on March 16, 2010.
8.4.2 Core i5 Both the regular Core i7 and the Extreme Edition are
advertised as five stars in the Intel Processor Rating.
The first Core i5 using the Nehalem microarchitec- In each of the first three microarchitecture generations
ture was introduced on September 8, 2009, as a main- of the brand, Core i7 has family members using two dis-
stream variant of the earlier Core i7, the Lynnfield tinct system-level architectures, and therefore two distinct
core.[29][30] Lynnfield Core i5 processors have an 8 MB sockets (for example, LGA 1156 and LGA 1366 with
8.6.1 Core i3
The Ivy Bridge based Core-i3-3xxx line is a minor up-
8.5.1 Core i3 grade to 22 nm process technology and better graphics.
8.7.1 Core i3 [10] “Intel Core2 Duo Processor: Upgrade Today”. Intel.com.
Retrieved 2010-12-13.
8.7.2 Core i5 [11] “Intel Core2 Duo Mobile Processor”. Intel.com. Re-
trieved 2010-12-13.
8.7.3 Core i7
[12] “Intel Core2 Quad Processor Overview”. Intel.com. Re-
trieved 2010-12-13.
8.8 Broadwell microarchitecture
[13] “Intel Core2 Quad Mobile Processors – Overview”. In-
based tel.com. Retrieved 2010-12-13.
The Broadwell microarchitecture was released by Intel on [15] “Intel Core2 Extreme Processor”. Intel.com. Retrieved
2010-12-13.
September 6, 2014, and began shipping in late 2014. It
[46]
is the first to use a 14 nm chip. Additional, mobile [16] “Intel Microarchitecture Codenamed Nehalem”. In-
processors were launched in January 2015. [47] tel.com. Retrieved 2010-12-13.
8.8.4 Core M [20] “Intel Quietly Announces Core i5 and Core i3 Branding”.
AnandTech. Retrieved 2010-12-13.
8.9 See also [21] “Intel confirms Core i3 as 'entry-level' Nehalem chip”.
Apcmag.com. 2009-09-14. Retrieved 2010-12-13.
• Centrino [22] “Core i5 and i3 CPUs With On-Chip GPUs Launched”.
Hardware.slashdot.org. 2010-01-04. Retrieved 2010-12-
13.
8.10 References [23] “Intel May Unveil Microprocessors with Integrated
Graphics Cores at Consumer Electronics Show”. Xbit-
[1] “Desktop Processors”. Intel.com. Retrieved 2010-12-13. labs.com. Retrieved 2010-12-13.
[2] http://arstechnica.com/gadgets/2014/09/ [24] “Intel to launch four Arrandale CPUs for mainstream
notebooks in January 2010”. Digitimes.com. 2009-11-
intels-launches-three-core-m-cpus-promises-more-broadwell-early-2015/
13. Retrieved 2010-12-13.
[3] “Intel Launches Devil’s Canyon and Overclockable Pen-
tium: i7-4790K, i5-4690K and G3258”. Anandtech. 3 [25] Intel Core i3 desktop processor frequently asked questions
June 2014. Retrieved 29 June 2014.
[26] Supermicro FAQ on ECC with Core i3
[4] “Intel already phasing out first quad-core CPU”. TG [27] Intel correspondence quoted on silentpcreview forum
Daily. Retrieved 2007-09-07.
[28] Asus P8B WS specification: supports “ECC, Non-ECC,
[5] “Intel to discontinue older Centrino CPUs in Q1 08”. TG un-buffered Memory”, but “Non-ECC, un-buffered mem-
Daily. Retrieved 2007-10-01. ory only support for client OS (Windows 7, Vista and
XP).”
[6] “Support for the Intel Core Duo Processor”. Intel.com.
Retrieved 2010-12-13. [29] “Support for the Intel Core i5 Processor”. Intel.com. Re-
trieved 2010-12-13.
[7] “Support for the Intel Core Solo processor”. Intel.com.
Retrieved 2010-12-13. [30] Anand Lal Shimpi, Intel’s Core i7 870 & i5 750, Lynnfield:
Harder, Better, Faster Stronger, anandtech.com
[8] “Intel Microarchitecture”. Intel.com. Retrieved 2010-12-
13. [31] http://www.digitimes.com/news/a20091113PD209.html
[9] “Intel Core2 Solo Mobile Processor – Overview”. In- [32] Intel E5300( ) |CPU | Core
tel.com. Retrieved 2010-12-13. i5 i3 |IT168 diy
[37] “IDF Fall 2008: Intel un-retires Craig Barrett, AMD sets
up anti-IDF camp”. Tigervision Media. 2008-08-11. Re-
trieved 2008-08-11.
[42] “Intel Core i7-920 Processor (8M Cache, 2.66 GHz, 4.80
GT/s Intel QPI)". Intel. Retrieved 2008-12-06.
[43] “Intel Core i7-940 Processor (8M Cache, 2.93 GHz, 4.80
GT/s Intel QPI)". Intel. Retrieved 2008-12-06.
The following is a list of Intel Core i5 brand • Graphics Transistors: 177 million
microprocessors.
• Graphics and Integrated Memory Controller die
size: 114 mm²
• All models support: MMX, SSE, SSE2, SSE3, SSSE3, • All models support: MMX, SSE, SSE2, SSE3, SSSE3,
SSE4.1, SSE4.2, Enhanced Intel SpeedStep Technol- SSE4.1, SSE4.2, AVX, Enhanced Intel SpeedStep
ogy (EIST), Intel 64, XD bit (an NX bit implementa- Technology (EIST), Intel 64, XD bit (an NX bit im-
tion), Intel VT-x, Turbo Boost, Smart Cache. plementation), TXT, Intel VT-x, Intel VT-d, Hyper-
threading, Turbo Boost, AES-NI, Smart Cache, Intel
• FSB has been replaced with DMI.
Insider, vPro.
• Transistors: 774 million • Transistors: 504 million
• Die size: 296 mm² • Die size: 131 mm²
• Stepping: B1
“Sandy Bridge” (quad-core, 32 nm)
9.1.2 Westmere microarchitecture (1st • All models support: MMX, SSE, SSE2, SSE3, SSSE3,
generation) SSE4.1, SSE4.2, AVX, Enhanced Intel SpeedStep
Technology (EIST), Intel 64, XD bit (an NX bit im-
"Clarkdale" (MCP, 32 nm dual-core) plementation), TXT, Intel VT-x, Intel VT-d, Turbo
Boost, AES-NI, Smart Cache, Intel Insider, vPro.
• All models support: MMX, SSE, SSE2, SSE3, SSSE3,
SSE4.1, SSE4.2, Enhanced Intel SpeedStep Technol- • All models support dual-channel DDR3-1333
ogy (EIST), Intel 64, XD bit (an NX bit implementa- RAM.
tion), TXT, Intel VT-x, Intel VT-d, Hyper-Threading, • Core i5-2300, Core i5-2310, Core i5-2320, Core
Turbo Boost, AES-NI, Smart Cache. i5-2380P, Core i5-2405S, Core i5-2450P, Core i5-
• Core i5-655K, Core i5-661 does not support Intel 2500K and Core i5-2550K does not support Intel
TXT and Intel VT-d.[1] TXT, Intel VT-d, and Intel vPro.[2]
• S processors feature lower-than-normal TDP (65W
• Core i5-655K features an unlocked multiplier.
on 4-core models).
• FSB has been replaced with DMI.
• T processors feature an even lower TDP (45W on
• Contains 45 nm “Ironlake” GPU. 4-core models or 35W on 2-core models).
• Transistors: 382 million • K processors are unlockable and designed for over-
clocking. Other processors will have limited over-
• Die size: 81 mm² clocking due to chipset limitations.[3]
• All models support: MMX, SSE, SSE2, SSE3, SSSE3, “Haswell-DT” (quad-core, 22 nm)
SSE4.1, SSE4.2, AVX, Enhanced Intel SpeedStep
Technology (EIST), Intel 64, XD bit (an NX bit imple- • All models support: MMX, SSE, SSE2, SSE3, SSSE3,
mentation), Intel VT-x, Intel VT-d, Hyper-threading, SSE4.1, SSE4.2, AVX, AVX2, FMA3, Enhanced In-
Turbo Boost, AES-NI, Smart Cache, Intel Insider. tel SpeedStep Technology (EIST), Intel 64, XD bit
(an NX bit implementation), Intel VT-x, Turbo Boost,
• Die size: 93.6mm² or 118 mm² [5][6] AES-NI, Smart Cache, Intel Insider.
• i5-3330, i5-3330S, and i5-3350P support Intel VT- • Core i5-4570R and Core i5-4670R also contain
d. “Crystalwell": 128 MiB eDRAM built at (22 nm)
acting as L4 cache
• Non-K processors will have limited turbo overclock- • Transistors: 1.4 billion
ing.
• Die size: 264mm² + 84mm²
• Transistors: 1.4 billion
• All models support: MMX, SSE, SSE2, SSE3, SSSE3, • All models support: MMX, SSE, SSE2, SSE3, SSSE3,
SSE4.1, SSE4.2, Enhanced Intel SpeedStep Technol- SSE4.1, SSE4.2, AVX, Enhanced Intel SpeedStep
ogy (EIST), Intel 64, XD bit (an NX bit implementa- Technology (EIST), Intel 64, XD bit (an NX bit imple-
tion), Intel VT-x,[9] Hyper-Threading, Turbo Boost, mentation), Intel VT-x, Intel VT-d, Hyper-threading,
Smart Cache. Turbo Boost, AES-NI, Smart Cache.
• i5-5xx series (i5-520M, i5-520E, i5-540M, • i5-3320M, i5-3360M, i5-3427U, i5-3437U, i5-
i5-560M, i5-580M, i5-520UM, i5-540UM, i5- 3439Y, and i5-3610ME support TXT and vPro.
560UM) supports AES-NI, TXT and Intel VT-d.
[10]
9.2.4 Haswell microarchitecture (4th gen-
• FSB has been replaced with DMI. eration)
• Transistors: 382 million • All models support: MMX, SSE, SSE2, SSE3, SSSE3,
SSE4.1, SSE4.2, AVX, AVX2, FMA3, Enhanced In-
• Die size: 81 mm² tel SpeedStep Technology (EIST), Intel 64, XD bit
(an NX bit implementation), Intel VT-x, Hyper-
• Graphics Transistors: 177 million threading, Turbo Boost, AES-NI, Intel TSX-NI, Smart
Cache.
• Graphics and Integrated Memory Controller die
size: 114 mm² • Core i5-4300M and higher also support Intel VT-d,
Intel vPro, Intel TXT
• Stepping: C2, K0
• Transistors: 1.3 billion
• Core i5-520E has support for ECC memory and PCI
express port bifurcation. • Die size: 181 mm²
“Haswell-H” (dual-core, 22 nm) [2] Turbo describes the available frequency bins (+100 MHz
for processors based on Sandy Bridge, Ivy Bridge and
• All models support: MMX, SSE, SSE2, SSE3, SSSE3, Haswell microarchitectures) of Intel Turbo Boost Tech-
SSE4.1, SSE4.2, AVX, AVX2, FMA3, Enhanced In- nology that are available for 4, 3, 2, 1 active cores respec-
tel SpeedStep Technology (EIST), Intel 64, XD bit tively (depending on the number of CPU cores, included
in the processor).
(an NX bit implementation), Intel VT-x, Intel VT-
d, Hyper-threading, Turbo Boost (except i5-4402EC
and i5-4410E), AES-NI, Intel TSX-NI, Smart Cache.
9.5 References
• Transistors: 1.3 billion
[1] Core i5-655K, Core i5-661 does not support Intel TXT
• Die size: 181 mm² and Intel VT-d
• Embedded models support Intel vPro, ECC memory [2] Core i5-2300, Core i5-2310, Core i5-2320, Core i5-
2380P, Core i5-2405S, Core i5-2450P, Core i5-2500F
and Core i5-2550K do not support Intel TXT and Intel
9.2.5 Broadwell microarchitecture (5th VT-d
generation) [3] Fully unlocked versus “limited” unlocked core
“Broadwell-U” (dual-core, 14 nm) [4] Counting Transistors: Why 1.16B and 995M Are Both
Correct, by Anand Lal Shimpi on 14 September 2011,
• All models support: MMX, SSE, SSE2, SSE3, SSSE3, www.anandtech.com
SSE4.1, SSE4.2, AVX, AVX2, FMA3, Enhanced In- [5] http://www.anandtech.com/show/5876/
tel SpeedStep Technology (EIST), Intel 64, XD bit the-rest-of-the-ivy-bridge-die-sizes
(an NX bit implementation), Intel VT-x, Intel VT-
d, Hyper-threading, Turbo Boost, AES-NI, Smart [6] http://vr-zone.com/articles/
Cache, and configurable TDP (cTDP) down intel-s-broken-ivy-bridge-sku-s-last-to-arrive/15449.
html
• Core i5-5300U and higher also support Intel vPro, [7] Specifications of Haswell Refresh CPUs
Intel TXT, and Intel TSX-NI
[8] Some details of Haswell Refresh CPUs
• Transistors: 1.3-1.9 billion [12]
[9] http://ark.intel.com/Compare.aspx?ids=43544,43560
• Die size: 82-133 mm² [12]
[10] http://ark.intel.com/ProductCollection.aspx?familyId=
43483
9.3 See also [11] “Intel® Core™ i5-2410M Processor”. Intel. Retrieved
2012-01-01.
• Sandy Bridge
Pentium Dual-Core
10.1.2 Allendale using liquid nitrogen cooling. Intel released the E6500K
model using this core. The model features an unlocked
Main article: Conroe (microprocessor) § Allendale multiplier, but is currently only sold in China.
Subsequently, on June 3, 2007, Intel released the desk-
10.1.5 Penryn-3M
Main article: Penryn (microprocessor) § Penryn-3M
The 45 nm E5200 model was released by Intel on Au- 10.4 See also
gust 31, 2008, with a larger 2MB L2 cache over the 65
nm E21xx series and the 2.5 GHz clock speed. The • Pentium
E5200 model is also a highly overclockable processor,
with some enthusiasts reaching over 6 GHz[8] clock speed • List of Intel Pentium Dual-Core microprocessors
10.5 References
[1] DailyTech – Intel “Conroe-L” Details Unveiled
[8] http://ripping.org/database.php?cpuid=858
Xeon
11.1 Overview
A 450 MHz Pentium II Xeon with a 512 kB L2 cache. The car-
The Xeon brand has been maintained over several genera- tridge cover has been removed.
tions of x86 and x86-64 processors. Older models added
the Xeon moniker to the end of the name of their corre-
and thus the Pentium II Xeon used a larger slot, Slot 2. It
sponding desktop processor, but more recent models used
was supported by the 440GX dual-processor workstation
the name Xeon on its own. The Xeon CPUs generally have
chipset and the 450NX quad- or octo-processor chipset.
more cache than their desktop counterparts in addition to
multiprocessing capabilities.
11.2.2 Pentium III Xeon
11.2 P6-based Xeon List: List of Intel Xeon microprocessors#"Tanner” (250
nm)
11.2.1 Pentium II Xeon
In 1999, the Pentium II Xeon was replaced by the
List: List of Intel Xeon microprocessors#"Drake” (250 Pentium III Xeon. Reflecting the incremental changes
nm) from the Pentium II "Deschutes" core to the Pentium
The first Xeon-branded processor was the Pentium II III "Katmai" core, the first Pentium III Xeon, named
Xeon (code-named "Drake"). It was released in 1998, "Tanner", was just like its predecessor except for the ad-
replacing the Pentium Pro in Intel’s server lineup. The dition of Streaming SIMD Extensions (SSE) and a few
Pentium II Xeon was a "Deschutes" Pentium II (and cache controller improvements. The product codes for
shared the same product code: 80523) with a full-speed Tanner mirrored that of Katmai; 80525.
512 kB, 1 MB, or 2 MB L2 cache. The L2 cache was
implemented with custom 512 kB SRAMs developed by List: List of Intel Xeon microprocessors#"Cascades”
Intel. The number of SRAMs depended on the amount (180 nm)
of cache. A 512 kB configuration required one SRAM, a
1 MB configuration: two SRAMs, and a 2 MB configura- The second version, named "Cascades", was based on
tion: four SRAMs on both sides of the PCB. Each SRAM the Pentium III "Coppermine" core. The "Cascades"
was a 12.90 mm by 17.23 mm (222.21 mm²) die fabri- Xeon used a 133 MHz bus and relatively small 256 kB
cated in a 0.35 µm four-layer metal CMOS process and on-die L2 cache resulting in almost the same capabilities
packaged in a cavity-down wire-bonded land grid array as the Slot 1 Coppermine processors, which were capable
(LGA).[1] The additional cache required a larger module of dual-processor operation but not quad-processor oper-
11.3 Netburst-based Xeon

11.3.1 Xeon (DP) & Xeon MP (32-bit)

Foster

List: List of Intel Xeon microprocessors#"Foster" (180 nm)
List: List of Intel Xeon microprocessors#"Foster MP" (180 nm)

In mid-2001, the Xeon brand was introduced ("Pentium" was dropped from the name). The initial variant that used the new NetBurst microarchitecture, "Foster", was slightly different from the desktop Pentium 4 ("Willamette"). It was a decent chip for workstations, but for server applications it was almost always outperformed by the older Cascades cores with a 2 MB L2 cache and AMD's Athlon MP. Combined with the need to use expensive Rambus Dynamic RAM, the Foster's sales were somewhat unimpressive.

At most two Foster processors could be accommodated in a symmetric multiprocessing (SMP) system built with a mainstream chipset, so a second version (Foster MP) was introduced with a 1 MB L3 cache and the Jackson Hyper-Threading capacity. This improved performance slightly, but not enough to lift it out of third place. It was also priced much higher than the dual-processor (DP) versions. The Foster shared the 80528 product code with Willamette.

Prestonia

List: List of Intel Xeon microprocessors#"Prestonia" (130 nm)

In 2002 Intel released a 130 nm version of the Xeon-branded CPU, codenamed "Prestonia". It supported Intel's new Hyper-Threading technology and had a 512 kB L2 cache. It was based on the "Northwood" Pentium 4 core. A new server chipset, the E7500 (which allowed the use of dual-channel DDR SDRAM), was released to support this processor in servers, and soon the bus speed was boosted to 533 MT/s (accompanied by new chipsets: the E7501 for servers and the E7505 for workstations).

11.3.2 "Gallatin"

List: List of Intel Xeon microprocessors#"Gallatin" (130 nm)
List: List of Intel Xeon microprocessors#"Gallatin" MP (130 nm)

Subsequent to the Prestonia was the "Gallatin", which had an L3 cache of 1 MB or 2 MB. Its Xeon MP version also performed much better than the Foster MP, and was popular in servers. Later experience with the 130 nm process allowed Intel to create the Xeon MP-branded Gallatin with 4 MB cache. The Xeon-branded Prestonia and Gallatin were designated 80532, like Northwood.

11.3.3 Xeon (DP) & Xeon MP (64-bit)

Nocona and Irwindale

Main article: Pentium 4 § Prescott
List: List of Intel Xeon microprocessors#"Nocona" (90 nm)
List: List of Intel Xeon microprocessors#"Irwindale" (90 nm)

Due to a lack of success with Intel's Itanium and Itanium 2 processors, AMD was able to introduce x86-64, a 64-bit extension to the x86 architecture. Intel followed suit by including Intel 64 (formerly EM64T; it is almost identical to AMD64) in the 90 nm version of the Pentium 4 ("Prescott"), and a Xeon version codenamed "Nocona" with 1 MB L2 cache was released in 2004. Released with it were the E7525 (workstation), E7520 and E7320 (both server) chipsets, which added support for PCI Express, DDR-II and Serial ATA. The Xeon was noticeably slower than AMD's Opteron, although it could be faster in situations where Hyper-Threading came into play.

A slightly updated core called "Irwindale" was released in early 2005, with 2 MB L2 cache and the ability to have its clock speed reduced during low processor demand. Although it was a bit more competitive than the Nocona had been, independent tests showed that AMD's Opteron still outperformed Irwindale. Both of these Prescott-derived Xeons have the product code 80546.
List: List of Intel Xeon microprocessors#"Paxville MP" (90 nm)

An MP-capable version of Paxville DP, codenamed Paxville MP, product code 80560, was released on 1 November 2005. There are two versions: one with 2 MB of L2 cache (1 MB per core), and one with 4 MB of L2 (2 MB per core). Paxville MP, called the dual-core Xeon 7000-series, was produced using a 90 nm process. Paxville MP clock speeds range between 2.67 GHz and 3.0 GHz (model numbers 7020–7041), with some models having a 667 MT/s FSB, and others having an 800 MT/s FSB.

7100-series "Tulsa"

List: List of Intel Xeon microprocessors#"Tulsa" (65 nm)

Released on 29 August 2006,[2] the 7100 series, codenamed Tulsa (product code 80550), is an improved version

11.4 Pentium M (Yonah) based Xeon

11.4.1 LV (ULV), "Sossaman"

List: List of Intel Xeon microprocessors#"Sossaman" (65 nm)

On 14 March 2006, Intel released a dual-core processor codenamed Sossaman and branded as Xeon LV (low-voltage). Subsequently, an ULV (ultra-low-voltage) version was released. The Sossaman was a low-/ultra-low-power and double-processor capable CPU (like AMD Quad FX), based on the "Yonah" processor, for ultra-dense non-consumer environments (i.e. targeted at the blade-server and embedded markets), and was rated at a thermal design power (TDP) of 31 W (LV: 1.66 GHz, 2 GHz and 2.16 GHz) and 15 W (ULV: 1.66 GHz).[4] As such, it supported most of the same features as earlier
high-end line of Xeon processors using a package that supports larger than two-CPU configurations, formerly the 7xxx series. Similarly, the 3xxx uniprocessor and 5xxx dual-processor series turned into E3-xxxx and E5-xxxx, respectively, for later processors.

11.7 Sandy Bridge– and Ivy Bridge–based Xeon

The Xeon E5-16xx processors follow the previous Xeon 3500/3600-series products as the high-end single-socket platform, using the LGA 2011 package introduced with this processor. They share the Sandy Bridge-E platform with the single-socket Core i7-38xx and i7-39xx processors. The CPU chips have no integrated GPU but eight CPU cores, some of which are disabled in the entry-level products. The Xeon E5-26xx line has the same features but also enables multi-socket operation like the earlier Xeon 5000-series and Xeon 7000-series processors.
11.8.1 E3-12xx v3-series "Haswell"

Intel Xeon E3-1241 v3 CPU, sitting atop the inside part of its retail box that contains an OEM fan-cooled heatsink

Introduced in May 2013, Xeon E3-12xx v3 is the first Xeon series based on the Haswell microarchitecture. It uses the new LGA 1150 socket, which was introduced with the desktop Core i5/i7 Haswell processors and is incompatible with the LGA 1155 that was used in Xeon E3 and E3 v2. As before, the main difference between the desktop and server versions is added support for ECC memory in the Xeon-branded parts. The main benefit of the new microarchitecture is better power efficiency.

11.8.2 E5-16xx/26xx v3-series "Haswell-EP"

Intel Xeon E5-1650 v3 CPU; its retail box contains no OEM heatsink

Introduced in September 2014, the Xeon E5-16xx v3 and Xeon E5-26xx v3 series use the new LGA 2011-v3 socket, which is incompatible with the LGA 2011 socket used by earlier Xeon E5 and E5 v2 generations based on the Sandy Bridge and Ivy Bridge microarchitectures. Some of the main benefits of this generation, when compared to the previous one, are improved power efficiency, higher core counts, and bigger last level caches (LLCs). Following the already used nomenclature, the Xeon E5-26xx v3 series allows multi-socket operation.

One of the new features of this generation is that Xeon E5 v3 models with more than 10 cores support a cluster on die (COD) operation mode, allowing the CPU's multiple columns of cores and LLC slices to be logically divided into what is presented to the operating system as two non-uniform memory access (NUMA) CPUs. By keeping data and instructions local to the "partition" of the CPU which is processing them, thus decreasing the LLC access latency, COD brings performance improvements to NUMA-aware operating systems and applications.[29]
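As a rough illustration of what "keeping data local to a partition" means for software, the minimal Python sketch below pins the current process to one set of cores (Linux only). With Linux's default first-touch policy, memory the process then allocates and initializes tends to be placed on that partition's local NUMA node. The core IDs here are hypothetical; on a real system they would be read from the exposed topology (e.g. /sys/devices/system/node). This is only a sketch of the general NUMA-awareness idea, not of Intel's COD mechanism itself.

    import os

    # Hypothetical core IDs for one partition; read the real ones from
    # /sys/devices/system/node/node0/cpulist on an actual machine.
    PARTITION_CORES = {0, 1, 2, 3, 4, 5, 6, 7}

    def pin_to_partition(cores=PARTITION_CORES):
        """Restrict this process to one NUMA partition's cores (Linux only)."""
        os.sched_setaffinity(0, cores)          # 0 = the calling process
        # Under Linux's first-touch policy, pages this process now allocates
        # and first writes land on the local NUMA node, keeping LLC and
        # memory traffic inside the partition.

    if __name__ == "__main__":
        pin_to_partition()
        data = bytearray(64 * 1024 * 1024)      # touched here -> local node
        print("running on cores:", sorted(os.sched_getaffinity(0)))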
11.9 Supercomputers

By 2013 Xeon processors were ubiquitous in supercomputers: more than 80% of the Top500 machines in 2013 used them. For the very fastest machines, much of the performance comes from compute accelerators; Intel's entry into that market was the Xeon Phi. The first machines using it appeared in the June 2012 list, and by June 2013 it was used in the fastest computer in the world.

• The first Xeon-based machines in the top 10 appeared in November 2002: two clusters, at Lawrence Livermore National Laboratory and at NOAA.

• The first Xeon-based machine to be in first place of the Top500 was the Chinese Tianhe-IA in November 2010, which used a mixed Xeon-nVIDIA GPGPU configuration; it was overtaken by the Japanese K computer in 2012, but the Tianhe-2 system using 12-core Xeon E5-2692 processors and Xeon Phi cards occupied the first place in both Top500 lists of 2013.

• The SuperMUC system, using 8-core Xeon E5-2680 processors but no accelerator cards, managed fourth place in June 2012 and had dropped to tenth by November 2013.

• An Intel Xeon virtual SMP system leveraging ScaleMP's Versatile SMP (vSMP) architecture with 128 cores and 1 TB RAM.[31] This system aggregates 16 Stoakley platform (Seaburg chipset) systems with a total of 32 Harpertown processors.

11.10 See also

• AMD Opteron
• Intel Xeon Phi, brand name for a family of products using the Intel MIC architecture
• List of Intel Xeon microprocessors
• List of Intel microprocessors

[3] Intel prices up Woodcrest, Tulsa server chips, The Inquirer.
[4] "Intel drops 32-bit dual-core LV processors". TG Daily. Retrieved 2007-07-31.
[5] Intel Adds Low End Xeons to Roadmap, DailyTech
[6] Intel Readies New Xeons and Price Cuts, WinBeta.org
[7] "ARK - Your Source for Intel® Product Information". Intel® ARK (Product Specs).
[8] HTN_WDP_Datasheet.book
[9] Intel bringt neue Prozessoren für den Embedded-Markt auf Basis seiner 45nm-Fertigungstechnologie [Intel launches new processors for the embedded market based on its 45 nm manufacturing technology]
[10] Intel Hard-Launches Three New Quad-core Processors, DailyTech
[11] "Intel Clovertowns step up, reduce power". TG Daily. Retrieved 2007-09-05.
[21] "Chipzilla unveils six-core 'Dunnington' Xeons". theregister.co.uk.
[22] "Intel® Xeon® Processor E7 Family". Intel.
[23] AnandTech: Intel Xeon 5570: Smashing SAP records, 16 December 2008
[28] "Intel Introduces Highly Versatile Datacenter Processor Family Architected for New Era of Services". Press release. 10 September 2013. Retrieved 13 September 2013.
[29] Johan De Gelas (2014-09-08). "Intel Xeon E5 Version 3, Up to 18 Haswell EP Cores: The Magic Inside the Uncore". AnandTech. Retrieved 2014-09-09.
[30] STREAM benchmark, Dr. John D. McCalpin
[31] "Stream Benchmark Results - Top 20 Set". virginia.edu.

11.12 External links

• Server processors at the Intel website
• Intel look inside: Xeon E5 v3 (Grantley) launch, Intel, September 2014
Distributed computing
"Distributed Information Processing" redirects here. For the computer company, see DIP Research.

Distributed computing is a field of computer science that studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages.[1] The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components.[1] Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications.

A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.[2] There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. An important goal and challenge of distributed systems is location transparency.

Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers,[3] which communicate with each other by message passing.[4]

12.1 Introduction

The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area.[5] The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing.[4] While there is no single definition of a distributed system,[6] the following defining properties are commonly used:

• There are several autonomous computational entities, each of which has its own local memory.[7]

• The entities communicate with each other by message passing.[8]

In this article, the computational entities are called computers or nodes.

A distributed system may have a common goal, such as solving a large computational problem.[9] Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.[10]

Other typical properties of distributed systems include the following:

• The system has to tolerate failures in individual computers.[11]

• The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.[12]

• Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.[13]

12.1.1 Architecture

Client/Server System: The Client-server architecture is a way to provide a service from a central source. There is a single server that provides a service, and many clients that communicate with the server to consume its products. In this architecture, clients and servers have different jobs. The server's job is to respond to service requests from clients, while a client's job is to use the data provided in response in order to perform some tasks.

Peer-to-Peer System: The term peer-to-peer is used to describe distributed systems in which labour is divided
between processors.[17]
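The client-server division of roles described above can be made concrete with a very small sketch: one process listens and answers requests, and any number of client processes connect and consume the reply. This is only an illustration of the role split, not of any production protocol; the address, port, and the uppercasing "service" are arbitrary choices made for the example.

    import socket

    HOST, PORT = "127.0.0.1", 9090        # illustrative address and port only

    def server():
        """The single server: answers every client request it receives."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()
            while True:
                conn, _ = srv.accept()
                with conn:
                    request = conn.recv(1024)
                    conn.sendall(request.upper())   # the "service": uppercasing text

    def client(text):
        """One of many clients: sends a request and uses the server's reply."""
        with socket.create_connection((HOST, PORT)) as conn:
            conn.sendall(text.encode())
            return conn.recv(1024).decode()

Running server() in one process and client("hello") in several others reproduces the central-source pattern; in the peer-to-peer case, by contrast, every node would run both roles.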
1. The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location.

2. There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example, it may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. A distributed system can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system.[22]

Ghaemi et al. define a distributed query as a query "that selects data from databases located at multiple sites in a network" and offer as an SQL example:

    SELECT ename, dname
    FROM company.emp e, company.dept@sales.goods d
    WHERE e.deptno = d.deptno[23]

12.5 Examples

Examples of distributed systems and applications of distributed computing include the following:[24]

• Telecommunication networks:
  • Telephone networks and cellular networks
  • Computer networks such as the Internet
  • Wireless sensor networks
  • Routing algorithms

• Network applications:
  • World wide web and peer-to-peer networks
  • Massively multiplayer online games and virtual reality communities
  • Distributed databases and distributed database management systems
  • Network file systems
  • Distributed information processing systems such as banking systems and airline reservation systems

• Parallel computation:
  • Scientific computing, including cluster computing and grid computing and various volunteer computing projects; see the list of distributed computing projects
  • Distributed rendering in computer graphics

12.6 Theoretical foundations

Main article: Distributed algorithm

12.6.1 Models

Many tasks that we would like to automate by using a computer are of question–answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions.

Theoretical computer science seeks to understand which computational problems can be solved by using a computer (computability theory) and how efficiently (computational complexity theory). Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm.

The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer?

The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer.
Three viewpoints are commonly used:

Parallel algorithms in shared-memory model

• All computers have access to a shared memory. The algorithm designer chooses the program executed by each computer.

• One theoretical model is the parallel random access machines (PRAM) that are used.[25] However, the classical PRAM model assumes synchronous access to the shared memory.

• A model that is closer to the behavior of real-world multiprocessor machines and takes into account the use of machine instructions, such as Compare-and-swap (CAS), is that of asynchronous shared memory. There is a wide body of work on this model, a summary of which can be found in the literature.[26][27]

Parallel algorithms in message-passing model

• The algorithm designer chooses the structure of the network, as well as the program executed by each computer.

• Models such as Boolean circuits and sorting networks are used.[28] A Boolean circuit can be seen as a computer network: each gate is a computer that runs an extremely simple computer program. Similarly, a sorting network can be seen as a computer network: each comparator is a computer.

Distributed algorithms in message-passing model

• The algorithm designer only chooses the computer program. All computers run the same program. The system must work correctly regardless of the structure of the network.

• A commonly used model is a graph with one finite-state machine per node.

In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example.

12.6.2 An example

Consider the computational problem of finding a coloring of a given graph G. Different fields might take the following approaches:

Centralized algorithms

• The graph G is encoded as a string, and the string is given as input to a computer. The computer program finds a coloring of the graph, encodes the coloring as a string, and outputs the result.

Parallel algorithms

• Again, the graph G is encoded as a string. However, multiple computers can access the same string in parallel. Each computer might focus on one part of the graph and produce a coloring for that part.

• The main focus is on high-performance computation that exploits the processing power of multiple computers in parallel.

Distributed algorithms

• The graph G is the structure of the computer network. There is one computer for each node of G and one communication link for each edge of G. Initially, each computer only knows about its immediate neighbors in the graph G; the computers must exchange messages with each other to discover more about the structure of G. Each computer must produce its own color as output.

• The main focus is on coordinating the operation of an arbitrary distributed system.
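To make the contrast concrete, here is a minimal sketch of the centralized view of this problem: a single program reads the whole graph and greedily assigns to each node the smallest color not used by its already-colored neighbours. A distributed version of the same task would have to obtain the neighbour colors by message passing instead of by reading a shared data structure. The small graph used here is a hypothetical example, not anything from the text above.

    def greedy_coloring(graph):
        """Centralized greedy coloring: graph maps each node to its neighbours."""
        color = {}
        for node in graph:                          # any fixed order will do
            used = {color[n] for n in graph[node] if n in color}
            c = 0
            while c in used:                        # smallest free color
                c += 1
            color[node] = c
        return color

    # Hypothetical 4-node cycle: a proper coloring needs only 2 colors here.
    example = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(greedy_coloring(example))                 # {0: 0, 1: 1, 2: 0, 3: 1}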
While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is a lot of interaction between the two fields. For example, the Cole–Vishkin algorithm for graph coloring[29] was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm. Moreover, a parallel algorithm can be implemented either in a parallel system (using shared memory) or in a distributed system (using message passing).[30] The traditional boundary between parallel and distributed algorithms (choose a suitable network vs. run in any given network) does not lie in the same place as the boundary between parallel and distributed systems (shared memory vs. message passing).

12.6.3 Complexity measures

In parallel algorithms, yet another resource in addition to time and space is the number of computers. Indeed, often there is a trade-off between the running time and the number of computers: the problem can be solved faster if there are more computers running in parallel (see speedup). If a decision problem can be solved in polylogarithmic time by using a polynomial number of processors, then the problem is said to be in the class NC.[31] The class NC can be defined equally well by using
the PRAM formalism or Boolean circuits – PRAM machines can simulate Boolean circuits efficiently and vice versa.[32]

In the analysis of distributed algorithms, more attention is usually paid to communication operations than computational steps. Perhaps the simplest model of distributed computing is a synchronous system where all nodes operate in a lockstep fashion. During each communication round, all nodes in parallel (1) receive the latest messages from their neighbours, (2) perform arbitrary local computation, and (3) send new messages to their neighbours. In such systems, a central complexity measure is the number of synchronous communication rounds required to complete the task.[33]
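The lockstep model is easy to simulate, which is often how such algorithms are prototyped. The sketch below runs synchronous rounds in which every node forwards the smallest node identifier it has seen; after a number of rounds roughly equal to the network diameter, every node knows the global minimum. This is an illustrative simulation under the assumptions stated in the comments, not a standard library routine or a specific published algorithm.

    def min_id_flooding(graph):
        """Synchronous rounds: graph maps node id -> list of neighbour ids."""
        known = {v: v for v in graph}          # each node starts knowing only itself
        rounds = 0
        while True:
            # (1) receive: every node collects its neighbours' current values
            inbox = {v: [known[u] for u in graph[v]] for v in graph}
            # (2) compute: keep the smallest identifier seen so far
            updated = {v: min([known[v]] + inbox[v]) for v in graph}
            rounds += 1
            if updated == known:               # (3) nothing new was learned: stop
                return updated, rounds
            known = updated

    # Path 0-1-2-3 (diameter 3): the minimum, 0, reaches node 3 in 3 rounds;
    # the 4th round only confirms that nothing changed.
    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(min_id_flooding(path))               # ({0: 0, 1: 0, 2: 0, 3: 0}, 4)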
problem is studying the properties of a given distributed
This complexity measure is closely related to the diameter system.
of the network. Let D be the diameter of the network. On
the one hand, any computable problem can be solved triv- The halting problem is an analogous example from the
ially in a synchronous distributed system in approximately field of centralised computation: we are given a computer
2D communication rounds: simply gather all information program and the task is to decide whether it halts or runs
in one location (D rounds), solve the problem, and inform forever. The halting problem is undecidable in the gen-
each node about the solution (D rounds). eral case, and naturally understanding the behaviour of a
computer network is at least as hard as understanding the
On the other hand, if the running time of the algorithm behaviour of one computer.
is much smaller than D communication rounds, then the
nodes in the network must produce their output without However, there are many interesting special cases that are
having the possibility to obtain information about distant decidable. In particular, it is possible to reason about
parts of the network. In other words, the nodes must the behaviour of a network of finite-state machines. One
make globally consistent decisions based on informa- example is telling whether a given network of interact-
tion that is available in their local neighbourhood. Many ing (asynchronous and non-deterministic) finite-state ma-
distributed algorithms are known with the running time chines can[41] reach a deadlock. This problem is PSPACE-
much smaller than D rounds, and understanding which complete, i.e., it is decidable, but it is not likely that
problems can be solved by such algorithms is one of the there is an efficient (centralised, parallel or distributed)
central research questions of the field. [34] algorithm that solves the problem in the case of large net-
works.
Other commonly used measures are the total number of
bits transmitted in the network (cf. communication com-
plexity).
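A logical clock of the kind mentioned above can be illustrated in a few lines. The sketch below is a plain Lamport clock: each process ticks its counter on local events, stamps outgoing messages, and on receipt advances to just past the largest timestamp it has seen, which yields an ordering consistent with the happened-before relation. The class and method names are invented for this illustration.

    class LamportClock:
        """Minimal Lamport logical clock for one process."""

        def __init__(self):
            self.time = 0

        def tick(self):                    # local event
            self.time += 1
            return self.time

        def send(self):                    # timestamp attached to an outgoing message
            return self.tick()

        def receive(self, msg_time):       # merge the sender's timestamp
            self.time = max(self.time, msg_time) + 1
            return self.time

    # If a sends to b, b's receive time is greater than a's send time,
    # so causally related events are ordered consistently.
    a, b = LamportClock(), LamportClock()
    t = a.send()
    print(t, b.receive(t))                 # 1 2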
12.6.5 Properties of distributed systems

So far the focus has been on designing a distributed system that solves a given problem. A complementary research problem is studying the properties of a given distributed system.

The halting problem is an analogous example from the field of centralised computation: we are given a computer program and the task is to decide whether it halts or runs forever. The halting problem is undecidable in the general case, and naturally understanding the behaviour of a computer network is at least as hard as understanding the behaviour of one computer.

However, there are many interesting special cases that are decidable. In particular, it is possible to reason about the behaviour of a network of finite-state machines. One example is telling whether a given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a deadlock. This problem is PSPACE-complete,[41] i.e., it is decidable, but it is not likely that there is an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large networks.

12.7 Coordinator election

Coordinator election (sometimes called leader election) is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are either unaware which node will serve as the "coordinator" (or leader) of the task, or unable to communicate with the current coordinator. After a coordinator election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task coordinator.

The network nodes communicate among themselves in order to decide which of them will get into the "coordinator" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the coordinator.

The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new
token in a token ring network in which the token has been lost.

Coordinator election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira[42] for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing.

Many other algorithms were suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the coordinator election algorithm was suggested by Korach, Kutten, and Moran.[43]

In order to perform coordination, distributed systems employ the concept of coordinators. The coordinator election problem is to choose a process from among a group of processes on different processors in a distributed system to act as the central coordinator. Several central coordinator election algorithms exist.[44]

12.7.1 Bully algorithm

When using the Bully algorithm, any process sends a message to the current coordinator. If there is no response within a given time limit, the process tries to elect itself as leader.
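The idea can be sketched in a few lines. In the simplified simulation below, a process that notices the coordinator is unresponsive asks every process with a higher identifier to take over; if none of them is alive it declares itself coordinator, otherwise the highest-numbered live process wins. Real implementations work with timeouts and network messages; the in-memory "alive" set here is a stand-in for that, so treat it only as an illustration of the election rule, not as a faithful Bully implementation.

    def bully_election(process_ids, alive, initiator):
        """Return the new coordinator id, as seen from `initiator`.

        process_ids: all known process ids (comparable, unique)
        alive:       set of ids that currently respond to messages
        initiator:   id of the process that detected the coordinator failure
        """
        higher = [p for p in process_ids if p > initiator and p in alive]
        if not higher:
            return initiator            # nobody bigger answered: the bully wins
        # Otherwise the election is handed to the higher processes; the
        # highest live id eventually announces itself as coordinator.
        return max(higher)

    # Processes 1..5; 5 (the old coordinator) has crashed, 2 starts the election.
    print(bully_election(range(1, 6), alive={1, 2, 3, 4}, initiator=2))   # 4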
• Client–server: Smart client code contacts the server for data, then formats and displays it to the user. Input at the client is committed back to the server when it represents a permanent change.

• 3-tier architecture: Three-tier systems move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are 3-tier.

• n-tier architecture: n-tier refers typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.

• Highly coupled (clustered): refers typically to a cluster of machines that closely work together, running a shared process in parallel. The task is subdivided into parts that are made individually by each one and then put back together to make the final result.

• Peer-to-peer: an architecture where there is no special machine or machines that provide a service or manage the network resources. Instead, all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and servers.

• Space based: refers to an infrastructure that creates the illusion (virtualization) of one single address space. Data are transparently replicated according to application needs. Decoupling in time, space and reference is achieved.

• Distributed cache

• Distributed operating system
• Edsger W. Dijkstra Prize in Distributed Computing

[6] Ghosh (2007), p. 10.
[7] Andrews (2000), p. 8–9, 291. Dolev (2000), p. 5. Ghosh (2007), p. 3. Lynch (1996), p. xix, 1. Peleg (2000), p. xv.
[8] Andrews (2000), p. 291. Ghosh (2007), p. 3. Peleg (2000), p. 4.
[9] Ghosh (2007), p. 3–4. Peleg (2000), p. 1.
[10] Ghosh (2007), p. 4. Peleg (2000), p. 2.
[11] Ghosh (2007), p. 4, 8. Lynch (1996), p. 2–3. Peleg (2000), p. 4.
[12] Lynch (1996), p. 2. Peleg (2000), p. 1.
[13] Ghosh (2007), p. 7. Lynch (1996), p. xix, 2. Peleg (2000), p. 4.
[14] Ghosh (2007), p. 10. Keidar (2008).
[15] Lynch (1996), p. xix, 1–2. Peleg (2000), p. 1.
[16] Peleg (2000), p. 1.
[17] Papadimitriou (1994), Chapter 15. Keidar (2008).
[35] Lynch (1996), Sections 5–7. Ghosh (2007), Chapter 13.
[36] Lynch (1996), p. 99–102. Ghosh (2007), p. 192–193.
[37] Dolev (2000). Ghosh (2007), Chapter 17.
[38] Lynch (1996), Section 16. Peleg (2000), Section 6.
[39] Lynch (1996), Section 18. Ghosh (2007), Sections 6.2–6.3.
[40] Ghosh (2007), Section 6.4.
[41] Papadimitriou (1994), Section 19.3.
[42] R. G. Gallager, P. A. Humblet, and P. M. Spira (January 1983). "A Distributed Algorithm for Minimum-Weight Spanning Trees". ACM Transactions on Programming Languages and Systems 5 (1): 66–77. doi:10.1145/357195.357200.
[43] Ephraim Korach, Shay Kutten, Shlomo Moran (1990). "A Modular Technique for the Design of Efficient Distributed Leader Finding Algorithms". ACM Transactions on Programming Languages and Systems 12 (1): 84–101. doi:10.1145/77606.77610.
[44] Hamilton, Howard. "Distributed Algorithms". Retrieved 2013-03-03.
[45] Lind P, Alm M (2006), "A database-centric virtual chemistry system", J Chem Inf Model 46 (3): 1034–9, doi:10.1021/ci050360b, PMID 16711722.

12.11 References

Books

• Andrews, Gregory R. (2000), Foundations of Multithreaded, Parallel, and Distributed Programming, Addison–Wesley, ISBN 0-201-35752-6.
• Arora, Sanjeev; Barak, Boaz (2009), Computational Complexity – A Modern Approach, Cambridge, ISBN 978-0-521-42426-4.
• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990), Introduction to Algorithms (1st ed.), MIT Press, ISBN 0-262-03141-8.
• Dolev, Shlomi (2000), Self-Stabilization, MIT Press, ISBN 0-262-04178-2.
• Elmasri, Ramez; Navathe, Shamkant B. (2000), Fundamentals of Database Systems (3rd ed.), Addison–Wesley, ISBN 0-201-54263-3.
• Ghosh, Sukumar (2007), Distributed Systems – An Algorithmic Approach, Chapman & Hall/CRC, ISBN 978-1-58488-564-1.
• Lynch, Nancy A. (1996), Distributed Algorithms, Morgan Kaufmann, ISBN 1-55860-348-4.
• Herlihy, Maurice P.; Shavit, Nir N. (2008), The Art of Multiprocessor Programming, Morgan Kaufmann, ISBN 0-12-370591-6.
• Papadimitriou, Christos H. (1994), Computational Complexity, Addison–Wesley, ISBN 0-201-53082-1.
• Peleg, David (2000), Distributed Computing: A Locality-Sensitive Approach, SIAM, ISBN 0-89871-464-8.

Articles

• Cole, Richard; Vishkin, Uzi (1986), "Deterministic coin tossing with applications to optimal parallel list ranking", Information and Control 70 (1): 32–53, doi:10.1016/S0019-9958(86)80023-7.
• Keidar, Idit (2008), "Distributed computing column 32 – The year in review", ACM SIGACT News 39 (4): 53–54, doi:10.1145/1466390.1466402.
• Linial, Nathan (1992), "Locality in distributed graph algorithms", SIAM Journal on Computing 21 (1): 193–201, doi:10.1137/0221015.
• Naor, Moni; Stockmeyer, Larry (1995), "What can be computed locally?", SIAM Journal on Computing 24 (6): 1259–1277, doi:10.1137/S0097539793254571.

Web sites

• Godfrey, Bill (2002). "A primer on distributed computing".
• Peter, Ian (2004). "Ian Peter's History of the Internet". Retrieved 2009-08-04.

12.12 Further reading

Books

• Coulouris, George et al. (2011), Distributed Systems: Concepts and Design (5th Edition), Addison-Wesley, ISBN 0-132-14301-1.
• Attiya, Hagit and Welch, Jennifer (2004), Distributed Computing: Fundamentals, Simulations, and Advanced Topics, Wiley-Interscience, ISBN 0-471-45324-2.
• Faber, Jim (1998), Java Distributed Computing, O'Reilly.
• Garg, Vijay K. (2002), Elements of Distributed Computing, Wiley-IEEE Press, ISBN 0-471-03600-5.
• Tel, Gerard (1994), Introduction to Distributed Algorithms, Cambridge University Press.
• Chandy, Mani et al., Parallel Program Design.

Articles

• Keidar, Idit; Rajsbaum, Sergio, eds. (2000–2009), "Distributed computing column", ACM SIGACT News.
• Birrell, A. D.; Levin, R.; Schroeder, M. D.; Needham, R. M. (April 1982). "Grapevine: An exercise in distributed computing". Communications of the ACM 25 (4): 260–274. doi:10.1145/358468.358487.

Conference Papers

• C. Rodríguez, M. Villagra and B. Barán, Asynchronous team algorithms for Boolean Satisfiability, Bionetics2007, pp. 66–69, 2007.
Service-oriented architecture
2. The metadata should be provided in a form that system designers can understand and manage with a reasonable expenditure of cost and effort.

• Service reusability: Logic is divided into services with the intention of promoting reuse.

• Service autonomy: Services have control over the logic they encapsulate, from a design-time and a run-time perspective.

• Service statelessness: Services minimize resource consumption by deferring the management of state information when necessary.[16]

• Service discoverability: Services are supplemented with communicative metadata by which they can be effectively discovered and interpreted.

• Service composability: Services are effective composition participants, regardless of the size and complexity of the composition.

• Service granularity: A design consideration to provide optimal scope and the right granular level of the business functionality in a service operation.

• Service normalization: Services are decomposed and/or consolidated to a level of normal form to minimize redundancy. In some cases, services are denormalized for specific purposes, such as performance optimization, access, and aggregation.[17]

• Service optimization: All else being equal, high-quality services are generally preferable to low-quality ones.

• Service relevance: Functionality is presented at a granularity recognized by the user as a meaningful service.

• Service encapsulation: Many services are consolidated for use under the SOA. Often such services were not planned to be under SOA.

• Service location transparency: This refers to the ability of a service consumer to invoke a service regardless of its actual location in the network. This also recognizes the discoverability property (one of the core principles of SOA) and the right of a consumer to access the service. Often, the idea of service virtualization also relates to location transparency. This is where the consumer simply calls a logical service while a suitable SOA-enabling runtime infrastructure component, commonly a service bus, maps this logical service call to a physical service.

4. Governance - IT strategy is governed to each horizontal layer to achieve the required operating and capability model.

The standardized service contract design principle keeps service contracts independent from their implementation. The service contract needs to be documented to formalize the processing resources required by the individual service capabilities. Although it is beneficial to document details about the service architecture, the service abstraction design principle dictates that any internal details about the service are invisible to its consumers so that they do not develop any unstated couplings. The service architecture serves as a point of reference for evolving the service or gauging the impact of any change in the service.

13.5.2 Service composition architecture

One of the core characteristics of services developed using the service-orientation design paradigm is that they are composition-centric. Services with this characteristic can potentially address novel requirements by recomposing the same services in different configurations. Service composition architecture is itself a composition of the individual architectures of the participating services. In the light of the Service Abstraction principle, this type of architecture only documents the service contract and any published service-level agreement (SLA); internal details of each service are not included.

If a service composition is a part of another (parent) composition, the parent composition can also be referenced in the child service composition. The design of a service composition also includes any alternate paths, such as error conditions, which may introduce new services into the current service composition.

Service composition is also a key technique in software integration, including enterprise software integration, business process composition and workflow composition.

13.5.3 Service inventory architecture

A service inventory is composed of services that automate business processes. It is important to account for the combined processing requirements of all services within the service inventory. Documenting the requirements of services, independently from the business processes that they automate, helps identify processing bottlenecks. The
service inventory architecture is documented from the service inventory blueprint, so that service candidates[18] can be redesigned before their implementation.

13.5.4 Service-oriented enterprise architecture

This umbrella architecture incorporates service, composition, and inventory architectures, plus any enterprise-wide technological resources accessed by these architectures, e.g. an ERP system. This can be further supplemented by including enterprise-wide standards that apply to the aforementioned architecture types. Any segments of the enterprise that are not service-oriented can also be documented in order to consider transformation requirements if a service needs to communicate with the business processes automated by such segments. SOA's main goal is to deliver agility to business

Elements of SOA, by Dirk Krafzig, Karl Banke, and Dirk Slama[21]

SOA meta-model, The Linthicum Group, 2007

SOA enables the development of applications that are built by combining loosely coupled and interoperable services.[22]

These services inter-operate based on a formal definition (or contract, e.g., WSDL) that is independent of the underlying platform and programming language. The interface definition hides the implementation of the language-specific service. SOA-based systems can therefore function independently of development technologies and platforms (such as Java, .NET, etc.). Services written in C# running on .NET platforms and services written in Java running on Java EE platforms, for example, can

look-up requests, number of listings or accuracy of the listings. The Universal Description Discovery and Integration (UDDI) specification defines a way to publish and discover information about Web services. Other service broker technologies include (for example) ebXML (Electronic Business using eXtensible Markup Language) and those based on the ISO/IEC 11179 Metadata Registry (MDR) standard.

2. Service consumer: The service consumer or web service client locates entries in the broker registry using various find operations and then binds to the service provider in order to invoke one of its web services. Whichever service the service consumers need, they have to find it in the broker registry, bind to the respective service, and then use it. They can access multiple services if the provider offers multiple services.

• SORCER

SOA can support integration and consolidation activities within complex enterprise systems, but SOA does not specify or provide a methodology or framework for documenting capabilities or services.

We can distinguish the Service Object-Oriented Architecture (SOOA), where service providers are network (call/response) objects accepting remote invocations, from the Service Protocol Oriented Architecture (SPOA), where a communication (read/write) protocol is fixed and known beforehand by the provider and requestor. Based on that protocol and a service description obtained from the service registry, the requestor can bind to the service provider by creating its own proxy used for remote communication over the fixed protocol. If a service provider registers its service description by name, the requestors have to know the name of the service beforehand. In SOOA, a proxy—an object implementing the same service interfaces as its service provider—is registered with the registries and it is always ready for use by requestors. Thus, in SOOA, the service provider owns and publishes the proxy as the active surrogate object with a codebase annotation, e.g., URLs to the code defining proxy behavior (Jini ERI). In SPOA, by contrast, a passive service description is registered (e.g., an XML document in WSDL for Web services, or an interface description in IDL for CORBA); the requestor then has to generate the proxy (a stub forwarding calls to a provider) based on a service description and the fixed communication protocol (e.g., SOAP in Web services, IIOP in CORBA). This is referred to as a bind operation. The proxy binding operation is not required in SOOA since the requestor holds the active surrogate object obtained via the registry. The surrogate object is already bound to the provider that
registered it with its appropriate network configuration and its code annotations. Web services, OGSA, RMI, and CORBA services cannot change the communication protocol between requestors and providers, while the SOOA approach is protocol neutral.[23]

High-level languages such as BPEL and specifications such as WS-CDL and WS-Coordination extend the service concept by providing a method of defining and supporting orchestration of fine-grained services into more coarse-grained business services, which architects can in turn incorporate into workflows and business processes implemented in composite applications or portals.[24]

Service-oriented modeling[8] is a SOA framework that identifies the various disciplines that guide SOA practitioners to conceptualize, analyze, design, and architect their service-oriented assets. The Service-oriented modeling framework (SOMF) offers a modeling language and a work structure or "map" depicting the various components that contribute to a successful service-oriented modeling approach. It illustrates the major elements that identify the "what to do" aspects of a service development scheme. The model enables practitioners to craft a project plan and to identify the milestones of a service-oriented initiative. SOMF also provides a common modeling notation to address alignment between business and IT organizations.

13.9 Organizational benefits

Some enterprise architects believe that SOA can help businesses respond more quickly and more cost-effectively to changing market conditions.[25] This style of architecture promotes reuse at the macro (service) level rather than micro (classes) level. It can also simplify interconnection to—and usage of—existing IT (legacy) assets.

With SOA, the idea is that an organization can look at a problem holistically. A business has more overall control. Theoretically there would not be a mass of developers using whatever tool sets might please them, but rather they would be coding to a standard that is set within the business. They can also develop enterprise-wide SOA that encapsulates a business-oriented infrastructure. SOA has also been illustrated as a highway system providing efficiency for car drivers: if everyone had a car, but there was no highway anywhere, things would be limited and disorganized in any attempt to get anywhere quickly or efficiently. IBM Vice President of Web Services Michael Liebow says that SOA "builds highways".[26]

In some respects, SOA could be regarded as an architectural evolution rather than as a revolution. It captures many of the best practices of previous software architectures. In communications systems, for example, little development of solutions that use truly static bindings to talk to other equipment in the network has taken place. By formally embracing a SOA approach, such systems can position themselves to stress the importance of well-defined, highly inter-operable interfaces.[27]

Some have questioned whether SOA simply revives concepts like modular programming (1970s), event-oriented design (1980s), or interface/component-based design (1990s). SOA promotes the goal of separating users (consumers) from the service implementations. Services can therefore be run on various distributed platforms and be accessed across networks. This can also maximize reuse of services.

A service comprises a stand-alone unit of functionality available only via a formally defined interface. Services can be some kind of "nano-enterprises" that are easy to produce and improve. Also services can be "mega-corporations" constructed as the coordinated work of subordinate services.

A mature rollout of SOA effectively defines the API of an organization.

Reasons for treating the implementation of services as separate projects from larger projects include:

1. Separation promotes the concept to the business that services can be delivered quickly and independently from the larger and slower-moving projects common in the organization. The business starts understanding systems and simplified user interfaces calling on services. This advocates agility. That is to say, it fosters business innovations and speeds up time-to-market.[28]

2. Separation promotes the decoupling of services from consuming projects. This encourages good design insofar as the service is designed without knowing who its consumers are.

3. Documentation and test artifacts of the service are not embedded within the detail of the larger project. This is important when the service needs to be reused later.

An indirect benefit of SOA involves dramatically simplified testing. Services are autonomous, stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the implementation.

If an organization possesses appropriately defined test data, then a corresponding stub is built that reacts to the test data when a service is being built. A full set of regression tests, scripts, data, and responses is also captured for the service. The service can be tested as a 'black box' using existing stubs corresponding to the services it calls. Test environments can be constructed where the primitive and out-of-scope services are stubs, while the remainder of the mesh is test deployments of full services. As each interface is fully documented with its own full set of regression test documentation, it becomes simple to identify
problems in test services. Testing evolves to merely validate that the test service operates according to its documentation, and finds gaps in documentation and test cases of all services within the environment. Managing the data state of idempotent services is the only complexity.
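The stub-based testing described above amounts to replacing every service a consumer calls with a canned stand-in and checking the consumer against its documented interface. A minimal sketch of that idea, using Python's standard unittest tooling and entirely hypothetical service names, might look like this:

    import unittest
    from unittest.mock import Mock

    def order_summary(order_id, pricing_service):
        """Consumer under test: composes a reply from a called service."""
        price = pricing_service.get_price(order_id)     # the cross-service call
        return {"order": order_id, "total": price}

    class OrderSummaryTest(unittest.TestCase):
        def test_against_stub(self):
            stub = Mock()                                # stand-in for the real service
            stub.get_price.return_value = 42.0           # canned regression data
            result = order_summary("A-1", pricing_service=stub)
            self.assertEqual(result, {"order": "A-1", "total": 42.0})
            stub.get_price.assert_called_once_with("A-1")

    if __name__ == "__main__":
        unittest.main()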
Examples may prove useful to aid in documenting a service to the level where it becomes useful. The documentation of some APIs within the Java Community Process provides good examples. As these are exhaustive, staff would typically use only important subsets. The 'ossjsa.pdf' file within JSR-89 exemplifies such a file.[29]

13.10 Challenges

One obvious and common challenge faced involves managing services metadata. SOA-based environments can include many services that exchange messages to perform tasks. Depending on the design, a single application may generate millions of messages. Managing and providing information on how services interact can become complex. This becomes even more complicated when these services are delivered by different organizations within the company or even different companies (partners, suppliers, etc.). This creates huge trust issues across teams; hence SOA governance comes into the picture.

Another challenge involves the lack of testing in SOA space. There are no sophisticated tools that provide testability of all headless services (including message and database services along with web services) in a typical architecture. Lack of horizontal trust requires that both producers and consumers test services on a continuous basis. SOA's main goal is to deliver agility to businesses. Therefore, it is important to invest in a testing framework (build it or buy it) that would provide the visibility required to find the culprit in the architecture. Business agility requires SOA services to be controlled by the business goals and directives as defined in the Business Motivation Model (BMM).[30]

Interoperability becomes an important aspect of SOA implementations. The WS-I organization has developed the Basic Profile (BP) and Basic Security Profile (BSP) to enforce compatibility.[31] WS-I has designed testing tools to help assess whether web services conform to WS-I profile guidelines. Additionally, another charter has been established to work on the Reliable Secure Profile.

Significant vendor hype surrounds SOA, which can create exaggerated expectations. Product stacks continue to evolve as early adopters test the development and runtime products with real-world problems. SOA does not guarantee reduced IT costs, improved systems agility or shorter time to market. Successful SOA implementations may realize some or all of these benefits depending on the quality and relevance of the system architecture and design.[32][33]

Internal IT delivery organizations routinely initiate SOA efforts, and some do a poor job of introducing SOA concepts to a business, with the result that SOA remains misunderstood within that business. The adoption of SOA then starts to meet IT delivery needs instead of those of the business, resulting in an organization with, for example, superlative laptop provisioning services, instead of one that can quickly respond to market opportunities. Business leadership also frequently becomes convinced that the organization is executing well on SOA.

One of the most important benefits of SOA is its ease of reuse. Therefore, accountability and funding models must ultimately evolve within the organization. A business unit needs to be encouraged to create services that other units will use. Conversely, units must be encouraged to reuse services. This requires a few new governance components:

• Each business unit creating services must have an appropriate support structure in place to deliver on its service-level obligations, and to support enhancing existing services strictly for the benefit of others. This is typically quite foreign to business leaders.
and SOA," with some stating that Web 2.0 applications are a realization of SOA composite and business applications.[47]

13.13.2 Web 2.0

Tim O'Reilly coined the term "Web 2.0" to describe a perceived, quickly growing set of web-based applications.[48] A topic that has experienced extensive coverage involves the relationship between Web 2.0 and Service-Oriented Architectures (SOAs).

SOA is the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. The notion of complexity-hiding and reuse, but also the concept of loosely coupling services, has inspired researchers to elaborate on similarities between the two philosophies, SOA and Web 2.0, and their respective applications. Some argue Web 2.0 and SOA have significantly different elements and thus cannot be regarded as "parallel philosophies", whereas others consider the two concepts as complementary and regard Web 2.0 as the global SOA.[45]

The philosophies of Web 2.0 and SOA serve different user needs and thus expose differences with respect to the design and also the technologies used in real-world applications. However, as of 2008, use cases demonstrated the potential of combining technologies and principles of both Web 2.0 and SOA.[45]

In an "Internet of Services", all people, machines, and goods will have access via the network infrastructure of tomorrow. The Internet will thus offer services for all areas of life and business, such as virtual insurance, online banking and music, and so on. Those services will require a complex services infrastructure including service-delivery platforms bringing together demand and supply. Building blocks for the Internet of Services include SOA, Web 2.0 and semantics on the technology side, as well as novel business models and approaches to systematic and community-based innovation.[49]

Even though Oracle indicates that Gartner is coining a new term, Gartner analysts indicate that they call this advanced SOA and refer to it as "SOA 2.0".[50] Most of the major middleware vendors (e.g., Red Hat, webMethods, TIBCO Software, IBM, Sun Microsystems, and Oracle) have had some form of SOA 2.0 attributes for years.

13.13.3 Digital nervous system

SOA implementations have been described as representing a piece of the larger vision known as the digital nervous system[51][52] or the Zero Latency Enterprise.[53]

13.14 See also

• Architecture of Interoperable Information Systems
• Autonomous decentralized system
• Business-agile enterprise
• Business-driven development
• Business Intelligence 2.0
• Business-oriented architecture
• Cloud computing
• Communications-enabled application
• Comparison of business integration software
• Component business model
• Enterprise Mashup Markup Language (EMML)
• Enterprise messaging system
• Enterprise service bus
• Event-driven programming
• HATEOAS (Hypermedia as the Engine of Application State)
• iLAND project
• Library Oriented Architecture
• Message-oriented middleware
• Microservices
• Open ESB
• Platform as a service
• Resource-oriented architecture
• Robot as Service
• Search-oriented architecture
• Semantic service-oriented architecture
• Service layer
• Service-oriented modeling
• Service-oriented architecture implementation framework
• Service (systems architecture)
• Service virtualization
• SOA governance
• SOALIB
• SORCER
• Web-oriented architecture
13.15 References

[1] Chapter 1: Service Oriented Architecture (SOA). Msdn.microsoft.com. Retrieved on 2014-05-30.
[2] "What Is SOA?". opengroup. Retrieved 2013-08-19.
[3] Velte, Anthony T. (2010). Cloud Computing: A Practical Approach. McGraw Hill. ISBN 978-0-07-162694-1.
[4] SOA Reference Model definition
[5] "Service Oriented Architecture : What Is SOA?". opengroup.
[6] Channabasavaiah, Holley and Tuggle, Migrating to a service-oriented architecture, IBM DeveloperWorks, 16 December 2003.
[7] "SOA Reference Architecture Technical Standard : Basic Concepts". opengroup. Retrieved 2014-10-10.
[8] Bell, Michael (2008). "Introduction to Service-Oriented Modeling". Service-Oriented Modeling: Service Analysis, Design, and Architecture. Wiley & Sons. p. 3. ISBN 978-0-470-14111-3.
[9] Bell, Michael (2010). SOA Modeling Patterns for Service-Oriented Discovery and Analysis. Wiley & Sons. p. 390. ISBN 978-0-470-48197-4.
[10] Erl, Thomas. About the Principles. Serviceorientation.org, 2005–06
[11] "Application Platform Strategies Blog: SOA is Dead; Long Live Services". Apsblog.burtongroup.com. 2009-01-05. Retrieved 2012-08-13.
[12] Yvonne Balzer, Improve your SOA project plans, IBM, 16 July 2004
[13] Microsoft Windows Communication Foundation team (2012). "Principles of Service Oriented Design". msdn.microsoft.com. Retrieved September 3, 2012.
[14] Principles by Thomas Erl of SOA Systems Inc.: eight specific service-orientation principles
[15] M. Hadi Valipour, Bavar AmirZafari, Kh. Niki Maleki, Negin Daneshpour, A Brief Survey of Software Architecture Concepts and Service Oriented Architecture, in Proceedings of 2nd IEEE International Conference on Computer Science and Information Technology, ICCSIT'09, pp 34-38, Aug 2009, China.
[16] Services Oriented Architecture (SOA) - Jargon Buster. Lansa.com. Retrieved on 2014-05-30.
[17] Tony Shan, "Building a Service-Oriented eBanking Platform", scc, pp. 237-244, First IEEE International Conference on Services Computing (SCC'04), 2004
[18] "Service Candidate". ServiceOrientation.com. Retrieved 17 October 2014.
[19] E. Oliveros et al. (2012), Web Service Specifications Relevant for Service Oriented Infrastructures, Achieving Real-Time in Distributed Computing: From Grids to Clouds, IGI Global, pp. 174–198, doi:10.4018/978-1-60960-827-9.ch010
[20] "SOAP Version 1.2 (W3C)" (in Japanese). W3.org. Retrieved 2012-08-13.
[21] Enterprise SOA. Prentice Hall, 2005
[22] Cardoso, Jorge; Sheth, Amit P. (2006). "Foreword". Semantic Web Services, Processes and Applications. SEMANTIC WEB AND BEYOND: Computing for Human Experience. Foreword by Frank Leymann. Springer. xxi. ISBN 978-0-387-30239-3. The corresponding architectural style is called "service-oriented architecture": fundamentally, it describes how service consumers and service providers can be decoupled via discovery mechanisms resulting in loosely coupled systems. Implementing a service-oriented architecture means to deal with heterogeneity and interoperability concerns.
[23] Waldo, Jim (2002). "The Source". Sun Microsystems. Retrieved 2013-12-11.
[24] "Service selection and workflow mapping for Grids: an approach exploiting quality-of-service information". Concurrency and Computation: Practice and Experience (Wiley) 21 (6): 739–766. 22 July 2008. doi:10.1002/cpe.1343.
[25] Christopher Koch, A New Blueprint For The Enterprise, CIO Magazine, March 1, 2005
[26] Elizabeth Millard. "Building a Better Process". Computer User. January 2005. Page 20.
[27] Bieberstein et al., Service-Oriented Architecture (SOA) Compass: Business Value, Planning, and Enterprise Roadmap (The developerWorks Series) (Hardcover), IBM Press books, 2005, ISBN 978-0131870024
[28] Brayan Zimmerli, Business Benefits of SOA, University of Applied Science of Northwestern Switzerland, School of Business, 11 November 2009
[29] https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=7854-oss_service_activation-1.0-fr-spec-oth-JSpec@CDS-CDS_Developer
[30] "From The Business Motivation Model (BMM) To Service Oriented Architecture (SOA)". Jot.fm. Retrieved 2013-06-15.
[31] WS-I Basic Profile
[32] Is There Real Business Value Behind the Hype of SOA?, Computerworld, June 19, 2006.
[33] See also: WS-MetadataExchange, OWL-S
[34] "4CaaSt marketplace: An advanced business environment for trading cloud services". Future Generation Computer Systems (Elsevier) 41: 104–120. 2014. doi:10.1016/j.future.2014.02.020.
[35] The Overlapping Worlds of SaaS and SOA
[36] McKendrick, Joe. "Bray: SOA too complex; 'just vendor BS'". ZDNet.
[37] M. Riad, Alaa; E. Hassan, Ahmed; F. Hassan, Qusay (2009). "Investigating Performance of XML Web Services in Real-Time Business Systems". Journal of Computer Science & Systems Biology 02 (05): 266–271. doi:10.4172/jcsb.1000041.
[38] Index XML documents with VTD-XML
[39] The Performance Woe of Binary XML
[44] Dion Hinchcliffe, Is Web 2.0 The Global SOA?, SOA Web Services Journal, 28 October 2005
[50] Yefim Natis & Roy Schulte, Advanced SOA for Advanced Enterprise Projects, Gartner, July 13, 2006

• SOA reference architecture from IBM
• SOA Practitioners Guide Part 2: SOA Reference Architecture
• SOA Practitioners Guide Part 3: Introduction to Services Lifecycle
14 Massively multiplayer online game

A massively multiplayer online game (also called MMO and MMOG) is a multiplayer video game which is capable of supporting large numbers of players simultaneously. By necessity, they are played on the Internet.[1] MMOs usually have at least one persistent world; however, some games differ. These games can be found for most network-capable platforms, including the personal computer, video game console, or smartphones and other mobile devices.

MMOGs can enable players to cooperate and compete with each other on a large scale, and sometimes to interact meaningfully with people around the world. They include a variety of gameplay types, representing many video game genres.

14.1 History

Main article: History of massively multiplayer online games

The most popular type of MMOG, and the subgenre that pioneered the category, which was launched in late April 1999, is the massively multiplayer online role-playing game (MMORPG), which descended from university mainframe computer MUD and adventure games such as Rogue and Dungeon on the PDP-10. These games predate the commercial gaming industry and the Internet, but still featured persistent worlds and other elements of MMOGs still used today.

The first graphical MMOG, and a major milestone in the creation of the genre, was the multiplayer flight combat simulation game Air Warrior by Kesmai on the GEnie online service, which first appeared in 1986. Kesmai later added 3D graphics to the game, making it the first 3D MMO.

Commercial MMORPGs gained acceptance in the late 1980s and early 1990s. The genre was pioneered by the GemStone series on GEnie, also created by Kesmai, and Neverwinter Nights, the first such game to include graphics, which debuted on AOL in 1991.[2]

As video game developers applied MMOG ideas to other computer and video game genres, new acronyms started to develop, such as MMORTS. MMOG emerged as a generic term to cover this growing class of games.

The debuts of The Realm Online, Meridian 59 (the first 3D MMORPG), Ultima Online, Underlight and EverQuest in the late 1990s popularized the MMORPG genre. The growth in technology meant that where Neverwinter Nights in 1991 had been limited to 50 simultaneous players (a number that grew to 500 by 1995), by the year 2000 a multitude of MMORPGs were each serving thousands of simultaneous players and led the way for games such as World of Warcraft and EVE Online.

Despite the genre's focus on multiplayer gaming, AI-controlled characters are still common. NPCs and mobs who give out quests or serve as opponents are typical in MMORPGs. AI-controlled characters are not as common in action-based MMOGs.

The popularity of MMOGs was mostly restricted to the computer game market until the sixth-generation consoles, with the launch of Phantasy Star Online on Dreamcast and the emergence and growth of the online service Xbox Live. There have been a number of console MMOGs, including EverQuest Online Adventures (PlayStation 2) and the multiconsole Final Fantasy XI. On PCs, the MMOG market has always been dominated by successful fantasy MMORPGs.

MMOGs have only recently begun to break into the mobile phone market. The first, Samurai Romanesque, set in feudal Japan, was released in 2001 on NTT DoCoMo's iMode network in Japan.[3] More recent developments are CipSoft's TibiaME and Biting Bit's MicroMonster, which features online and Bluetooth multiplayer gaming. SmartCell Technology is in development of Shadow of Legend, which will allow gamers to continue their game on their mobile device when away from their PC.

Science fiction has also been a popular theme, featuring games such as Mankind, Anarchy Online, Eve Online, Star Wars Galaxies and The Matrix Online.

MMOGs emerged from the hard-core gamer community to the mainstream strongly in December 2003 with an analysis in the Financial Times measuring the value of the virtual property in the then-largest MMOG, Everquest, to
result in a per-capita GDP of 2,266 dollars, which would have placed the virtual world of Everquest as the 77th wealthiest nation, on par with Croatia, Ecuador, Tunisia or Vietnam.

Happy Farm is the most popular MMOG with 228 million active users, and 23 million daily users (daily active users logging onto the game within a 24-hour period).[4]

World of Warcraft is a dominant MMOG in the world with more than 50% of the subscribing player base,[5] and with 8-9 million monthly subscribers worldwide. The subscriber base dropped by 1 million after the expansion Wrath of the Lich King, bringing it to 9 million subscribers,[6] though it remains the most popular Western title among MMOGs. In 2008, Western consumer spending on World of Warcraft represented a 58% share of the subscription MMOG market.[7] The title has generated over $2.2 billion in cumulative consumer spending on subscriptions since 2005.[7]

14.2 Virtual economies

Main article: Virtual economy

Within a majority of the MMOGs created, there is virtual currency where the player can earn and accumulate money. The uses for such virtual currency are numerous and vary from game to game. The virtual economies created within MMOGs often blur the lines between real and virtual worlds. The result is often seen as an unwanted interaction between the real and virtual economies by the players and the provider of the virtual world. This practice (economy interaction) is mostly seen in this genre of games. The two seem to come hand in hand, with even the earliest MMOGs such as Ultima Online having this kind of trade: real money for virtual things.

The importance of having a working virtual economy within an MMOG is increasing as they develop. A sign of this is CCP Games hiring the first real-life economist for its MMOG Eve Online to assist and analyze the virtual economy and production within this game.

This interaction between the virtual economy and the real economy is, in effect, an interaction between the company that created the game and the third-party companies that want to share in the profits and success of the game. This battle between companies is defended on both sides. The company originating the game and the intellectual property argue that this is in violation of the terms and agreements of the game, as well as a copyright violation, since they own the rights to how the online currency is distributed and through what channels. The case that the third-party companies and their customers defend is that they are selling and exchanging the time and effort put into the acquisition of the currency, not the digital information itself. They also point out that the nature of many MMOGs is that they require time commitments not available to everyone. As a result, without external acquisition of virtual currency, some players are severely limited in their ability to experience certain aspects of the game.

The practice of acquiring large volumes of virtual currency for the purpose of selling to other individuals for tangible and real currency is called gold farming. Many players who have poured in all of their personal effort resent that there is this exchange between real and virtual economies, since it devalues their own efforts. As a result, the term 'gold farmer' now has a very negative connotation within the games and their communities. This slander has unfortunately also extended itself to racial profiling and to in-game and forum insulting.

The reaction from many of the game companies varies. In games that are substantially less popular and have a small player base, enforcement against 'gold farming' appears less often. Companies in this situation are most likely concerned with their own sales and subscription revenue over the development of their virtual economy, as they most likely give a higher priority to the game's viability via adequate funding. Games with an enormous player base, and consequently much higher sales and subscription income, can take more drastic actions more often and in much larger volumes. This account banning could also serve as an economic gain for these large games, since it is highly likely that, due to demand, these 'gold farming' accounts will be recreated with freshly bought copies of the game. In December 2007, Jagex Ltd., in a successful effort to reduce real-world trading levels enough so they could continue using credit cards for subscriptions, introduced highly controversial changes to its MMOG RuneScape to counter the negative effects gold sellers were having on the game on all levels.[8]

The virtual goods revenue from online games and social networking exceeded US$7 billion in 2010.[9]

In 2011, it was estimated that up to 100,000 people in China and Vietnam were playing online games to gather gold and other items for sale to Western players.[10]

However, single-player gameplay in MMOs is quite viable, especially in what is called 'player vs environment' gameplay. This may result in the player being unable to experience all content, as many of the most significant and potentially rewarding game experiences are events which require large and coordinated teams to complete.

Most MMOGs also share other characteristics that make them different from other multiplayer online games. MMOGs host a large number of players in a single game world, and all of those players can interact with each other at any given time. Popular MMOGs might have thousands of players online at any given time, usually on company-owned servers. Non-MMOGs, such as Battlefield 1942 or Half-Life, usually have fewer than 50 players online (per server) and are usually played on private servers. Also, MMOGs usually do not have any
significant mods since the game must work on company servers. There is some debate if a high head-count is the requirement to be an MMOG. Some say that it is the size of the game world and its capability to support a large number of players that should matter. For example, despite technology and content constraints, most MMOGs can fit up to a few thousand players on a single game server at a time.

To support all those players, MMOGs need large-scale game worlds, and servers to connect players to those worlds. Some games have all of their servers connected so all players are connected in a shared universe. Others have copies of their starting game world put on different servers, called "shards", for a sharded universe. Shards got their name from Ultima Online, where in the story, the shards of Mondain's gem created the duplicate worlds. Still others will only use one part of the universe at any time. For example, Tribes (which is not an MMOG) comes with a number of large maps, which are played in rotation (one at a time). In contrast, the similar title PlanetSide allows all map-like areas of the game to be reached via flying, driving, or teleporting.

MMORPGs usually have sharded universes, as they provide the most flexible solution to the server load problem, but not always. For example, the space simulation Eve Online uses only one large cluster server peaking at over 60,000 simultaneous players.
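The sharding approach described above can be made concrete with a small example. The following Python sketch is only an illustration under assumed conventions; the Shard and ShardRouter names and the capacity figures are invented for this example and are not taken from any actual game. The idea shown is that a login service pins each character to the shard it was created on, so its persistent state stays in one copy of the world, while new characters are spread across whichever shard currently has the fewest players.

    # Hypothetical sketch of shard routing for a sharded-universe MMOG.
    # Each shard is an independent copy of the starting game world; a character
    # always returns to its "home" shard so its persistent state stays in one place.
    from dataclasses import dataclass, field

    @dataclass
    class Shard:
        name: str
        capacity: int                      # assumed per-shard player cap
        players: set = field(default_factory=set)

    class ShardRouter:
        def __init__(self, shards):
            self.shards = {s.name: s for s in shards}
            self.home_shard = {}           # character id -> name of its home shard

        def login(self, character_id):
            if character_id in self.home_shard:
                # Returning characters always go back to their home shard.
                shard = self.shards[self.home_shard[character_id]]
            else:
                # New characters go to the least-loaded shard that still has room.
                open_shards = [s for s in self.shards.values() if len(s.players) < s.capacity]
                if not open_shards:
                    raise RuntimeError("all shards are full")
                shard = min(open_shards, key=lambda s: len(s.players))
                self.home_shard[character_id] = shard.name
            shard.players.add(character_id)
            return shard.name

    router = ShardRouter([Shard("Atlantic", 3000), Shard("Pacific", 3000)])
    print(router.login("avatar-42"))       # placed on a shard with free capacity
    print(router.login("avatar-42"))       # the same shard on every later login

A real service would persist the home-shard mapping and character state in a database, but the routing decision itself can be this simple.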
There are also a few more common differences between MMOGs and other online games. Most MMOGs charge the player a monthly or bimonthly fee to have access to the game's servers, and therefore to online play. Also, the game state in an MMOG rarely ever resets. This means that a level gained by a player today will still be there tomorrow when the player logs back on. MMOGs often feature in-game support for clans and guilds. The members of a clan or a guild may participate in activities with one another, or show some symbols of membership to the clan or guild.

...Conductor platform included Fighter Wing, Air Attack, Fighter Ace, EverNight, Hasbro Em@ail Games (Clue, NASCAR and Soccer), Towers of Fallow, The SARAC Project, VR1 Crossroads and Rumble in the Void.

One of the bigger problems with the engines has been to handle the vast number of players. Since a typical server can handle around 10,000–12,000 players, 4000–5000 active simultaneously, dividing the game into several servers has up until now been the solution. This approach has also helped with technical issues, such as lag, that many players experience. Another difficulty, especially relevant to real-time simulation games, is time synchronization across hundreds or thousands of players. Many games rely on time synchronization to drive their physics simulation as well as their scoring and damage detection.
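The time-synchronization difficulty mentioned above is commonly attacked with an NTP-style offset estimate, although the article does not say which method any particular engine uses. The following Python sketch is therefore only a toy model; TRUE_OFFSET, one_exchange, estimate and the delay figures are invented for the illustration. The client timestamps a request (t0), the server stamps its receive and reply times (t1, t2), the client stamps the reply (t3), and the clock offset falls out of those four timestamps.

    # Toy model of NTP-style clock-offset estimation between a game client and server.
    import random

    TRUE_OFFSET = 1.250   # server clock runs 1.25 s ahead of the client (unknown to the client)

    def one_exchange(delay_up, delay_down):
        """Return the four timestamps of one ping/pong, each in its own clock."""
        send = 100.0                             # arbitrary moment on the client clock
        t0 = send                                # client sends the request
        t1 = send + delay_up + TRUE_OFFSET       # server receives it (server clock)
        t2 = t1                                  # server replies immediately (server clock)
        t3 = send + delay_up + delay_down        # client receives the reply (client clock)
        return t0, t1, t2, t3

    def estimate(t0, t1, t2, t3):
        offset = ((t1 - t0) + (t2 - t3)) / 2.0   # estimated server-minus-client offset
        rtt = (t3 - t0) - (t2 - t1)              # estimated round-trip network delay
        return offset, rtt

    # Averaging several samples reduces the error caused by asymmetric network jitter ("lag").
    samples = [estimate(*one_exchange(random.uniform(0.02, 0.08),
                                      random.uniform(0.02, 0.08)))
               for _ in range(10)]
    mean_offset = sum(o for o, _ in samples) / len(samples)
    print(f"estimated offset: {mean_offset:+.3f} s (true offset {TRUE_OFFSET:+.3f} s)")

Once every client knows its offset from the server clock, events such as hits or collisions can be ordered on a single timeline, which is what the physics, scoring and damage checks described above depend on.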
14.3 Game types

There are several types of massively multiplayer online games.

14.3.1 Role-playing

Bulletin board role-playing games

A large number of games are categorized as MMOBBGs (massively multiplayer online bulletin board games), also called MMOBBRPGs. These particular types of games are primarily made up of text and descriptions, although images are often used to enhance the game.

...This allows each player to accurately control multiple vehicles and pedestrians in racing or combat.

14.3.5 Simulations

...tic conditions, with one operator an incumbent fixed and mobile network operator, another a new entrant mobile operator, a third a fixed-line/internet operator, etc. Each team is measured by outperforming their rivals by market expectations of that type of player. Thus each player has drastically different goals, but within the simulation, any one team can win. Also, to ensure maximum intensity, only one team can win. Telecoms senior executives who have taken the Equilibrium/Arbitrage simulation say it is the most intense, and most useful, training they have ever experienced. It is typical of business use of simulators, in very senior management training/retraining.

Other online simulation games include War Thunder, Motor City Online, The Sims Online, and Jumpgate.

Sports

A massively multiplayer online sports game is a title where players can compete in some of the more traditional major league sports, such as football (soccer), basketball, baseball, hockey, golf or American football. According to GameSpot.com, Baseball Mogul Online was "the world's first massively multiplayer online sports game".[17] Other titles that qualify as MMOSGs have been around since the early 2000s, but only after 2010 did they start to receive the endorsements of some of the official major league associations and players.

Racing

...games based entirely on puzzle elements. It is usually set in a world where the players can access the puzzles around the world. Most games that are MMOPGs are hybrids with other genres. Castle Infinity was the first MMOG developed for children. Its gameplay falls somewhere between puzzle and adventure.

There are also massively multiplayer collectible card games: Alteil, Astral Masters and Astral Tournament. Other MMOCCGs might exist (Neopets has some CCG elements) but are not as well known.

Alternate reality games (ARGs) can be massively multiplayer, allowing thousands of players worldwide to cooperate in puzzle trails and mystery solving. ARGs take place in a unique mixture of online and real-world play that usually does not involve a persistent world, and are not necessarily multiplayer, making them different from MMOGs.

Music/Rhythm

Massively multiplayer online music/rhythm games (MMORGs), sometimes called massively multiplayer online dance games (MMODGs), are MMOGs that are also music video games. This idea was influenced by Dance Dance Revolution. Audition Online is another casual massively multiplayer online game, produced by T3 Entertainment.

Just Dance 2014 has a game mode called World Dance Floor, which is also structured like an MMORPG.
...of Adobe Flash and the popularity of Club Penguin, Growtopia, and The Sims Online.

14.4 Research

...than a bonding one, similar to a "third place". Therefore, MMOs have the capacity and the ability to serve as a community that effectively socializes users just like a coffee shop or pub, but conveniently in the comfort of their own home.[21]
[8] Runescape.com
[9] Kevin Kwang (12 July 2011). "Online games, social networks drive virtual goods". ZDNet. Retrieved 27 November 2014.