Cisco UCS B200 M5 Blade Server: Spec Sheet
OVERVIEW
Delivering performance, versatility, and density without compromise, the Cisco UCS B200 M5 Blade Server
addresses the broadest set of workloads, from IT and web infrastructure through distributed databases.
The enterprise-class Cisco UCS B200 M5 blade server extends the capabilities of Cisco’s Unified Computing
System portfolio in a half-width blade form factor.
DETAILED VIEWS
Blade Server Front View
Figure 2 is a detailed front view of the Cisco UCS B200 M5 Blade Server.
Notes:
1. A KVM I/O cable plugs into the console connector and can be ordered as a spare. The KVM I/O
cable is included with every Cisco UCS 5100 Series blade server chassis accessory kit.
Capability/Feature Description
Chassis The UCS B200 M5 Blade Server mounts in a Cisco UCS 5108 Series blade
server chassis or UCS Mini blade server chassis.
CPU One or two 2nd Generation Intel® Xeon® scalable family CPUs. Also note that
the B200 M5 Blade Server BIOS inherently enables support for Intel Advanced
Encryption Standard New Instructions (AES-NI) and does not have an option
to disable this feature.
Modular LOM One modular LOM (mLOM) connector at the rear of the blade for a Cisco
mLOM VIC adapter, which provides Ethernet or Fibre Channel over Ethernet
(FCoE) connectivity
Mezzanine Adapters (Rear) One rear mezzanine connector for various types of Cisco mezzanine adapters:
■ Cisco Mezzanine VIC Adapter, or
■ Cisco Mezzanine Port Expander, or
■ Cisco Mezzanine NVMe Storage Adapter, or
■ Cisco Mezzanine NVIDIA P6 GPU (can also be placed in the front connector)
Video The Cisco Integrated Management Controller (CIMC) provides video using the
Matrox G200e video/graphics controller.
Power subsystem Integrated in the Cisco UCS 5108 blade server chassis
Integrated management processor The built-in Cisco Integrated Management Controller (CIMC) GUI or CLI
interface enables monitoring of server inventory, health, and system event logs.
ACPI Advanced Configuration and Power Interface (ACPI) 4.0 Standard Supported.
UCSB-B200-M5 UCS B200 M5 Blade Server without CPU, memory, drive bays, HDD, VIC adapter,
or mezzanine adapters (ordered as a blade chassis option)
UCSB-B200-M5-U UCS B200 M5 Blade Server without CPU, memory, drive bays, HDD, VIC adapter,
or mezzanine adapters (ordered standalone)
UCSB-B200-M5-CH DISTI: UCS B200 M5 Blade Server without CPU, memory, drive bays, HDD, VIC
adapter, or mezzanine adapters
A base Cisco UCS B200 M5 blade server ordered in Table 2 does not include any components or
options. They must be selected during product ordering.
Please follow the steps on the following pages to order components such as the following, which
are required in a functional blade:
• CPUs
• Memory
• Cisco FlexStorage RAID controller with drive bays (or blank, for no local drive support)
• Drives
• Cisco adapters (such as the VIC 1340, VIC 1380, VIC 1440, VIC 1480, or Port Expander)
• Cisco UCS NVMe Flash Storage Adapters or GPUs.
UCSB-B200M5-RSV1A Qty 1 B200 M5 Blade Server with Qty 2 Intel 6252, Qty 12 64GB Memory, Qty 1 1340 MLOM, Qty 1 M2 240GB HW RAID
UCSB-B200M5-RSV1B Qty 1 B200 M5 Blade Server with Qty 2 Intel 6226R, Qty 12 64GB Memory, Qty 1 1440 MLOM, Qty 1 M2 240GB HW RAID
UCSB-B200M5-RSV1C Qty 1 B200 M5 Blade Server with Qty 2 Intel 6248, Qty 12 32GB Memory, Qty 1 1340 MLOM, Qty 1 M2 240GB HW RAID
UCSB-B200M5-RSV1D Qty 1 B200 M5 Blade Server with Qty 2 Intel 6254, Qty 12 64GB Memory, Qty 1 1340 MLOM, Qty 1 M2 240GB HW RAID
UCSB-B200M5-RSV1E Qty 1 B200 M5 Blade Server with Qty 2 Intel 5218R, Qty 12 64GB Memory, Qty 1 1440 MLOM, Qty 1 M2 240GB HW RAID
NOTE:
■ 3-5 days refers to business days, does not include delivery time, and is based on receipt
of the final Partner PO; subject to parts in stock at time of final PO.
■ Only orderable through Authorized Build To Order (BTO) Distributors – Tech Data
US.
NOTE:
The CPUs designated as Ix2xx are 2nd Generation Intel® Xeon® scalable processor
family CPUs.
Product ID (PID)1   Clock Freq (GHz)   Power (W)   Cache Size (MB)   Cores   UPI2 Links (GT/s)   Highest DDR4 DIMM Clock Support (MHz)3   Workload
Cisco Recommended Processors 4, 5 (2nd Generation Intel® Xeon® Processors)
UCS-CPU-I8276 2.2 165 38.50 28 3 x 10.4 2933 Oracle, SAP
UCS-CPU-I8260 2.4 165 35.75 24 3 x 10.4 2933 Microsoft Azure Stack
UCS-CPU-I6262V 1.9 135 33.00 24 3 x 10.4 2400 Virtual Server Infrastructure or VSI
UCS-CPU-I6248R 3.0 205 35.75 24 2 x 10.4 2933
UCS-CPU-I6248 2.5 150 27.50 20 3 x 10.4 2933 VDI, Oracle, SQL, Microsoft Azure Stack
UCS-CPU-I6238R 2.2 165 38.50 28 2 x 10.4 2933 Oracle, SAP (2-Socket TDI only), Microsoft Azure Stack
UCS-CPU-I6238 2.1 140 30.25 22 3 x 10.4 2933 SAP
UCS-CPU-I6234 3.3 130 24.75 8 3 x 10.4 2933 Oracle, SAP
UCS-CPU-I6230R 2.1 150 35.75 26 2 x 10.4 2933 Virtual Server Infrastructure, Data Protection, Big Data, Splunk, Microsoft Azure Stack
UCS-CPU-I6230 2.1 125 27.50 20 3 x 10.4 2933 Big Data, Virtualization
UCS-CPU-I5220R 2.2 150 35.75 24 2 x 10.4 2666 Virtual Server Infrastructure, Splunk, Microsoft Azure Stack
UCS-CPU-I5220 2.2 125 24.75 18 2 x 10.4 2666 HCI
UCS-CPU-I5218R 2.1 125 27.50 20 2 x 10.4 2666 Virtual Server Infrastructure, Data Protection, Big Data, Splunk, Scale-out Object Storage, Microsoft Azure Stack
UCS-CPU-I5218 2.3 125 22.00 16 2 x 10.4 2666 Virtualization, Microsoft Azure Stack, Splunk, Data Protection
UCS-CPU-I4216 2.1 100 22.00 16 2 x 9.6 2400 Data Protection, Scale Out Storage
UCS-CPU-I4214R 2.4 100 16.50 12 2 x 9.6 2400 Data Protection, Splunk, Scale-out Object Storage, Microsoft Azure Stack
UCS-CPU-I4214 2.2 85 16.50 12 2 x 9.6 2400 Data Protection, Scale Out Storage
Product ID (PID)1   Clock Freq (GHz)   Power (W)   Cache Size (MB)   Cores   UPI2 Links (GT/s)   Highest DDR4 DIMM Clock Support (MHz)3   Workload
UCS-CPU-I6248R 3.0 205 35.75 24 2 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6248 2.5 150 27.50 20 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6246R 3.4 205 35.75 16 2 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6246 3.3 165 24.75 12 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6244 3.6 150 24.75 8 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6242R 3.1 205 35.75 20 2 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6242 2.8 150 22.00 16 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6240R 2.4 165 35.75 24 2 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6240Y 2.6 150 24.75 18/14/8 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6240L 2.6 150 24.75 18 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6240 2.6 150 24.75 18 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6238R 2.2 165 38.50 28 2 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6238L 2.1 140 30.25 22 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6238 2.1 140 30.25 22 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6234 3.3 130 24.75 8 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6230R 2.1 150 35.75 26 2 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6230N 2.3 125 27.50 20 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6230 2.1 125 27.50 20 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6226R 2.9 150 22.00 16 2 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6226 2.7 125 19.25 12 3 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I6222V 1.8 115 27.50 20 3 x 10.4 2400 2nd Gen Intel® Xeon®
5000 Series Processor
UCS-CPU-I5222 3.8 125 16.50 4 2 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I5220S 2.6 125 19.25 18 2 x 10.4 2666 2nd Gen Intel® Xeon®
UCS-CPU-I5220R 2.2 150 35.75 24 2 x 10.4 2666 2nd Gen Intel® Xeon®
UCS-CPU-I5220 2.2 125 24.75 18 2 x 10.4 2666 2nd Gen Intel® Xeon®
UCS-CPU-I5218R 2.1 125 27.50 20 2 x 10.4 2666 2nd Gen Intel® Xeon®
UCS-CPU-I5218B 2.3 125 22.00 16 2 x 10.4 2933 2nd Gen Intel® Xeon®
UCS-CPU-I5218N 2.3 105 22.00 16 2 x 10.4 2666 2nd Gen Intel® Xeon®
UCS-CPU-I5218 2.3 125 22.00 16 2 x 10.4 2666 2nd Gen Intel® Xeon®
UCS-CPU-I5217 3.0 115 11.00 8 2 x 10.4 2666 2nd Gen Intel® Xeon®
UCS-CPU-I5215L 2.5 85 13.75 10 2 x 10.4 2666 2nd Gen Intel® Xeon®
UCS-CPU-I5215 2.5 85 13.75 10 2 x 10.4 2666 2nd Gen Intel® Xeon®
4000 Series Processor
UCS-CPU-I4216 2.1 100 22.00 16 2 x 9.6 2400 2nd Gen Intel® Xeon®
UCS-CPU-I4215R 3.2 130 11.00 8 2 x 9.6 2400 2nd Gen Intel® Xeon®
UCS-CPU-I4215 2.5 85 11.00 8 2 x 9.6 2400 2nd Gen Intel® Xeon®
UCS-CPU-I4214Y 2.2 85 16.50 12/10/8 2 x 9.6 2400 2nd Gen Intel® Xeon®
UCS-CPU-I4214R 2.4 100 16.50 12 2 x 9.6 2400 2nd Gen Intel® Xeon®
UCS-CPU-I4214 2.2 85 16.50 12 2 x 9.6 2400 2nd Gen Intel® Xeon®
UCS-CPU-I4210R 2.4 100 13.75 10 2 x 9.6 2400 2nd Gen Intel® Xeon®
UCS-CPU-I4210 2.2 85 13.75 10 2 x 9.6 2400 2nd Gen Intel® Xeon®
UCS-CPU-I4208 2.1 85 11.00 8 2 x 9.6 2400 2nd Gen Intel® Xeon®
3000 Series Processor
UCS-CPU-I3206R 1.9 85 11.00 8 2 x 9.6 2133 2nd Gen Intel® Xeon®
UCS-CPU-I3204 1.9 85 8.25 6 2 x 9.6 2133 2nd Gen Intel® Xeon®
Notes:
1. For additional details on ambient temperature limitations or configuration restrictions, see Table 3a on page 15.
2. UPI = Ultra Path Interconnect
3. If higher or lower speed DIMMs are selected than what is shown in the table for a given CPU, the DIMMs will be
clocked at the lower of the CPU's supported DIMM clock and the DIMM's rated clock (a short illustrative sketch follows these notes).
4. For details on memory support for processor classes and CPU modes, see Memory Support for CPU Classes and
CPU Modes on page 53
5. For 2nd Generation Intel® Xeon® Scalable Processors, UCSM 4.0(4) software release is required.
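Note 3 above reduces to taking the minimum of the CPU's supported DIMM clock and the DIMM's rated clock. The short Python sketch below is illustrative only; it is not Cisco tooling, and the function name is invented for this example. The PIDs in the comments are taken from Table 4 and Table 6.

    def effective_dimm_clock(cpu_max_dimm_mhz: int, dimm_rated_mhz: int) -> int:
        # The DIMMs run at the lower of the CPU's supported DIMM clock and the DIMM's rated clock.
        return min(cpu_max_dimm_mhz, dimm_rated_mhz)

    print(effective_dimm_clock(2666, 2933))  # UCS-CPU-I5218 (2666-MHz support) with 2933-MHz DIMMs -> 2666
    print(effective_dimm_clock(2933, 2933))  # UCS-CPU-I6248 (2933-MHz support) with 2933-MHz DIMMs -> 2933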
CAUTION: In Table 5, systems configured with the processors shown must adhere to
the ambient inlet temperature thresholds specified. If they do not, a fan fault or executing
workloads with extensive use of heavy instruction sets such as Intel® Advanced
Vector Extensions 512 (Intel® AVX-512) may assert thermal and/or performance
faults with an associated event recorded in the System Event Log (SEL). Table 5 lists
ambient temperature limitations below 35°C (95°F) and configuration restrictions
to ensure proper cooling and avoid excessive processor throttling, which may
impact system performance.
CPU PID   Processor Thermal Design Power (TDP)   Blade Slot   Ambient Temperature Limitation   Configuration Restriction
UCS-CPU-I8260Y, UCS-CPU-I6252N, UCS-CPU-I6240Y, UCS-CPU-I6230N   Any Y or N SKUs   Any   32°C (90°F)
UCS-CPU-I8280L, UCS-CPU-I8280, UCS-CPU-I8270, UCS-CPU-I8268, UCS-CPU-8180M, UCS-CPU-8180, UCS-CPU-8168, UCS-CPU-I6254, UCS-CPU-6154   200 W or 205 W   Any   32°C (90°F)   Front Mezzanine GPU
UCS-CPU-I6246, UCS-CPU-I6244, UCS-CPU-I5222   Frequency optimized 150/165/125 W   Any   32°C (90°F)
UCS-CPU-I6258R, UCS-CPU-I6248R, UCS-CPU-6246R, UCS-CPU-6242R   205 W R SKUs   1 through 7   25°C (77°F)
UCS-CPU-I6258R, UCS-CPU-I6248R, UCS-CPU-6246R, UCS-CPU-6242R   205 W R SKUs   8   22°C (72°F)
Supported Configurations
— Choose one CPU from any one of the rows of Table 4 Available CPUs, page 10
— Choose two identical CPUs from any one of the rows of Table 4 Available CPUs, page 10
NOTE: See CHOOSE MEMORY on page 15 for details on the compatibility of CPUs
and DIMM speeds.
Memory is organized with six memory channels per CPU, with up to two DIMMs per channel, as
shown in Figure 3.
Select the memory configuration and whether or not you want the memory mirroring option.
The supported memory DIMMs, PMEMs, PMEM Memory Mode, and the mirroring option are listed
in Table 6.
Product ID (PID)   PID Description   Voltage   Ranks/DIMM
2933-MHz DIMMs
UCS-ML-256G8RT-H 256 GB DDR4-2933-MHz LRDIMM/8Rx4 1.2 V 8
UCS-ML-128G4RT-H 128 GB DDR4-2933-MHz LRDIMM/4Rx4 1.2 V 4
UCS-ML-X64G4RT-H 64 GB DDR4-2933-MHz LRDIMM/4Rx4 1.2 V 4
UCS-MR-X64G2RT-H 64 GB DDR4-2933-MHz RDIMM/2Rx4 1.2 V 2
UCS-MR-X32G2RT-H 32 GB DDR4-2933-MHz RDIMM/2Rx4 1.2 V 2
UCS-MR-X16G1RT-H 16 GB DDR4-2933-MHz RDIMM/1Rx4 1.2 V 1
Intel® Optane™ Persistent Memory Product
UCS-MP-128GS-A0 Intel® Optane™ Persistent Memory, 128GB, 2666-MHz
UCS-MP-256GS-A0 Intel® Optane™ Persistent Memory, 256GB, 2666-MHz
UCS-MP-512GS-A0 Intel® Optane™ Persistent Memory, 512GB, 2666-MHz
Intel® Optane™ Persistent Memory Product Operational Modes
UCS-DCPMM-AD App Direct Mode
UCS-DCPMM-MM Memory Mode
Memory Mirroring Option1
N01-MMIRROR Memory mirroring option
Notes:
1. For Memory Configuration and Mirroring, please refer to Memory Configuration and Mirroring on page
50 and Memory Support for CPU Classes and CPU Modes on page 53.
NOTE:
■ Based on the Intel tech spec, the DIMMs below can be used with both the 1st Generation Intel®
Xeon® scalable processor family CPUs and the 2nd Generation Intel® Xeon® scalable
processor family CPUs
UCS-MR-X16G1RT-H
UCS-MR-X32G2RT-H
UCS-ML-X64G4RT-H
■ Based on the Intel tech spec, the DIMMs below can be used only with 2nd Generation
Intel® Xeon® scalable processor family CPUs, not with 1st Generation Intel® Xeon® scalable processor
family CPUs.
UCS-ML-256G8RT-H
UCS-ML-128G4RT-H
UCS-MR-X64G2RT-H
■ The B200 M5 server supports the following memory reliability, availability, and serviceability
(RAS) modes:
— Independent Channel Mode
— Mirrored Channel Mode
■ Below are the system level RAS Mode combination limitations:
— Mixing of Independent and Lockstep channel mode is not allowed per platform.
— Mixing of Non-Mirrored and Mirrored mode is not allowed per platform.
— Mixing of Lockstep and Mirrored mode is not allowed per platform.
— Do not mix RDIMMs, LRDIMMs, or TSV-RDIMMs.
— Single-rank DIMMs can be mixed with dual-rank DIMMs in the same channel
■ For best performance, observe the following:
— DIMMs with different timing parameters can be installed on different slots within the
same channel, but only timings that support the slowest DIMM will be applied to all.
As a consequence, faster DIMMs will be operated at timings supported by the slowest
DIMM populated.
— When one DIMM is used, it must be populated in DIMM slot 1 (farthest away from the
CPU) of a given channel.
— When single- or dual-rank DIMMs are populated in two-DIMMs-per-channel (2DPC)
configurations, always populate the higher-rank DIMM first (starting from
the farthest slot). For a 2DPC example, first populate dual-rank DIMMs in DIMM
slot 1. Then populate single-rank DIMMs in DIMM slot 2.
■ DIMMs for CPU 1 and CPU 2 (when populated) must always be configured identically.
■ Cisco memory from previous generation servers (DDR3 and DDR4) is not compatible with the
UCS B200 M5 Blade.
■ Memory can be configured in any number of DIMMs as pairs, although for optimal
performance, see the document at the following link:
https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-
c-series-rack-servers/memory-guide-c220-c240-b200-m5.pdf.
■ For additional information, refer to Memory Configuration and Mirroring on page 50.
■ For detailed Intel PMEM configurations, refer to the Cisco UCS B200 M5 Server Installation
Guide
See Table 7 and Table 8 for information on DIMM speeds with Intel Scalable Processors.
Table 7 2933-MHz DIMM Memory Speeds with 2nd Generation Intel® Xeon® Scalable Processors
Table 8 2666-MHz DIMM Memory Speeds with Intel® Xeon® Scalable Processors
App Direct Mode: PMEM operates as a solid-state disk storage device. Data is saved and is
non-volatile. Both PMEM and DIMM capacity counts towards CPU tiering
(both PMEM and DIMM capacities count towards the CPU capacity limit)
Memory Mode:1 PMEM operates as a 100% memory module. Data is volatile and DRAM acts
as a cache for PMEMs. Only PMEM capacity counts towards CPU tiering
(only the PMEM capacity counts towards the CPU capacity limit). This is
the factory default mode.
Mix Mode: DRAM as cache. Only PMEM capacity counts towards CPU tiering (only the
PMEM capacity counts towards the CPU capacity limit).
Notes:
1. For Memory Mode, the Intel-recommended DIMM-to-PMEM capacity ratio in the same CPU channel is from 1:2 to
1:16. So if you use a 128 GB DIMM in a channel, you could use a 512 GB PMEM for a 1:4 capacity ratio. If you use
a 32 GB DIMM in a channel, you could use a 512 GB PMEM for a 1:16 capacity ratio. Several other
combinations are possible.
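As a quick check of the ratio rule in note 1, the following Python sketch is illustrative only (not Cisco tooling; the function name is invented for this example). The capacities mirror the DIMM and PMEM sizes listed in Table 6.

    def memory_mode_ratio_ok(dimm_gb: int, pmem_gb: int) -> bool:
        # Intel-recommended DIMM-to-PMEM capacity ratio per channel: 1:2 to 1:16.
        ratio = pmem_gb / dimm_gb
        return 2 <= ratio <= 16

    print(memory_mode_ratio_ok(128, 512))  # True  (1:4, as in the example above)
    print(memory_mode_ratio_ok(32, 512))   # True  (1:16)
    print(memory_mode_ratio_ok(16, 512))   # False (1:32, outside the recommended range)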
Table 10 2nd Generation Intel® Xeon® Scalable Processor DIMM and PMEM1 Physical Configurations (dual socket)
CPU 1
DIMM-to-PMEM Count   iMC1: F2 F1 E2 E1 D2 D1   iMC0: C2 C1 B2 B1 A2 A1
6 to 4   DIMM PMEM DIMM PMEM DIMM DIMM PMEM DIMM PMEM DIMM
6 to 6   PMEM DIMM PMEM DIMM PMEM DIMM PMEM DIMM PMEM DIMM PMEM DIMM
CPU 2
DIMM-to-PMEM Count   iMC1: M2 M1 L2 L1 K2 K1   iMC0: J2 J1 H2 H1 G2 G1
6 to 4   DIMM PMEM DIMM PMEM DIMM DIMM PMEM DIMM PMEM DIMM
6 to 6   PMEM DIMM PMEM DIMM PMEM DIMM PMEM DIMM PMEM DIMM PMEM DIMM
Notes:
1. All systems must be fully populated with two CPUs when using PMEMs at this time.
NOTE: There are three possible memory configurations for each CPU when
combining DIMMs and PMEMs, and the configurations must be the same for each CPU:
• 6 DIMMs and 2 PMEMs, or
• 6 DIMMs and 4 PMEMs, or
• 6 DIMMs and 6 PMEMs
UCSB-MLOM-40G-04 Cisco UCS VIC 1440 modular LOM for blade servers mLOM
UCSB-MLOM-40G-03 Cisco UCS VIC 1340 modular LOM for blade servers mLOM
VIC 1440 mLOM   40(1)   40(1)   40(1)   20 (20 Gb/s)   Yes   Yes   Yes   Yes   Yes
VIC 1340 mLOM   40(1)   40(1)   40(1)   20 (20 Gb/s)   Yes   Yes   Yes   Yes   Yes
Notes:
1. These configurations implement two 2x10 Gbps port-channels
Cisco developed the 1300 and 1400 Series Virtual Interface Cards (VICs) to provide flexibility to
create multiple NIC and HBA devices. The VIC features are listed here:
The mLOM VIC on the UCS B200 M5 enables connectivity to the Fabric Interconnect either
through the Fabric Extender (FEX) or directly using the UCS 6324 Fabric Connector (UCS Mini) on
the UCS 5108 Blade Chassis.
The recommended UCS Manager (UCSM) release for the B200 M5 is UCSM 3.2(2), due to support
of higher wattage CPUs. The Cisco UCS 64xx Fabric Interconnect and/or VIC 1440 requires UCSM
4.0(1) or greater.
NOTE:
• This is a new mandatory option for standalone blades starting with the UCS
B200 M5.
• When the UCS B200 M5 is configured inside of a chassis in the CCW ordering
tool, the UCSM software version is selected at the chassis level. The software
option will not be available under the UCS B200 M5 in that case.
• The recommended UCS releases for the UCS B200 M5 are UCSM 3.2(2) and
UCSM 4.0. These releases support higher wattage CPUs. FI 6454 and/or VIC
1400 require release UCSM 4.0(1) or greater.
Notes:
1. If selected, you cannot also select DIMM PID UCS-ML-128G4RT-H.
UCSB-MLOM-PT-01 Cisco UCS Port Expander Card for VIC 1 or 2 CPUs Rear Mezzanine
UCSB-VIC-M84-4P Cisco UCS VIC 1480 mezzanine adapter 2 CPUs required Rear Mezzanine
UCSB-VIC-M83-8P Cisco UCS VIC 1380 mezzanine adapter 2 CPUs required Rear Mezzanine
Notes:
1. For the GPU P6, a maximum of two cards per node is supported.
Supported Configurations
See Table 15 for aggregate bandwidths with various rear mezzanine cards installed.
2x 2408   2x 2304V2   2x 2208XP   2x 2204XP   2x 62xx   2x 6324   2x 6332   2x 6332-16UP   2x 6400
Port Expander + VIC 1340   80(2)   80(1)   80(2)   40(3) (40 Gbps)   Yes   Yes   Yes   Yes   Yes
VIC 1380 mezz   80(6)   80(6)   80(6)   40 (40 Gbps)   Yes   Yes   Yes   Yes   Yes
VIC 1480 mezz   80(6)   80(6)   80(6)   40 (40 Gbps)   Yes   Yes   Yes   Yes   Yes
Flash Card   40(3)   40(3)   40(3)   20 (20 Gbps)   Yes   Yes   Yes   Yes   Yes
Notes:
1. Uses a dual native 40G interface
2. Two 4x10Gbps port-channeled
3. Two 2x10Gbps port-channeled
4. Supported starting with UCSM 4.1(2). The maximum single-flow is 25 Gbps with an aggregate of 40
Gbps. To avoid IOM/Fabric Extender transient drops due to a speed mismatch of 40 Gbps towards
the server and 25 Gbps towards the fabric interconnect (FI), vNIC rate-limiting to 25 Gbps is
recommended.
5. If operating in 4x10 mode, bandwidth drops down to 40Gbps (two 2x10 G port-channeled)
6. Four 2x10 Gbps port-channeled
NOTE: A front GPU cannot be used with CPUs that dissipate greater than 165 W.
Storage Controller1 (required for installing local drives in the UCS B200 M5)
UCSB-MRAID12G2,3 Cisco FlexStorage 12G SAS RAID controller with drive Front Mezzanine
bays
UCSB-MRAID12G-HE4,3 Cisco FlexStorage 12G SAS RAID controller with 2 GB Front Mezzanine
flash-backed write cache with drive bays
Notes:
1. A Storage Controller is required for installing local drives (HDD, SSD, NVMe) on the B200 M5.
2. For hard disk drives (HDDs) or solid-state drives (SSDs), a Cisco FlexStorage 12G SAS RAID Controller
is required.
3. The Cisco FlexStorage 12G SAS RAID Controller is based on the LSI 3108 ROC and runs the LSI MegaRAID
software stack. It provides 12 Gbps RAID functionality for SAS/SATA SSD/HDD and has RAID 0, 1, and
JBOD support. If the supercapacitor needs to be replaced, it can be done by ordering
UCSB-MRAID-SC=. See the Installation Document for instructions.
4. The Cisco FlexStorage 12G SAS RAID controller with 2 GB flash-backed write cache is based on the
LSI 3108 ROC and runs the LSI MegaRAID software stack. It provides 12 Gbps RAID functionality for
SAS/SATA HDD/SSD and has RAID 0, 1, 5, and 6 support. If the supercapacitor needs to be replaced, it
can be done by ordering UCSB-MRAID-SC=. See the Installation Document for instructions.
The flash-backed write cache provides RAID controller cache protection using NAND flash memory and a
supercapacitor. In the event of a power or server failure, cached data is automatically transferred from the
RAID controller DRAM write cache to flash. Once power is restored, the data in the NAND flash is copied back
into the DRAM write cache until it can be flushed to the disk drives.
5. For NVMe, the Cisco FlexStorage NVMe Passthrough module is required.
6. For servers that do not need local storage, and where no storage controllers are included, storage
blanking panels are auto-included as a part of the ordering configuration rules. In order for the UCS
B200 M5 to function properly and not overheat, drive blanks must be installed if no storage
controller or GPU is used.
7. For the GPU P6, a maximum of two cards per node is supported.
Select one or two drives from the list of supported drives available in Table 16.
Product ID (PID)   Description   Drive Type   Speed   Performance/Endurance/Value   Size
HDD1
UCS-HD900G15K12G 900 GB 12G SAS 15K RPM SFF HDD SAS 15K RPM N/A 900 GB
UCS-HD600G15K12G 600 GB 12G SAS 15K RPM SFF HDD SAS 15K RPM N/A 600 GB
UCS-HD300G15K12G 300 GB 12G SAS 15K RPM SFF HDD SAS 15K RPM N/A 300 GB
UCS-HD24TB10KS4K 2.4 TB 12G SAS 10K RPM SFF HDD (4K)2 SAS 10K RPM N/A 2400 GB
UCS-HD18TB10KS4K 1.8 TB 12G SAS 10K RPM SFF HDD (4K)2 SAS 10K RPM N/A 1800 GB
UCS-HD12TB10K12G 1.2 TB 12G SAS 10K RPM SFF HDD SAS 10K RPM N/A 1200 GB
UCS-HD600G10K12G 600 GB 12G SAS 10K RPM SFF HDD SAS 10K RPM N/A 600 GB
UCS-HD300G10K12G 300 GB 12G SAS 10K RPM SFF HDD SAS 10K RPM N/A 300 GB
SSD1
Enterprise Performance (high endurance, supports up to 10X or 3X DWPD (drive writes per day))
UCS-SD16TH3-EP   1.6 TB 2.5 inch Enterprise performance 12G SAS SSD (3X DWPD)   SAS   12G   Ent. Perf 3X   1600 GB
UCS-SD32TH3-EP   3.2 TB 2.5in Enterprise performance 12G SAS SSD (3X DWPD)   SAS   12G   Ent. Perf 3X   3200 GB
UCS-SD800GKB3X-EP   800GB 2.5in Enterprise Performance 12G SAS SSD (3X endurance)   SAS   12G   Ent. Perf 3X   800 GB
UCS-SD16TKB3X-EP   1.6TB 2.5in Enterprise Performance 12G SAS SSD (3X endurance)   SAS   12G   Ent. Perf 3X   1600 GB
UCS-SD32TKB3X-EP   3.2TB 2.5in Enterprise Performance 12G SAS SSD (3X endurance)   SAS   12G   Ent. Perf 3X   3200 GB
Enterprise Value SSDs (Low endurance, supports up to 1X DWPD (drive writes per day))
UCS-SD960GKB1X-EV   960GB 2.5 inch Enterprise Value 12G SAS SSD   SAS   12G   Ent. Value   960 GB
UCS-SD19TKB1X-EV   1.9TB 2.5 inch Enterprise Value 12G SAS SSD   SAS   12G   Ent. Value   1900 GB
UCS-SD38TKB1X-EV   3.8TB 2.5 inch Enterprise Value 12G SAS SSD   SAS   12G   Ent. Value   3800 GB
UCS-SD76TKB1X-EV   7.6TB 2.5 inch Enterprise Value 12G SAS SSD   SAS   12G   Ent. Value   7600 GB
UCS-SD15TKB1X-EV   15.3TB 2.5 inch Enterprise Value 12G SAS SSD   SAS   12G   Ent. Value   15300 GB
UCS-SD960GBKS4-EV   960 GB 2.5 inch Enterprise Value 6G SATA SSD (Samsung PM863A/PM883)   SATA   6G   Ent. Value   960 GB
UCS-SD38TBKS4-EV   3.8 TB 2.5 inch Enterprise Value 6G SATA SSD (Samsung PM863A/PM883)   SATA   6G   Ent. Value   3800 GB
UCS-SD120GBMS4-EV   120 GB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)   SATA   6G   Ent. Value   120 GB
UCS-SD240GBMS4-EV   240 GB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)   SATA   6G   Ent. Value   240 GB
UCS-SD480GBMS4-EV   480 GB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)   SATA   6G   Ent. Value   480 GB
UCS-SD960GBMS4-EV   960 GB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)   SATA   6G   Ent. Value   960 GB
UCS-SD16TBMS4-EV   1.6 TB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)   SATA   6G   Ent. Value   1600 GB
UCS-SD19TBMS4-EV   1.9 TB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)   SATA   6G   Ent. Value   1900 GB
UCS-SD38TBMS4-EV   3.8 TB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)   SATA   6G   Ent. Value   3800 GB
UCS-SD76TSB61X-EV   7.6 TB 2.5 inch Enterprise Value 6G SATA SSD   SATA   6G   Ent. Value   7600 GB
UCS-SD76TBMS4-EV   7.6 TB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)   SATA   6G   Ent. Value   7600 GB
UCS-SD480GBIS6-EV   480 GB 2.5 inch Enterprise Value 6G SATA SSD (Intel S4500/S4150)   SATA   6G   Ent. Value   480 GB
UCS-SD960GBIS6-EV   960 GB 2.5 inch Enterprise Value 6G SATA SSD (Intel S4500/S4150)   SATA   6G   Ent. Value   960 GB
UCS-SD38TBIS6-EV   3.8 TB 2.5 inch Enterprise Value 6G SATA SSD (Intel S4500/S4150)   SATA   6G   Ent. Value   3800 GB
UCS-HD600G15K9   600 GB 12G SAS 15K RPM SFF HDD (SED) FIPS 140-2   SAS   15K RPM   N/A   600 GB
UCS-HD18G10K9   1.8 TB 12G SAS 10K RPM SFF HDD (4K format, SED) FIPS 140-2   SAS   10K RPM   N/A   1800 GB
UCS-HD24T10BNK9   2.4 TB 12G SAS 10K RPM SFF HDD (4K format, SED) FIPS 140-2   SAS   10K RPM   N/A   2400 GB
UCS-SD800GBKNK9   800GB Enterprise Performance SAS SSD (3X FWPD, SED)   SAS   Ent. Perf 3X   800 GB
UCS-SD960GBKNK9   960GB Enterprise Value SAS SSD (1X FWPD, SED)   SAS   Ent. Value 1X   960 GB
UCS-SD38TBKNK9   3.8TB Enterprise Value SAS SSD (1X FWPD, SED)   SAS   Ent. Value 1X   3.8 TB
UCS-SD16TBKNK9   1.6TB Enterprise performance SAS SSD (3X FWPD, SED)   SAS   Ent. Perf 3X   1.6 TB
NVMe3,4,5
UCSB-NVMEHW-H800   Cisco 2.5" U.2 800 GB HGST SN200 NVMe High Perf. High Endurance   NVMe   High Perf   High Perf/High Endurance   800 GB
UCSB-NVMEHW-H6400   Cisco 2.5" U.2 6.4 TB HGST SN200 NVMe High Perf. High Endurance   NVMe   High Perf   High Perf/High Endurance   6400 GB
UCSB-NVMEHW-H7680   Cisco 2.5" U.2 7.7 TB HGST SN200 NVMe High Perf. Value Endurance   NVMe   High Perf   High Perf/Value Endurance   7680 GB
UCSB-NVMEHW-I8000   Cisco 2.5" U.2 8TB Intel P4510 NVMe High Perf. Value Endurance   NVMe   High Perf   High Perf/Value Endurance   8000 GB
UCSB-NVMEXPB-I375   Cisco 2.5in U.2 375GB Intel P4800 NVMe Med. Perf   NVMe   Med Perf   Med Perf   375 GB
UCSB-NVMEXP-I750   750 GB 2.5in Intel Optane NVMe Extreme Perf   NVMe   Extrm Perf   Extreme Perf   750 GB
UCSB-NVME2H-I1000   Cisco 2.5" U.2 1.0 TB Intel P4510 NVMe High Perf. Value Endurance   NVMe   High Perf   High Perf/Value Endurance   1000 GB
UCSB-NVME2H-I1600   Cisco 2.5" U.2 1.6TB Intel P4610 NVMe High Perf. High Endurance   NVMe   High Perf   High Perf/High Endurance   1600 GB
UCSB-NVME2H-I2TBV   Cisco 2.5" U.2 2.0TB Intel P4510 NVMe High Perf. Value Endurance   NVMe   High Perf   High Perf/Value Endurance   2000 GB
UCSB-NVME2H-I3200   Cisco 2.5" U.2 3.2TB Intel P4610 NVMe High Perf. High Endurance   NVMe   High Perf   High Perf/High Endurance   3200 GB
UCSB-NVME2H-I4000   Cisco 2.5" U.2 4.0TB Intel P4510 NVMe High Perf. Value Endurance   NVMe   High Perf   High Perf/Value Endurance   4000 GB
UCSB-NVMHG-W1600   1.6TB 2.5in U.2 WD SN840 NVMe Extreme Perf. High Endurance   NVMe   Extrm Perf   Extrm Perf/High Endurance   1.6 TB
UCSB-NVMHG-W3200   3.2TB 2.5in U.2 WD SN840 NVMe Extreme Perf. High Endurance   NVMe   Extrm Perf   Extrm Perf/High Endurance   3.2 TB
UCSB-NVMHG-W6400   6.4TB 2.5in U.2 WD SN840 NVMe Extreme Perf. High Endurance   NVMe   Extrm Perf   Extrm Perf/High Endurance   6.4 TB
UCSB-NVMHG-W7600   7.6TB 2.5in U.2 WD SN840 NVMe Extreme Perf. Value Endurance   NVMe   Extrm Perf   Extrm Perf/High Endurance   7.6 TB
UCSB-NVMHG-W15300   15.3TB 2.5in U.2 WD SN840 NVMe Extreme Perf. High Endurance   NVMe   Extrm Perf   Extrm Perf/High Endurance   15.3 TB
Notes:
1. HDDs and SSDs require either of the following storage controllers in the front mezzanine slot:
UCSB-MRAID12G, or
UCSB-MRAID12G-HE
2. For 4K native (4Kn) drives:
VMWare ESXi 6.0 does not support 4Kn Drives. 4Kn drive support with VMWare is available in release
6.7 and later.
4K native drives require UEFI Boot
3. NVMe drives require the following storage controller in the front mezzanine slot:
UCSB-LSTOR-PT
4. For HDD or SSD drives to be in a RAID group, two identical HDDs or SSDs must be used in the group.
5. If HDD or SSD are in JBOD Mode, the drives do not need to be identical.
NOTE: Cisco uses solid state drives (SSDs) from a number of vendors. All solid state
drives (SSDs) are subject to physical write limits and have varying maximum usage
limitation specifications set by the manufacturer. Cisco will not replace any solid
state drives (SSDs) that have exceeded any maximum usage specifications set by
Cisco or the manufacturer, as determined solely by Cisco.
Table 17 PIDs for Secure Digital High-Capacity Card(s) and Modular Adapter
Notes:
1. The SD modular adapter (PID UCS-MSTOR-SD) is auto-included in CCW and is not selectable.
NOTE: Starting from vSphere 8.0, SD cards/USB media as a standalone boot device
will not be supported by VMware. For more information, please refer to the VMware
KB article: https://kb.vmware.com/s/article/85685
Supported Configurations
(3) If you select SDHC cards, you cannot select any M.2 SATA SSD drive.
Each mini storage carrier or boot-optimized RAID controller can accommodate up to two SATA
M.2 SSDs shown in Table 18.
UCS-MSTOR-M2 Mini Storage Carrier for M.2 SATA (holds up to 2 M.2 SATA SSDs)
UCS-M2-HWRAID Cisco Boot optimized M.2 RAID controller (holds up to 2 M.2 SATA SSDs)
NOTE:
■ The UCS-M2-HWRAID boot-optimized RAID controller supports RAID 1 and JBOD mode
■ The UCS-M2-HWRAID controller is available only with 240 GB and 960 GB M.2 SSDs.
■ Cisco IMC/UCSM is supported for configuring volumes and monitoring the controller
and installed SATA M.2 drives
■ The minimum versions of Cisco IMC and Cisco UCS Manager that support this controller
are 4.0(4) and later. The name of the controller in the software is MSTOR-RAID
■ The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported
■ Hot-plug replacement is not supported. The server must be powered off.
■ The boot-optimized RAID controller is not supported when the server is used as a
compute node in HyperFlex configurations
■ Order either the Mini Storage carrier or the Boot-Optimized RAID controller from Table 19.
— Choose the UCS-MSTOR-M2 mini storage carrier for controlling the M.2 SATA drives
with no RAID control.
— Choose the UCS-M2-HWRAID Boot-Optimized RAID controller for hardware RAID
across the two internal SATA M.2 drives. The Boot-Optimized RAID controller holds
up to 2 matching M.2 SATA drives.
■ Order up to two matching M.2 SATA SSDs from Table 18.
NOTE: The Boot-Optimized RAID controller supports VMWare, Windows and Linux
Operating Systems
Caveats
When ordering two M.2 SATA drives with embedded software RAID, the maximum number of
internal SATA drives supported is six. To support more than six internal drives, a Cisco 12G
RAID Controller or a Cisco 12G SAS HBA must be ordered.
NOTE:
1. The TPM module used in this system conforms to TPM v1.2 and 2.0, as defined by the Trusted
Computing Group (TCG). It is also SPI-based.
■ 2. TPM installation is supported after-factory. However, a TPM installs with a one-way screw
and cannot be replaced, upgraded, or moved to another server. If a server with a TPM is
returned, the replacement server must be ordered with a new TPM. If there is no existing
TPM in the server, you can install TPM 2.0. Refer to the following document for Installation
location and instructions: Cisco UCS B200 M5 Server Installation Guide.
NOTE: A clearance of 0.950 inches (24.1 mm) is required for the USB device to be
inserted and removed (see Figure 4).
The USB drive listed in Table 21 has the correct clearance. If you choose your own
USB drive, it must have the required clearance.
Select
N1K-VSG-UCS-BUN Nexus 1000V Adv Edition for vSphere Paper License Qty 1
NOTE: If you must order a quantity greater than 1 of UCS-MDMGR-1S, you need to reference the UCS
Central Per Server Data Sheet to order the standalone PIDs: UCS-MDMGR-LIC= or UCS-MDMGR-1DMN=
VMware vCenter
MSWS-19-ST16C-NS Windows Server 2019 Standard (16 Cores/2 VMs) - No Cisco SVC
Red Hat
RHEL-2S2V-1A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 1-Yr Support Req
RHEL-2S2V-3A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 3-Yr Support Req
RHEL-2S2V-5A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 5-Yr Support Req
RHEL-VDC-2SUV-1A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr Supp Req
RHEL-VDC-2SUV-3A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr Supp Req
RHEL-VDC-2SUV-5A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 5 Yr Supp Req
RHEL-2S2V-1S Red Hat Enterprise Linux (1-2 CPU,1-2 VN); Prem 1-Yr SnS
RHEL-2S2V-3S Red Hat Enterprise Linux (1-2 CPU,1-2 VN); Prem 3-Yr SnS
RHEL-VDC-2SUV-1S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr SnS Reqd
RHEL-VDC-2SUV-3S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr SnS Reqd
RHEL-SAP-2S2V-1S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 1-Yr SnS Reqd
RHEL-SAP-2S2V-3S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 3-Yr SnS Reqd
VMware
SUSE
SLES-2S2V-1A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 1-Yr Support Req
SLES-2SUV-1A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); 1-Yr Support Req
SLES-2S2V-3A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 3-Yr Support Req
SLES-2SUV-3A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); 3-Yr Support Req
SLES-2S2V-5A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 5-Yr Support Req
SLES-2SUV-5A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); 5-Yr Support Req
SLES-2S2V-1S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 1-Yr SnS
SLES-2SUV-1S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); Prio 1-Yr SnS
SLES-2S2V-3S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 3-Yr SnS
SLES-2SUV-3S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); Prio 3-Yr SnS
SLES-2S2V-5S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 5-Yr SnS
SLES-2SUV-5S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); Prio 5-Yr SnS
SLES-2S-HA-1S SUSE Linux High Availability Ext (1-2 CPU); 1yr SnS
SLES-2S-HA-3S SUSE Linux High Availability Ext (1-2 CPU); 3yr SnS
SLES-2S-HA-5S SUSE Linux High Availability Ext (1-2 CPU); 5yr SnS
SLES-2S-GC-1S SUSE Linux GEO Clustering for HA (1-2 CPU); 1yr Sns
SLES-2S-GC-3S SUSE Linux GEO Clustering for HA (1-2 CPU); 3yr SnS
SLES-2S-GC-5S SUSE Linux GEO Clustering for HA (1-2 CPU); 5yr SnS
SLES-2S-LP-1S SUSE Linux Live Patching Add-on (1-2 CPU); 1yr SnS Required
SLES-2S-LP-3S SUSE Linux Live Patching Add-on (1-2 CPU); 3yr SnS Required
SLES-2S-LP-1A SUSE Linux Live Patching Add-on (1-2 CPU); 1yr Support Req
SLES-2S-LP-3A SUSE Linux Live Patching Add-on (1-2 CPU); 3yr Support Req
SLES-SAP-2S2V-1A SLES for SAP Apps (1-2 CPU, 1-2 VM); 1-Yr Support Reqd
SLES-SAP-2SUV-1A SLES for SAP Apps (1-2 CPU, Unl VM); 1-Yr Support Reqd
SLES-SAP-2S2V-3A SLES for SAP Apps (1-2 CPU, 1-2 VM); 3-Yr Support Reqd
SLES-SAP-2SUV-3A SLES for SAP Apps (1-2 CPU, Unl VM); 3-Yr Support Reqd
SLES-SAP-2S2V-5A SLES for SAP Apps (1-2 CPU, 1-2 VM); 5-Yr Support Reqd
SLES-SAP-2SUV-5A SLES for SAP Apps (1-2 CPU, Unl VM); 5-Yr Support Reqd
SLES-SAP-2S2V-1S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 1-Yr SnS
SLES-SAP-2SUV-1S SLES for SAP Apps (1-2 CPU, Unl VM); Priority 1-Yr SnS
SLES-SAP-2S2V-3S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 3-Yr SnS
SLES-SAP-2SUV-3S SLES for SAP Apps (1-2 CPU, Unl VM); Priority 3-Yr SnS
SLES-SAP-2S2V-5S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 5-Yr SnS
SLES-SAP-2SUV-5S SLES for SAP Apps (1-2 CPU, Unl VM); Priority 5-Yr SnS
Table 25 OS Media
MSWS-19-ST16C-RM Windows Server 2019 Stan (16 Cores/2 VMs) Rec Media DVD Only
MSWS-19-DC16C-RM Windows Server 2019 DC (16Cores/Unlim VM) Rec Media DVD Only
If you have noncritical implementations and choose to have no service contract, the following
coverage is supplied:
For systems that include Unified Computing System Manager, the support service includes
downloads of UCSM upgrades. The Cisco Smart Net Total Care for UCS Service includes flexible
hardware replacement options, including replacement in as little as two hours. There is also
access to Cisco's extensive online technical resources to help maintain optimal efficiency and
uptime of the unified computing environment. For more information, please refer to the following URL:
http://www.cisco.com/c/en/us/services/technical/smart-net-total-care.html?stickynav=1
You can choose a desired service listed in Table 26.
Note: For PID UCSB-B200-M5-U, select Service SKU with BB200M5U suffix (Example: CON-PREM- BB200M5U)
For PID UCSB-B200-M5-CH, select Service SKU with B200M5CH suffix (Example: CON-PREM- B200M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 47)
**Includes Local Language Support (see Local Language Technical Support for UCS on page 48) – Only
available in China and Japan
***Includes Local Language Support and Drive Retention – Only available in China and Japan
Smart Net Total Care for Cisco UCS Onsite Troubleshooting Service
For faster parts replacement than is provided with the standard Cisco Unified Computing System warranty,
Cisco offers the Cisco Smart Net Total Care for UCS Hardware Only Service. You can choose from two levels of
advanced onsite parts replacement coverage in as little as four hours. Smart Net Total Care for UCS
Hardware Only Service provides remote access any time to Cisco support professionals who can determine if
a return materials authorization (RMA) is required. You can choose a desired service listed in Table 27.
Table 27 SNTC for Cisco UCS Onsite Troubleshooting Service (PID UCSB-B200-M5)
Note: For PID UCSB-B200-M5-U, select Service SKU with BB200M5U suffix (Example: CON-PREM- BB200M5U)
For PID UCSB-B200-M5-CH, select Service SKU with B200M5CH suffix (Example: CON-PREM- B200M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 47)
**Includes Local Language Support (see Local Language Technical Support for UCS on page 48) – Only
available in China and Japan
***Includes Local Language Support and Drive Retention – Only available in China and Japan
■ Expand their service portfolios to support the most complex network environments
■ Lower delivery costs
■ Deliver services that increase customer loyalty
PSS options enable eligible Cisco partners to develop and consistently deliver high-value technical support
that capitalizes on Cisco intellectual assets. This helps partners to realize higher margins and expand their
practice.
PSS for UCS provides hardware and software support, including triage support for third party software, backed
by Cisco technical resources and level three support. You can choose a desired service listed in Table 28.
Note: For PID UCSB-B200-M5-U, select Service SKU with BB200M5U suffix (Example: CON-PREM- BB200M5U)
For PID UCSB-B200-M5-CH, select Service SKU with B200M5CH suffix (Example: CON-PREM- B200M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 47)
Note: For PID UCSB-B200-M5-U, select Service SKU with BB200M5U suffix (Example: CON-PREM- BB200M5U)
For PID UCSB-B200-M5-CH, select Service SKU with B200M5CH suffix (Example: CON-PREM- B200M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 47)
Note: For PID UCSB-B200-M5-U, select Service SKU with BB200M5U suffix (Example: CON-PREM- BB200M5U)
For PID UCSB-B200-M5-CH, select Service SKU with B200M5CH suffix (Example: CON-PREM- B200M5CH)
Sophisticated data recovery techniques have made classified, proprietary, and confidential information
vulnerable, even on malfunctioning disk drives. The Drive Retention service enables you to retain your drives
and ensures that the sensitive data on those drives is not compromised, which reduces the risk of any
potential liabilities. This service also enables you to comply with regulatory, local, and federal
requirements.
If your company has a need to control confidential, classified, sensitive, or proprietary data, you might want
to consider one of the Drive Retention Services listed in the above tables (where available)
NOTE: Cisco does not offer a certified drive destruction service as part of this
service.
For a complete listing of available services for Cisco Unified Computing System, see the following URL:
http://www.cisco.com/en/US/products/ps10312/serv_group_home.html
SUPPLEMENTAL MATERIAL
System Board
A top view of the UCS B200 M5 system board is shown in Figure 5.
Figure 5 callouts include: 5 – CPU heat sink install guide pins; 6 – CPU 2 socket (shown unpopulated).
Note: When the front mezzanine storage module is installed, the USB connector is underneath
it. Use the small cutout opening in the storage module to visually determine the location of the
USB connector when you need to insert a USB drive. When the NVIDIA GPU is installed in the
front mezzanine slot, you cannot see the USB connector.
Each DIMM channel has two slots: slot 1 and slot 2. The blue-colored DIMM slots are for slot 1 and
the black slots for slot 2.
As an example, DIMM slots A1, B1, C1, D1, E1, and F1 belong to slot 1, while A2, B2, C2, D2, E2,
and F2 belong to slot 2.
Figure 6 shows how slots and channels are physically laid out on the motherboard. The DIMM slots
on the right half of the motherboard (channels A, B, C, D, E, and F) are associated with CPU 1,
while the DIMM slots on the left half of the motherboard (channels G, H, J, K, L, and M) are
associated with CPU 2. The slot 1 (blue) DIMM slots are always located farther away from a CPU
than the corresponding slot 2 (black) slots.
For all allowable DIMM populations, please refer to the “Memory Population Guidelines” section
of the B200 M5 Installation Guide, at the following link:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B200M5.p
df
For more details, see the Cisco UCS C220/C240/B200 M5 memory Guide at the following link:
https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-se
ries-rack-servers/memory-guide-c220-c240-b200-m5.pdf
When considering the memory configuration of your server, consider the following items:
■ Each channel has two DIMM slots (for example, channel A = slots A1 and A2).
— A channel can operate with one or two DIMMs installed.
■ When both CPUs are installed, populate the DIMM slots of each CPU identically.
■ Any DIMM installed in a DIMM socket for which the CPU is absent is not recognized.
DIMM Parameter   DIMMs in the Same Channel   DIMMs in the Same Slot1
DIMM Capacity (RDIMM = 16, 32, or 64 GB; LRDIMM = 32, 64, or 128 GB; TSV-RDIMM = 64 or 128 GB)   DIMMs in the same channel (for example, A1 and A2) can have different capacities.   For best performance, DIMMs in the same slot (for example, A1, B1, C1, D1, E1, F1) should have the same capacity.
DIMM Speed (2933- or 2666-MHz)   DIMMs will run at the lowest speed of the CPU installed   DIMMs will run at the lowest speed of the CPU installed
DIMM Type (TSV-RDIMMs, RDIMMs, or LRDIMMs)   Do not mix DIMM types in a channel   Do not mix DIMM types in a slot
Notes:
1. Although different DIMM capacities can exist in the same slot, this will result in less than optimal performance.
For optimal performance, all DIMMs in the same slot should be identical.
Memory Mirroring
When Memory Mirroring PID (N01-MMIRROR) is selected in STEP 3 CHOOSE MEMORY, page 15,
the DIMMS will be placed by the factory as shown in the Table 32.
■ Select 4, 6, 8, or 12 identical DIMMS per CPU.
■ If only 1 CPU is selected, please refer only to the CPU 1 DIMM placement columns in the
Table 32.
Table 32 Memory Mirroring
DIMMs per CPU   CPU 1 DIMM Placement in Channels (for identically ranked DIMMs)   CPU 2 DIMM Placement in Channels (for identically ranked DIMMs)
4   (A1, B1); (D1, E1)   (G1, H1); (K1, L1)
6   (A1, B1); (C1, D1); (E1, F1)   (G1, H1); (J1, K1); (L1, M1)
8   (A1, B1); (D1, E1); (A2, B2); (D2, E2)   (G1, H1); (K1, L1); (G2, H2); (K2, L2)
12   (A1, B1); (C1, D1); (E1, F1); (A2, B2); (C2, D2); (E2, F2)   (G1, H1); (J1, K1); (L1, M1); (G2, H2); (J2, K2); (L2, M2)
NOTE: For Memory and Mixed Modes, DIMMs are used as cache and do not factor into
CPU capacity.
■ CPU PIDs ending in “M” support capacities up to 2048 GB per CPU using:
— 6 x 128 GB DIMMs as cache and 4 x 512 GB PMEMs as memory, or
— 6x 256 GB DIMMs as cache and 4 x 512 GB PMEMs as memory
■ CPU PIDs ending in “L” support capacities up to 3072 GB using:
— 6 x 128 GB DIMMs as cache and 6 x 512 GB PMEMs as memory, or
The allowable 4608 GB limit for PMEM capacity is not reached in this case.
■ CPU PIDs not ending in “L” or “M” support capacities up to 1024 GB per CPU using:
— 6 x 128 GB DIMMs as cache and 2 x 512 GB PMEMs as memory, or
— 6 x 256 GB DIMMs as cache and 2 x 512 GB PMEMs as memory
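The tier limits above reduce to simple arithmetic: in Memory Mode only the PMEM capacity is visible to the operating system, so the per-CPU capacity is the PMEM count times the PMEM size. The Python sketch below is illustrative only (not Cisco tooling; the names and the 512 GB default are assumptions drawn from the bullets above).

    TIER_LIMIT_GB = {"M": 2048, "L": 3072, "standard": 1024}  # per-CPU limits from the bullets above

    def memory_mode_capacity_gb(pmem_count: int, pmem_size_gb: int = 512) -> int:
        # In Memory Mode the DIMMs act as cache; only PMEM capacity counts.
        return pmem_count * pmem_size_gb

    def within_tier(tier: str, pmem_count: int) -> bool:
        # Check a PMEM population against the per-CPU tier limit.
        return memory_mode_capacity_gb(pmem_count) <= TIER_LIMIT_GB[tier]

    print(memory_mode_capacity_gb(4), within_tier("M", 4))         # 2048 True
    print(memory_mode_capacity_gb(6), within_tier("L", 6))         # 3072 True
    print(memory_mode_capacity_gb(2), within_tier("standard", 2))  # 1024 True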
SPARE PARTS
This section lists the upgrade and service-related parts for the UCS B200 M5 server. Some of these parts are
configured with every server or with every UCS 5108 blade server chassis.
Packaging Kit
CPU Accessories
UCSB-HS-M5-F= CPU Heat Sink for UCS B-Series M5 CPU socket (Front)
UCSB-HS-M5-R= CPU Heat Sink for UCS B-Series M5 CPU socket (Rear)
UCS-CPU-TIM= Single CPU thermal interface material syringe for M5 server HS seal
UCSX-HSCK= UCS Processor Heat Sink Cleaning Kit (when replacing a CPU)2
Memory
Storage Controller
UCSB-MRAID-SC= Supercap for FlexStorage 12G SAS RAID controller w/1GB FBWC
UCSB-MRAID12G= Cisco FlexStorage 12G SAS RAID controller with drive bays
UCSB-MRAID12G-HE= Cisco FlexStorage 12G SAS RAID controller with 2 GB flash-backed write cache and
drive bays
UCSB-LSTOR-BK= Cisco FlexStorage blanking panel w/o controller, w/o drive bays
Drives
HDDs
Enterprise Performance SSDs (High endurance, supports up to 10X or 3X DWPD (drive writes per day))
SAS SSDs
UCS-SD16TH3-EP= 1.6TB 2.5 inch Enterprise performance 12G SAS SSD (3X DWPD)
UCS-SD32TH3-EP= 3.2TB 2.5in Enterprise performance 12G SAS SSD (3X DWPD)
SATA SSDs
UCS-SD480GIS3-EP= 480GB 2.5in Enterprise performance 6G SATA SSD(3X endurance) (Intel S4600)
UCS-SD960GIS3-EP= 960GB 2.5in Enterprise performance 6G SATA SSD(3X endurance) (Intel S4600)
UCS-SD19TIS3-EP= 1.9TB 2.5in Enterprise performance 6G SATA SSD(3X endurance) (Intel S4600)
Enterprise Value (Low endurance, supports up to 1X DWPD (drive writes per day))
UCS-SD960GBKS4-EV= 960 GB 2.5 inch Enterprise Value 6G SATA SSD (Samsung PM863A/PM883)
UCS-SD38TBKS4-EV= 3.8 TB 2.5 inch Enterprise Value 6G SATA SSD (Samsung PM863A/PM883)
UCS-SD120GBMS4-EV= 120 GB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)
UCS-SD240GBMS4-EV= 240 GB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)
UCS-SD480GBMS4-EV= 480 GB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)
UCS-SD76TBMS4-EV= 7.6 TB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100/5200)
UCS-SD480GBIS6-EV= 480 GB 2.5 inch Enterprise Value 6G SATA SSD (Intel S4500/S4150)
UCS-SD960GBIS6-EV= 960 GB 2.5 inch Enterprise Value 6G SATA SSD (Intel S4500/S4150)
UCS-SD38TBIS6-EV= 3.8 TB 2.5 inch Enterprise Value 6G SATA SSD (Intel S4500/S4150)
UCS-HD18G10K9= 1.8TB 12G SAS 10K RPM SFF HDD (4K format, SED)
UCS-HD24T10BNK9= 2.4 TB 12G SAS 10K RPM SFF HDD (4K) SED
NVMe1,2,3
UCSB-NVMEHW-I8000= Cisco 2.5" U.2 8TB Intel P4510 NVMe High Perf. Value Endurance
UCSB-NVMEXPB-I375= Cisco 2.5in U.2 375GB Intel P4800 NVMe Med. Perf
UCSB-NVME2H-I2TBV= Cisco 2.5" U.2 2.0TB Intel P4510 NVMe High Perf. Value Endurance
UCSB-NVME2H-I1000= Cisco 2.5" U.2 1.0 TB Intel P4510 NVMe High Perf. Value Endurance
UCSB-NVME2H-I1600= Cisco 2.5" U.2 1.6TB Intel P4610 NVMe High Perf. High Endurance
UCSB-NVME2H-I3200= Cisco 2.5" U.2 3.2TB Intel P4610 NVMe High Perf. High Endurance
UCSB-NVME2H-I4000= Cisco 2.5" U.2 4.0TB Intel P4510 NVMe High Perf. Value Endurance
UCSB-MLOM-40G-04= UCS VIC 1440 modular LOM for blade servers mLOM
UCSB-MLOM-40G-03= UCS VIC 1340 modular LOM for blade servers mLOM
GPUs
Power Cables
Software/Firmware
IMC Supervisor
CIMC-SUP-B01= IMC Supervisor-Branch Mgt SW for C-Series & E-Series up to 100 Svrs
CIMC-SUP-B02= IMC Supervisor- Branch Mgt SW for C & E-Series up to 250 Svrs
CIMC-SUP-A01= IMC Supervisor Adv-Branch Mgt SW for C & E-Series 100 Svrs
CIMC-SUP-A02= IMC Supervisor Adv-Branch Mgt SW for C & E-Series 250 Svrs
CIMC-SUP-A10= IMC Supervisor Adv-Branch Mgt SW for C & E-Series 1000 Svrs
CIMC-SUP-A25= IMC Supervisor Adv-Branch Mgt SW for C & E-Series 250 Svrs
NOTE: If you must order a quantity greater than 1 of UCS-MDMGR-1S, you need to reference the UCS Central Per
Server Data Sheet to order the standalone PIDs: UCS-MDMGR-LIC= or UCS-MDMGR-1DMN=
VMware vCenter
Red Hat
RHEL-SAP-2S2V-1S= RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 1-Yr SnS Reqd
RHEL-SAP-2S2V-3S= RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 3-Yr SnS Reqd
VMware
VMW-VSP-EPL-1S= VMware vSphere 6 Ent Plus (1 CPU), 1-yr Vmware SnS Reqd
VMW-VSP-EPL-3S= VMware vSphere 6 Ent Plus (1 CPU), 3-yr Vmware SnS Reqd
SUSE
Notes:
1. NVMe drives require the following storage controller in the front mezzanine slot:
UCSB-LSTOR-PT
2. For HDD or SSD drives to be in a RAID group, two identical HDDs or SSDs must be used in the group.
3. If HDD or SSD are in JBOD Mode, the drives do not need to be identical.
Please refer to the UCS B200 M5 Installation Guide for installation procedures.
(1) Have the following tools and materials available for the procedure:
(2) Order the appropriate replacement CPU from Available CPUs on page 10.
(3) Carefully remove and replace the CPU and heatsink in accordance with the instructions
found in “Cisco UCS B200 M5 Blade Server Installation and Service Note,” found at:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B200M5/B200
M5_chapter_011.html#id_104667.
(1) Have the following tools and materials available for the procedure:
(2) Order the appropriate new CPU from Table 4 on page 10.
(3) Order one heat sink for each new CPU. Order PID UCSB-HS-M5-F= for the front CPU socket
and PID UCSB-HS-M5-R= for the rear CPU socket.
(4) Carefully install the CPU and heatsink in accordance with the instructions found in “Cisco
UCS B200 M5 Blade Server Installation and Service Note,” found at:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B200M5/B200
M5_chapter_011.html#id_104667.
(1) Order new DIMMs or PMEMs as needed from Table 6 on page 16.
(3) Open both connector latches and remove and replace the DIMM/PMEM or blank as needed.
(4) Press evenly on both ends of the DIMM/PMEM until it clicks into place in its slot.
NOTE: Ensure that the notch in the DIMM/PMEM aligns with the slot. If the notch is
misaligned, it is possible to damage the DIMM/PMEM, the slot, or both.
(5) Press the connector latches inward slightly to seat them fully.
(6) Populate all slots with a DIMM, PMEM, or DIMM blank. A slot cannot be empty.
For additional details on replacing or upgrading DIMMs, see “Cisco UCS B200 M5 Blade Server
Installation and Service Note,” found at
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B200M5/B
200M5_chapter_011.html#concept_on5_vzl_kz.
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Parameter Value
Power Specifications
For configuration-specific power specifications, use the Cisco UCS Power Calculator at:
http://ucspowercalc.cisco.com
NOTE: When using 256 GB DDR DIMMs (UCS-ML-256G8RT-H) in this server, the blade-level power capping
must be set to 550 W. For information about blade-level power capping, see the Power Capping and Power
Management chapter in the Cisco UCS Manager Server Management Guide for your release: Cisco UCS
Manager Configuration Guides