SR650 Setup Guide
Before using this information and the product it supports, be sure to read and understand the safety
information and the safety instructions, which are available at:
http://thinksystem.lenovofiles.com/help/topic/safety_documentation/pdf_files.html
In addition, be sure that you are familiar with the terms and conditions of the Lenovo warranty for your server,
which can be found at:
http://datacentersupport.lenovo.com/warrantylookup
The server comes with a limited warranty. For details about the warranty, see:
https://support.lenovo.com/us/en/solutions/ht503310
The machine type and serial number are on the ID label on the right rack latch in the front of the server.
Figure 3. QR code
Note: Items marked with asterisk (*) are available on some models only.
1 Server
2 Rail kit*. Detailed instructions for installing the rail kit are provided in the package with the rail kit.
3 Cable management arm*
4 Material box, including items such as accessory kit, power cords* and documentation
Features
Performance, ease of use, reliability, and expansion capabilities were key considerations in the design of the
server. These design features make it possible for you to customize the system hardware to meet your needs
today and provide flexible expansion capabilities for the future.
Some of the features that are unique to the Lenovo XClarity Controller are enhanced performance, higher-
resolution remote video, and expanded security options. For additional information about the Lenovo
XClarity Controller, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/product_page.html
• UEFI-compliant server firmware
Lenovo ThinkSystem firmware is Unified Extensible Firmware Interface (UEFI) 2.5 compliant. UEFI
replaces BIOS and defines a standard interface between the operating system, platform firmware, and
external devices.
Lenovo ThinkSystem servers are capable of booting UEFI-compliant operating systems, BIOS-based
operating systems, and BIOS-based adapters as well as UEFI-compliant adapters.
Note: The server does not support DOS (Disk Operating System).
• Large system-memory capacity
The server supports registered DIMMs (RDIMMs), load-reduced DIMMs (LRDIMMs), three-dimensional
stack registered DIMMs (3DS RDIMMs) and DC Persistent Memory Modules (DCPMMs). For more
information about the specific types and maximum amount of memory, see “Specifications” on page 5.
• Flexible network support
The server has a connector for the LOM adapter, which provides two or four network connectors for
network support.
• Integrated Trusted Platform Module (TPM)
This integrated security chip performs cryptographic functions and stores private and public secure keys.
It provides the hardware support for the Trusted Computing Group (TCG) specification. You can
download the software to support the TCG specification.
Trusted Platform Module (TPM) has two versions - TPM 1.2 and TPM 2.0. You can change the TPM
version from 1.2 to 2.0 and back again.
For more information on TPM configurations, see “Enable TPM/TCM” in the Maintenance Manual.
Note: For customers in the Chinese Mainland, a Lenovo-qualified TPM 2.0 adapter or a Trusted
Cryptographic Module (TCM) adapter (sometimes called a daughter card) may be pre-installed.
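For example, once an operating system is installed, you can check which TPM version the OS detects. The following Python sketch is illustrative only and assumes a Linux host; the sysfs attribute shown (tpm_version_major) is exposed by recent kernels and may differ or be absent on older ones.

```python
# Minimal sketch: report which TPM version the OS sees on a Linux host.
# Sysfs paths vary by kernel and TPM version, so this is an
# assumption-laden illustration, not Lenovo tooling.
from pathlib import Path

version_file = Path("/sys/class/tpm/tpm0/tpm_version_major")
if version_file.exists():
    print("TPM major version:", version_file.read_text().strip())
elif Path("/sys/class/tpm/tpm0").exists():
    print("TPM present; version attribute not exposed by this kernel")
else:
    print("No TPM device visible to the OS")
```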
• Large data-storage capacity and hot-swap capability
The server models support a maximum of fourteen 3.5-inch hot-swap SAS/SATA storage drives or a
maximum of twenty-four 2.5-inch hot-swap SAS/SATA/NVMe storage drives.
With the hot-swap feature, you can add, remove, or replace drives without turning off the server.
• Light path diagnostics
Light path diagnostics provides LEDs to help you diagnose problems. For more information about the light
path diagnostics, see:
– “Front I/O assembly” on page 22
– “Rear view LEDs” on page 27
– “System board LEDs” on page 31
• Mobile access to Lenovo Service Information website
The server provides a QR code on the system service label, which is on the cover of the server, that you
can scan using a QR code reader and scanner with a mobile device to get quick access to the Lenovo
Service Information website. The Lenovo Service Information website provides additional information for
parts installation and replacement videos, and error codes for server support.
Specifications
The following information is a summary of the features and specifications of the server. Depending on the
model, some features might not be available, or some specifications might not apply.
Dimensions
• 2U
• Height: 86.5 mm (3.4 inches)
• Width:
  – With rack latches: 482.0 mm (19.0 inches)
  – Without rack latches: 444.6 mm (17.5 inches)
• Depth: 763.7 mm (30.1 inches)
Note: The depth is measured with rack latches installed, but without the security bezel installed.

Weight
Up to 32.0 kg (70.6 lb), depending on the server configuration
Processor
For a list of supported processors, see:
https://static.lenovo.com/us/en/serverproven/index.shtml
Notes:
• Intel Xeon 6137, 6242R, 6246R, 6248R, 6250, 6256, or 6258R processor is
supported only when the following requirements are met:
– The server chassis is the twenty-four 2.5-inch-bay chassis.
– The operating temperature is equal to or less than 30°C.
– Up to eight drives are installed in the drive bays 8–15.
• Intel Xeon 6144, 6146, 8160T, 6126T, 6244, and 6240Y processors, or processors with TDP equal to 200 watts or 205 watts (excluding 6137, 6242R, 6246R, 6248R, 6250, 6256, and 6258R), are supported only when the following requirements are met:
– The server chassis is the twenty-four 2.5-inch-bay chassis.
– Up to eight drives are installed in the drive bays 8–15 if the operating
temperature is equal to or less than 35°C, or up to sixteen drives are installed in
the drive bays 0–15 if the operating temperature is equal to or less than 30°C.
• For server models with sixteen/twenty/twenty-four NVMe drives, two processors
are needed, and the maximum supported processor TDP is 165 watts.
• For server models with twenty-four 2.5-inch and twelve 3.5-inch-drive bays, if Intel Xeon 6144 or 6146 processors are installed, the operating temperature must be equal to or less than 27°C.
• Intel Xeon 6154, 8168, 8180, and 8180M processors support the following server models: eight 3.5-inch-drive bays, eight 2.5-inch-drive bays, or sixteen 2.5-inch-drive bays. For server models with sixteen 2.5-inch and eight 3.5-inch drive bays, the operating temperature must be equal to or less than 30°C.
• Intel Xeon 6246, 6230T, and 6252N processors support the following server models: eight 3.5-inch-drive bays, eight 2.5-inch-drive bays, or sixteen 2.5-inch-drive bays.
• If two TruDDR4 2933, 128 GB 3DS RDIMMs are installed in one channel, the operating temperature must be equal to or less than 30°C.
Memory
For 1st Generation Intel Xeon Scalable Processor (Intel Xeon SP Gen 1):
• Slots: 24 memory module slots
• Minimum: 8 GB
• Maximum:
– 768 GB using registered DIMMs (RDIMMs)
– 1.5 TB using load-reduced DIMMs (LRDIMMs)
– 3 TB using three-dimensional stack registered DIMMs (3DS RDIMMs)
• Type (depending on the model):
– TruDDR4 2666, single-rank or dual-rank, 8 GB/16 GB/32 GB RDIMM
– TruDDR4 2666, quad-rank, 64 GB LRDIMM
– TruDDR4 2666, octa-rank, 128 GB 3DS RDIMM
For 2nd Generation Intel Xeon Scalable Processor (Intel Xeon SP Gen 2):
• Slots: 24 DIMM slots
• Minimum: 8 GB
• Maximum:
Notes:
• A memory dummy is required when any of the following hardware configuration requirements is met:
– Processors with TDP more than 125 watts installed
– Any of following processors installed: 5122, 8156, 6128, 6126, 4112, 5215,
5217, 5222, 8256, 6226, 4215, 4114T, 5119T, 5120T, 4109T, 4116T, 6126T,
6130T, 6138T, 5218T, 6238T
– GPU installed
– Server model: twenty-four 2.5-inch-drive bays, twelve 3.5-inch-drive bays
(except for Chinese Mainland)
• For server models with processors with TDP less than 125 watts installed and without the memory dummy installed, the memory performance might be degraded if one fan fails.
• Operating speed and total memory capacity depend on the processor model and
UEFI settings.
• For a list of supported memory modules, see:
https://static.lenovo.com/us/en/serverproven/index.shtml
Operating systems
For a list of supported operating systems, see:
https://lenovopress.com/osig
Graphics processing unit (GPU)
Your server supports the following GPUs or processing adapters:
• Full-height, full-length, double-slot GPUs or processing adapters: AMD MI25, AMD V340, NVIDIA® M10, NVIDIA M60, NVIDIA P40, NVIDIA P100, NVIDIA P6000, NVIDIA RTX5000, NVIDIA RTX A6000, NVIDIA V100, NVIDIA V100S, and NVIDIA A100
• Full-height, full-length, single-slot GPUs: NVIDIA P4000, NVIDIA RTX4000, and Cambricon MLU100-C3
• Full-height, half-length, single-slot GPU: NVIDIA V100
• Low-profile, half-length, single-slot GPUs: NVIDIA P4, NVIDIA P600, NVIDIA P620, NVIDIA T4, and Cambricon MLU270-S4
Note: The NVIDIA V100 GPU has two form factors: full-height full-length (FHFL) and full-height half-length (FHHL). Hereinafter, the full-height full-length V100 GPU is called the FHFL V100 GPU; the full-height half-length V100 GPU is called the FHHL V100 GPU.
• For server models with three NVIDIA P4 GPUs installed in PCIe slot 1, PCIe slot 5, and PCIe slot 6 at the same time, the operating temperature must be equal to or less than 35°C.
• If up to five NVIDIA P4 GPUs are installed, the server models support no more
than eight 2.5-inch hot-swap SAS/SATA/NVMe drives and the operating
temperature must be equal to or less than 35°C.
• For server models installed with the FHHL V100 GPU, NVIDIA T4, or Cambricon MLU270-S4 GPU, the operating temperature must be equal to or less than 30°C.
• If one NVIDIA T4 or Cambricon MLU270-S4 GPU is installed, install it in slot 1.
• For server models installed with one CPU, if two NVIDIA T4 or Cambricon
MLU270-S4 GPUs are installed, install in slot 1 and slot 2. For server models
installed with two CPUs, if two NVIDIA T4 or Cambricon MLU270-S4 GPUs are
installed, install in slot 1 and slot 5.
• For server models installed with one CPU, if three NVIDIA T4 or Cambricon
MLU270-S4 GPUs are installed, install in slot 1, slot 2 and slot 3. For server
models installed with two CPUs, if three NVIDIA T4 or Cambricon MLU270-S4
GPUs are installed, install in slot 1, slot 5 and slot 6.
• Four NVIDIA T4 or Cambricon MLU270-S4 GPUs are supported only for server
models installed with two CPUs, and installed in slot 1, slot 2, slot 5, and slot 6.
• Five NVIDIA T4 or Cambricon MLU270-S4 GPUs are supported only for server
models installed with two CPUs, and installed in slot 1, slot 2, slot 3, slot 5, and
slot 6.
• If an NVIDIA P600, NVIDIA P620, NVIDIA P4000, NVIDIA RTX4000, NVIDIA P6000, NVIDIA RTX A6000, or NVIDIA RTX5000 GPU is installed, the fan redundancy function is not supported. If one fan fails, power off the system immediately to prevent the GPU from overheating, and replace the fan with a new one.
• Cambricon MLU100-C3 processing adapter supports CentOS 7.6 when used in
combination with Intel Xeon SP Gen 2, and supports CentOS 7.5 when used in
combination with Intel Xeon SP Gen 1.
GPU is supported only when the following hardware configuration requirements are
met at the same time:
• Server model: eight 3.5-inch-drive bays, eight 2.5-inch-drive bays, or sixteen 2.5-
inch-drive bays
• Processor: High Tcase type; TDP less than or equal to 150 watts
Notes:
– For server models with eight 2.5-inch-drive bays, if the server is installed with
GPUs (except for GPU model NVIDIA P4, NVIDIA T4, NVIDIA V100 FHHL,
NVIDIA P600, NVIDIA P620, NVIDIA P4000, NVIDIA RTX4000, NVIDIA P6000,
NVIDIA RTX A6000, and NVIDIA RTX5000) and the operating temperature is
equal to or less than 30°C, the TDP should be less than or equal to 165 watts.
– For server models with eight 3.5-inch-drive bays or sixteen 2.5-inch-drive bays,
if server is installed with NVIDIA T4 or Cambricon MLU270-S4 GPU, the TDP
should be less than or equal to 150 watts.
– For server models with eight 2.5-inch-drive bays, if the server is installed with up to four NVIDIA T4 or Cambricon MLU270-S4 GPUs, the TDP can be more than 150 watts. If the server is installed with five NVIDIA T4 or Cambricon MLU270-S4 GPUs, the TDP must be less than or equal to 150 watts.
• Drive: no more than four NVMe drives installed, and no PCIe NVMe add-in-card
(AIC) installed.
• Power supply: for one GPU, 1100-watt or 1600-watt power supplies installed; for
two or three GPUs, 1600-watt power supplies installed
RAID adapters (depending on the model)
• An HBA 430-8i or 430-16i SAS/SATA adapter that supports JBOD mode but does not support RAID
• A RAID 530-8i SAS/SATA adapter that supports JBOD mode and RAID levels 0, 1, 5, 10, and 50
• A RAID 530-16i SAS/SATA adapter that supports JBOD mode and RAID levels 0, 1, and 10
• A RAID 730-8i 1G Cache SAS/SATA adapter that supports JBOD mode and RAID levels 0, 1, 5, 10, and 50
• A RAID 730-8i 2G Cache SAS/SATA adapter that supports JBOD mode and RAID levels 0, 1, 5, 6, 10, 50, and 60
• A RAID 730-8i 4G Flash SAS/SATA adapter with CacheCade (for some models only) that supports JBOD mode and RAID levels 0, 1, 5, 6, 10, 50, and 60
• A RAID 930-8e SAS/SATA adapter that supports JBOD mode and RAID levels 0, 1, 5, 6, 10, 50, and 60
• A RAID 930-8i, 930-16i, or 930-24i SAS/SATA adapter that supports JBOD mode and RAID levels 0, 1, 5, 6, 10, 50, and 60
Notes:
• If the 730-8i 2G Cache SAS/SATA adapter is installed, the 730-8i 1G Cache or 930-8i SAS/SATA adapter cannot be installed.
System fans
• One processor: five hot-swap fans (including one redundant fan)
• Two processors: six hot-swap fans (including one redundant fan)
Notes:
• For server models installed with Intel Xeon 6137, 6144, 6146, 6154, 6242R, 6246R, 6248R, 6250, 6256, 6258R, 8168, 8180, and 8180M processors, if one fan fails, the server performance might be degraded.
• If your server comes with only one processor, five system fans (fan 1 to fan 5) are adequate to provide proper cooling. However, you must keep the location for fan 6 occupied by a fan filler to ensure proper airflow.
• For server models with sixteen/twenty/twenty-four NVMe drives, the maximum operating temperature is 30°C. The server performance might be degraded at 27°C or above if one fan fails.
Power supplies (depending on the model)
One or two hot-swap power supplies for redundancy support:
• 550-watt ac 80 PLUS Platinum
• 750-watt ac 80 PLUS Platinum
• 750-watt ac 80 PLUS Titanium
• 1100-watt ac 80 PLUS Platinum
• 1600-watt ac 80 PLUS Platinum
CAUTION:
• 240 V dc input (input range: 180-300 V dc) is supported in Chinese Mainland ONLY. Power supplies with 240 V dc input do not support hot-plugging of the power cord. Before removing a power supply with dc input, turn off the server or disconnect the dc power sources at the breaker panel or by turning off the power source. Then, remove the power cord.
• For ThinkSystem products to operate error free in both dc and ac electrical environments, a TN-S earthing system that complies with the IEC 60364-1 (2005) standard must be present or installed.
To comply with ASHRAE class A3 and class A4 specifications, the server models must meet the following hardware configuration requirements at the same time:
• Two power supplies installed
• NVMe drives not installed
• PCIe flash adapter not installed
• ThinkSystem QLogic QL41134 PCIe 10Gb 4-Port Base-T Ethernet Card not
installed
• Mellanox ConnectX-6 and Innova-2 FPGA not installed.
• 240 GB or 480 GB M.2 drives not installed
• GPU not installed
• Certain processors not installed:
– Processors with TDP more than or equal to 150 watts not installed
– For server models with twenty-four 2.5-inch drives or twelve 3.5-inch drives,
the following frequency optimized processors not installed: Intel Xeon 4112,
4215, 5122, 5215, 5217, 5222, 6126, 6128, 6132, 6134, 6134M, 6137, 6226,
6242R, 6246R, 6248R, 6250, 6256, 6258R, 8156, and 8256 processors
Notes:
• For server models without GPU installed, select the standard air baffle.
• Before installing the large-size air baffle, ensure that the height of the installed heat sinks is 1U to leave
adequate space for installing the large-size air baffle.
Management offerings
The XClarity portfolio and other system management offerings described in this section are available to help
you manage the servers more conveniently and efficiently.
Overview

Lenovo XClarity Controller
Consolidates the service processor functionality, Super I/O, video controller, and remote presence capabilities into a single chip on the server system board.
Interface
• CLI application
• GUI application
• Mobile application
• Web interface
• REST API
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/product_page.html
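The REST API follows the industry-standard Redfish schema, so scripted access works with ordinary HTTP tooling. The following Python sketch is illustrative, not an excerpt from Lenovo documentation: it queries basic system state from the XClarity Controller over Redfish, and the host address and credentials are placeholders.

```python
# Minimal sketch: query server state from the XClarity Controller
# through its Redfish-compliant REST API. Host and credentials are
# placeholders; the endpoint layout follows the standard Redfish schema.
import requests

XCC_HOST = "https://192.0.2.10"   # placeholder XCC address
AUTH = ("USERID", "PASSW0RD")     # placeholder credentials

resp = requests.get(
    f"{XCC_HOST}/redfish/v1/Systems/1",
    auth=AUTH,
    verify=False,  # XCC ships with a self-signed certificate by default
)
resp.raise_for_status()
system = resp.json()
print("Model: ", system.get("Model"))
print("Power: ", system.get("PowerState"))
print("Health:", system.get("Status", {}).get("Health"))
```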
Lenovo XClarity Administrator
Interface
• GUI application
• Mobile application
• Web interface
• REST API
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/aug_product_page.html
Lenovo XClarity Essentials toolset
Portable and light toolset for server configuration, data collection, and firmware updates. Suitable for both single-server and multi-server management contexts.
Interface
• OneCLI: CLI application
• Bootable Media Creator: CLI application, GUI application
• UpdateXpress: GUI application
http://sysmgt.lenovofiles.com/help/topic/xclarity_essentials/overview.html
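Because OneCLI is a command-line tool, it is easy to drive from scripts. The sketch below is illustrative: the "onecli inventory getinfos" subcommand reflects commonly documented OneCLI syntax but should be verified against the OneCLI user guide for your release, and the sketch assumes the onecli binary is on the PATH.

```python
# Minimal sketch: invoke OneCLI from a script to pull a hardware
# inventory. The subcommand shown reflects commonly documented OneCLI
# syntax; verify it against the OneCLI user guide for your release.
import subprocess

result = subprocess.run(
    ["onecli", "inventory", "getinfos"],  # assumes onecli is on PATH
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout)
if result.returncode != 0:
    print("OneCLI reported an error:", result.stderr)
```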
Lenovo XClarity Provisioning Manager
UEFI-based GUI tool on a single server that can simplify management tasks.
Interface
• Web interface (BMC remote access)
• GUI application

Lenovo XClarity Integrator
Interface
• GUI application
https://sysmgt.lenovofiles.com/help/topic/lxci/lxci_product_page.html
Lenovo XClarity Energy Manager
Application that can manage and monitor server power and temperature.
https://datacentersupport.lenovo.com/solutions/lnvo-lxem

Lenovo Capacity Planner
Interface
• GUI application
• Web interface
https://datacentersupport.lenovo.com/solutions/lnvo-lcp
Functions

The offerings are compared across the following functions: multi-system management, OS deployment, system configuration, firmware updates, events/alert monitoring, inventory/logs, power management, and power planning. Within the Lenovo XClarity Essentials toolset, OneCLI, Bootable Media Creator, and UpdateXpress each support a subset of these functions, as does Lenovo XClarity Provisioning Manager; see the notes below for the limitations that apply.
Notes:
1. Most options can be updated through the Lenovo tools. Some options, such as GPU firmware or Omni-Path firmware, require the use of vendor tools.
2. Firmware updates are limited to Lenovo XClarity Provisioning Manager, BMC firmware, and UEFI
updates only. Firmware updates for optional devices, such as adapters, are not supported.
3. The server UEFI settings for option ROM must be set to UEFI to update firmware using Lenovo XClarity
Essentials Bootable Media Creator.
4. The server UEFI settings for option ROM must be set to UEFI for detailed adapter card information, such
as model name and firmware levels, to be displayed in Lenovo XClarity Administrator, Lenovo XClarity
Controller, or Lenovo XClarity Essentials OneCLI.
5. It’s highly recommended that you check the power summary data for your server using Lenovo Capacity
Planner before purchasing any new parts.
Front view
The front view of the server varies by model.
The illustrations in this topic show the server front views based on the supported drive bays.
Notes:
• Your server might look different from the illustrations in this topic.
• The chassis for sixteen 2.5-inch-drive bays cannot be upgraded to the chassis for twenty-four 2.5-inch-
drive bays.
Figure 4. Front view of server models with eight 2.5-inch drive bays (0–7)
Figure 5. Front view of server models with sixteen 2.5-inch drive bays (0–15)
Figure 7. Front view of server models with twenty-four 2.5-inch drive bays (0–23)
Figure 8. Front view of server models with eight 3.5-inch drive bays (0–7)
Figure 9. Front view of server models with twelve 3.5-inch drive bays (0–11)
1 Pull-out information tab
2 Front I/O assembly
The XClarity Controller network access label is attached on the top side of the pull-out information tab.
For information about the controls, connectors, and status LEDs on the front I/O assembly, see “Front I/O
assembly” on page 22.
3 5 Rack latches
If your server is installed in a rack, you can use the rack latches to help you slide the server out of the rack.
You also can use the rack latches and screws to secure the server in the rack so that the server cannot slide
out, especially in vibration-prone areas. For more information, refer to the Rack Installation Guide that comes
with your rail kit.
4 Drive bays
The number of the installed drives in your server varies by model. When you install drives, follow the order of
the drive bay numbers.
The EMI integrity and cooling of the server are protected by having all drive bays occupied. The vacant drive
bays must be occupied by drive bay fillers or drive fillers.
6 VGA connector (available on some models)
Used to attach a high-performance monitor, a direct-drive monitor, or other devices that use a VGA connector.
7 Drive activity LED
• Solid green: The drive is powered but not active.
• Blinking yellow (blinking slowly, about one flash per second): The drive is being rebuilt.
• Blinking yellow (blinking rapidly, about four flashes per second): The RAID adapter is locating the drive.
The following illustrations show the controls, connectors, and LEDs on the front I/O assembly of the server.
To locate the front I/O assembly, see “Front view” on page 19.
Figure 10. Front I/O assembly for server models with eight 3.5-inch-drive bays, eight 2.5-inch-drive bays, and sixteen 2.5-
inch-drive bays
Figure 11. Front I/O assembly for server models with twelve 3.5-inch-drive bays and twenty-four 2.5-inch-drive bays
1 XClarity Controller USB connector
2 USB 3.0 connector
Depending on the setting, this connector supports USB 2.0 function, XClarity Controller management
function, or both.
• If the connector is set for USB 2.0 function, you can attach a device that requires a USB 2.0 connection,
such as a keyboard, a mouse, or a USB storage device.
• If the connector is set for XClarity Controller management function, you can attach a mobile device with the Lenovo XClarity Mobile application installed to access XClarity Controller event logs.
• If the connector is set to have both functions, you can press the system ID button for three seconds to
switch between the two functions.
For more information, see “Set the network connection for the Lenovo XClarity Controller” on page 197.
Used to attach a device that requires a USB 2.0 or 3.0 connection, such as a keyboard, a mouse, or a USB
storage device.
You can press the power button to turn on the server when you finish setting up the server. You also can hold
the power button for several seconds to turn off the server if you cannot turn off the server from the operating
system. The power status LED helps you to determine the current power status.
• Slow blinking green (about one flash per second): The server is off and is ready to be powered on (standby state).
• Fast blinking green (about four flashes per second): The server is off, but the XClarity Controller is initializing, and the server is not ready to be powered on.
The network activity LED on the front I/O assembly helps you identify the network connectivity and activity.
Use this system ID button and the blue system ID LED to visually locate the server. A system ID LED is also
located on the rear of the server. Each time you press the system ID button, the state of both the system ID
LEDs changes. The LEDs can be changed to on, blinking, or off. You can also use the Lenovo XClarity
Controller or a remote management program to change the state of the system ID LEDs to assist in visually
locating the server among other servers.
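For example, a remote management program can toggle the system ID LED through the XClarity Controller's Redfish REST API. The following Python sketch is illustrative, not Lenovo-documented code: the host, credentials, and resource path are placeholders, and IndicatorLED is the standard Redfish property for the identify LED.

```python
# Minimal sketch: blink the system ID (identify) LED through the
# XClarity Controller's Redfish-compliant REST API. Host, credentials,
# and the /Systems/1 path are placeholders; IndicatorLED is the
# standard Redfish property name for the identify LED.
import requests

XCC_HOST = "https://192.0.2.10"   # placeholder XCC address
AUTH = ("USERID", "PASSW0RD")     # placeholder credentials

resp = requests.patch(
    f"{XCC_HOST}/redfish/v1/Systems/1",
    json={"IndicatorLED": "Blinking"},  # "Lit" and "Off" are also valid
    auth=AUTH,
    verify=False,  # XCC ships with a self-signed certificate by default
)
resp.raise_for_status()
```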
If the XClarity Controller USB connector is set to have both the USB 2.0 function and XClarity Controller
management function, you can press the system ID button for three seconds to switch between the two
functions.
The system error LED provides basic diagnostic functions for your server. If the system error LED is lit, one or
more LEDs elsewhere in the server might also be lit to direct you to the source of the error.
On (yellow): An error has been detected on the server. Causes might include, but are not limited to, the following errors:
• The temperature of the server reached the non-critical temperature threshold.
• The voltage of the server reached the non-critical voltage threshold.
• A fan has been detected to be running at low speed.
• A hot-swap fan has been removed.
• The power supply has a critical error.
• The power supply is not connected to the power.
Check the event log to determine the exact cause of the error. Alternatively, follow the light path diagnostics to determine if additional LEDs are lit that will direct you to identify the cause of the error. For information about the light path diagnostics, see the Maintenance Manual for your server.
Rear view
The rear of the server provides access to several connectors and components.
Figure 13. Rear view of server models with two rear 3.5-inch drive bays (12/13 or 24/25) and three PCIe slots
1 Ethernet connectors on the LOM adapter (available on some models)
2 XClarity Controller network connector
9 PCIe slot 6 (on riser 2)
10 PCIe slot 4 (with a serial port module installed on some models)
The LOM adapter provides two or four extra Ethernet connectors for network connections.
The leftmost Ethernet connector on the LOM adapter can be set as the XClarity Controller network connector. To set the Ethernet connector as the XClarity Controller network connector, start the Setup utility, go to BMC Settings ➙ Network Settings ➙ Network Interface Port and select Shared. Then, go to Shared NIC on and select PHY card.
Used to attach an Ethernet cable to manage the system using XClarity Controller.
3 VGA connector
Used to attach a high-performance monitor, a direct-drive monitor, or other devices that use a VGA
connector.
Used to attach a device that requires a USB 2.0 or 3.0 connection, such as a keyboard, a mouse, or a USB
storage device.
5 NMI button
Press this button to force a nonmaskable interrupt (NMI) to the processor. In this way, you can blue-screen the server and take a memory dump. You might have to use a pen or the end of a straightened paper clip to press the button.
6 Power supply 1
7 Power supply 2 (available on some models)
The hot-swap redundant power supplies help you avoid significant interruption to the operation of the
system when a power supply fails. You can purchase a power supply option from Lenovo and install the
power supply to provide power redundancy without turning off the server.
On each power supply, there are three status LEDs near the power cord connector. For information about the
status LEDs, see “Rear view LEDs” on page 27.
8 9 10 11 12 13 PCIe slots
You can find the PCIe slot numbers on the rear of the chassis.
Notes:
• Your server supports PCIe slot 5 and PCIe slot 6 when two processors are installed.
• Do not install PCIe adapters with small form factor (SFF) connectors in PCIe slot 6.
• Observe the following PCIe slot selection priority when installing an Ethernet card or a converged network adapter (see the sketch after this list):
  – One processor: 4, 2, 3, 1
  – Two processors: 4, 2, 6, 3, 5, 1
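As an illustrative sketch, the following Python helper encodes the slot orders from the list above; the slot data comes straight from this guide, while the helper name and structure are mine, not Lenovo tooling.

```python
# Minimal sketch: encode the PCIe slot selection priority for Ethernet
# cards / converged network adapters from the list above. The helper
# name and structure are illustrative, not from Lenovo tooling.
ETHERNET_SLOT_PRIORITY = {
    1: [4, 2, 3, 1],          # one processor installed
    2: [4, 2, 6, 3, 5, 1],    # two processors installed
}

def slots_in_priority_order(processor_count: int) -> list[int]:
    """Return the PCIe slots to try, highest priority first."""
    try:
        return ETHERNET_SLOT_PRIORITY[processor_count]
    except KeyError:
        raise ValueError("the server supports one or two processors") from None

print(slots_in_priority_order(2))  # [4, 2, 6, 3, 5, 1]
```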
There are five different riser cards that can be installed in riser 1.
• Type 1
– Slot 1: PCIe x16 (x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 2: PCIe x16 (x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 3: PCIe x16 (x8, x4, x1), full-height, half-length
• Type 2
– Slot 1: PCIe x16 (x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 2: PCIe x16 (x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 3: ML2 x8 (x8, x4, x1), full-height, half-length
• Type 3
– Slot 1: PCIe x16 (x16, x8, x4, x1), full-height, half-length/full-height, full-length
Used to install up to two 3.5-inch hot-swap drives on the rear of the server. The rear 3.5-inch drive bays are
available on some models.
The number of the installed drives in your server varies by model. The EMI integrity and cooling of the server
are protected by having all drive bays occupied. The vacant drive bays must be occupied by drive bay fillers
or drive fillers.
1 System ID LED
The blue system ID LED helps you to visually locate the server. A system ID LED is also located on the front
of the server. Each time you press the system ID button, the state of both the system ID LEDs changes. The
LEDs can be changed to on, blinking, or off. You can also use the Lenovo XClarity Controller or a remote
management program to change the state of the system ID LEDs to assist in visually locating the server
among other servers.
The system error LED provides basic diagnostic functions for your server. If the system error LED is lit, one or
more LEDs elsewhere in the server might also be lit to direct you to the source of the error. For more
information, see “Front I/O assembly” on page 22.
5 Power input LED
• Green: The power supply is connected to the ac power source.
• Off: The power supply is disconnected from the ac power source or a power problem occurs.
6 Power output LED
• Green: The server is on and the power supply is working normally.
• Blinking green: The power supply is in zero-output mode (standby). When the server power load is low, one of the installed power supplies enters the standby state while the other one delivers the entire load. When the power load increases, the standby power supply switches to active state to provide sufficient power to the server.
  To disable zero-output mode, start the Setup utility, go to System Settings ➙ Power ➙ Zero Output and select Disable. If you disable zero-output mode, both power supplies will be in the active state.
• Off: The server is powered off, or the power supply is not working properly. If the server is powered on but the power output LED is off, replace the power supply.
7 Power supply error LED
• Yellow: The power supply has failed. To resolve the issue, replace the power supply.
• Off: The power supply is working normally.
1 Riser 2 slot
2 Serial-port-module connector
23 System fan 5 connector
24 System fan 6 connector
Notes:
• 1 Trusted Cryptography Module
• 2 Trusted Platform Module
1 System power LED
When this LED is lit, it indicates that the server is powered on.
2 System ID LED
The blue system ID LED helps you to visually locate the server. A system ID LED is also located on the front
of the server. Each time you press the system ID button, the state of both the system ID LEDs changes. The
LEDs can be changed to on, blinking, or off. You can also use the Lenovo XClarity Controller or a remote
management program to change the state of the system ID LEDs to assist in visually locating the server
among other servers.
When this yellow LED is lit, one or more LEDs elsewhere in the server might also be lit to direct you to the
source of the error. For more information, see “Front I/O assembly” on page 22.
When a memory module error LED is lit, it indicates that the corresponding memory module has failed.
When a fan error LED is lit, it indicates that the corresponding system fan is operating slowly or has failed.
Note: Disengage all latches, release tabs, or locks on cable connectors when you disconnect cables from
the system board. Failing to release them before removing the cables will damage the cable sockets on the
system board, which are fragile. Any damage to the cable sockets might require replacing the system board.
GPU
Use the section to understand the cable routing for the GPUs.
Figure 17. Cable routing for server models with up to two GPUs
1 GPU power cable: Power connector on the GPU installed in PCIe slot 5 ➙ GPU power connector 1 on the system board
2 GPU power cable: Power connector on the GPU installed in PCIe slot 1 ➙ GPU power connector 2 on the system board
Figure 18. Cable routing for server models with up to three GPUs
1 GPU power cable: Power connectors on the GPUs installed in PCIe slots 5 and 6 ➙ GPU power connector 1 on the system board
2 GPU power cable: Power connector on the GPU installed in PCIe slot 1 ➙ GPU power connector 2 on the system board
Figure 19. Cable routing for server models with two Cambricon MLU100-C3 processing adapters
1 GPU power cable: Power connectors on the adapters installed in PCIe slots 5 and 6 ➙ GPU power connector 1 on the system board
Figure 20. Cable routing for server models with four Cambricon MLU100-C3 processing adapters
1 GPU power cable: Power connectors on the adapters installed in PCIe slots 5 and 6 ➙ GPU power connector 1 on the system board
2 GPU power cable: Power connectors on the adapters installed in PCIe slots 1 and 2 ➙ GPU power connector 2 on the system board
Before you route cables for backplanes, observe the adapter priority and the PCIe slot selection priority when installing the NVMe switch adapter or a RAID adapter.
• Adapter priority: NVMe switch adapter, 24i RAID adapter, 8i RAID adapter, 16i RAID adapter
• PCIe slot selection priority when installing the NVMe switch adapter:
  – One processor: 1
  – Two processors: 1, 5, 6
  – For server models with sixteen/twenty/twenty-four NVMe drives (with two processors installed):
    One processor: 1, 2, 3
    Two processors: 1, 2, 3, 5, 6
• PCIe slot selection priority when installing a RAID adapter:
  – One processor: 7, 4, 2, 3, 1
  – Two processors: 7, 4, 2, 3, 1, 5, 6
Notes:
• The PCIe slot 7 refers to the RAID adapter slot on the system board.
• If the rear hot-swap drive assembly is installed, PCIe slots 1, 2, and 3 will become unavailable because the
space is occupied by the rear hot-swap drive assembly.
• The adapter priority of the 530-16i or 930-16i RAID adapter can be higher than that of the 930-8i RAID adapter when both a 16i RAID adapter and an 8i RAID adapter are chosen.
Server model: eight 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one 16i RAID
adapter
Note: The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is
installed. Depending on the model, the rear hot-swap drive assembly and the cable 3 might not be available
on your server.
Figure 21. Cable routing for server models with eight 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 16i RAID adapter
2 SAS signal cable for front backplane: SAS 0 and SAS 1 connectors on front backplane ➙ C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot
3 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C2 connector on the 16i RAID adapter installed on the RAID adapter slot
Figure 22. Cable routing for server models with eight 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 24i RAID adapter
1 Power cable for front backplane: Power connector on front backplane ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane: SAS 0 and SAS 1 connectors on front backplane ➙ C0 and C1 connectors on the 24i RAID adapter
3 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C2 connector on the 24i RAID adapter
Figure 23. Cable routing for server models with eight 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
two 8i RAID adapters
1 Power cable for front backplane: Power connector on front backplane ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane: SAS 0 and SAS 1 connectors on front backplane ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
3 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C0 connector on the 8i RAID adapter installed in PCIe slot 4
Figure 24. Cable routing for server models with eight 2.5-inch SAS/SATA drives, and one 730-8i 4G Flash SAS/SATA
RAID adapter with CacheCade
1 Power cable for front backplane: Power connector on front backplane ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane: SAS 0 and SAS 1 connectors on front backplane ➙ C0 and C1 connectors on the 8i RAID adapter installed in PCIe slot 4
Figure 25. Cable routing for server models with eight 2.5-inch SAS/SATA drives, Intel Xeon 6137, 6242R, 6246R, 6248R,
6250, 6256, or 6258R processors, and one 8i RAID adapter
1 SAS signal cable for front backplane: SAS 0 and SAS 1 connectors on front backplane ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
2 Power cable for front backplane: Power connector on front backplane ➙ Backplane power connector 2 on the system board
Figure 26. Cable routing for server models with four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and two 8i RAID adapters
1 Power cable for front backplane: Power connector on front backplane ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane: SAS 0 and SAS 1 connectors on front backplane ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
4 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C0 connector on the 8i RAID adapter installed in PCIe slot 4
Server model: four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-swap
drive assembly, one 16i RAID adapter
Note: The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is
installed. Depending on the model, the rear hot-swap drive assembly and the cable 4 might not be available
on your server.
Figure 27. Cable routing for server models with four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one 16i RAID adapter
2 SAS signal cable for front backplane: SAS 0 and SAS 1 connectors on front backplane ➙ C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot
3 NVMe signal cable for front backplane: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C2 connector on the 16i RAID adapter installed on the RAID adapter slot
Figure 28. Cable routing for server models with four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one 24i RAID adapter
1 Power cable for front backplane: Power connector on front backplane ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane: SAS 0 and SAS 1 connectors on front backplane ➙ C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5
3 NVMe signal cable for front backplane: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C2 connector on the 24i RAID adapter installed in PCIe slot 5
Figure 29. Cable routing for server models with four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
Intel Xeon 6137, 6242R, 6246R, 6248R, 6250, 6256, or 6258R processors, and one 8i RAID adapter
1 NVMe signal cable for front backplane: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
2 SAS signal cable for front backplane: SAS 0 and SAS 1 connectors on front backplane ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
3 Power cable for front backplane: Power connector on front backplane ➙ Backplane power connector 2 on the system board
Server model: sixteen 2.5-inch SAS/SATA drives, one 16i RAID adapter
Figure 30. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives and one 16i RAID adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot
4 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
Figure 31. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one
8i RAID adapter, and one 16i RAID adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
3 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C0 and C1 connectors on the 16i RAID adapter installed in PCIe slot 4
4 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
5 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C2 connector on the 16i RAID adapter installed in PCIe slot 4
Figure 32. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 24i RAID adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5
3 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 5
5 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C4 connector on the 24i RAID adapter installed in PCIe slot 5
Figure 33. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
three 8i RAID adapters
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
3 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C0 and C1 connectors on the 8i RAID adapter installed in PCIe slot 4
5 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C0 connector on the 8i RAID adapter installed in PCIe slot 5
Figure 34. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
and one 16i RAID adapter
1 Power cable for front backplane 1: Power connector on backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on backplane 1 ➙ C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot
3 NVMe signal cable for front backplane 1: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C2 and C3 connectors on the 16i RAID adapter installed on the RAID adapter slot
5 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
Figure 35. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
and one 24i RAID adapter
1 Power cable for front backplane 1: Power connector on backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on backplane 1 ➙ C0 and C1 connectors on the 24i RAID adapter installed on the riser assembly
3 NVMe signal cable for front backplane 1: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C2 and C3 connectors on the 24i RAID adapter installed on the riser assembly
5 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
Note: The 24i RAID adapter can be installed on riser assembly 1 or riser assembly 2.
Figure 36. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, one 8i RAID adapter, and one 16i RAID adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
3 NVMe signal cable for front backplane 1: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C0 and C1 connectors on the 16i RAID adapter installed in PCIe slot 4
5 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
6 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C2 connector on the 16i RAID adapter installed in PCIe slot 4
Figure 37. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one 24i RAID adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5
3 NVMe signal cable for front backplane 1: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 5
5 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
6 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C4 connector on the 24i RAID adapter installed in PCIe slot 5
Figure 38. Cable routing for server models with eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives,
one 16i RAID adapter, and one NVMe switch adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot
3 NVMe signal cable for front backplane 1: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on backplane 1 ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C2 and C3 connectors on the 16i RAID adapter installed on the RAID adapter slot
6 NVMe signal cable for front backplane 2: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 ➙ C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 1
Server model: eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, one 8i RAID adapter, one 16i RAID adapter, one NVMe switch adapter
Figure 39. Cable routing for server models with eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, one 8i RAID adapter, one 16i RAID adapter, and one NVMe switch adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
3 NVMe signal cable for front backplane 1: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 NVMe signal cable for front backplane 2: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 ➙ C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5
6 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
7 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C2 connector on the 16i RAID adapter installed in PCIe slot 4
Figure 40. Cable routing for server models with eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, three 8i RAID adapters, and one NVMe switch adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
3 NVMe signal cable for front backplane 1: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 NVMe signal cable for front backplane 2: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 ➙ C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5
6 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
7 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C0 connector on the 8i RAID adapter installed in PCIe slot 6
Figure 41. Cable routing for server models with eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, one 24i RAID adapter, and one NVMe switch adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 6
3 NVMe signal cable for front backplane 1: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 ➙ NVMe 0–1 and NVMe 2–3 connectors on the system board
4 NVMe signal cable for front backplane 2: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 ➙ C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5
5 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 6
7 SAS signal cable for the rear hot-swap drive assembly: Signal connector on the rear hot-swap drive assembly ➙ C4 connector on the 24i RAID adapter installed in PCIe slot 6
Figure 42. Cable routing for server models with sixteen 2.5-inch NVMe drives, two NVMe 810-4P switch adapters, and
two NVMe 1610-4P switch adapters
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 NVMe signal cable for front backplane 1: NVMe 0 and NVMe 1 connectors on front backplane 1 ➙ NVMe 2–3 and NVMe 0–1 connectors on the system board
3 NVMe signal cable for front backplane 1: NVMe 2 and NVMe 3 connectors on front backplane 1 ➙ C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 6
4 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
5 NVMe signal cable for front backplane 2: NVMe 0 connector on front backplane 2 ➙ C0 and C1 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 4
6 NVMe signal cable for front backplane 2: NVMe 1 connector on front backplane 2 ➙ C0 and C1 connectors on the NVMe 810-4P switch adapter installed in the RAID adapter slot on the system board
7 NVMe signal cable for front backplane 2: NVMe 2 and NVMe 3 connectors on front backplane 2 ➙ C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 1
Server model: twenty 2.5-inch NVMe drives, two NVMe 810-4P switch adapters, three NVMe 1610-4P
switch adapters
Figure 43. Cable routing for server models with twenty 2.5-inch NVMe drives, two NVMe 810-4P switch adapters, and
three NVMe 1610-4P switch adapters
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 NVMe signal cable for front backplane 1: NVMe 0 and NVMe 1 connectors on front backplane 1 ➙ NVMe 2–3 and NVMe 0–1 connectors on the system board
3 NVMe signal cable for front backplane 1: NVMe 2 and NVMe 3 connectors on front backplane 1 ➙ C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 6
4 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
5 NVMe signal cable for front backplane 2: NVMe 0 and NVMe 1 connectors on front backplane 2 ➙ C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 5
6 NVMe signal cable for front backplane 2: NVMe 2 connector on front backplane 2 ➙ C0 and C1 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 4
9 NVMe signal cable for front backplane 3: NVMe 0 and NVMe 1 connectors on front backplane 3 ➙ C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 1
Server model: twenty-four 2.5-inch SAS/SATA drives, one 8i RAID adapter, one 16i RAID adapter
Figure 44. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives, one 8i RAID adapter, and one 16i
RAID adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot
4 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
5 Power cable for front backplane 3: Power connector on front backplane 3 ➙ Backplane power connector 3 on the system board
6 SAS signal cable for front backplane 3: SAS 0 and SAS 1 connectors on front backplane 3 ➙ C2 and C3 connectors on the 16i RAID adapter installed in PCIe slot 4
Figure 45. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives and one 24i RAID adapter
1 Power cable for front backplane 1: Power connector on front backplane 1 ➙ Backplane power connector 1 on the system board
2 SAS signal cable for front backplane 1: SAS 0 and SAS 1 connectors on front backplane 1 ➙ C0 and C1 connectors on the 24i RAID adapter installed on riser 1 assembly
3 Power cable for front backplane 2: Power connector on front backplane 2 ➙ Backplane power connector 2 on the system board
4 SAS signal cable for front backplane 2: SAS 0 and SAS 1 connectors on front backplane 2 ➙ C2 and C3 connectors on the 24i RAID adapter installed on riser 1 assembly
6 SAS signal cable for front backplane 3: SAS 0 and SAS 1 connectors on front backplane 3 ➙ C4 and C5 connectors on the 24i RAID adapter installed on riser 1 assembly
Figure 46. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly,
and four 8i RAID adapters
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C0 and C1 connectors on the 8i RAID adapter installed in PCIe slot 4.
Cable 4 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 7 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C0 connector on the 8i RAID adapter installed in PCIe slot 6.
Figure 47. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly,
two 8i RAID adapters, and one 16i RAID adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C0 and C1 connectors on the 8i RAID adapter installed in PCIe slot 4.
Cable 4 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 6 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C0 and C1 connectors on the 16i RAID adapter installed in PCIe slot 5.
Cable 7 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C2 connector on the 16i RAID adapter installed in PCIe slot 5.
Figure 48. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly,
one 8i RAID adapter, and one 24i RAID adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5.
Cable 3 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 5.
Cable 4 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 7 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C0 connector on the 8i RAID adapter installed on the RAID adapter slot.
Figure 49. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly,
and two 16i RAID adapters
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 3 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 4 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 6 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C0 and C1 connectors on the 16i RAID adapter installed in PCIe slot 4.
Cable 7 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C2 connector on the 16i RAID adapter installed in PCIe slot 4.
Figure 50. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
one 8i RAID adapter, and one 16i RAID adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C0 and C1 connectors on the 16i RAID adapter installed in PCIe slot 4.
Cable 5 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 6 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 7 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C2 and C3 connectors on the 16i RAID adapter installed in PCIe slot 4.
Figure 51. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
and one 24i RAID adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed on riser 1 assembly.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed on riser 1 assembly.
Cable 7 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C4 and C5 connectors on the 24i RAID adapter installed on riser 1 assembly.
Figure 52. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and four 8i RAID adapters
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C0 and C1 connectors on the 8i RAID adapter installed in PCIe slot 4.
Cable 5 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 6 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 8 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C0 connector on the 8i RAID adapter installed in PCIe slot 6.
Figure 53. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, two 8i RAID adapters, and one 16i RAID adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C0 and C1 connectors on the 8i RAID adapter installed in PCIe slot 4.
Cable 5 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 6 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 8 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C2 connector on the 16i RAID adapter installed in PCIe slot 5.
Figure 54. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, one 8i RAID adapter, and one 24i RAID adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 5.
Cable 5 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 6 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 8 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C0 connector on the 8i RAID adapter installed on the RAID adapter slot.
Figure 55. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and two 16i RAID adapters
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 5 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 6 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 8 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C2 connector on the 16i RAID adapter installed in PCIe slot 4.
Figure 56. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe
drives, one 24i RAID adapter, and one NVMe switch adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed in an available PCIe slot.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (NVMe signal cable for front backplane 2): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in an available PCIe slot.
Cable 5 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed in an available PCIe slot.
Cable 6 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 8 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C4 and C5 connectors on the 24i RAID adapter installed in an available PCIe slot.
Figure 57. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe
drives, the rear hot-swap drive assembly, one 8i RAID adapter, one 24i RAID adapter, and one NVMe switch adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (NVMe signal cable for front backplane 2): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
Cable 5 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 6 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 7 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 9 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C0 connector on the 8i RAID adapter installed on the RAID adapter slot.
Figure 58. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe
drives, the rear hot-swap drive assembly, two 16i RAID adapters, and one NVMe switch adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (NVMe signal cable for front backplane 2): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
Cable 5 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 6 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 8 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C0 and C1 connectors on the 16i RAID adapter installed in PCIe slot 4.
Cable 9 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C2 connector on the 16i RAID adapter installed in PCIe slot 4.
Figure 59. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe
drives, the rear hot-swap drive assembly, two 8i RAID adapters, one 16i RAID adapter, and one NVMe switch adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 5 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C0 and C1 connectors on the 16i RAID adapter installed in PCIe slot 6.
Cable 6 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C2 connector on the 16i RAID adapter installed in PCIe slot 6.
Cable 8 (NVMe signal cable for front backplane 2): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
Cable 9 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Figure 60. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe
drives, one 24i RAID adapter, and two NVMe switch adapters
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (NVMe signal cable for front backplane 2): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
Cable 5 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 6 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 7 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C4 and C5 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 9 (NVMe signal cable for front backplane 3): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 1.
Figure 61. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe
drives, three 8i RAID adapters, and two NVMe switch adapters
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 5 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C0 and C1 connectors on the 8i RAID adapter installed in PCIe slot 2.
Cable 6 (NVMe signal cable for front backplane 3): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 1.
Cable 7 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C0 and C1 connectors on the 8i RAID adapter installed in PCIe slot 4.
Cable 9 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Figure 62. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe
drives, one 8i RAID adapter, one 16i RAID adapter, and two NVMe switch adapters
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (power cable for front backplane 3): power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 5 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C2 and C3 connectors on the 16i RAID adapter installed in PCIe slot 4.
Cable 6 (NVMe signal cable for front backplane 3): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 1.
Cable 7 (SAS signal cable for front backplane 2): SAS 0 and SAS 1 connectors on front backplane 2 to the C0 and C1 connectors on the 16i RAID adapter installed in PCIe slot 4.
Cable 9 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Figure 63. Cable routing for server models with sixteen 2.5-inch NVMe drives, eight SAS/SATA drives, two NVMe 810-4P
switch adapters, two NVMe 1610-4P switch adapters, and one 8i RAID adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (NVMe signal cable for front backplane 1): NVMe 0 and NVMe 1 connectors on front backplane 1 to the NVMe 2–3 and NVMe 0–1 connectors on the system board.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 2 and NVMe 3 connectors on front backplane 1 to the C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 6.
Cable 4 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (NVMe signal cable for front backplane 2): NVMe 0 connector on front backplane 2 to the C0 and C1 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 4.
Cable 6 (NVMe signal cable for front backplane 2): NVMe 1 connector on front backplane 2 to the C0 and C1 connectors on the NVMe 810-4P switch adapter installed in the RAID adapter slot on the system board.
Cable 7 (NVMe signal cable for front backplane 2): NVMe 2 and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 1.
Cable 9 (SAS signal cable for front backplane 3): SAS 0 and SAS 1 connectors on front backplane 3 to the C0 and C1 connectors on the 8i RAID adapter installed in PCIe slot 3.
Figure 64. Cable routing for server models with twenty-four 2.5-inch NVMe drives, four NVMe 810-4P switch adapters,
and one NVMe 1610-8P switch adapter
Cable 1 (power cable for front backplane 1): power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (NVMe signal cable for front backplane 1): NVMe 0 and NVMe 1 connectors on front backplane 1 to the C0, C1, C2, and C3 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 1): NVMe 2 and NVMe 3 connectors on front backplane 1 to the C0 and C1 connectors on the NVMe 1610-8P switch adapter installed in PCIe slot 1.
Cable 4 (power cable for front backplane 2): power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (NVMe signal cable for front backplane 2): NVMe 0 and NVMe 1 connectors on front backplane 2 to the C2 and C3 connectors on the NVMe 1610-8P switch adapter installed in PCIe slot 1.
Cable 6 (NVMe signal cable for front backplane 2): NVMe 2 and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 4.
Cable 7 (NVMe signal cable for the onboard NVMe connectors): NVMe 0–1 and NVMe 2–3 connectors on the system board to the C0 and C1 connectors on riser card 1.
Cable 9 (NVMe signal cable for front backplane 3): NVMe 0 and NVMe 1 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe 810-4P switch adapter installed in the RAID adapter slot on the system board.
Cable 10 (NVMe signal cable for front backplane 3): NVMe 2 and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 2.
Server model: eight 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, two 8i RAID
adapters
Note: The cable routing illustration assumes that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and the 8i RAID adapter in PCIe slot 4 might not be available on your server.
Figure 65. Cable routing for server models with eight 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
two 8i RAID adapters
Cable 2 (SAS signal cable): SAS 0 and SAS 1 connectors on the backplane to the C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C0 connector on the 8i RAID adapter installed in PCIe slot 4.
Figure 66. Cable routing for server models with eight 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 16i RAID adapter
Cable 1 (power cable): power connector on the backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): SAS 0 and SAS 1 connectors on the backplane to the C0 and C1 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 3 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C2 connector on the 16i RAID adapter installed on the RAID adapter slot.
Server model: twelve 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one 16i RAID
adapter
Note: The cable routing illustration assumes that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly might not be available on your server.
Figure 67. Cable routing for server models with twelve 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 16i RAID adapter
Cable 1 (power cable): Power 1 connector on the front backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): SAS 0, SAS 1, and SAS 2 connectors on the backplane to the C0, C1, and C2 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 4 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C3 connector on the 16i RAID adapter installed on the RAID adapter slot.
Figure 68. Cable routing for server models with twelve 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one
8i RAID adapter, and one 16i RAID adapter
Cable 1 (power cable): Power 1 connector on the front backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): SAS 0, SAS 1, and SAS 2 connectors on the backplane to the C0, C1, and C2 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 4 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C0 connector on the 8i RAID adapter installed in PCIe slot 4.
Figure 69. Cable routing for server models with eight 3.5-inch SAS/SATA drives, four 3.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one 16i RAID adapter
Cable 1 (power cable): Power 1 connector on the front backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): SAS 0, SAS 1, and SAS 2 connectors on the backplane to the C0, C1, and C2 connectors on the 16i RAID adapter installed on the RAID adapter slot.
Cable 3 (power cable): Power 2 connector on the front backplane to backplane power connector 2 on the system board.
Cable 4 (NVMe signal cable): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the front backplane to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 5 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C3 connector on the 16i RAID adapter installed on the RAID adapter slot.
Figure 70. Cable routing for server models with eight 3.5-inch SAS/SATA drives, four 3.5-inch NVMe drives, and one 8i
RAID adapter
Cable 1 (power cable): Power 1 connector on the backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): SAS 0 and SAS 1 connectors on the backplane to the C0 and C1 connectors on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (power cable): Power 2 connector on the backplane to backplane power connector 2 on the system board.
Cable 4 (NVMe signal cable): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the backplane to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Figure 71. Cable routing for server models with eight 3.5-inch SAS/SATA drives, four 3.5-inch NVMe drives, the rear hot-
swap drive assembly, and one 8i RAID adapter
Cable 1 (power cable): Power 1 connector on the backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): SAS 0 connector on the backplane to the C0 connector on the 8i RAID adapter installed on the RAID adapter slot.
Cable 3 (SAS signal cable for the rear hot-swap drive assembly): signal connector on the rear hot-swap drive assembly to the C1 connector on the 8i RAID adapter installed on the RAID adapter slot.
Cable 4 (NVMe signal cable): NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the backplane to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
For more information about ordering the parts shown in Figure 72 “Server components” on page 122:
http://datacentersupport.lenovo.com/us/en/products/servers/thinksystem/sr650/7x05/parts
Note: Depending on the model, your server might look slightly different from the illustration.
The parts listed in the following table are identified as one of the following:
Index / Description / Tier 1 CRU / Tier 2 CRU / FRU / Consumable and structural parts
1 Top cover √
2 RAID adapter √
3 LOM-adapter air baffle √
4 LOM adapter √
5 Memory module (DCPMM might look slightly different from the illustration) √
6 Heat sink √
7 Processor √
8 TCM/TPM adapter (for Chinese Mainland only) √
9 System board √
10 P4 GPU air baffle √
11 FHHL V100 GPU air baffle √
12 Fan √
13 Fan cage √
14 RAID super capacitor module √
15 Standard air baffle √
16 Serial port module √
17 Backplane, eight 2.5-inch hot-swap drives √
18 Backplane, twelve 3.5-inch hot-swap drives √
19 Backplane, eight 3.5-inch hot-swap drives √
20 Right rack latch, with front I/O assembly √
To view the power cords that are available for the server:
1. Go to:
http://dcsc.lenovo.com/#/
2. Click Preconfigured Model or Configure to order.
3. Enter the machine type and model for your server to display the configurator page.
4. Click Power ➙ Power Cables to see all line cords.
Notes:
• For your safety, a power cord with a grounded attachment plug is provided to use with this product. To
avoid electrical shock, always use the power cord and plug with a properly grounded outlet.
• Power cords for this product that are used in the United States and Canada are listed by Underwriters Laboratories (UL) and certified by the Canadian Standards Association (CSA).
• For units intended to be operated at 115 volts: Use a UL-listed and CSA-certified cord set consisting of a
minimum 18 AWG, Type SVT or SJT, three-conductor cord, a maximum of 15 feet in length and a parallel
blade, grounding-type attachment plug rated 15 amperes, 125 volts.
• For units intended to be operated at 230 volts (U.S. use): Use a UL-listed and CSA-certified cord set
consisting of a minimum 18 AWG, Type SVT or SJT, three-conductor cord, a maximum of 15 feet in length
and a tandem blade, grounding-type attachment plug rated 15 amperes, 250 volts.
• For units intended to be operated at 230 volts (outside the U.S.): Use a cord set with a grounding-type
attachment plug. The cord set should have the appropriate safety approvals for the country in which the
equipment will be installed.
• Power cords for a specific country or region are usually available only in that country or region.
The server setup procedure varies depending on the configuration of the server when it was delivered. In
some cases, the server is fully configured and you just need to connect the server to the network and an ac
power source, and then you can power on the server. In other cases, the server needs to have hardware
options installed, requires hardware and firmware configuration, and requires an operating system to be
installed.
The following steps describe the general procedure for setting up a server:
1. Unpack the server package. See “Server package contents” on page 3.
2. Set up the server hardware.
a. Install any required hardware or server options. See the related topics in “Install server hardware
options” on page 131.
b. If necessary, install the server into a standard rack cabinet by using the rail kit shipped with the server. See the Rack Installation Guide that comes with the optional rail kit.
c. Connect the Ethernet cables and power cords to the server. See “Rear view” on page 24 to locate
the connectors. See “Cable the server” on page 195 for cabling best practices.
d. Power on the server. See “Turn on the server” on page 195.
Note: You can access the management processor interface to configure the system without powering on the server. Whenever the server is connected to power, the management processor interface is available. For details about accessing the management processor, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/dw1lm_c_chapter2_openingandusing.html
(A minimal programmatic example is sketched after this procedure.)
e. Validate that the server hardware was set up successfully. See “Validate server setup” on page 195.
3. Configure the system.
a. Connect the Lenovo XClarity Controller to the management network. See “Set the network
connection for the Lenovo XClarity Controller” on page 197.
b. Update the firmware for the server, if necessary. See “Update the firmware” on page 198.
c. Configure the firmware for the server. See “Configure the firmware” on page 201.
The following information is available for RAID configuration:
• https://lenovopress.com/lp0578-lenovo-raid-introduction
• https://lenovopress.com/lp0579-lenovo-raid-management-tools-and-resources
d. Install the operating system. See “Deploy the operating system” on page 207.
e. Back up the server configuration. See “Back up the server configuration” on page 208.
f. Install the applications and programs for which the server is intended to be used.
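The management processor interface mentioned in step 2d can also be reached programmatically: the Lenovo XClarity Controller implements the DMTF Redfish REST API. The following is a minimal sketch of a reachability check; the host name and credentials are placeholders rather than values from this guide.

```python
# Minimal Redfish reachability check against an XClarity Controller.
# XCC_HOST and the account below are placeholders; replace with your own.
import requests

XCC_HOST = "https://xcc.example.com"

resp = requests.get(
    f"{XCC_HOST}/redfish/v1/Systems",
    auth=("USERID", "PASSW0RD"),  # placeholder local account
    verify=False,                 # lab use only; supply a CA bundle in production
    timeout=10,
)
resp.raise_for_status()
for member in resp.json().get("Members", []):
    print(member["@odata.id"])    # for example, /redfish/v1/Systems/1
```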
Attention: Prevent exposure to static electricity, which might lead to system halt and loss of data, by
keeping static-sensitive components in their static-protective packages until installation, and handling these
devices with an electrostatic-discharge wrist strap or other grounding system.
• Read the safety information and guidelines to ensure that you work safely.
– A complete list of safety information for all products is available at:
http://thinksystem.lenovofiles.com/help/topic/safety_documentation/pdf_files.html
– The following guidelines are available as well: “Handling static-sensitive devices” on page 130 and
“Working inside the server with the power on” on page 130.
• Make sure the components you are installing are supported by the server. For a list of supported optional
components for the server, see https://static.lenovo.com/us/en/serverproven/index.shtml.
• When you install a new server, download and apply the latest firmware. This will help ensure that any
known issues are addressed, and that your server is ready to work with optimal performance. Go to
ThinkSystem SR650 Drivers and Software to download firmware updates for your server.
Important: Some cluster solutions require specific code levels or coordinated code updates. If the
component is part of a cluster solution, verify that the latest level of code is supported for the cluster
solution before you update the code.
• It is good practice to make sure that the server is working correctly before you install an optional
component.
• Keep the working area clean, and place removed components on a flat and smooth surface that does not
shake or tilt.
• Do not attempt to lift an object that might be too heavy for you. If you have to lift a heavy object, read the
following precautions carefully:
– Make sure that you can stand steadily without slipping.
– Distribute the weight of the object equally between your feet.
– Use a slow lifting force. Never move suddenly or twist when you lift a heavy object.
– To avoid straining the muscles in your back, lift by standing or by pushing up with your leg muscles.
• Back up all important data before you make changes related to the disk drives.
• Have a small flat-blade screwdriver, a small Phillips screwdriver, and a T8 Torx screwdriver available.
• To view the error LEDs on the system board and internal components, leave the power on.
• You do not have to turn off the server to remove or install hot-swap power supplies, hot-swap fans, or hot-
plug USB devices. However, you must turn off the server before you perform any steps that involve
removing or installing adapter cables, and you must disconnect the power source from the server before
you perform any steps that involve removing or installing a riser card.
• Blue on a component indicates touch points, where you can grip to remove a component from or install it
in the server, open or close a latch, and so on.
• The red strip on a drive, adjacent to the release latch, indicates that the drive can be hot-swapped if the server and operating system support hot-swap capability. This means that you can remove or install the drive while the server is still running.
Note: The product is not suitable for use at visual display workplaces according to §2 of the Workplace
Regulations.
CAUTION:
This equipment must be installed or serviced by trained personnel, as defined by the NEC, IEC 62368-1, and IEC 60950-1, the standards for Safety of Electronic Equipment within the Field of Audio/Video, Information Technology and Communication Technology. Lenovo assumes you are qualified in the servicing of equipment and trained in recognizing hazardous energy levels in products. Access to the equipment is by the use of a tool, lock and key, or other means of security, and is controlled by the authority responsible for the location.
Important: Electrical grounding of the server is required for operator safety and correct system function.
Proper grounding of the electrical outlet can be verified by a certified electrician.
Use the following checklist to verify that there are no potentially unsafe conditions:
1. Make sure that the power is off and the power cord is disconnected.
2. Check the power cord.
• Make sure that the third-wire ground connector is in good condition. Use a meter to measure third-
wire ground continuity for 0.1 ohm or less between the external ground pin and the frame ground.
• Make sure that the power cord is the correct type.
To view the power cords that are available for the server:
a. Go to:
http://dcsc.lenovo.com/#/
b. In the Customize a Model pane:
1) Click Select Options/Parts for a Model.
2) Enter the machine type and model for your server.
c. Click the Power tab to see all line cords.
• Make sure that the insulation is not frayed or worn.
3. Check for any obvious non-Lenovo alterations. Use good judgment as to the safety of any non-Lenovo
alterations.
4. Check inside the server for any obvious unsafe conditions, such as metal filings, contamination, water or
other liquid, or signs of fire or smoke damage.
5. Check for worn, frayed, or pinched cables.
6. Make sure that the power-supply cover fasteners (screws or rivets) have not been removed or tampered
with.
Attention: The server might halt and data loss might occur when internal server components are exposed to static electricity. To avoid this potential problem, always use an electrostatic-discharge wrist strap or other grounding systems when working inside the server with the power on.
• Avoid loose-fitting clothing, particularly around your forearms. Button or roll up long sleeves before
working inside the server.
• Prevent your necktie, scarf, badge rope, or long hair from dangling into the server.
• Remove jewelry, such as bracelets, necklaces, rings, cuff links, and wrist watches.
• Remove items from your shirt pocket, such as pens and pencils, in case they fall into the server as you
lean over it.
• Avoid dropping any metallic objects, such as paper clips, hairpins, and screws, into the server.
Attention: Prevent exposure to static electricity, which might lead to system halt and loss of data, by
keeping static-sensitive components in their static-protective packages until installation, and handling these
devices with an electrostatic-discharge wrist strap or other grounding system.
Attention: To ensure the components you install work correctly without problems, read the following
precautions carefully.
• Make sure the components you are installing are supported by the server. For a list of supported optional
components for the server, see https://static.lenovo.com/us/en/serverproven/index.shtml.
• Always download and apply the latest firmware. This will help ensure that any known issues are
addressed, and that your server is ready to work with optimal performance. Go to ThinkSystem SR650
Drivers and Software to download firmware updates for your server.
• It is good practice to make sure that the server is working correctly before you install an optional
component.
• Follow the installation procedures in this section and use appropriate tools. Incorrectly installed
components can cause system failure from damaged pins, damaged connectors, loose cabling, or loose
components.
Read the "Installation Guidelines" on page 128.
Step 2. Press the release latch 1 and pivot the security bezel outward to remove it from the chassis.
Attention: Before you ship the rack with the server installed, reinstall and lock the security bezel
into place.
S033
S014
CAUTION:
Hazardous voltage, current, and energy levels might be present. Only a qualified service technician is
authorized to remove the covers where the label is attached.
Step 1. Use a screwdriver to turn the cover lock to the unlocked position as shown.
Step 2. Press the release button on the cover latch and then fully open the cover latch.
Attention:
• Handle the top cover carefully. Dropping the top cover with the cover latch open might damage
the cover latch.
• For proper cooling and airflow, install the top cover before you turn on the server. Operating the
server with the top cover removed might damage server components.
S033
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted with
metal, which might result in spattered metal, burns, or both.
S017
CAUTION:
Hazardous moving fan blades nearby. Keep fingers and other body parts away.
Before removing the air baffle, if there is a RAID super capacitor module installed on top of the air baffle,
remove the RAID super capacitor module first.
Attention: For proper cooling and airflow, install the air baffle before you turn on the server.
Operating the server with the air baffle removed might damage server components.
Step 1. Rotate the levers of the system fan cage to the rear of the server.
Step 2. Lift the system fan cage straight up and out of the chassis.
After removing the system fan cage, begin installing any options that you have purchased.
Note: If you are installing multiple options relating to the system board, the PHM installation should be
performed first.
Attention:
• Intel Xeon SP Gen 2 is supported on the system board with part number 01PE847. If you use the system board with part number 01GV275, 01PE247, or 01PE934, update your system firmware to the latest level before installing an Intel Xeon SP Gen 2 processor. Otherwise, the system cannot be powered on.
• Each processor socket must always contain a cover or a PHM. When removing or installing a PHM,
protect empty processor sockets with a cover.
• Do not touch the processor socket or processor contacts. Processor-socket contacts are very fragile and
easily damaged. Contaminants on the processor contacts, such as oil from your skin, can cause
connection failures.
• Remove and install only one PHM at a time. If the system board supports multiple processors, install the
PHMs starting with the first processor socket.
• Do not allow the thermal grease on the processor or heat sink to come in contact with anything. Contact
with any surface can compromise the thermal grease, rendering it ineffective. Thermal grease can damage
components, such as electrical connectors in the processor socket. Do not remove the grease cover from
a heat sink until you are instructed to do so.
• To ensure the best performance, check the manufacturing date on the new heat sink and make sure it is not more than two years old. If it is, wipe off the existing thermal grease and apply new grease for optimal thermal performance.
Notes:
• PHMs are keyed for the socket where they can be installed and for their orientation in the socket.
• See https://static.lenovo.com/us/en/serverproven/index.shtml for a list of processors supported for your
server. All processors on the system board must have the same speed, number of cores, and frequency.
• Before you install a new PHM or replacement processor, update your system firmware to the latest level.
See “Update the firmware” on page 198.
• Installing an additional PHM can change the memory requirements for your system. See “Memory module
installation rules” on page 142 for a list of microprocessor-to-memory relationships.
• Optional devices available for your system might have specific processor requirements. See the
documentation that comes with the optional device for information.
• The PHM for your system might be different than the PHM shown in the illustrations.
• Intel Xeon 6137, 6242R, 6246R, 6248R, 6250, 6256, and 6258R processors are supported only when the following requirements are met (see the sketch after these notes):
– The server chassis is the twenty-four 2.5-inch-bay chassis.
– The operating temperature is equal to or less than 30°C.
– Up to eight drives are installed in drive bays 8–15.
• Intel Xeon 6144, 6146, 8160T, 6126T, 6244, and 6240Y processors, and processors with a TDP of 200 watts or 205 watts (excluding 6137, 6242R, 6246R, 6248R, 6250, 6256, and 6258R), are supported only when the following requirements are met:
– The server chassis is the twenty-four 2.5-inch-bay chassis.
– Up to eight drives are installed in drive bays 8–15 if the operating temperature is equal to or less than 35°C, or up to sixteen drives are installed in drive bays 0–15 if the operating temperature is equal to or less than 30°C.
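The two notes above amount to a small decision rule. The following is a minimal sketch of that rule under simplified inputs (drive counts per bay range and a single ambient temperature); the function and variable names are illustrative, not from this guide.

```python
# Sketch of the processor thermal-support rules above. drives_8_15 counts
# drives installed in bays 8-15; drives_0_15 counts drives in bays 0-15.
GROUP_A = {"6137", "6242R", "6246R", "6248R", "6250", "6256", "6258R"}
GROUP_B = {"6144", "6146", "8160T", "6126T", "6244", "6240Y"}

def processor_supported(model: str, tdp_watts: int, chassis_bays: int,
                        ambient_c: float, drives_8_15: int,
                        drives_0_15: int) -> bool:
    # Group A is checked first, which also honors the "excluding ..." clause.
    if model in GROUP_A:
        return chassis_bays == 24 and ambient_c <= 30 and drives_8_15 <= 8
    if model in GROUP_B or tdp_watts in (200, 205):
        if chassis_bays != 24:
            return False
        return ((ambient_c <= 35 and drives_8_15 <= 8)
                or (ambient_c <= 30 and drives_0_15 <= 16))
    return True  # no additional restriction stated in these notes
```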
a. Align the triangular marks and guide pins on the processor socket with the PHM; then, insert
the PHM into the processor socket.
Attention: To prevent damage to components, make sure that you follow the indicated
tightening sequence.
b. Fully tighten the Torx T30 captive fasteners in the installation sequence shown on the heat-sink label. Tighten the screws until they stop; then, visually inspect to make sure that there is no gap between the screw shoulder beneath the heat sink and the microprocessor socket. (For reference, the torque required for the nuts to fully tighten is 1.4–1.6 newton-meters, 12–14 inch-pounds.)
Attention:
• Disconnect all power cords for this task.
• Memory modules are sensitive to static discharge and require special handling. In addition to the standard guidelines for handling static-sensitive devices:
– Always wear an electrostatic-discharge strap when removing or installing memory modules. Electrostatic-discharge gloves can also be used.
– Never hold two or more memory modules together, and do not let them touch each other. Do not stack memory modules directly on top of each other during storage.
– Never touch the gold memory module connector contacts or allow these contacts to touch the outside of the memory module connector housing.
– Handle memory modules with care: never bend, twist, or drop a memory module.
– Do not use any metal tools (such as jigs or clamps) to handle the memory modules, because the rigid metal may damage the memory modules.
– Do not insert memory modules while holding packages or passive components; the high insertion force can crack packages or detach passive components.
Note: Ensure that you observe the installation rules and sequence in “Memory module installation
rules” on page 142.
3. If you are going to install a DCPMM for the first time, refer to “DC Persistent Memory Module (DCPMM)
setup” on page 140.
Note: A DCPMM module looks slightly different from a DRAM DIMM in the illustration, but the
installation method is the same.
Step 1. Open the retaining clips on each end of the memory module slot.
Attention: To avoid breaking the retaining clips or damaging the memory module slots, open and
close the clips gently.
Step 2. Align the memory module with the slot, and gently place the memory module on the slot with both
hands.
Step 3. Firmly press both ends of the memory module straight down into the slot until the retaining clips
snap into the locked position.
Note: If there is a gap between the memory module and the retaining clips, the memory module
has not been correctly inserted. In this case, open the retaining clips, remove the memory module,
and then reinsert it.
Complete the following steps to finish system setup to support DCPMM, and install the memory modules
according to the designated combination.
1. Update the system firmware to the latest version that supports DCPMM (see “Update the firmware” on
page 198).
2. Make sure to meet all the following requirements before installing DCPMMs.
• All the DCPMMs that are installed must be of the same part number.
Complete the following steps to finish system setup to support DCPMM, and install the memory modules
according to the designated combination.
1. Update the system firmware to the latest version that supports DCPMM (see “Update the firmware” on
page 198).
2. Consider the following DCPMM requirements before acquiring new DCPMM units.
• All the DCPMMs that are installed must be of the same part number.
• All the DRAM DIMMs that are installed must be of the same type, rank, and capacity, with a minimum capacity of 16 GB. It is recommended to use Lenovo DRAM DIMMs of the same part number. (These requirements are illustrated in the sketch after this procedure.)
3. Refer to “DCPMM and DRAM DIMM installation order” on page 147 to determine the new configuration,
and acquire memory modules accordingly.
4. If the DCPMMs are in Memory Mode and will stay in Memory Mode after new units are installed, follow
the combination in “Memory Mode” on page 151 to install the new modules in the correct slots.
Otherwise, go to the next step.
5. Make sure to back up the stored data.
6. If the App Direct capacity is interleaved:
a. Delete all the created namespaces and file systems in the operating system.
b. Perform secure erase on all the DCPMMs that are installed. Go to Intel Optane DCPMMs ➙
Security ➙ Press to Secure Erase to perform secure erase.
Note: If one or more DCPMMs are secured with passphrase, make sure security of every unit is
disabled before performing secure erase. In case the passphrase is lost or forgotten, contact Lenovo
service.
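As a quick way to double-check a planned memory inventory against the requirements in step 2, the sketch below validates a list of modules. The Module record and its field names are illustrative assumptions, not a Lenovo inventory format.

```python
# Sketch of the DCPMM/DRAM compatibility checks described above.
from dataclasses import dataclass

@dataclass
class Module:
    kind: str          # "DCPMM" or "DRAM"
    part_number: str
    dimm_type: str     # for example, "RDIMM"
    rank: int
    capacity_gb: int

def check_memory_config(modules):
    problems = []
    dcpmms = [m for m in modules if m.kind == "DCPMM"]
    drams = [m for m in modules if m.kind == "DRAM"]
    if len({m.part_number for m in dcpmms}) > 1:
        problems.append("All DCPMMs must have the same part number.")
    if len({(m.dimm_type, m.rank, m.capacity_gb) for m in drams}) > 1:
        problems.append("All DRAM DIMMs must be the same type, rank, and capacity.")
    if any(m.capacity_gb < 16 for m in drams):
        problems.append("DRAM DIMMs must be at least 16 GB.")
    return problems
```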
Your server has 24 memory module slots. It supports up to 12 memory modules when one processor is installed, and up to 24 memory modules when two processors are installed.
The following illustration helps you to locate the memory module slots on the system board.
Note: It is recommended to install memory modules with the same rank in each channel.
Independent mode
Independent mode provides high performance memory capability. You can populate all channels with no
matching requirements. Individual channels can run at different memory module timings, but all channels
must run at the same interface frequency.
Notes:
• All memory modules to be installed must be the same type.
• All Performance+ DIMMs in the server must be of the same type, rank, and capacity (the same Lenovo
part number) to operate at 2933 MHz in the configurations with two DIMMs per channel. Performance+
DIMMs cannot be mixed with other DIMMs.
• When you install memory modules with the same rank and different capacities, install the memory module that has the highest capacity first.
The following table shows the memory module population sequence for independent mode when only one
processor (Processor 1) is installed.
Notes:
• If there are three identical memory modules to be installed for Processor 1, and the three memory
modules have the same Lenovo part number, move the memory module to be installed in slot 8 to slot 1.
• If there are ten identical memory modules to be installed for Processor 1, and the ten memory modules
have the same Lenovo part number, move the memory module to be installed in slot 6 to slot 12.
The following table shows the memory module population sequence for independent mode when two
processors (Processor 1 and Processor 2) are installed.
Notes:
• If there are three identical memory modules to be installed for Processor 1, and the three memory
modules have the same Lenovo part number, move the memory module to be installed in slot 8 to slot 1.
• If there are three identical memory modules to be installed for Processor 2, and the three memory
modules have the same Lenovo part number, move the memory module to be installed in slot 20 to slot
13.
• If there are ten identical memory modules to be installed for Processor 1, and the ten memory modules have the same Lenovo part number, move the memory module to be installed in slot 6 to slot 12.
• If there are ten identical memory modules to be installed for Processor 2, and the ten memory modules have the same Lenovo part number, move the memory module to be installed in slot 18 to slot 24.
These adjustments are illustrated in the sketch after these notes.
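The slot adjustments in the notes above and in the one-processor notes earlier can be expressed as a small lookup. In the sketch below, base_sequence stands in for the population sequences from the tables referenced by this section (the tables themselves are not reproduced here); only the four documented swaps come from the notes.

```python
# Sketch of the same-part-number slot adjustments for independent mode.
SWAPS = {
    (1, 3): (8, 1),     # processor 1, three identical modules: slot 8 -> slot 1
    (1, 10): (6, 12),   # processor 1, ten identical modules: slot 6 -> slot 12
    (2, 3): (20, 13),   # processor 2, three identical modules: slot 20 -> slot 13
    (2, 10): (18, 24),  # processor 2, ten identical modules: slot 18 -> slot 24
}

def adjusted_slots(base_sequence, processor, identical_part_number):
    """Apply the documented slot swaps to a base population sequence."""
    slots = list(base_sequence)
    if not identical_part_number:
        return slots
    swap = SWAPS.get((processor, len(slots)))
    if swap is None:
        return slots
    old_slot, new_slot = swap
    return [new_slot if s == old_slot else s for s in slots]
```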
Mirroring mode
In mirroring mode, each memory module in a pair must be identical in size and architecture. The channels are
grouped in pairs with each channel receiving the same data. One channel is used as a backup of the other,
which provides redundancy.
Notes:
• All memory modules to be installed must be the same type with the same capacity, frequency, voltage,
and ranks.
• All Performance+ DIMMs in the server must be of the same type, rank, and capacity (the same Lenovo
part number) to operate at 2933 MHz in the configurations with two DIMMs per channel. Performance+
DIMMs cannot be mixed with other DIMMs.
The following table shows the memory module population sequence for mirroring mode when only one
processor (Processor 1) is installed.
Rank sparing mode
Notes:
• All memory modules to be installed must be the same type with the same capacity, frequency, voltage, and ranks.
• All Performance+ DIMMs in the server must be of the same type, rank, and capacity (the same Lenovo part number) to operate at 2933 MHz in the configurations with two DIMMs per channel. Performance+ DIMMs cannot be mixed with other DIMMs.
• If the installed memory modules are single-rank, follow the installation rules listed in the following tables. If they have more than one rank, follow the installation rules for independent mode.
The following table shows the memory module population sequence for rank sparing mode when only one
processor (Processor 1) is installed.
The following table shows the memory module population sequence for rank sparing mode when two
processors (Processor 1 and Processor 2) are installed.
Notes:
• Before installing DCPMMs and DRAM DIMMs, refer to “DC Persistent Memory Module (DCPMM) setup”
on page 140 and make sure to meet all the requirements.
• To verify whether the presently installed processors support DCPMMs, examine the four digits in the processor description. Only processors whose descriptions meet both of the following requirements support DCPMMs (this rule is illustrated in the sketch after this list):
– The first digit is 5 or a larger number.
– The second digit is 2.
Note: The only exception to this rule is Intel Xeon Silver 4215, which also supports DCPMM.
• DCPMMs are supported only by Intel Xeon SP Gen 2. For a list of supported processors and memory
modules, see http://www.lenovo.com/us/en/serverproven/
• When you install two or more DCPMMs, all DCPMMs must have the same Lenovo part number.
• All DRAM memory modules installed must have the same Lenovo part number.
• 16 GB RDIMMs come in two different types: 16 GB 1Rx4 and 16 GB 2Rx8. The part numbers of the two
types are different.
• Supported memory capacity range varies with the following types of DCPMMs.
– Large memory tier (L): The processors with L after the four digits (for example: Intel Xeon 5215 L)
– Medium memory tier (M): The processors with M after the four digits (for example: Intel Xeon Platinum
8280 M)
– Other: Other processors that support DCPMMs (for example: Intel Xeon Gold 5222)
In addition, you can take advantage of a memory configurator, which is available at the following site:
http://1config.lenovo.com/#/memory_configuration
The following illustration helps you to locate the memory module slots on the system board.
Note: Before installing DCPMM, refer to “Memory configuration” on page 202 and “Configure DC Persistent
Memory Module (DCPMM)” on page 202 for the requirements.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
D: Only DRAM DIMM can be installed on the corresponding DIMM slots.

Configuration            Processor 1 (DIMM slots 12 to 1)
1 DCPMM and 6 DIMMs      D D D P D D D
2 DCPMMs and 4 DIMMs     P D D D D P
2 DCPMMs and 6 DIMMs     D D D P P D D D
2 DCPMMs and 8 DIMMs     P D D D D D D D D P
4 DCPMMs and 6 DIMMs     D D P D P P D P D D
6 DCPMMs and 6 DIMMs     D P D P D P P D P D P D
Table 16. Supported DCPMM capacity in App Direct Mode with one processor

Total DCPMMs  Total DIMMs  Processor family  128 GB DCPMM  256 GB DCPMM  512 GB DCPMM
1             6            L                 √             √             √
1             6            M                 √             √             √
1             6            Other             √             √             √2
2             4            L                 √             √             √
2             4            M                 √             √             √
2             4            Other             √             √             –
2             6            L                 √             √             √
2             6            M                 √             √             √
2             6            Other             √             √2            –
2             8            L                 √             √             √
2             8            M                 √             √             √
2             8            Other             √2            √2            –
4             6            L                 √             √             √
4             6            M                 √             √             –
4             6            Other             √2            –             –
6             6            L                 √             √             √
6             6            M                 √             √2            –
6             6            Other             √1            –             –

Notes:
1. Supported DIMM capacity is up to 32 GB.
2. Supported DIMM capacity is up to 64 GB.
Table 18. Supported DCPMM capacity in App Direct Mode with two processors

Total DCPMMs  Total DIMMs  Processor family  128 GB DCPMM  256 GB DCPMM  512 GB DCPMM
1             12           L                 √             √             √
1             12           M                 √             √             √
1             12           Other             √             √             √2
2             12           L                 √             √             √
2             12           M                 √             √             √
2             12           Other             √             √             √2
4             8            L                 √             √             √
4             8            M                 √             √             √
4             8            Other             √             √             –
4             12           L                 √             √             √
4             12           M                 √             √             √
4             12           Other             √             √2            –
4             16           L                 √             √             √
4             16           M                 √             √             √
4             16           Other             √2            √2            –
8             12           L                 √             √             √
8             12           M                 √             √             –
8             12           Other             √2            –             –
12            12           L                 √             √             √
12            12           M                 √             √2            –
12            12           Other             √1            –             –

Notes:
1. Supported DIMM capacity is up to 32 GB.
2. Supported DIMM capacity is up to 64 GB.
Memory Mode
In this mode, DCPMMs act as volatile system memory, while DRAM DIMMs act as cache.
Note: Before installing DCPMM, refer to “Memory configuration” on page 202 and “Configure DC Persistent
Memory Module (DCPMM)” on page 202 for the requirements.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Configuration            Processor 1 (DIMM slots 12 to 1)
2 DCPMMs and 4 DIMMs     P D D D D P
2 DCPMMs and 6 DIMMs     D D D P P D D D
4 DCPMMs and 6 DIMMs     D D P D P P D P D D
6 DCPMMs and 6 DIMMs     D P D P D P P D P D P D
Table 20. Supported DCPMM capacity in Memory Mode with one processor

Total DCPMMs  Total DIMMs  Processor family  128 GB DCPMM  256 GB DCPMM  512 GB DCPMM
2             4            L                 √1            √2            √3
2             4            M                 √1            √2            √3
2             4            Other             √1            √2            –
2             6            L                 √1            √2            –
2             6            M                 √1            √2            –
2             6            Other             √1            –             –
4             6            L                 √1            √2            √4
4             6            M                 √1            √2            –
4             6            Other             √1            –             –
6             6            L                 √2            √3            √5
6             6            M                 √2            √3            –
6             6            Other             √2            –             –

Notes:
1. Supported DIMM capacity is 16 GB.
2. Supported DIMM capacity is 16 GB to 32 GB.
3. Supported DIMM capacity is 16 GB to 64 GB.
4. Supported DIMM capacity is 32 GB to 64 GB.
5. Supported DIMM capacity is 32 GB to 128 GB.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Configuration             Processor 2 (DIMM slots 24 to 13)   Processor 1 (DIMM slots 12 to 1)
4 DCPMMs and 8 DIMMs      P D D D D P                         P D D D D P
4 DCPMMs and 12 DIMMs     D D D P P D D D                     D D D P P D D D
8 DCPMMs and 12 DIMMs     D D P D P P D P D D                 D D P D P P D P D D
12 DCPMMs and 12 DIMMs    D P D P D P P D P D P D             D P D P D P P D P D P D
Table 22. Supported DCPMM capacity in Memory Mode with two processors

Total DCPMMs  Total DIMMs  Processor family  128 GB DCPMM  256 GB DCPMM  512 GB DCPMM
4             8            L                 √1            √2            √3
4             8            M                 √1            √2            √3
4             8            Other             √1            √2            –
4             12           L                 √1            √2            –
4             12           M                 √1            √2            –
4             12           Other             √1            –             –
8             12           L                 √1            √2            √4
8             12           M                 √1            √2            –
8             12           Other             √1            –             –
12            12           L                 √2            √3            √5
12            12           M                 √2            √3            –
12            12           Other             √2            –             –

Notes:
1. Supported DIMM capacity is 16 GB.
2. Supported DIMM capacity is 16 GB to 32 GB.
3. Supported DIMM capacity is 16 GB to 64 GB.
4. Supported DIMM capacity is 32 GB to 64 GB.
5. Supported DIMM capacity is 32 GB to 128 GB.
Mixed Memory Mode
In this mode, the selected percentage of DCPMM capacity acts as volatile system memory (with DRAM
DIMMs acting as cache), while the remaining DCPMM capacity is accessible as App Direct persistent
memory.
Note: Before installing DCPMM, refer to “Memory configuration” on page 202 and “Configure DC Persistent
Memory Module (DCPMM)” on page 202 to define the percentage of DCPMM capacity.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Configuration            Processor 1 (DIMM slots 12 to 1)
2 DCPMMs and 4 DIMMs     P D D D D P
2 DCPMMs and 6 DIMMs     D D D P P D D D
4 DCPMMs and 6 DIMMs     D D P D P P D P D D
6 DCPMMs and 6 DIMMs     D P D P D P P D P D P D
Table 24. Supported DCPMM capacity in Mixed Memory Mode with one processor

Total DCPMMs  Total DIMMs  Processor family  128 GB DCPMM  256 GB DCPMM  512 GB DCPMM
2             4            L                 √1            √2            –
2             4            M                 √1            √2            –
2             4            Other             √1            –             –
2             6            L                 √1            √2            –
2             6            M                 √1            √2            –
2             6            Other             √1            –             –
4             6            L                 √1            √2            √3
4             6            M                 √1            √2            –
4             6            Other             √1            –             –
6             6            L                 √1            √2            √3
6             6            M                 √1            √2            –
6             6            Other             √1            –             –

Notes:
1. Supported DIMM capacity is 16 GB.
2. Supported DIMM capacity is 16 GB to 32 GB.
3. Supported DIMM capacity is 16 GB to 64 GB.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Configuration             Processor 2 (DIMM slots 24 to 13)   Processor 1 (DIMM slots 12 to 1)
4 DCPMMs and 8 DIMMs      P D D D D P                         P D D D D P
4 DCPMMs and 12 DIMMs     D D D P P D D D                     D D D P P D D D
8 DCPMMs and 12 DIMMs     D D P D P P D P D D                 D D P D P P D P D D
12 DCPMMs and 12 DIMMs    D P D P D P P D P D P D             D P D P D P P D P D P D
Table 26. Supported DCPMM capacity in Mixed Memory Mode with two processors

Total DCPMMs  Total DIMMs  Processor family  128 GB DCPMM  256 GB DCPMM  512 GB DCPMM
4             8            L                 √1            √2            –
4             8            M                 √1            √2            –
4             8            Other             √1            –             –
4             12           L                 √1            √2            –
4             12           M                 √1            √2            –
4             12           Other             √1            –             –
8             12           L                 √1            √2            √3
8             12           M                 √1            √2            –
8             12           Other             √1            –             –
12            12           L                 √1            √2            √3
12            12           M                 √1            √2            –
12            12           Other             √1            –             –

Notes:
1. Supported DIMM capacity is 16 GB.
2. Supported DIMM capacity is 16 GB to 32 GB.
3. Supported DIMM capacity is 16 GB to 64 GB.
Note:
Your server supports three types of 2.5-inch-drive backplanes: SATA/SAS 8-bay backplane (eight SATA/SAS
drive bays), AnyBay 8-bay backplane (four SATA/SAS drive bays and four NVMe drive bays), and NVMe 8-
bay backplane. Depending on the backplane type and quantity, the installation location of the backplanes
varies.
• One backplane
Always install either the SATA/SAS 8-bay backplane or the AnyBay 8-bay backplane to drive bays 0–7.
• Two backplanes
– Two SATA/SAS 8-bay backplanes, two AnyBay 8-bay backplanes, or two NVMe 8-bay backplanes:
install the two backplanes to drive bays 0–7 and drive bays 8–15
– One SATA/SAS 8-bay backplane and one AnyBay 8-bay backplane: install the AnyBay 8-bay
backplane to drive bays 0–7; install the SATA/SAS 8-bay backplane to drive bays 8–15
Before installing the 2.5-inch-drive backplane, touch the static-protective package that contains the new
backplane to any unpainted surface on the outside of the server. Then, take the new backplane out of the
package and place it on a static-protective surface.
Note: Depending on the specific type, the connectors on your backplane might look different from the
illustration in this topic.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_wn7D7XTgDS_
• Youku: http://list.youku.com/albumlist/show/id_50483444
Step 1. Determine the location of the backplanes to be installed.
Step 2. Connect the cables to the backplane.
Step 4. Apply drive bay labels based on the type of the installed backplanes. Several drive bay labels come
with each type of the supported drive backplane:
• 4–7
Apply this label to drive bays 4–7 if a SATA/SAS 8-bay backplane is installed to drive bays 0–7.
• 12–15
Apply this label to drive bays 12–15 if a SATA/SAS 8-bay backplane is installed to drive bays 8–
15.
• 4–7 (NVMe)
Apply this label to drive bays 4–7 if an AnyBay 8-bay backplane is installed to drive bays 0–7.
• 12–15 (NVMe)
Apply this label to drive bays 12–15 if an AnyBay 8-bay backplane is installed to drive bays 8–15.
The following illustration shows the location for applying the drive bay labels to the server models
with AnyBay 8-bay backplanes installed. The location is the same for applying the drive bay labels
to server models with SATA/SAS 8-bay backplanes installed. Ensure that the drive bay labels are
stuck in the correct location. The labels help you to locate the correct drive during problem
determination.
Figure 84. Drive bay labels for server models with AnyBay 8-bay backplanes installed
After installing the 2.5-inch-drive backplane, connect the cables to the system board. For information about
the cable routing, see “Internal cable routing” on page 33.
Notes:
• The procedure is based on the scenario in which you want to install the backplane for up to twelve 3.5-inch
drives. The procedure is similar for the backplane for up to eight 3.5-inch drives.
• If you are installing the 3.5-inch-drive backplane with expander and the 8i RAID adapter for server
models with twelve 3.5-inch-drive bays, GPUs are not supported, the maximum supported processor TDP is
165 watts, and you need to create the RAID volume to avoid disorder of the HDD sequence. In addition,
if the rear hot-swap drive is installed, the server performance might be degraded.
Before installing the 3.5-inch-drive backplane, touch the static-protective package that contains the new
backplane to any unpainted surface on the outside of the server. Then, take the new backplane out of the
package and place it on a static-protective surface.
Figure 86. Drive bay label for server models with a 12-bay backplane installed
After installing the 3.5-inch-drive backplane, connect the cables to the system board. For information about
the cable routing, see “Internal cable routing” on page 33.
Before installing the rear hot-swap drive assembly, touch the static-protective package that contains the new
rear hot-swap drive assembly to any unpainted surface on the outside of the server. Then, take the new rear
hot-swap drive assembly out of the package and place it on a static-protective surface.
Note: If you are installing the ThinkSystem SR650 Rear 3.5 HDD kit Without Fan (provided for Chinese
Mainland only), the maximum supported processor TDP is 125 watts.
To install the rear hot-swap drive assembly, complete the following steps:
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_wn7D7XTgDS_
• Youku: http://list.youku.com/albumlist/show/id_50483444
Step 2. Connect the signal cable to the rear hot-swap drive assembly and the RAID adapter. See “Internal
cable routing” on page 33.
After installing the rear hot-swap drive assembly, you can install hot-swap drives to the assembly. See
“Install a hot-swap drive” on page 191.
Ensure that you follow the installation order if you install more than one RAID adapter:
• The RAID adapter slot on the system board
• The PCIe slot 4 on the system board if the serial port module is not installed
• A PCIe slot on the riser card
To install the RAID adapter in the RAID adapter slot on the system board, complete the following steps:
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_wn7D7XTgDS_
• Youku: http://list.youku.com/albumlist/show/id_50483444
After installing the RAID adapter, connect cables to the RAID adapter. See “Internal cable routing” on page
33.
Notes:
• Some M.2 backplanes support two identical M.2 drives. When two M.2 drives are installed, align and
support both M.2 drives when sliding the retainer forward to secure the M.2 drives.
• Install the M.2 drive in slot 0 first.
1 Slot 0
2 Slot 1
To install the M.2 backplane and M.2 drive, complete the following steps:
Step 1. Insert the M.2 drive at an angle of approximately 30 degrees into the connector.
Note: If your M.2 backplane supports two M.2 drives, insert the M.2 drives into the connectors at
both sides.
Step 2. Rotate the M.2 drive down until the notch 1 catches on the lip of the retainer 2.
Attention: When sliding the retainer forward, ensure that the two nubs 3 on the retainer enter the
small holes 4 on the M.2 backplane. Once they enter the holes, you will hear a soft “click” sound.
2. Use the Lenovo XClarity Provisioning Manager to configure the RAID. For more information, see:
http://sysmgt.lenovofiles.com/help/topic/LXPM/RAID_setup.html
Before adjusting the retainer on the M.2 backplane, locate the correct keyhole that the retainer should be
installed into to accommodate the particular size of the M.2 drive you wish to install.
To adjust the retainer on the M.2 backplane, complete the following steps:
Notes:
• To install a full-height GPU or the NVIDIA P4 GPU, you need to use the GPU thermal kit. The GPU thermal
kit contains the following items:
– The large-size air baffle
– Two 1U heat sinks
– Three GPU holders
• To install the other supported low-profile GPUs, refer to “Install a PCIe adapter on the riser assembly” on
page 170.
• For information about the form factor of GPUs, refer to GPU specifications. See “Specifications” on page
5.
• Depending on the specific type, your GPU might look different from the illustrations in this topic.
Before installing the GPU thermal kit and a GPU, touch the static-protective package that contains the GPU
thermal kit and the GPU to any unpainted surface on the outside of the server. Then, take the components
out of the packages and place them on a static-protective surface.
To install a GPU with the GPU thermal kit, complete the following steps:
Note: For server models with one processor, you can install one GPU in PCIe slot 1. For server
models with two processors, you can install up to two GPUs in PCIe slot 1 and PCIe slot 5, or up to
three GPUs in PCIe slots 1, 5 and 6. For more information, see “Specifications” on page 5.
Step 5. Align the GPU with the PCIe slot on the riser card. Then, carefully press the GPU straight into the
slot until it is securely seated. See “Install a PCIe adapter on the riser assembly” on page 170.
Step 6. If a GPU power cable is required, do the following:
a. Connect one end of the power cable to the GPU power connector on the system board.
b. Connect the other end of the power cable to the GPU.
c. Route the GPU power cable properly. See “GPU cabling routing” on page 33.
Step 7. Install the riser assembly with the GPU into the chassis.
After installing a GPU with the GPU thermal kit, continue to install other PCIe adapters if necessary. See
“Install a PCIe adapter on the riser assembly” on page 170.
One processor: slot 1
Two processors: slots 1, 5, 6
– For server models with sixteen/twenty/twenty-four NVMe drives (with two processors installed):
One processor: slots 1, 2, 3
Two processors: slots 1, 2, 3, 5, 6
One processor: slots 7, 4, 2, 3, 1
Two processors: slots 7, 4, 2, 3, 1, 5, 6
One processor: slots 4, 2, 3, 1
Two processors: slots 4, 2, 6, 3, 5, 1
Notes:
• Depending on the specific type, your PCIe adapter and riser card for the riser assembly might look
different from the illustration in this topic.
• Use any documentation that comes with the PCIe adapter and follow those instructions in addition to the
instructions in this topic.
• Do not install PCIe adapters with small form factor (SFF) connectors in PCIe slot 6.
• ThinkSystem Mellanox ConnectX-6 HDR100 QSFP56 1-port PCIe InfiniBand adapter or ThinkSystem
Mellanox ConnectX-6 HDR100 QSFP56 2-port PCIe InfiniBand adapter is supported only when the
following requirements are met:
– The server chassis is the eight 3.5-inch-drive bays chassis, eight 2.5-inch-drive bays chassis, sixteen
2.5-inch-drive bays chassis, or twenty 2.5-inch-drive bays chassis.
– The operating temperature is equal to or less than 35°C.
To install a PCIe adapter on the riser assembly, complete the following steps:
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_wn7D7XTgDS_
• Youku: http://list.youku.com/albumlist/show/id_50483444
Step 1. Align the PCIe adapter with the PCIe slot on the riser card. Then, carefully press the PCIe adapter
straight into the slot until it is securely seated and its bracket also is secured.
Notes:
• Depending on the specific type, your PCIe adapter might look different from the illustration in this topic.
• Use any documentation that comes with the PCIe adapter and follow those instructions in addition to the
instructions in this topic.
To install a PCIe adapter on the system board, complete the following steps:
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_wn7D7XTgDS_
• Youku: http://list.youku.com/albumlist/show/id_50483444
Step 1. Position the PCIe adapter near the PCIe slot. Then, carefully press the PCIe adapter straight into
the slot until it is securely seated and its bracket also is secured by the chassis.
Step 2. Pivot the PCIe adapter retention latch to the closed position to secure the PCIe adapter in position.
After installing the PCIe adapter on the system board, connect cables to the PCIe adapter.
CAUTION:
Use a tool to remove the LOM adapter slot bracket to avoid injury.
2. Lift the LOM-adapter air baffle out of the chassis.
3. Touch the static-protective package that contains the new LOM adapter to any unpainted surface on the
outside of the server. Then, take the new LOM adapter out of the package and place it on a static-
protective surface.
Step 2. Connect the cable of the serial port module to the serial-port-module connector on the system
board. For the location of the serial-port-module connector, refer to “System board components”
on page 30.
After installing the serial port module, do one of the following to enable it according to the installed operating
system:
• For Linux operating system:
Start ipmitool and enter the following command to disable the Serial over LAN (SOL) feature:
ipmitool -I lanplus -H IP -U USERID -P PASSW0RD sol deactivate
• For Microsoft Windows operating system:
1. Start ipmitool and enter the following command to disable the SOL feature:
ipmitool -I lanplus -H IP -U USERID -P PASSW0RD sol deactivate
2. Open Windows PowerShell and enter the following command to disable the Emergency Management
Services (EMS) feature:
Bcdedit /ems no
3. Restart the server to ensure that the EMS setting takes effect.
Step 1. Align both sides of the system fan cage with the corresponding mounting posts in the chassis.
Then, press the system fan cage straight down into the chassis.
Step 2. Rotate the levers of the system fan cage to the front of the server to secure the system fan cage.
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted with
metal, which might result in spattered metal, burns, or both.
S017
CAUTION:
Hazardous moving fan blades nearby. Keep fingers and other body parts away.
Watch the procedure. A video of the installation process for U.2 24-Bay/20-Bay upgrade kit is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_wn7D7XTgDS_
• Youku: http://list.youku.com/albumlist/show/id_50483444
The following information is a summary of the PCIe switch adapters and the corresponding PCIe slots. For
information about PCIe slot location, refer to “Rear view” on page 24.
810-4P NVMe switch adapter — RAID adapter slot on the system board
810-4P NVMe switch adapter (with the 2U bracket installed) — PCIe slot 4 on the system board
1610-4P NVMe switch adapter — PCIe slot 1 on riser card 1
1610-4P NVMe switch adapter — PCIe slots 5 and 6 on riser card 2
Step 1. Install the three 2.5-inch NVMe 8-Bay backplanes. See Install the 2.5-inch-drive backplane. Then,
apply drive bay sequence labels above the drive bays on your server.
Step 2. Install the bracket on one 810-4P NVMe switch adapter. To install the bracket, align the screw
holes in the bracket with the corresponding holes in the switch adapter, and then install the screws
to secure the bracket to the switch adapter.
Step 3. Install the 810-4P NVMe switch adapter in the RAID adapter slot on the system board. See Install
the RAID adapter.
Step 4. Install the 810-4P NVMe switch adapter with 2U bracket in PCIe slot 4 on the system board. See
Install a PCIe adapter on the system board.
Step 5. Install the riser card 1 on riser 1 bracket. See Install a riser card.
Step 6. Install the 1610-4P NVMe switch adapter in PCIe slot 1 on riser card 1. See Install a PCIe adapter on
the riser assembly.
Step 7. Install the riser 1 assembly to the chassis. See Install a riser card.
Step 8. Install the riser card 2 on riser 2 bracket. See Install a riser card.
Step 9. Install one 1610-4P NVMe switch adapter in PCIe slot 5 on riser card 2. Then, install the other
1610-4P NVMe switch adapter in PCIe slot 6 on riser card 2. See Install a PCIe adapter on the riser
assembly.
Step 10. Install the riser 2 assembly to the chassis. See Install a riser card.
Step 11. Install any required hardware or server options, and then cable the server. For information about
how to connect cables for server models with twenty NVMe drives, see “Server model: twenty 2.5-
inch NVMe drives, two NVMe 810-4P switch adapters, three NVMe 1610-4P switch adapters” in
the topic Server models with twenty 2.5-inch drives.
The following information is a summary of the PCIe switch adapters and the corresponding PCIe slots. For
information about PCIe slot location, refer to “Rear view” on page 24.
810-4P NVMe switch adapter — RAID adapter slot on the system board
810-4P NVMe switch adapter (with the 2U bracket installed) — PCIe slot 4 on the system board
1610-8P NVMe switch adapter — PCIe slot 1 on riser card 1
810-4P NVMe switch adapter (with the 3U bracket installed) — PCIe slot 2 on riser card 1
810-4P NVMe switch adapter (with the 3U bracket installed) — PCIe slot 6 on riser card 2
Step 1. Install the three 2.5-inch NVMe 8-Bay backplanes. See Install the 2.5-inch-drive backplane. Then,
apply drive bay sequence labels above the drive bays on your server.
Step 2. Install three brackets on three 810-4P NVMe switch adapters. To install the bracket, align the screw
holes in the bracket with the corresponding holes in the switch adapter, and then, install the screws
to secure the bracket to the switch adapter.
Step 3. Install the 810-4P NVMe switch adapter in the RAID adapter slot on the system board. See Install
the RAID adapter.
Step 4. Install the 810-4P NVMe switch adapter with 2U bracket in PCIe slot 4 on the system board. See
Install a PCIe adapter on the system board.
Step 5. Install the riser card 1 on riser 1 bracket. See Install a riser card.
Step 6. Install the 1610-8P NVMe switch adapter in PCIe slot 1 on riser card 1. Then, install the 810-4P
NVMe switch adapter with 3U bracket in PCIe slot 2 on riser card 1. See Install a PCIe adapter on the
riser assembly.
Step 7. Install the riser 1 assembly to the chassis. See Install a riser card.
Step 8. Install the riser card 2 on riser 2 bracket. See Install a riser card.
Step 9. Install the 810-4P NVMe switch adapter with 3U bracket in PCIe slot 6 on riser card 2. See Install a
PCIe adapter on the riser assembly.
Step 10. Install the riser 2 assembly to the chassis. See Install a riser card.
Step 11. Install any required hardware or server options, and then cable the server. To connect the cable for
server models with twenty-four NVMe drives, see “Server model: twenty-four 2.5-inch NVMe
drives, four NVMe 810-4P switch adapters, one NVMe 1610-8P switch adapter” in the topic Server
models with twenty-four 2.5-inch drives.
Notes:
• Ensure that the two power supplies installed on the server have the same wattage.
• If you are replacing the existing power supply with a new power supply of different wattage, attach the
power information label that comes with this option onto the existing label near the power supply.
S035
CAUTION:
Never remove the cover on a power supply or any part that has this label attached. Hazardous voltage,
current, and energy levels are present inside any component that has this label attached. There are no
serviceable parts inside these components. If you suspect a problem with one of these parts, contact
a service technician.
S002
CAUTION:
The power-control button on the device and the power switch on the power supply do not turn off the
electrical current supplied to the device. The device also might have more than one power cord. To
remove all electrical current from the device, ensure that all power cords are disconnected from the
power source.
S001
The following tips describe the information that you must consider when you install a power supply with dc
input.
CAUTION:
• 240 V dc input (input range: 180-300 V dc) is supported in Chinese Mainland ONLY. Power supplies
with 240 V dc input do not support hot-plugging of the power cord. Before removing a power
supply with dc input, turn off the server or disconnect the dc power sources at the breaker panel or
by turning off the power source. Then, remove the power cord.
• In order for ThinkSystem products to operate error-free in either a dc or an ac electrical
environment, a TN-S earthing system that complies with the IEC 60364-1 (2005) standard must be
present or installed.
If the power supply socket does not support hot plugging under dc input, do not hot-plug the power cord;
doing so might damage the equipment and cause data loss. Equipment failures or damage caused by
incorrectly performed hot plugging are not covered by the warranty.
NEVER CONNECT AND DISCONNECT THE POWER SUPPLY CABLE AND EQUIPMENT WHILE YOUR
EQUIPMENT IS POWERED ON WITH DC SUPPLY (hot-plugging). Otherwise, you might damage the
equipment and cause data loss; damage and losses that result from incorrect operation of the equipment
are not covered by the manufacturer’s warranty.
S035
CAUTION:
Never remove the cover on a power supply or any part that has this label attached. Hazardous voltage,
current, and energy levels are present inside any component that has this label attached. There are no
serviceable parts inside these components. If you suspect a problem with one of these parts, contact
a service technician.
CAUTION:
The power-control button on the device does not turn off the electrical current supplied to the device.
The device also might have more than one connection to dc power. To remove all electrical current
from the device, ensure that all connections to dc power are disconnected at the dc power input
terminals.
Before installing a hot-swap power supply, touch the static-protective package that contains the new hot-
swap power supply to any unpainted surface on the outside of the server. Then, take the new hot-swap
power supply out of the package and place it on a static-protective surface.
S033
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted with
metal, which might result in spattered metal, burns, or both.
S017
CAUTION:
Hazardous moving fan blades nearby. Keep fingers and other body parts away.
After installing the air baffle, install any RAID super capacitor modules that you removed.
Step 1. Gently press and hold the tab on the air baffle as shown.
Step 2. Insert the RAID super capacitor module into the holder on the air baffle.
Step 3. Press down the RAID super capacitor module to install it into the holder.
After installing the RAID super capacitor module, connect the RAID super capacitor module to a RAID
adapter with the extension cable that comes with the RAID super capacitor module.
Note: Before you slide the top cover forward, ensure that all the tabs on the top cover engage the chassis
correctly. If the tabs do not engage the chassis correctly, it will be very difficult to remove the top cover later.
Step 1. Ensure that the cover latch is in the open position. Lower the top cover onto the chassis until both
sides of the top cover engage the guides on both sides of the chassis.
Step 2. Pivot the cover latch and slide the top cover to the front of the chassis at the same time until the top
cover snaps into position. Ensure that the cover latch is closed.
Step 3. Use a screwdriver to turn the cover lock to the locked position.
The following notes describe the types of drives that your server supports and other information that you
must consider when you install a drive.
• Server models with a 2.5-inch AnyBay backplane installed: up to four NVMe drives in bays 4–7
• Server models with two 2.5-inch AnyBay backplanes installed: up to eight NVMe drives in bays 4–7 and
bays 12–15
• Server models with two 2.5-inch NVMe backplanes installed: up to 16 NVMe drives in bays 0–15
• Server models with three 2.5-inch AnyBay backplanes installed: up to twelve NVMe drives in bays 4–7,
bays 12–15, and bays 20–23
• Server models with three 2.5-inch NVMe backplanes installed: up to 24 NVMe drives in bays 0–23
• Server models with a 3.5-inch AnyBay backplane installed: up to four NVMe drives in bays 8–11
Important: Ensure that you install the correct type of drives into corresponding drive bays. Drive type
information is available on the bottom of the front side of a drive.
3. Touch the static-protective package that contains the new drive to any unpainted surface on the outside
of the server. Then, take the new drive out of the package and place it on a static-protective surface.
Step 1. Ensure that the drive tray handle is in the open position. Slide the drive into the drive bay until it
snaps into position.
Step 2. Close the drive tray handle to lock the drive in place.
Step 3. Continue to install additional hot-swap drives if necessary.
Connect to power
Connect the server to power.
Connect to storage
Connect the server to any storage devices.
The server can be turned on (power LED on) in any of the following ways:
• You can press the power button.
• The server can restart automatically after a power interruption.
• The server can respond to remote power-on requests sent to the Lenovo XClarity Controller.
For information about powering off the server, see “Turn off the server” on page 195.
To place the server in a standby state (power status LED flashes once per second):
Note: The Lenovo XClarity Controller can place the server in a standby state as an automatic response to a
critical system failure.
• Start an orderly shutdown using the operating system (if supported by your operating system).
• Press the power button to start an orderly shutdown (if supported by your operating system).
• Press and hold the power button for more than 4 seconds to force a shutdown.
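If you manage the server remotely, the same power actions can be performed through the Lenovo XClarity
Controller IPMI interface. A minimal sketch using standard ipmitool commands, where IP, USERID, and
PASSW0RD are placeholders for your XCC address and credentials:

ipmitool -I lanplus -H IP -U USERID -P PASSW0RD chassis power status  # query the current power state
ipmitool -I lanplus -H IP -U USERID -P PASSW0RD chassis power on      # remote power-on
ipmitool -I lanplus -H IP -U USERID -P PASSW0RD chassis power soft    # request an orderly OS shutdown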
The following methods are available to set the network connection for the Lenovo XClarity Controller if you
are not using DHCP:
• If a monitor is attached to the server, you can use Lenovo XClarity Controller to set the network
connection.
• If no monitor is attached to the server, you can set the network connection through the Lenovo XClarity
Controller interface. Connect an Ethernet cable from your laptop to the Lenovo XClarity Controller connector,
which is located at the rear of the server. For the location of the Lenovo XClarity Controller connector, see
“Rear view” on page 24.
Note: Make sure that you modify the IP settings on the laptop so that it is on the same network as the
server default settings.
The default IPv4 address and the IPv6 Link Local Address (LLA) are provided on the Lenovo XClarity
Controller Network Access label that is affixed to the Pull Out Information Tab.
• If you are using the Lenovo XClarity Administrator Mobile app from a mobile device, you can connect to
the Lenovo XClarity Controller through the Lenovo XClarity Controller USB connector on the front of the
server.
Note: The Lenovo XClarity Controller USB connector mode must be set to manage the Lenovo XClarity
Controller (instead of normal USB mode). To switch from normal mode to Lenovo XClarity Controller
management mode, hold the blue ID button on the front panel for at least 3 seconds until its LED flashes
slowly (once every couple of seconds).
To connect using the Lenovo XClarity Administrator Mobile app:
1. Connect the USB cable of your mobile device to the Lenovo XClarity Administrator USB connector on
the front panel.
2. On your mobile device, enable USB tethering.
3. On your mobile device, launch the Lenovo XClarity Administrator mobile app.
4. If automatic discovery is disabled, click Discovery on the USB Discovery page to connect to the
Lenovo XClarity Controller.
For more information about using the Lenovo XClarity Administrator Mobile app, see:
http://sysmgt.lenovofiles.com/help/index.jsp?topic=%2Fcom.lenovo.lxca.doc%2Flxca_usemobileapp.html
Important: The Lenovo XClarity Controller is set initially with a user name of USERID and password of
PASSW0RD (with a zero, not the letter O). This default user setting has Supervisor access. For enhanced
security, change this user name and password during your initial configuration.
• If you choose a static IP connection, make sure that you specify an IPv4 or IPv6 address that is
available on the network.
• If you choose a DHCP connection, make sure that the MAC address for the server has been
configured in the DHCP server.
Step 4. Click OK to continue starting the server.
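Alternatively, once an operating system with the IPMI driver is running on the server, the XCC network
settings can be viewed and changed in-band with standard ipmitool commands. A minimal sketch, assuming
the XCC uses LAN channel 1; the addresses shown are examples and must be valid on your network:

ipmitool lan set 1 ipsrc static             # switch from DHCP to a static address
ipmitool lan set 1 ipaddr 192.168.70.125    # example IPv4 address
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.70.1
ipmitool lan print 1                        # verify the resulting settings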
You can use the tools listed here to install the most current firmware for your server and the devices that are
installed in the server.
Note: Lenovo typically releases firmware in bundles called UpdateXpress System Packs (UXSPs). To ensure
that all of the firmware updates are compatible, you should update all firmware at the same time. If you are
updating firmware for both the Lenovo XClarity Controller and UEFI, update the firmware for Lenovo XClarity
Controller first.
http://lenovopress.com/LP0656
Important terminology
• In-band update. The installation or update is performed using a tool or application within an operating
system that is executing on the server’s core CPU.
• Out-of-band update. The installation or update is performed by the Lenovo XClarity Controller collecting
the update and then directing the update to the target subsystem or device. Out-of-band updates have no
dependency on an operating system executing on the core CPU. However, most out-of-band operations
do require the server to be in the S0 (Working) power state.
• On-Target update. The installation or update is initiated from an operating system executing on the
target server itself.
• Off-Target update. The installation or update is initiated from a computing device interacting directly with
the server’s Lenovo XClarity Controller.
• UpdateXpress System Packs (UXSPs). UXSPs are bundled updates designed and tested to provide the
interdependent level of functionality, performance, and compatibility. UXSPs are server machine-type
specific and are built (with firmware and device driver updates) to support specific Windows Server, Red
Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) operating system distributions.
Machine-type-specific firmware-only UXSPs are also available.
See the following table to determine the best Lenovo tool to use for installing and setting up the firmware:
Note: The server UEFI settings for option ROM must be set to Auto or UEFI to update firmware using
Lenovo XClarity Administrator or Lenovo XClarity Essentials. For more information, see the following Tech
Tip:
• Lenovo XClarity Controller — supports core system firmware and most advanced I/O option firmware
updates.
• Lenovo XClarity Essentials OneCLI — supports all core system firmware, I/O firmware, and installed
operating system driver updates.
• Lenovo XClarity Essentials UpdateXpress — supports all core system firmware, I/O firmware, and
installed operating system driver updates.
• Lenovo XClarity Essentials Bootable Media Creator — supports core system firmware and I/O firmware
updates. You can update the Microsoft Windows operating system, but device drivers are not included
on the bootable image.
• Lenovo XClarity Administrator — supports core system firmware and I/O firmware updates (in-band for
I/O firmware updates; out-of-band for BMC and UEFI firmware updates).
Note: By default, the Lenovo XClarity Provisioning Manager Graphical User Interface is displayed when
you press F1. If you have changed that default to be the text-based system setup, you can bring up the
Graphical User Interface from the text-based system setup interface.
Additional information about using Lenovo XClarity Provisioning Manager to update firmware is available
at:
• Lenovo XClarity Controller
If you need to install a specific update, you can use the Lenovo XClarity Controller interface for a specific
server.
Notes:
– To perform an in-band update through Windows or Linux, the operating system driver must be installed
and the Ethernet-over-USB (sometimes called LAN over USB) interface must be enabled.
Additional information about configuring Ethernet over USB is available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
configuringUSB.html
– If you update firmware through the Lenovo XClarity Controller, make sure that you have downloaded
and installed the latest device drivers for the operating system that is running on the server.
Specific details about updating firmware using Lenovo XClarity Controller are available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
manageserverfirmware.html
• Lenovo XClarity Essentials OneCLI
Lenovo XClarity Essentials OneCLI is a collection of command line applications that can be used to
manage Lenovo servers. Its update application can be used to update firmware and device drivers for your
servers. The update can be performed within the host operating system of the server (in-band) or remotely
through the BMC of the server (out-of-band).
Specific details about updating firmware using Lenovo XClarity Essentials OneCLI are available at:
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_c_update.html
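For example, a typical in-band flow first acquires an UpdateXpress System Pack and then flashes it. The
following sketch assumes OneCLI is installed on the server's host operating system; the machine type
(7X06), operating system type, and target directory are examples, not required values:

onecli update acquire --mt 7X06 --ostype rhel7 --dir ./updates   # download the UXSP payloads
onecli update flash --dir ./updates                              # apply the downloaded updates in-band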
• Lenovo XClarity Essentials UpdateXpress
Lenovo XClarity Essentials UpdateXpress provides most of the OneCLI update functions through a graphical
user interface (GUI). It can be used to acquire and deploy UpdateXpress System Pack (UXSP) update
packages and individual updates. UpdateXpress System Packs contain firmware and device driver
updates for Microsoft Windows and for Linux.
You can obtain Lenovo XClarity Essentials UpdateXpress from the following location:
https://datacentersupport.lenovo.com/solutions/lnvo-xpress
• Lenovo XClarity Essentials Bootable Media Creator
You can use Lenovo XClarity Essentials Bootable Media Creator to create bootable media that is suitable
for applying firmware updates, running preboot diagnostics, and deploying Microsoft Windows operating
systems.
You can obtain Lenovo XClarity Essentials Bootable Media Creator (BoMC) from the following location:
https://datacentersupport.lenovo.com/solutions/lnvo-bomc
Important: Do not configure option ROMs to be set to Legacy unless directed to do so by Lenovo Support.
This setting prevents UEFI drivers for the slot devices from loading, which can cause negative side effects for
Lenovo software, such as Lenovo XClarity Administrator and Lenovo XClarity Essentials OneCLI, and for the
Lenovo XClarity Controller. The side effects include the inability to determine adapter card details, such as
model name and firmware levels. When adapter card information is not available, generic information is
displayed for the model name, such as "Adapter 06:00:00" instead of the actual model name, such as
"ThinkSystem RAID 930-16i 4GB Flash." In some cases, the UEFI boot process might also hang.
Note: The Lenovo XClarity Provisioning Manager provides a Graphical User Interface to configure a
server. The text-based interface to system configuration (the Setup Utility) is also available. From Lenovo
XClarity Provisioning Manager, you can choose to restart the server and access the text-based interface.
In addition, you can choose to make the text-based interface the default interface that is displayed when
you press F1.
• Lenovo XClarity Essentials OneCLI
You can use the config application and commands to view the current system configuration settings and
make changes to Lenovo XClarity Controller and UEFI. The saved configuration information can be used
to replicate or restore other systems.
For information about configuring the server using Lenovo XClarity Essentials OneCLI, see:
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_c_settings_info_commands.html
• Lenovo XClarity Administrator
You can quickly provision and pre-provision all of your servers using a consistent configuration.
Configuration settings (such as local storage, I/O adapters, boot settings, firmware, ports, and Lenovo
XClarity Controller and UEFI settings) are saved as a server pattern that can be applied to one or more
managed servers. When the server patterns are updated, the changes are automatically deployed to the
applied servers.
Specific details about configuring the server using Lenovo XClarity Administrator are available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/server_configuring.html
• Lenovo XClarity Controller
You can configure the management processor for the server through the Lenovo XClarity Controller Web
interface or through the command-line interface.
Memory configuration
Memory performance depends on several variables, such as memory mode, memory speed, memory ranks,
memory population and processor.
More information about optimizing memory performance and configuring memory is available at the Lenovo
Press website:
https://lenovopress.com/servers/options/memory
In addition, you can take advantage of a memory configurator, which is available at the following site:
http://1config.lenovo.com/#/memory_configuration
For specific information about the required installation order of memory modules in your server based on the
system configuration and memory mode that you are implementing, see “DIMM installation rules” on page
142.
Table 29. Channel and slot information of DIMMs around processor 1 and 2
The memory-channel configuration table shows the relationship between the processors, memory
controllers, memory channels, slot numbers, and DIMM connectors.
Notes:
– In App Direct Mode, the DRAM DIMMs that are installed can be configured to mirror mode.
– When only one DCPMM is installed for each processor, only not-interleaved App Direct Mode is
supported.
• Mixed Memory Mode (1-99% of DCPMM capacity acts as system memory): In this mode, the selected
percentage of DCPMM capacity acts as volatile system memory (with DRAM DIMMs acting as cache),
while the remaining DCPMM capacity is accessible as App Direct persistent memory.
Note: If the text-based interface of Setup Utility opens instead of Lenovo XClarity Provisioning Manager,
go to System Settings ➙ <F1> Start Control and select Tool Suite. Then, reboot the system and press
F1 as soon as the logo screen appears to open Lenovo XClarity Provisioning Manager.
• Setup Utility
To enter Setup Utility:
1. Power on the system and press F1 to open LXPM.
2. Go to UEFI Settings ➙ System Settings, click on the pull-down menu on the upper right corner of
the screen, and select Text Setup.
3. Reboot the system, and press F1 as soon as the logo screen appears.
Go to System Configuration and Boot Management ➙ System Settings ➙ Intel Optane DCPMMs to
configure and manage DCPMMs.
• Lenovo XClarity Essentials OneCLI
Some management options are available in commands that are executed in the path of Lenovo XClarity
Essentials OneCLI in the operating system. See https://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_
lenovo/onecli_t_download_use_tcscli.html to learn how to download and use Lenovo XClarity Essentials
OneCLI.
Notes:
– USERID stands for XCC user ID.
– PASSW0RD stands for XCC user password.
– 10.104.195.86 stands for IP address.
• Goals
– Memory Mode [%]
Select this option to define the percentage of DCPMM capacity that is invested in system memory, and
hence decide the DCPMM mode:
– 0%: App Direct Mode
– 1-99%: Mixed Memory Mode
– 100%: Memory Mode
Go to Goals ➙ Memory Mode [%], input the memory percentage, and reboot the system.
Notes:
– Before changing from one mode to another:
1. Make sure the capacity of installed DCPMMs and DRAM DIMMs meets system requirements for
the new mode (see “DCPMM and DRAM DIMM installation order” on page 147).
2. Back up all the data and delete all the created namespaces. Go to Namespaces ➙ View/
Modify/Delete Namespaces to delete the created namespaces.
3. Perform secure erase on all the installed DCPMMs. Go to Security ➙ Press to Secure Erase to
perform secure erase.
– After the system is rebooted and the input goal value is applied, the displayed value in System
Configuration and Boot Management ➙ Intel Optane DCPMMs ➙ Goals will go back to the
following default selectable options:
• Scope: [Platform]
• Memory Mode [%]: 0
• Persistent Memory Type: [App Direct]
These values are selectable options for DCPMM settings, and do not represent the current DCPMM
status.
In addition, you can take advantage of a memory configurator, which is available at the following site:
http://1config.lenovo.com/#/memory_configuration
Note: Setting DCPMM App Direct capacity to not interleaved will turn the displayed App Direct regions
from one region per processor to one region per DCPMM.
• Regions
After the memory percentage is set and the system is rebooted, regions for the App Direct capacity will be
generated automatically. Select this option to view the App Direct regions.
• Namespaces
App Direct capacity of DCPMMs requires the following steps before it is truly available for applications.
1. Namespaces must be created for region capacity allocation.
2. A filesystem must be created and formatted for the namespaces in the operating system.
Each App Direct region can be allocated into one namespace. Create namespaces in the following
operating systems:
– Windows: Use the Pmem command.
– Linux: Use the ndctl command.
– VMware: Reboot the system, and VMware will create namespaces automatically.
After creating namespaces for App Direct capacity allocation, make sure to create and format a filesystem
in the operating system so that the App Direct capacity is accessible for applications.
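As an illustration for Linux, the following sequence creates a namespace on an App Direct region and
makes it available through a DAX-capable filesystem. The region and device names (region0, /dev/pmem0)
and the mount point are examples and vary by system:

ndctl list --regions                                   # list the App Direct regions
ndctl create-namespace --mode=fsdax --region=region0   # create an fsdax namespace on region0
mkfs.ext4 /dev/pmem0                                   # format the resulting pmem block device
mkdir -p /mnt/pmem
mount -o dax /dev/pmem0 /mnt/pmem                      # mount with DAX for direct application access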
• Security
– Enable Security
Attention: By default, DCPMM security is disabled. Before enabling security, make sure all the country
or local legal requirements regarding data encryption and trade compliance are met. Violation could
cause legal issues.
DCPMMs can be secured with passphrases. Two types of passphrase protection scope are available
for DCPMM:
– Platform: Choose this option to run security operation on all the installed DCPMM units at once. A
platform passphrase is stored and automatically applied to unlock DCPMMs before the operating
system starts running, but the passphrase still has to be disabled manually for secure erase.
Alternatively, enable or disable platform-level security with the following command in OneCLI:
onecli.exe config set IntelOptaneDCPMM.SecurityOperation "Enable Security"
--imm USERID:PASSW0RD@10.104.195.86
Notes:
• Single DCPMM passphrases are not stored in the system, and security of the locked units needs
to be disabled before the units are available for access or secure erase.
• Always make sure to keep records of the slot numbers of locked DCPMMs and the corresponding
passphrases. If the passphrases are lost or forgotten, the stored data cannot be backed up or
restored, but you can contact Lenovo service for administrative secure erase.
• After three failed unlocking attempts, the corresponding DCPMMs enter “exceeded” state with a
system warning message, and the DCPMM unit can only be unlocked after the system is
rebooted.
To enable a passphrase, go to Security ➙ Press to Enable Security.
– Secure Erase
Note: If the DCPMMs to be secure erased are protected with a passphrase, make sure to disable
security and reboot the system before performing secure erase.
Secure erase cleanses all the data that is stored in the DCPMM unit, including encrypted data. This
data deletion method is recommended before returning or disposing of a malfunctioning unit, or changing
the DCPMM mode. To perform secure erase, go to Security ➙ Press to Secure Erase.
Alternatively, perform platform level secure erase with the following command in OneCLI:
onecli.exe config set IntelOptaneDCPMM.SecurityOperation "Secure Erase Without Passphrase"
--imm USERID:PASSW0RD@10.104.195.86
• DCPMM Configuration
DCPMMs contain spare internal cells to stand in for failed ones. When the spare cells are exhausted
to 0%, an error message is displayed; it is advised to back up data, collect the service log, and contact
Lenovo support.
A warning message is also displayed when the percentage of remaining spare cells reaches 1% and a
selectable percentage (10% by default). When this message appears, it is advised to back up data and
run DCPMM diagnostics. To adjust the selectable percentage that triggers the warning message, go to
Intel Optane DCPMMs ➙ DCPMM Configuration, and input the percentage.
Alternatively, change the selectable percentage with the following command in OneCLI:
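A sketch of that command, following the pattern of the other OneCLI DCPMM commands in this section;
the setting name is assumed rather than confirmed, and 20 is an example percentage:

onecli.exe config set IntelOptaneDCPMM.PercentageRemainingThresholds 20
--imm USERID:PASSW0RD@10.104.195.86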
RAID configuration
Using a Redundant Array of Independent Disks (RAID) to store data remains one of the most common and
cost-efficient methods to increase a server's storage performance, availability, and capacity.
RAID increases performance by allowing multiple drives to process I/O requests simultaneously. RAID can
also prevent data loss in case of a drive failure by reconstructing (or rebuilding) the missing data from the
failed drive using the data from the remaining drives.
A RAID array (also known as a RAID drive group) is a group of multiple physical drives that uses a certain
common method to distribute data across the drives. A virtual drive (also known as a virtual disk or logical
drive) is a partition in the drive group that is made up of contiguous data segments on the drives. The virtual
drive is presented to the host operating system as a physical disk that can be partitioned to create OS logical
drives or volumes.
An introduction to RAID is available at the following Lenovo Press website:
https://lenovopress.com/lp0578-lenovo-raid-introduction
Detailed information about RAID management tools and resources is available at the following Lenovo Press
website:
https://lenovopress.com/lp0579-lenovo-raid-management-tools-and-resources
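As a concrete illustration, a virtual drive can also be created from the host operating system with the
StorCLI utility. The sketch below assumes a ThinkSystem RAID adapter enumerated as controller 0 and
four drives in enclosure 252; the enclosure, slot numbers, and RAID level are examples:

storcli64 /c0 show                                # list enclosures, drives, and existing virtual drives
storcli64 /c0 add vd type=raid5 drives=252:0-3    # create a RAID 5 virtual drive from four drives
storcli64 /c0/vall show                           # verify the new virtual drive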
Tool-based deployment
• Multi-server
Available tools:
– Lenovo XClarity Administrator
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/compute_node_image_deployment.html
– Lenovo XClarity Essentials OneCLI
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_r_uxspi_proxy_tool.html
• Single-server
Available tools:
– Lenovo XClarity Provisioning Manager
https://sysmgt.lenovofiles.com/help/topic/LXPM/os_installation.html
– Lenovo XClarity Essentials OneCLI
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_r_uxspi_proxy_tool.html
Make sure that you create backups for the following server components:
• Management processor
You can back up the management processor configuration through the Lenovo XClarity Controller
interface. For details about backing up the management processor configuration, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
backupthexcc.html
Alternatively, you can use the save command from Lenovo XClarity Essentials OneCLI to create a backup
of all configuration settings. For more information about the save command, see:
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_r_save_command.html
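A sketch of the save usage and its matching restore, assuming out-of-band access to the XCC; the file
name, address, and credentials are placeholders:

onecli config save --file sr650-backup.txt --bmc USERID:PASSW0RD@192.168.70.125     # back up all settings
onecli config restore --file sr650-backup.txt --bmc USERID:PASSW0RD@192.168.70.125  # restore to a server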
• Operating system
Use your own operating-system and user-data backup methods to back up the operating system and
user data for the server.
<uuid_value>
Up to 16-byte hexadecimal value assigned by you.
[access_method]
The access method that you selected to use from the following methods:
Note: The KCS access method uses the IPMI/KCS interface, which requires that the IPMI
driver be installed.
– Remote LAN access, type the command:
Note: When using the remote LAN/WAN access method to access Lenovo XClarity Controller
from a client, the host and the xcc_external_ip address are required parameters.
[−−imm xcc_user_id:xcc_password@xcc_external_ip]
or
[−−bmc xcc_user_id:xcc_password@xcc_external_ip]
Where:
xcc_external_ip
The BMC/IMM/XCC external IP address. There is no default value. This parameter is
required.
xcc_user_id
The BMC/IMM/XCC account name (1 of 12 accounts). The default value is USERID.
xcc_password
The BMC/IMM/XCC account password (1 of 12 accounts). The default value is
PASSW0RD (with a zero 0, not an O).
Note: BMC, IMM, or XCC external IP address, account name, and password are all valid for
this command.
Example that does use the user ID and password default values:
onecli config set SYSTEM_PROD_DATA.SysInfoUUID <uuid_value>
4. Restart the Lenovo XClarity Controller.
5. Restart the server.
<asset_tag>
The server asset tag number. Type asset aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, where
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa is the asset tag number.
[access_method]
The access method that you select to use from the following methods:
xcc_internal_ip
The BMC/IMM/XCC internal LAN/USB IP address. The default value is 169.254.95.118.
xcc_user_id
The BMC/IMM/XCC account name (1 of 12 accounts). The default value is USERID.
xcc_password
The BMC/IMM/XCC account password (1 of 12 accounts). The default value is
PASSW0RD (with a zero 0 not an O).
Notes:
a. BMC, IMM, or XCC internal LAN/USB IP address, account name, and password are all
valid for this command.
b. If you do not specify any of these parameters, OneCLI will use the default values. When the
default values are used and OneCLI is unable to access the Lenovo XClarity Controller
using the online authenticated LAN access method, OneCLI will automatically use the
unauthenticated KCS access method.
Example that does use the user ID and password default values:
onecli config set SYSTEM_PROD_DATA.SysEncloseAssetTag <asset_tag>
– Online KCS access (unauthenticated and user restricted): You do not need to specify a value
for access_method when you use this access method.
Note: The KCS access method uses the IPMI/KCS interface, which requires that the IPMI
driver be installed.
Example that does not use the user ID and password default values:
onecli config set SYSTEM_PROD_DATA.SysEncloseAssetTag <asset_tag>
– Remote LAN access, type the command:
Note: When using the remote LAN/WAN access method to access Lenovo XClarity Controller
from a client, the host and the xcc_external_ip address are required parameters.
[−−imm xcc_user_id:xcc_password@xcc_external_ip]
or
[−−bmc xcc_user_id:xcc_password@xcc_external_ip]
Where:
xcc_external_ip
The BMC/IMM/XCC IP address. There is no default value. This parameter is required.
xcc_user_id
The BMC/IMM/XCC account (1 of 12 accounts). The default value is USERID.
xcc_password
The BMC/IMM/XCC account password (1 of 12 accounts). The default value is
PASSW0RD (with a zero 0 not an O).
Note: BMC, IMM, or XCC internal LAN/USB IP address, account name, and password are all
valid for this command.
Example that uses the user ID and password default values:
onecli config set SYSTEM_PROD_DATA.SysEncloseAssetTag <asset_tag> --bmc USERID:PASSW0RD@xcc_external_ip
Use the information in this section to diagnose and resolve problems that you might encounter during the
initial installation and setup of your server.
• “Server does not power on” on page 213
• “The server immediately displays the POST Event Viewer when it is turned on” on page 213
• “Embedded hypervisor is not in the boot list” on page 213
• “Server cannot recognize a hard disk drive” on page 214
• “Displayed system memory less than installed physical memory” on page 215
• “A Lenovo optional device that was just installed does not work.” on page 216
• “Voltage system board fault is displayed in the event log” on page 216
The server immediately displays the POST Event Viewer when it is turned on
Complete the following steps until the problem is solved.
1. Correct any errors that are indicated by the light path diagnostics LEDs.
2. Make sure that the server supports all the processors and that the processors match in speed and
cache size.
You can view processor details from system setup.
To determine if the processor is supported for the server, see https://static.lenovo.com/us/en/
serverproven/index.shtml.
3. (Trained technician only) Make sure that Processor 1 is seated correctly.
4. (Trained technician only) Remove Processor 2 and restart the server.
5. Replace the following components one at a time, in the order shown, restarting the server each time:
a. (Trained technician only) Processor
b. (Trained technician only) System board
Note: Each time you install or remove a memory module, you must disconnect the server from the power
source; then, wait 10 seconds before restarting the server.
1. Make sure that:
• No error LEDs are lit on the operator information panel.
• Memory mirrored channel does not account for the discrepancy.
• The memory modules are seated correctly.
• You have installed the correct type of memory.
• If you changed the memory, you updated the memory configuration in the Setup utility.
• All banks of memory are enabled. The server might have automatically disabled a memory bank when
it detected a problem, or a memory bank might have been manually disabled.
• There is no memory mismatch when the server is at the minimum memory configuration.
• When DCPMMs are installed:
a. If the memory is set to App Direct or Mixed Memory Mode, all saved data has been backed up
and created namespaces have been deleted before any DCPMM is replaced.
b. Refer to “DC Persistent Memory Module (DCPMM) setup” on page 140 to verify that the displayed
memory fits the mode description.
c. If the DCPMMs were recently set to Memory Mode, switch back to App Direct Mode and check
whether any namespace has not been deleted (see “DC Persistent Memory Module (DCPMM)
setup” on page 140, and the example command after this procedure).
d. Go to the Setup Utility, select System Configuration and Boot Management ➙ Intel Optane
DCPMMs ➙ Security, and make sure all the DCPMM units are unlocked.
2. Reseat the memory modules, and then restart the server.
3. Check the POST error log:
• If a memory module was disabled by a systems-management interrupt (SMI), replace the memory
module.
• If a memory module was disabled by the user or by POST, reseat the memory module; then, run the
Setup utility and enable the memory module.
4. Run memory diagnostics. Power on the system and press F1 when the logo screen appears; the Lenovo
XClarity Provisioning Manager interface starts. Perform memory diagnostics with this interface: go to
Diagnostics ➙ Run Diagnostic ➙ Memory test or DCPMM test.
When DCPMMs are installed, run diagnostics based on the current DCPMM mode:
• App Direct Mode
– Run DCPMM Test for DCPMMs.
– Run Memory Test for DRAM DIMMs.
• Memory Mode and Mixed Memory Mode
– Run DCPMM Test for App Direct capacity of DCPMMs.
– Run Memory Test for memory capacity of DCPMMs.
Note: DRAM DIMMs in these two modes act as cache, and are not applicable to memory
diagnostics.
5. Reverse the modules between the channels (of the same processor), and then restart the server. If the
problem is related to a memory module, replace the failing memory module.
Note: When DCPMMs are installed, only adopt this method in Memory Mode.
6. Re-enable all memory modules using the Setup Utility, and restart the system.
7. (Trained technician only) Install the failing memory module into a memory module connector for
processor 2 (if installed) to verify that the problem is not the processor or the memory module connector.
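If the server runs Linux, one way to check for namespaces that have not been deleted (referenced in step 1) is the ndctl utility. This is a sketch under the assumption that ndctl is installed in the operating system; it is not part of the server firmware:
ndctl list -N
An empty result indicates that no namespaces remain.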
A Lenovo optional device that was just installed does not work.
1. Make sure that:
• The device is supported for the server (see https://static.lenovo.com/us/en/serverproven/index.shtml).
• You followed the installation instructions that came with the device and the device is installed
correctly.
• You have not loosened any other installed devices or cables.
• You updated the configuration information in system setup. When you start the server, press F1 to
display the system setup interface. Whenever memory or any other device is changed, you must
update the configuration.
2. Reseat the device that you just installed.
3. Replace the device that you just installed.
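If the device is a PCIe adapter and the server runs Linux, one quick check of whether the operating system can see the adapter is the following sketch, where <device_name> is a placeholder for a string from the adapter's description:
lspci | grep -i <device_name>
If the adapter is not listed, recheck its seating and cabling before replacing it.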
On the World Wide Web, up-to-date information about Lenovo systems, optional devices, services, and
support is available at:
http://datacentersupport.lenovo.com
You can find the product documentation for your ThinkSystem products at the following location:
http://thinksystem.lenovofiles.com/help/index.jsp
You can take these steps to try to solve the problem yourself:
• Check all cables to make sure that they are connected.
• Check the power switches to make sure that the system and any optional devices are turned on.
• Check for updated software, firmware, and operating-system device drivers for your Lenovo product. The
Lenovo Warranty terms and conditions state that you, the owner of the Lenovo product, are responsible
for maintaining and updating all software and firmware for the product (unless it is covered by an
additional maintenance contract). Your service technician will request that you upgrade your software and
firmware if the problem has a documented solution within a software upgrade.
• If you have installed new hardware or software in your environment, check https://static.lenovo.com/us/en/serverproven/index.shtml to make sure that the hardware and software are supported by your product.
• Go to http://datacentersupport.lenovo.com and check for information to help you solve the problem.
– Check the Lenovo forums at https://forums.lenovo.com/t5/Datacenter-Systems/ct-p/sv_eg to see if
someone else has encountered a similar problem.
Contacting Support
You can contact Support to obtain help for your issue.
You can receive hardware service through a Lenovo Authorized Service Provider. To locate a service
provider authorized by Lenovo to provide warranty service, go to https://datacentersupport.lenovo.com/serviceprovider and use the filter to search by country. For Lenovo support telephone numbers in your
region, see https://datacentersupport.lenovo.com/supportphonelist.
Index

B
back up the server configuration 208
bezel
  removing 131

C
cable routing
  backplane 38
  eight 2.5-inch drives 40
  eight 3.5-inch SAS/SATA drives 110
  GPU 33
  sixteen 2.5-inch drives 51
  twelve 3.5-inch drives 114
  twenty 2.5-inch drives 70
  twenty-four 2.5-inch drives 71
cable the server 195
collecting service data 218
Common installation issues 213
Configuration - ThinkSystem SR650 197
configure the firmware 201
cover
  installing 190
  removing 132
CPU
  option install 136
creating a personalized support web page 217
custom support web page 217

D
DC Persistent Memory Module 141, 202
DC Persistent Memory Module (DCPMM) 147
DCPMM 140–141, 202
devices, static-sensitive
  handling 130
DIMM installation order 148, 151, 153
drive activity LED 19
drive status LED 19
Dynamic random access memory (DRAM) 143

F
fan
  installing 179
fan error LED 31
features 3
front I/O assembly 19, 22
front view 19

H
handling static-sensitive devices 130
hardware options
  installing 131
hardware service and support telephone numbers 219
help 217
hot-swap power supply
  installing 183

I
ID label 1
Independent mode 143
install server in a rack 195
installation
  guidelines 128
installation guidelines 128
installing
  2.5-inch-drive backplane 155
  20-Bay Upgrade Kit 181
  24-Bay Upgrade Kit 182
  3.5-inch-drive backplane 158
  air baffle 187
  GPU 167
  GPU thermal kit 167
  hot-swap power supply 183
  LOM adapter 176
  M.2 backplane and M.2 drive 162
  memory module 139
  PCIe card 170
  RAID adapter 161
  RAID super capacitor module 189
  rear hot-swap drive assembly 160
  serial port module 177
  system fan 179
  system fan cage 179
  the upgrade kit 181–182
  top cover 190
Intel Optane DC Persistent Memory 140
internal cable routing 33
introduction 1

L
Lenovo Capacity Planner 16
Lenovo XClarity Essentials 16
Lenovo XClarity Provisioning Manager 16
LOM adapter
  installing 176