
Dell Unity™ All Flash and Unity Hybrid

Hardware Information Guide

Part Number: 302-002-563


October 2022
Rev. 09
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2016 - 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents

Additional resources..................................................................................................................... 5
About this guide................................................................................................................................................................... 5
Related documentation...................................................................................................................................................... 5

Chapter 1: Platform Overview....................................................................................................... 6


Overview................................................................................................................................................................................6
Description............................................................................................................................................................................ 6

Chapter 2: Technical specifications............................................................................................. 12


Dimensions and weights................................................................................................................................................... 12
Power requirements.......................................................................................................................................................... 13
System operating limits.................................................................................................................................................... 18
DPE airflow.................................................................................................................................................................... 19
Environmental recovery..............................................................................................................................................19
Air quality requirements..............................................................................................................................................19
Fire suppressant disclaimer.......................................................................................................................................20
Shock and vibration.................................................................................................................................................... 20
Shipping and storage requirements.............................................................................................................................. 20

Chapter 3: Hardware component descriptions............................................................................. 22


Disk processor enclosure.................................................................................................................................................22
General disk processor enclosure information......................................................................................................22
2U, 12 (3.5-inch) disk drive DPE............................................................................................................................. 24
2U, 25 (2.5-inch) disk drive DPE............................................................................................................................ 25
2U DPE rear view.............................................................................................................................................................. 26
Storage processor rear view.....................................................................................................................................27
About converged network adapter (CNA) ports.................................................................................................29
SP I/O module types.................................................................................................................................................. 30
SP power supply module........................................................................................................................................... 39
Storage processor internal components......................................................................................................................40

Chapter 4: Disk-array enclosures................................................................................................. 41


General information on front-loading DAEs.................................................................................................................41
Disk drive type ............................................................................................................................................................ 42
2U, 25 (2.5-inch) DAE..................................................................................................................................................... 42
2U, 25-drive DAE front view.................................................................................................................................... 42
2U, 25 (2.5-inch) rear view...................................................................................................................................... 43
3U, 15 (3.5-inch) DAE...................................................................................................................................................... 45
3U, 15-drive DAE Front view.................................................................................................................................... 46
3U, 15-drive DAE rear view....................................................................................................................................... 47
General information on drawer-type DAEs................................................................................................................. 49
3U, 80 (2.5-inch) DAE..................................................................................................................................................... 50
3U, 80-drive DAE top view........................................................................................................................................ 51
3U, 80-drive DAE front view.................................................................................................................................... 52

3U, 80-drive DAE rear view...................................................................................................................................... 53

Appendix A: Cabling.................................................................................................................... 57
Cable label wraps...............................................................................................................................................................57
Cabling the DPE to a DAE............................................................................................................................................... 57
Cabling the first optional DAE to create back-end bus 1...................................................................................58
Cabling the second optional DAE to extend back-end bus 0........................................................................... 60
Cabling the DPE SAS module ports to create back-end buses 2 through 5................................................. 61
Cabling an expansion DAE to an existing DAE to extend a back-end bus...........................................................64
12Gb/s SAS cabling for interleaved DAE configurations......................................................................................... 67
12Gb/s SAS cabling for stacked DAE configurations...............................................................................................69
Attaching expansion (back-end) cables to an 80-drive DAE.................................................................................. 71
Cabling for x4 connections........................................................................................................................................ 71
Cabling for x8 connections........................................................................................................................................76

Appendix B: Rail kits and cables.................................................................................................. 78


Rail kits.................................................................................................................................................................................78
Cable types......................................................................................................................................................................... 78
DAE-to-DAE copper cabling..................................................................................................................................... 80

Preface

As part of an improvement effort, revisions of the software and hardware are periodically released. Therefore, some functions
described in this document might not be supported by all versions of the software or hardware currently in use. The product
release notes provide the most up-to-date information on product features. Contact your technical support professional if a
product does not function properly or does not function as described in this document.

Where to get help


Support, product, and licensing information can be obtained as described below.

Product information
For product and feature documentation or release notes, go to Unity Technical Documentation at: https://www.dell.com/unitydocs.

Troubleshooting
For information about products, software updates, licensing, and service, go to Support (registration required) at: https://www.dell.com/support. After logging in, locate the appropriate product page.

About this guide


This guide is designed for personnel who install, configure, and maintain the Unity 300/300F/350F/380/380F, Unity
400/400F/450F, Unity 500/500F/550F, and Unity 600/600F/650F platform. To use this hardware publication, you should
be familiar with digital storage equipment and cabling.
NOTE: This document was accurate at publication time. New versions of this document might be released. Check to ensure
that you are using the latest version of this document.

Related documentation
The following Unity system documents provide additional information.
● Dell Unity™ Family Installation Guide
● Dell Unity™ Family Release Notes
Additional relevant documentation can be obtained at:
● https://www.dell.com/unitydocs
● https://www.dell.com/support

1
Platform Overview
This section provides an overview of the Unity 300/300F/350F/380/380F, Unity 400/400F/450F, Unity 500/500F/550F, and Unity 600/600F/650F platforms, including their architecture, features, and components.
Topics:
• Overview
• Description

Overview
Unity Hybrid and All Flash storage systems implement an integrated architecture for block, file, and VMware VVols with
concurrent support for native NAS, iSCSI, and Fibre Channel protocols based on the powerful new family of Intel E5-2600
processors. Each system leverages dual storage processors, full 12-Gb SAS back-end connectivity and patented multi-core
architected operating environment to deliver unparalleled performance & efficiency. Additional storage capacity is added via Disk
Array Enclosures (DAEs).
Unity is the only storage system that successfully meets all four storage requirements of today's IT professionals:

Unity is Simple: Unity solutions set new standards for storage systems with compelling simplicity, modern design, affordable prices, and flexible deployments to meet the needs of resource-constrained IT professionals in large or small companies.
Unity is Modern: Unity has a modern 2U architecture designed for all-flash and built to support high-density SSDs, including 3D NAND TLC (triple level cell) drives. Unity includes automated data lifecycle management to lower costs, integrated copy data management to control local point-in-time snapshots, built-in encryption and remote replication, and deep ecosystem integration with VMware and Microsoft.
Unity is Affordable: Our dual-active controller system was designed to optimize the performance, density, and cost of your storage to deliver all-flash or hybrid configurations for much less than you thought possible.
Unity is Flexible: Unity is available as a virtual storage appliance, purpose-built all-flash or hybrid configurations, or as converged systems, with one Unity operating environment that connects them all together.

Description
This section shows examples of the front and rear views of the Unity 300/300F/350F/380/380F, Unity 400/400F/450F, Unity 500/500F/550F, and Unity 600/600F/650F platforms and discusses their hardware features.

Hardware views
Shown here are examples of the front and rear of a Unity 300/300F/350F/380/380F, Unity 400/400F/450F, Unity 500/500F/
550F, and Unity 600/600F/650F platform disk processor enclosure (DPE).

Figure 1. Disk processor enclosure front views

Figure 2. Disk processor enclosure rear view

NOTE: These figures are examples of the front and rear views without any DAEs attached and are for illustrative purposes
only.

Hardware features
Contained in a 2U architecture, the Unity™ All Flash and Unity Hybrid platform DPE fully loaded with hard disk drives and
without I/O modules or DAEs weighs either:
● 12-drive DPE: 65 lb (29 kg)
NOTE: 12-drive DPE not available on the Unity All Flash models.
● 25-drive DPE: 44 lb (20 kg)
The 2U DPE measures:
● 12-drive DPE: 3.4 inches high x 17.5 inches wide x 27 inches deep (8.64 cm x 44.45 cm x 68.58 cm)
● 25-drive DPE: 3.4 inches high x 17.5 inches wide x 24.17 inches deep (8.64 cm x 44.45 cm x 61.39 cm)
Between the front and rear of the enclosure, a midplane distributes power and signals to all the enclosure components. On the front of the DPE, drives plug directly into the midplane connections. On the rear of the DPE, the storage processors, power supply
modules, and I/O modules plug directly into the midplane connections. Internal to each storage processor are a battery backup
unit (BBU), redundant cooling modules, DDR4 memory, and an E5 v3 Intel processor.

The following table describes the hardware limits for Unity All Flash models.

Table 1. Hardware limits per Unity All Flash model
● CPU type in SP: Unity 300F: 6-core 1.6 GHz (E5-2603); Unity 350F: 6-core 1.7 GHz (E5-2603); Unity 400F: 8-core 2.4 GHz (E5-2630); Unity 450F: 10-core 2.2 GHz (E5-2630); Unity 500F: 10-core 2.6 GHz (E5-2660); Unity 550F: 14-core 2.0 GHz (E5-2660); Unity 600F: 12-core 2.5 GHz (E5-2680); Unity 650F: 14-core 2.4 GHz (E5-2680)
● Memory per SP: Unity 300F: 24 GB (three 8 GB DDR4 DIMMs); Unity 350F: 48 GB (three 16 GB DDR4 DIMMs); Unity 400F: 48 GB (three 16 GB DDR4 DIMMs); Unity 450F: 64 GB (four 16 GB DDR4 DIMMs); Unity 500F: 64 GB (four 16 GB DDR4 DIMMs); Unity 550F: 128 GB (four 32 GB DDR4 DIMMs); Unity 600F: 128 GB (four 32 GB DDR4 DIMMs); Unity 650F: 256 GB (four 64 GB DDR4 DIMMs)
● Embedded CNA ports per SP (all models): 2 ports, configurable as either 8/16 Gb Fibre Channel, 4/8/16 Gb Fibre Channel, 16 Gb Fibre Channel (single mode), or 1/10 Gb IP/iSCSI
● Embedded 10GBASE-T ports per SP (all models): 2 ports
● Max. SAS I/O ports per SP: Unity 300F/350F/400F/450F: 2 (2 embedded mini-HD SAS ports); Unity 500F/550F/600F/650F: 6 (2 embedded and 4 I/O mini-HD SAS ports)
● Max. number of I/O modules per SP (all models): 2
● Supported back-end I/O modules: Unity 300F/350F/400F/450F: None; Unity 500F/550F/600F/650F: Four-port 12-Gb/s SAS
● Supported front-end I/O modules (all models): Four-port 16-Gb/s Fibre Channel; Four-port 10-Gb/s optical; Four-port 10GBASE-T; Four-port 1GBASE-T; Two-port 10Gb/s optical
● Max. number of front-end ports per SP (all types): 12
● Max. number of front-end Fibre Channel ports per SP (CNA and I/O modules): 10
● Max. number of front-end 1GBASE-T/iSCSI ports per SP (onboard, CNA, and I/O modules): 8
● Max. number of front-end 10GbE iSCSI ports per SP (onboard, CNA, and I/O modules): 12
● (Dell Unity OE 4.1 and later) Min./Max. number of drives (a): Unity 300F: 5/150; Unity 350F: 5/150; Unity 400F: 5/250; Unity 450F: 5/250; Unity 500F: 5/500; Unity 550F: 5/500; Unity 600F: 5/1000; Unity 650F: 5/1000
● (Dell Unity OE 4.0 only) Min./Max. number of drives: Unity 300F: 5/150; Unity 350F: N/A; Unity 400F: 5/250; Unity 450F: N/A; Unity 500F: 5/350; Unity 550F: N/A; Unity 600F: 5/500; Unity 650F: N/A
● Disk-array enclosure types supported (all models): 2U 25-drive DAE with 2.5-inch drives; 3U 80-drive DAE with 2.5-inch drives
● Max. number of 2U 25-drive DAEs supported: Unity 300F: 5; Unity 350F: 5; Unity 400F: 9; Unity 450F: 9; Unity 500F: 19; Unity 550F: 19; Unity 600F: 39; Unity 650F: 39
● Max. number of 3U 80-drive DAEs supported: Unity 300F: 1; Unity 350F: 1; Unity 400F: 2; Unity 450F: 2; Unity 500F: 5; Unity 550F: 5; Unity 600F: 12; Unity 650F: 12
● Max. raw capacity (PB): Unity 300F: 2.4; Unity 350F: 2.4; Unity 400F: 4; Unity 450F: 4; Unity 500F: 8; Unity 550F: 8; Unity 600F: 16; Unity 650F: 16

a. The minimum number of drives required to create a 4+1 RAID group is five. Four drives are required for starting up the array.

The following table describes the hardware limits for Unity Hybrid models.

Table 2. Hardware limits per Unity Hybrid model
● CPU type in SP: Unity 300: 6-core 1.6 GHz (E5-2603); Unity 400: 8-core 2.4 GHz (E5-2630); Unity 500: 10-core 2.6 GHz (E5-2660); Unity 600: 12-core 2.5 GHz (E5-2680)
● Memory per SP: Unity 300: 24 GB (three 8 GB DDR4 DIMMs); Unity 400: 48 GB (three 16 GB DDR4 DIMMs); Unity 500: 64 GB (four 16 GB DDR4 DIMMs); Unity 600: 128 GB (four 32 GB DDR4 DIMMs)
● Embedded CNA ports per SP (all models): 2 ports, configurable as either 8/16 Gb Fibre Channel, 4/8/16 Gb Fibre Channel, 16 Gb Fibre Channel (single mode), or 1/10 Gb IP/iSCSI
● Embedded 10GBASE-T ports per SP (all models): 2 ports
● Max. SAS I/O ports per SP: Unity 300: 2 (2 embedded mini-HD SAS ports); Unity 400: 2 (2 embedded mini-HD SAS ports); Unity 500: 6 (2 embedded and 4 I/O mini-HD SAS ports); Unity 600: 6 (2 embedded and 4 I/O mini-HD SAS ports)
● Max. number of I/O modules per SP (all models): 2
● Supported back-end I/O modules: Unity 300 and Unity 400: None; Unity 500 and Unity 600: Four-port 12-Gb/s SAS
● Supported front-end I/O modules (all models): Four-port 16-Gb/s Fibre Channel; Four-port 10-Gb/s optical; Four-port 10GBASE-T; Four-port 1GBASE-T; Two-port 10Gb/s optical
● Max. number of front-end ports per SP (all types): 12
● Max. number of front-end Fibre Channel ports per SP (CNA and I/O modules): 10
● Max. number of front-end 1GBASE-T/iSCSI ports per SP (onboard, CNA, and I/O modules): 8
● Max. number of front-end 10GbE iSCSI ports per SP (onboard, CNA, and I/O modules): 12
● Min./Max. number of drives: Unity 300: 5/150; Unity 400: 5/250; Unity 500: 5/500; Unity 600: 5/1000
● Disk-array enclosure types supported (all models): 2U 25-drive DAE with 2.5-inch drives; 3U 15-drive DAE with 3.5-inch drives; 3U 80-drive DAE with 2.5-inch drives
● Max. number of DAEs supported per system (a): Unity 300: up to 9; Unity 400: up to 15; Unity 500: up to 33; Unity 600: up to 59
● Max. number of 80-drive DAEs supported per system: Unity 300: 1; Unity 400: 2; Unity 500: 5; Unity 600: 12
● Max. raw capacity (PB): Unity 300: 2.4; Unity 400: 4; Unity 500: 8; Unity 600: 16

a. Depending on the DPE and DAE types in the system. The maximum DAE limits shown here use the 12-drive DPE and 15-drive DAE; higher-capacity DPEs/DAEs support fewer maximum DAEs.
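The front-end port limits above are the same for the All Flash and Hybrid models, so a planned port mix can be checked mechanically. The Python sketch below is illustrative only (it is not a Dell tool, and the function and constant names are invented for this example); it totals a proposed per-SP front-end port mix and flags anything over the limits in Tables 1 and 2.

```python
# Hypothetical planning helper (not a Dell tool): check a per-SP front-end
# port mix against the documented limits.
FRONT_END_LIMITS = {
    "total": 12,        # max front-end ports per SP, all types
    "fc": 10,           # max front-end Fibre Channel ports per SP
    "iscsi_1gbe": 8,    # max 1GBASE-T/iSCSI ports per SP
    "iscsi_10gbe": 12,  # max 10GbE iSCSI ports per SP
}

def check_front_end_ports(fc: int, gbe1: int, gbe10: int) -> list:
    """Return a list of limit violations for one SP (empty list means OK)."""
    violations = []
    if fc + gbe1 + gbe10 > FRONT_END_LIMITS["total"]:
        violations.append("total front-end ports exceed the per-SP limit")
    if fc > FRONT_END_LIMITS["fc"]:
        violations.append("too many Fibre Channel ports")
    if gbe1 > FRONT_END_LIMITS["iscsi_1gbe"]:
        violations.append("too many 1GBASE-T/iSCSI ports")
    if gbe10 > FRONT_END_LIMITS["iscsi_10gbe"]:
        violations.append("too many 10GbE iSCSI ports")
    return violations

# Example: CNA ports as FC plus a 4-port FC module (6 FC), onboard 10GBASE-T
# plus a 4-port 10GbE module (6 x 10GbE) -> within all limits.
print(check_front_end_ports(fc=6, gbe1=0, gbe10=6))   # -> []
```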

The Unity™ All Flash and Unity Hybrid platform includes the following hardware features:

One 2U disk processor enclosure


On the front of the 2U DPE:

● Unity Hybrid models support two types of drive carriers in the DPE with either:
○ 12 slots for 3.5-inch drives
○ 25 slots for 2.5-inch drives
● Unity All Flash models support only the DPE and drive carrier with 25 slots for 2.5-inch drives.
● Two enclosure LEDs; power on and fault.
On the rear of the 2U DPE are two storage processors. Each storage processor consists of:
● Two RJ-45 LAN management ports (labeled with a network management symbol and a wrench symbol)
● Two 10GBASE-T ports
● Two embedded Converged Network Adapter (CNA) ports
● Two embedded x4 lane 12-Gb/s mini-HD SAS (encryption capable) back-end ports (labeled 0 and 1, respectively)
● One power supply module (hot-swappable)
● Two PCI Gen 3, x8 lane I/O module slots (A0 - A1 and B0 - B1) are available for use, supporting:
○ Four-port 12-Gb/s SAS I/O module -- where supported, provides four mini-HD SAS ports (x16 lane) of 12Gb SAS
expansion for connecting additional DAEs. This I/O module also supports controller based encryption. Labeled 12Gb SAS
v1.
○ Four-port 16-Gb/s Fibre Channel I/O module -- provides Fibre Channel connectivity as listed below. Labeled 16Gb Fibre
v3.
■ Four ports auto-negotiating to 4/8/16Gbps. Uses optical SFP+ and OM2/OM3 cabling to connect directly to a host
HBA or FC switch.
■ One FC port negotiating to 16Gbps, which can be configured for synchronous replication between two Unity systems,
either directly connected or connected through a switch. Uses optical SFP+ and SM or MM cabling to provide
synchronous replication. The three remaining ports auto-negotiate to 4/8/16 Gbps, and use optical SFP+ and
OM2/OM3 cabling to connect directly to an HBA or FC switch.
○ Four-port 10-Gb/s optical I/O module -- provides four SFP+ optical or Active/Passive TwinAx 10GbE IP/iSCSI ports
for connections to an Ethernet switch. Supports both IP(file) and iSCSI (Block) on the same I/O module. Ports can be
configured as both IP and iSCSI simultaneously. Labeled 10 GbE v5.
○ Four-port 10GBASE-T I/O module -- provides four copper 10GBASE-T RJ45 Ethernet ports for copper connections to an
Ethernet switch. Supports both IP (file) and iSCSI (Block) on the same IO module. Ports can be configured as both IP
and iSCSI simultaneously. Labeled 10GbE BaseT v2.
○ Four-port 1GBASE-T I/O module -- provides four 1000BASE-T RJ-45 copper ports for Cat 5/6-cabling connections to an
Ethernet switch. Supports both IP (file) and iSCSI (Block) on the same I/O module. Ports can be configured as both IP
and iSCSI simultaneously. Labeled 1 GbE BaseT v3.
○ Two-port 10Gb/s optical I/O module -- provides two SFP+ optical or Active/Passive TwinAx 10GbE ports for
connections to an Ethernet switch. Supports both IP (file) and full iSCSI Offload engine (Block) on the same IO module.
Ports can be configured as both IP and iSCSI simultaneously. Labeled 10 GbE V6.

Expansion disk-array enclosures


Each model supports a different number of drive slots and DAEs.
● Unity 300F/350F and Unity 300 - 150 drive slots
● Unity 400F/450F and Unity 400 - 250 drive slots
● Unity 500F/550F and Unity 500 - 500 drive slots
● Unity 600F/650F and Unity 600 - 1000 drive slots
The number of DAEs supported by the Unity™ All Flash and Unity Hybrid is variable depending on the drive type in the DPE and
DAEs. A Dell Unity™ All Flash and Unity Hybrid system cannot be configured with more drive slots than supported and will fault
the DAE that contains the slots above the system limits.
Unless the array is restricted by its slot count, each back-end loop could contain:
● Ten 3U, 15-drive DAEs (150 slots)
● Ten 2U, 25-drive DAEs (250 slots)
● Three 3U, 80-drive DAEs (240 slots)
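Because the DPE drive slots count toward the system total and a DAE that pushes the configuration past the supported slot count is faulted, it is worth totaling the slots before adding enclosures. The Python sketch below is a hypothetical planning aid only (the names and structure are invented for illustration), using the per-model slot limits and DAE sizes listed above.

```python
# Hypothetical slot-count planning aid (not a Dell tool).
MODEL_SLOT_LIMITS = {
    "Unity 300/300F/350F": 150,
    "Unity 400/400F/450F": 250,
    "Unity 500/500F/550F": 500,
    "Unity 600/600F/650F": 1000,
}
DAE_SLOTS = {"2U_25": 25, "3U_15": 15, "3U_80": 80}

def total_slots(dpe_slots: int, daes: dict) -> int:
    """Drive slots in the DPE plus all attached DAEs (the DPE counts toward the total)."""
    return dpe_slots + sum(DAE_SLOTS[kind] * count for kind, count in daes.items())

# Example: a Unity 500 with a 25-slot DPE, eight 2U 25-drive DAEs, and one 80-drive DAE.
slots = total_slots(25, {"2U_25": 8, "3U_80": 1})       # 25 + 200 + 80 = 305
limit = MODEL_SLOT_LIMITS["Unity 500/500F/550F"]
print(slots, "slots:", "OK" if slots <= limit else f"exceeds the {limit}-slot limit")
```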

2
Technical specifications
This section provides the technical specifications for the platform components.
Topics:
• Dimensions and weights
• Power requirements
• System operating limits
• Shipping and storage requirements

Dimensions and weights


Plan your rack and system placement using the following component weight and dimension information.

2U, 12-drive disk processor enclosure (DPE)


Table 3. DPE with 12 3.5" Disks, dimensions and weight
● Dimensions: Height 3.40 in (8.64 cm); Width 17.50 in (44.45 cm); Depth 27.0 in (68.58 cm)
● Vertical size: 2 NEMA units
● Weight (see note): 65.8 lb (29.8 kg)
Note: The weight does not include mounting rails. Allow 2.3-4.5 kg (5-10 lb) for a rail set. The weights listed in this table do not describe enclosures with solid state disk drives with Flash memory (called Flash or SSD drives). These Flash drive modules weigh 20.8 ounces (1.3 lb) each.

2U, 25-drive disk processor enclosure (DPE)


Table 4. DPE with 25 2.5" Disks, dimensions and weight
● Dimensions: Height 3.40 in (8.64 cm); Width 17.50 in (44.45 cm); Depth 24.17 in (61.39 cm)
● Vertical size: 2 NEMA units
● Weight (see note): 44.0 lb (20.0 kg)
Note: The weight does not include mounting rails. Allow 2.3-4.5 kg (5-10 lb) for a rail set. The weights listed in this table do not describe enclosures with solid state disk drives with Flash memory (called Flash or SSD drives). These Flash drive modules weigh 20.8 ounces (1.3 lb) each.

3U, 15-drive disk-array enclosure (DAE)


Table 5. Dimensions and weight
● Dimensions: Height 5.25 in (13.34 cm); Width 17.62 in (44.75 cm); Depth 14.0 in (35.6 cm)
● Vertical size: 3 NEMA units
● Weight (see note): 68 lb (30.8 kg) with 15 disks
Note: The weight does not include mounting rails. Allow 5-10 lb (2.3-4.5 kg) for a rail set. The weights listed in this table do not describe enclosures with solid state disk drives with Flash memory (called Flash or SSD drives). These Flash drive modules weigh 20.8 ounces (1.3 lb) each.

2U, 25-drive disk-array enclosure (DAE)


Table 6. Dimensions and weight
● Dimensions: Height 3.40 in (8.64 cm); Width 17.50 in (44.45 cm); Depth 14.0 in (35.56 cm)
● Vertical size: 2 NEMA units
● Weight (see note): 44.61 lb (20.23 kg) with 25 disks
Note: The weight does not include mounting rails. Allow 5-10 lb (2.3-4.5 kg) for a rail set. The weights listed in this table do not describe enclosures with solid state disk drives with Flash memory (called Flash or SSD drives). These Flash drive modules weigh 20.8 ounces (1.3 lb) each.

3U, 80-drive disk-array enclosure (DAE)


Table 7. DAE with 80 2.5" Disks, dimensions and weight
● Dimensions (see note a): Height 5.2 in (13.2 cm); Width 17.6 in (44.7 cm); Depth 30 in (76.2 cm)
● Vertical size: 3 NEMA units
● Weight (see note b): 130 lb (59 kg) with all CRUs/FRUs and 80 2.5" drives populated; 25 lb (11.3 kg) for the empty chassis with all CRUs/FRUs and drives removed

a. Dimensions are of the enclosure chassis only. Dimensions do not include bezel mounting hardware.
b. Full system weight does not include mounting rails. Allow 5–10 lb (2.3–4.5 kg) for a rail set.
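As a worked example of using these tables for rack planning (the configuration below is arbitrary and for illustration only), the snippet totals the approximate installed weight and rack-unit height for one 25-drive DPE and two 2U 25-drive DAEs, adding the suggested allowance per rail set.

```python
# Illustrative rack-planning arithmetic using the table values above.
components = [
    # (description, weight_lb, rack_units)
    ("2U 25-drive DPE", 44.0, 2),
    ("2U 25-drive DAE", 44.61, 2),
    ("2U 25-drive DAE", 44.61, 2),
]
rail_set_lb = 10.0   # upper end of the 5-10 lb per-rail-set allowance

total_lb = sum(w for _, w, _ in components) + rail_set_lb * len(components)
total_u = sum(u for _, _, u in components)
print(f"~{total_lb:.1f} lb ({total_lb * 0.4536:.1f} kg) in {total_u}U of rack space")
# -> ~163.2 lb (74.0 kg) in 6U of rack space
```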

Power requirements
Plan your rack and system placement using these component power requirements.
The input current, power (VA), and dissipation per enclosure listed in this document are based on measurements of fully
configured enclosures under worst-case operating conditions. Use the operating maximum values to plan the configuration of
your storage system. These values represent either:
● values for a single power supply line cord, or
● the sum of the values shared by the line cords of the combined power supplies in the same enclosure, with the division
between the line cords and supplies at the current sharing ratio (approximately 50% each).
Use the provided power and weight calculator to refine the power and heat values in the following tables to more closely match the hardware configuration for your system.
A failure of one of the combined power supplies per enclosure results in the remaining power supply supporting the full load. You
must use a rackmount cabinet or rack with appropriate power distribution, and have main branch AC distribution that can handle
these values for each enclosure in the cabinet.
All power figures shown represent a worst case product configuration with max normal values operating in an ambient
temperature environment of 20°C to 25°C.
The chassis power numbers provided may increase when operating in a higher ambient temperature environment.

Unity 2U disk processor enclosure (DPE)
Table 8. 25-drive slot disk processor enclosure AC power specifications
● AC line voltage (all models): 100 to 240 VAC ± 10%, single phase, 47 to 63 Hz
● AC line current (operating maximum):
  ○ Unity 300F | Unity 300: 9.04 A max at 100 VAC; 4.48 A max at 200 VAC
  ○ Unity 400F | Unity 400: 9.09 A max at 100 VAC; 4.55 A max at 200 VAC
  ○ Unity 500F | Unity 500: 9.55 A max at 100 VAC; 4.78 A max at 200 VAC
  ○ Unity 600F | Unity 600: 9.89 A max at 100 VAC; 4.89 A max at 200 VAC
● Power consumption (operating maximum):
  ○ Unity 300F | Unity 300: 907.5 VA (903.5 W) max at 100 VAC; 907.5 VA (895.5 W) max at 200 VAC
  ○ Unity 400F | Unity 400: 909.0 VA (905.0 W) max at 100 VAC; 909.0 VA (897.0 W) max at 200 VAC
  ○ Unity 500F | Unity 500: 955.0 VA (951.0 W) max at 100 VAC; 955.0 VA (943.0 W) max at 200 VAC
  ○ Unity 600F | Unity 600: 989.0 VA (985.0 W) max at 100 VAC; 989.0 VA (977.0 W) max at 200 VAC
● Power factor (all models): 0.95 min at full load, 100/200 VAC
● Heat dissipation (operating maximum):
  ○ Unity 300F | Unity 300: 3.25 × 10⁶ J/hr (3,083 Btu/hr) max at 100 VAC; 3.22 × 10⁶ J/hr (3,056 Btu/hr) max at 200 VAC
  ○ Unity 400F | Unity 400: 3.26 × 10⁶ J/hr (3,088 Btu/hr) max at 100 VAC; 3.23 × 10⁶ J/hr (3,061 Btu/hr) max at 200 VAC
  ○ Unity 500F | Unity 500: 3.42 × 10⁶ J/hr (3,245 Btu/hr) max at 100 VAC; 3.40 × 10⁶ J/hr (3,218 Btu/hr) max at 200 VAC
  ○ Unity 600F | Unity 600: 3.55 × 10⁶ J/hr (3,361 Btu/hr) max at 100 VAC; 3.52 × 10⁶ J/hr (3,334 Btu/hr) max at 200 VAC
● In-rush current (all models): 45 Apk "cold" per line cord, at any line voltage
● Startup surge current (all models): 120 Apk "hot" per line cord, at any line voltage
● AC protection (all models): 15 A fuse on each power supply, single line
● AC inlet type (all models): IEC320-C14 appliance coupler, per power zone
● Ride-through time (all models): 10 ms min
● Current sharing (all models): ± 5 percent of full load, between power supplies
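The heat-dissipation values in this table follow from the wattage by standard unit conversions (1 W ≈ 3.412 Btu/hr and 1 Btu ≈ 1,055 J). A short check for the Unity 300/300F value:

```python
# Worked conversion check for one cell of Table 8 (illustration only).
watts = 903.5                       # Unity 300F | Unity 300, max at 100 VAC
btu_per_hr = watts * 3.412          # ~3,083 Btu/hr, as listed
joules_per_hr = btu_per_hr * 1055   # ~3.25 x 10^6 J/hr, as listed
print(f"{btu_per_hr:.0f} Btu/hr, {joules_per_hr:.2e} J/hr")
```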

Table 9. 12-drive slot disk processor enclosure AC power specifications
● AC line voltage (all models): 100 to 240 VAC ± 10%, single phase, 47 to 63 Hz
● AC line current (operating maximum):
  ○ Unity 300: 6.94 A max at 100 VAC; 3.59 A max at 200 VAC
  ○ Unity 400: 6.95 A max at 100 VAC; 3.60 A max at 200 VAC
  ○ Unity 500: 7.41 A max at 100 VAC; 3.83 A max at 200 VAC
  ○ Unity 600: 7.80 A max at 100 VAC; 4.00 A max at 200 VAC
● Power consumption (operating maximum):
  ○ Unity 300: 693.5 VA (678.5 W) max at 100 VAC; 718.5 VA (678.5 W) max at 200 VAC
  ○ Unity 400: 695.0 VA (681.0 W) max at 100 VAC; 720.0 VA (680.0 W) max at 200 VAC
  ○ Unity 500: 741.0 VA (727.0 W) max at 100 VAC; 766.0 VA (726.0 W) max at 200 VAC
  ○ Unity 600: 775.0 VA (761.0 W) max at 100 VAC; 800.0 VA (760.0 W) max at 200 VAC
● Power factor (all models): 0.95 min at full load, 100/200 VAC
● Heat dissipation (operating maximum):
  ○ Unity 300: 2.45 × 10⁶ J/hr (2,319 Btu/hr) max at 100 VAC; 2.44 × 10⁶ J/hr (2,313 Btu/hr) max at 200 VAC
  ○ Unity 400: 2.45 × 10⁶ J/hr (2,324 Btu/hr) max at 100 VAC; 2.45 × 10⁶ J/hr (2,320 Btu/hr) max at 200 VAC
  ○ Unity 500: 2.62 × 10⁶ J/hr (2,481 Btu/hr) max at 100 VAC; 2.61 × 10⁶ J/hr (2,477 Btu/hr) max at 200 VAC
  ○ Unity 600: 2.74 × 10⁶ J/hr (2,597 Btu/hr) max at 100 VAC; 2.74 × 10⁶ J/hr (2,593 Btu/hr) max at 200 VAC
● In-rush current (all models): 45 Apk "cold" per line cord, at any line voltage
● Startup surge current (all models): 120 Apk "hot" per line cord, at any line voltage
● AC protection (all models): 15 A fuse on each power supply, single line
● AC inlet type (all models): IEC320-C14 appliance coupler, per power zone
● Ride-through time (all models): 10 ms min
● Current sharing (all models): ± 5 percent of full load, between power supplies

Table 10. 25-drive slot disk processor enclosure DC power specifications
● DC line voltage (all models): -39 to -72 V DC (nominal -48 V or -60 V power systems)
● DC line current (operating maximum):
  ○ Unity 300: 23.7 A max at -39 V DC; 18.8 A max at -48 V DC; 12.8 A max at -72 V DC
  ○ Unity 400: 23.7 A max at -39 V DC; 18.9 A max at -48 V DC; 12.8 A max at -72 V DC
  ○ Unity 500: 24.9 A max at -39 V DC; 19.8 A max at -48 V DC; 13.5 A max at -72 V DC
  ○ Unity 600: 25.8 A max at -39 V DC; 20.6 A max at -48 V DC; 14.0 A max at -72 V DC
● Power consumption (operating maximum):
  ○ Unity 300: 923 W max at -39 V DC; 905 W max at -48 V DC; 921 W max at -72 V DC
  ○ Unity 400: 925 W max at -39 V DC; 906 W max at -48 V DC; 922 W max at -72 V DC
  ○ Unity 500: 972 W max at -39 V DC; 953 W max at -48 V DC; 970 W max at -72 V DC
  ○ Unity 600: 1,006 W max at -39 V DC; 987 W max at -48 V DC; 1,005 W max at -72 V DC
● Heat dissipation (operating maximum):
  ○ Unity 300: 3.32 × 10⁶ J/hr (3,150 Btu/hr) max at -39 V DC; 3.26 × 10⁶ J/hr (3,088 Btu/hr) max at -48 V DC; 3.32 × 10⁶ J/hr (3,142 Btu/hr) max at -72 V DC
  ○ Unity 400: 3.33 × 10⁶ J/hr (3,156 Btu/hr) max at -39 V DC; 3.26 × 10⁶ J/hr (3,091 Btu/hr) max at -48 V DC; 3.32 × 10⁶ J/hr (3,146 Btu/hr) max at -72 V DC
  ○ Unity 500: 3.50 × 10⁶ J/hr (3,317 Btu/hr) max at -39 V DC; 3.43 × 10⁶ J/hr (3,252 Btu/hr) max at -48 V DC; 3.49 × 10⁶ J/hr (3,310 Btu/hr) max at -72 V DC
  ○ Unity 600: 3.62 × 10⁶ J/hr (3,433 Btu/hr) max at -39 V DC; 3.55 × 10⁶ J/hr (3,368 Btu/hr) max at -48 V DC; 3.62 × 10⁶ J/hr (3,429 Btu/hr) max at -72 V DC
● In-rush current (all models): 40 A peak, per requirement in EN300 132-2 Sect. 4.7 limit curve
● DC protection (all models): 50 A fuse in each power supply
● DC inlet type (all models): Positronics PLBH3W3M4B0A1/AA
● Mating DC connector (all models): Positronics PLBH3W3F0000/AA; Positronics Inc., www.connectpositronic.com
● Ride-through time (all models): 1 ms min at -50 V input
● Current sharing (all models): ± 5 percent of full load, between power supplies
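As a rough cross-check of the DC figures (approximate only, since supply efficiency varies slightly with input voltage), the line current is close to the power consumption divided by the line-voltage magnitude:

```python
# Approximate sanity check for Table 10 (illustration only).
power_w = 905.0          # Unity 300, 25-drive DPE, max at -48 V DC
line_v = 48.0
approx_current_a = power_w / line_v
print(f"~{approx_current_a:.1f} A")   # ~18.9 A versus the 18.8 A listed
```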

Table 11. 12-drive slot disk processor enclosure DC power specifications
● DC line voltage (all models): -39 to -72 V DC (nominal -48 V or -60 V power systems)
● DC line current (operating maximum):
  ○ Unity 300: 18.0 A max at -39 V DC; 14.5 A max at -48 V DC; 9.8 A max at -72 V DC
  ○ Unity 400: 17.9 A max at -39 V DC; 14.4 A max at -48 V DC; 9.8 A max at -72 V DC
  ○ Unity 500: 19.3 A max at -39 V DC; 15.4 A max at -48 V DC; 10.5 A max at -72 V DC
  ○ Unity 600: 20.2 A max at -39 V DC; 16.2 A max at -48 V DC; 11.0 A max at -72 V DC
● Power consumption (operating maximum):
  ○ Unity 300: 701 W max at -39 V DC; 695 W max at -48 V DC; 706 W max at -72 V DC
  ○ Unity 400: 700 W max at -39 V DC; 693 W max at -48 V DC; 704 W max at -72 V DC
  ○ Unity 500: 751 W max at -39 V DC; 741 W max at -48 V DC; 753 W max at -72 V DC
  ○ Unity 600: 789 W max at -39 V DC; 776 W max at -48 V DC; 789 W max at -72 V DC
● Heat dissipation (operating maximum):
  ○ Unity 300: 2.52 × 10⁶ J/hr (2,392 Btu/hr) max at -39 V DC; 2.50 × 10⁶ J/hr (2,370 Btu/hr) max at -48 V DC; 2.54 × 10⁶ J/hr (2,409 Btu/hr) max at -72 V DC
  ○ Unity 400: 2.52 × 10⁶ J/hr (2,388 Btu/hr) max at -39 V DC; 2.49 × 10⁶ J/hr (2,365 Btu/hr) max at -48 V DC; 2.53 × 10⁶ J/hr (2,402 Btu/hr) max at -72 V DC
  ○ Unity 500: 2.70 × 10⁶ J/hr (2,562 Btu/hr) max at -39 V DC; 2.67 × 10⁶ J/hr (2,528 Btu/hr) max at -48 V DC; 2.71 × 10⁶ J/hr (2,569 Btu/hr) max at -72 V DC
  ○ Unity 600: 2.84 × 10⁶ J/hr (2,692 Btu/hr) max at -39 V DC; 2.79 × 10⁶ J/hr (2,648 Btu/hr) max at -48 V DC; 2.84 × 10⁶ J/hr (2,692 Btu/hr) max at -72 V DC
● In-rush current (all models): 40 A peak, per requirement in EN300 132-2 Sect. 4.7 limit curve
● DC protection (all models): 50 A fuse in each power supply
● DC inlet type (all models): Positronics PLBH3W3M4B0A1/AA
● Mating DC connector (all models): Positronics PLBH3W3F0000/AA; Positronics Inc., www.connectpositronics.com
● Ride-through time (all models): 1 ms min at -50 V input
● Current sharing (all models): ± 5 percent of full load, between power supplies

3U, 15-drive disk-array enclosure (DAE)


Table 12. 15-drive slot disk array enclosure AC power specifications
Requirement Description
AC line voltage 100 to 240 VAC ± 10%, single phase, 47 to 63 Hz
AC line current (operating maximum) 2.90 A max at 100 VAC
1.60 A max at 200 VAC
Power consumption (operating maximum) 287.0 VA (281.0 W) max at 100 VAC
313.0 VA (277.0 W) max at 200 VAC
Power factor 0.90 minimum at full load, 100V/200V
Heat dissipation (operating maximum) 1.01 × 10⁶ J/hr (959 Btu/hr) max at 100 VAC
1.00 × 10⁶ J/hr (945 Btu/hr) max at 200 VAC
In-rush current 30 A max for ½ line cycle, per line cord at 240 VAC
Startup surge current 25 Amps peak max per line cord, at any line voltage
AC protection 10 A fuse on each power supply, both Line and Neutral
AC inlet type IEC320-C14 appliance coupler, per power zone
Ride-through time 30 ms minimum
Current sharing Droop Load Sharing

Table 13. 15-drive slot disk array enclosure DC power specifications


Requirement Description
DC line voltage -39 to -72V DC (nominal -48 or -60 V power systems)
DC line current (operating maximum) 7.92 A max at -39V DC
6.43 A max at -48V DC
4.39 A max at -72V DC
Power consumption (operating maximum) 309 W max at -39V DC
309 W max at -48V DC
316 W max at -72V DC
Heat dissipation (operating maximum) 1.11 × 10⁶ J/hr (1054 Btu/hr) max at -39V DC
1.11 × 10⁶ J/hr (1054 Btu/hr) max at -48V DC
1.14 × 10⁶ J/hr (1078 Btu/hr) max at -72V DC
In-rush current 20 A peak per requirements in EN300 132-2 Sect 4.7 limit curve
DC protection 20 A fuse in each power supply

DC inlet type Positronics PLB3W3M1000
Mating DC connector Positronics PLB3W3F7100A1; Positronics Inc., http://www.connectpositronic.com
Ride-through time 5 ms min. (test condition: Vin = -40V DC)
Current sharing Droop Load Sharing

2U, 25-drive disk-array enclosure (DAE)


Table 14. 25-drive slot disk array enclosure AC power specifications
Requirement Description
AC line voltage 100 to 240 VAC ± 10%, single phase, 47 to 63 Hz
AC line current (operating maximum) 4.50 A max at 100 VAC
2.40 A max at 200 VAC
Power consumption (operating maximum) 453.0 VA (432.0 W) max at 100 VAC
485.0 VA (427.0 W) max at 200 VAC
Power factor 0.95 minimum at full load, 100V/200V
Heat dissipation (operating maximum) 1.56 × 10⁶ J/hr (1,474 Btu/hr) max at 100 VAC
1.54 × 10⁶ J/hr (1,457 Btu/hr) max at 200 VAC
In-rush current 30 A max for ½ line cycle, per line cord at 240 VAC
Startup surge current 40 Amps peak max per line cord, at any line voltage
AC protection 15 A fuse on each power supply, both Line and Neutral
AC inlet type IEC320-C14 appliance coupler, per power zone
Ride-through time 12 ms minimum
Current sharing ± 5 percent of full load, between power supplies

Table 15. 25-drive slot disk array enclosure DC power specifications


Requirement Description
DC line voltage -39 to -72 V DC (Nominal -48V or -60V power systems)
DC line current (operating maximum) 11.0 A max at -39 V DC; 9.10 A max at -48 V DC; 6.2 A max at -72 V DC
Power consumption (operating maximum) 428 W max at -39 V DC; 437 W max at -48 V DC; 448 W max
at -72 V DC
Heat dissipation (operating maximum) 1.54 × 10⁶ J/hr (1,460 Btu/hr) max at -39 V DC; 1.57 × 10⁶ J/hr (1,491 Btu/hr) max at -48 V DC; 1.61 × 10⁶ J/hr (1,529 Btu/hr) max at -72 V DC
In-rush current 40 A peak, per requirement in EN300 132-2 Sect. 4.7 limit
curve
DC protection 50 A fuse in each power supply
DC inlet type Positronics PLBH3W3M4B0A1/AA
Mating DC connector Positronics PLBH3W3F0000/AA; Positronics Inc.,
www.connectpositronic.com

Ride-through time 1 ms min at -50 V input
Current sharing ± 5 percent of full load, between power supplies

3U, 80-drive disk-array enclosure (DAE)


Table 16. 80-drive disk-array enclosure AC power specifications
Requirement Description
AC line voltage 200 to 240 V AC ± 10%, single-phase, 47 to 63 Hz
AC line current (operating maximum) 8.06 A max at 200 V AC
Power consumption (operating maximum) 1,611 VA (1,564 W) max
Power factor 0.98 min at full load, low voltage
Heat dissipation 5.63 × 10⁶ J/hr (5,337 Btu/hr) max
In-rush current 30 A max for ½ line cycle, per line cord at 240 V AC
Startup surge current 25 A rms max for 100 ms, per line cord at any line voltage
AC protection 12 A fuse on each line cord, both phases
AC inlet type IEC320-C14 appliance coupler, two per power zone
Ride-through time 12 ms minimum per power supply
Current sharing ± 10% of full load, between power supplies
Note: Ratings assume a fully configured 80-drive DAE that includes 4 power supplies, 2 LCCs, and 80 disk drives.

System operating limits


The ambient temperature specification is measured at the rear inlet. The site must have air conditioning of the correct size and
placement to maintain the specified ambient temperature range and offset the heat dissipation listed below.

Table 17. System operating limits


● Ambient temperature: 10°C to 50°C (50°F to 122°F) (see note 1)
● Temperature gradient: 10°C/hr (18°F/hr)
● Relative humidity (extremes): 20% to 80% noncondensing
● Relative humidity (recommended, see note 2): 40% to 50% noncondensing
● Elevation: -50 to 10,000 ft (-16 to 3,048 m)

1. See High ambient temperature shutdown for system behavior at high ambient temperatures.
2. The allowable relative humidity level is 20 to 80% noncondensing. However, the recommended operating environment range is 40 to 55%. To minimize the risk of hardware corrosion and degradation, we recommend lower temperatures and humidity for data centers with gaseous contamination, such as high sulfur content. In general, the humidity fluctuations within the data center should be minimized. We also recommend that the data center be positively pressured and have air curtains on entry ways to prevent outside air contaminants and humidity from entering the facility. For facilities below 40% relative humidity, we recommend grounding straps when contacting the equipment to avoid the risk of electrostatic discharge (ESD), which can harm electronic equipment.

NOTE: For systems mounted in a cabinet, the operating limits listed above must not be exceeded inside the closed cabinet.
Equipment mounted directly above or below an enclosure must not restrict the front-to-rear airflow of the storage system.

Cabinet doors must not impede the front-to-rear airflow. The cabinet must exhaust air at a rate that is equal to or greater
than the sum of the exhaust rates of all the equipment mounted in the cabinet.

Table 18. High ambient temperature shutdown
● Above 62°C (143°F), no hardware fault: System shuts down
● 52°C (125°F), no hardware fault: System cache is disabled
● 50°C (122°F), single fan fault: System shuts down
● Any ambient temperature, multiple fan faults: System shuts down after a five-minute timer expires for destaging cache
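Table 18 can be read as a simple decision rule on ambient temperature and fan faults. The following Python sketch illustrates that reading only; it is not the system firmware, and any ordering or hysteresis beyond what the table states is an assumption.

```python
# Minimal sketch of the Table 18 behavior (illustration only, not firmware).
def thermal_action(ambient_c: float, fan_faults: int) -> str:
    """Return the protective action the table associates with these conditions."""
    if fan_faults >= 2:
        return "shut down after the five-minute cache-destage timer expires"
    if ambient_c > 62:
        return "shut down"
    if fan_faults == 1 and ambient_c >= 50:
        return "shut down"
    if ambient_c >= 52:
        return "disable system cache"
    return "continue normal operation"

print(thermal_action(53, 0))   # -> disable system cache
print(thermal_action(50, 1))   # -> shut down
```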

DPE airflow
The enclosure uses an adaptive cooling algorithm that increases/decreases fan speed as the unit senses changes to the external
ambient temperature. Exhaust increases with ambient temperature and fan speed, and is roughly linear within recommended
operating parameters. Note that the information in the table below is typical, and was measured without cabinet front/rear
doors that would potentially reduce front-to-back air flow.

Table 19. DPE airflow
● Maximum airflow: 106 CFM
● Minimum airflow: 40 CFM
● Maximum power usage: 850 W
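Because exhaust is described above as roughly linear with fan speed within the recommended operating range, a crude estimate can interpolate between the Table 19 airflow points. The sketch below is an assumption-laden illustration, not a published airflow model.

```python
# Rough linear interpolation between the Table 19 airflow points
# (assumption: airflow scales linearly with fan speed, per the text above).
MIN_CFM, MAX_CFM = 40.0, 106.0

def estimated_airflow_cfm(fan_speed_fraction: float) -> float:
    """fan_speed_fraction: 0.0 = minimum fan speed, 1.0 = maximum fan speed."""
    f = min(max(fan_speed_fraction, 0.0), 1.0)
    return MIN_CFM + (MAX_CFM - MIN_CFM) * f

print(estimated_airflow_cfm(0.5))   # -> 73.0 CFM at mid fan speed
```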

Environmental recovery
If the system exceeds the maximum ambient temperature by approximately 10°C (18°F), the storage processors (SPs) in the
system begin an orderly shutdown that saves cached data, and then shut themselves down. Link control cards (LCCs) in each
DAE in the system power down their disks but remain powered on. If the system detects that the temperature has dropped to
an acceptable level, it restores power to the SPs and the LCCs restore power to their disks.

Air quality requirements


The products are designed to be consistent with the requirements of the American Society of Heating, Refrigeration and Air
Conditioning Engineers (ASHRAE) Environmental Standard Handbook and the most current revision of Thermal Guidelines for
Data Processing Environments, Second Edition, ASHRAE 2009b.
Cabinets are best suited for Class 1 datacom environments, which consist of tightly controlled environmental parameters,
including temperature, dew point, relative humidity and air quality. These facilities house mission-critical equipment and are
typically fault-tolerant, including the air conditioners.
The data center should maintain a cleanliness level as identified in ISO 14644-1, class 8 for particulate dust and pollution control.
The air entering the data center should be filtered with a MERV 11 filter or better. The air within the data center should be
continuously filtered with a MERV 8 or better filtration system. In addition, efforts should be maintained to prevent conductive
particles, such as zinc whiskers, from entering the facility.
The allowable relative humidity level is 20 to 80% non condensing, however, the recommended operating environment range
is 40 to 55%. For data centers with gaseous contamination, such as high sulfur content, lower temperatures and humidity are
recommended to minimize the risk of hardware corrosion and degradation. In general, the humidity fluctuations within the data
center should be minimized. It is also recommended that the data center be positively pressured and have air curtains on entry
ways to prevent outside air contaminants and humidity from entering the facility.
For facilities below 40% relative humidity, it is recommended to use grounding straps when contacting the equipment to avoid
the risk of Electrostatic discharge (ESD), which can harm electronic equipment.
As part of an ongoing monitoring process for the corrosiveness of the environment, it is recommended to place copper and
silver coupons (per ISA 71.04-1985, Section 6.1 Reactivity), in airstreams representative of those in the data center. The

monthly reactivity rate of the coupons should be less than 300 Angstroms. When monitored reactivity rate is exceeded, the
coupon should be analyzed for material species and a corrective mitigation process put in place.
Storage time (unpowered) recommendation: do not exceed 6 consecutive months of unpowered storage.

Fire suppressant disclaimer


Fire prevention equipment in the computer room should always be installed as an added safety measure. A fire suppression
system is the responsibility of the customer. When selecting appropriate fire suppression equipment and agents for the data
center, choose carefully. An insurance underwriter, local fire marshal, and local building inspector are all parties that you should
consult during the selection of a fire suppression system that provides the correct level of coverage and protection.
Equipment is designed and manufactured to internal and external standards that require certain environments for reliable
operation. We do not make compatibility claims of any kind nor do we provide recommendations on fire suppression systems. It
is not recommended to position storage equipment directly in the path of high pressure gas discharge streams or loud fire sirens
so as to minimize the forces and vibration adverse to system integrity.
NOTE: The previous information is provided on an “as is” basis and provides no representations, warranties, guarantees or
obligations on the part of our company. This information does not modify the scope of any warranty set forth in the terms
and conditions of the basic purchasing agreement between the customer and the manufacturer.

Shock and vibration


Products have been tested to withstand the following shock and random vibration levels. The levels apply to all three axes, should be measured with an accelerometer on the equipment enclosures within the cabinet, and shall not exceed any of the following values.

Platform condition Response measurement level


Non-operational shock: 25 Gs, 3 ms duration
Operational shock: 6 Gs, 11 ms duration
Non-operational random vibration: 0.40 Grms, 5–500 Hz, 30 minutes
Operational random vibration: 0.21 Grms, 5–500 Hz, 10 minutes

Systems that are mounted on an approved package have completed transportation testing to withstand the following shock and vibration levels in the vertical direction only, and shall not exceed:

Packaged system condition Response measurement level


Transportation shock: 10 Gs, 12 ms duration
Transportation random vibration: 0.28 Grms, frequency range 1-100 Hz, 4 hours

Shipping and storage requirements


CAUTION: Systems and components must not experience changes in temperature and humidity that are likely
to cause condensation to form on or in that system or component. Do not exceed the shipping and storage
temperature gradient of 45°F/hr (25°C/hr).

Table 20. Shipping and storage requirements


Requirement Description
Ambient temperature -40° F to 149°F (-40°C to 65°C)
Temperature gradient 45°F/hr (25°C/hr)
Relative humidity 10% to 90% noncondensing

Elevation -50 to 35,000 ft (-16 to 10,600 m)
Storage time (unpowered) Do not exceed 6 consecutive months of
unpowered storage.

3
Hardware component descriptions
This section describes the Unity 300/300F/350F/380/380F, Unity 400/400F/450F, Unity 500/500F/550F, and Unity
600/600F/650F platform components. Included with the component description are illustrations and tables of the LEDs, ports
or connectors, and any controls.
NOTE: In the following sections, the illustrations and corresponding tables describe these individual components. These
descriptions are for illustrative purposes only.

Topics:
• Disk processor enclosure
• 2U DPE rear view
• Storage processor internal components

Disk processor enclosure


Two types of disk drive DPEs are supported:
● 3.5-inch disk drives (hot-swappable)
● 2.5-inch disk drives (hot-swappable)
NOTE: Disk drives used in the 2U, 12 disk drive DPE cannot be interchanged with the disk drives from a 2U, 25 disk drive
DPE.

NOTE: When calculating the number of drives supported, the DPE is included in the total drive slot quantity.

Each model supports a different number of drive slots and DAEs.


● Unity 300F/350F and Unity 300 - 150 drive slots
● Unity 400F/450F and Unity 400 - 250 drive slots
● Unity 500F/550F and Unity 500 - 500 drive slots
● Unity 600F/650F and Unity 600 - 1000 drive slots
The number of DAEs supported by the Unity™ All Flash and Unity Hybrid is variable depending on the drive type in the DPE and
DAEs. A Dell Unity™ All Flash and Unity Hybrid system cannot be configured with more drive slots than supported and will fault
the DAE that contains the slots above the system limits.
Unless the array is restricted by its slot count, each back-end loop could contain:
● Ten 3U, 15-drive DAEs (150 slots)
● Ten 2U, 25-drive DAEs (250 slots)
● Three 3U, 80-drive DAEs (240 slots)

General disk processor enclosure information


The DPE (disk processor enclosure) comprises the following components:
● Drive carrier
● Disk drives
● Midplane
● Storage processor (SP) CPU
● SP power supply module
● EMI shielding



Drive carrier
The disk drive carriers are metal and plastic assemblies that provide smooth, reliable contact with the enclosure slot guides and
midplane connectors. Each carrier has a handle with a latch and spring clips. The latch holds the disk drive in place to ensure
proper connection with the midplane. Disk drive activity/fault LEDs are located on the front of the enclosure.

Disk drives
Each disk drive consists of one disk drive in a carrier. You can visually distinguish between disk drive types by their different
latch and handle mechanisms and by type, capacity, and speed labels on each disk drive. You can add or remove a disk drive
while the DPE is powered up, but you should exercise special care when removing modules while they are in use. Disk drives are
extremely sensitive electronic components.

Midplane
A midplane separates the front-facing disk drives from the rear-facing SPs. It distributes power and signals to all components in
the enclosure. SPs and disk drives plug directly into the midplane.

Storage processor (SP) assembly


The SP assembly is the intelligent component of the DPE. Acting as the control center, each SP assembly includes status LEDs.

SP power supply module


Each SP contains a power supply module that connects the system to an external power source. Each power supply includes LEDs to indicate component status. A latch on the module locks it into place to ensure proper connection.

EMI shielding
EMI compliance requires a properly installed electromagnetic interference (EMI) shield in front of the DPE disk drives. When
installed in cabinets that include a front door, the DPE includes a simple EMI shield. Other installations require a front bezel that
has a locking latch and integrated EMI shield. You must remove the bezel/shield to remove and install the disk drives.



2U, 12 (3.5-inch) disk drive DPE
The following illustration shows the location of the disk drives and the status LEDs in a 2U, 12 (3.5-inch) disk drive DPE.

Figure 3. Example of the 2U, 12 (3.5-inch) disk drive DPE (front view)

Table 21. 2U, 12 (3.5-inch) disk drive DPE descriptions


1. 3.5-inch SAS disk drive
2. DPE fault LED (amber)
3. DPE power on LED (blue)
4. Disk drive ready/activity and fault LED (blue and amber)

The following table describes the 2U, 12 (3.5-inch) disk drive DPE and the disk drive status LEDs.

Table 22. 2U, 12 (3.5-inch) DPE and disk drive LEDs


● DPE fault (location 2): Amber on = DPE fault, including SP faults; Off = Normal
● DPE power (location 3): Blue on = Powering and powered up; Off = Powered down
● Disk drive ready/activity and fault (location 4):
  ○ Blue on = Powering and powered up
  ○ Blue blinking, mostly on = Disk drive is on with I/O activity
  ○ Blue blinking at a constant rate = Disk drive is spinning up or down normally
  ○ Blue blinking, mostly off = Disk drive is powered up but not spinning (a normal part of the spin-up sequence, occurring during the spin-up delay of a slot)
  ○ Amber on = Fault has occurred
  ○ Off = Disk drive is powered down
  NOTE: The disk drive LED (a left or right triangle symbol) points to the disk drive that it refers to.



Product Serial Number Tag
The Product Serial Number Tag (PSNT) is a serialized label allowing Dell service to track nested hardware material in the field.
The PSNT for the 12-slot DPE is a pull-out tag that is located in the upper right side of the enclosure.

Figure 4. PSNT location

2U, 25 (2.5-inch) disk drive DPE


The following illustration shows the location of the disk drives and the status LEDs in a 2U, 25 (2.5-inch) disk drive DPE.

Figure 5. Example of the 2U, 25 (2.5-inch) disk drive DPE (front view)

Table 23. 2U, 25 (2.5-inch) disk drive DPE details


1. 2.5-inch SAS disk drive
2. DPE fault LED (amber)
3. DPE power status LED (blue)
4. Disk drive fault LED (amber)
5. Disk drive ready/activity LED (blue)



The following table describes the 2U, 25 (2.5-inch) disk drive DPE and the disk drive status LEDs.

Table 24. 2U, 25 (2.5-inch) DPE and disk drive LEDs


● DPE fault (location 2): Off = No fault has occurred, normal operation; Amber on = Fault has occurred
● DPE power (location 3): Blue on = Powering and powered up; Off = Powered down
● Disk drive fault (location 4): Amber on = Fault has occurred; Off = No fault has occurred
● Disk drive on/activity (location 5): Blue on = Powering and powered up; Blue blinking = Disk drive activity

Product Serial Number Tag


The Product Serial Number Tag (PSNT) is a serialized label allowing Dell service to track nested hardware material in the field.
The PSNT for the 25-slot DPE is a pull-out tag that is located between the disk drives in slots 16 and 17.


Figure 6. PSNT location

2U DPE rear view


On the rear of the 2U DPE, viewed from top to bottom, each logical SP (B and A) consists of:
● One power supply module
● One storage processor
● Up to two Ultraflex I/O modules



The following illustration shows the location of the replaceable components at the back of the DPE.


Figure 7. DPE rear view with component locations

Table 25. DPE rear view descriptions


Location Description
1 Power supply module (SP B)
2 Storage processor assembly (SP B)
3 Ultraflex I/O module slots (SP B), filler modules shown
4 SP A

Storage processor rear view


On the rear of the storage processor, viewing from left to right, are:
● Two RJ45 LAN management ports (labeled with a network management symbol and a wrench symbol)
● SP status LEDs
● One mini-HDMI port and one USB 3.0 port
● Reset button (NMI)
● Two 10-GbE ports
● Two 12-Gb/s mini-SAS HD ports
● Two integrated Converged Network Adapter (CNA) ports



The following illustration shows the location of the SP components:


Figure 8. Example storage processor rear view

Table 26. Storage processor rear view descriptions


Location Description
1 Management LAN (RJ45) port
2 Grounding screw (required for DC-powered systems)
3 Torque knob for SP removal
4 Two converged network adapter (CNA) ports (labeled 4 and 5)
5 Two 12 Gb/s mini-SAS HD ports (labeled 0 and 1)
6 Two 10 GbE ports (labeled 2 and 3)
NOTE: Unity XT 380/380F systems that are manufactured in the second half of 2022 and later will not include these two 10 GbE ports. For more information, see the Dell Unity XT: Introduction to the Platform - A Detailed Review white paper on Online Support.
7 USB 3.0 port
8 SP unsafe to remove LED (black with white hand)
9 SP fault LED
10 SP power LED
11 Non-maskable interrupt (NMI) push button (password reset button) a
12 SP memory or boot fault LED
13 Mini-HDMI port (not used)
14 Service LAN (RJ45) port

a. NMI = non-maskable interrupt, a push button used for resetting the password and forcing a system dump. Hold it for 2 seconds to reset the password. Holding it for 10 seconds or more forces a reboot.

The following table describes the SP status LEDs.

Table 27. Storage processor LED details


LED Location Color State Description
SP power LED 10 Green On The SP is on main power.
Blinking (1 Hz) The SP is initializing a serial over LAN (SOL) session (standby mode).
— Off The SP is off.
Unsafe to Remove LED 8 White On DO NOT remove the SP. Improper removal of the SP when this LED is lit could cause data loss during critical situations.
— Off Safe to remove the SP without the risk of data loss when the SP has been properly prepared.
SP fault LED 9 Amber Blinking once every four seconds (.25 Hz) BIOS is running.
Blinking once every second (1 Hz) POST is running.
Blinking four times every second (4 Hz) POST has completed, and operating system boot has started.
On An SP fault is detected.
Blue Blinking once every four seconds (.25 Hz) Operating system is booting.
Blinking once every second (1 Hz) Operating system driver is starting.
Blinking four times every second (4 Hz) Operating system caching driver is starting.
On SP is in degraded mode, or the system is not initialized and a management IP address is assigned. Once the license is accepted, the SP fault LED turns off.
— Off All operating system software has booted, and the SP is ready for I/O.
Amber and blue Alternating at one second intervals SP is in Service mode.
Amber and then immediately blue every three seconds System is not initialized, and no management IP address is assigned.
SP memory or boot fault LED 12 Amber On The SP cannot boot due to a memory or boot fault.
— Off Normal operation.
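
The SP fault LED blink codes above can be summarized as a simple lookup. The following is an illustrative sketch only; the dictionary, function name, and state strings are invented for this example and are not part of any Dell software. The authoritative reference is Table 27.

```python
# Hypothetical helper that maps an observed SP fault LED pattern (color plus
# blink rate) to the boot stage or condition described in Table 27.
SP_FAULT_LED_STATES = {
    ("amber", "blink_0.25hz"): "BIOS is running",
    ("amber", "blink_1hz"): "POST is running",
    ("amber", "blink_4hz"): "POST complete, operating system boot started",
    ("amber", "on"): "SP fault detected",
    ("blue", "blink_0.25hz"): "Operating system is booting",
    ("blue", "blink_1hz"): "Operating system driver is starting",
    ("blue", "blink_4hz"): "Operating system caching driver is starting",
    ("blue", "on"): "SP degraded, or system not initialized (management IP assigned)",
    ("off", "off"): "Operating system booted, SP ready for I/O",
    ("amber/blue", "alternate_1s"): "SP is in Service mode",
    ("amber/blue", "amber_then_blue_3s"): "System not initialized, no management IP assigned",
}

def describe_sp_fault_led(color, pattern):
    """Return the Table 27 description for an observed SP fault LED state."""
    return SP_FAULT_LED_STATES.get((color, pattern), "Unknown LED state")

if __name__ == "__main__":
    print(describe_sp_fault_led("blue", "blink_1hz"))
```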

About converged network adapter (CNA) ports


Each SP contains two integrated CNA ports (labeled 4 and 5). These ports are PCI Express 3.0 x4 adapters that provide interfaces that can be configured as either Ethernet or Fibre Channel; once set to a protocol, they cannot be changed. If the CNA ports are set to Ethernet, you can use either 1 Gb/s or 10 Gb/s SFPs or TwinAx for File (IP) or iSCSI Block access. If the ports are set to Fibre Channel, you can use multi-mode SFPs supporting 4, 8, or 16 Gb/s FC, or single-mode SFPs supporting 16 Gb/s only.



NOTE: Once you set the network protocol on the CNA ports you cannot switch to a different network protocol.
Additionally, the four CNA ports cannot be configured independently; they must all be configured with the same network
protocol. For example, if you configure the CNA ports for 10Gb/s Ethernet you cannot then later switch these ports to
Fibre Channel.

Table 28. CNA configurations


Speed Protocol Connection
1 Gb/s iSCSI and IP/file BASE-T RJ45 Ethernet
10 Gb/s iSCSI and IP/file SFP+ or Active/Passive TwinAx
4/8/16 Gb/s Fibre Channel 1 SFP+ or OM2/OM3
4/8 Gb/s Fibre Channel SFP+ or OM2/OM3
16 Gb/s Fibre Channel (Single Mode 2 ) SFP+ or OS1/OS2
1- You may experience performance issues when directly attaching 16Gb/s FC ports to some 16Gb/s HBAs. See the Unity
Family Release Notes for more details.
2- If there is a synchronous replication port, it can be configured as single mode and the remaining ports can be configured as
multi mode.
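
The single-protocol restriction on CNA ports described above can be illustrated with a small configuration check. This is a hypothetical sketch; the port names and data structure are invented for illustration, and the storage system itself enforces these rules through its management software.

```python
# Hypothetical check that all CNA ports are configured with one supported
# protocol and that no mix of protocols is present.
ALLOWED_PROTOCOLS = {"ethernet", "fibre_channel"}

def validate_cna_ports(ports):
    """ports maps a CNA port name (for example 'SPA-4') to its configured protocol."""
    protocols = {protocol.lower() for protocol in ports.values()}
    unsupported = protocols - ALLOWED_PROTOCOLS
    if unsupported:
        raise ValueError(f"Unsupported protocol(s): {unsupported}")
    if len(protocols) > 1:
        raise ValueError("All CNA ports must be configured with the same protocol")

if __name__ == "__main__":
    # Passes: every CNA port uses the same protocol.
    validate_cna_ports({"SPA-4": "ethernet", "SPA-5": "ethernet",
                        "SPB-4": "ethernet", "SPB-5": "ethernet"})
    print("CNA configuration is consistent")
```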


Figure 9. CNA port locations

CNA port activity LED


The CNA port activity LED, a bi-color blue/green LED located between the two CNA port connectors, indicates the link/activity status of the port. The port activity LED color depends on the protocol configured on the CNA.
● Fibre Channel CNA ports use a blue LED
● Ethernet CNA ports use a green LED
The following table describes the link/activity and connection speed associated with the CNA port LEDs.

Table 29. CNA port LEDs


LED Color State Description
Link/Activity Green On Ethernet link active
Blinking (1 Hz) Ethernet port fault
Blue On Fibre Channel link active
Blinking (1 Hz) Fibre Channel port fault
— Off Link inactive (Ethernet or FC)

SP I/O module types


Many I/O module types are supported by the storage processor.
NOTE: When adding new I/O modules, always install I/O modules in pairs—one module in SP A and one module in SP B.
Both SPs must have the same type of I/O modules in the same slots.
Refer to Platform Overview for details on the supported types and the system limits of storage processor I/O modules.



● Four-port 12-Gb/s SAS I/O module -- where supported, provides four mini-HD SAS ports (x16 lane) of 12Gb SAS expansion
for connecting additional DAEs. This I/O module also supports controller based encryption. Labeled 12Gb SAS v1.
● Four-port 16-Gb/s Fibre Channel I/O module -- provides Fibre Channel connectivity as listed below. Labeled 16Gb Fibre v3.
○ Four ports auto-negotiating to 4/8/16Gbps. Uses optical SFP+ and OM2/OM3 cabling to connect directly to a host HBA
or FC switch.
○ One FC port negotiating to 16Gbps, which can be configured for synchronous replication between two Unity systems,
either directly connected or connected through a switch. Uses optical SFP+ and SM or MM cabling to provide
synchronous replication. The three remaining ports auto-negotiate to 4/8/16 Gbps, and use optical SFP+ and OM2/OM3
cabling to connect directly to a HBA or FC switch.
● Four-port 10-Gb/s optical I/O module -- provides four SFP+ optical or Active/Passive TwinAx 10GbE IP/iSCSI ports for
connections to an Ethernet switch. Supports both IP(file) and iSCSI (Block) on the same I/O module. Ports can be
configured as both IP and iSCSI simultaneously. Labeled 10 GbE v5.
● Four-port 10GBASE-T I/O module -- provides four copper 10GBASE-T RJ45 Ethernet ports for copper connections to an
Ethernet switch. Supports both IP (file) and iSCSI (Block) on the same IO module. Ports can be configured as both IP and
iSCSI simultaneously. Labeled 10GbE BaseT v2.
● Four-port 1GBASE-T I/O module -- provides four 1000BASE-T RJ-45 copper ports for Cat 5/6-cabling connections to an
Ethernet switch. Supports both IP (file) and iSCSI (Block) on the same I/O module. Ports can be configured as both IP and
iSCSI simultaneously. Labeled 1 GbE BaseT v3.
● Two-port 10Gb/s optical I/O module -- provides two SFP+ optical or Active/Passive TwinAx 10GbE ports for connections
to an Ethernet switch. Supports both IP (file) and full iSCSI Offload engine (Block) on the same IO module. Ports can be
configured as both IP and iSCSI simultaneously. Labeled 10 GbE V6.

Detailed introduction to supported I/O modules


Overview of the supported optional I/O modules available for use in your system.
Review these sections to learn about the uses, features, ports, and LEDs for the supported optional I/O modules.



Four-port 12-Gb/s SAS
Where supported, the four-port (x16 lane) 12-Gb/s SAS I/O module comes with four x4 lane mini-SAS HD (High Density) ports,
one power/fault LED, and a combination link/activity LED for each port. Install this I/O module into the SP to provide additional
SAS buses. Labeled 12Gb SAS v1.

NOTE: The optional back-end 12-Gb/s SAS module is not supported on all Unity storage systems.

The four-port 12-Gb/s SAS I/O module can also be configured to support x8 lane cabling for the 80-drive DAE by combining
ports 0 and 1 as back-end 2, or ports 2 and 3 to create back-end 4. The I/O module can also be configured to support both x4
lane and x8 lane back-ends simultaneously.
NOTE: If the 12-Gb/s SAS I/O module is to be configured for x8 lane cabling, the x8 lane cable must be inserted into the
I/O module before persisting it. If the x8 lane cables are not inserted into the I/O module first, all four ports default to x4
lane ports.
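
The x4/x8 lane behavior described above can be sketched as a simple port-pairing rule: ports 0 and 1 combine to form back-end 2, ports 2 and 3 combine to form back-end 4, and any port not used in an x8 pair defaults to its own x4 back-end (buses 2 through 5). The function below is a hypothetical illustration, not a configuration tool.

```python
# Back-end bus -> SAS I/O module ports combined when x8 lane cabling is used.
X8_PORT_PAIRS = {2: (0, 1), 4: (2, 3)}

def sas_module_backends(x8_buses):
    """Return the back-end buses provided by the four-port SAS module, given
    which buses (2 and/or 4) are cabled with x8 lane cables."""
    backends = {}
    used_ports = set()
    for bus, ports in X8_PORT_PAIRS.items():
        if bus in x8_buses:
            backends[bus] = ports
            used_ports.update(ports)
    # Remaining ports each provide one x4 lane back-end (port 0 -> BE2 ... port 3 -> BE5).
    for port in range(4):
        if port not in used_ports:
            backends[2 + port] = (port,)
    return backends

if __name__ == "__main__":
    print(sas_module_backends(set()))   # all x4: {2: (0,), 3: (1,), 4: (2,), 5: (3,)}
    print(sas_module_backends({2}))     # x8 on ports 0 and 1, x4 on ports 2 and 3
```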

Figure 10. Four-port 12-Gb/s SAS locations

Table 30. Four-port 12-Gb/s SAS location details


Location Description
1 Push button latch handle and part number label
2 Power/fault LED
3 12-Gb/s mini-SAS HD port
4 Port link/activity LED

This four-port 12-Gb/s SAS I/O module has two different types of status LEDs.

Table 31. Four-port 12-Gb/s SAS LED descriptions


LED Location Color State Description
Power/Fault 2 Green On I/O module is powered on.
Amber On I/O module has faulted.
— Off I/O module is powered off.
Link/activity 4 Blue On Network connection
Blue Blinking Transmit/receive activity
— Off No activity



Four-port 16-Gb/s Fibre Channel
The four-port 16-Gb/s FC I/O module comes with four optical (fibre) ports, one power/fault LED, and a link/activity LED for
each optical port. This I/O module can interface at speeds of 4, 8, and 16 Gb/s FC for host or initiator layered connections.
Labeled 16Gb Fibre v3.

Figure 11. Four-port 16-Gb/s Fibre Channel locations

Table 32. Four-port 16-Gb/s Fibre Channel location details


Location Description
1 Push button latch handle and part number label
2 Power/fault LED
3 16-Gb/s FC port
4 Port link/activity LED (blue)

This four-port 16-Gb/s FC I/O module has two different types of status LEDs.

Table 33. Four-port 16-Gb/s Fibre Channel LED descriptions


LED Location Color State Description
Power/Fault 2 Green On I/O module is powered on.
Amber On I/O module has faulted.
— Off I/O module is powered off.
Link/activity 4 Blue On Network connection
Blue Blinking Small form-factor pluggable
(SFP+) transceiver module
faulted, unsupported, or optical
cable fault.
— Off No network connection



Four-port 10-Gb/s optical
The four-port 10-GbE optical SFP or active/passive TwinAx I/O module comes with four 10-Gb/s ports, one power/fault LED, and a link/activity LED for each port. This I/O module can interface at 10 Gb/s and supports both IP (file) and iSCSI (Block) on the same I/O module. Ports can be configured as both IP and iSCSI simultaneously. Labeled 10 GbE v5.

Figure 12. Four-port 10-Gb/s optical locations

Table 34. Four-port 10-Gb/s optical location details


Location Description
1 Push button latch handle and part number label
2 Power/fault LED
3 10-Gb/s optical or TwinAx Ethernet port
4 Port link/activity LED

This four-port 10-GbE optical SFP or active/passive TwinAx I/O module has two types of status LEDs.

Table 35. Four-port 10-Gb/s optical LED descriptions


LED Location Color State Description
Power/Fault 2 Green On I/O module is powered on.
Amber On I/O module has faulted.
— Off I/O module is powered off.
Link/activity 4 Green On Network connection
Green Blinking Small form-factor pluggable
(SFP+) transceiver module
faulted, unsupported, or optical
cable fault.
— Off No network connection



Four-port 10GBASE-T
The four-port 10-GbE BaseT I/O module comes with four 10-Gb/s RJ-45 ports, one power/fault LED, activity LED, and link LED
for each port. This I/O module can interface at speeds of 1 Gb/s and 10 Gb/s and supports both IP(file) and iSCSI (Block) on
the same IO module. Ports can be configured as both IP and iSCSI simultaneously. Labeled 10GbE BaseT v2.

Figure 13. Four-port 10GBASE-T locations

Table 36. Four-port 10GBASE-T location details


Location Description
1 Push button latch handle and part number label
2 Power/fault LED
3 RJ-45 (copper) port
4 Link
5 Activity

This four-port 10-GbE BaseT I/O module has three types of status LEDs.

Table 37. Four-port 10GBASE-T LED descriptions


LED Location Color State Description
Power/Fault 2 Green On I/O module is powered on.
Amber On I/O module has faulted.
— Off I/O module is powered off.
Link 4 Green On Network connection
— Off No network connection
Activity 5 Amber Blinking Transmit/receive activity
— Off No activity



Four-port 1GBASE-T
The four-port 1-GbE BaseT I/O module comes with four 1-Gb/s RJ-45 ports, one power/fault LED, an activity LED, and a link LED for each port. This I/O module can interface at speeds of 10 Mb/s, 100 Mb/s, and 1000 Mb/s. It supports both IP (file) and iSCSI (Block) on the same I/O module. Ports can be configured as both IP and iSCSI simultaneously. Labeled 1 GbE BaseT v3.

Figure 14. Four-port 1GBASE-T locations

Table 38. Four-port 1GBASE-T location details


Location Description
1 Push button latch handle and part number label
2 Power/fault LED
3 RJ-45 (copper) port
4 Link
5 Activity

This four-port 1-GbE BaseT I/O module has three types of status LEDs.

Table 39. Four-port 1GBASE-T LED descriptions


LED Location Color State Description
Power/Fault 2 Green On I/O module is powered on.
Amber On I/O module has faulted.
— Off I/O module is powered off.
Link 4 Green On Network connection
— Off No network connection
Activity 5 Amber Blinking Transmit/receive activity
— Off No activity



Two-port 10Gb/s optical
The two-port 10-Gb/s optical SFP or active/passive TwinAx I/O module comes with two 10-Gb/s ports, one power/fault LED, and a link/activity LED for each port. This I/O module can interface at 10 Gb/s and supports full iSCSI Offload. It supports both IP (file) and full iSCSI Offload (Block) on the same I/O module. Ports can be configured as both IP and iSCSI simultaneously. Labeled 10 GbE V6.

Figure 15. Two-port 10Gb/s optical locations

Table 40. Two-port 10Gb/s optical location details


Location Description
1 Push button latch handle and part number label
2 Power/fault LED
3 10-Gb/s optical SFP or active TwinAx port
4 Port link/activity LED

This two-port 10-Gb/s optical SFP or active/passive TwinAx I/O module has two types of status LEDs.

Table 41. Two-port 10Gb/s optical LED descriptions


LED Location Color State Description
Power/Fault 2 Green On I/O module is powered on.
Amber On I/O module has faulted.
— Off I/O module is powered off.
Link/activity 4 Green On Network connection
Green Blinking Small form-factor pluggable
(SFP+) transceiver module
faulted, unsupported, or optical
cable fault.
— Off No network connection

Small form-factor pluggable (SFP) transceiver modules


Certain I/O modules use a small form-factor pluggable plus (SFP+) transceiver module for cable connections. The SFP+ transceiver modules connect to optical fibre cables with a Lucent Connector (LC) type interface (see Lucent Connector type interface for more information). These SFP+ transceiver modules are input/output (I/O) devices and are hot swappable, which means that you can install and remove an SFP+ module while the component is operating.
Example of an SFP+ module shows an example of an SFP+ module.




Figure 16. Example of an SFP+ module

Table 42. SFP+ module descriptions


Location Description
1 Dust plug (protective cap)
2 Bale clasp latch
3 Send or transmit (TX) optical bore
4 Receive (RX) optical bore

Lucent Connector type interface


The Lucent Connector (LC) type interface was developed by Lucent Technologies (hence, Lucent Connector). It uses a
push-pull mechanism. LC connectors are normally held together in a multimode duplex configuration with a plastic clip.
These cables are usually colored orange for OM2 multimode optical fiber type cables, aqua for OM3 multimode optical fiber type
cables, and yellow for single mode optical fiber type cables. The multimode cables have the duplex connectors encased in a gray
plastic covering. The single mode cables are encased in a blue plastic covering.
To distinguish the send or transmit (TX) and receive (RX) ferrules (connector ends), these cables show either a letter and numeral (for example, A1 for TX and A2 for RX) or a white and a yellow rubber gasket (jacket) on the TX and RX ends. Example of LC-type connectors shows an example of LC-type connectors.




Figure 17. Example of LC-type connectors

Table 43. LC-type connector details


Location Description
1 Cable
2 Rubber gasket (jacket), send or transmit (TX)
3 Rubber gasket (jacket), receive (RX)
4 Ferrule (connector end to SFP+ module)

SP power supply module


SP latch, power supply (power in) recessed connector (plug), and status LEDs shows the SP power supply module. Each power
supply includes three LEDs (AC, DC, and fault). A latch on the module locks it into place to ensure proper connection.


Figure 18. SP latch, power supply (power in) recessed connector (plug), and status LEDs

NOTE: The power supply used in your storage system must meet the storage system power requirements, and the same type of power supply must be used in both SPs (SP A and SP B). You cannot mix power supply types.
SP power supply (fault and power on) LEDs describes the power supply (fault and power on) LEDs.



Table 44. SP power supply (fault and power on) LEDs
LED Location Color State Description
AC power (input) 1 Green On AC Power on
— Off AC Power off, verify source power
DC power (output) 2 Green On DC Power on
— Off DC Power off, verify source power
Fault 3 Amber On Power supply or backup fault, check cable connection
Blinking BIOS, POST, and OS booting up, or system overheating
— Off No fault or power off

Storage processor internal components


Included within the SP are the following replaceable components:
● Memory modules
● Battery backup unit (BBU)
● SSD internal disk
● Cooling modules (5)

Memory modules: Four memory module slots reside on the SP printed circuit board (motherboard) within the SP. Depending on the model, three or four of these DIMM slots are populated with 8 GB, 16 GB, or 32 GB DIMMs. DIMMs used in Unity systems support error-correcting code (ECC) memory.
Battery backup unit (BBU): The SP includes a Lithium-ion (Li-ion) internal battery, or BBU, that powers the associated SP module during a power event.
SSD internal disk: Each SP has an internal disk on the top side of the SP motherboard, located adjacent to cooling module 4.
Cooling modules: Five redundant cooling modules connect to the motherboard within the SP to provide continuous airflow through the front disks and through the rear SP to keep the DPE components at optimal operating temperatures. Within each SP assembly are two adaptive cooling zones managed by the five internal cooling modules. Cooling modules 0-2 direct airflow through zone 1, and cooling modules 3 and 4 direct airflow through zone 2.
NOTE: An SP will perform a protective thermal shutdown if two cooling modules fault within the same SP.
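
The cooling zone layout and the thermal shutdown rule above can be summarized in a short sketch. The zone mapping and threshold come from this section; the functions themselves are hypothetical and purely illustrative.

```python
# Cooling zone -> cooling module numbers, as described in this section.
COOLING_ZONES = {1: (0, 1, 2), 2: (3, 4)}

def thermal_shutdown_required(faulted_modules):
    """Return True if the SP would perform a protective thermal shutdown
    (two or more cooling modules faulted within the same SP)."""
    return len(faulted_modules) >= 2

def zones_affected(faulted_modules):
    """Return the cooling zones that have at least one faulted module."""
    return {zone for zone, modules in COOLING_ZONES.items()
            if faulted_modules & set(modules)}

if __name__ == "__main__":
    print(zones_affected({1, 4}))              # {1, 2}
    print(thermal_shutdown_required({1, 4}))   # True
```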



4
Disk-array enclosures
This section describes and illustrates the front- and rear-panel controls, ports, and LED indicators on the supported disk-array
enclosures (DAEs).
Topics:
• General information on front-loading DAEs
• 2U, 25 (2.5-inch) DAE
• 3U, 15 (3.5-inch) DAE
• General information on drawer-type DAEs
• 3U, 80 (2.5-inch) DAE

General information on front-loading DAEs


Each DAE with front facing drives typically consists of the following components:
● Drive carrier
● Disk drive
● Midplane
● Link control cards (LCCs)
● Power supply/cooling modules
● EMI shielding

Drive carrier
The disk drive carriers are metal and plastic assemblies that provide smooth, reliable contact with the enclosure slot guides and
midplane connectors. Each carrier has a handle with a latch and spring clips. The latch holds the disk drive in place to ensure
proper connection with the midplane. Disk drive activity/fault LEDs are integrated into the carrier.

Disk drives
Each disk drive consists of one disk drive in a carrier. You can visually distinguish between disk drive types by their different
latch and handle mechanisms and by type, capacity, and speed labels on each disk drive. You can add or remove a disk drive
while the DAE is powered up, but you should exercise special care when removing disk drives while they are in use. Disk drives
are extremely sensitive electronic components.

Midplane
A midplane separates the front-facing disk drives from the rear-facing LCCs and power supply/cooling modules. It distributes
power and signals to all components in the enclosure. LCCs, power supply/cooling modules, and disk drives plug directly into the
midplane.

Link control cards (LCCs)


An LCC supports, controls, and monitors the DAE, and is the primary interconnect management element. Each LCC includes
connectors for input and expansion to downstream devices. An enclosure address (EA) indicator is located on each LCC. Each
LCC also includes a bus (loop) identification indicator.

Power supply/cooling modules
The power supply/cooling module integrates independent power supply and blower cooling assemblies into a single module.
Each power supply is an auto-ranging power-factor-corrected, multi-output, off-line converter with its own line cord. The drives
and LCC have individual soft-start switches that protect the disk drives and LCC if you install them while the disk enclosure is
powered up. A disk or blower with power-related faults will not affect the operation of any other device.
Each power/cooling module has three status LEDs.

EMI shielding
EMI compliance requires a properly installed electromagnetic interference (EMI) shield in front of the DAE disk drives. When
installed in cabinets that include a front door, the DAE includes a simple EMI shield. Other installations require a front bezel that
has a locking latch and integrated EMI shield. You must remove the bezel/shield to remove and install the disk drive modules.

Disk drive type


Serial Attached SCSI (SAS) disk drives and Flash drives (solid state disk drives with flash memory, or SSDs) are 12-volt drives that support the SAS interface. Firmware and drive carriers are unique to Dell.

2U, 25 (2.5-inch) DAE


The 25 (2.5-inch) disk drive DAE is 2 rack units (U), 3.40 inches, high and includes slots for 25 2.5-inch disk drives. It uses a
12-Gb/s SAS interface for communication between the storage processors (SPs) and the DAE.
Review the following sections for details on the components and LEDs comprising this DAE.

2U, 25-drive DAE front view


On the front, the 2U, 25 disk drive DAE includes the following components:
● Disk drives in 2.5-inch carriers (hot-swappable)
● Status LEDs
Example of a 2U, 25 (2.5-inch) disk drive DAE (front view) shows the location of these components.


Figure 19. Example of a 2U, 25 (2.5-inch) disk drive DAE (front view)

Table 45. 2U, 25-drive DAE descriptions


Location Description Location Description

1 2.5-inch 6-Gb/s SAS drives 4 Disk drive fault LED (amber)

2 DAE fault LED (amber) 5 Disk drive status/activity (blue)


3 DAE power status LED (blue)

2U, 25-drive DAE and disk drive status LEDs describes the 2U, 25 (2.5-inch) DAE and disk drive status LEDs.

Table 46. 2U, 25-drive DAE and disk drive status LEDs
LED Location Color State Description
DAE fault 2 Blue On No fault has occurred
Amber On Fault has occurred
DAE power 3 Blue On Powering and powered up
— Off Powered down
Disk drive fault 4 Amber On Fault has occurred
— Off No fault has occurred
Disk drive on/activity 5 Blue On Powering and powered up
Blinking Disk drive activity

2U, 25 (2.5-inch) rear view


On the rear of a 2U, 25 (2.5-inch) DAE are the following components:

● Two 12-Gb/s SAS link control cards (LCC); A ( 4 ) and B ( 2 )

● Two power supply/cooling modules; A ( 3 ) and B ( 1 )



Figure 20. 2U, 25-drive DAE rear component locations

2U, 25-drive DAE LCC

LCC functions and features


The LCC supports, controls, and monitors the DAE, and is the primary interconnect management element. Each LCC includes
connectors for input and output to downstream devices.
The LCCs in a DAE connect to the storage processors and other DAEs. The cables connect the LCCs in a system in a
daisy-chain topology.
Internally, each DAE LCC uses protocols to emulate a loop; it connects to the drives in its enclosure in a point-to-point fashion
through a switch. The LCC independently receives and electrically terminates incoming signals. For traffic from the system's
storage processors, the LCC switch passes the signal from the input port to the drive being accessed; the switch then forwards
the drive output signal to the port.

Each LCC independently monitors the environmental status of the entire enclosure, using a microcomputer-controlled monitor
program. The monitor communicates the status to the storage processor, which polls disk enclosure status. LCC firmware also
controls the SAS Phys and the disk-module status LEDs.
An enclosure ID, sometimes referred to as the enclosure address (EA), indicator is located on each LCC. Each LCC also includes
a bus (back-end port) identification indicator. The SP initializes the bus ID when the operating system is loaded.

12-Gb/s LCC ports, LEDs, and connectors


Each 2U, 25-drive DAE LCC includes the following ports, LEDs, and connectors:


Figure 21. 2U, 25-drive DAE LCC ports, LEDs, and connectors

Table 47. 2U, 25 (2.5-inch) DAE LCC descriptions


Location Description Location Description

1 Ejector latch handles 5 LCC power LED

2 LCC fault LED 6 Enclosure ID display

3 LCC management port (RJ-12) (not used) 7 12-Gb/s SAS ports

4 Back-end (BE) bus ID display 8 SAS port status LED

Table 48. 12-Gb/s LCC LEDs


LED Location Color State Description
LCC fault LED 2 Amber On Fault within the LCC
— Off No fault or powered off
LCC power LED 5 Blue On Powered on and no fault
— Off Powered off
SAS port status LED 8 Amber On SAS port faulted
Blue On SAS port linked up
— Off No connector in port

2U, 25-drive DAE power supply and cooling module

Power supply and cooling module functions and features


The power supply/cooling modules are located to the left and right of the LCCs. The units integrate independent power supply
and two dual-blower cooling assemblies into a single module.

Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord. Each
supply supports a fully configured DAE and shares load currents with the other supply. The drives and LCCs have individual
soft-start switches that protect the disk drives and LCCs if they are installed while the disk enclosure is powered up. The
enclosure cooling system includes two dual-blower modules.

Power supply and cooling module connectors and LEDs


2U, 25-drive DAE AC power supply and cooling module shows an example of a 2U, 25-drive DAE AC power supply/cooling
module with a power in (recessed) connector (plug) and status LEDs.


Figure 22. 2U, 25-drive DAE AC power supply and cooling module

Table 49. 2U, 25 (2.5-inch) DAE descriptions


Location Description
1 Ejector latch handle
2 AC power LED (input)
3 DC power LED (output)
4 Power supply/cooling module fault LED
5 Grounding screw
6 LCC B AC power supply power in (recessed plug)

Table 50. 2U, 25-drive DAE AC power supply/cooling module LEDs


LED Location Color State Description
AC power LED (input) 2 Green On AC power on
— Off AC power off, verify source power
DC power LED (output) 3 Green On DC power on
— Off DC power off, verify source power
Power supply/cooling module fault LED 4 Amber On Fault
Blinking During power shutdown, and during an overvoltage protection (OVP) or undervoltage protection (UVP) fault
— Off No fault or power off

3U, 15 (3.5-inch) DAE


The 15 (3.5-inch) disk drive DAE is 3 rack units (U), 5.25 inches, high and includes slots for 15 3.5-inch disk drives. It uses a
12-Gb/s SAS interface for communication between the storage processors (SPs) and the DAE.

Review the following sections for details on the components and LEDs comprising this DAE.

3U, 15-drive DAE Front view


On the front, the 3U, 15 disk drive DAE includes the following components:
● Disk drives in 3.5-inch carriers (hot-swappable)
● Status LEDs
Example of a 3U, 15 disk drive DAE (front view) shows the location of these components.


Figure 23. Example of a 3U, 15 disk drive DAE (front view)

Table 51. 3U, 15-drive DAE descriptions


Location Description
1 3.5-inch disk drive carriers that hold 2.5- or 3.5-inch disk drives
2 DAE fault LED
3 DAE power on LED
4 Disk drive fault LED
5 Disk drive on/activity LED

3U, 15 disk drive DAE and disk drive LEDs describes the 3U, 15 (3.5-inch) DAE and disk drive status LEDs.

Table 52. 3U, 15 disk drive DAE and disk drive LEDs
LED Location Color State Description
DAE fault 2 Amber On Fault has occurred within DAE

DAE power 3 Blue On Enclosure power on (main


voltage)
— Off Enclosure power off
Disk drive fault 4 Amber On Fault has occurred
— Off No fault has occurred
Disk drive on/activity 5 Blue On Powering and powered up
Blinking Disk drive activity
— Off Powered down

3U, 15-drive DAE rear view
On the rear, the 3U, 15-drive DAE includes the following components:

● Two 12-Gb/s SAS link control cards (LCC); A ( 3 ) and B ( 1 )

● Two power supply/cooling modules; A ( 4 ) and B ( 2 )


The 3U, 15-drive DAE rear components are redundantly distributed across two sides, A and B. When viewed from behind, the
top two components make up the B-side of the DAE, and the bottom two components make up the A-side.
3U, 15-drive DAE rear component locations shows an example of the rear view of a 3U, 15-drive DAE.


Figure 24. 3U, 15-drive DAE rear component locations

3U, 15-drive DAE LCC

Link control card functions and features


The LCC supports, controls, and monitors the DAE, and is the primary interconnect management element. Each LCC includes
connectors for input and output to downstream devices.
The LCCs in a DAE connect to the storage processors and other DAEs. The cables connect the LCCs in a system in a
daisy-chain topology.
Internally, each DAE LCC uses protocols to emulate a loop; it connects to the drives in its enclosure in a point-to-point fashion
through a switch. The LCC independently receives and electrically terminates incoming signals. For traffic from the system's
storage processors, the LCC switch passes the signal from the input port to the drive being accessed; the switch then forwards
the drive output signal to the port.
Each LCC independently monitors the environmental status of the entire enclosure, using a microcomputer-controlled monitor
program. The monitor communicates the status to the storage processor, which polls disk enclosure status. LCC firmware also
controls the SAS Phys and the disk-module status LEDs.
An enclosure ID, sometimes referred to as the enclosure address (EA), indicator is located on each LCC. Each LCC also includes
a bus (back-end port) identification indicator. The SP initializes the bus ID when the operating system is loaded.

3U, 15-drive DAE LCC connectors and LEDs


Each 3U, 15 (3.5-inch) DAE LCC includes the following ports, LEDs, and connectors:


Figure 25. 12-Gb/s LCC ports, LEDs, and connectors

Table 53. 12-Gb/s LCC ports, LEDs, and connectors


Location Description Location Description

1 LCC management port (RJ-12) (not used) 6 Captive screw

2 12-Gb/s SAS ports 7 Part number label

3 Enclosure ID display 8 LCC power LED

4 LCC fault LED 9 SAS port status LED

5 Back-end (BE) bus ID display

Review 12-Gb/s LCC LEDs for the LED descriptions and status meanings.

Table 54. 12-Gb/s LCC LEDs


LED Location Color State Description
LCC fault LED 4 Amber On Fault within the LCC
— Off No fault or powered off
LCC power LED 8 Blue On Powered on and no fault
— Off Powered off
SAS port status LED 9 Amber On SAS port faulted
Blue On SAS port linked up
— Off No connector in port

3U, 15-drive DAE power supply and cooling module

Power supply and cooling module functions and features


The power supply/cooling modules are located above and below the LCCs. The units integrate independent power supply and
dual-blower cooling assemblies into a single module.
Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord. Each
supply supports a fully configured DAE and shares load currents with the other supply. The drives and LCCs have individual
soft-start switches that protect the disk drives and LCCs if they are installed while the disk enclosure is powered up. The
enclosure cooling system includes two dual-blower modules.

Power supply and cooling module connectors and LEDs
3U, 15-drive DAE power supply and cooling module shows an example of the 3U 15 (3.5 inch) disk drive DAE AC power supply/
cooling module with a power in (recessed) connector (plug) and status LEDs.


Figure 26. 3U, 15-drive DAE power supply and cooling module

Table 55. 3U 15 disk drive DAE AC power supply/cooling module


Location Description Location Description

1 AC power in (recessed plug) connector 4 Power supply fault LED

2 Cooling fault LED 5 Part number label

3 Power supply on LED 6 Captive screw

Review 3U 15 disk drive DAE AC power supply/cooling module LEDs for the LED descriptions and status meanings.

Table 56. 3U 15 disk drive DAE AC power supply/cooling module LEDs


LED Location Color State Description
Cooling fault 2 Amber On Fault, one or both blowers not
operating normally
— Off No fault, blowers operating
normally
Power supply on 3 Green On Power on
— Off Power off
Power supply fault 4 Amber On Fault
Blinking During power shutdown and during
overvoltage and undervoltage
protection (OVP/UVP) fault
— Off No fault or power off

General information on drawer-type DAEs


Each DAE with internal drives typically consists of the following components:
● Drive carrier
● Disk drive
● Link control cards (LCCs)
● Power supply
● Cooling modules
● EMI shielding
● Cable management arms

Drive carrier
The disk drive carriers are metal and plastic assemblies that provide smooth, reliable contact with the enclosure slot guides and
midplane connectors. Each carrier has a handle with a latch and spring clips. The latch holds the disk drive in place to ensure
proper connection with the midplane. Disk drive activity/fault LEDs are integrated into the carrier.

Disk drives
Each disk drive consists of one disk drive in a carrier. You can visually distinguish between disk drive types by their different
latch and handle mechanisms and by type, capacity, and speed labels on each disk drive. You can add or remove a disk drive
while the DAE is powered up, but you should exercise special care when removing disk drives while they are in use. Disk drives
are extremely sensitive electronic components.

Link control cards (LCCs)


An LCC supports, controls, and monitors the DAE, and is the primary interconnect management element. Each LCC includes
connectors for input and expansion to downstream devices. An enclosure address (EA) indicator and bus (loop) identification
indicator is located on one LCC of each DAE.

Power supply
The power supplies and cooling modules or fans are separate. The power supplies are located on the rear. The power supply module has an orange knob that is used for removing and installing the power supply module from the DAE.

Cooling modules (Fans)


The cooling modules or fans are separate from the power supply modules. The cooling modules or fans are located on the front
and middle of the drawer-type DAEs, depending on DAE type. The cooling modules or fans can only be installed/removed by
sliding the DAE forward. You access the cooling modules or fans from inside the DAE.

EMI shielding
EMI compliance requires a properly installed electromagnetic interference (EMI) shield in front of the DAE disk drives. When
installed in cabinets that include a front door, the DAE includes a simple EMI shield. Other installations require a front bezel that
has a locking latch and integrated EMI shield. You must remove the bezel/shield to remove and install the disk drive modules.

Cable management arms


Locking Scissor-type cable management arms attach to the rear of the drawer-type DAEs to provide easy cable management
for the power cords and SAS cables that attach to the rear ports of the DAE. The cable management arms extend to an open
position when the unlocked DAE is pulled forward in the cabinet and retract to a closed position when the DAE is pushed back
into the cabinet.

3U, 80 (2.5-inch) DAE


The 80 (2.5-inch) disk drive DAE is 3 rack units (U), 3.4 inches (8.64 cm) high, and includes slots for 80 2.5-inch disk drives. It
uses a 12-Gb/s SAS interface for communication between the storage processors (SPs) and the DAE.
Review the following sections for details on the components and LEDs comprising this DAE.

3U, 80-drive DAE top view

Component overview
The 3U, 80-drive DAE includes the following internal components:

● Disk drives in 2.5-inch carriers (hot-swappable) ( 1 )


● 10 redundant cooling modules

○ Five in the front of the system, labeled 0-4 ( 2 )

○ Five at the rear of the system, labeled 5-9 ( 3 )


The disk drive slots and cooling modules on an 80-drive DAE are located inside the enclosure. To access the disk drives, release
and pull the enclosure out of the cabinet. The enclosure slides out of the cabinet far enough for you to access its internal
components, and then locks on the rails in the service position so that you cannot pull it out any farther.

Figure 27. 3U, 80-drive DAE internal component locations (top view)

Disk drive LEDs

Figure 28. 2.5 inch disk drive LEDs

LED Location Color State Description


Disk drive on/activity 1 Blue On Powering and powered up
Blinking Disk drive activity
Disk drive fault 2 Amber On Fault has occurred
- Off No fault has occurred

Cooling module LEDs


Cooling modules contain only one LED, to indicate that the part has faulted.


Figure 29. Cooling module fault LED location

3U, 80-drive DAE front view


There is only one component accessible from the front of the 3U 80-drive DAE, the system status card (SSC).

Figure 30. 3U 80-drive DAE system status card location

Table 57. System status card status LEDs


LED Location Color State Description
System status card fault LED 1 Amber On Fault within the system status
card
- Off No fault
System fault LED 2 Amber On Component within the system (disk, fan, LCC, or power supply) has faulted
- Off No fault
System status card power LED 3 Blue On Powered on and no fault
- Off Powered off

3U, 80-drive DAE rear view


The following components are accessible from the rear of the 3U, 80-drive DAE:

● Two 12-Gb/s SAS link control cards (LCC); A ( 2 ) and B ( 1 )

● Four power supplies ( 3 )


The 3U, 80-drive DAE rear components are redundantly distributed across two sides, A and B. When viewed from behind, the
right half of the system makes up the A-side of the DAE, and the left half of the system makes up the B-side.

Figure 31. 3U, 80-drive DAE rear component locations

3U, 80-drive DAE LCC

Link control card functions and features


The LCC supports, controls, and monitors the DAE, and is the primary interconnect management element. Each LCC includes
connectors for input and output to downstream devices.
The LCCs in a DAE connect to the storage processors and other DAEs. The cables connect the LCCs in a system in a daisy-chain topology.
Internally, each DAE LCC uses protocols to emulate a loop; it connects to the drives in its enclosure in a point-to-point fashion
through a switch. The LCC independently receives and electrically terminates incoming signals. For traffic from the system's
storage processors, the LCC switch passes the signal from the input port to the drive being accessed; the switch then forwards
the drive output signal to the port.
Each LCC has four ports marked AA/A and BB/B. The A and B ports are used when connecting (A) or expanding (B) using x4
lane cables. The AA/A and BB/B ports are both used when connecting (AA/A) or expanding (BB/B) using x8 lane cabling.
Each LCC independently monitors the environmental status of the entire enclosure, using a microcomputer-controlled monitor
program. The monitor communicates the status to the storage processor, which polls disk enclosure status. LCC firmware also
controls the SAS Phys and the disk-module status LEDs.
An enclosure ID, sometimes referred to as the enclosure address (EA), indicator is located on each LCC. Each LCC also includes
a bus (back-end port) identification indicator. The SP initializes the bus ID when the operating system is loaded.

NOTE: Some LCCs may not have the enclosure ID display ( 3 ) or back-end bus display ( 6 ). These LCCs are functionally
identical to LCCs with the enclosure ID display and back-end bus display. LCCs with displays always replace LCCs without
displays.

3U, 80-drive DAE LCC connectors and LEDs


Each 3U, 80-drive DAE LCC contains the following ports, LEDs, and connectors:

Figure 32. 12-Gb/s LCC ports, LEDs and connectors

Table 58. 12-Gb/s LCC ports, LEDs and connectors


Location Description
1 12-Gb/s mini SAS ports
2 Mini SAS port status LED
3 Enclosure ID display a
4 LCC fault LED
5 LCC power LED
6 Back-end (BE) bus ID display a
7 LCC management port (RJ-12) (not used)

a. May not be included on all LCCs.

Table 59. 12 Gb/s LCC LEDs


LED Location Color State Description
Mini SAS port status LED 2 Blue On SAS port linked up
Green On Powered on
- Off No connector in port
LCC fault LED 4 Amber On Fault within the LCC
- Off No fault or powered off
LCC power LED 5 Green On Powered on and no fault
- Off Powered off

3U, 80-drive DAE power supply

Power supply functions and features


The power supplies are located above the LCCs.
Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord. Each
supply supports a fully configured DAE and shares load currents with the other supply. The drives and LCCs have individual
soft-start switches that protect the disk drives and LCCs if they are installed while the disk enclosure is powered up.

Power supply components and LEDs

Figure 33. 3U, 80-drive DAE power supply components and LEDs

Table 60. 3U, 80-drive DAE power supply components and LEDs
Location Description

1 AC power in (recessed plug) connector

2 Release lever

3 Retaining bail

4 Power supply fault LED

5 AC output LED

6 AC input LED

Table 61. 3U, 80-drive DAE power supply LEDs
LED Location Color State Description
Power supply fault 4 Amber On Fault
- Off No fault or power off
AC power LED (input) 5 Green On Power on
- Off Power off, verify
source power
AC output LED 6 Green On Power on
- Off Power off, verify
source power

A
Cabling
This section describes examples of the types of cabling you will need to connect the DAEs to your system. The descriptions
are presented in illustrations and text. Each illustration shows an example of the cable connection points (ports) located on the
specific hardware component.

NOTE: The following sections only discuss the DAE cabling with the customer installable front-loading DAEs.

For all other cabling of your system, its installation guide provides information about the system power cabling, DAE power
cabling, PDU power cabling, LAN cabling, and so on.
Topics:
• Cable label wraps
• Cabling the DPE to a DAE
• Cabling an expansion DAE to an existing DAE to extend a back-end bus
• 12Gb/s SAS cabling for interleaved DAE configurations
• 12Gb/s SAS cabling for stacked DAE configurations
• Attaching expansion (back-end) cables to an 80-drive DAE

Cable label wraps


Each system comes with a cable label wrap guide or set of cable label wraps to affix to the cables. These labels should be
affixed to the appropriate cables as you connect the cables.
NOTE: If your system was assembled at the factory, all the cable labels have been affixed to the cables except for any
DAEs you have ordered. Additionally, if your system was not assembled at the factory, the cable kit supplied with your
product will have all the required cables already labeled except for the DAEs.

Cabling the DPE to a DAE


If you have one or more DAEs, these components must be cabled to the DPE back-end ports so that the storage is available
in the system. Typically, the DAE(s) that are to be directly connected to the DPE need to be located close enough to the
DPE so that the 2-meter DPE-to-DAE interconnect cables can be routed and connected to the DPE easily. 5- and 10-meter
interconnect cables are available when you need to connect enclosures across multiple racks.
NOTE: General DAE back-end bus configuration rules:
1. Maximum number of enclosures per bus is 10.
2. Maximum number of drive slots per bus is 250, up to specific system limitations for drive slots.
3. For best performance, evenly distributing DAEs across the available back-end buses is recommended.
Consider the maximum number of drives supported by the storage system model. DAEs can be added while the operating system is active, up to the DAE and drive slot limits for the storage system. DAEs or drive slots over the system limit are not allowed to operate with the system.
Shown in the upcoming figures are examples of two-bus SAS cabling in this DPE-based storage platform. The storage
processors connect to the DAEs with mini-SAS HD cables. The cables connect LCCs in the DAEs of a storage platform in
a daisy-chain topology.
The mini-SAS HD ports on the storage processors in the DPE are labeled 0 and 1. Mini-SAS HD port 0 is connected internally to the SAS expander that connects the drives on the front of the DPE. The DPE and its front-facing drives begin the first back-end bus, BE0, and the DPE is automatically enclosure 0 (EA0). We refer to the address of this enclosure as BE0 EA0.
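
As a rough illustration of the back-end bus rules in the note above (at most 10 enclosures and 250 drive slots per bus), the following hypothetical check flags a bus that exceeds either limit. The data structure is invented for this sketch, and the real limits also depend on the drive limits of the specific storage system model.

```python
# Per-bus limits stated in the general DAE back-end bus configuration rules.
MAX_ENCLOSURES_PER_BUS = 10
MAX_DRIVE_SLOTS_PER_BUS = 250

def check_bus(enclosure_slot_counts):
    """enclosure_slot_counts lists the drive-slot count of each enclosure on one bus,
    starting with enclosure 0 (the DPE for BE0, or the first DAE for other buses)."""
    problems = []
    if len(enclosure_slot_counts) > MAX_ENCLOSURES_PER_BUS:
        problems.append(f"{len(enclosure_slot_counts)} enclosures exceeds the limit of "
                        f"{MAX_ENCLOSURES_PER_BUS} per bus")
    total_slots = sum(enclosure_slot_counts)
    if total_slots > MAX_DRIVE_SLOTS_PER_BUS:
        problems.append(f"{total_slots} drive slots exceeds the limit of "
                        f"{MAX_DRIVE_SLOTS_PER_BUS} per bus")
    return problems

if __name__ == "__main__":
    # Example: BE0 with a 25-slot DPE plus two 25-slot DAEs is well within limits.
    print(check_bus([25, 25, 25]) or "Bus configuration OK")
```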

NOTE: Each DAE supports two completely redundant connections to the DPE (LCC A and LCC B).

Since mini-SAS HD port 0 is already connected internally to the DPE drives, it is recommended that you connect the first
optional DAE to the mini-SAS HD output port 1 of each storage processor to begin back-end bus 1 (BE1) and designate this DAE
as enclosure 0 of this bus. We refer to the address of this enclosure as BE1 EA0.
In a two back-end bus system, it is recommended that you connect the second optional DAE to the mini-SAS HD port 0 of each
storage processor.

DAE load balancing


If your system has several optional DAEs, you can daisy-chain them within that bus. However, it is recommended that you
balance each bus. In other words, always optimize your environment by using every available bus, and spreading the number of
enclosures and drives as evenly as possible across the buses.
The rule of load or bus balancing applies to all DAEs. BE0 EA0 (0_0) is the DPE (SP A and B). So, to balance the load, the first DAE (LCC A and B) in the cabinet is BE1 EA0 (1_0), the second DAE is BE0 EA1 (0_1), and so on.
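
The load-balancing recommendation above amounts to assigning each new DAE to the next back-end bus in round-robin order, starting with bus 1 because the DPE already occupies bus 0 as enclosure 0 (0_0). The following hypothetical sketch shows how the resulting bus_enclosure addresses line up with the example in this section.

```python
def assign_dae_addresses(num_daes, num_buses=2):
    """Return bus_enclosure addresses (for example '1_0', '0_1') for each added DAE."""
    # Bus 0 already has enclosure 0 (the DPE), so its next enclosure address is 1.
    next_enclosure = {bus: (1 if bus == 0 else 0) for bus in range(num_buses)}
    addresses = []
    for i in range(num_daes):
        bus = (1 + i) % num_buses      # first DAE goes to bus 1, then bus 0, and so on
        ea = next_enclosure[bus]
        next_enclosure[bus] += 1
        addresses.append(f"{bus}_{ea}")
    return addresses

if __name__ == "__main__":
    # Four DAEs on a two-bus system: ['1_0', '0_1', '1_1', '0_2']
    print(assign_dae_addresses(4))
```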

Cabling the first optional DAE to create back-end bus 1


Connect the first optional expansion DAE to port 1 of the DPE to create back-end bus 1 (BE1) and designate this DAE as
enclosure 0 of this bus. We refer to the address of this enclosure as BE1 EA0 (1_0).

Prerequisites
To prepare for this cabling task:
● Locate the mini-SAS HD cables to be used to connect to the newly installed expansion DAE.
Typically these cables are 2-meters long. You use longer cables, typically 5-meters or 8-meters, to connect enclosures
located in different racks. Cables are shipped without labels attached. The cables and ports are not colored.
● Locate the sheet of cable labels provided.
Orient the cable connectors as described in the procedure that follows, making sure that you do NOT connect:
● A DAE expansion port 0 to another expansion port 0.
● Any A-side ports to B-side ports.

About this task


Use the following illustrations to complete this cabling task:


Figure 34. Example: DPE to DAE BE1 enclosure 0

NOTE: When cabling the 15-drive DAE LCC SAS ports, ensure that the cables do not overlap behind the DAE. The
illustration above demonstrates the proper method for cabling to the DAE LCC SAS ports.

Steps
1. Label a pair of mini-SAS HD cables using the blue labels shown here.

Expansion port cable labeling details          Primary port cable labeling details
Label part number    Label                     Label part number    Label
046-001-562_xx       SP A SAS 1                046-021-012_xx       LCC A Port A
046-003-750_xx       SP B SAS 1                046-021-013_xx       LCC B Port A

2. Connect each SP to the first optional DAE to create BE1 EA0, as summarized in the sketch after these steps.


NOTE: Neither connector on the mini-SAS HD cable has a symbol to indicate input or output.

a. Connect port 1 on SP A in the bottom slot in the DPE to port A on the link control card A (LCC A) at the bottom of the
DAE. [ 1 ]

b. Connect port 1 on SP B in the top slot in the DPE to port A on the link control card (LCC B) at the top of the DAE. [ 2 ]
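
The connection pattern in the steps above (SP A to LCC A port A at the bottom of the DAE, SP B to LCC B port A at the top) is the same for every new back-end bus started from the DPE mini-SAS HD ports; only the port number and the bus number in the label change. The following hypothetical sketch prints that plan for a given port; the label text follows the examples in the label tables for the embedded ports, and the function itself is invented for illustration.

```python
def new_bus_connections(dpe_port):
    """Return the two cable connections used to start the back-end bus that
    corresponds to a DPE mini-SAS HD port (port 1 -> BE1, port 0 -> BE0).
    Buses created on the optional SAS I/O module use different label text."""
    bus = dpe_port
    return [
        {"from": f"SP A mini-SAS HD port {dpe_port} (bottom slot of the DPE)",
         "to": "LCC A port A (bottom of the DAE)",
         "label": f"SP A SAS {bus} / LCC A Port A"},
        {"from": f"SP B mini-SAS HD port {dpe_port} (top slot of the DPE)",
         "to": "LCC B port A (top of the DAE)",
         "label": f"SP B SAS {bus} / LCC B Port A"},
    ]

if __name__ == "__main__":
    # First optional DAE: DPE port 1 starts back-end bus 1 (address BE1 EA0, 1_0).
    for connection in new_bus_connections(1):
        print(connection)
```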

Cabling the second optional DAE to extend back-end bus 0
Connect the second optional expansion DAE to the DPE expansion port 0 to extend back-end bus 0 (BE0) and designate this
DAE as enclosure 1 of this bus. We refer to the address of this enclosure as BE0 EA1 (0_1).

About this task


Use the following illustration to complete this cabling task:


Figure 35. Example: DPE to 15-drive DAE

NOTE: When cabling the 15-drive DAE LCC SAS ports, ensure that the cables do not overlap behind the DAE. The
illustration above demonstrates the proper method for cabling to the DAE LCC SAS ports.

Steps
1. Label a pair of mini-SAS HD cables using the orange labels shown here.

Expansion port cable labeling details          Primary port cable labeling details
Label part number    Label                     Label part number    Label
046-001-561_xx       SP A SAS 0                046-021-010_xx       LCC A Port A
046-003-489_xx       SP B SAS 0                046-021-011_xx       LCC B Port A

2. Connect DPE port 0 to the new DAE to extend BE0.
a. Connect port 0 on SP A in the bottom slot in the DPE to port A on the link control card A (LCC A) at the bottom of the
DAE. [ 1 ]

b. Connect port 0 on SP B in the top slot in the DPE to port A on the link control card (LCC B) at the top of the DAE. [ 2 ]

Cabling the DPE SAS module ports to create back-end buses 2 through 5

Where supported, the following example shows how to connect the remaining four SAS back-end ports and shows the cable labels for these SAS cables, as well as the back-end bus and enclosure numbers for these DPE to DAE connections.

About this task

NOTE: The optional back-end 12-Gb/s SAS module is not supported on all Unity storage systems.

Cable the DAEs to the 12-Gb/s SAS modules in DPE slot 0, port 0 through port 3, to create back-end buses 2 through 5 (BE2-BE5).
Use the following illustration to complete this cabling task:


Figure 36. Bus 2, Bus 3 , Bus 4, and Bus 5 enclosure 0 SAS cabling

● 2_0 side A, black, SP A A0 port 0 to DAE <w> LCC A port A
● 2_0 side B, black, SP B B0 port 0 to DAE <w> LCC B port A
● 3_0 side A, green, SP A A0 port 1 to DAE <x> LCC A port A
● 3_0 side B, green, SP B B0 port 1 to DAE <x> LCC B port A
● 4_0 side A, brown, SP A A0 port 2 to DAE <y> LCC A port A
● 4_0 side B, brown, SP B B0 port 2 to DAE <y> LCC B port A
● 5_0 side A, cyan, SP A A0 port 3 to DAE <z> LCC A port A
● 5_0 side B, cyan, SP B B0 port 3 to DAE <z> LCC B port A
For each new BE2-BE5:

Steps
1. Label a pair of mini-SAS HD cables using the appropriate labels (black, green, brown, or cyan) shown here.

Expansion port cable labeling details          Primary port cable labeling details
Label part number    Label                     Label part number    Label
046-005-679          SP A A0 PORT 0            046-021-016          LCC A Port A
046-005-718          SP B B0 PORT 0            046-021-017          LCC B Port A
046-005-711          SP A A0 PORT 1            046-021-018          LCC A Port A
046-005-719          SP B B0 PORT 1            046-021-019          LCC B Port A
046-005-935          SP A A0 PORT 2            046-021-020          LCC A Port A
046-005-937          SP B B0 PORT 2            046-021-021          LCC B Port A
046-005-936          SP A A0 PORT 3            046-021-022          LCC A Port A
046-005-938          SP B B0 PORT 3            046-021-023          LCC B Port A

2. Connect each SP to the optional DAE to create BE2 enclosure 0 through BE5 enclosure 0, as needed.
a. For SP A, connect the lowest available port in the SAS module in the bottom slot of the DPE to port A on the link control
card A (LCC A) at the bottom of the DAE.
b. For SP B, connect the lowest available port in the SAS module in the top slot of the DPE to port A on the link control
card B (LCC B) at the top of the DAE.
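As a quick cross-check of the port-to-bus mapping described above, here is a small illustrative Python sketch (not a Dell utility; the port, bus, and label-color assignments are transcribed from this section). It prints the SP A and SP B cable runs for each optional SAS module port.

```python
# Illustrative sketch only: mapping of the optional 4-port SAS module ports
# to the back-end buses and cable label colors described above.
PORT_TO_BUS = {0: (2, "black"), 1: (3, "green"), 2: (4, "brown"), 3: (5, "cyan")}

def be_connections(module_port: int):
    """Return the SP A and SP B cable runs for one DAE on the given module port."""
    bus, color = PORT_TO_BUS[module_port]
    return [
        (f"{bus}_0 side A ({color})", f"SP A A0 port {module_port}", "DAE LCC A port A"),
        (f"{bus}_0 side B ({color})", f"SP B B0 port {module_port}", "DAE LCC B port A"),
    ]

for port in range(4):
    for run, src, dst in be_connections(port):
        print(f"{run}: {src} -> {dst}")
```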
Connect the DAE to the DPE SP slot 0 port 0 to create back-end bus 2, BE2
Connect the DAE to the DPE SP slot 0 port 0 to create back-end bus 2 (BE2) and designate this DAE as enclosure 0 of this bus.
We refer to the address of this enclosure as BE2 EA0 (2_0).

Figure 37. Example: DPE to 15-drive DAE BE2 enclosure 0

NOTE: When cabling the 15-drive DAE LCC SAS ports, ensure that the cables do not overlap behind the DAE. The
illustration above demonstrates the proper method for cabling to the DAE LCC SAS ports.
1. Label a pair of mini-SAS HD cables using the black labels shown here.

Expansion port cable labeling details              Primary port cable labeling details
Label part number    Label / Port                  Label part number    Label / Port
046-005-679          SP A A0 PORT 0                046-021-016          LCC A Port A
046-005-718          SP B B0 PORT 0                046-021-017          LCC B Port A
2. Connect slot 0 port 0 on SP A in the bottom slot in the DPE to port A on the link control card A (LCC A) at the bottom of
the DAE. [ 1 ]
3. Connect slot 0 port 0 on SP B in the top slot in the DPE to port A on the link control card B (LCC B) at the top of the DAE.
[ 2 ]

Cabling an expansion DAE to an existing DAE to extend a back-end bus

Connect the optional expansion DAE to the last installed DAE in the back-end bus to extend that bus to the new DAE.

About this task


Use the following illustration to complete this cabling task:

Figure 38. Example: Extend SAS BE to new DAE

NOTE: When cabling the 15-drive DAE LCC SAS ports, ensure that the cables do not overlap behind the DAE. The
illustration above demonstrates the proper method for cabling to the DAE LCC SAS ports.

Steps
1. Label a pair of mini-SAS HD cables using the appropriate labels (orange, blue, black, green, brown, or cyan) shown here.
Typically, DAEs connect to other DAEs using 1-meter cables.

Expansion port cable labeling details                       Primary port cable labeling details
Label part number    Label      Port                        Label part number    Label      Port
046-004-455          A BE0      LCC A Port B                046-004-455          A BE0      LCC A Port A
046-004-463          B BE0      LCC B Port B                046-004-463          B BE0      LCC B Port A
046-004-456          A BE1      LCC A Port B                046-004-456          A BE1      LCC A Port A
046-004-464          B BE1      LCC B Port B                046-004-464          B BE1      LCC B Port A
046-004-457          A BE2      LCC A Port B                046-004-457          A BE2      LCC A Port A
046-004-465          B BE2      LCC B Port B                046-004-465          B BE2      LCC B Port A
046-004-458          A BE3      LCC A Port B                046-004-458          A BE3      LCC A Port A
046-004-466          B BE3      LCC B Port B                046-004-466          B BE3      LCC B Port A
046-004-459          A BE4      LCC A Port B                046-004-459          A BE4      LCC A Port A
046-004-467          B BE4      LCC B Port B                046-004-467          B BE4      LCC B Port A
046-004-460          A BE5      LCC A Port B                046-004-460          A BE5      LCC A Port A
046-004-468          B BE5      LCC B Port B                046-004-468          B BE5      LCC B Port A

2. Connect the existing DAE to the expansion DAE to extend that back-end bus.
If you have additional DAEs, add labels to the mini-SAS HD to mini-SAS HD cables and use those cables to extend the bus.
For more information about cabling additional DAEs, see the associated Hardware Information Guide.

a. Connect port B on the link control card A (LCC A) of the lower-numbered DAE to port A on the link control card A (LCC
A) of the higher-numbered DAE. [ 1 ]
LCC A is located on the lower portion of the DAE.
b. Connect port B on the link control card B (LCC B) of the lower-numbered DAE to port A on the link control card B (LCC
B) of the higher-numbered DAE. [ 2 ]
LCC B is located on the upper portion of the DAE.
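The daisy-chain rule is the same on every bus: port B of each LCC on the lower-numbered DAE feeds port A of the matching LCC on the next DAE. The following minimal Python sketch (illustrative only, not a Dell tool) generates the two cable runs needed to add one expansion DAE to a bus whose last enclosure address is known.

```python
# Illustrative sketch only: the two cables needed to daisy-chain a new
# expansion DAE onto an existing back-end bus, per the steps above.
def extend_bus(existing_enclosure: int):
    """Cable runs from the last enclosure on the bus to the new one."""
    new_enclosure = existing_enclosure + 1
    return [
        (f"enclosure {existing_enclosure} LCC A port B", f"enclosure {new_enclosure} LCC A port A"),
        (f"enclosure {existing_enclosure} LCC B port B", f"enclosure {new_enclosure} LCC B port A"),
    ]

# Example: extend a bus whose last installed DAE is enclosure 1 (EA1).
for src, dst in extend_bus(1):
    print(f"{src} -> {dst}")
```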

12-Gb/s SAS cabling for interleaved DAE configurations
The interleaved DAE configuration is one of the racking methods available when installing optional DAEs. An interleaved
configuration is when the optional DAEs across each of the back-end buses are racked in an interwoven manner.

About interleaved DAE cabling conventions


The interleaved DAE example shows a Unity platform with nineteen DAEs (all 2U, 25-drive DAEs) and a total of 500 drives
(including the 25 drives in the DPE) across six back-end buses. As described previously, the onboard SAS ports on the DPE are
labeled 0 and 1, and the optional SAS module, where supported, contains four additional SAS ports.
DPE SAS port 0 is connected internally to the SAS expander that serves the front-facing drives in the DPE, so it begins
back-end bus 0, and the DPE is enclosure 0 on this back-end (BE0 EA0). To balance the load, the first expansion DAE is cabled
to DPE SAS port 1 to begin back-end bus 1 as enclosure 0 (BE1 EA0). The rest of the DAEs on each bus are then daisy-chained in
an interwoven order, so the 1st DAE is daisy-chained to the 7th DAE, designated as BE1 EA1, and so on.
The 2nd DAE connects to DPE SAS port 0 to extend back-end bus 0 as enclosure 1 (BE0 EA1) and is daisy-chained to the 8th
DAE, designated as BE0 EA2, and so on.
The 3rd DAE connects to DPE SAS module port 0 to begin back-end bus 2 as enclosure 0 (BE2 EA0) and is daisy-chained to the
9th DAE, designated as BE2 EA1, and so on.
The 4th DAE connects to DPE SAS module port 1 to begin back-end bus 3 as enclosure 0 (BE3 EA0) and is daisy-chained to the
10th DAE, designated as BE3 EA1, and so on.
The 5th DAE connects to DPE SAS module port 2 to begin back-end bus 4 as enclosure 0 (BE4 EA0) and is daisy-chained to the
11th DAE, designated as BE4 EA1, and so on.
Finally, the 6th DAE connects to DPE SAS module port 3 to begin back-end bus 5 as enclosure 0 (BE5 EA0) and is daisy-
chained to the 12th DAE, designated as BE5 EA1, and so on.
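A compact way to see the interleaved pattern is that the buses rotate in the order 1, 0, 2, 3, 4, 5, with bus 0 starting at enclosure address 1 because the DPE already occupies BE0 EA0. The following Python sketch (illustrative only, not a Dell tool; it simply re-derives the example shown in the table below) prints the bus and enclosure address for each of the nineteen DAEs.

```python
# Illustrative sketch only: reproduces the interleaved bus/enclosure
# assignments used in the 19-DAE example below (six back-end buses,
# rotation order 1, 0, 2, 3, 4, 5; the DPE itself is BE0 EA0).
BUS_ROTATION = [1, 0, 2, 3, 4, 5]

def interleaved_address(dae_number: int):
    """Return (bus, enclosure_address) for the Nth DAE racked, 1-based."""
    round_, position = divmod(dae_number - 1, len(BUS_ROTATION))
    bus = BUS_ROTATION[position]
    # Bus 0 already has the DPE as enclosure 0, so its DAEs start at EA1.
    enclosure = round_ + (1 if bus == 0 else 0)
    return bus, enclosure

for n in range(1, 20):
    bus, ea = interleaved_address(n)
    print(f"DAE {n:2d} -> BE{bus} EA{ea} ({bus}_{ea})")
```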

19 2U DAEs in an interleaved configuration across 6 back-end buses

Example: DAE number and address        DAE port connections
                                       Port A (Input)              Port B (Output)
1_3/DAE 19 - BE 1 EA 3 (Blue)          Connected to DAE 13         Not connected
5_2/DAE 18 - BE 5 EA 2 (Cyan)          Connected to DAE 12         Not connected
4_2/DAE 17 - BE 4 EA 2 (Brown)         Connected to DAE 11         Not connected
3_2/DAE 16 - BE 3 EA 2 (Green)         Connected to DAE 10         Not connected
2_2/DAE 15 - BE 2 EA 2 (Black)         Connected to DAE 9          Not connected
0_3/DAE 14 - BE 0 EA 3 (Orange)        Connected to DAE 8          Not connected
1_2/DAE 13 - BE 1 EA 2 (Blue)          Connected to DAE 7          Connected to DAE 19
5_1/DAE 12 - BE 5 EA 1 (Cyan)          Connected to DAE 6          Connected to DAE 18
4_1/DAE 11 - BE 4 EA 1 (Brown)         Connected to DAE 5          Connected to DAE 17
3_1/DAE 10 - BE 3 EA 1 (Green)         Connected to DAE 4          Connected to DAE 16
2_1/DAE 9 - BE 2 EA 1 (Black)          Connected to DAE 3          Connected to DAE 15
0_2/DAE 8 - BE 0 EA 2 (Orange)         Connected to DAE 2          Connected to DAE 14
1_1/DAE 7 - BE 1 EA 1 (Blue)           Connected to DAE 1          Connected to DAE 13
5_0/DAE 6 - BE 5 EA 0 (Cyan)           Connected to DPE 0 port 3   Connected to DAE 12
4_0/DAE 5 - BE 4 EA 0 (Brown)          Connected to DPE 0 port 2   Connected to DAE 11
3_0/DAE 4 - BE 3 EA 0 (Green)          Connected to DPE 0 port 1   Connected to DAE 10
2_0/DAE 3 - BE 2 EA 0 (Black)          Connected to DPE 0 port 0   Connected to DAE 9
0_1/DAE 2 - BE 0 EA 1 (Orange)         Connected to DPE SAS 0      Connected to DAE 8
1_0/DAE 1 - BE 1 EA 0 (Blue)           Connected to DPE SAS 1      Connected to DAE 7
12-Gb/s SAS cabling for stacked DAE configurations
The stacked DAE configuration is another one of the racking methods available when installing optional DAEs. A stacked
configuration is when the optional DAEs within a back-end loop are installed one on top of the other until all the DAEs in that
loop are installed into the rack. Then, the next set of DAEs in the next back-end loop are installed.

About stacked DAE cabling conventions


The stacked DAE example shows a Unity platform with nineteen DAEs (all 2U, 25-drive DAEs) and a total of 500 drives
(including the 25 drives in the DPE) across six back-end buses. As described previously, the onboard SAS ports on the DPE are
labeled 0 and 1, and the optional SAS module, where supported, contains four additional SAS ports.
DPE SAS port 0 is connected internally to the SAS expander that serves the front-facing drives in the DPE, so it begins
back-end bus 0, and the DPE is enclosure 0 on this back-end (BE0 EA0). To balance the load, the first expansion DAE is cabled
to DPE SAS port 1 to begin back-end bus 1 as enclosure 0 (BE1 EA0). The rest of the DAEs on each bus are then daisy-chained in
the order in which they are stacked, so the 1st DAE is daisy-chained to the 2nd DAE, designated as BE1 EA1, and so on.
The 5th DAE connects to DPE SAS port 0 to extend back-end bus 0 as enclosure 1 (BE0 EA1) and is daisy-chained to the 6th
DAE, designated as BE0 EA2, and so on.
The 8th DAE connects to DPE SAS module port 0 to begin back-end bus 2 as enclosure 0 (BE2 EA0) and is daisy-chained to the
9th DAE, designated as BE2 EA1, and so on.
The 11th DAE connects to DPE SAS module port 1 to begin back-end bus 3 as enclosure 0 (BE3 EA0) and is daisy-chained to the
12th DAE, designated as BE3 EA1, and so on.
The 14th DAE connects to DPE SAS module port 2 to begin back-end bus 4 as enclosure 0 (BE4 EA0) and is daisy-chained to
the 15th DAE, designated as BE4 EA1, and so on.
Finally, the 17th DAE connects to DPE SAS module port 3 to begin back-end bus 5 as enclosure 0 (BE5 EA0) and is daisy-
chained to the 18th DAE, designated as BE5 EA1, and so on.
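In the stacked layout, each bus is filled completely before the next bus begins. The following Python sketch (illustrative only, not a Dell tool; the group sizes are taken directly from this nineteen-DAE example) re-derives the bus and enclosure address assignments shown in the table below.

```python
# Illustrative sketch only: reproduces the stacked bus/enclosure assignments
# used in the 19-DAE example below. DAEs fill one bus completely before the
# next bus starts; bus 1 holds four DAEs here, bus 0 holds three more
# (the DPE is already BE0 EA0), and buses 2-5 hold three DAEs each.
STACKED_GROUPS = [          # (bus, first enclosure address, DAE count)
    (1, 0, 4), (0, 1, 3), (2, 0, 3), (3, 0, 3), (4, 0, 3), (5, 0, 3),
]

def stacked_addresses():
    dae_number = 1
    for bus, first_ea, count in STACKED_GROUPS:
        for offset in range(count):
            yield dae_number, bus, first_ea + offset
            dae_number += 1

for n, bus, ea in stacked_addresses():
    print(f"DAE {n:2d} -> BE{bus} EA{ea} ({bus}_{ea})")
```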

19 2U DAEs in a stacked configuration across 6 back-end buses

Example: DAE number and address        DAE port connections
                                       Port A (Input)              Port B (Output)
5_2/DAE 19 - BE 5 EA 2 (Cyan)          Connected to DAE 18         Not connected
5_1/DAE 18 - BE 5 EA 1 (Cyan)          Connected to DAE 17         Connected to DAE 19
5_0/DAE 17 - BE 5 EA 0 (Cyan)          Connected to DPE 0 port 3   Connected to DAE 18
4_2/DAE 16 - BE 4 EA 2 (Brown)         Connected to DAE 15         Not connected
4_1/DAE 15 - BE 4 EA 1 (Brown)         Connected to DAE 14         Connected to DAE 16
4_0/DAE 14 - BE 4 EA 0 (Brown)         Connected to DPE 0 port 2   Connected to DAE 15
3_2/DAE 13 - BE 3 EA 2 (Green)         Connected to DAE 12         Not connected
3_1/DAE 12 - BE 3 EA 1 (Green)         Connected to DAE 11         Connected to DAE 13
3_0/DAE 11 - BE 3 EA 0 (Green)         Connected to DPE 0 port 1   Connected to DAE 12
2_2/DAE 10 - BE 2 EA 2 (Black)         Connected to DAE 9          Not connected
2_1/DAE 9 - BE 2 EA 1 (Black)          Connected to DAE 8          Connected to DAE 10
2_0/DAE 8 - BE 2 EA 0 (Black)          Connected to DPE 0 port 0   Connected to DAE 9
0_3/DAE 7 - BE 0 EA 3 (Orange)         Connected to DAE 6          Not connected
0_2/DAE 6 - BE 0 EA 2 (Orange)         Connected to DAE 5          Connected to DAE 7
0_1/DAE 5 - BE 0 EA 1 (Orange)         Connected to DPE SAS 0      Connected to DAE 6
1_3/DAE 4 - BE 1 EA 3 (Blue)           Connected to DAE 3          Not connected
1_2/DAE 3 - BE 1 EA 2 (Blue)           Connected to DAE 2          Connected to DAE 4
1_1/DAE 2 - BE 1 EA 1 (Blue)           Connected to DAE 1          Connected to DAE 3
1_0/DAE 1 - BE 1 EA 0 (Blue)           Connected to DPE SAS 1      Connected to DAE 2

Attaching expansion (back-end) cables to an 80-drive DAE
Do NOT FORCE the cable into a connector. A click indicates that the cable is completely seated in the connector.

Prerequisites
To prepare for this cabling task:
● Locate the mini-SAS HD cables to be used to connect to the newly installed expansion DAE.
Typically these cables are 2 meters long. Use longer cables, typically 5 meters or 8 meters, to connect enclosures
located in different racks. Cables are shipped without labels attached. The cables and ports are not colored.
● Locate the sheet of cable labels provided.
Orient the cable connectors as described in the procedure that follows, making sure that you do NOT connect:
● A DAE expansion port 0 to another expansion port 0.
● Any A-side ports to B-side ports.
NOTE: If you are connecting the 80-drive DAE to a 4-port SAS SLIC that requires x8 connectivity, insert the SAS cable
into the 4-port SAS SLIC before persisting the SLIC. The 4-port SAS SLIC must be persisted with the cable inserted for x8
connectivity. If the SAS back-end SLIC is powered on without any cables inserted, it is automatically set at x4 and cannot
be used for x8 lane cabling.

Cabling for x4 connections


About this task
The drives in the DPE are internally connected to the first back-end bus, which is bus 0. To maintain balance, the first DAE
added to the array should be connected to back-end bus 1. If the array has only two back-end buses (0 and 1), add DAEs by
alternating between bus 0 and bus 1 to maintain an even distribution, or balance, of drives over the buses. If the array has a
4-port SAS I/O module, this creates additional back-end buses 2 through 5. Maintain the same even distribution of drives over
all of the back-end buses.
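For the two-bus case described above, the balancing rule can be sketched in a few lines of illustrative Python (not a Dell tool): treat the DPE as the first enclosure on bus 0, then place each new DAE on the bus that currently has fewer enclosures, which alternates bus 1, bus 0, bus 1, and so on.

```python
# Illustrative sketch only, for the two-bus case described above: the DPE
# counts as the first enclosure on bus 0, and each new DAE goes to the bus
# with the fewer enclosures (ties go to the lower bus number).
def assign_buses(dae_count: int, buses=(0, 1)):
    enclosures = {bus: 0 for bus in buses}
    enclosures[0] = 1                      # the DPE's own drives live on bus 0
    assignments = []
    for _ in range(dae_count):
        bus = min(buses, key=lambda b: (enclosures[b], b))
        assignments.append(bus)
        enclosures[bus] += 1
    return assignments

print(assign_buses(6))   # [1, 0, 1, 0, 1, 0]
```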
This section provides three different ways to connect the DAE to the array with an x4 connection.
● Connecting to back-end bus 1
● Connecting to back-end bus 0
● Connecting to a port on the SAS I/O module
Each installation may be different. Choose the connection option that suits your needs.

Steps
● Connect to back-end bus 1: Connect the first optional expansion DAE to back-end port 1 of the DPE to create back-end
  bus 1 (BE1) and designate this DAE as Enclosure Address 0 of this bus. We refer to the address of this enclosure as BE1 EA0
  (1_0):
1. Label a pair of mini-SAS HD cables using the blue labels shown here.

Expansion port cable labeling details              Primary port cable labeling details
Label part number    Label / Port                  Label part number    Label / Port
046-001-562          SP A SAS 1                    046-021-012          LCC A Port A
046-003-750          SP B SAS 1                    046-021-013          LCC B Port A

2. Connect the ports as follows:


○ Connect BE port 1 on SP A (the bottom storage processor of the DPE) to port A of link control card A (LCC A) on
the right side of the DAE.
○ Connect BE port 1 on SP B (the top storage processor of the DPE) to port A of link control card B (LCC B) on the
left side of the DAE.

● Connect to back-end bus 0: Connect the second optional expansion DAE to the DPE expansion port 0 to extend back-end
  bus 0 (BE0) and designate this DAE as Enclosure Address 1 of this bus. We refer to the address of this enclosure as BE0 EA1
  (0_1):
1. Label a pair of mini-SAS HD cables using the orange labels shown here.

Expansion port cable labeling details              Primary port cable labeling details
Label part number    Label / Port                  Label part number    Label / Port
046-001-561          SP A SAS 0                    046-021-010          LCC A Port A
046-003-489          SP B SAS 0                    046-021-011          LCC B Port A

2. Connect the ports as follows:


○ Connect BE port 0 on SP A (the bottom storage processor of the DPE) to port A of link control card A (LCC A) on
the right side of the DAE.
○ Connect BE port 0 on SP B (the top storage processor of the DPE) to port A of link control card B (LCC B) on the left
  side of the DAE.

● Connect to the 4-port SAS back-end I/O module: To connect the DAE to a BE port in the SAS I/O module of the storage
processor, cable the DAE to the first available port in the 12-Gb/s SAS I/O module. Use the same port on each storage
processor's SAS I/O module. This SAS I/O module can be used to create back-end bus 2 through 5, (BE2 through BE5):
NOTE: The optional back-end 12-Gb/s SAS module is not supported on all Unity storage systems.

NOTE: Adding a new 12-Gb/s SAS I/O module requires a coordinated restart of the array. Refer to Adding an optional
I/O module for more information.

1. Label a pair of mini-SAS HD cables using the appropriate labels (black, green, brown, or cyan) shown here.

Expansion port cable labeling details              Primary port cable labeling details
Label part number    Label / Port                  Label part number    Label / Port
046-005-679          SP A A0 PORT 0                046-021-016          LCC A Port A
046-005-718          SP B B0 PORT 0                046-021-017          LCC B Port A
046-005-711          SP A A0 PORT 1                046-021-018          LCC A Port A
046-005-719          SP B B0 PORT 1                046-021-019          LCC B Port A
046-005-935          SP A A0 PORT 2                046-021-020          LCC A Port A
046-005-937          SP B B0 PORT 2                046-021-021          LCC B Port A
046-005-936          SP A A0 PORT 3                046-021-022          LCC A Port A
046-005-938          SP B B0 PORT 3                046-021-023          LCC B Port A

2. For SP A, connect the DAE cable to the lowest available port in the SAS module in the bottom storage processor of the
DPE to port A on link control card AA/A (LCC A) on the right side of the DAE.
3. For SP B, connect the DAE cable to the lowest available port in the SAS module in the top storage processor of the DPE
to port A on link control card BB/B (LCC B) on the left side of the DAE.

Example

Figure 39. x4 cabling example

Cabling for x8 connections


Prerequisites
As previously noted, if you are connecting the DAE to a 4-port SAS I/O module that requires x8 connectivity, you must insert
the SAS cable into the 4-port SAS I/O module before persisting it. The 4-port SAS I/O module must be persisted with the cable
inserted for x8 connectivity. If the SAS back-end I/O module is powered on and persisted without any cables inserted, it is
automatically set at x4 and cannot be used for x8 lane cabling.
NOTE: x8 connections can only be made using the 4-port back-end SAS I/O module. Never use ports 1 and 2 for x8
connections.

Steps
● Connect to the 4-port SAS back-end I/O module: Insert SAS cables into ports 0 and 1 or ports 2 and 3 of the 4-port SAS I/O
  modules in the storage processors, if they are not connected already. For consistency and clarity, use ports 0 and 1 first;
  this creates back-end bus 2 (BE2). The next configured x8 bus, using ports 2 and 3, creates back-end bus 4 (BE4).
1. Label a pair of mini-SAS HD cables using the black or green labels.
The labels used depend upon how the back-end ports are configured.
2. Connect the ports as follows:
○ Ensure that the SAS cable is inserted into ports 0 and 1 or ports 2 and 3 of the SP A SAS module, located in the
bottom storage processor of the DPE. Connect the cable to ports AA/A of link control card A (LCC A), located on the
right side of the DAE.
○ Ensure that the SAS cable is inserted into ports 0 and 1 or ports 2 and 3 of SP B SAS module, located in the top
storage processor of the DPE. Connect the cable to ports AA/A of link control card B (LCC B), located on the left
side of the DAE.
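The port-pairing rule for x8 can be summarized in a short illustrative Python check (not a Dell utility): only ports 0 and 1, or ports 2 and 3, of the 4-port SAS I/O module form a valid x8 pair.

```python
# Illustrative sketch only: the x8 port-pairing rule stated above for the
# 4-port SAS I/O module. Only ports 0+1 or ports 2+3 form a valid x8 pair.
VALID_X8_PAIRS = {frozenset({0, 1}), frozenset({2, 3})}

def check_x8_pair(port_a: int, port_b: int) -> bool:
    return frozenset({port_a, port_b}) in VALID_X8_PAIRS

assert check_x8_pair(0, 1)       # creates BE2
assert check_x8_pair(2, 3)       # creates BE4
assert not check_x8_pair(1, 2)   # never use ports 1 and 2 for x8
```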

Example

Figure 40. x8 cabling example

B
Rail kits and cables
Topics:
• Rail kits
• Cable types

Rail kits
Dell sells rail kits for mounting system enclosures in 19-inch NEMA cabinets/racks and TELCO racks.

Standard NEMA racks


Model number Description Allowable rail depth
D3DPE2URK12 Adjustable rail kit for 2U DPE with 12 drives 20.3" to 34" (51.6 cm to 84.4 cm)
D3DPE2URK25 Adjustable rail kit for 2U DPE with 25 drives 20.3" to 34" (51.6 cm to 84.4 cm)
D3DAE2URK Adjustable rail kit for 2U DAE with 25 drives 20.3" to 34" (51.6 cm to 84.4 cm)
D3DAE3URK Adjustable rail kit for 3U DAE with 15 drives 20.3" to 34" (51.6 cm to 84.4 cm)
D3DAE80RK Adjustable rail kit for 3U DAE with 80 drives 18" to 36" (45.7 cm to 91.4 cm)

TELCO racks
Model number Description
VCTELCO3UDPE TELCO tray for the 2U DPE with 25 drives
VCTELCO2UDPE TELCO tray for the 2U DPE with 25 drives
VCTELCO3UDAE TELCO rail kit for the 3U DAE with 25 drives
VCTELCO3UDAE TELCO rail kit for the 2U DAE with 15 drives

Cable types
Reference information detailing the SAS, optical, and Twin Ax cables and SFP+ modules used with your systems.

SFP+ modules
Model Number For:
D3SFP1 Copper 1 Gb SFP+ qty 4 for iSCSI connection
D3SFP8F 8 Gb SFP+ qty 4 for FC connection
D3SFP10I 10 Gb SFP+ qty 4 for iSCSI connection
D3SFP16F 16 Gb SFP+ qty 4 for FC connection

D3SFPSM16F 16 Gb SFP+ qty 4 for FC (Single Mode) connection

Optical cables
Model Number: For:
D3FC-OM3-1M 1 meter OM3 LC-LC Multi-mode 50UM fibre optic cable
D3FC-OM3-3M 3 meter OM3 LC-LC Multi-mode 50UM fibre optic cable
D3FC-OM3-5M 5 meter OM3 LC-LC Multi-mode 50UM fibre optic cable
D3FC-OM3-10M 10 meter OM3 LC-LC Multi-mode 50UM fibre optic cable
D3FC-OM3-30M 30 meter OM3 LC-LC Multi-mode 50UM fibre optic cable
D3FC-OM3-50M 50 meter OM3 LC-LC Multi-mode 50UM fibre optic cable
D3FC-OM3-100M 100 meter OM3 LC-LC Multi-mode 50UM fibre optic cable

Active TwinAx cables


These models consist of a shielded, quad construction style cable with a 100 Ohm differential. Both ends of the cable have SFP+
style connectors that comply with SFF-8431 and SFF-8472 standards. The transmit and receive ends of the cable have active
components to facilitate the transmission of 8 Gigabit or 10 Gigabit protocols. The use of DC blocking capacitors on the receiver
is required per the SFF-8431 standard.

Model Number For:


D3TX-TWAX-1M 1 meter SFP+ to SFP+ active 8 Gb/10 Gb cable
D3TX-TWAX-3M 3 meter SFP+ to SFP+ active 8 Gb/10 Gb cable
D3TX-TWAX-5M 5 meter SFP+ to SFP+ active 8 Gb/10 Gb cable

Passive TwinAx cables


SFP+ Copper TwinAx cables are suitable for very short distances and offer a highly cost-effective way to connect within racks
and across adjacent racks.

Model Number For:


10G-SFPP-TWX-0101 1 meter SFP+ to SFP+ passive 10 Gb cable
10G-SFPP-TWX-0308 3 meter SFP+ to SFP+ passive 10 Gb cable
10G-SFPP-TWX-0508 5 meter SFP+ to SFP+ passive 10 Gb cable

Back-end SAS cables


Model Number For:
D3MSHDMSSHD2 2 meter 12 Gb mini-SAS HD to mini-SAS HD cables
D3MSHDMSSHD5 5 meter 12 Gb mini-SAS HD to mini-SAS HD cables
D3MSHDMSSHD8 8 meter 12 Gb mini-SAS HD to mini-SAS HD cables



DAE-to-DAE copper cabling
The expansion port interface to and between DAEs is copper cabling. The 100 Ω cables are keyed at either end and are available
in 1- to 10-meter lengths.
● DAE-to-DAE cables are SFF 8088 mini-SAS to mini-SAS.
● Keys are defined in the T10–SAS 2.1 specification.

