Huawei OceanStor 5000 V5 Product Description
V500R007
Product Description
Issue 08
Date 2019-06-30
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
OceanStor 5000 and 6000 V5 Series
Product Description About This Document
Purpose
This document describes the positioning, features, typical applications, architecture,
specifications, environmental requirements, standards compliance, and granted certifications
of OceanStor storage systems.
OceanStor 5000 V5 series: OceanStor 5110 V5 (a), 5300 V5, 5500 V5, 5600 V5, and 5800 V5
a: supported in V500R007C30
Intended Audience
This document is intended for all readers.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes made in earlier issues.
Issue 08 (2019-06-30)
This issue is the eighth official release. The updates are as follows:
Made some changes in specifications.
Issue 07 (2019-05-15)
This issue is the seventh official release.
l Added the feature of SmartMigration for file services.
l Optimized the description of storage systems' environment requirements.
l Updated specifications.
Issue 06 (2019-03-30)
This issue is the sixth official release that contains the following changes:
Added the description of 5110 V5.
Issue 05 (2018-12-06)
This issue is the fifth official release that contains the following changes:
l Added the 40GE and 100GE interface modules.
l Added the 1288H V5 quorum server.
l Updated specifications.
Issue 04 (2018-07-30)
This issue is the fourth official release that updated specifications.
Issue 03 (2018-05-09)
This issue is the third official release that updated specifications.
Issue 02 (2018-01-30)
This issue is the second official release that updated specifications.
Issue 01 (2017-11-30)
This issue is the first official release.
Contents
4 Hardware Architecture............................................................................................................... 17
4.1 Device Composition..................................................................................................................................................... 17
4.2 3D Interactive Hardware Demonstration......................................................................................................................20
4.3 2 U Controller Enclosure (Supported by OceanStor 5110 V5/5300 V5)..................................................................... 21
4.3.1 Overview................................................................................................................................................................... 21
4.3.2 Component Description............................................................................................................................................. 24
4.3.2.1 System Subrack...................................................................................................................................................... 24
4.3.2.2 Controller................................................................................................................................................................25
4.3.2.3 Power Module.........................................................................................................................................................28
4.3.2.4 Disk Module........................................................................................................................................................... 30
4.3.3 Indicator Introduction................................................................................................................................................ 31
4.4 2 U Controller Enclosure (Supported by OceanStor 5500 V5).................................................................................... 35
4.4.1 Overview................................................................................................................................................................... 35
4.4.2 Component Description............................................................................................................................................. 38
4.4.2.1 System Subrack...................................................................................................................................................... 38
4.4.2.2 Controller................................................................................................................................................................39
4.4.2.3 Power-BBU Module............................................................................................................................................... 42
4.4.2.4 Disk Module........................................................................................................................................................... 44
4.4.3 Indicator Introduction................................................................................................................................................ 46
4.5 3 U Controller Enclosure (Supported by OceanStor 5600 V5 and 5800 V5).............................................................. 50
4.5.1 Overview................................................................................................................................................................... 50
4.5.2 Component Description............................................................................................................................................. 52
4.5.2.1 System Subrack...................................................................................................................................................... 52
4.5.2.2 Controller................................................................................................................................................................53
4.5.2.3 Fan Module.............................................................................................................................................................55
4.5.2.4 BBU........................................................................................................................................................................ 56
4.5.2.5 Management Module.............................................................................................................................................. 57
4.5.2.6 Power Module.........................................................................................................................................................58
4.5.3 Indicator Introduction................................................................................................................................................ 60
4.6 6 U Controller Enclosure (Supported by OceanStor 6800 V5).................................................................................... 64
4.6.1 Overview................................................................................................................................................................... 65
4.6.2 Component Description............................................................................................................................................. 68
4.6.2.1 System Subrack...................................................................................................................................................... 68
4.6.2.2 Controller................................................................................................................................................................68
4.6.2.3 Assistant Cooling Unit............................................................................................................................................70
4.6.2.4 Fan Module.............................................................................................................................................................71
4.6.2.5 BBU........................................................................................................................................................................ 73
4.6.2.6 Management Module.............................................................................................................................................. 74
4.6.2.7 Power Module.........................................................................................................................................................75
4.6.3 Indicator Introduction................................................................................................................................................ 76
4.7 Interface Module...........................................................................................................................................................80
4.7.1 GE Electrical Interface Module................................................................................................................................. 80
4.7.2 10GE Electrical Interface Module............................................................................................................................. 81
4.7.3 40GE Interface Module............................................................................................................................................. 83
4.7.4 100GE Interface Module........................................................................................................................................... 84
4.7.5 SmartIO Interface Module.........................................................................................................................................85
4.7.6 8 Gbit/s Fibre Channel Interface Module (Four Ports)..............................................................................................89
4.7.7 8 Gbit/s Fibre Channel Interface Module (Eight Ports)............................................................................................ 91
4.7.8 16 Gbit/s Fibre Channel Interface Module (Eight Ports).......................................................................................... 92
4.7.9 10 Gbit/s FCoE Interface Module (Two Ports)..........................................................................................................94
4.7.10 56 Gbit/s InfiniBand Interface Module................................................................................................................... 95
4.7.11 12 Gbit/s SAS Expansion Module........................................................................................................................... 96
4.7.12 12 Gbit/s SAS Shared Expansion Module...............................................................................................................98
4.8 2 U Disk Enclosure (2.5-Inch Disks)............................................................................................................................99
4.8.1 Overview................................................................................................................................................................... 99
4.8.2 Component Description........................................................................................................................................... 101
4.8.2.1 System Subrack.................................................................................................................................................... 101
4.8.2.2 Expansion Module................................................................................................................................................ 102
4.8.2.3 Power Module.......................................................................................................................................................103
4.8.2.4 Disk Module......................................................................................................................................................... 105
4.8.3 Indicator Introduction.............................................................................................................................................. 106
4.9 4 U Disk Enclosure (3.5-Inch Disks)..........................................................................................................................108
4.9.1 Overview................................................................................................................................................................. 108
4.9.2 Component Description........................................................................................................................................... 110
4.9.2.1 System Subrack.....................................................................................................................................................110
4.9.2.2 Expansion Module................................................................................................................................................ 111
4.9.2.3 Power Module.......................................................................................................................................................113
8 Standards Compliance..............................................................................................................226
9 Certifications.............................................................................................................................. 230
10 Operation and Maintenance..................................................................................................233
A How to Obtain Help.................................................................................................................235
A.1 Preparations for Contacting Huawei..........................................................................................................................235
A.1.1 Collecting Troubleshooting Information................................................................................................................ 235
A.1.2 Making Debugging Preparations............................................................................................................................ 235
A.2 How to Use the Document.........................................................................................................................................235
A.3 How to Obtain Help from Website............................................................................................................................ 236
A.4 Ways to Contact Huawei............................................................................................................................................236
B Glossary...................................................................................................................................... 237
C Acronyms and Abbreviations................................................................................................ 238
1 Product Positioning
The OceanStor 5110 V5/5300 V5/5500 V5/5600 V5/5800 V5 storage systems are Huawei's mid-range storage, providing stable, reliable, converged, and efficient data services for enterprises. The OceanStor 6800 V5 storage system is Huawei's mission-critical storage, dedicated to providing the highest level of data services for enterprises' key services.
The 5110 V5/5300 V5/5500 V5/5600 V5/5800 V5/6800 V5 storage system offers
comprehensive and superb solutions by unifying file-based, block-based offerings and various
protocols into a single product and using diverse efficiency boost mechanisms to provide
industry-leading performance. Those solutions help customers maximize their return on
investment (ROI) and meet the requirements of different application scenarios such as Online
Transaction Processing (OLTP) and Online Analytical Processing (OLAP) of large databases,
high-performance computing (HPC), digital media, Internet operation, centralized storage,
backup, disaster recovery, and data migration.
In addition to providing high-performance storage services for application servers, the storage
system supports advanced data backup and disaster recovery technologies, ensuring the secure
and smooth running of data services. Also, it offers easy-to-use management modes and
convenient local/remote maintenance modes, greatly decreasing the management and
maintenance costs.
2 Product Features
Designed for midtier-to-enterprise storage environments, the storage system utilizes high-
specification hardware and is available in block, file, and unified configurations. It offers
significant advancements in data applications and protection and provides the following
benefits.
Unified Storage
l Support for SAN and NAS storage technologies
Unifies SAN and NAS technologies to store both structured and unstructured data.
l Support for mainstream storage protocols
Supports mainstream storage protocols such as iSCSI, Fibre Channel, NFS, CIFS, HTTP,
and FTP.
l Support for hosts to access any LUN or file system using the front-end ports of any
controller.
High Performance
The storage system offers a three-level performance acceleration technology, and delivers
hierarchical performance for different applications. The three levels are:
1. State-of-the-art hardware
The storage system is equipped with 64-bit multi-core processors, high-speed and large-capacity caches, and various high-speed interface modules. This superior hardware allows it to offer better storage performance than traditional storage systems.
2. SmartTier
The SmartTier technology identifies hotspot data and periodically promotes it to high-performance storage media for a performance boost. In addition, SmartTier supports SSD data caching, accelerating access to hotspot data.
3. Solid state drives (SSDs)
The storage system can be fully configured with SSDs to provide peak performance for
the most-demanding applications.
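The hotspot-promotion cycle behind tiering can be sketched as follows. This is an illustrative model only, not Huawei's SmartTier algorithm; the extent granularity, access-count metric, and ranking policy are assumptions for the example.

```python
# Illustrative tiering sketch: count accesses per extent over a
# monitoring period, then promote the hottest extents to the
# high-performance (SSD) tier, up to that tier's capacity.
from collections import Counter

def plan_promotions(access_log, ssd_capacity_extents):
    """Return the extent IDs to promote, hottest first.

    access_log: iterable of extent IDs, one entry per observed I/O.
    ssd_capacity_extents: how many extents fit on the SSD tier.
    """
    heat = Counter(access_log)
    # Hottest extents first; ties broken by extent ID for determinism.
    ranked = sorted(heat, key=lambda e: (-heat[e], e))
    return ranked[:ssd_capacity_extents]

# Extents 7 and 3 receive the most I/O, so they are promoted.
log = [7, 3, 7, 1, 3, 7, 9, 3, 7]
print(plan_promotions(log, 2))  # -> [7, 3]
```

A real tiering engine would also demote cold extents from the SSD tier and run the cycle periodically rather than once.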
Flexible Scalability
The storage system offers outstanding scalability. It supports the following disks and host interface modules at high density:
l Disks:
SAS disks, NL-SAS disks, and SSDs.
l Host interface modules:
8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, GE, 10GE, 10 Gbit/s FCoE, 56 Gbit/s
(4 x 14 Gbit/s) InfiniBand, and SmartIO.
The storage system also supports scale-out technology, improving performance as the number of controllers increases.
Proven Reliability
The storage system uses advanced technologies to offer protection measures, minimizing risks
of failures and data loss.
High Availability
In routine maintenance:
The storage system (OceanStor 5110 V5, 5300 V5, 5500 V5, 5600 V5, 5800 V5, or 6800 V5)
uses Turbo Module, online capacity expansion, and disk roaming technologies to provide high
availability for applications and non-disruptive system running during maintenance.
l Turbo Module enables controllers, fans, power modules, interface modules, BBUs, and
disks to be hot-swappable, allowing online operations.
l Dynamic capacity expansion enables users to add disks to a disk domain in an online and
easy manner.
l Disk roaming enables a storage system to automatically identify relocated disks and
resume their services.
In data protection:
The storage system provides the following advanced data protection technologies and
protocols to protect data integrity and continuous system running even when catastrophic
disasters happen:
l HyperSnap generates multiple point-in-time images for the source logical unit number
(LUN) or source file system data. The snapshot images can be used to recover data
quickly when needed.
l HyperCopy backs up data among heterogeneous storage systems for data protection.
l HyperReplication backs up local data onto a remote storage system for disaster recovery.
l HyperClone preserves a real-time physical copy of a source LUN or file system for the
high availability of local data.
l HyperMirror backs up data in real time. If the source data becomes unavailable,
applications can automatically use the data copy, ensuring data security and application
continuity.
l HyperMetro synchronizes and replicates data between storage arrays, monitors service
operating status, and performs failovers. In addition, it can switch over services and
implement service load sharing when storage arrays are running.
l The Network Data Management Protocol (NDMP) is used for data backup and recovery.
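The point-in-time image idea underlying the snapshot features above can be modeled with a minimal copy-on-write sketch. This is a conceptual illustration, not OceanStor's on-disk format; the block-map representation is an assumption.

```python
# Minimal copy-on-write snapshot model: a snapshot holds only the
# original contents of blocks overwritten after it was taken, so an
# earlier point-in-time view can be reconstructed on demand.

class SnapLUN:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block number -> data
        self.snapshots = []          # each: {block: pre-overwrite data}

    def take_snapshot(self):
        self.snapshots.append({})    # empty until a block is overwritten
        return len(self.snapshots) - 1

    def write(self, block, data):
        # Copy-on-write: preserve old data in every snapshot that has
        # not yet saved this block.
        for snap in self.snapshots:
            snap.setdefault(block, self.blocks.get(block))
        self.blocks[block] = data

    def read_snapshot(self, snap_id, block):
        # Fall through to the live LUN for blocks never modified
        # since the snapshot was taken.
        snap = self.snapshots[snap_id]
        return snap[block] if block in snap else self.blocks.get(block)

lun = SnapLUN({0: "A", 1: "B"})
sid = lun.take_snapshot()
lun.write(0, "A2")                    # block 0 changes after the snapshot
print(lun.read_snapshot(sid, 0))      # -> A (point-in-time data)
print(lun.read_snapshot(sid, 1))      # -> B (unchanged, read from live LUN)
```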
In resource management:
The storage system employs the following resource application technologies for flexible
resource management to protect customers' storage investments:
l Memory upgrade
The storage system supports memory upgrade so that storage performance matches service development.
l Data destruction
When deleting unwanted data, the system erases the specified LUN so that the deleted data cannot be restored, preventing critical data leaks.
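The erasure behavior can be sketched as repeated overwrites of every block before the space is released. This is a toy model; the actual pass count and overwrite patterns are implementation details of the storage system and are assumed here for illustration.

```python
# Toy illustration of LUN erasure: overwrite every block with a series
# of patterns so the deleted data cannot be read back afterwards.

def erase_lun(blocks, passes=(0x00, 0xFF, 0xAA)):
    """blocks: mutable list of bytearrays; overwrite each in place."""
    for pattern in passes:
        for block in blocks:
            for i in range(len(block)):
                block[i] = pattern
    return blocks

lun = [bytearray(b"secret!!"), bytearray(b"payroll!")]
erase_lun(lun)
# Every block now holds only the final pattern; the original data is gone.
print(all(b == bytearray([0xAA] * 8) for b in lun))  # -> True
```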
l File antivirus
When the storage system runs a file system and shares the file system with clients
through CIFS, third-party antivirus software can be used to trigger virus scanning and
delete virus-infected files, improving storage system security.
Storage management security:
l Security of management and maintenance
The operations of users can be allowed and denied. All management operations are
logged by the system.
l Data integrity protection and tamper resistance
The Write Once Read Many (WORM) feature allows users to set critical data to the read-
only state, preventing unauthorized data change and deletion during a specified period of
time.
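The WORM behavior described above can be modeled in a few lines: once data is committed with a retention period, writes are rejected until that period expires. The clock handling and class names below are simplified assumptions for illustration, not the storage system's actual interface.

```python
# Toy model of WORM (Write Once Read Many) retention: a committed file
# is read-only until its retention period expires.

class WormFile:
    def __init__(self, data):
        self.data = data
        self.retain_until = None     # None: not yet committed, writable

    def commit(self, now, retention_seconds):
        self.retain_until = now + retention_seconds

    def write(self, now, data):
        if self.retain_until is not None and now < self.retain_until:
            raise PermissionError("WORM: file is read-only until expiry")
        self.data = data

f = WormFile("audit record")
f.commit(now=1000, retention_seconds=500)   # read-only until t=1500
try:
    f.write(now=1200, data="tampered")
except PermissionError as e:
    print(e)                         # write rejected inside retention period
f.write(now=1600, data="archived")   # allowed after expiry
print(f.data)                        # -> archived
```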
In addition, trusted verification is enabled during storage system startup to measure and verify the BIOS, Grub, Euler Linux kernel, Euler Linux OS, and storage application software level by level, proving the integrity of the software loaded at each level and preventing software tampering. The storage system's power-on process is verified to ensure that the system has not been tampered with.
l SmartThin (intelligent thin provisioning)
Allocates storage resources on demand so that the amount of resources used is close to the amount of resources allocated. In this way, the initial purchase cost and total cost of ownership are reduced.
l SmartCache (intelligent storage cache)
Uses SSDs as cache resources to significantly improve system read performance when random, small I/Os with hot data require more read operations than write operations.
l Quick incremental file backup with Tivoli Storage Manager (TSM)
When the storage system interworks with the TSM backup software to perform incremental file backup, the Snapdiff feature uses the snapshot mechanism to quickly obtain differential file information and identify changed files. Only changed files are backed up, without the need for full scanning, greatly shortening backup time. Backup performance is not affected by the number of files, which greatly improves backup efficiency.
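The snapshot-based selection of changed files can be sketched as a diff between file metadata captured at two snapshots. The metadata shape (path mapped to size and modification time) is an assumption for the example, not the Snapdiff data format.

```python
# Conceptual sketch of incremental backup selection: compare file
# metadata from the previous and current snapshots and back up only
# files that are new or modified, avoiding a full tree scan.

def changed_files(prev_snap, curr_snap):
    """Return sorted paths that must be backed up incrementally.

    prev_snap, curr_snap: dicts of path -> (size, mtime).
    """
    return sorted(
        path for path, meta in curr_snap.items()
        if prev_snap.get(path) != meta     # new file or modified metadata
    )

prev = {"/a.txt": (10, 100), "/b.txt": (20, 100)}
curr = {"/a.txt": (10, 100),              # unchanged: skipped
        "/b.txt": (25, 130),              # modified: backed up
        "/c.txt": (5, 140)}               # new: backed up
print(changed_files(prev, curr))          # -> ['/b.txt', '/c.txt']
```

Because only the two metadata maps are compared, the work scales with the number of changed files rather than the total file count, which is the property the passage above attributes to Snapdiff.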
Intelligent O&M
The eService intelligent cloud management system (eService for short) improves customers'
O&M capabilities and takes planned maintenance actions to prevent potential risks.
Once authorized by a customer, eService monitors device alarms in 24/7 mode. Whenever an alarm is detected, eService automatically notifies the Huawei technical support center and creates service requests (SRs). Huawei service engineers then help customers solve problems in a timely manner.
l eService provides a self-service O&M system for customers, aiming for precise and
customized information services.
l Based on HUAWEI CLOUD, the eService cloud system drives IT O&M activities via
big data analytics and artificial intelligence (AI) technologies to identify faults in
advance, reduce O&M difficulties, and improve O&M efficiency.
l Data is encrypted during the data transmission, ensuring secure data transmission.
eService can access the customer's system only after being authorized by the customer.
l eService provides 24/7 secure, reliable, and proactive O&M services. SRs can be
automatically created.
l Customers can use any PC to access eService at any time and any place to view device
information.
eService enables the client system to work with the cloud system.
l eService client system:
Deployed on the customer side, the eService client system collects customer device alarms
and sends them to the eService cloud system in a timely manner to implement remote
maintenance functions, such as remote inspection and remote log collection.
l eService cloud system:
Deployed in Huawei technical support center, the eService cloud system receives device
alarms from the client system in 24/7 mode, automatically notifies Huawei technical support
personnel to handle the alarms in a timely manner, and supports automatic inspection and log
collection for devices on the customer side.
For details, see the eService Intelligent Cloud Management System User Guide or log in to
http://support.eservice.huawei.com to access and use eService.
3 Typical Applications
After the initial purchase, the storage system is equipped with affordable hard disk drives
(HDDs) to deliver data storage services. As the service requirements increase and the storage
system requires higher performance, administrators can add HDDs or SSDs to boost the
system performance. If even greater system performance is required, administrators can
replace all the existing HDDs with SSDs to further improve system performance.
In the example in Figure 3-2, application server A and controller A are faulty, and a link
between the cluster and the storage system is down. Under this circumstance, the redundant
components and links compensate for the failed ones, and services are switched to application
server B, which is running properly. This ensures nonstop system operation and greatly improves service availability.
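The failover behavior in this scenario can be sketched as a host-side multipath layer that routes I/O over the first healthy path and fails over when a controller or link goes down. The path names and health flags below are illustrative assumptions, not the actual multipathing software's API.

```python
# Sketch of path failover: pick the first healthy path; if the
# preferred path's controller or link fails, I/O moves to a survivor.

def pick_path(paths):
    """paths: ordered list of (name, healthy). Return the active path."""
    for name, healthy in paths:
        if healthy:
            return name
    raise RuntimeError("no healthy path: service outage")

paths = [("controller-A", True), ("controller-B", True)]
print(pick_path(paths))               # -> controller-A

paths[0] = ("controller-A", False)    # controller A fails
print(pick_path(paths))               # -> controller-B (failover)
```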
– HyperCopy: replicates data from the source LUN to the destination LUN at block
level. A LUN copy task can be performed within a storage system or among storage
systems (even if they are heterogeneous).
– HyperMirror: backs up data in real time. If the source data becomes unavailable,
applications can automatically use the data copy, ensuring data security and
application continuity.
– HyperMetro: synchronizes and replicates data between storage arrays, monitors
service operating status, and performs failovers. In addition, it can switch over
services and implement service load sharing when storage arrays are running.
l Disaster recovery
Disaster recovery is essential for critical applications that must continue operating even
during catastrophic disasters. Disaster recovery technologies involve many aspects such
as storage systems, application servers, application software, and technicians. From the
storage system aspect, the remote replication technology is used for disaster recovery
because it can back up data in real time.
The technology duplicates backup data in real time across sites, and utilizes the long
distance between sites to eliminate data loss. This ensures that data is readily available
on other sites if one site is destroyed.
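The real-time cross-site duplication described above can be sketched as a synchronous write path: the host's write is acknowledged only after both the local and remote copies hold it, so the remote site has current data if the local site is destroyed. The function and variable names are illustrative, not the replication feature's interface.

```python
# Sketch of a synchronous remote-replication write path: both sites
# must hold the data before the host receives an acknowledgement.

def replicated_write(local, remote, block, data):
    local[block] = data
    remote[block] = data          # in practice, sent over the inter-site link
    return "ack"                  # host is acknowledged after both copies land

local_site, remote_site = {}, {}
replicated_write(local_site, remote_site, 0, "orders-2019")
print(remote_site[0])             # -> orders-2019 (recoverable after disaster)
```

Asynchronous replication relaxes this by acknowledging the host before the remote copy lands, trading a bounded amount of potential data loss for lower write latency over long distances.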
Multi-Service Applications
It is common nowadays for one storage system to process diversified applications. However,
those applications have differentiated requirements on storage. Therefore, the storage system
must have high flexibility in performance and networking.
Each type of service has its specific requirements for storage systems:
l Database servers (featuring structured data) have high requirements on storage
performance, data integrity, and system stability.
l Mail servers (featuring highly random concurrent access) have high requirements on storage performance, data integrity, and system stability.
l Video servers have high requirements on storage capacity, data access continuity, and
continuous bandwidths.
l Backup servers have low requirements on performance and bandwidths.
The storage system supports an intermixed configuration of SSDs, SAS disks, and NL-SAS
disks to deliver optimal performance.
l SSDs: deliver the highest performance among these three types of disks, and are suitable
for application servers such as busy database servers and mail servers that require
superior storage performance.
l SAS disks: deliver performance lower than SSDs but higher than NL-SAS disks, and are
suitable for application servers such as common database servers, mail servers, and high-
definition (HD) video servers that have a moderate storage performance requirement.
l NL-SAS disks: deliver the lowest performance among these three types of disks, and are
suitable for application servers such as low-end video servers and backup servers that
have a low storage performance requirement.
Multiple front-end interface modules can be flexibly configured with customizable transmission rates, adapting to various network environments and providing storage services across different networks.
Figure 3-4 shows an example of multi-service application scenario.
4 Hardware Architecture
4.3.1 Overview
The controller enclosure adopts a modular design and consists of a system subrack,
controllers (housing backup power modules), power modules, and disk modules.
The 2 U controller enclosure of OceanStor 5110 V5 supports AC power modules, and that of
OceanStor 5300 V5 supports both AC and DC power modules. One 2 U controller enclosure
supports dual controllers only. The following figures show the structure of OceanStor 5300
V5 with dual controllers and AC power modules.
Overall Structure
Figure 4-10 shows the overall structure and components of a 2 U 25-disk controller enclosure
and Figure 4-11 shows the overall structure and components of a 2 U 12-disk controller
enclosure.
(Figure callouts: on the front panel, the system subrack houses the disk modules; on the rear panel, each of the two controllers (CTM) provides interface module slots IOM 0 and IOM 1, onboard ports H4 and H5, and 12 Gbit/s SAS expansion ports EXP0 and EXP1, flanked by the AC power modules.)
NOTE
Controller A is above controller B. Controllers communicate with each other using internal heartbeat
links and do not need cable connections.
Front View
Figure 4-12 shows the front view of a 2 U 25-disk controller enclosure and Figure 4-13
shows the front view of a 2 U 12-disk controller enclosure.
NOTE
l The disk slots of a 2 U 25-disk controller enclosure are numbered 0 to 24 from left to right. The four
coffer disks are located in slot 0 to slot 3.
l The disk slots of a 2 U 12-disk controller enclosure are numbered 0 to 11 from left to right and from
top to bottom. The four coffer disks are located in slot 0 to slot 3.
l SAS disks, NL-SAS disks, and SSDs can be used as coffer disks. All four coffer disks must be of the same type.
l Slots are used to accommodate and secure disks, interface modules, controller modules, fan
modules, and power modules.
l The information plate records device information.
Rear View
Figure 4-14 shows the rear view of a 2 U controller enclosure of OceanStor 5300 V5, with
the AC power supply and SmartIO interface modules.
Do not connect the management network port and maintenance network port to the same
switch.
Figure 4-14 Rear view of the OceanStor 5300 V5 controller enclosure (with AC power
modules)
NOTE
A 2 U controller enclosure houses controller A and controller B from top to bottom. From left to right,
the interface module slots of controller A are A0 and A1, and those of controller B are B0 and B1. When
the storage device requires IP Scale-out, SmartIO interface modules must be installed in A1 and B1
slots.
Appearance
Figure 4-15 shows the appearance of a system subrack.
4.3.2.2 Controller
A controller is the core component of a storage system. It processes storage services, receives
configuration management commands, saves configuration data, connects to disk enclosures,
and saves critical data onto coffer disks. Each controller houses one backup power module to
supply power to a controller enclosure in the event of unexpected power failures.
NOTE
Each controller has two built-in disks. The disks store the configuration data of the storage
system, cached data saved after a power failure, and OceanStor OS data. The disks built in one
controller and those built in the other are mutually redundant.
Appearance
Each controller supports two interface modules.
Figure 4-16 shows the appearance of a controller.
Ports
Figure 4-17 describes the ports of a controller.
Indicators
Table 4-2 describes the states and corresponding meanings of indicators on a controller after
it is powered on.
Speed indicator of the management network port
l Steady orange: Data is being transferred at the highest rate.
l Off: The data transfer speed is lower than the highest speed.
Indicator of the mini SAS HD expansion port
l Steady blue: Data is transferred to the downstream disk enclosure at the rate of 4 x 12 Gbit/s.
l Steady green: Data is transferred to the downstream disk enclosure at the rate of 4 x 3 Gbit/s or 4 x 6 Gbit/s.
l Steady red: The port is faulty.
l Off: The link to the port is down.
Link/Active indicator of the GE electrical port
l Steady green: The link to the application server is normal.
l Blinking green: Data is being transferred.
l Off: The link to the application server is down or no link exists.
Speed indicator of the GE electrical port
l Steady orange: The data transfer rate between the storage system and the application server is 1 Gbit/s.
l Off: The data transfer rate between the storage system and the application server is less than 1 Gbit/s.
Appearance
Figure 4-18 and Figure 4-19 show the front view of an AC power module and a DC power
module respectively.
Indicators
Table 4-3 describes indicators on a power module of a powered-on storage system.
Appearance
Figure 4-20 shows the appearance of a 2.5-inch disk module. Figure 4-21 shows the
appearance of a 3.5-inch disk module.
Indicators
Table 4-4 describes indicators on a disk module of a powered-on storage system.
Running indicator of the disk module
l Steady green: The disk module is working correctly.
l Blinking green: Data is being read and written on the disk module.
l Off: The disk module is powered off or powered on incorrectly.
Table 4-5 describes the indicators on the front panel of a controller enclosure.
Table 4-5 Description of the indicators on the front panel of a controller enclosure
Disk module: Running indicator of the disk module
l Steady green: The disk module is working correctly.
l Blinking green: Data is being read and written on the disk module.
l Off: The disk module is powered off or powered on incorrectly.
Table 4-6 describes the indicators on the rear panel of a controller enclosure.
Table 4-6 Description of the indicators on the rear panel of a controller enclosure
Module Indicator Status and Description
Speed indicator of the GE electrical port
l Steady orange: The data transfer rate between the storage system and the application server is 1 Gbit/s.
l Off: The data transfer rate between the storage system and the application server is less than 1 Gbit/s.
4.4.1 Overview
The controller enclosure adopts a modular design and consists of a system subrack,
controllers, Power-Fan/BBU modules, and disk modules.
Overall Structure
Figure 4-25 shows the overall structure and components of a 2 U 25-disk controller enclosure
and Figure 4-26 shows the overall structure and components of a 2 U 12-disk controller
enclosure.
NOTE
2 U controller enclosures support both AC and DC power modules. The following figure uses the AC
power module as an example.
NOTE
Controller A is above controller B. Controllers communicate with each other using internal heartbeat
links and do not need cable connections.
Front View
Figure 4-27 shows the front view of a 2 U 25-disk controller enclosure and Figure 4-28
shows the front view of a 2 U 12-disk controller enclosure.
NOTE
l The disk slots of a 2 U 25-disk controller enclosure are numbered 0 to 24 from left to right. The four
coffer disks are located in slot 0 to slot 3.
l The disk slots of a 2 U 12-disk controller enclosure are numbered 0 to 11 from left to right and from
top to bottom. The four coffer disks are located in slot 0 to slot 3.
l SAS, NL-SAS, and SSD disks can be used as coffer disks. The type of the four coffer disks must be
the same.
l Slots are used to accommodate and secure disks, interface modules, controller modules, fan
modules, and power modules.
l The information plate records device information.
Rear View
Figure 4-29 shows the rear view of a 2 U controller enclosure of OceanStor 5500 V5, with
AC power supply and 8 Gbit/s Fibre Channel interface modules.
Do not connect the management network port and maintenance network port to the same
switch.
NOTE
A 2 U controller enclosure houses controller A and controller B from top to bottom. The slots for
interface modules of controller A are A0 and A1, and the slots for interface modules of controller B are
B0 and B1. When the storage device requires IP Scale-out, SmartIO interface modules must be installed
in A1 and B1 slots.
Appearance
Figure 4-30 shows the appearance of a system subrack.
4.4.2.2 Controller
A controller is the core component of a storage system. It processes storage services, receives
configuration management commands, saves configuration data, connects to disk enclosures,
and saves critical data onto coffer disks.
NOTE
Each controller has one or more built-in disks to store system data. If a power failure occurs,
these disks also store cached data. The disks built in one controller and those built in the other
are mutually redundant.
Appearance
Figure 4-31 shows the appearance of a controller.
Ports
Figure 4-32 describes the ports of a controller.
Indicators
Table 4-7 describes the states and corresponding meanings of indicators on a controller after
it is powered on.
Link/Speed indicator of the 8 Gbit/s Fibre Channel port
l Steady blue: The data transfer rate between the storage system and the application server is 8 Gbit/s.
l Blinking blue: Data is being transferred.
l Steady green: The data transfer rate between the storage system and the application server is 2 Gbit/s or 4 Gbit/s.
l Blinking green: Data is being transferred.
l Steady red: The port is faulty.
l Off: The link to the port is down.
Speed indicator of the management network port
l Steady orange: Data is being transferred at the highest rate.
l Off: The data transfer speed is lower than the highest speed.
Mini SAS HD expansion port indicator
l Steady blue: The data transfer rate between the controller enclosure and the disk enclosure is 4 x 12 Gbit/s.
l Steady green: The data transfer rate between the controller enclosure and the disk enclosure is 4 x 3 Gbit/s or 4 x 6 Gbit/s.
l Steady red: The port is faulty.
l Off: The link is down.
Appearance
Figure 4-33, Figure 4-34, and Figure 4-35 show the front view of an AC Power-BBU
module, the front view of a DC Power-BBU module, and the rear view of a Power-BBU
module respectively.
Indicators
Table 4-8 describes indicators on a Power-BBU module of a powered-on storage system.
Appearance
Figure 4-36 shows the appearance of a 2.5-inch disk module. Figure 4-37 shows the
appearance of a 3.5-inch disk module.
Indicators
Table 4-9 describes indicators on a disk module of a powered-on storage system.
Running indicator of the disk module
l Steady green: The disk module is working correctly.
l Blinking green: Data is being read and written on the disk module.
l Off: The disk module is powered off or powered on incorrectly.
Table 4-10 describes the indicators on the front panel of a controller enclosure.
Table 4-10 Description of the indicators on the front panel of a controller enclosure
Module Indicator Status and Description
Disk module: Running indicator of the disk module
l Steady green: The disk module is working correctly.
l Blinking green: Data is being read and written on the disk module.
l Off: The disk module is powered off or powered on incorrectly.
Table 4-11 describes the indicators on the rear panel of a controller enclosure.
Table 4-11 Description of the indicators on the rear panel of a controller enclosure
4.5.1 Overview
The controller enclosure consists of a system subrack, controllers, BBU modules, power
modules, management modules, and interface modules.
Overall Structure
Figure 4-41 shows the overall structure of a 3 U controller enclosure.
NOTE
A 3 U controller enclosure can use AC or DC power modules. The following figure uses the AC power
module as an example.
Front View
Figure 4-42 shows the front view of a controller enclosure.
NOTE
l After opening the controller panel latch, you will see that each controller contains three fan modules.
l BBU slots are numbered 0 to 3 from left to right. BBUs are inserted into slots 0, 1, and 3. The other
slots are vacant (filler panels are installed for these slots).
l The information plate records device information.
l Controllers are controller A and controller B from left to right. Controllers communicate with each
other using internal heartbeat links and do not need cable connections.
Rear View
Figure 4-43 shows the rear view of a controller enclosure with the AC power module as an
example.
NOTE
A controller enclosure supports 8 Gbit/s Fibre Channel interface modules (four ports), GE electrical
interface modules, 10GE electrical interface modules, 10 Gbit/s FCoE interface modules (two ports), 56
Gbit/s InfiniBand interface modules, SmartIO interface modules, 8 Gbit/s Fibre Channel interface
modules (eight ports), and 12 Gbit/s SAS expansion modules.
Do not connect the management network port and maintenance network port to the same
switch.
Figure 4-43 Rear view of a controller enclosure with the AC power module
The slots for interface modules of a 3 U controller enclosure are B0, B1, B2, B3, B4, B5, B6,
B7, A7, A6, A5, A4, A3, A2, A1, and A0 from left to right. Among the slots, A0 to A7 are
slots for the interface modules of controller A and B0 to B7 are slots for the interface modules
of controller B.
NOTE
A controller enclosure provides the following interface modules. You can configure them based on
service needs.
l Slots A0 and B0 accommodate back-end ports and only allow 12 Gbit/s SAS expansion modules.
l A6, A7, B6, and B7 are slots for front-end interface modules and do not support 12 Gbit/s SAS
expansion modules.
l When the storage device requires IP Scale-out, SmartIO interface modules must be installed in A3
and B3 slots.
l Management module (mandatory): used for management and maintenance.
l 12 Gbit/s SAS expansion module (mandatory): used for connecting disk enclosures.
l Interface modules (optional but at least one type required): used for connecting application servers.
l When the maintenance network port is used for management and maintenance, it can only be
used by Huawei technical support for emergency maintenance and must not be connected to
the same network as the management network port. Otherwise, a network loopback may
occur, causing a network storm. The default IP address of the maintenance network port is
172.31.128.101 or 172.31.128.102, and the default subnet mask is 255.255.0.0. You are
advised to connect only the management network port to the network.
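To see why sharing a network is risky, note that the default 255.255.0.0 mask places the maintenance port in a large /16 network. The sketch below is our own illustration (not a Huawei tool) using Python's standard `ipaddress` module to check whether a candidate management address would land inside that subnet:

```python
# Illustrative sketch only (not a Huawei tool): uses the default
# maintenance-port addressing quoted above to show how large its
# subnet is and to check a candidate management IP against it.
import ipaddress

# 172.31.128.101 with mask 255.255.0.0 resolves to network 172.31.0.0/16.
MAINT_NET = ipaddress.ip_network("172.31.128.101/255.255.0.0", strict=False)


def in_maintenance_subnet(ip: str) -> bool:
    """Return True if the address falls inside the maintenance subnet."""
    return ipaddress.ip_address(ip) in MAINT_NET
```

Because of the /16 mask, every 172.31.x.x address shares the maintenance network, which is one reason the two ports must not join the same network.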
Appearance
Figure 4-44 shows the appearance of a system subrack.
4.5.2.2 Controller
A controller is the core component of a storage system. It processes storage services, receives
configuration management commands, saves configuration data, connects to disk enclosures,
and saves critical data onto coffer disks.
NOTE
Each controller has one or more built-in disks to store system data. If a power failure occurs,
these disks also store cached data. The disks built in one controller and those built in the other
are mutually redundant.
Appearance
Figure 4-45 shows the appearance of a controller. Figure 4-46 shows the front view of a
controller.
Indicators
Table 4-12 describes the indicators on a controller of a storage system that is powered on.
Appearance
Figure 4-47 shows the appearance of a fan module. Figure 4-48 shows the front view of a fan
module.
Indicators
Table 4-13 describes indicators on a fan module of a powered-on storage system.
4.5.2.4 BBU
A BBU provides backup power for the storage system during an external power failure,
protecting the integrity of service data. When the external power supply is normal, BBUs
remain on standby. If a power failure occurs, BBUs power the storage system so that it can
write cached data to the built-in disks of the controllers, preventing data loss. A faulty BBU
can be isolated without affecting the normal running of the storage system. After the external
power supply resumes, the system reads the data from the built-in disks of the controllers
back into the cache. In systems that use lithium batteries, battery capacity is measured and
recalibrated by charging and discharging the batteries. This detects in advance batteries
whose capacity has degraded during long periods of disuse and can no longer meet the
system's backup power requirements, which would otherwise cause the data backup to fail.
The reliability of data protection upon a system power failure is thereby improved.
Appearance
Figure 4-49 shows the appearance of a BBU. Figure 4-50 shows the front view of a BBU.
Indicator
Table 4-14 describes the indicator on a BBU of a storage system that is powered on.
Ports
Figure 4-51 shows a management module.
Indicators
Table 4-15 describes the indicators on a management module of a storage system that is
powered on.
Speed indicator of the management network port
l Steady orange: Data is being transferred at the highest rate.
l Off: The data transfer speed is lower than the highest speed.
Appearance
Figure 4-52 shows the appearance of an AC power module. Figure 4-53 shows the
appearance of a DC power module.
Indicators
Table 4-16 describes indicators on a power module of a powered-on storage system.
Table 4-17 describes the indicators on the front panel of a controller enclosure.
Table 4-18 describes the indicators on the rear panel of a controller enclosure.
Speed indicator of a GE electrical port
l Steady orange: The data transfer rate between the controller enclosure and the application server is 1 Gbit/s.
l Off: The data transfer rate between the controller enclosure and the application server is lower than 1 Gbit/s.
Speed indicator of a 10GE electrical port
l Steady orange: The data transfer rate between the controller enclosure and the application server is 10 Gbit/s.
l Off: The data transfer rate between the controller enclosure and the application server is lower than 10 Gbit/s.
Link/Speed indicator of a 10 Gbit/s FCoE port
l Steady blue: The data transfer rate between the storage system and the application server is 10 Gbit/s.
l Blinking blue: Data is being transferred.
l Steady red: The port is faulty.
l Off: The link to the port is down.
4.6.1 Overview
A controller enclosure employs a modular design and consists of a system subrack,
controllers, BBUs, power modules, management modules, and interface modules.
Overall Structure
Figure 4-56 shows the overall structure of a 6 U controller enclosure.
NOTE
A 6 U controller enclosure can use AC or DC power modules. The following figure uses the AC power
module as an example.
Front View
Figure 4-57 shows the front view of a controller enclosure.
NOTE
l After opening the controller panel latch, you will see that each controller contains three fan modules.
l The information plate records device information.
l Controllers A, B, C, and D are placed from left to right and from top to bottom. Controllers
communicate with each other using internal heartbeat links and do not need cable connections.
Rear View
Figure 4-58 shows the rear view of a controller enclosure with the AC power module as an
example.
NOTE
A controller enclosure supports 8 Gbit/s Fibre Channel interface modules (four ports), GE electrical
interface modules, 10GE electrical interface modules, 10 Gbit/s FCoE interface modules (two ports), 56
Gbit/s InfiniBand interface modules, SmartIO interface modules, 8 Gbit/s Fibre Channel interface
modules (eight ports), 16 Gbit/s Fibre Channel interface modules (eight ports), and 12 Gbit/s SAS
shared expansion modules.
Do not connect the management network port and maintenance network port to the same
switch.
Figure 4-58 Rear view of a controller enclosure with the AC power module
The slots for interface modules of a 6 U controller enclosure are L0, L1, L2, L3, L4, L5, R5,
R4, R3, R2, R1, and R0 from left to right. From top to bottom, the two interface-module rows
are IOM0 and IOM1.
l R5IOM0 to R0IOM0 are slots for interface modules of controller A.
l L0IOM0 to L5IOM0 are slots for interface modules of controller B.
l R5IOM1 to R0IOM1 are slots for interface modules of controller C.
l L0IOM1 to L5IOM1 are slots for interface modules of controller D.
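The slot-to-controller mapping above is regular enough to express as a lookup. The helper below is our own sketch of that rule (the function name is ours, not part of any Huawei tooling):

```python
# Illustrative sketch of the 6 U slot-to-controller rule listed above:
# R-side IOM0 slots belong to controller A, L-side IOM0 to controller B,
# R-side IOM1 to controller C, and L-side IOM1 to controller D.

def controller_for_slot(slot: str) -> str:
    """Map a 6 U interface-module slot name (e.g. 'R3IOM0') to its controller."""
    side, row = slot[0], slot[2:]  # e.g. 'R' and 'IOM0' for single-digit slots
    mapping = {
        ("R", "IOM0"): "A",
        ("L", "IOM0"): "B",
        ("R", "IOM1"): "C",
        ("L", "IOM1"): "D",
    }
    if (side, row) not in mapping:
        raise ValueError(f"unrecognized slot name: {slot}")
    return mapping[(side, row)]
```

For example, slot R3IOM0 belongs to controller A and slot L3IOM1 to controller D, matching the IP Scale-out slot requirements below.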
NOTE
A controller enclosure provides the following interface modules. You can configure them based on
service needs.
l The first pair of 12 Gbit/s SAS shared expansion modules are installed in slots L5 and R5, and the
second pair are installed in slots L4 and R4.
l Front-end interface modules are installed in slots L0, L1, L2, R0, R1, and R2. Slots L4 and R4 can
hold front-end interface modules only when all of those slots are fully configured and no SAS
interface modules are installed in L4 and R4.
l When the storage system requires IP Scale-out, SmartIO interface modules must be installed in
L3IOM0, R3IOM0, L3IOM1, and R3IOM1 slots.
l Insert interface modules of the same type into a slot of controller A and the corresponding slot of
controller B. Insert interface modules of the same type into a slot of controller C and the
corresponding slot of controller D.
For example, if you insert a 10 Gbit/s FCoE interface module into slot R2IOM0 of controller A, you
must insert a 10 Gbit/s FCoE interface module into slot L2IOM0 on controller B.
l Management module (mandatory): used for management and maintenance
l 12 Gbit/s SAS shared expansion module (mandatory): used for connecting disk enclosures
l Interface modules (optional but at least one type required): used for connecting application servers
l When the maintenance network port is used for management and maintenance, it can only be
used by Huawei technical support for emergency maintenance and must not be connected to
the same network as the management network port. Otherwise, a network loopback may
occur, causing a network storm. The default IP address of the maintenance network port is
172.31.128.101 or 172.31.128.102, and the default subnet mask is 255.255.0.0. You are
advised to connect only the management network port to the network.
Appearance
Figure 4-59 shows the appearance of a system subrack.
4.6.2.2 Controller
A controller is the core component of a storage system. It processes storage services, receives
configuration management commands, saves configuration data, connects to disk enclosures,
and saves critical data onto coffer disks.
NOTE
Each controller has one or more built-in disks to store system data. If a power failure occurs,
these disks also store cached data. The disks built in one controller and those built in the other
are mutually redundant.
Appearance
Figure 4-60 shows the appearance of a controller. Figure 4-61 shows the front view of a
controller.
Indicators
Table 4-19 describes the indicators on a controller of a storage system that is powered on.
Appearance
Figure 4-62 shows the appearance of an assistant cooling unit. Figure 4-63 shows the front
view of an assistant cooling unit.
Indicators
Table 4-20 describes the indicators on an assistant cooling unit of a storage system that is
powered on.
Assistant cooling unit alarm indicator
l Steady red: An alarm is generated on the assistant cooling unit.
l Off: The assistant cooling unit is working correctly.
Assistant cooling unit power indicator
l Steady green: The assistant cooling unit is powered on.
l Blinking green (0.5 Hz): The assistant cooling unit is powered on and in the BIOS boot process.
l Blinking green (2 Hz): The assistant cooling unit is in the operating system boot process.
l Off: The assistant cooling unit cannot be detected or is powered off.
Appearance
Figure 4-64 shows the appearance of a fan module. Figure 4-65 shows the front view of a fan
module.
Indicators
Table 4-21 describes indicators on a fan module of a powered-on storage system.
4.6.2.5 BBU
A BBU provides backup power for the storage system during an external power failure,
protecting the integrity of service data. When the external power supply is normal, BBUs
remain on standby. If a power failure occurs, BBUs power the storage system so that it can
write cached data to the built-in disks of the controllers, preventing data loss. A faulty BBU
can be isolated without affecting the normal running of the storage system. After the external
power supply resumes, the system reads the data from the built-in disks of the controllers
back into the cache. In systems that use lithium batteries, battery capacity is measured and
recalibrated by charging and discharging the batteries. This detects in advance batteries
whose capacity has degraded during long periods of disuse and can no longer meet the
system's backup power requirements, which would otherwise cause the data backup to fail.
The reliability of data protection upon a system power failure is thereby improved.
Appearance
Figure 4-66 shows the appearance of a BBU. Figure 4-67 shows the front view of a BBU.
Indicator
Table 4-22 describes the indicator on a BBU of a storage system that is powered on.
Ports
Figure 4-68 shows a management module.
Indicators
Table 4-23 describes the indicators on a management module of a storage system that is
powered on.
Speed indicator of the management network port
l Steady orange: Data is being transferred at the highest rate.
l Off: The data transfer speed is lower than the highest speed.
Appearance
Figure 4-69 shows the appearance of an AC power module. Figure 4-70 shows the
appearance of a DC power module.
Indicators
Table 4-24 describes indicators on a power module of a powered-on storage system.
Table 4-25 describes the indicators on the front panel of a controller enclosure.
Table 4-26 describes the indicators on the rear panel of a controller enclosure.
Ports
Figure 4-73 shows the appearance of a GE electrical interface module.
Indicators
Table 4-27 describes indicators on a GE electrical interface module of a powered-on storage
system.
Link/Active indicator of the GE electrical port
l Steady green: The link to the application server is normal.
l Blinking green: Data is being transferred.
l Off: The link to the application server is down or no link exists.
Ports
Figure 4-74 shows the appearance of a 10GE electrical interface module. 10GE electrical
interface modules of the storage system support GE/10GE auto-negotiation.
Indicators
Table 4-28 describes indicators on a 10GE electrical interface module of a powered-on
storage system.
Link/Active indicator of the 10GE electrical port
l Steady green: The link to the application server is normal.
l Blinking green: Data is being transferred.
l Off: The link to the application server is down or no link exists.
Speed indicator of the 10GE electrical port
l Steady yellow: The speed is the highest.
l Off: The speed is not the highest.
Interface
Figure 4-75 shows the appearance of a 40GE interface module.
Indicator
Table 4-29 describes the indicators on a 40GE interface module after the storage system is
powered on.
Speed indicator of the 40GE port
l Steady blue: The speed is the highest.
l Blinking blue (2 Hz): The port is transmitting data at the highest speed.
l Steady green: The speed is not the highest.
l Blinking green (2 Hz): The port is transmitting data, but not at the highest speed.
l Steady red: The optical module or cable is faulty or not supported by the port.
l Off: The port is not connected.
Function
A 100GE interface module provides two 100 Gbit/s optical ports.
Interface
Figure 4-76 shows the appearance of a 100GE interface module.
Indicator
Table 4-30 describes the indicators on a 100GE interface module after the storage system is
powered on.
Speed indicator of the 100GE port
l Steady blue: The speed is the highest.
l Blinking blue (2 Hz): The port is transmitting data at the highest speed.
l Steady green: The speed is not the highest.
l Blinking green (2 Hz): The port is transmitting data, but not at the highest speed.
l Steady red: The optical module or cable is faulty or not supported by the port.
l Off: The port is not connected.
Function
The SmartIO interface module supports 8, 16, and 32 Gbit/s Fibre Channel, 10GE, and 25GE ports.
Interface
l Figure 4-77 shows a SmartIO interface module (8 Gbit/s, 10 Gbit/s, and 16 Gbit/s).
l Figure 4-78, Figure 4-79, Figure 4-80, Figure 4-81, and Figure 4-82 show 8 Gbit/s, 10
Gbit/s, 16 Gbit/s, 25 Gbit/s, and 32 Gbit/s SmartIO interface modules, respectively.
Indicators
Table 4-31 describes the states of the indicators on a SmartIO interface module and their
meanings after the storage system is powered on.
NOTE
The SmartIO interface module supports multiple working modes which need to be adjusted based on the
used optical module. For details, see section "Checking and Setting the Working Mode of SmartIO
Interface Modules" in the Installation Guide.
Function
An 8 Gbit/s Fibre Channel interface module (four ports) provides four 8 Gbit/s Fibre Channel
ports. If the port speed is auto-negotiable, the port will auto-negotiate 2 Gbit/s, 4 Gbit/s, or 8
Gbit/s. If the port speed is manually set but inconsistent with the data transfer speed of the
connected application server, the connection will be interrupted.
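The negotiation behavior described above can be modeled simply: an auto-negotiating port settles on the highest rate both ends support, while a manually fixed rate that the server does not match drops the link. The sketch below is our own illustration of that rule, not Huawei firmware logic:

```python
# Illustrative model (ours, not Huawei firmware) of the 8 Gbit/s FC
# port speed behaviour described above.

FC_RATES_GBIT = (2, 4, 8)  # rates an 8 Gbit/s FC port can auto-negotiate


def negotiated_rate(port_setting, server_rates):
    """Return the resulting link rate in Gbit/s, or None if the link drops.

    port_setting is "auto" or a fixed rate in Gbit/s; server_rates lists
    the rates the connected application server supports.
    """
    if port_setting == "auto":
        common = [r for r in FC_RATES_GBIT if r in server_rates]
        return max(common) if common else None
    # A manually set rate must match the server, or the connection
    # is interrupted.
    return port_setting if port_setting in server_rates else None
```

For example, an auto-negotiating port facing a server that supports 4 Gbit/s links up at 4 Gbit/s, whereas a port fixed at 8 Gbit/s facing the same server drops the link.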
Ports
Figure 4-83 shows the appearance of an 8 Gbit/s Fibre Channel interface module (four ports).
Indicators
Table 4-32 describes the indicators on an 8 Gbit/s Fibre Channel interface module (four ports)
of a storage system that is powered on.
Table 4-32 Indicators on an 8 Gbit/s Fibre Channel interface module (four ports)
Indicators Status and Description
Link/Speed indicator of an 8 Gbit/s Fibre Channel port
l Steady blue: The data transfer rate between the storage system and the application server is 8 Gbit/s.
l Blinking blue: Data is being transferred.
l Steady green: The data transfer rate between the storage system and the application server is 2 Gbit/s or 4 Gbit/s.
l Blinking green: Data is being transferred.
l Steady red: The port is faulty.
l Off: The link to the port is down.
Interface
Figure 4-84 shows the appearance of an 8 Gbit/s Fibre Channel interface module (eight
ports).
Indicators
Table 4-33 describes the states of indicators and their meanings on an 8 Gbit/s Fibre Channel
interface module (eight ports) after the storage device is powered on.
Table 4-33 Indicator status description for an 8 Gbit/s Fibre Channel interface module (eight
ports)
Indicator Status Description
Link/Speed indicator of the 8 Gbit/s Fibre Channel port
l Steady blue: Data is being transmitted between the storage system and the application server at a rate of 8 Gbit/s.
l Blinking blue: Data is being transferred.
l Steady green: Data is being transmitted between the storage system and the application server at a rate of 2 Gbit/s or 4 Gbit/s.
l Blinking green: Data is being transmitted.
l Steady red: The port is faulty.
l Off: The port link is down.
Interface
Figure 4-85 shows the appearance of a 16 Gbit/s Fibre Channel interface module (eight
ports).
Indicators
Table 4-34 describes the states of indicators and their meanings on a 16 Gbit/s Fibre Channel
interface module (eight ports) after the storage device is powered on.
Table 4-34 Indicator status description for a 16 Gbit/s Fibre Channel interface module (eight
ports)
Link/Speed indicator of the 16 Gbit/s Fibre Channel port
l Steady blue: Data is being transmitted between the storage system and the application server at a rate of 16 Gbit/s.
l Blinking blue: Data is being transferred.
l Steady green: Data is being transmitted between the storage system and the application server at a rate of 4 Gbit/s or 8 Gbit/s.
l Blinking green: Data is being transmitted.
l Steady red: The port is faulty.
l Off: The port link is down.
Function
A 10 Gbit/s FCoE interface module provides two 10 Gbit/s FCoE ports.
Ports
Figure 4-86 shows the appearance of a 10 Gbit/s FCoE interface module.
NOTE
l A 10 Gbit/s two-port FCoE interface module only supports direct connection networking.
l You are not advised to run iSCSI and FCoE protocols simultaneously for a 10 Gbit/s two-port FCoE
interface module to prevent performance deterioration and fluctuation.
Indicators
Table 4-35 describes the indicators on a 10 Gbit/s FCoE interface module of a storage system
that is powered on.
Link/Speed indicator of a 10 Gbit/s FCoE port
l Steady blue: The data transfer rate between the storage system and the application server is 10 Gbit/s.
l Blinking blue: Data is being transferred.
l Steady red: The port is faulty.
l Off: The link to the port is down.
Interface
Figure 4-87 shows the appearance of a 56 Gbit/s InfiniBand interface module.
Indicators
Table 4-36 describes the states of indicators and their meanings on a 56 Gbit/s InfiniBand
interface module after the storage device is powered on.
Table 4-36 Indicator status description for a 56 Gbit/s InfiniBand interface module
Indicator Status Description
Link indicator of the 56 Gbit/s InfiniBand port
l Steady green: The port is connected properly.
l Off: The port link is down.
Function
A SAS interface module provides four 4 x 12 Gbit/s mini SAS HD expansion ports for
connecting disk enclosures. The SAS interface module connects to the back-end storage array
of the storage system through mini SAS HD cables. When the transfer rate of the connected
device is lower than that of the expansion port, the expansion port automatically falls back to
the rate of the connected device to keep the data transfer channel connected.
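Each mini SAS HD expansion port aggregates four lanes, so its peak bandwidth is 4 x 12 Gbit/s = 48 Gbit/s, and when the downstream device only supports a lower SAS lane rate the port drops to that rate. A minimal sketch of this rate-matching arithmetic (our own illustration, not Huawei logic):

```python
# Illustrative sketch of mini SAS HD rate adaptation: the port runs its
# four lanes at min(12 Gbit/s, the downstream device's lane rate).

LANES = 4
SUPPORTED_LANE_RATES = (3, 6, 12)  # SAS lane rates in Gbit/s
PORT_LANE_RATE = 12                # the expansion port's own lane rate


def effective_port_rate(device_lane_rate: int) -> int:
    """Aggregate Gbit/s across the four lanes after rate adaptation."""
    if device_lane_rate not in SUPPORTED_LANE_RATES:
        raise ValueError(f"unsupported SAS lane rate: {device_lane_rate}")
    return LANES * min(PORT_LANE_RATE, device_lane_rate)
```

This matches the indicator table below: a full-rate link runs at 4 x 12 Gbit/s, while a 3 Gbit/s or 6 Gbit/s device yields 4 x 3 or 4 x 6 Gbit/s.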
Ports
Figure 4-88 shows the appearance of a 12 Gbit/s SAS expansion module.
Indicators
Table 4-37 describes indicators on a 12 Gbit/s SAS expansion module of a powered-on
storage system.
Indicator of the mini SAS HD expansion port
l Steady blue: Data is transferred to the downstream disk enclosure at the rate of 4 x 12 Gbit/s.
l Steady green: Data is transferred to the downstream disk enclosure at the rate of 4 x 3 Gbit/s or 4 x 6 Gbit/s.
l Steady red: The port is faulty.
l Off: The link to the port is down.
Function
A 12 Gbit/s SAS shared expansion module on an engine provides twelve 4 x 12 Gbit/s mini
SAS ports that connect the engine to disk enclosures through mini SAS HD cables. When the
transfer rate of the connected device is lower than that of the expansion port, the expansion
port automatically falls back to the rate of the connected device to keep the data transfer
channel connected.
Ports
Figure 4-89 shows a 12 Gbit/s SAS shared expansion module.
Indicators
Table 4-38 describes indicators on a 12 Gbit/s SAS shared expansion module of a powered-on
storage system.
Port Link/Speed indicator
l Steady blue: Data is being transferred at the highest rate.
l Steady green: The data transfer speed is lower than the highest speed.
l Steady red: The port is faulty.
l Blinking red: The module is being located.
l Off: The link of the port is down.
4.8.1 Overview
The disk enclosure consists of a system subrack, expansion modules, disk modules, and
power modules.
Overall Structure
Figure 4-90 shows the overall structure of a disk enclosure.
NOTE
A 2 U SAS disk enclosure can use AC or DC power modules. The following figure uses the AC power
module as an example.
Front View
Figure 4-91 shows the front view of a disk enclosure.
Rear View
l Figure 4-92 shows the rear view of a disk enclosure with the DC power module.
Figure 4-92 Rear view of a disk enclosure with the DC power module
l Figure 4-93 shows the rear view of a disk enclosure with the AC power module.
Figure 4-93 Rear view of a disk enclosure with the AC power module
Appearance
Figure 4-94 shows the appearance of a system subrack.
Appearance
Figure 4-95 shows the appearance of an expansion module.
Ports
Figure 4-96 shows the ports of an expansion module.
Indicators
Table 4-39 describes indicators on an expansion module of a powered-on storage system.
Power indicator of the expansion module
l Steady green: The expansion module is powered on.
l Off: The expansion module is powered off.
Indicator of the mini SAS HD expansion port
l Steady blue: Data is transferred to the downstream disk enclosure at the rate of 4 x 12 Gbit/s.
l Steady green: Data is transferred to the downstream disk enclosure at the rate of 4 x 3 Gbit/s or 4 x 6 Gbit/s.
l Steady red: The port is faulty.
l Off: The link to the port is down.
Appearance
Figure 4-97 shows the appearance of an AC power module. Figure 4-98 shows the
appearance of a DC power module.
Indicators
Table 4-40 describes indicators on a power module of a powered-on storage system.
Running/Alarm indicator of the power/fan module
l Steady green: The power module and fan module are normal.
l Blinking green: The power input is normal but the device is powered off.
l Steady red: The power module or fan module is faulty.
l Off: No external power input is found.
Appearance
Figure 4-99 shows the appearance of a disk module.
Indicators
Table 4-41 describes indicators on a disk module of a powered-on storage system.
Running indicator of the disk module
l Steady green: The disk module is working correctly.
l Blinking green: Data is being read and written on the disk module.
l Off: The disk module is powered off or powered on incorrectly.
Table 4-42 describes the indicators on the front panel of the disk enclosure.
Table 4-42 Description of the indicators on the front panel of a disk enclosure
Module Indicator Status and Description
Disk module: Running indicator of the disk module
l Steady green: The disk module is working correctly.
l Blinking green: Data is being read and written on the disk module.
l Off: The disk module is powered off or powered on incorrectly.
System subrack: Location indicator of the disk enclosure
l Blinking blue: The disk enclosure is being located.
l Off: The disk enclosure is not located.
Table 4-43 describes the indicators on the rear panel of the disk enclosure.
Table 4-43 Description of the indicators on the rear panel of a disk enclosure
Power module: Running/Alarm indicator of the power/fan module
l Steady green: The power module and fan module are normal.
l Blinking green: The power input is normal but the device is powered off.
l Steady red: The power module or fan module is faulty.
l Off: No external power input is found.
4.9.1 Overview
The disk enclosure consists of a system subrack, expansion modules, power modules, fan
modules, and disk modules.
Overall Structure
Figure 4-102 shows the overall structure of a 4 U disk enclosure.
Front View
Figure 4-103 shows the front view of a 4 U disk enclosure.
NOTE
The disk slots of a 4 U disk enclosure are numbered 0 to 23 from left to right and from top to bottom.
The first four disks in the first disk enclosure that is connected to the 3 U or 6 U controller enclosure are
coffer disks. The coffer disks are inserted into slot 0 to slot 3.
Rear View
Figure 4-104 shows the rear view of a disk enclosure with the AC power module as an
example.
Appearance
Figure 4-105 shows the appearance of a system subrack.
Appearance
Figure 4-106 shows the appearance of an expansion module.
Ports
Figure 4-107 shows the ports of an expansion module.
Indicators
Table 4-44 describes indicators on an expansion module of a powered-on storage system.
Power indicator of the expansion module
l Steady green: The expansion module is powered on.
l Off: The expansion module is powered off.
Indicator of the mini SAS HD expansion port
l Steady blue: Data is transferred to the downstream disk enclosure at the rate of 4 x 12 Gbit/s.
l Steady green: Data is transferred to the downstream disk enclosure at the rate of 4 x 3 Gbit/s or 4 x 6 Gbit/s.
l Steady red: The port is faulty.
l Off: The link to the port is down.
Appearance
Figure 4-108 shows the appearance of an AC power module. Figure 4-109 shows the
appearance of a DC power module.
Indicators
Table 4-45 describes indicators on a power module of a powered-on storage system.
Running/Alarm indicator of the power/fan module
l Steady green: The power module and fan module are normal.
l Blinking green: The power input is normal but the device is powered off.
l Steady red: The power module or fan module is faulty.
l Off: No external power input is found.
Appearance
Figure 4-110 shows the appearance of a fan module.
Indicators
Table 4-46 describes indicators on a fan module of a powered-on storage system.
Appearance
Figure 4-111 shows the appearance of a disk module.
Indicators
Table 4-47 describes indicators on a disk module of a powered-on storage system.
Running indicator of the disk module
l Steady green: The disk module is working correctly.
l Blinking green: Data is being read and written on the disk module.
l Off: The disk module is powered off or powered on incorrectly.
Table 4-48 describes the indicators on the front panel of the disk enclosure.
Table 4-48 Description of the indicators on the front panel of a disk enclosure
Disk module: Running indicator of the disk module
l Steady green: The disk module is working correctly.
l Blinking green: Data is being read and written on the disk module.
l Off: The disk module is powered off or powered on incorrectly.
System subrack: Location indicator of the disk enclosure
l Blinking blue: The disk enclosure is being located.
l Off: The disk enclosure is not located.
Table 4-49 describes the indicators on the rear panel of the disk enclosure.
Table 4-49 Description of the indicators on the rear panel of a disk enclosure
Module Indicator Status and Description
Fan module: Running/Alarm indicator of the fan module
l Steady green: The fan module is working correctly.
l Steady red: The fan module is faulty.
l Off: The fan module is powered off.
Power module: Running/Alarm indicator of the power/fan module
l Steady green: The power module and fan module are normal.
l Blinking green: The power input is normal but the device is powered off.
l Steady red: The power module or fan module is faulty.
l Off: No external power input is found.
4.10.1 Overview
A high-density disk enclosure employs a modular design and consists of a system subrack,
disk modules, fan modules, power modules, and expansion modules.
Overall Structure
Figure 4-114 shows the overall structure of a high-density disk enclosure.
Figure 4-114 Overall structure of a high-density disk enclosure with four 1200 W power
modules
Front View
Figure 4-115 shows the front view of a high-density disk enclosure.
Rear View
Figure 4-116 shows the rear view of a high-density disk enclosure.
Top View
Figure 4-117 shows the top view of a high-density disk enclosure.
The disk number of a high-density disk enclosure displayed on DeviceManager or CLI ranges
from 0 to 74. These disks are numbered from left to right (15 columns) and from bottom to
top (five rows). The slots of a high-density disk enclosure are numbered 0 to 14 from left to
right (15 columns), and A to E from bottom to top (five rows). For example, in the preceding
figure, the disk in the red box is numbered 40 in slot C10.
Table 4-50 lists the mappings between disk numbers and slot numbers of high-density disk
enclosures.
Table 4-50 Mappings between disk numbers and slot numbers of high-density disk enclosures
Disk Slot Disk Slot Disk Slot Disk Slot Disk Slot
No. No. No. No. No. No. No. No. No. No.
0 A0 15 B0 30 C0 45 D0 60 E0
1 A1 16 B1 31 C1 46 D1 61 E1
2 A2 17 B2 32 C2 47 D2 62 E2
3 A3 18 B3 33 C3 48 D3 63 E3
4 A4 19 B4 34 C4 49 D4 64 E4
5 A5 20 B5 35 C5 50 D5 65 E5
6 A6 21 B6 36 C6 51 D6 66 E6
7 A7 22 B7 37 C7 52 D7 67 E7
8 A8 23 B8 38 C8 53 D8 68 E8
9 A9 24 B9 39 C9 54 D9 69 E9
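The numbering rule above (15 columns left to right, rows A to E bottom to top) maps cleanly to a pair of helper functions. This is an illustrative sketch; the function names are ours, not DeviceManager or CLI commands.

```python
# High-density enclosure layout: disks 0-74 run left to right across
# 15 columns, then bottom to top across rows A-E.
ROWS = "ABCDE"   # bottom to top
COLS = 15        # columns per row

def disk_to_slot(disk_no: int) -> str:
    """Convert a disk number (0-74) to its slot number, e.g. 40 -> 'C10'."""
    if not 0 <= disk_no <= 74:
        raise ValueError("high-density enclosure disk numbers are 0-74")
    return f"{ROWS[disk_no // COLS]}{disk_no % COLS}"

def slot_to_disk(slot: str) -> int:
    """Convert a slot number back to a disk number, e.g. 'C10' -> 40."""
    row, col = slot[0], int(slot[1:])
    return ROWS.index(row) * COLS + col

# The example from the text: disk 40 sits in slot C10.
assert disk_to_slot(40) == "C10"
assert slot_to_disk("C10") == 40
```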
Appearance
Figure 4-118 shows the appearance of a system subrack.
Appearance
Figure 4-119 shows the appearance of an expansion module.
Ports
Figure 4-120 shows the ports of an expansion module.
Indicators
Table 4-51 describes the indicators on a disk enclosure expansion module of a storage system
that is powered on.
Alarm indicator of the expansion module
l Steady red: An alarm about the expansion module is generated.
l Off: The expansion module is powered off or working correctly.
Mini SAS HD expansion port indicator
l Steady blue: The link to the expansion port is normal, and the data transfer rate is 4 x 12 Gbit/s.
l Steady green: The link to the expansion port is normal, and the data transfer rate is 4 x 6 Gbit/s.
l Steady red: The port is faulty.
l Off: The link to the expansion port is down.
Appearance
Figure 4-121 shows the appearance of a disk module.
Indicator
Table 4-52 describes the indicator on a disk module of a storage system that is powered on.
Disk module status indicator
l Steady green: The disk module is working correctly.
l Blinking green: Data is being read and written on the disk module.
l Steady red: The disk module is faulty.
l Blinking red: The disk module is being located.
l Off: The disk module is powered off or powered on incorrectly.
Appearance
Figure 4-122 shows the appearance of a power module.
Indicator
Table 4-53 describes the indicator on a power module of a storage system that is powered on.
Appearance
Figure 4-123 shows the appearance of a fan module.
Indicator
Table 4-54 describes the indicator on a fan module of a storage system that is powered on.
Running/Alarm indicator of the fan module
l Steady green: The fan module is working correctly.
l Steady red: The fan module is faulty.
l Off: The fan module is powered off.
Table 4-55 describes the indicators on the front panel of a high-density disk enclosure.
Table 4-55 Description of the indicators on the front panel of a high-density disk enclosure
Module Indicator Status and Description
System subrack: Location indicator
l Blinking blue: The high-density disk enclosure is being located.
l Off: The high-density disk enclosure is not located.
Rear module: Alarm indicator
l Steady red: The number of rear field replaceable units (FRUs) is fewer than half of that in the standard configuration, or rear FRUs are faulty.
NOTE
Modules on the rear of the high-density disk enclosure include power modules, fan modules, and expansion modules.
l Off: Rear FRUs are running correctly.
Table 4-56 describes the indicators on the rear panel of a high-density disk enclosure.
Table 4-56 Description of the indicators on the rear panel of a high-density disk enclosure
Module Indicator Status and Description
Expansion module: Indicator of the mini SAS HD expansion port
l Steady blue: The link is up and the data transfer rate is 4 x 12 Gbit/s.
l Steady green: The link is up and the data transfer rate is 4 x 6 Gbit/s.
l Steady red: The expansion port is faulty.
l Off: The link is down.
Fan module: Running/Alarm indicator of the fan module
l Steady green: The fan module is running correctly.
l Steady red: The fan module is faulty.
l Off: The fan module is not powered on.
Size: 2 x 16 GB SSDs | 2 x 32 GB SSDs | 1 x 64 GB SSD | 1 x 64 GB SSD | 2 x 64 GB SSDs | 1 x 800 GB NVMe SSD | -
Positions
l If a storage system employs the disk and controller integration architecture, the first four
disks in the storage system are configured as coffer disks. Figure 4-128 uses a 2 U
controller enclosure with 25 disk slots as an example.
Figure 4-128 Positions of external coffer disks in the disk and controller integration
architecture
l If a storage system employs the disk and controller separation architecture, the first four
disks in the first disk enclosure are planned as coffer disks. Figure 4-129 uses a 2 U disk
enclosure with 25 disk slots as an example.
Figure 4-129 Positions of external coffer disks in the disk and controller separation
architecture
Capacity partitions: Each of the four disks reserves 5 GB (for 5000 V5 series) or 7 GB (for 6000 V5 series) of space to form a RAID 1 group. The rest of the coffer disk space can be used to store service data. Table 4-58 describes capacity partitions of external coffer disks.
LogZone partition: 2 GB (for 5000 V5 series) or 4 GB (for 6000 V5 series). Stores system logs and run logs when the storage system is powered off and write through is enabled. The four coffer disks are mirrors of each other for redundancy.
Data Switch
The data switch used by OceanStor storage systems is CE6855-48S6Q-HI, as shown in
Figure 4-130.
NOTE
When HyperMetro is used, the storage systems can also connect to third-party quorum servers. For the
compatibility requirements on third-party quorum servers, see Huawei Storage Interoperability
Navigator.
Table 4-59 describes the indicators and buttons on the quorum server front panel.
NOTE
The default IP address of the management network port on the quorum server is 192.168.128.200, and
the default subnet mask is 255.255.255.0.
Table 4-60 describes the indicators on the quorum server rear panel.
Table 4-61 describes the indicators and buttons on the quorum server front panel.
Power button/indicator (yellow and green)
l Off: The quorum server is not powered on.
l Blinking yellow: The system is being started.
l Steady yellow: The system is in the standby state.
l Steady green: The system is properly powered on.
NOTE
You can hold down the power button for 6
seconds to power off the quorum server.
Health indicator (red and green)
l Steady green: The quorum server is operating properly.
l Blinking red at 1 Hz: A major alarm is generated.
l Blinking red at 5 Hz: A critical alarm is generated.
NOTE
l The default IP address of the management network port on the quorum server is 192.168.2.100, and
the default subnet mask is 255.255.255.0.
l a: This port is reserved and does not have any function. Do not connect cables here.
Table 4-62 describes the indicators on the quorum server rear panel.
UID indicator (blue): The UID indicator helps identify and locate a quorum server in a rack. You can turn on or off the UID indicator by manually pressing the UID button or remotely running a command on the iBMC CLI.
l Steady on: The quorum server is located.
l Off: The quorum server is not located.
l You can hold down the UID button for 4 to 6 seconds to reset the system.
Power module indicator (green)
l Steady green: The power input is normal.
l Off: There is no AC power input, or the power module is in the standby state or is faulty.
DC Power
Each DC power module is equipped with two DC power cables. Figure 4-135 shows the
appearance of DC power cables.
NOTE
Connect the black cable to the positive pole of the battery (+) and the blue cable to the negative pole (-).
AC Power
l Each AC power module is equipped with one AC power cable. Figure 4-136 shows the
appearance of an AC power cable.
l If a cabinet is equipped with PDUs, use PDU power cables to supply power to devices in
the cabinet. Figure 4-137 shows the appearance of a PDU power cable.
Appearance
Figure 4-138 shows the appearance of a ground cable.
Appearance
The storage system communicates with the external network using network cables. One end
of the network cable connects to the management network port, service network port, or other
maintenance network port of the storage system, and the other end connects to the network
switch, application server, or other devices.
Figure 4-139 shows the appearance of a network cable.
NOTE
GE electrical ports employ CAT5 network cables or CAT6A shielded network cables. 10GE electrical
ports employ 1 m to 3 m CAT6A shielded network cables.
Appearance
A serial cable connects the serial port of the storage system to the port of the maintenance
terminal.
One end of a serial cable is the RJ-45 port used to connect to the serial port of a storage
system. The other end is a DB-9 port used to connect to the port of the maintenance terminal.
Figure 4-140 shows the appearance of a serial cable.
NOTE
l For the lengths of the mini SAS HD electrical and optical cables, see the "Hardware Specifications".
l For OceanStor 5110 V5/5300 V5, use mini SAS HD electrical cables to connect controller
enclosures to disk enclosures. It is recommended that a controller enclosure and its connected disk
enclosure be installed in the same cabinet.
l The mini SAS HD optical cables can be used to connect devices over distance, for example, cross-
cabinet connections.
l The optical connector of a mini SAS HD optical cable has a built-in O/E conversion module and
provides electrical ports.
NOTE
The connector of a mini SAS HD optical cable is different from that of an optical fiber. Bind the mini SAS HD optical cable according to the cable binding method. For details about how to bind the mini SAS HD optical cable, see section "Cable Routing and Binding Basics" in Installation Guide.
NOTE
l Huawei provides orange OM1 optical fibers and blue OM3 and OM4 optical fibers.
l Huawei provides OM1 optical fibers no longer than 10 m.
l When connecting cables, select proper cables according to site requirements and label information.
l For details about how to bind the cables, see section "Cable Routing and Bundling Basics" in
Installation Guide.
5 Software Architecture
Storage system software manages storage devices and the data stored on them, and assists
application servers in data operations.
The software suite of the storage system (OceanStor 5110 V5, 5300 V5, 5500 V5, 5600 V5,
5800 V5, or 6800 V5) consists of software running on a storage system, software running on a
maintenance terminal, and software running on an application server. These three types of
software work jointly to deliver storage, backup, and disaster recovery services in a smart,
efficient, and cost-effective manner.
Figure 5-1 shows the storage system software architecture.
(Figure 5-1 shows management software such as SmartKit, OceanStor BCManager eReplication, OceanStor eService, UltraPath, OceanStor SystemReporter, and eSDK OceanStor connecting to the software running on a storage system over Fibre Channel/iSCSI channels, the management network port, or the serial port. The software running on a storage system includes management function control software, namely OceanStor DeviceManager, SNMP, CLI, and Syslog, as well as modules such as quota management, the file protocol module, the volume management module of file systems, SmartMigration, SmartVirtualization, SmartErase, SmartMulti-Tenant, SmartDedupe & SmartCompression, HyperMirror, HyperVault, HyperLock, and HyperMetro.)
Table 5-1 describes the software running on a storage system. The dedicated operating system
OceanStor OS manages storage system hardware and supports the running of storage service
software. The basic function control software provides basic data storage and access
functions. The value-added function control software provides advanced functions such as
backup, disaster recovery, and performance tuning. The management function control
software provides the management utilities to the storage system.
Basic function control software: The SCSI software module manages the status of SCSI commands, and dispatches, resolves, and processes SCSI commands.
Table 5-2 describes the software running on a maintenance terminal. Maintenance terminal
software configures and maintains the storage system. The software includes SmartKit,
OceanStor SystemReporter, and OceanStor eService.
SmartKit: Helps service engineers and O&M engineers deploy, maintain, and upgrade devices.
OceanStor eService: Remote maintenance and management software used for device monitoring, alarm reporting, and device inspection.
OceanStor SystemReporter: A dedicated performance and capacity report analysis tool for the storage system.
Table 5-3 describes the software running on an application server. On a SAN network,
software running on an application server enables the application server to communicate and
cooperate with the storage system. This software category includes BCManager eReplication,
UltraPath, and eSDK OceanStor.
6 Product Specifications
Category Description
Dimensions and Weight (Unpackaged): Describes the dimensions and weight of controller enclosures and disk enclosures.
Hardware Configuration
Item 5300 V5 5500 V5 5600 V5 5800 V5 6800 V5
Maximum number of controllers per enclosure: 2 | 4
Maximum number of IP scale-out controllers: 8
Maximum number of disk enclosures:
l 5300 V5, versions earlier than V500R007C20: 2 U SAS disk enclosure: 21; 4 U SAS disk enclosure: 21; 4 U SAS high-density disk enclosure: 7
l 5300 V5, V500R007C20 and later versions: 2 U SAS disk enclosure: 42; 4 U SAS disk enclosure: 42; 4 U SAS high-density disk enclosure: 14
l 5500 V5, versions earlier than V500R007C20: 2 U SAS disk enclosure: 31; 4 U SAS disk enclosure: 31; 4 U SAS high-density disk enclosure: 10
l 5500 V5, V500R007C20 and later versions: 2 U SAS disk enclosure: 48; 4 U SAS disk enclosure: 48; 4 U SAS high-density disk enclosure: 16
l 5600 V5, versions earlier than V500R007C20: 2 U SAS disk enclosure: 50; 4 U SAS disk enclosure: 50; 4 U SAS high-density disk enclosure: 16
l 5600 V5, V500R007C20 and later versions: 2 U SAS disk enclosure: 67; 4 U SAS disk enclosure: 67; 4 U SAS high-density disk enclosure: 22
l 5800 V5, versions earlier than V500R007C20: 2 U SAS disk enclosure: 63; 4 U SAS disk enclosure: 63; 4 U SAS high-density disk enclosure: 20
l 5800 V5, V500R007C20 and later versions: 2 U SAS disk enclosure: 84; 4 U SAS disk enclosure: 84; 4 U SAS high-density disk enclosure: 24
l 6800 V5: 2 U SAS disk enclosure: 96; 4 U SAS disk enclosure: 96; 4 U SAS high-density disk enclosure: 24
Maximum number of hot-swappable I/O interface modules per controller: 2 | 8 | 6
Port Specifications
Maximum Number of Ports per Interface Module: 5300 V5 | 5500 V5 | 5600 V5 | 5800 V5 | 6800 V5
56 Gbit/s (4 x 14 Gbit/s) IB interface module: - (5300 V5) | Two ports for each front-end module (the module is used for SAN services only and cannot be used for NAS services; it supports electrical ports only).
SmartIO interface module (8 Gbit/s, 10 Gbit/s, and 16 Gbit/s): Used for front-end access and networking between storage arrays, each with four ports. The port type can be 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, or 10 Gbit/s ETH (optical).
SmartIO interface module (8 Gbit/s, 10 Gbit/s, 16 Gbit/s, 25 Gbit/s, and 32 Gbit/s): Used for front-end access, each with four optical ports. The port type can be 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, 32 Gbit/s Fibre Channel, 10 Gbit/s ETH, or 25 Gbit/s ETH (cannot be negotiated to GE). NOTE: Applicable to V500R007C30 and later versions.
40GE interface module: Two 40 Gbit/s ETH ports (optical) for each front-end module
100GE interface module: Two 100 Gbit/s ETH ports (optical) for each front-end module
16 Gbit/s Fibre Channel port: 16 | 20e | 28 | 28
GE port: 14 | 8 | 28 | 20
10GE port (electrical): 8 | 8 | 28 | 20
10GE port (optical): 8 | 12 | 28 | 20
10 Gbit/s FCoE port (VN2VF): 8 | 8 | 28 | 20
10 Gbit/s FCoE port (VN2VN): - | 4 | 14 | 10
12 Gbit/s SAS expansion port: 6 | 6 | 24 | -
12 Gbit/s SAS shared expansion port: - | 48 (NOTE: This configuration is for dual-controller or single-engine scenarios.)
56 Gbit/s (4 x 14 Gbit/s) IB port: - | 4 | 14 | 10
40GE port: 4 | 4 | 14 | 14 | 10
100GE port: 4 | 4 | 14 | 14 | 10
32 Gbit/s Fibre Channel port: 8 | 8 | 28 | 28 | 20
25GE port (optical): 8 | 8 | 28 | 28 | 20
a: On OceanStor 5300 V5, the onboard front-end ports are GE ports and onboard back-end
ports are SAS ports.
b: On OceanStor 5500 V5, the onboard front-end ports are SmartIO ports and onboard
back-end ports are SAS ports.
c: The number of ports can reach the upper limit when 8 Gbit/s Fibre Channel high-density
interface modules are configured.
d: The number of ports can reach the upper limit when 8 Gbit/s Fibre Channel high-density
interface modules are configured and 8 Gbit/s Fibre Channel optical modules are
configured for onboard SmartIO ports.
e: The number of ports can reach the upper limit when 16 Gbit/s Fibre Channel high-
density interface modules are configured and 16 Gbit/s Fibre Channel optical modules are
configured for onboard SmartIO ports.
Disk Specifications
Disk Typea | Dimensions | Rotational Speed | Weight | Capacity
a: Restricted by the storage principles, SSDs and mechanical disks such as NL-SAS and
SAS disks cannot be preserved for a long term while they are powered off.
l SSDs where no data is stored can be preserved for a maximum of 12 months while they
are powered off. SSDs where data has been stored can be preserved for a maximum of 3
months while they are powered off. If the maximum preservation time is exceeded, data
loss or SSD failure may occur.
l Packed mechanical disks and unpacked mechanical disks that are powered off can be
preserved for a maximum of six months. If the maximum preservation time is exceeded,
data loss or disk failure may occur. The maximum preservation time is determined
based on the disk preservation specifications provided by the mechanical disk vendor.
For details about the specifications, see the manual provided by the vendor.
b: Self-encrypting drives (SEDs) are supported (not sold in mainland China).
c: High-density disk enclosures are supported.
d: SEDs and high-density disk enclosures are supported.
Electrical Specifications
Item 5300 V5 5500 V5 5600 V5 5800 V5 6800 V5
4 U high-density disk enclosure power consumption:
l Max: 1250 W
l Typical: 995 W
l Min: 735 W
Disk enclosure power input:
l AC: 100 V to 240 V, AC±10%, 10 A, single-phase, 50/60 Hz
l DC: -48 V to -60 V, DC±20%, 18.5 A
l High voltage DC (N/A for North America and Canada): 240 V, DC±20%, 10 A
High-density disk enclosure power input (AC):
l 100 V to 127 V, AC±10%, 10 A, single-phase, 50/60 Hz
l 200 V to 240 V, AC±10%, 5 A, single-phase, 50/60 Hz
Power input type (socket type):
l AC: IEC60320-C14
l High voltage DC: IEC60320-C14
l DC: OT-M6
Each BBU capacity/Overall power backup duration: - | 16 Wh | 32 Wh | 32 Wh | 32 Wh
Reliability Specifications
Item Value
Dimensions and Weight (Unpackaged): Describes the dimensions and weight of controller enclosures and disk enclosures.
Hardware Configuration
Item 5110 V5
Supported disk enclosure types:
l 2 U SAS disk enclosure with twenty-five 2.5-inch disks
l 4 U SAS disk enclosure with twenty-four 3.5-inch disks
Maximum number of disk enclosures connected to back-end channels (ports): A maximum of eight SAS disk enclosures can be connected to a pair of SAS ports. Two are recommended.
Port Specifications
Maximum Number of Ports per Interface Module: 5110 V5
SmartIO interface module (8 Gbit/s, 10 Gbit/s, and 16 Gbit/s): Used for front-end access and networking between storage arrays, each with four ports. The port type can be 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, or 10 Gbit/s ETH (optical).
SmartIO interface module (8 Gbit/s, 10 Gbit/s, 16 Gbit/s, 25 Gbit/s, and 32 Gbit/s): Used for front-end access, each with four optical ports. The port type can be 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, 32 Gbit/s Fibre Channel, 10 Gbit/s ETH, or 25 Gbit/s ETH (cannot be negotiated to GE).
GE port: 14
a: On OceanStor 5110 V5, the onboard front-end ports are GE ports and onboard back-end ports are SAS ports.
Disk Specifications
Disk Typea | Dimensions | Rotational Speed | Weight | Capacity
a: Restricted by the storage principles, SSDs and mechanical disks such as NL-SAS and
SAS disks cannot be preserved for a long term while they are powered off.
l SSDs where no data is stored can be preserved for a maximum of 12 months while they
are powered off. SSDs where data has been stored can be preserved for a maximum of 3
months while they are powered off. If the maximum preservation time is exceeded, data
loss or SSD failure may occur.
l Packed mechanical disks and unpacked mechanical disks that are powered off can be
preserved for a maximum of six months. If the maximum preservation time is exceeded,
data loss or disk failure may occur. The maximum preservation time is determined
based on the disk preservation specifications provided by the mechanical disk vendor.
For details about the specifications, see the manual provided by the vendor.
b: Self-encrypting drives (SEDs) are supported (not sold in mainland China).
Electrical Specifications
Power Item: 5110 V5
IP switch: AC: 100 V to 240 V, AC±10%, 10 A, single-phase, 50/60 Hz
Reliability Specifications
Item Value
License Control: Describes whether software features of the storage unit are controlled by licenses.
Basic Specifications
Item 5300 V5 5500 V5 5600 V5 5800 V5 6800 V5
Maximum number of hosts per host group: 64
Maximum number of PE LUNs: 64
Maximum number of disk domains: 32 | 64 | 128
Minimum number of disks in a disk domain per engine: 4
Maximum number of storage pools: 32 | 64 | 128
Minimum capacity of a LUN: 512 KB
Maximum capacity of a LUN: 256 TB
Maximum number of file systems:
l 5300 V5: The total number of clone file systems and file systems cannot exceed 1024. The total number of clone file systems, file systems, LUNs, and writable LUN snapshots cannot exceed 4096.
l 5500 V5: The total number of clone file systems and file systems cannot exceed 2048. The total number of clone file systems, file systems, LUNs, and writable LUN snapshots cannot exceed 8192.
l 5600 V5: The total number of clone file systems and file systems cannot exceed 2048. The total number of clone file systems, file systems, LUNs, and writable LUN snapshots cannot exceed 16,384.
l 5800 V5: The total number of clone file systems and file systems cannot exceed 4096. The total number of clone file systems, file systems, LUNs, and writable LUN snapshots cannot exceed 16,384.
l 6800 V5: The total number of clone file systems and file systems cannot exceed 4096. The total number of clone file systems, file systems, LUNs, and writable LUN snapshots cannot exceed 65,536.
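The per-model count limits above lend themselves to a simple configuration sanity check. This is a hypothetical helper for illustration only; FS_LIMITS and within_limits are our names, not a product API, and the values are copied from the table.

```python
# Per-model limits from the table: (max clone FS + FS,
# max clone FS + FS + LUNs + writable LUN snapshots).
FS_LIMITS = {
    "5300 V5": (1024, 4096),
    "5500 V5": (2048, 8192),
    "5600 V5": (2048, 16384),
    "5800 V5": (4096, 16384),
    "6800 V5": (4096, 65536),
}

def within_limits(model, clone_fs, fs, luns, writable_snaps):
    """Return True if the object counts respect both limits for the model."""
    fs_cap, total_cap = FS_LIMITS[model]
    return (clone_fs + fs) <= fs_cap and \
           (clone_fs + fs + luns + writable_snaps) <= total_cap

# 1000 file-system objects and 3000 other objects fit on a 5300 V5...
assert within_limits("5300 V5", clone_fs=200, fs=800, luns=2000, writable_snaps=1000)
# ...but 1200 file-system objects exceed its 1024 cap.
assert not within_limits("5300 V5", clone_fs=600, fs=600, luns=0, writable_snaps=0)
```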
Minimum capacity of a file system: 1 GB
Maximum capacity of a file system: 16 PB
Maximum number of files per file system: 2 billion
Maximum capacity of a file: 256 TB
Maximum number of subdirectories per directory: 30 million
Maximum number of SMB shares: 12,000
Maximum number of NFS shares: 10,000
Maximum number of NDMP flows per controller: 8 | 16 | 32
Maximum directory depth of a file system: 256
a: Maximum total number of clone file systems, file systems, LUNs, and writable LUN
snapshots, plus the number of PE LUNs and VVol LUNs.
Feature Specifications
Feature Name | Parameter | 5300 V5 | 5500 V5 | 5600 V5 | 5800 V5 | 6800 V5
Maximum number of read-only snapshots for a source file system: 2048
Minimum interval of periodic snapshots for a file system: 1 minute
Maximum number of target LUNs for each source LUN: 64 | 128
Maximum number of secondary LUNs in a clone group: 8
Maximum number of consistent split pairs: 64 | 512
Maximum levels of cascading clones: 8
Maximum number of secondary LUNs in a pair: Synchronous: 1; Asynchronous: 2
Maximum number of secondary file systems in a pair: Asynchronous: 1
Maximum number of connected remote storage devices: 64
Maximum number of pairs in a remote replication consistent group: 64 | 512
Maximum number of LUNs supported by a policy: 64
Number of priority levels: 3
SmartPartition:
Maximum cache partitions for every two controllers: 8
Minimum size of a cache partition: 256 MB
Maximum size of a cache partition: 2 GB (5300 V5), 5 GB (5500 V5), 5 GB (5600 V5), 10 GB (5800 V5), 20 GB (6800 V5)
Migration granularity (configurable):
l SAN: 512 KB, 1 MB, 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, or 64 MB (4 MB by default)
l NAS: file size
SmartMotion granularity: 64 MB
Maximum capacity of a thin LUN: 256 TB
Space reclamation: Supported
SmartMigration (SAN): Maximum number of LUNs that can be simultaneously migrated for each controller: 8
SmartMigration (NAS)b: Maximum number of file systems that can be migrated simultaneously by each controller: 32 or 64, depending on the model
SmartErase: Maximum number of LUNs whose data can be simultaneously destroyed on each controller: 8 or 16, depending on the model
Maximum number of tenant administrators for a tenant: 32
Maximum number of paths for each external LUN: 8 or 32, depending on the model
Number of copies per volume mirror: 2
NAS antivirus:
Virus-scanning mode: CIFS share (scanning starts when files are closed)
Maximum number of antivirus servers per vStore: 32
HyperMetro (SAN): Maximum number of HyperMetro domains: 1 (5300 V5) or 2 (5500 V5, 5600 V5, 5800 V5, and 6800 V5). The total number of NAS and SAN HyperMetro domains cannot exceed 1 on the 5300 V5 and 2 on the other models.
HyperMetro (NAS): Maximum number of HyperMetro domains: 1 (5300 V5) or 2 (5500 V5, 5600 V5, 5800 V5, and 6800 V5). The total number of NAS and SAN HyperMetro domains cannot exceed 1 on the 5300 V5 and 2 on the other models.
Supported protocol types: SMB3.0/NFSv3/NFSv4.0/NFSv4.1
NOTE
Only storage systems in V500R007C20 and later versions support NFSv4.1.
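As an illustration of the protocol support listed above, a Linux NFS client can request a specific NFS version at mount time. This is a generic sketch, not a command from this document; the server address 192.0.2.50, export path /fs01, and mount point /mnt/fs01 are hypothetical placeholders.

```shell
# Mount a file system export over NFSv4.1
# (supported by storage systems in V500R007C20 and later).
# 192.0.2.50, /fs01, and /mnt/fs01 are placeholder values.
sudo mkdir -p /mnt/fs01
sudo mount -t nfs -o vers=4.1 192.0.2.50:/fs01 /mnt/fs01

# Fall back to NFSv3 for clients or storage versions without NFSv4.1:
# sudo mount -t nfs -o vers=3 192.0.2.50:/fs01 /mnt/fs01
```

The negotiated version can be confirmed on the client with `mount -t nfs,nfs4` or by inspecting /proc/mounts.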
Quorum client:
Maximum number of quorum servers that can be connected to an array: 2 (5300 V5), 4 (5500 V5), 8 (5600 V5), 16 (5800 V5), 32 (6800 V5)
Maximum number of quorum servers that can be connected to a HyperMetro domain: 2
Maximum number of IP addresses (at the server side) that can be added to a quorum server: 2
Maximum number of links that can be connected from each controller of an array to the same quorum server: 2
Windows Mainstream Windows operating systems are supported, including but not
limited to the following:
l Windows Server 2003 R2 Standard SP2
l Windows Server 2003 R2 Datacenter SP2
l Windows Server 2003 R2 Enterprise Edition SP2
l Windows Server 2008 R2 Standard SP1
l Windows Server 2008 R2 Datacenter SP1
l Windows Server 2008 R2 Enterprise Edition SP1
l Windows Server 2012 Standard
l Windows Server 2012 Datacenter
l Windows Server 2012 Essentials
l Windows Server 2012 Foundation X64 Edition
Linux Mainstream Linux operating systems, including but not limited to the
following:
l SUSE Linux Enterprise Server 10
l SUSE Linux Enterprise Server 11
l Red Hat Enterprise Server AS 5
l Red Hat Enterprise Server AS 6
License Control
Function Requiring a License or Not
HyperSnap Yesa
HyperClone Yes
HyperCopy Yes
HyperReplication Yesb
SmartQoS Yes
SmartTier Yes
SmartMotion Yes
SmartThin Yes
SmartPartition Yes
SmartMigration Yes
SmartErase Yes
SmartMulti-Tenant Yes
SmartVirtualization Yes
HyperMirror Yes
SmartQuota Yes
CIFS Yes
NFS Yes
SmartCache Yes
NDMP Yes
HyperVault Yes
a: The same license is used for HyperSnap for block and file services. After importing the
license file for the HyperSnap feature, a user can create snapshots for both block and file
services.
b: The same license is used for HyperReplication for block and file services. After
importing the license file for the HyperReplication feature, a user can create remote
replications for both block and file services.
NOTE
Because OceanStor SystemReporter and OceanStor UltraPath are not deployed on the storage system, they are not displayed on the storage system's license management page. To view purchased features, obtain the product authorization certificate from your dealer, which lists the purchased features.
Interoperability
You can go to the Huawei Storage Interoperability Navigator and select the components you want to check, such as an operating system and multipathing software, to obtain interoperability information.
Table 6-4 describes the categories of storage software specifications to help you quickly
locate the specification information you need.
Category Description
License Control Describes whether software features of the storage unit are controlled
by licenses.
Basic Specifications
Item 5110 V5
Maximum number of file systems:
l The total number of clone file systems and file systems cannot exceed 1024.
l The total number of clone file systems, file systems, LUNs, and writable LUN snapshots cannot exceed 4096.
a: Maximum total number of clone file systems, file systems, LUNs, and writable LUN
snapshots, plus the number of PE LUNs and VVol LUNs.
Feature Specifications
Feature Name Parameter 5110 V5
Maximum number of LUNs that can be batch activated: 64
Maximum number of target LUNs for each source LUN: 64
Maximum number of secondary LUNs in a clone group: 8
Maximum number of consistent split pairs: 64
Maximum levels of cascading clones: 8
Maximum number of connected remote storage devices: 64
Maximum number of pairs in a remote replication consistent group: 64
Maximum number of remote replication vStore pairs: 63
Maximum number of LUNs supported by a policy: 64
SmartMotion granularity: 64 MB
Maximum number of tenant administrators for a tenant: 32
Maximum number of external storage arrays: 32
Maximum number of paths for each external LUN: 8
Maximum number of antivirus servers per vStore: 32
Maximum number of HyperMetro LUN consistency groups: 16
Maximum number of HyperMetro vStore pairs: 63
Maximum number of quorum servers that can be connected to a HyperMetro domain: 2
Maximum number of IP addresses (at the server side) that can be added to a quorum server: 2
Maximum number of links that can be connected from each controller of an array to the same quorum server: 2
Linux Mainstream Linux operating systems, including but not limited to the
following:
l SUSE Linux Enterprise Server 10
l SUSE Linux Enterprise Server 11
l Red Hat Enterprise Server AS 5
l Red Hat Enterprise Server AS 6
License Control
Function Requiring a License or Not
HyperSnap Yesa
HyperClone Yes
HyperCopy Yes
HyperReplication Yesb
SmartQoS Yes
SmartTier Yes
SmartMotion Yes
SmartThin Yes
SmartPartition Yes
SmartMigration Yes
SmartErase Yes
SmartMulti-Tenant Yes
SmartVirtualization Yes
HyperMirror Yes
SmartQuota Yes
CIFS Yes
NFS Yes
SmartCache Yes
NDMP Yes
HyperVault Yes
a: The same license is used for HyperSnap for block and file services. After importing the
license file for the HyperSnap feature, a user can create snapshots for both block and file
services.
b: The same license is used for HyperReplication for block and file services. After
importing the license file for the HyperReplication feature, a user can create remote
replications for both block and file services.
NOTE
Because OceanStor SystemReporter and OceanStor UltraPath are not deployed on the storage system, they are not displayed on the storage system's license management page. To view purchased features, obtain the product authorization certificate from your dealer, which lists the purchased features.
Interoperability
You can go to the Huawei Storage Interoperability Navigator and select the components you want to check, such as an operating system and multipathing software, to obtain interoperability information.
7 Environmental Requirements
Table 7-2 shows the vibration and shock requirements of storage systems.
Parameter Requirement
Operating vibration: 5 to 350 Hz, PSD: 0.0002 g²/Hz; 350 to 500 Hz: -3 dB; 0.3 Grms
The concentration level of particle contaminants in a data center should meet the requirements
listed in the white paper Gaseous and Particulate Contamination Guidelines for Data
Centers, published in 2011 by the American Society of Heating, Refrigerating and Air-
Conditioning Engineers (ASHRAE) Technical Committee (TC) 9.9.
According to the Guidelines, particle contaminants in a data center must meet the cleanliness
requirements of ISO 14644-1 Class 8:
l Each cubic meter contains not more than 3,520,000 particles that are greater than or
equal to 0.5 μm.
l Each cubic meter contains not more than 832,000 particles that are greater than or equal
to 1 μm.
l Each cubic meter contains not more than 29,300 particles that are greater than or equal to
5 μm.
It is recommended that you use an effective filter to process air flowing into the data center as
well as a filtering system to periodically clean the air already in the data center.
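The Class 8 limits quoted above follow the particle-concentration formula in ISO 14644-1, Cn(D) = 10^N × (0.1/D)^2.08 particles per cubic meter, where N is the ISO class and D is the particle size in micrometers, rounded to three significant figures. A minimal sketch for cross-checking the quoted numbers (the function name is illustrative, not from the standard):

```python
from math import floor, log10

# Hedged cross-check of the ISO 14644-1 class limits quoted in the text.
# Cn(D) = 10**N * (0.1 / D)**2.08 particles/m^3, where N is the ISO class
# and D is the particle size in micrometers.

def iso_class_limit(n: int, d_um: float) -> int:
    """Maximum allowed particles/m^3 of size >= d_um for ISO Class n."""
    raw = 10 ** n * (0.1 / d_um) ** 2.08
    # Round to three significant figures, as the standard tabulates.
    return int(round(raw, -(floor(log10(raw)) - 2)))

print(iso_class_limit(8, 0.5))  # 3520000 particles >= 0.5 um
print(iso_class_limit(8, 1.0))  # 832000 particles >= 1 um
print(iso_class_limit(8, 5.0))  # 29300 particles >= 5 um
```

The three results match the Class 8 bullet values above (3,520,000; 832,000; and 29,300 particles per cubic meter).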
Table 7-3 Air cleanliness classification by particle concentration of ISO 14644-1 and
maximum allowable concentrations (particles/m³) for particles
ISO Class | ≥ 0.1 μm | ≥ 0.2 μm | ≥ 0.3 μm | ≥ 0.5 μm | ≥ 1 μm | ≥ 5 μm
Class 1 | 10 | 2 | - | - | - | -
Class 2 | 100 | 24 | 10 | 4 | - | -
The concentration level of corrosive airborne contaminants in a data center should meet the
requirements listed in the white paper entitled Gaseous and Particulate Contamination
Guidelines for Data Centers published in 2011 by the American Society of Heating,
Refrigerating and Air-conditioning Engineers (ASHRAE) Technical Committee (TC) 9.9.
According to the Guidelines, corrosive airborne contaminants in a data center should meet the
following requirements:
l Copper corrosion rate
Less than 300 Å/month per ANSI/ISA-71.04-1985 severity level G1.
l Silver corrosion rate
Less than 200 Å/month.
NOTE
See Table 7-6 for the copper and silver corrosion rate requirements.
O3 concentration: < 2 ppba
a: Parts per billion (ppb) is the number of units of mass of a contaminant per billion units of
total mass.
Group A and group B are common gas groups in a data center. The concentration limits of
group A or group B that correspond to copper reactivity level G1 are calculated based on the
premise that relative humidity in the data center is lower than 50% and that the gases in the
group interact with each other. A 10% increase in relative humidity raises the gaseous
corrosivity level by one.
Corrosion is not determined by a single factor, but by comprehensive environmental factors
such as temperature, relative humidity, corrosive airborne contaminants, and ventilation. Any
of the environmental factors may affect the gaseous corrosivity level. Therefore, the
concentration limitation values specified in the previous table are for reference only.
Heat Dissipation
Traditional heat dissipation modes are as follows:
l Controller enclosure
Cooling air enters from the front end through small holes on the interface modules. After
dissipating the heat of interface modules, controllers, and power modules, the air is
discharged out of its back end by fans. The controller enclosure dynamically adjusts
rotational speed of the fans based on the operational temperature of the storage system.
l Disk enclosure
Cooling air enters from the front end through the space between disks, passes through the
midplane, and flows into the power modules and expansion modules. After dissipating the heat, the
air is discharged out of its back end by fans. The disk enclosure dynamically adjusts
rotational speed of the fans based on the operational temperature of the storage system.
For better maintenance, ventilation, and heat dissipation, pay attention to the following when
installing the storage system in the cabinet:
l To ensure smooth ventilation, the cabinet should be at least 100 cm (39.4 inches) away
from the equipment room walls and at least 120 cm (47.24 inches) away from other
cabinets (that are in front of or behind).
l To keep air convection between the cabinet and the equipment room, no enclosed space
is allowed in the cabinet. 1 U (44.45 mm or 1.75 inches) space should be left above and
below each device.
The airflow parameters of the storage system are shown in Table 7-7.
The heat dissipation parameters of the storage system are shown in Table 7-8.
Noise
The disks and fans make noise when in operation, with fans being the major noise source. The
intensity of fan rotation is associated with the temperature. A higher temperature leads to
greater rotational speed by the fans, which in turn creates greater noise. Therefore, there is a
direct correlation between the noise made by a storage system and the ambient temperature in
the equipment room.
When the temperature is 25°C, the parameters of the noise generated by the storage system
are shown in Table 7-9.
2 U disk enclosure: 67.5 dB
4 U disk enclosure: 66.3 dB
4 U high-density disk enclosure: 75.4 dB
8 Standards Compliance
This chapter describes the protocol standards, the safety specifications and electromagnetic
compatibility (EMC) standards, and the industry standards that the storage system complies
with.
Protocol Standards
Table 8-1 lists the protocol standards that the storage system complies with.
SFF-8323: 3.5-inch disk drive form factor with serial connector
TCP/IP: SNMP v1, SNMP v2c, and SNMP v3
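Because the storage system supports SNMP v1, v2c, and v3, a management station can poll it with standard tools. A hedged sketch using the net-snmp command-line client; the IP address 192.0.2.10, the community string, and the SNMPv3 credentials are placeholders that must match the array's actual SNMP configuration.

```shell
# Query the standard sysDescr.0 object over SNMPv2c
# (192.0.2.10 and "public" are hypothetical placeholders).
snmpget -v2c -c public 192.0.2.10 1.3.6.1.2.1.1.1.0

# SNMPv3 adds authentication and encryption; user and passphrases
# below are placeholder credentials:
snmpget -v3 -u monitor -l authPriv -a SHA -A 'authpass' \
    -x AES -X 'privpass' 192.0.2.10 1.3.6.1.2.1.1.1.0
```

SNMPv3 is preferable in production because v1 and v2c send the community string in clear text.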
Interface Standards
Table 8-2 describes the interface standards that the storage systems comply with.
Table 8-2 Interface standards that the storage systems comply with
Name Description
VASA An API used for VMware vSphere ESXi hosts to communicate with storage
devices. It enables vCenter to manage storage arrays in a unified manner.
SRA An interface between VMware Site Recovery Manager (SRM) and a storage
system. It enables SRM to perform the following operations: discovery of
storage systems, non-disruptive failover test, emergency or planned failover,
reverse replication, backup, and restoration.
Name Description
ODX Offloaded data transfer (ODX) is a feature of Windows Server 2012. The feature
offloads data copy operations from the host to the storage array. The high
transmission bandwidth between storage arrays greatly shortens the data transmission
delay, improves the data copy speed, and reduces host server resource
occupation.
GB17625.1-2012
EN 55024
Industry Standards
Table 8-4 lists the industry standards that the storage system complies with.
IEEE 802.3ab
9 Certifications
Name Description
IC Industry Canada (IC) sets up the test standards for analog and digital terminal
devices and specifies the corresponding EMC certificates that all imported
electronic products must obtain.
RoHS The restriction of the use of certain hazardous substances in electrical and
electronic equipment (RoHS) is the directive that restricts the use of certain
hazardous substances in electrical and electronic equipment.
RoHS is a compulsory EU standard that regulates the materials and technical
standards of electrical and electronic products in order to protect human health
and the environment. The 10 restricted hazardous substances are Pb, Cd, Hg, Cr6+,
PBB, PBDE, DEHP, BBP, DBP, and DIBP, and none may exceed its maximum limit:
100 ppm for Cd and 1000 ppm for each of the other nine substances.
Name Description
CU-TR Russia, Kazakhstan, and Belarus have integrated their own certification
technology requirements and formulated a unified Customs Union (CU)
certification. The products within the scope of control are subject to
mandatory certification of customs union technical regulations (CU-TR),
unified technical regulations and evaluation modes, product qualification
directories, certificate forms, and technical supervision and registration.
RCM The Australian and New Zealand Regulatory Compliance Mark (RCM) is a
mandatory compliance mark for electrical equipment products sold in these markets.
VCCI As the EMC mark in Japan, the VCCI certificate is managed by the Voluntary
Control Council for Interference by Information Technology Equipment. The
organization determines whether an information technology product meets
VCCI requirements based on the International Special Committee on Radio
Interference (abbreviated as CISPR from its French name) 32 standard.
FCC-DOC
Supplier's Declaration of Conformity (SDoC)
Unique Identifier: trade name: HUAWEI; product name: Storage System; model number:
OceanStor 5110 V5, 5300 V5, 5500 V5, 5600 V5, 5800 V5, and 6800 V5
Responsible Party- U.S. Contact Information
Huawei Technologies USA Inc.
5700 Tennyson Parkway, Suite 500
Plano, Texas 75024
Main: 214-919-6000 / TAC Hotline: 877-448-2934
FCC Compliance Statement (for products subject to Part 15)
This device complies with part 15 of the FCC Rules. Operation is subject to the following
two conditions: (1) This device may not cause harmful interference, and (2) this device
must accept any interference received, including interference that may cause undesired
operation.
The storage systems can be operated and maintained using DeviceManager or the
command-line interface (CLI), suiting different environments and user preferences.
Introduction to DeviceManager
Figure 10-1 shows the DeviceManager main window.
1 Function pane The function pane shows a page associated with the
current operation.
2 Status bar The status bar shows information such as the user name
currently logged in and the login time.
4 Exit, help, and This area displays an exit button, a help button, and a
language selection language selection button. DeviceManager supports two
area languages: simplified Chinese and English.
5 Fault statistics area The fault statistics area shows the number of each level
of system faults, helping users understand the running
status of a storage system.
Before contacting Huawei for help, you need to prepare the boards, port modules,
screwdrivers, screws, cables for serial ports, network cables, and other required materials.
To resolve problems more efficiently, consult the product documentation before contacting
Huawei for technical support.
B Glossary
A
ANSI American National Standards Institute
B
BBU Backup Battery Unit
C
CLI Command Line Interface
E
ESN Equipment Serial Number
F
FC Fibre Channel
FC-AL Fibre Channel Arbitrated Loop
FCoE Fibre Channel over Ethernet
G
GE Gigabit Ethernet
GUI Graphical User Interface
H
HBA Host Bus Adapter
HD High Density
I
IP Internet Protocol
L
LUN Logical Unit Number
M
MTBF Mean Time Between Failures
MTTR Mean Time to Repair
N
NL-SAS Near Line Serial Attached SCSI
P
PDU Power Distribution Unit
R
RAID Redundant Array of Independent Disks
RSCN Registered State Change Notification
S
SAN Storage Area Network
SAS Serial Attached SCSI
SCSI Small Computer System Interface
SSD Solid State Drive
U
USB Universal Serial Bus
V
VLAN Virtual Local Area Network
VPN Virtual Private Network