
THEORY00-00-00

THEORY OF OPERATION
SECTION

THEORY00-00-10

Contents
1. Storage System Overview of DW850 .........................................................................THEORY01-01-10
1.1 Overview ..............................................................................................................THEORY01-01-10
1.2 Features of Hardware ..........................................................................................THEORY01-02-10
1.3 Storage System Configuration .............................................................................THEORY01-03-10
1.3.1 Hardware Configuration ...............................................................................THEORY01-03-20
1.3.2 Software Configuration .................................................................................THEORY01-03-70
1.3.2.1 Software to Perform Data I/O ...............................................................THEORY01-03-70
1.3.2.2 Software to Manage the Storage System .............................................THEORY01-03-80
1.3.2.3 Software to Maintain the Storage System ............................................THEORY01-03-90
1.4 Specifications by Model .......................................................................................THEORY01-04-10
1.4.1 Storage System Specifications .....................................................................THEORY01-04-10

2. Descriptions for the Operations of DW850 .................................................................THEORY02-01-10


2.1 RAID Architecture Overview ................................................................................THEORY02-01-10
2.1.1 Overview of RAID Systems ..........................................................................THEORY02-01-10
2.1.2 Comparison of RAID Levels .........................................................................THEORY02-01-50
2.2 Open Platform ......................................................................................................THEORY02-02-10
2.2.1 Product Overview and Functions .................................................................THEORY02-02-10
2.2.2 Precautions on Maintenance Operations .....................................................THEORY02-02-30
2.2.3 Configuration ................................................................................................THEORY02-02-40
2.2.3.1 System Configuration ...........................................................................THEORY02-02-40
2.2.3.2 Channel Configuration ..........................................................................THEORY02-02-50
2.2.3.3 Channel Addressing..............................................................................THEORY02-02-60
2.2.3.4 Logical Unit ...........................................................................................THEORY02-02-90
2.2.3.5 Volume Setting....................................................................................THEORY02-02-120
2.2.3.6 Host Mode Setting ..............................................................................THEORY02-02-130
2.2.4 Control Function .........................................................................................THEORY02-02-140
2.2.4.1 Cache Specifications (Common to Fibre/iSCSI) .................................THEORY02-02-140
2.2.4.2 iSCSI Command Multiprocessing .......................................................THEORY02-02-150
2.2.5 HA Software Linkage Configuration in a Cluster Server Environment .......THEORY02-02-160
2.2.5.1 Hot-standby System Configuration .....................................................THEORY02-02-160
2.2.5.2 Mutual Standby System Configuration ...............................................THEORY02-02-170
2.2.5.3 Configuration Using Host Path Switching Function ............................THEORY02-02-180
2.2.6 LUN Addition ..............................................................................................THEORY02-02-190
2.2.6.1 Overview .............................................................................................THEORY02-02-190
2.2.6.2 Specifications......................................................................................THEORY02-02-190
2.2.6.3 Operations ..........................................................................................THEORY02-02-200
2.2.7 LUN Removal .............................................................................................THEORY02-02-210
2.2.7.1 Overview .............................................................................................THEORY02-02-210
2.2.7.2 Specifications......................................................................................THEORY02-02-210
2.2.7.3 Operations ..........................................................................................THEORY02-02-220
2.2.8 Prioritized Port Control (PPC) Functions ....................................................THEORY02-02-230
2.2.8.1 Overview .............................................................................................THEORY02-02-230
THEORY00-00-20

2.2.8.2 Overview of Monitoring Functions ......................................................THEORY02-02-240


2.2.8.3 Procedure (Flow) of Prioritized Port and WWN Control......................THEORY02-02-250
2.2.9 Replacing Firmware Online ........................................................................THEORY02-02-260
2.2.9.1 Overview .............................................................................................THEORY02-02-260
2.3 Logical Volume Formatting ..................................................................................THEORY02-03-10
2.3.1 High-speed Format .......................................................................................THEORY02-03-10
2.3.1.1 Overviews .............................................................................................THEORY02-03-10
2.3.1.2 Estimation of Logical Volume Formatting Time.....................................THEORY02-03-20
2.3.2 Quick Format ..............................................................................................THEORY02-03-100
2.3.2.1 Overviews ...........................................................................................THEORY02-03-100
2.3.2.2 Volume Data Assurance during Quick Formatting ..............................THEORY02-03-120
2.3.2.3 Quick Formatting Time........................................................................THEORY02-03-130
2.3.2.4 Performance during Quick Format......................................................THEORY02-03-150
2.3.2.5 Combination with Other Maintenance.................................................THEORY02-03-160
2.3.2.6 SIM Output When Quick Format Completed ......................................THEORY02-03-170
2.4 Ownership Right ..................................................................................................THEORY02-04-10
2.4.1 Requirements Definition and Sorting Out Issues .........................................THEORY02-04-10
2.4.1.1 Requirement #1 ...................................................................................THEORY02-04-10
2.4.1.2 Requirement #2 ...................................................................................THEORY02-04-20
2.4.1.3 Requirement #3 ...................................................................................THEORY02-04-20
2.4.1.4 Requirement #4 ...................................................................................THEORY02-04-30
2.4.1.5 Requirement #5 ...................................................................................THEORY02-04-40
2.4.1.6 Process Flow ........................................................................................THEORY02-04-50
2.4.2 Resource Allocation Policy ...........................................................................THEORY02-04-60
2.4.2.1 Automation Allocation ...........................................................................THEORY02-04-70
2.4.3 MPU Block ..................................................................................................THEORY02-04-100
2.4.3.1 MPU Block for Maintenance ............................................................... THEORY02-04-110
2.4.3.2 MPU Block due to Failure ...................................................................THEORY02-04-150
2.5 Cache Architecture ...............................................................................................THEORY02-05-10
2.5.1 Physical Addition of Controller/DIMM ...........................................................THEORY02-05-10
2.5.2 Maintenance/Failure Blockade Specification................................................THEORY02-05-20
2.5.2.1 Blockade Unit........................................................................................THEORY02-05-20
2.5.3 Cache Control ..............................................................................................THEORY02-05-30
2.5.3.1 Cache Directory PM Read and PM/SM Write .......................................THEORY02-05-30
2.5.3.2 Cache Segment Control Image ............................................................THEORY02-05-40
2.5.3.3 Initial Setting (Cache Volatilization) ......................................................THEORY02-05-50
2.5.3.4 Ownership Right Movement .................................................................THEORY02-05-60
2.5.3.5 Cache Load Balance ............................................................................THEORY02-05-80
2.5.3.6 Controller Replacement ...................................................................... THEORY02-05-110
2.5.3.7 Queue/Counter Control.......................................................................THEORY02-05-190
2.6 CVS Option Function ...........................................................................................THEORY02-06-10
2.6.1 Customized Volume Size (CVS) Option .......................................................THEORY02-06-10
2.6.1.1 Overview ...............................................................................................THEORY02-06-10
2.6.1.2 Features................................................................................................THEORY02-06-20
2.6.1.3 Specifications........................................................................................THEORY02-06-30

THEORY00-00-30

2.6.1.4 Maintenance Functions.........................................................................THEORY02-06-40


2.7 PDEV Erase .........................................................................................................THEORY02-07-10
2.7.1 Overview ......................................................................................................THEORY02-07-10
2.7.2 Rough Estimate of Erase Time.....................................................................THEORY02-07-20
2.7.3 Influence in Combination with Other Maintenance Operation ......................THEORY02-07-30
2.7.4 Notes of Various Failures .............................................................................THEORY02-07-60
2.8 Cache Management .............................................................................................THEORY02-08-10
2.9 Destaging Operations ..........................................................................................THEORY02-09-10
2.10 Power-on Sequences .........................................................................................THEORY02-10-10
2.10.1 IMPL Sequence ..........................................................................................THEORY02-10-10
2.10.2 Planned Power Off .....................................................................................THEORY02-10-30
2.11 Data Guarantee .................................................................................................. THEORY02-11-10
2.11.1 Data Check Using LA (Logical Address) (LA Check) (Common to SAS
Drives and SSD) ......................................................................................... THEORY02-11-20
2.12 Encryption License Key .....................................................................................THEORY02-12-10
2.12.1 Overview of Encryption ..............................................................................THEORY02-12-10
2.12.2 Specifications of Encryption .......................................................................THEORY02-12-10
2.12.3 Notes on Using Encryption License Key ....................................................THEORY02-12-20
2.12.4 Creation of Encryption Key.........................................................................THEORY02-12-30
2.12.5 Backup of Encryption Key ..........................................................................THEORY02-12-30
2.12.6 Restoration of Encryption Key ....................................................................THEORY02-12-40
2.12.7 Setting and Releasing Encryption ..............................................................THEORY02-12-40
2.12.8 Encryption Format ......................................................................................THEORY02-12-50
2.12.9 Converting Non-encrypted Data/Encrypted Data .......................................THEORY02-12-50
2.12.10 Deleting Encryption Keys .........................................................................THEORY02-12-50
2.12.11 Reference of Encryption Setting ...............................................................THEORY02-12-50
2.13 Operations Performed when Drive Errors Occur ...............................................THEORY02-13-10
2.13.1 I/O Operations Performed when Drive Failures Occur ...............................THEORY02-13-10
2.13.2 Data Guarantee at the Time of Drive Failures ............................................THEORY02-13-20
2.14 Data Guarantee at the Time of a Power Outage due to Power Outage and
Others .................................................................................................................THEORY02-14-10
2.15 Overview of DKC Compression .........................................................................THEORY02-15-10
2.15.1 Capacity Saving and Accelerated Compression ........................................THEORY02-15-10
2.15.2 Capacity Saving .........................................................................................THEORY02-15-20
2.15.2.1 Compression.......................................................................................THEORY02-15-30
2.15.2.2 Deduplication ......................................................................................THEORY02-15-30
2.16 Media Sanitization ..............................................................................................THEORY02-16-10
2.16.1 Overview ....................................................................................................THEORY02-16-10
2.16.2 Estimated Erase Time ................................................................................THEORY02-16-20
2.16.3 Checking Result of Erase ...........................................................................THEORY02-16-30
2.16.3.1 SIMs Indicating End of Media Sanitization .........................................THEORY02-16-30
2.16.3.2 Checking Details of End with Warning ................................................THEORY02-16-40
2.16.4 Influence between Media Sanitization and Maintenance Work ..................THEORY02-16-70
2.16.5 Notes when Errors Occur ...........................................................................THEORY02-16-80

THEORY00-00-40

3. Specifications for the Operations of DW850 ...............................................................THEORY03-01-10


3.1 Precautions When Stopping the Storage System ................................................THEORY03-01-10
3.1.1 Precautions in a Power-off Mode .................................................................THEORY03-01-10
3.1.2 Operations When a Distribution Board Is Turned off ....................................THEORY03-01-20
3.2 Precautions When Installing Flash Drive and Flash Module Drive Addition ..............THEORY03-02-10
3.3 Notes on Maintenance during LDEV Format/Drive Copy Operations ..................THEORY03-03-10
3.4 Inter Mix of Drives ................................................................................................THEORY03-04-10

4. Appendixes .................................................................................................................THEORY04-01-10
4.1 DB Number - C/R Number Matrix ........................................................................THEORY04-01-10
4.2 Comparison of Pair Status on Storage Navigator, Command Control
Interface (CCI) ....................................................................................................THEORY04-02-10
4.3 Parts Number Correspondence Table ....................................................THEORY04-03-10
4.4 Connection Diagram of DKC ................................................................................THEORY04-04-10
4.5 Channel Interface (Fiber and iSCSI) ....................................................................THEORY04-05-10
4.5.1 Basic Functions ............................................................................................THEORY04-05-10
4.5.2 Glossary .......................................................................................................THEORY04-05-20
4.5.3 Interface Specifications ................................................................................THEORY04-05-30
4.5.3.1 Fibre Channel Physical Interface Specifications...................................THEORY04-05-30
4.5.3.2 iSCSI Physical Interface Specifications ................................................THEORY04-05-50
4.5.4 Volume Specification (Common to Fibre/iSCSI) ...........................................THEORY04-05-70
4.5.5 SCSI Commands ........................................................................................THEORY04-05-210
4.5.5.1 Common to Fibre/iSCSI ......................................................................THEORY04-05-210
4.6 Outline of Hardware .............................................................................................THEORY04-06-10
4.6.1 Outline Features ...........................................................................................THEORY04-06-20
4.6.2 External View of Hardware ...........................................................................THEORY04-06-40
4.6.3 Hardware Architecture ..................................................................................THEORY04-06-60
4.6.4 Hardware Component ................................................................................ THEORY04-06-110
4.7 Mounted Numbers of Drive Box and the Maximum Mountable Number of Drive THEORY04-07-10
4.8 Storage System Physical Specifications ..............................................................THEORY04-08-10
4.8.1 Environmental Specifications .......................................................................THEORY04-08-30
4.9 Power Specifications ............................................................................................THEORY04-09-10
4.9.1 Storage System Current ...............................................................................THEORY04-09-10
4.9.2 Input Voltage and Frequency .......................................................................THEORY04-09-40
4.9.3 Efficiency and Power Factor of Power Supplies ...........................................THEORY04-09-60
4.10 Locations where Configuration Information is Stored and Timing of
Information Update .............................................................................................THEORY04-10-10

THEORY01-01-10

NOTICE: Unless otherwise stated, the firmware version in this section refers to the DKCMAIN
firmware.

1. Storage System Overview of DW850


This section describes the overview and the operations of the storage system.
Section 1 gives an overview of the storage system.
The operations of the storage system and related information are described in section 2 and the
following sections.

1.1 Overview
DW850 models are 19-inch rack-mount storage systems consisting of a controller chassis, which controls the
drives, and drive boxes in which the drives are installed.

The controller chassis is the hardware that plays the central role in the storage system and controls the drive
boxes. The chassis contains two clustered controllers and provides a redundant configuration in which all major
components, such as processors, memory, and power supplies, are duplicated.
When a failure occurs in one controller, processing continues on the other controller. When the load is
concentrated on one controller, processing performance is improved by distributing the processor resources
across the CPUs of both controllers.
Furthermore, each component and the firmware can be replaced or updated while the system is operating,
which minimizes the impact of maintenance work on system operation.

Five types of drive boxes are available. The number of drive boxes and the size of each drive box can be
expanded depending on the usage purpose. Like the controller chassis, the major components of the drive
box are duplicated for redundancy.

THEORY01-02-10

1.2 Features of Hardware


DW850 has the following features.

High-performance
• Processing is distributed across the cluster-configured controllers
• High-speed processing is achieved by large-capacity cache memory
• High-speed I/O processing is achieved by flash drives and FMDs
• High-speed data transfer is achieved by 16/32 Gbps Fibre Channel and 10 Gbps iSCSI interfaces

High Availability
• Continuous operation through duplication of the major components
• RAID1/5/6 are supported (RAID6 supports up to 14D+2P)
• Data is maintained during a power failure by saving it to cache flash memory
• Files can be shared between different types of servers

Scalability and Diversity
• Five types of drive boxes supporting SAS drives, flash drives (SAS SSD), FMDs, and flash drives (NVMe
SSD) can be connected
• DBS: Up to 24 2.5-inch SAS drives and flash drives can be installed (2U size)
• DBL: Up to 12 3.5-inch SAS drives and flash drives can be installed (2U size)
• DB60: Up to 60 3.5-inch SAS drives and flash drives can be installed (4U size)
• DBF: 12 FMDs can be installed (2U size)
• DBN: Up to 24 2.5-inch flash drives (NVMe SSD) can be installed (2U size)
• The high-density drive box DB60, in which up to 60 drives can be installed, is supported (4U size)
• Mixed OS environments such as UNIX, Linux, Windows, and VMware are supported

THEORY01-03-10

1.3 Storage System Configuration


In addition to the DW850 hardware, an SVP (SuperVisor PC) serving as the management server is required to
configure and operate the storage system.
Use Hitachi Device Manager - Storage Navigator as the storage management and operation software, and use
Maintenance Utility as the maintenance software.
Figure 1-1 shows the outline of the storage system configuration.

Figure 1-1 Outline of Storage System Configuration

[Figure 1-1: A client PC and an HCS (Hitachi Command Suite) server on the customer LAN environment access Device Manager - Storage Navigator and Maintenance Utility to manage the storage system, which consists of a controller chassis and drive boxes (such as DB60) mounted in a 19-inch rack.]

THEORY01-03-20

1.3.1 Hardware Configuration


A storage system consists of one controller chassis (DKC), multiple drive boxes (DBs), and a channel board
box (CHBB).
Figure 1-2 shows the hardware configuration of the storage system.

Figure 1-2 Hardware Configuration (Example : VSP Gx00 Model)

[Figure 1-2: A controller chassis (CBXSS/CBXSL/CBSS1/CBSL1/CBSS2/CBSL2/CBLH1/CBLH2) is connected to drive boxes (DBS/DBL/DBF) and drive boxes (DB60).]

THEORY01-03-30

Figure 1-3 shows the system hardware configuration of the storage system.

Figure 1-3 System Hardware Configuration (Back-end SAS)

[Figure 1-3: The controller chassis is connected to Drive Box 00 through Drive Box 03 over SAS drive paths (12 Gbps/port, 4 paths each), and further drive boxes can be chained from the last drive box. Each drive box contains two ENCs, HDDs, and duplicated power supply units fed from PDUs. The controller chassis contains two controllers (CTL) with DIMM, BKM/BKMF, CFM, LANB, DKB-1/DKB-2, CHBs, GCTL+GUM, and duplicated power supply units fed from PDUs; the CHBs provide the Fibre Channel interface/iSCSI interface to the hosts.]

THEORY01-03-31

Figure 1-4 System Hardware Configuration (Back-end NVMe)

[Figure 1-4: The controller chassis configuration is the same as in Figure 1-3, but Drive Box 00 and Drive Box 01 are connected over NVMe drive paths (8 Gbps/port, 4 paths each); the CHBs provide the Fibre Channel interface/iSCSI interface to the hosts.]

THEORY01-03-40

[Controller Chassis]
It consists of a controller board, a channel board (CHB), a disk board (DKB), and a power supply that supplies
power to them.

[Drive Box]
It consists of ENCs, drives, and power supplies with integrated cooling fans.
Five types of drive boxes are available: DBS, DBL, DBF, DBN, and DB60.

• DBS (for SAS 2.5-inch)
  One DKC and up to 16 drive boxes can be installed in a rack.
• DBL (for SAS 3.5-inch)
  One DKC and up to 15 drive boxes can be installed in a rack.
• DBF (for SAS flash module drive) (only VSP G700 and G900)
  One DKC and up to 12 drive boxes can be installed in a rack.
• DBN (for NVMe flash drive)
  One DKC and up to 4 drive boxes can be installed in a rack.
• DB60 (for 2.5-inch/3.5-inch) (Not supported on VSP G130.)
  One DKC and up to 5 drive boxes can be installed in a rack.

For the maximum number of drive boxes that can be installed for each model, see Table 1-1 Storage
System Specifications (VSP G130, G350, G370, G700, G900 Model), Table 1-2 Storage System
Specifications (VSP F350, F370, F700, F900 models), or Table 1-3 Storage System Specifications (VSP
E990 models).
Figure 1-5 shows the installation configurations in a rack. DBS, DBL, DBF, and DB60 can be mixed in the
same system.
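As a rough illustration of the per-rack figures above, the following Python sketch (not part of the product documentation; the per-box drive counts and per-rack box counts are taken from the descriptions above, and CHBB-installed configurations are ignored) multiplies the drives per box by the maximum number of boxes per rack:

  # Illustrative only: approximate maximum drives per rack for each drive box type,
  # using the per-box capacities and per-rack box counts stated above (one DKC per
  # rack, CHBB not installed).
  drive_boxes = {
      # type: (drives per box, max boxes per rack with one DKC)
      "DBS":  (24, 16),
      "DBL":  (12, 15),
      "DBF":  (12, 12),
      "DBN":  (24, 4),
      "DB60": (60, 5),
  }

  for box, (per_box, max_boxes) in drive_boxes.items():
      print(f"{box}: {per_box} drives/box x {max_boxes} boxes = {per_box * max_boxes} drives")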

THEORY01-03-50

Figure 1-5 Installing Configuration in Rack

[Figure 1-5: Five example rack configurations are shown: DKC + DBS/DBL/DBF (*1), DBS/DBL/DBF only (*2), DKC + DB60 (*3), DB60 only (six DB60), and DKC + DBN (up to four DBN). In the DKC configurations, the DKC is mounted near the bottom of the rack above a blank space, the optional CHBB is mounted above the DKC, and the drive boxes are stacked above.]

*1: When CHBB is not installed: 16 DBS, 15 DBL, and 12 DBF
    When CHBB is installed: 14 DBS, 14 DBL, and 10 DBF
*2: 19 DBS, 18 DBL, and 14 DBF
*3: When CHBB is not installed: 5 DB60
    When CHBB is installed: 4 DB60

THEORY01-03-60

[Channel Board Box]


It consists of a channel board (CHB), a PCIe cable connection package (PCP), a switch package (SWPK)
and a power supply (CHBBPS).

THEORY01-03-70

1.3.2 Software Configuration


This section describes the software used to perform data I/O and to manage and maintain the storage system.
For an overview of each software component, see subsections 1.3.2.1 to 1.3.2.3.

1.3.2.1 Software to Perform Data I/O


DW850 transfers data in blocks.
An address is assigned to each block in the storage system, and data is written or read by that address. To
access the block storage, the address assigned to the beginning of the data is specified and the data is accessed
block by block.
The firmware is the microprogram that performs data I/O, hardware failure management (Moni/eMoni), and
the maintenance functions.

Figure 1-6 How to Access the Storage

In the storage, the data area is divided into blocks, and each block is accessed by the address assigned to the beginning of the data.
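As a minimal sketch of the block access described above (illustrative only; the 512-byte block size is an assumption, not a DW850 specification), a byte offset can be converted into a block address and an offset within that block as follows:

  # Minimal sketch of block addressing (illustrative only).
  BLOCK_SIZE = 512  # assumed block size in bytes, not a DW850 specification

  def to_block_address(byte_offset: int):
      """Return (block number, offset within the block) for a byte offset."""
      return byte_offset // BLOCK_SIZE, byte_offset % BLOCK_SIZE

  # Example: byte offset 1,048,576 is the start of block 2048.
  print(to_block_address(1_048_576))  # -> (2048, 0)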

THEORY01-03-80

1.3.2.2 Software to Manage the Storage System


Management and operation of the storage system are performed by dedicated management software.
The management GUI is accessed from a Web browser and operated as a GUI (Graphical User Interface). The
management GUI is Hitachi Device Manager - Storage Navigator (hereinafter, Storage Navigator). Hitachi
Command Suite, which can manage multiple storage systems collectively, can also be used.

An overview of each software component is as follows.

• Storage Navigator
It is the storage management software used for hardware management (setting the configuration information,
defining logical devices, and displaying the status) and performance management (tuning) of the storage
system. Install Storage Navigator on the SVP to use it. When Storage Navigator is installed, Storage Device
List is also installed. Because it is a Web application, the storage system can be operated from a Web browser
on a PC connected to the LAN.
If the following conditions are met, Storage Navigator for the DW800 storage system (VSP G200, G/F400,
G/F600, and G/F800) can be installed on the SVP for the DW850 storage system (VSP G130, G/F350, G/
F370, G/F700, G/F900, and VSP E990).

Item: SVP
Conditions:
• [Version] displayed in the upper right of the Storage Device List window is 88-03-03-00/xx or later.
• For the DW850 storage system (VSP G130, G/F350, G/F370, G/F700, and G/F900), use SVP installation
  media version 88-03-03-x0/xx or later. For the DW850 storage system (VSP E990), use the SVP installation
  media for VSP E990 (any media version is allowed).
Note that the following restrictions apply:
• TLS1.0/1.1 cannot be enabled as the communication protocol between the SVP and the client PC or the
  storage system. (TLS1.0/1.1 can be enabled on the SVP for the DW800 storage system (VSP G200, G/F400,
  G/F600, and G/F800).)
• The Log Dump automation function (dumps are automatically collected when specific SIMs are generated)
  can be used only for the DW800 storage system.

Item: Storage Navigator software
Conditions:
• The version is 83-03-21-x0/xx or later.
• Installation is performed from SVP installation media version 83-03-21-x0/xx or later for the DW800
  storage system.

NOTE: The storage management software for the DW850 storage system (VSP G130, G/
F350, G/F370, G/F700, G/F900, and VSP E990) cannot be installed in the SVP for the
DW800 storage system (VSP G200, G/F400, G/F600, and G/F800).
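The SVP condition above is stated as version 88-03-03-00/xx or later. The following sketch shows one possible way to compare such dash-separated version strings numerically; it ignores the trailing /xx part and is only an illustration, not part of the SVP or Storage Navigator software:

  # Illustrative only: compare dash-separated firmware/software versions such as
  # "88-03-03-00/xx", ignoring the part after "/".
  def parse_version(ver: str):
      return tuple(int(part) for part in ver.split("/")[0].split("-"))

  def meets_minimum(current: str, minimum: str) -> bool:
      return parse_version(current) >= parse_version(minimum)

  print(meets_minimum("88-03-05-00/00", "88-03-03-00/00"))  # True
  print(meets_minimum("88-03-02-00/00", "88-03-03-00/00"))  # False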

• Hitachi Command Suite


It is the integrated platform management software that can manage multiple servers and storage systems
collectively. Hitachi Command Suite can be used as an option. Install Hitachi Command Suite on a
management PC to use it. Each storage function that can be used from Storage Navigator can also be used
from Hitachi Command Suite.

THEORY01-03-90

1.3.2.3 Software to Maintain the Storage System


Maintenance of the storage system is performed by dedicated software.
To maintain the hardware and update the firmware, use Maintenance Utility.

An overview of the software is as follows.

• Maintenance Utility
It is a Web application used for failure monitoring of the storage system, parts replacement, firmware
upgrades, and installation of program products.
Maintenance Utility is incorporated into the GUM (Gateway for Unified Management) controller mounted in
the controller chassis, so no installation is required.
Maintenance Utility is started by specifying the IP address of a CTL in a Web browser, or from the Web
Console window or the MPC window on the Maintenance PC. Note that Maintenance Utility can be accessed
even while the storage system is powered off, because GUM continues to operate as long as the controller
chassis receives power.

THEORY01-04-10

1.4 Specifications by Model


1.4.1 Storage System Specifications
Table 1-1 to Table 1-3 show the storage system specifications by model.

Table 1-1 Storage System Specifications (VSP G130, G350, G370, G700, G900 Model)
Specifications are listed for VSP G900, VSP G700, VSP G370, VSP G350, and VSP G130.

System
• Number of HDDs (Minimum): 4 (disk-in model) / 0 (diskless model) for all models
• Number of HDDs (Maximum): G900: 1,440, G700: 1,200, G370: 384, G350: 264, G130: 96
• Number of Flash Drives (Minimum): 4 (disk-in model) / 0 (diskless model) for all models
• Number of Flash Drives (Maximum): G900: 1,152, G700: 864, G370: 288, G350: 192, G130: 96
• Number of Flash Module Drives (Minimum): 4 (disk-in model); −  for G370, G350, and G130
• Number of Flash Module Drives (Maximum): G900: 576, G700: 432; −  for G370, G350, and G130
• RAID Level: RAID6/RAID5/RAID1
• RAID Group Configuration:
  RAID6: 6D+2P, 12D+2P, 14D+2P
  RAID5: 3D+1P, 4D+1P, 6D+1P, 7D+1P
  RAID1: 2D+2D, 4D+4D (*9)
• Maximum Number of Spare Disk Drives: G900: 64 (*1), G700: 48 (*1), G370: 24 (*1), G350: 16 (*1), G130: 16 (*1)
• Maximum Number of Volumes: G900: 65,280, G700: 49,152, G370: 32,768, G350: 16,384, G130: 2,048
• Maximum Storage System Capacity (Physical Capacity):
  2.4 TB 2.5-inch HDD used: G900: 2,656 TB, G700: 1,992 TB, G370: 664 TB, G350: 443 TB, G130: 221 TB
  14 TB 3.5-inch HDD used: G900: 19,737 TB, G700: 16,447 TB, G370: 5,263 TB, G350: 3,618 TB, G130: 1,315 TB
  15 TB 2.5-inch SSD used: G900: 17,335 TB, G700: 13,001 TB, G370: 4,333 TB, G350: 2,889 TB, G130: 1,444 TB
  14 TB FMD used: G900: 8,106 TB, G700: 6,080 TB; −  for G370, G350, and G130
• Maximum External Configuration: G900: 255 PiB, G700: 192 PiB, G370: 128 PiB, G350: 64 PiB, G130: 8 PiB
• Maximum Number of DBs (*6): G900: DBS/DBL/DBF: 48, DB60: 24; G700: DBS/DBL/DBF: 36, DB60: 20;
  G370: DBS/DBL: 11, DB60: 6; G350: DBS/DBL: 7, DB60: 4; G130: DBS: 3, DBL: 7

Memory
• Cache Memory Capacity: G900: 256 GiB to 1,024 GiB, G700: 128 GiB to 512 GiB, G370: 128 GiB to 256 GiB,
  G350: 64 GiB to 128 GiB, G130: 32 GiB
• Cache Flash Memory Type: G900: BM35/BM45, G700: BM35, G370: BM15, G350: BM15, G130: BM05
(To be continued)

THEORY01-04-20

(Continued from preceding page)


Storage I/F
• DKC-DB Interface: SAS/Dual Port
• Data Transfer Rate: 12 Gbps
• Maximum Number of HDDs per SAS I/F: 144
• Number of DKB PCBs: G900: 8, G700: 4; −  for G370, G350, and G130

Device I/F
• Supported Channel Type: Fibre Channel Shortwave (*2), iSCSI (Optic/Copper); G130: Fibre Channel Shortwave
• Data Transfer Rate (Fibre Channel): 400/800/1600/3200 MB/s; G130: 400/800/1600 MB/s
• Data Transfer Rate (iSCSI): 1000 MB/s (Optic), 100/1000 MB/s (Copper)
• Maximum Number of CHBs (CHBB not installed): G900: 12 (16 when the DKB slot is used),
  G700: 12 (16 when the DKB slot is used), G370: 4, G350: 4, G130: −
• Maximum Number of CHBs (CHBB installed): G900: 16 (20 when the DKB slot is used); −  for the other models

Acoustic Level LpAm (*7) (*8)
• Operating: CBL (G900/G700): 60 dB; CBSS/CBSL (G370/G350): 60 dB; CBXSS/CBXSL (G130): 60 dB;
  DBL/DBS: 60 dB (*3) (*4); DB60: 71 dB (*3) (*4) (*5); DBF: 60 dB (*3) (*4)
• Standby: CBL (G900/G700): 55 dB; CBSS/CBSL (G370/G350): 55 dB; CBXSS/CBXSL (G130): 55 dB;
  DBL/DBS: 55 dB (*3) (*4); DB60: 71 dB (*3) (*4) (*5); DBF: 55 dB (*3) (*4)

Dimensions W × D × H (mm)
• 19-inch Rack: 600 × 1,150 × 2,058.2
(To be continued)

THEORY01-04-30

(Continued from preceding page)


Non-disruptive Maintenance
• Control PCB: Supported
• Cache Memory: Supported
• Cache Flash Memory: Supported
• Power Supply, Fan: Supported
• Microcode: Supported
• Disk Drive / Flash Drive: Supported
• Flash Module Drive: Supported on G900 and G700; −  for G370, G350, and G130
*1: Available as spare or data Disks.
*2: By replacing the SFP transceiver of the fibre port on the Channel Board with DKC-F810I-1PL16 (SFP
for 16 Gbps Longwave), the port can be used for Longwave.
*3: The sound pressure level [LA] changes from 66 dB to 75 dB according to the ambient temperature,
Drive configuration, and operating status. The maximum can reach 79 dB during the maintenance
procedure for a failed ENC or Power Supply.
*4: The acoustic power level [LwA] measured under ISO7779 conditions is 7.2 B. It changes from 7.2 B
to 8.1 B according to the ambient temperature, Drive configuration, and operating status.
*5: Do not work behind DB60 for a long time.
*6: For details, see Table 4-52 or Table 4-53.
*7: The acoustic level is measured under the following conditions in accordance with ISO7779, and the
value is declared based on ISO9296.
In a normal installation area (data center/general office), the storage system is surrounded by elements
that differ from the following ISO measuring conditions, such as noise sources other than the storage
system (other devices) and the walls and ceilings that reflect the sound. Therefore, the values described
in the table do not guarantee the acoustic level in the actual installation area.
• Measurement environment: In a semi-anechoic room whose ambient temperature is 23 degrees C
  ± 2 degrees C
• Device installation position: The Controller Chassis is at the bottom of the rack and the Drive
  Box is at a height of 1.5 meters in the rack
• Measurement position: 1 meter away from the front, rear, left, or right side of the storage system
  and 1.5 meters high (at four points)
• Measurement value: Energy average value of the four points (front, rear, left, and right)
*8: It is recommended to install the storage system in a computer room in a data center or the like.
It is possible to install the storage system in a general office; however, take measures against noise
as required.
When you replace an old Hitachi storage system with a new one in a general office, note the following
in particular when taking measures against noise.
The cooling fans in the storage system are downsized to increase the density of the storage system.
As a result, the fans rotate faster than before to maintain the cooling performance, so the proportion
of high-frequency content in the noise is high.
*9: RAID1 (4D+4D) is a concatenation of two RAID1 (2D+2D).
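Footnote *7 above states that the declared value is the energy average of the four measurement points. The following sketch shows the standard energy-averaging calculation for sound pressure levels in dB; the input values are made up for illustration and are not measured data:

  import math

  def energy_average_db(levels_db):
      """Energy (power) average of sound pressure levels:
      L_avg = 10 * log10((1/n) * sum(10**(L_i/10)))."""
      n = len(levels_db)
      return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db) / n)

  # Illustrative values for the four points (front, rear, left, right).
  print(round(energy_average_db([59.0, 60.0, 58.5, 60.5]), 1))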
THEORY01-04-40

Table 1-2 Storage System Specifications (VSP F350, F370, F700, F900 models)
Specifications are listed for VSP F900, VSP F700, VSP F370, and VSP F350.

System
• Number of Flash Drives (Minimum): 4 (disk-in model) for all models
• Number of Flash Drives (Maximum): F900: 1,152, F700: 864, F370: 288, F350: 192
• Number of Flash Module Drives (Minimum): 4 (disk-in model); −  for F370 and F350
• Number of Flash Module Drives (Maximum): F900: 576, F700: 432; −  for F370 and F350
• RAID Level: RAID6/RAID5/RAID1
• RAID Group Configuration:
  RAID6: 6D+2P, 12D+2P, 14D+2P
  RAID5: 3D+1P, 4D+1P, 6D+1P, 7D+1P
  RAID1: 2D+2D, 4D+4D (*8)
• Maximum Number of Spare Disk Drives: F900: 64 (*1), F700: 48 (*1), F370: 24 (*1), F350: 16 (*1)
• Maximum Number of Volumes: F900: 64 k, F700: 48 k, F370: 32 k, F350: 16 k
• Maximum Storage System Capacity (Physical Capacity):
  15 TB 2.5-inch SSD used: F900: 17,335 TB, F700: 13,001 TB, F370: 4,333 TB, F350: 2,889 TB
  14 TB FMD used: F900: 8,106 TB, F700: 6,080 TB; −  for F370 and F350
• Maximum External Configuration: F900: 255 PiB, F700: 192 PiB, F370: 128 PiB, F350: 64 PiB
• Maximum Number of DBs (*5): F900: DBS/DBF: 48, F700: DBS/DBF: 36, F370: DBS: 11, F350: DBS: 7

Memory
• Cache Memory Capacity: F900: 256 GiB to 1,024 GiB, F700: 128 GiB to 512 GiB, F370: 128 GiB to 256 GiB,
  F350: 64 GiB to 128 GiB
• Cache Flash Memory Type: F900: BM35/BM45, F700: BM35, F370: BM15, F350: BM15

Storage I/F
• DKC-DB Interface: SAS/Dual Port
• Data Transfer Rate: 12 Gbps
• Maximum Number of drives per SAS I/F: 24

Device I/F
• Number of DKB PCBs: F900: 8, F700: 4; −  for F370 and F350
• Supported Channel Type: Fibre Channel Shortwave (*2)/iSCSI (Optic/Copper)
• Data Transfer Rate (Fibre Channel): 400/800/1600/3200 MB/s
• Data Transfer Rate (iSCSI): 1000 MB/s (Optic), 100/1000 MB/s (Copper)
• Maximum Number of CHBs (CHBB not installed): F900: 12 (16 when the DKB slot is used),
  F700: 12 (16 when the DKB slot is used), F370: 4, F350: 4
• Maximum Number of CHBs (CHBB installed): F900: 16 (20 when the DKB slot is used); −  for the other models
(To be continued)

THEORY01-04-50

(Continued from preceding page)


Acoustic Level LpAm (*6) (*7)
• Operating: CBL (F900/F700): 60 dB; CBSS (F370/F350): 60 dB; DBS: 60 dB (*3) (*4); DBF: 60 dB (*3) (*4)
• Standby: CBL (F900/F700): 55 dB; CBSS (F370/F350): 55 dB; DBS: 55 dB (*3) (*4); DBF: 55 dB (*3) (*4)

Dimensions W × D × H (mm)
• 19-inch Rack: 600 × 1,150 × 2,058.2

Non-disruptive Maintenance
• Control PCB: Supported
• Cache Memory: Supported
• Cache Flash Memory: Supported
• Power Supply, Fan: Supported
• Microcode: Supported
• Flash Drive: Supported
• Flash Module Drive: Supported on F900 and F700; −  for F370 and F350
*1: Available as spare or data Disks.
*2: By replacing the SFP transceiver of the fibre port on the Channel Board with DKC-F810I-1PL16 (SFP
for 16 Gbps Longwave), the port can be used for Longwave.
*3: The sound pressure level [LA] changes from 66 dB to 75 dB according to the ambient temperature,
Drive configuration, and operating status. The maximum can reach 79 dB during the maintenance
procedure for a failed ENC or Power Supply.
*4: The acoustic power level [LwA] measured under ISO7779 conditions is 7.2 B. It changes from 7.2 B
to 8.1 B according to the ambient temperature, Drive configuration, and operating status.
*5: For details, see Table 4-55.
*6: The acoustic level is measured under the following conditions in accordance with ISO7779, and the
value is declared based on ISO9296.
In a normal installation area (data center/general office), the storage system is surrounded by elements
that differ from the following ISO measuring conditions, such as noise sources other than the storage
system (other devices) and the walls and ceilings that reflect the sound. Therefore, the values described
in the table do not guarantee the acoustic level in the actual installation area.
• Measurement environment: In a semi-anechoic room whose ambient temperature is 23 degrees C
  ± 2 degrees C
• Device installation position: The Controller Chassis is at the bottom of the rack and the Drive
  Box is at a height of 1.5 meters in the rack
• Measurement position: 1 meter away from the front, rear, left, or right side of the storage system
  and 1.5 meters high (at four points)
• Measurement value: Energy average value of the four points (front, rear, left, and right)
THEORY01-04-60

*7: It is recommended to install the storage system in a computer room in a data center or the like.
It is possible to install the storage system in a general office; however, take measures against noise
as required.
When you replace an old Hitachi storage system with a new one in a general office, note the following
in particular when taking measures against noise.
The cooling fans in the storage system are downsized to increase the density of the storage system.
As a result, the fans rotate faster than before to maintain the cooling performance, so the proportion
of high-frequency content in the noise is high.
*8: RAID1 (4D+4D) is a concatenation of two RAID1 (2D+2D).

THEORY01-04-70

Table 1-3 Storage System Specifications (VSP E990 models)


Specifications are listed for VSP E990.

System
• Number of Flash Drives (Minimum): 4 (disk-in model)
• Number of Flash Drives (Maximum): 96
• RAID Level: RAID6/RAID5/RAID1
• RAID Group Configuration:
  RAID6: 6D+2P, 12D+2P, 14D+2P
  RAID5: 3D+1P, 4D+1P, 6D+1P, 7D+1P
  RAID1: 2D+2D, 4D+4D (*8)
• Maximum Number of Spare Disk Drives: 8 (*1)
• Maximum Number of Volumes: 64 k
• Maximum Storage System Capacity (Physical Capacity), 15 TB 2.5-inch NVMe SSD used: 1,444 TB (1,313 TiB)
• Maximum External Configuration: 287 PB (255 PiB)
• Maximum Number of DBs (*5): DBN: 4

Memory
• Cache Memory Capacity: 1,024 GiB
• Cache Flash Memory Type: BM55/BM65/BM5E/BM6E

Storage I/F
• DKC-DBN Interface: NVMe/Dual Port
• Data Transfer Rate: 8 Gbps
• Maximum Number of drives per SAS I/F: 24

Device I/F
• Number of DKB PCBs: 8
• Supported Channel Type: Fibre Channel Shortwave (*2)/iSCSI (Optic/Copper)
• Data Transfer Rate (Fibre Channel): 400/800/1600/3200 MB/s
• Data Transfer Rate (iSCSI): 1000 MB/s (Optic), 100/1000 MB/s (Copper)
• Maximum Number of CHBs (CHBB not installed): 12
• Maximum Number of CHBs (CHBB installed): 16
(To be continued)

THEORY01-04-80

(Continued from preceding page)


Acoustic Level LpAm (*4) (*5) (*6) (*7)
• Operating: CBL: 60 dB; DKBN: 60 dB (*3) (*4)
• Standby: CBL: 55 dB; DKBN: 55 dB (*3) (*4)

Dimensions W × D × H (mm)
• 19-inch Rack: 600 × 1,150 × 2,058.2

Non-disruptive Maintenance
• Control PCB: Supported
• Cache Memory: Supported
• Cache Flash Memory: Supported
• Power Supply, Fan: Supported
• Microcode: Supported
• Flash Drive: Supported
*1: Available as spare or data Disks.
*2: By replacing the SFP transceiver of the fibre port on the Channel Board with DKC-F810I-1PL16 (SFP
for 16 Gbps Longwave), the port can be used for Longwave.
*3: The sound pressure level [LA] changes from 66 dB to 75 dB according to the ambient temperature,
Drive configuration, and operating status. The maximum can reach 79 dB during the maintenance
procedure for a failed ENC or Power Supply.
*4: The acoustic power level [LwA] measured under ISO7779 conditions is 7.2 B. It changes from 7.2 B
to 8.1 B according to the ambient temperature, Drive configuration, and operating status.
*5: For details, see Table 4-54.
*6: The acoustic level is measured under the following conditions in accordance with ISO7779, and the
value is declared based on ISO9296.
In a normal installation area (data center/general office), the storage system is surrounded by elements
that differ from the following ISO measuring conditions, such as noise sources other than the storage
system (other devices) and the walls and ceilings that reflect the sound. Therefore, the values described
in the table do not guarantee the acoustic level in the actual installation area.
• Measurement environment: In a semi-anechoic room whose ambient temperature is 23 degrees C
  ± 2 degrees C
• Device installation position: The Controller Chassis is at the bottom of the rack and the Drive
  Box is at a height of 1.5 meters in the rack
• Measurement position: 1 meter away from the front, rear, left, or right side of the storage system
  and 1.5 meters high (at four points)
• Measurement value: Energy average value of the four points (front, rear, left, and right)

THEORY01-04-90

*7: It is recommended to install the storage system in a computer room in a data center or the like.
It is possible to install the storage system in a general office; however, take measures against noise
as required.
When you replace an old Hitachi storage system with a new one in a general office, note the following
in particular when taking measures against noise.
The cooling fans in the storage system are downsized to increase the density of the storage system.
As a result, the fans rotate faster than before to maintain the cooling performance, so the proportion
of high-frequency content in the noise is high.
*8: RAID1 (4D+4D) is a concatenation of two RAID1 (2D+2D).

THEORY02-01-10

2. Descriptions for the Operations of DW850


2.1 RAID Architecture Overview
The Storage System supports RAID1, RAID5, and RAID6.
The features of each RAID level are described below.
NOTE: RAID1 supported by this storage system is commonly referred to as RAID1+0.
      RAID1+0 mirrors blocks across two drives and then creates a striped set across
      multiple drive pairs. For details, see Table 2-2.
      In this manual, this RAID level is referred to as RAID1.

2.1.1 Overview of RAID Systems


The concept of this type of Storage System was announced in 1987 by a research group at the University of
California, Berkeley.
The research group called the Storage System RAID (Redundant Array of Inexpensive Disks: a Storage
System that provides redundancy by employing multiple inexpensive, small Disk Drives), classified RAID
systems into five levels, RAID 1 to RAID 5, and later added RAID 0 and RAID 6.
The Storage System supports RAID1, RAID5, and RAID6. The following shows the respective methods,
advantages, and disadvantages.

Table 2-1 RAID Configuration Supported by the Storage System


Level Configuration
RAID1 2D+2D
Two concatenation of (2D+2D)
RAID5 3D+1P
4D+1P
6D+1P
7D+1P
Two concatenation of (7D+1P)
Four concatenation of (7D+1P)
RAID6 6D+2P
12D+2P
14D+2P

THEORY02-01-20

Table 2-2 Overview of RAID Systems


RAID 1
• Configuration: Mirrored pairs of Disk Drives with data divided across the pairs. The diagram in the original
  table shows the (2D+2D) configuration.
• Overview: Mirror Disks (dual write). Two Disk Drives, a primary and a secondary Disk Drive, compose a
  RAID pair (mirroring pair), and identical data is written to the primary and secondary Disk Drives.
  Furthermore, the data is divided between the two RAID pairs.
• Advantage: RAID 1 is highly usable and reliable because of the duplicated data. It has higher performance
  than ordinary RAID 1 (consisting of two Disk Drives) because it consists of two RAID pairs.
• Disadvantage: A Disk capacity twice as large as the user data capacity is required.

RAID 1 (concatenation configuration)
• Configuration: Two concatenated parity groups of RAID 1 (2D+2D). The diagram in the original table shows
  the two-concatenation configuration of (2D+2D); a RAID pair consists of two Disk Drives.
• Overview: Mirror Disks (dual write). The two parity groups of RAID 1 (2D+2D) are concatenated and the
  data is divided among them. In each RAID pair, data is written in duplicate.
• Advantage: This configuration is highly usable and reliable because of the duplicated data. It has higher
  performance than the 2D+2D configuration because it consists of four RAID pairs.
• Disadvantage: A Disk capacity twice as large as the user data capacity is required.
(To be continued)

THEORY02-01-30

(Continued from preceding page)


RAID5
• Configuration: Data Disks + Parity Disk. The diagram in the original table shows the 3D+1P configuration.
• Overview: Data is written to multiple Disks successively in units of a block (or blocks). Parity data is
  generated from the data of multiple blocks and written to one of the Disks (a minimal parity calculation
  example is shown after this table).
• Advantage: RAID 5 suits transaction workloads that mainly use small-size random access, because each
  Disk can receive I/O instructions independently. It provides high reliability and usability at a comparatively
  low cost by virtue of the parity data.
• Disadvantage: The write penalty of RAID 5 is larger than that of RAID 1, because the pre-update data and
  the pre-update parity data must be read internally to update the parity data when data is updated.

RAID6
• Configuration: Data Disks + Parity Disks P and Q. The diagram in the original table shows the 4D+2P
  configuration.
• Overview: Data blocks are divided among multiple Disks in the same way as RAID 5, and two parity Disks,
  P and Q, are set in each row. Therefore, data can be assured even when failures occur in up to two Disk
  Drives in a parity group.
• Advantage: RAID 6 is far more reliable than RAID 1 and RAID 5 because it can restore data even when
  failures occur in up to two Disks in a parity group.
• Disadvantage: Because the parity data P and Q must be updated whenever data is updated, RAID 6 has a
  heavier write penalty than RAID 5, and random write performance is lower than that of RAID 5 in cases
  where the number of Drives is the bottleneck.

(To be continued)

THEORY02-01-40

(Continued from preceding page)


RAID5 (concatenation configuration)
• Configuration: Two or four concatenated parity groups of RAID5 (7D+1P). The diagram in the original table
  shows the four-concatenation configuration; the two-concatenation configuration is similar.
• Overview: In the case of RAID5 (7D+1P), two or four parity groups (of eight Drives each) are concatenated,
  and the data is distributed and arranged across 16 Drives or 32 Drives.
• Advantage: When the parity group becomes a performance bottleneck, performance can be improved
  because the configuration uses two or four times the number of Drives compared with RAID5 (7D+1P).
• Disadvantage: The impact when two Drives are blocked is large, because two or four times as many LDEVs
  are arranged in comparison with RAID5 (7D+1P). However, the probability that a read of a single block in
  the parity group becomes impossible due to a failure is the same as that of RAID5 (7D+1P).
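To make the parity mechanism described for RAID 5 in Table 2-2 concrete, the following minimal Python sketch computes an XOR parity block over a stripe of data blocks and rebuilds a lost block from the surviving blocks and the parity. This is an illustration only; RAID 6 additionally maintains a second, independently computed parity (Q), which is not shown here.

  from functools import reduce

  def xor_parity(blocks):
      """Compute the XOR parity block for equally sized data blocks."""
      return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

  # Example stripe of three data blocks (as in a 3D+1P layout).
  d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
  p0 = xor_parity([d0, d1, d2])

  # If d1 is lost, it can be rebuilt from the remaining data blocks and the parity.
  rebuilt = xor_parity([d0, d2, p0])
  print(rebuilt == d1)  # True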

THEORY02-01-50

2.1.2 Comparison of RAID Levels


1. Space efficiency
The space efficiency of each RAID level is determined as follows.
• RAID1: The user area is half of the total Drive capacity due to mirroring.
• RAID5, RAID6: The parity group contains both the data part and the parity part. The space efficiency is
  the ratio of the data part to the total Drive capacity.
Table 2-3 shows the space efficiency for each RAID level.

Table 2-3 Example of Space Efficiency Comparison


RAID Level   Space Efficiency (User Area/Disk Capacity)   Remarks
RAID1        50.0%                                        Due to mirroring
RAID5        (N – 1) / N                                  Example: In case of 3D+1P, (4 – 1) / 4 = 0.75 = 75%
RAID6        (N – 2) / N                                  Example: In case of 6D+2P, (8 – 2) / 8 = 0.75 = 75%
N indicates the number of Drives which configure a parity group.
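The formulas in Table 2-3 can be evaluated directly. The following sketch (illustrative only) reproduces the space efficiency for several of the parity group configurations listed in Table 2-1:

  def space_efficiency(raid_level: str, n_drives: int) -> float:
      """Space efficiency (user area / total Drive capacity) per Table 2-3."""
      if raid_level == "RAID1":
          return 0.5                       # mirroring
      if raid_level == "RAID5":
          return (n_drives - 1) / n_drives
      if raid_level == "RAID6":
          return (n_drives - 2) / n_drives
      raise ValueError(raid_level)

  print(space_efficiency("RAID5", 4))   # 3D+1P  -> 0.75
  print(space_efficiency("RAID5", 8))   # 7D+1P  -> 0.875
  print(space_efficiency("RAID6", 8))   # 6D+2P  -> 0.75
  print(space_efficiency("RAID6", 16))  # 14D+2P -> 0.875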

THEORY02-01-60

2. I/O processing operation


Table 2-4 shows an overview of the front-end I/O and back-end I/O operations for each RAID level.

Table 2-4 Example of I/O Operation in Each RAID Level


I/O Type: Random read
• RAID1: A single Drive read is performed for a single read from the host (the data is staged from one Drive
  of the RAID pair to CM).
• RAID5, RAID6: A single Drive read is performed for a single read from the host (the data is staged from one
  Drive of the parity group to CM).

I/O Type: Sequential read
• RAID1: A Drive read is performed for each request from the host. For example, for four read requests D0 to
  D3 from the host, the Drive read is performed four times.
• RAID5, RAID6: A Drive read is performed for each request from the host. For example, for four read
  requests D0 to D3 from the host, the Drive read is performed four times.
(To be continued)

THEORY02-01-70

(Continued from preceding page)


I/O Type: Random write
• RAID1: The Drive write is performed twice for a single write from the host:
  (1) Write to the primary Drive
  (2) Write to the secondary Drive
• RAID5: For a single write from the host, the following reads are performed and a new parity is created:
  (1) Read the old data
  (2) Read the old parity
  After that, the following writes are performed:
  (3) Write the new data
  (4) Write the new parity
  In total, four I/O operations are performed.
• RAID6: In addition to the RAID5 case, the old parity of the second parity is read and the new second parity
  is written. In total, six I/O operations are performed.

(To be continued)

THEORY02-01-70
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-01-80

(Continued from preceding page)


• Sequential write
  RAID1: Perform write twice (write to the primary Drive and secondary Drive) for the write requests by the host. For the example in the diagram, perform Drive write four times for the following two write requests by the host:
  (1) (3) Primary Drive
  (2) (4) Secondary Drive

  RAID5: For write requests by the host, when the crosswise data (D0 to D2 in the diagram) is complete, create parity and write the data from the host and the parity to the Drives. For example, in case of 3D+1P, create parity for a write of three sets of data by the host and perform Drive write four times combining the data and parity.

  RAID6: In addition to the case of RAID5, create the second parity and write the data from the host and two sets of parity to the Drives. For example, in case of 6D+2P, create two sets of parity for a write of six sets of data by the host and perform Drive write eight times combining the data and parity.

  (Diagram: the write data (D0, D1, D2, ...) is received in CM and written to the Drives of the parity group together with the generated parity.)
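The parity handling described for random and sequential writes can be illustrated with a short Python sketch. This is an illustration of the general RAID5 XOR-parity technique only, not the DKC implementation; the block contents are arbitrary example bytes.

from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """Bitwise XOR of equally sized blocks (illustrative RAID5 parity)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Full-stripe (sequential) write: parity is computed from all data blocks of the stripe.
d0, d1, d2 = b"\x11" * 4, b"\x22" * 4, b"\x44" * 4
p0 = xor_blocks(d0, d1, d2)

# Small (random) write of D0: read the old data and old parity, then
# new parity = old parity XOR old data XOR new data (4 Drive I/Os in total).
new_d0 = b"\x55" * 4
new_p0 = xor_blocks(p0, d0, new_d0)

assert new_p0 == xor_blocks(new_d0, d1, d2)   # parity stays consistent with the stripe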

THEORY02-01-80
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-01-90

3. Limit performance comparison


Table 2-5 shows the performance per RAID level when setting the performance of a Drive to 100.
(N indicates the number of Drives which configure a parity group.)

Table 2-5 I/O performance per RAID level


(1) Random Read, Sequential Read
RAID Level Calculation Remarks
RAID1 100 × N Example: In case of 2D+2D, N = 4 results in 400
RAID5 100 × N Example: In case of 3D+1P, N = 4 results in 400
RAID6 100 × N Example: In case of 6D+2P, N = 8 results in 800

(2) Random Write


RAID Level Calculation Remarks
RAID1 100 × N / 2 Example: In case of 2D+2D, 100 × 4 / 2 = 200
RAID5 100 × N / 4 Example: In case of 7D+1P, 100 × 8 / 4 = 200
RAID6 100 × N / 6 Example: In case of 6D+2P, 100 × 8 / 6 = 133

(3) Sequential Write


RAID Level Calculation Remarks
RAID1 100 × N / 2 Example: In case of 2D+2D, 100 × 4 / 2 = 200
RAID5 100 × (N – 1) Example: In case of 7D+1P, 100 × (8 – 1) = 700
RAID6 100 × (N – 2) Example: In case of 6D+2P, 100 × (8 – 2) = 600

The above are theoretical values for the case where the Drives are the performance bottleneck.
They do not apply when other components are the performance bottleneck.
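The calculations in Table 2-5 can be reproduced with a small Python sketch (an informal illustration only; the function name is hypothetical and the per-Drive baseline of 100 follows the table).

def limit_performance(raid_level: str, n: int, io_type: str) -> float:
    """Theoretical limit per Table 2-5, with one Drive normalized to 100 and the Drives as the bottleneck."""
    if io_type in ("random_read", "sequential_read"):
        return 100 * n
    if io_type == "random_write":
        penalty = {"RAID1": 2, "RAID5": 4, "RAID6": 6}[raid_level]   # Drive I/Os per host write
        return 100 * n / penalty
    if io_type == "sequential_write":
        if raid_level == "RAID1":
            return 100 * n / 2
        return 100 * (n - (1 if raid_level == "RAID5" else 2))       # subtract parity Drives
    raise ValueError("unknown I/O type")

print(limit_performance("RAID5", 8, "random_write"))      # 7D+1P -> 200.0
print(limit_performance("RAID6", 8, "sequential_write"))  # 6D+2P -> 600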

4. Reliability
Table 2-6 shows the reliability related to each RAID level.

Table 2-6 Reliability of Each RAID Level


RAID Level Conditions to Guarantee the Data
RAID1 When a Drive failure occurs in the mirroring pair, recover the data from the Drive
on the opposite side.
When two Drive failures occur in the mirroring pair, the LDEV is blocked.
RAID5 When a Drive failure occurs in the parity group, recover the data using the parity
data.
When two Drive failures occur in the parity group, the LDEV is blocked.
RAID6 When one or two Drive failures occur in the parity group, recover the data using
the parity data.
When three Drive failures occur in the parity group, the LDEV is blocked.

THEORY02-01-90
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-10

2.2 Open Platform


2.2.1 Product Overview and Functions
The open platform optional functions can allocate part or all of the Disk volume area of the DKC to
Open system hosts by installing Channel Boards (CHBs) in the Disk Controller (hereinafter called DKC). This
function enables the use of the highly reliable, high-performance Storage System realized by the DKC in an
open platform or Fibre system environment. It also provides customers with flexible and optimized
system construction capability for their system expansion and migration. In an Open system environment, Fibre
Channel (FC) and Internet Small Computer System Interface (iSCSI) can be used as the Channel interface.

1. Fibre Channel option (FC) / iSCSI option (iSCSI)


The major functions available by using the Fibre Channel / iSCSI options are as follows.

(1) This enables multiplatform system users to share the highly reliable and high-performance resource
realized by the DKC.
• The SCSI interface complies with ANSI SCSI-3, a standard interface for various peripheral
devices for open systems. Thus, the DKC can be easily connected to various open-market Fibre
host systems (e.g. Workstation servers and PC servers).
• DW850 can be connected to open systems via the Fibre interface by installing the Fibre Channel Board
(DW-F800-4HF32R). Fibre connectivity is provided as a Channel option of DW850. The Fibre Channel
Board can be installed in any CHB location of DW850.
• The iSCSI interface transmits and receives block data by SCSI on the IP network. For this
reason, you can configure and operate an IP-SAN (IP-Storage Area Network) at low cost using
existing network devices. The iSCSI interface board (DW-F800-2HS10S/DW-F800-2HS10B) can
be installed in any DW-F800 CHB slot.

(2) Fast and concurrent data transmission

Data can be read and written at a maximum speed of 32 Gbps with the Fibre interface.
All of the Fibre ports can transfer data concurrently.
Data can be read and written at 10 Gbps with the iSCSI interface.

THEORY02-02-10
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-20

(3) High performance


The DKC has two independent areas of Cache Flash Memory and this mechanism also applies to
the Fibre attachment / iSCSI option. Thus, compared with a conventional Disk array Controller
used for open systems and not having a Cache, this Storage System has the following outstanding
characteristics:

• Cache data management by LRU control


• Adoption of DFW (DASD Fast Write)
• Write data duplexing
• Cache Flash Memory

(4) High availability

The DKC is fault-tolerant against any single point of failure in its components and can
continue to read and write data without stopping the system. Fault-tolerance against path failures
also depends on the multi-path configuration support of the host system.

(5) High data reliability

The Fibre attachment option automatically creates a unique eight-byte data guarantee code,
appends it to the host data, and writes it to the Disk together with the data. The data guarantee code is checked
automatically on the internal data bus of the DKC to prevent data errors due to array-specific data
distribution or integration control. Thus, the reliability of the data improves.
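The idea of a per-block guarantee code can be illustrated with a short Python sketch. This is only a conceptual illustration under assumed choices (a 4-byte block-address tag plus a CRC32); the actual DW850 guarantee-code format and algorithm are not described here.

import zlib, struct

BLOCK = 512  # logical block size from Table 2-7

def add_guarantee_code(block: bytes, lba: int) -> bytes:
    """Append an illustrative 8-byte check code (4-byte LBA tag + CRC32) to one block."""
    assert len(block) == BLOCK
    code = struct.pack(">II", lba & 0xFFFFFFFF, zlib.crc32(block))
    return block + code

def verify(block_with_code: bytes, lba: int) -> bool:
    """Recompute the check code and confirm both the data and its address are consistent."""
    block, code = block_with_code[:BLOCK], block_with_code[BLOCK:]
    return code == struct.pack(">II", lba & 0xFFFFFFFF, zlib.crc32(block))

stored = add_guarantee_code(b"\x00" * BLOCK, lba=1234)
print(verify(stored, lba=1234))   # True -> data and address check out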

(6) TrueCopy Support


TrueCopy is a function that duplicates open system data by connecting two DW850 Storage Systems,
or two areas inside a single DW850, using Fibre.
This function enables the construction of a backup system against disasters by duplicating data,
including that of the host system, or allows the two volumes containing identical data to be used
for different purposes.

THEORY02-02-20
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-30

2.2.2 Precautions on Maintenance Operations


Note the following about Fibre maintenance operations.

1. Before the LUN path configuration is changed, Fibre I/O on the related Fibre port must be stopped.
2. Before a Fibre Channel Board or LDEV is removed, the related LUN path must be removed.
3. Before a Fibre Channel Board is replaced, the related Fibre I/O must be stopped.
4. When Fibre-Topology information is changed, disconnect the Fibre cable between the port and the switch
before the change and reconnect it after the change is completed.

The precautions against the iSCSI interface maintenance work are as shown below.

1. Before changing the LUN path definition, the iSCSI interface port I/O needs to be stopped.
2. Before removing the iSCSI interface board or LDEV, the LUN path definition needs to be removed.
3. Before replacing the iSCSI interface board, the I/O needs to be stopped.

THEORY02-02-30
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-40

2.2.3 Configuration
2.2.3.1 System Configuration
1. All Fibre Configuration
The DKC can also be configured as an All Fibre configuration in which only CHB adapters are installed.
All Fibre configuration examples are shown below.

Figure 2-1 Minimum system configuration for All Fibre
(Diagram: a Fibre host with a Fibre Channel Adapter connects through the Fibre I/F to the two CHBs of the DKC; a DKB connects to an Open Volume in the 1st DB.)

Figure 2-2 Maximum all Fibre Configuration
(Diagram: multiple hosts connect through FC links to the eight CHBs on the Fibre I/F of the DKC.)

THEORY02-02-40
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-50

2.2.3.2 Channel Configuration


1. Fibre Channel Configuration
The Fibre Channel Board (CHB) PCBs must be used in sets of 2.
The DKC can install a maximum of 4 Fibre Channel Board packages (CHBs) in the VSP G350, G370, 12
in the VSP G700 (16 for the HDD-less configuration) and 12 in the VSP G900 and VSP E990 (20 for the
HDD-less configuration with CHBB installed).
The Fibre Channel Board PCB is not used for the VSP G130. The fibre channel interface is integrated in
the Controller Board of the VSP G130.

2. iSCSI Channel Configuration


The iSCSI Interface Board PCBs (CHBs) must be configured in sets of 2.
The DKC can install a maximum of 4 Channel Board packages (CHBs) in the VSP G350, G370, 12
in the VSP G700 (16 for the HDD-less configuration) and 12 in the VSP G900 and VSP E990 (20 for the
HDD-less configuration with CHBB installed).
The iSCSI Interface Board PCB is not used for the VSP G130. The iSCSI interface is integrated in the
Controller Board of the VSP G130.

THEORY02-02-50
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-60

2.2.3.3 Channel Addressing


1. Fibre Channel
Each Fibre device can set a unique Port-ID number within the range from 01 to EF. Addressing from
a Fibre host to a Fibre volume in the DKC is uniquely defined by the nexus between them:
the Initiator (host) ID, the Target (CHB port) ID, and the LUN (Logical Unit Number)
define the addressing and access path.
The maximum number of LUNs that can be allocated to one port is 2,048.
The addressing configuration is shown in Figure 2-4.

(1) Number of connected Hosts


For Fibre Channel, the number of connectable hosts is limited to 256 per Fibre port. (FC)
The number of MCU connections is limited to 16 per RCU Target port. (only for FC)

(2) Number of Host Groups


By using LUN Security, you can define a group of hosts that is permitted to access specific LUs as a Host Group.
For example, the two hosts in the hg-lnx group can only access the three LUs (00:00, 00:01, and
00:02).
The two hosts in the hg-hpux group can only access the two LUs (02:01 and 02:02).
The two hosts in the hg-solar group can only access the two LUs (01:05 and 01:06).

Figure 2-3 Example of Host Group Definition
(Diagram: on port CL1-A of the Storage System,
 host group 00 "hg-lnx" (lnx01, lnx02) accesses LDEVs 00:00, 00:01 and 00:02 as LUN00 to LUN02;
 host group 01 "hg-hpux" (hpux01, hpux02) accesses LDEVs 02:01 and 02:02 as LUN0 and LUN1;
 host group 02 "hg-solar" (solar01, solar02) accesses LDEVs 01:05 and 01:06 as LUN0 and LUN1.
 Hosts in each group can only access the LUNs in the same group.)
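As an informal sketch of the mapping shown in Figure 2-3 (the data structure and function are hypothetical illustrations, not a DKC interface), the host-group definitions could be represented as follows.

host_groups = {
    "hg-lnx":   {"hosts": {"lnx01", "lnx02"},     "ldevs": ["00:00", "00:01", "00:02"]},
    "hg-hpux":  {"hosts": {"hpux01", "hpux02"},   "ldevs": ["02:01", "02:02"]},
    "hg-solar": {"hosts": {"solar01", "solar02"}, "ldevs": ["01:05", "01:06"]},
}

def accessible_ldevs(host: str) -> list:
    """Return the LDEVs the host may access through its host group on this port."""
    for group in host_groups.values():
        if host in group["hosts"]:
            return group["ldevs"]
    return []                      # host not registered in any group: no access

print(accessible_ldevs("hpux01"))  # ['02:01', '02:02']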

THEORY02-02-60
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-70

(3) LUN (Logical Unit Number)


LUNs can be allocated from 0 to 2,047 to each Fibre Port.

Figure 2-4 Fibre addressing configuration from Host
(Diagram: the host and the other Fibre devices on a bus each have a Port ID, a different ID number within a range of 01 through EF; one port on a CHB in the DKC provides LUNs 0, 1, 2, 3, 4, 5, 6, 7 up to 2047.)

(4) PORT INFORMATION

A PORT address (AL_PA) and the Topology can be set as PORT INFORMATION.
The port address is selectable from EF to 01 (loop ID 0 to 125).
Topology information is selected from “Fabric”, “FC-AL” or “Point to point”.

THEORY02-02-70
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-80

2. iSCSI interface
For the iSCSI interface, an IPv4 or IPv6 address is assigned to the iSCSI port and is used for the connection.
Up to 16 virtual ports can be added to an iSCSI physical port. Use Command Control Interface (CCI)
when adding virtual ports.

(1) The number of connected hosts


For the iSCSI interface, the number of hosts (Initiators) connectable to a port is up to 255.

(2) Target number


A group of LUNs made accessible by the LUN Security function is defined as a target.
The target is equivalent to the Fibre Channel host group.
An iSCSI Name is allocated to each iSCSI target. When connecting from the host by the iSCSI
interface, specify the target iSCSI Name in addition to the IP address/TCP port number to
connect.

(3) LUN (Logical Unit Number)


The maximum number of LUNs allocatable to each iSCSI interface port is 2048.

(4) Port information


Use the iSCSI interface by setting the following address-related information.
• IP address: IPv4 or IPv6
• Subnet mask:
• Gateway:
• TCP port number:

THEORY02-02-80
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-90

2.2.3.4 Logical Unit


1. Logical Unit Specification
The specifications of Logical Units supported and accessible from Open system hosts are defined in
Table 2-7.

Table 2-7 LU specification

No   Item                      Specification
1    Access right              Read/Write
2    Logical Unit (LU) size    G byte (10^9): OPEN-V × n
                               G byte (1,024^3): —
3    Block size                512 Bytes
4    # of blocks               —
5    LDEV emulation name       OPEN-V (*1)

*1: “0” is added to the emulation type of the V-VOLs (e.g. OPEN-0V).

THEORY02-02-90
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-100

2. Logical Unit definition

Each volume name, such as OPEN-V, is also used as an emulation type name to be specified for each
ECC group. When the emulation type is defined on an ECC group, Logical volumes (LDEVs) are
automatically allocated to the ECC group from the specified LDEV#. After the LDEVs are created, each LUN
of a Fibre/iSCSI port can be mapped to any LDEV within the DKC. This setting is performed by
Maintenance PC operation.

This flexible LU and LDEV mapping scheme enables the same logical volume to be set to multiple paths
so that the host system can configure a shared volume configuration such as a High Availability (HA)
configuration. In a shared volume environment, however, a lock mechanism needs to be provided by
the host systems.

Figure 2-5 LDEV and LU mapping for open volume
(Diagram: two hosts access LUs through Fibre/iSCSI ports (a maximum of 2048 LUNs per port), including a shared volume accessed by both hosts. The LUs are mapped to LDEVs (CU#0:LDEV#00 to CU#2:LDEV#13) handled by DKB pairs; in this example each ECC group contains 20 LDEVs.)

THEORY02-02-100
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-110

3. LUN Security
(1) Overview
This function allows various types of servers connected to a Fibre/iSCSI port via a switch to be
segregated into a secure environment, and thus enables the storage and the servers to be used in a SAN
environment.
The MCU (initiator) port of TrueCopy does not support this function.

Figure 2-6 LUN Security
(Diagram: Host 1 and Host 2 are connected through a switch (SW) to a Fibre/iSCSI port of the DKC. Before LUN Security is set, both hosts can access LUN 0 to 2047 on the port. After LUN Security is set (Host 1 -> LU group 1, Host 2 -> LU group 2), each host can access only the LUNs in its own LU group.)

THEORY02-02-110
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-120

2.2.3.5 Volume Setting


1. Setting of volume space
The volume setting procedure uses the Maintenance PC function.

2. LUN setting
- LUN setting:

• Select the CHB, Fibre port and the LUN, and select the CU# and LDEV# to be allocated to the LUN.
• Repeat the above procedure as needed.
The MCU port (Initiator port) of TrueCopy function does not support this setting.

*1: It is possible to refer to the contents which is already set on the Maintenance PC display.
*2: The above setting can be done during on-line.
*3: Setting duplicated access paths from different hosts to the same LDEV is allowed. This
provides a means to share the same volume among host computers. It is, however, the hosts’
responsibility to manage exclusive control of the shared volume.

Refer to MAINTENANCE PC SECTION 4.1.3 Allocating the Logical Devices of a Storage System to a
Host for more detailed procedures.

THEORY02-02-120
Hitachi Proprietary DW850
Rev.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-130

2.2.3.6 Host Mode Setting


It is necessary to set the Host Mode by using the Maintenance PC if you want to change a host system.
The meaning of each mode is as follows.

*******HDS RAID Controller Models**************************


MODE 00 : Standard mode (Linux)
MODE 01 : (Deprecated) VMWare host mode (*1)
MODE 03 : HP-UX host mode
MODE 04 : Not supported
MODE 05 : OpenVMS host mode
MODE 07 : Tru64 host mode
MODE 08 : Not supported
MODE 09 : Solaris host mode
MODE 0A : NetWare host mode
MODE 0C : (Deprecated) Windows host mode (*2)
MODE 0F : AIX host mode
MODE 21 : VMWare host mode (Online LU)
MODE 2C : Windows host mode
MODE 4C : Not supported
others : Reserved
***********************************************************

*1: There are no functional differences between host mode 01 and 21. When you first connect a host, it
is recommended that you set host mode 21.
*2: There are no functional differences between host mode 0C and 2C. When you first connect a host,
it is recommended that you set host mode 2C.

Please set the HOST MODE OPTION if required.


For details, see MAINTENANCE PC SECTION 4.1.4.1 Editing Host Group (for fibre connection) or
4.1.4.2 Editing iSCSI Target (for iSCSI connection).

THEORY02-02-130
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-140

2.2.4 Control Function


2.2.4.1 Cache Specifications (Common to Fibre/iSCSI)
The DKC has two independent areas of Cache Flash Memory for volumes by which high reliability and high
performance with the following features can be achieved.

1. Cache data management by LRU control


Data that has been read out is stored in the Cache and managed under LRU control. For online
transaction processing, therefore, a high Cache hit ratio can be expected and the data-write time is reduced
for improved system throughput.

2. Adoption of DFW (DASD Fast Write)


At the same time that the normal write command writes data into the Cache, it reports the end of the
write operations to a host. Data write to the Disk is asynchronous with host access. The host, therefore,
can execute the next process without waiting for the end of data write to Disk.

3. Write data duplexing


The same write data is stored in the two Cache areas provided in the DKC. Thus, loss of DFW data
can be avoided even if a failure occurs in the Cache.

4. Non-volatile Cache
Batteries and Cache Flash Memories (CFMs) are installed on the Controller Boards in a DKC. Once data has
been written into the Cache, the data is retained even if a power interruption occurs, because the data
is transferred to the CFM.

THEORY02-02-140
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-150

2.2.4.2 iSCSI Command Multiprocessing


1. Command Tag Queuing
The Command Tag Queuing function defined in the SCSI specification is supported.
This function allows each Fibre/iSCSI port on a CHB to accept multiple iSCSI commands even for the
same LUN.
The DKC can process those queued commands in parallel because a LUN is composed of multiple
physical Drives.

2. Concurrent data transfer

Fibre ports on a CHB can perform host I/Os and data transfer at a maximum of 32 Gbps
concurrently.
This also applies among different CHBs.
iSCSI ports can perform host I/Os and data transfer at a maximum of 10 Gbps concurrently.

THEORY02-02-150
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-160

2.2.5 HA Software Linkage Configuration in a Cluster Server Environment


When this Storage System is linked to High-Availability software (HA software) which implements dual-
system operation for improved total system fault-tolerance and availability, the open system side can also
achieve higher reliability on the system scale.

2.2.5.1 Hot-standby System Configuration


The HA software minimizes system down time in the event of hardware or software failures and allows
processing to be restarted or continued. The basic system takes a hot-standby (asymmetric) configuration,
in which, as shown in the figure below, two hosts (an active host and a standby host) are connected via a
monitoring communication line. In the hot-standby configuration, a complete dual system can be built by
connecting the Fibre/iSCSI cables of the active and standby hosts to different CHB Fibre/iSCSI ports.

Figure 2-7 Hot-standby configuration
(Diagram: Host A (active) and Host B (standby), each running the HA software, an application program (AP), a file system (FS) and hardware (HW), are connected to a LAN and to each other by a monitoring communications line. Both hosts connect through Fibre/iSCSI to CHB0 and CHB1 of the DKC/DB and share LU0.)
• The HA software under the hot-standby configuration operates in the following sequence:
(1) The HA software within the active host monitors the operational status of its own system by using a
monitoring agent and sends the results to the standby host through the monitoring communication
line (this process is referred to as “heart beat transmission”). The HA software within the standby
host monitors the operational status of the active host based on the received information.
(2) If an error message is received from the active host or no message is received, the HA software
of the standby host judges that a failure has occurred in the active host. As a result, it transfers
management of the IP addresses, Disks, and other common resources to the standby host (this
process is referred to as “fail-over”).
(3) The HA software starts the application program concerned within the standby host to take over the
processing on behalf of the active host.

• Use of the HA software allows a processing request from a client to be taken over. In the case of some
specific application programs, however, it appears to the client as if the host that was processing the task has
been rebooted due to the host switching. To ensure continued processing, therefore, a login to the application
program within the host or resending of the processing request may need to be executed.

THEORY02-02-160
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-170

2.2.5.2 Mutual Standby System Configuration


In addition to the hot-standby configuration described above, a mutual standby (symmetric) configuration can
be used to allow two or more hosts to monitor each other. Since this Storage System has eight Fibre/iSCSI
ports, it can, in particular, be applied to a large-scale cluster environment in which more than two hosts exist.

Figure 2-8 Mutual Standby System Configuration
(Diagram: Host A and Host B, each running the HA software and its own application (AP-1 on Host A, AP-2 on Host B), are connected to a LAN and by a monitoring communications line. When a failure occurs on one host, its application is started on the other host. Both hosts connect through Fibre/iSCSI to CHB0 and CHB1 of the DKC/DB and share LU0.)

• In the mutual standby configuration, since both hosts operate as active hosts, there are no resources that
become unnecessary during normal processing. On the other hand, during a backup (takeover) operation
there are the disadvantages that performance deteriorates and that the software configuration becomes
complex.
• This Storage System is scheduled to support Oracle SUN CLUSTER, Symantec Cluster server, Hewlett-
Packard MC/ServiceGuard, IBM HACMP, and so on.

THEORY02-02-170
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-180

2.2.5.3 Configuration Using Host Path Switching Function


When the host is linked with the HA software and has a path switching capability, if a failure occurs in the
adapter, Fibre/iSCSI cable, or DKC (Fibre/iSCSI ports and the CHB) that is being used, automatic path
switching will take place as shown below.

Figure 2-9 Configuration Using the Host Path Switching Function
(Diagram: Host A (active) and Host B (standby) are connected to a LAN; each host has Adapter 0 and Adapter 1 connected to CHB0 and CHB1 of the Storage System, which provides the shared LU0. On a host capable of switching the path, a path failure is handled by automatic path switching, so host switching is not required.)

The path switching function enables processing to be continued without host switching in the event of a
failure in the adapter, Fibre/iSCSI cable, Storage System or other components.

THEORY02-02-180
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-190

2.2.6 LUN Addition


2.2.6.1 Overview
The LUN addition function makes it possible to add LUNs to DW850 Fibre ports during I/O.
Some host operations are required before the added volumes are recognized and become usable from the host
operating systems.

2.2.6.2 Specifications
1. General
(1) LUN addition function supports Fibre interface.
(2) LUN addition can be executed by Maintenance PC or by Web Console.
(3) Some operating systems require reboot operation to recognize the newly added volumes.
(4) When new LDEVs should be installed for LUN addition, install the LDEVs by Maintenance PC
first. Then add LUNs by LUN addition from Maintenance PC or Web Console.

2. Platform support
Host Platforms supported for LUN addition are shown in Table 2-8.

Table 2-8 Platform support level


Support level                                              Platform
(A) LUN addition and LUN recognition.                      Solaris, HP-UX, AIX, Windows
(B) LUN addition only.                                     Linux
    Reboot is required before new LUNs are recognized.
(C) LUN addition is not supported.
    Host must be shut down before installing LUNs and then must be rebooted.

THEORY02-02-190
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-200

2.2.6.3 Operations
1. Operations
Step 1: Execute LUN addition from the Maintenance PC.
Step 2: Check whether or not the platform of the Fibre port supports LUN recognition with Table 2-8.
Supported (A) : Execute the LUN recognition procedures in Table 2-9.
Not supported (B) : Reboot the host and execute the normal installation procedure.

2. Host operations
Host operations for LUN recognition are shown in Table 2-9.

Table 2-9 LUN recognition procedures overview for each platform


Platform LUN recognition procedures
HP-UX (1) ioscan (check device added after IPL)
(2) insf (create device files)
Solaris (1) /usr/sbin/drvconfig
(2) /usr/sbin/devlinks
(3) /usr/sbin/disks
(4) /usr/ucb/ucblinks
AIX (1) Devices-Install/Configure Devices Added After IPL By SMIT
Windows Automatically detected

THEORY02-02-200
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-210

2.2.7 LUN Removal


2.2.7.1 Overview
The LUN removal function makes it possible to remove LUNs from the DW850.

2.2.7.2 Specifications
1. General
(1) LUN removal can be used only for the ports on which LUNs already exist.
(2) LUN removal can be executed by Maintenance PC or by Web Console.
(3) Before removing LUNs, stop host I/O to the concerned LUNs.
(4) If necessary, execute a backup of the concerned LUNs.
(5) Remove the concerned LUNs from the HOST.
(6) In case of AIX, release the reserve of the concerned LUNs.
(7) In case of HP-UX, do not remove LUN=0 under an existing target ID.

NOTE: If LUN removal is done without stopping host I/O or releasing the reserve, it fails. In that case,
stop the host I/O or release the reserve of the concerned LUNs and try again. If LUN removal
still fails after stopping host I/O or releasing the reserve, a health check command may have
been issued from the HOST.
In that case, wait about three minutes and try again.

2. Platform support
Host platforms supported for LUN removal are shown in Table 2-10.

Table 2-10 Support platform


Platform    OS         Fibre/iSCSI
HP          HP-UX      ○
SUN         Solaris    ○
RS/6000     AIX        ○
PC          Windows    ○
(example) ○: supported, ×: not supported

THEORY02-02-210
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-220

2.2.7.3 Operations
1. Operations
Step 1: Confirm whether or not the platform supports LUN removal with Table 2-10.
Supported : Go to Step 2.
Not supported : Go to Step 3.
Step 2: If the HOST MODE of the port in use is not 00, 04, or 07, go to Step 4.
Step 3: Stop host I/O to the concerned LUNs.
Step 4: If necessary, execute a backup of the concerned LUNs.
Step 5: Remove the concerned LUNs from the HOST.
Step 6: In case of AIX, release the reserve of the concerned LUNs.
If not AIX, go to Step 7.
Step 7: Execute LUN removal from the Maintenance PC.

2. Host operations
Host operations for LUN removal procedures are shown in Table 2-11.

Table 2-11 LUN removal procedures overview for each platform


Platform LUN removal procedures
HP-UX mount point: /01, volume group name: vg01
(1) umount /01 (unmount)
(2) vgchange -a n vg01 (deactivate volume group)
(3) vgexport /dev/vg01 (export volume group)
Solaris mount point: /01
(1) umount /01 (unmount)
AIX mount point: /01, volume group name: vg01, device file name: hdisk1
(1) umount /01 (unmount)
(2) rmfs -r /01 (delete file system)
(3) varyoffvg vg01 (vary off volume group)
(4) exportvg vg01 (export volume group)
(5) rmdev -l hdisk1 -d (delete device file)

THEORY02-02-220
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-230

2.2.8 Prioritized Port Control (PPC) Functions


2.2.8.1 Overview
The Prioritized Port Control (PPC) function allows you to use the DKC for both production and development.
The assumed system configuration for using the Prioritized Port Control option consists of a single DKC
that is connected to multiple production servers and development servers. Using the Prioritized Port Control
function under this system configuration allows you to optimize the performance of the development servers
without adversely affecting the performance of the production servers.
MCU port (Initiator port) of Fibre Remote Copy function does not support Prioritized Port Control (PPC).

The Prioritized Port Control option has two different control targets: the fibre port and the open-systems host’s World
Wide Name (WWN). The fibre ports used on production servers are called prioritized ports, and the fibre
ports used on development servers are called non-prioritized ports. Similarly, the WWNs used on production
servers are called prioritized WWNs, and the WWNs used on development servers are called non-prioritized
WWNs.
The Prioritized Port Control option cannot be used simultaneously for both the ports and WWNs of the same
DKC. Up to 80 ports (*) or 2048 WWNs can be controlled for each DKC.
*: When the number of installed ports in the storage system is less than this value, the maximum
number is the number of installed ports in the storage system.

The Prioritized Port Control option monitors I/O rate and transfer rate of the fibre ports or WWNs. The
monitored data (I/O rate and transfer rate) is called the performance data, and it can be displayed in graphs.
You can use the performance data to estimate the threshold and upper limit for the ports or WWNs, and
optimize the total performance of the DKC.

1. Prioritized Ports and WWNs


The fibre ports or WWNs used on production servers are called prioritized ports or prioritized WWNs,
respectively. Prioritized ports or WWNs can have threshold control set, but are not subject to upper limit
control. Threshold control allows the maximum workload of the development server to be set according
to the workload of the production server, rather than at an absolute level. To do this, the user specifies
whether the current workload of the production server is high or low, so that the value of the threshold
control is indexed accordingly.

2. Non-Prioritized Ports and WWNs


The fibre ports or WWNs used on development servers are called non-prioritized ports or non-prioritized
WWNs, respectively. Non-prioritized ports or WWNs are subject to upper limit control, but not threshold
control. Upper limit control makes it possible to set the I/O of the non-prioritized port or WWN within a
range that does not affect the performance of the prioritized port or WWN.

THEORY02-02-230
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-240

2.2.8.2 Overview of Monitoring Functions


1. Monitoring Function
Monitoring allows you to collect performance data, so that you can set optimum upper limit and
threshold controls. When monitoring the ports, you can collect data on the maximum, minimum and
average performance, and select either per port, all prioritized ports, or all non-prioritized ports. When
monitoring the WWNs, you can collect data on the average performance only, and select either per
WWN, all prioritized WWNs, or all non-prioritized WWNs.
The performance data can be displayed in graph format either in the real time mode or in the offline mode.
The real time mode displays the performance data of the currently active ports or WWNs. The data is
refreshed at a user-specified interval between 1 and 15 minutes (in units of one minute), and you can view the
varying data in real time. The offline mode displays the stored performance data. Statistics are collected
at a user-specified interval between 1 and 15 minutes, and stored for 1 to 15 days.

2. Monitoring and Graph Display Mode


When you activate the Prioritized Port Control option, the Select Mode panel where you can select either
Port Real Time Mode, Port Offline Mode, WWN Real Time Mode, or WWN Offline Mode opens. When
you select one of the modes, monitoring starts automatically and continues unless you stop monitoring.
However, data can be stored for up to 15 days. To stop the monitoring function, exit the Prioritized Port
Control option, and when a message asking if you want to stop monitoring is displayed, select the Yes
button.

(1) The Port/WWN Real Time Mode is recommended if you want to monitor the port or WWN
performance for a specific period of time (within 24 hours) of a day to check the performance in
real time.
(2) The Port/WWN Offline Mode is recommended if you want to collect certain amount of the port or
WWN performance data (maximum of one week), and check the performance in non-real time.

To determine a preliminary upper limit and threshold, first run only the production server and collect its
performance data, then run the development server and check whether the performance of the prioritized
port changes. If the performance of the prioritized port does not change, increase the upper limit of the
non-prioritized port. After that, recollect and analyze the performance data. Repeat these steps to
determine the optimized upper limit and threshold.

THEORY02-02-240
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-250

2.2.8.3 Procedure (Flow) of Prioritized Port and WWN Control


To perform prioritized port (WWN) control, determine the upper limit for the non-prioritized ports (WWNs)
by using the performance monitoring function and checking that production is not affected. Figure 2-10 shows
the procedure for prioritized port (WWN) control.

Figure 2-10 Flow of Prioritized Port (WWN) Control

(1) Monitoring the current performance of the production server
Gather the performance data with only the production server running, using performance
monitoring for each port (WWN) in IO/s and MB/s.

(2) Determining an upper limit for non-prioritized ports (WWNs)
Determine a preliminary upper limit from the performance data in Step (1) above.

(3) Setting or resetting an upper limit for non-prioritized ports (WWNs)
Note on resetting the upper limit:
・Set a lower value if the performance of prioritized ports (WWNs) is affected.
・Set a higher value if the performance of prioritized ports (WWNs) is not affected.

(4) Running both production and development servers together
If there are two or more development servers, start them one by one.

(5) Monitoring the performance of the servers
Check if there is performance deterioration at prioritized ports (WWNs).

(6) Determining an upper limit
Is this upper limit of non-prioritized ports (WWNs) the maximum value that does not affect
the performance of prioritized ports (WWNs)?
If No, return to Step (3) and reset the upper limit. If Yes, go to Step (7).

(7) Starting operations on the production and development servers
THEORY02-02-250
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-02-260

2.2.9 Replacing Firmware Online


2.2.9.1 Overview
Firmware replacement during I/O is enabled by significantly reducing the offline time of the firmware
replacement.
This allows the firmware replacement (online replacement) to be performed without stopping I/O of the host
connected to the port on the Channel Board, even in a system that does not have a path switching function.

THEORY02-02-260
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-10

2.3 Logical Volume Formatting


2.3.1 High-speed Format
2.3.1.1 Overviews
The DKC can format two or more ECCs at the same time by using the Logical Volume formatting function
provided by the HDDs themselves. However, when the encryption function is used, the high-speed format cannot be used.

Table 2-12 Flow of Format


Item No.  Item                                     Contents
1         Maintenance PC operation                 Specify a parity group and execute the LDEV format.
2         Display of execution status              The progress (%) is displayed in the Task window or in the
                                                   summary of the Parity Group window and LDEV window.
3         Execution result                         • Normal: Completed normally
                                                   • Failed: Terminated abnormally
4         Recovery action when a failure occurs    Same as the conventional one. However, a retry is to be executed
                                                   in units of ECC. (Because the Logical Volume formatting is
                                                   terminated abnormally in units of ECC when a failure occurs in
                                                   the HDD.)
5         Operation of the Maintenance PC for a    When the Logical Volume format for more than one ECC is
          high-speed Logical Volume formatting     instructed, the high-speed processing is carried out (*1).
          target
6         PS/OFF or powering off                   The Logical Volume formatting is suspended.
                                                   No automatic restart is executed.
7         Maintenance PC powering off during       After the Maintenance PC is rebooted, the indication from before
          execution of a Logical Volume            the PC was powered off continues to be displayed.
          formatting
8         Execution of a high-speed Logical        An ECC containing an HDD whose data is saved to a spare fails
          Volume format while data is saved to     the high-speed Logical Volume formatting and changes to the
          a spare                                  low-speed format. (Because the low-speed formatting is executed
                                                   after the high-speed format is completed, the format time
                                                   becomes long.)
                                                   After the high-speed Logical Volume formatting is completed,
                                                   execute the copy back of the HDD whose data is saved to the
                                                   spare, identified from the SIM log, and restore it.
*1: Normal Format is used for ECC of SSD.

THEORY02-03-10
Hitachi Proprietary DW850
Rev.4.3 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-20

2.3.1.2 Estimation of Logical Volume Formatting Time


The standard formatting time of the high-speed LDEV format and the low-speed LDEV format for each
Drive type is described below.
Note that the Storage System configuration at the time of this measurement is as shown below.

<Storage System Conditions at the Time of Format Measurement>


• The number of the installed DKBs (VSP G130, G350, G370 and F350, F370: CTL is directly installed,
VSP G700, VSP F700, VSP G900 and VSP F900: Two per cluster)
• Without I/O
• Perform the formatting for the single ECC
• Define the number of LDEVs (define a maximum number of 100GB LDEVs for the single ECC)
• Measurement emulation (OPEN-V)

1. HDD
The formatting time of an HDD does not depend on the number of logical volumes; it is determined by
the capacity and the rotational speed of the HDD.
(1) High speed LDEV formatting
The high-speed format time is indicated as follows.
These values are only a rough guide for the standard required time, and the actual formatting time
may differ depending on the RAID group and Drive type.

Table 2-13 High-speed format time estimation


(Unit : min)
HDD Capacity/rotation Speed Formatting Time (*3) Monitoring Time (*1)
600 GB / 10 krpm 100 150
1.2 TB / 10 krpm 170 260
2.4 TB / 10 krpm 285 430
6 TB / 7.2 krpm 805 1210
10 TB / 7.2 krpm 1090 1635
14 TB / 7.2 krpm 1385 2080

THEORY02-03-20
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-30

(2) Slow LDEV formatting

The rough low-speed formatting time per 1 TB/1 PG without host I/O is indicated as follows (*2) (*4).

Table 2-14 10 krpm


• 10 krpm : 600 GB (Unit : min)
600 GB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 160 130 125 125 120
RAID5 3D+1P 110 85 80 85 85
4D+1P 85 65 60 65 65
6D+1P 60 45 45 45 45
7D+1P 50 40 40 40 40
RAID6 6D+2P 60 50 45 45 45
12D+2P 35 25 25 25 25
14D+2P 30 25 20 20 20
• 10 krpm : 1.2 TB (Unit : min)
1.2 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 155 135 130 130 130
RAID5 3D+1P 110 85 85 85 85
4D+1P 85 65 65 65 65
6D+1P 60 45 45 45 45
7D+1P 50 40 40 40 40
RAID6 6D+2P 60 45 45 45 45
12D+2P 35 25 25 25 25
14D+2P 30 20 20 20 20
• 10 krpm : 2.4 TB (Unit : min)
2.4 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 145 120 115 115 115
RAID5 3D+1P 100 80 75 75 75
4D+1P 75 60 60 60 55
6D+1P 55 40 40 40 35
7D+1P 50 35 35 35 30
RAID6 6D+2P 50 40 40 40 35
12D+2P 35 20 20 20 20
14D+2P 30 20 20 20 15

THEORY02-03-30
Hitachi Proprietary DW850
Rev.4.3 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-40

Table 2-15 7.2 krpm


• 7.2 krpm : 6 TB (Unit : min)
6 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 215 160 160 160 160
RAID5 3D+1P 140 100 90 90 90
4D+1P 110 75 70 70 65
6D+1P 75 50 45 45 45
7D+1P 65 45 40 40 40
RAID6 6D+2P 75 50 45 45 45
12D+2P 40 30 25 25 25
14D+2P 35 25 25 25 20
• 7.2 krpm : 10 TB (Unit : min)
10 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 210 160 160 155 155
RAID5 3D+1P 140 90 85 80 80
4D+1P 105 75 65 60 60
6D+1P 70 50 45 40 40
7D+1P 65 45 40 35 35
RAID6 6D+2P 75 50 45 40 40
12D+2P 40 30 25 20 20
14D+2P 35 30 25 20 20
• 7.2 krpm : 14 TB (Unit : min)
14 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 180 130 130 130 130
RAID5 3D+1P 110 85 80 75 75
4D+1P 85 70 60 55 55
6D+1P 60 45 40 35 35
7D+1P 55 40 35 30 30
RAID6 6D+2P 65 50 45 40 40
12D+2P 35 30 25 20 20
14D+2P 30 30 25 20 20

THEORY02-03-40
Hitachi Proprietary DW850
Rev.6.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-50

2. SAS SSD
SAS SSDs do not have the self LDEV format function.
LDEV formatting is performed by the slow LDEV format only.
The rough formatting time per 1 TB/1 PG without host I/O is indicated as follows (*2) (*4).

Table 2-16 SAS SSD format time estimation


• SAS SSD : 480 GB (Unit : min)
480 GB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 15 15 15 15 −
RAID5 3D+1P 20 10 10 10 −
4D+1P 20 10 10 10 −
6D+1P 15 5 5 5 −
7D+1P 15 5 5 5 −
RAID6 6D+2P 20 5 5 10 −
12D+2P 15 5 5 5 −
14D+2P 15 5 5 5 −
• SAS SSD : 960 GB (Unit : min)
960 GB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 15 15 15 15 15
RAID5 3D+1P 20 10 10 10 10
4D+1P 20 10 10 10 10
6D+1P 15 5 5 5 5
7D+1P 15 5 5 5 5
RAID6 6D+2P 20 5 5 5 5
12D+2P 15 5 5 5 5
14D+2P 15 5 5 5 5
(To be continued)
The formatting time remains the same even with 16 SSDs because the transfer of the format data does not
reach the transfer limit of the path.
Depending on the SSD internal condition, the formatting time may be approximately 4 times shorter than these
values.

THEORY02-03-50
Hitachi Proprietary DW850
Rev.6.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-60

(Continued from the preceding page)


• SAS SSD : 1.9 TB (Unit : min)
1.9 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 15 15 15 15 15
RAID5 3D+1P 20 10 10 10 10
4D+1P 20 10 10 10 10
6D+1P 15 5 5 5 5
7D+1P 15 5 5 5 5
RAID6 6D+2P 20 5 5 5 5
12D+2P 15 5 5 5 5
14D+2P 15 5 5 5 5
• SAS SSD : 3.8 TB (Unit : min)
3.8 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 15 15 15 15 15
RAID5 3D+1P 20 10 10 10 10
4D+1P 20 10 10 10 10
6D+1P 15 10 10 10 10
7D+1P 15 5 5 5 5
RAID6 6D+2P 20 10 10 10 10
12D+2P 20 5 5 5 5
14D+2P 20 5 5 5 5
• SAS SSD : 7.6 TB (Unit : min)
7.6 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 15 15 15 15 15
RAID5 3D+1P 20 10 10 10 10
4D+1P 20 10 10 10 10
6D+1P 15 5 5 5 5
7D+1P 15 5 5 5 5
RAID6 6D+2P 20 10 5 5 5
12D+2P 20 5 5 5 5
14D+2P 20 5 5 5 5
(To be continued)
The formatting time remains the same even with 16 SSDs because the transfer of the format data does not
reach the transfer limit of the path.
Depending on the SSD internal condition, the formatting time may be approximately 4 times shorter than these
values.

THEORY02-03-60
Hitachi Proprietary DW850
Rev.6.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-61

(Continued from the preceding page)


• SAS SSD : 15 TB (Unit : min)
15 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 20 20 20 20 20
RAID5 3D+1P 25 15 15 15 15
4D+1P 25 15 15 15 15
6D+1P 20 10 10 10 10
7D+1P 20 10 10 10 10
RAID6 6D+2P 30 10 10 10 10
12D+2P 20 10 10 10 10
14D+2P 20 10 10 10 10
• SAS SSD : 30 TB (Unit : min)
30 TB
RAID Level Standard Formatting Time (*3)
VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D 15 10 10 10 10
RAID5 3D+1P 20 10 10 10 10
4D+1P 20 10 10 10 10
6D+1P 20 5 5 5 5
7D+1P 20 5 5 5 5
RAID6 6D+2P 20 5 5 5 5
12D+2P 20 5 5 5 5
14D+2P 20 5 5 5 5
The formatting time remains the same even with 16 SSDs because the transfer of the format data does not
reach the transfer limit of the path.
Depending on the SSD internal condition, the formatting time may be approximately 4 times shorter than these
values.

THEORY02-03-61
Hitachi Proprietary DW850
Rev.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-70

3. FMD
The formatting time of an FMD does not depend on the number of ECCs; it is determined by the capacity of the FMD.
(1) High speed LDEV formatting
The high-speed format time is indicated as follows.
These values are only a rough guide for the standard required time, and the actual formatting time
may differ depending on the RAID group and Drive type.

Table 2-17 FMD High-speed format time estimation


(Unit : min)
FMD Capacity Formatting Time (*3) Time Out Value (*1)
3.5 TB (3.2 TiB) 5 10
7 TB 5 10
14 TB 5 10

(2) Slow LDEV formatting

The rough low-speed formatting time per 1 TB/1 PG without host I/O is indicated as follows (*2) (*4).

THEORY02-03-70
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-80

Table 2-18 FMD Low-speed format time estimation


• FMD : 3.5 TB (3.2 TiB) (Unit : min)
3.5 TB
RAID Level Standard Formatting Time (*3)
VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D - - 5 5
RAID5 3D+1P - - 5 5
4D+1P - - 5 5
6D+1P - - 5 5
7D+1P - - 5 5
RAID6 6D+2P - - 5 5
12D+2P - - 5 5
14D+2P - - 5 5
• FMD : 7 TB (Unit : min)
7 TB
RAID Level Standard Formatting Time (*3)
VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D - - 10 10
RAID5 3D+1P - - 5 5
4D+1P - - 5 5
6D+1P - - 5 5
7D+1P - - 5 5
RAID6 6D+2P - - 5 5
12D+2P - - 5 5
14D+2P - - 5 5
• FMD : 14 TB (Unit : min)
14 TB
RAID Level Standard Formatting Time (*3)
VSP G350 VSP G370 VSP G700 VSP G900
RAID1 2D+2D - - 10 10
RAID5 3D+1P - - 5 5
4D+1P - - 5 5
6D+1P - - 5 5
7D+1P - - 5 5
RAID6 6D+2P - - 5 5
12D+2P - - 5 5
14D+2P - - 5 5

THEORY02-03-80
Hitachi Proprietary DW850
Rev.7 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-90

4. NVMe SSD
NVMe SSDs do not have the self LDEV format function.
LDEV formatting is performed by the slow LDEV format only.
The rough formatting time per 1 TB/1 PG without host I/O is indicated as follows (*2) (*4).

Table 2-19 NVMe SSD format time estimation


• NVMe SSD : 1.9 TB (Unit : min)
1.9 TB
RAID Level Standard Formatting Time (*3)
VSP E990
RAID1 2D+2D 20
RAID5 3D+1P 20
4D+1P 20
6D+1P 20
7D+1P 10
RAID6 6D+2P 10
12D+2P 10
14D+2P 5

• NVMe SSD : 3.8 TB (Unit : min)


3.8 TB
RAID Level Standard Formatting Time (*3)
VSP E990
RAID1 2D+2D 20
RAID5 3D+1P 20
4D+1P 20
6D+1P 20
7D+1P 10
RAID6 6D+2P 10
12D+2P 10
14D+2P 5
(To be continued)
The formatting time remains the same even with 16 SSDs because the transfer of the format data does not
reach the transfer limit of the path.

THEORY02-03-90
Hitachi Proprietary DW850
Rev.7 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-91

(Continued from the preceding page)


• NVMe SSD : 7.6 TB (Unit : min)
7.6 TB
RAID Level Standard Formatting Time (*3)
VSP E990
RAID1 2D+2D 20
RAID5 3D+1P 20
4D+1P 20
6D+1P 20
7D+1P 10
RAID6 6D+2P 10
12D+2P 10
14D+2P 5

• NVMe SSD : 15TB (Unit : min)


15 TB
RAID Level Standard Formatting Time (*3)
VSP E990
RAID1 2D+2D 20
RAID5 3D+1P 20
4D+1P 20
6D+1P 20
7D+1P 10
RAID6 6D+2P 10
12D+2P 10
14D+2P 5

The formatting time remains the same even with 16 SSDs because the transfer of the format data does not
reach the transfer limit of the path.

THEORY02-03-91
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-92

*1: After the standard formatting time has elapsed, the display on the Web Console shows 99% until it
reaches the monitoring time. Because the Drive itself performs the format and the progress rate
against the total capacity cannot be obtained, the ratio of the elapsed time from the start of the
format to the required formatting time is displayed.
*2: If there is I/O, the formatting time becomes at least 6 times as long as the listed value,
depending on the I/O load.
*3: The standard formatting time varies according to the generation of the Drive.
NOTE: The formatting time when mixing the Drive types and the configurations described in
(1) High speed LDEV formatting and (2) Slow LDEV formatting divides into the
following cases.

(a) When only the high speed formatting available Drives (1. HDD, 3. FMD) are
mixed
The formatting time is the same as the formatting time of Drive types and
configurations with the maximum standard time.

(b) When only the low speed formatting available Drives (2. SAS SSD) are mixed
The formatting time is the same as the formatting time of Drive types and
configurations with the maximum standard time.

(c) When the high speed formatting available Drives (1. HDD, 3. FMD) and the low
speed formatting available Drives (2. SAS SSD) are mixed

(1) The maximum standard time in the high speed formatting available Drive
configuration is the maximum high speed formatting time.

(2) The maximum standard time in the low speed formatting available Drive
configuration is the maximum low speed formatting time.

The formatting time is the sum of the above formatting time (1) and (2).

When the high speed formatting available Drives and the low speed formatting
available Drives are mixed in one formatting process, the low speed formatting
starts after the high speed formatting is completed. Even after the high speed
formatting is completed, the logical volumes with the completed high speed
formatting cannot be used until the low speed formatting is completed.

In all cases of (a), (b) and (c), the time required to start using the logical volumes
takes longer than the case that the high speed formatting available Drives and the low
speed formatting available Drives are not mixed.
Therefore, when formatting multiple Drive types and the configurations, we
recommend dividing the formatting work and starting the work individually from a
Drive type and a configuration with the shorter standard time.
*4: The time required to format the drive might increase by up to approximately 20% for DBs at the
rear stage of a cascade connection.
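As an informal illustration of the mixing rule in the NOTE above (the function and its arguments are hypothetical; the example standard times are taken from Table 2-13 and Table 2-16), the combined formatting time can be sketched as follows.

def mixed_format_time(high_speed_times, low_speed_times):
    """Rough total formatting time [min] when Drive types/configurations are mixed.

    high_speed_times: standard times of the high-speed-capable Drives (HDD, FMD)
    low_speed_times:  standard times of the low-speed-only Drives (SAS SSD)
    """
    high = max(high_speed_times, default=0)   # cases (a)/(c)-(1): longest high-speed format
    low = max(low_speed_times, default=0)     # cases (b)/(c)-(2): longest low-speed format
    return high + low                         # case (c): low-speed starts after high-speed completes

# Example with illustrative standard times from the tables above (805 min and 10 min).
print(mixed_format_time([805], [10]))         # -> 815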

THEORY02-03-92
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-100

2.3.2 Quick Format


2.3.2.1 Overviews
Quick Format provides a function that formats in the background, which allows the volumes to be used without
waiting for the completion of the formatting after the formatting is started.
The support specifications are shown below.

Table 2-20 Quick Format Specifications


Item No.  Item                      Contents
1         Supported Drive HDD type  All Drive types are supported.
2         Number of parity groups   • Quick Format can be performed on multiple parity groups simultaneously.
            The number of those parity groups depends on the total number of parity group entries.
            The number of entries is an indicator for controlling the number of parity groups on which
            Quick Format can be performed. The number of parity group entries depends on the capacity of
            the drives configuring each parity group. The number of entries per parity group is as follows.

              Model                                   Capacity of drives composing   Number of entries
                                                      a parity group                 per parity group
              VSP G130, G350, G370, G700, G900 and    32 TB or less                  1 entry
              VSP F350, F370, F700, F900              Greater than 32 TB             2 entries
              VSP E990                                48 TB or less                  1 entry
                                                      Greater than 48 TB             2 entries

            The maximum number of entries on which Quick Format can be performed is as follows.
            • VSP G130, VSP G350/G370, VSP F350/F370: 18 entries
            • VSP G700, VSP F700: 36 entries
            • VSP G900, VSP F900, VSP E990: 72 entries
            • The number of volumes is not limited as long as it is less than or equal to the maximum
              number of entries.
            • In the case of four concatenations, the number of parity groups is four. In the case of two
              concatenations, the number of parity groups is two.
(To be continued)

THEORY02-03-100
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-110

(Continued from the preceding page)


Item Item Contents
No.
3 Combination with It is operable in combination with all P.P.
various P.P.
4 Formatting types When performing a format from Maintenance PC, Web Console or CLI,
you can select either Quick Format or the normal format.
5 Additional start in Additional Quick Format can be executed during Quick Format execution.
execution In this case, the total number of entries during Quick Format and those to
be added is limited to the maximum number of entries per model.
6 Preparing Quick Format • When executing Quick Format, management information is created first. I/
O access cannot be executed in the same way as the normal format in this
period.
• Creating management information takes up to about one minute for one
parity group, and up to about 36 minutes in case of 36 parity groups for
the preparation.
7 Blocking and restoring • When the volume during Quick Format execution is blocked for
the volume maintenance, the status of the volume (during Quick Format execution) is
stored in the Storage System. When the volume is restored afterwards, the
volume status becomes Normal (Quick Format) .
Therefore, parity groups in which all volumes during Quick Format are
blocked are included in the number of entries during Quick Format.
The number of entries for additional Quick Format can be calculated with
the following calculating formula: The maximum number of entries
per model - X - Y
(Legend)
X: The number of entries for parity groups during Quick Format.
Y: The number of entries for parity groups in which all volumes during
Quick Format are blocked.
8 Operation at the time of After P/S ON, Quick Format restarts.
PS OFF/ON
9 Restrictions • Quick Format cannot be executed to the journal volume of Universal
Replicator, external volume, and virtual volume.
• Volume Migration and Quick Restore of ShadowImage cannot be executed
to a volume during Quick Format.
• When the parity group setting is the Accelerated Compression, Quick
Format cannot be performed. (If performed, it terminates abnormally)
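As an informal illustration of the entry counting described in items 2, 5, and 7 above (the functions are hypothetical sketches; the F-series model keys are omitted for brevity), the limits can be computed as follows.

MAX_ENTRIES = {"VSP G130": 18, "VSP G350/G370": 18, "VSP G700": 36,
               "VSP G900": 72, "VSP E990": 72}

def entries_per_parity_group(model: str, pg_capacity_tb: float) -> int:
    """Item 2: entries consumed by one parity group, based on its drive capacity."""
    threshold_tb = 48 if model == "VSP E990" else 32
    return 1 if pg_capacity_tb <= threshold_tb else 2

def additional_entries_available(model: str, in_progress: int, blocked: int) -> int:
    """Item 7: maximum entries per model - X (in progress) - Y (blocked during Quick Format)."""
    return MAX_ENTRIES[model] - in_progress - blocked

print(entries_per_parity_group("VSP G900", 40))        # greater than 32 TB -> 2 entries
print(additional_entries_available("VSP G900", 10, 2)) # 72 - 10 - 2 = 60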

THEORY02-03-110
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-120

2.3.2.2 Volume Data Assurance during Quick Formatting


The Quick Formatting management table is kept in SM. This model prevents the management table from
being lost (volatilized) by backing up the SM to an SSD, and thus assures the data quality during Quick Formatting.

THEORY02-03-120
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-03-130

2.3.2.3 Quick Formatting Time


Quick Format is executed in the background while I/O from and to the host is performed.
Therefore, the Quick Format time may vary significantly depending on the number of I/Os from and to the
host or other conditions.
You can also calculate a rough estimation of the Quick Format time using the following formula.

Rough estimation of Quick Format time


• When executing Quick Format in the entire area of a parity group
Format time = Format standard time (see Table 2-21)
× Format multiplying factor (see Table 2-22) ×↑ (The number of parity groups ÷ 8) ↑
• When executing Quick Format on some LDEVs in a parity group
Format time = Format standard time (see Table 2-21)
× Format multiplying factor (see Table 2-22) ×↑ (The number of parity groups ÷ 8) ↑
× (Capacity of LDEVs on which Quick Format is executed ÷ Capacity of a parity group)
NOTE: ↑ indicates roundup.
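As a worked illustration of the two formulas above, the sketch below multiplies the standard time from Table 2-21 by the factor from Table 2-22 and by the rounded-up parity-group term. The function name and argument style are assumptions for illustration only; the standard times and factors must be taken from the tables.

    import math

    def quick_format_time_hours(standard_time_h, raid_level, host_io, num_parity_groups,
                                ldev_capacity=None, pg_capacity=None):
        """Rough Quick Format time estimate following the formulas in this section.

        standard_time_h     : formatting time for the drive type (Table 2-21)
        raid_level, host_io : select the multiplying factor (Table 2-22)
        ldev_capacity/pg_capacity: give both to estimate formatting of only some LDEVs
        """
        factors = {("RAID1", False): 0.5, ("RAID1", True): 2.5,
                   ("RAID5/6", False): 1.0, ("RAID5/6", True): 5.0}
        time_h = (standard_time_h * factors[(raid_level, host_io)]
                  * math.ceil(num_parity_groups / 8))           # roundup term
        if ldev_capacity is not None and pg_capacity is not None:
            time_h *= ldev_capacity / pg_capacity               # partial-LDEV case
        return time_h

    # Example: 16 RAID5 parity groups of 1.9 TB SAS SSDs (8 h standard time) with host I/O:
    # 8 h x 5.0 x ceil(16 / 8) = 80 h.
    print(quick_format_time_hours(8, "RAID5/6", True, 16))      # -> 80.0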
Table 2-21 shows the Quick Format time when no I/O is performed in the entire area of a parity group.

Table 2-21 Quick Format Time


Drive type Formatting time
6R0H9M/6R0HLM (7.2 krpm) 78 h
10RH9M/10RHLM (7.2 krpm) 130 h
14RH9M/14RHLM (7.2 krpm) 184 h
600JCMC (10 krpm) 8h
1R2JCMC/1R2J7MC (10 krpm) 15 h
2R4JGM/2R4J8M (10 krpm) 31 h
480MGM (SAS SSD) 2h
960MGM (SAS SSD) 4h
1R9MGM/1T9MGM (SAS SSD) 8h
3R8MGM (SAS SSD) 17 h
7R6MGM (SAS SSD) 34 h
15RMGM (SAS SSD) 67 h
30RMGM (SAS SSD) 134 h
1R9RVM (NVMe SSD) 8h
3R8RVM (NVMe SSD) 17 h
7R6RVM (NVMe SSD) 34 h
15RRVM (NVMe SSD) 67 h
3R2FN (FMD) 16 h
7R0FP (FMD) 32 h
14RFP (FMD) 64 h


Table 2-22 Format Multiplying Factor


RAID level I/O Multiplying factor
RAID1 No 0.5
Yes 2.5
RAID5, RAID6 No 1.0
Yes 5.0

• When Quick Format is executed to parity groups with different Drive capacities at the same time, calculate
the time based on the parity group with the largest capacity.


2.3.2.4 Performance during Quick Format


Quick Format performs the formatting in the background while I/O from the host is being executed.
Therefore, it may affect host performance.
The following table shows an approximate degree of the performance impact.
(This is only a rough guideline, and it may change depending on the conditions.)

Table 2-23 Performance during Quick Format


I/O type: Performance during Quick Format (normal condition = 100%)
• Random read: 80%
• Random write to the unformatted area: 20%
• Random write to the formatted area: 60%
• Sequential read: 90%
• Sequential write: 90%


2.3.2.5 Combination with Other Maintenance

Table 2-24 Combination with Other Maintenance


Item No. / Maintenance operation / Operation during Quick Format
1. Drive copy / correction copy: Processing is possible in the same way as for normal volumes, but the unformatted area is skipped.
2. LDEV Format (high-speed / low-speed): LDEV Format is executable for volumes on which Quick Format is not executed.
3. Volume maintenance block: Volumes during Quick Format can be blocked when instructed from Web Console or CLI.
4. Volume forcible restore: If forcible restore is executed after the maintenance block, the volume returns to Quick Formatting.
5. Verify consistency check: Possible. However, the Verify consistency check for the unformatted area is skipped.
6. PDEV replacement: Possible as usual.
7. PK replacement: Possible as usual.


2.3.2.6 SIM Output When Quick Format Completed


When Quick Format is completed, SIM = 0x410100 is output.
However, the SIM is not output when Quick Format was performed by Command Control Interface (CCI).


2.4 Ownership Right


2.4.1 Requirements Definition and Sorting Out Issues

Table 2-25 Confirmation and Definitions of Requirement and issues


# / Requirement / Case
1. Maximize system performance by using MPU effectively: Initial set up (Auto-Define-Configuration); ownership management resources are added.
2. Maximize system performance by using MPU effectively: At performance tuning.
3. Troubleshoot in the case of problems related to ownership: Troubleshoot.
4. Confirm resources allocated to each MPU: Ownership management resources are added; at performance tuning; troubleshoot.
5. Maintain performance for resources allocated to specific MPU: Maintain performance for resources allocated to specific MPU.

2.4.1.1 Requirement #1
Requirement
Maximize system performance by using MPU effectively.

Issue
Way to distribute resources to balance load of each MPU.

How to realize
(1) User directly allocates resources to each MPU.
(2) User does not allocate resources. Resources are allocated to each MPU automatically.

Case
(A) At the time of initial construction (Auto-Define-Configuration)
Target resource : LDEV
Setting IF : Maintenance PC

(B) Ownership management resources are added.


Target resources: LDEV / External VOL / JNLG
Setting IF: Maintenance PC / Storage Navigator / CLI / RMLib


2.4.1.2 Requirement #2
Requirement
Maximize system performance by using MPU effectively.

Issue
Way to move resources to balance load of each MPU.

How to realize
The user directly requests moving the resources.

Case
Performance tuning
Target resources: LDEV / External VOL / JNLG
Setting IF: Storage Navigator / CLI / RMLib

2.4.1.3 Requirement #3
Requirement
Troubleshooting in the case of problems related to ownership.

Issue
Way to move resources required for solving problems.

How to realize
Maintenance personnel directly request moving the resources.

Case
Troubleshooting
Target resources: LDEV / External VOL / JNLG
Setting IF: Storage Navigator / CLI / RMLib


2.4.1.4 Requirement #4
Requirement
Confirm resources allocated to each MPU.

Issue
Way to reference resources allocated to each MPU.

How to realize
The user directly requests referencing the resources.

Case
(A) Before ownership management resources are added.
Target resources : LDEV / External VOL / JNLG
Referring IF : Storage Navigator / CLI / Report (XPDT) / RMLib

(B) Performance tuning


Target resources : LDEV / External VOL / JNLG
Referring IF : Storage Navigator / CLI / Report (XPDT) / RMLib

(C) Troubleshooting
Target resources: LDEV / External VOL / JNLG
Referring IF: Storage Navigator / CLI / Report (XPDT) / RMLib


2.4.1.5 Requirement #5
Requirement
Maintain performance for resources allocated to specific MPU.

Issue
Way to move resources allocated to each MPU automatically, and way to prevent movement of resources during addition of an MPU.

How to realize
Resources are NOT allocated or moved automatically to the MPU that the user specified.

Case
(A) When adding ownership management resources, preventing allocation of resources to the Auto
Allocation Disable MPU.

Figure 2-11 Requirement #5 (A)
(Diagram: resources to be added are allocated only to MPUs for which Auto Allocation is enabled; no resources are allocated to the Auto Allocation Disable MPU.)


2.4.1.6 Process Flow

Figure 2-12 Process flow
Introduction of the Storage System:
• Initial set up (Auto-Define-Configuration): allocation of LDEV ownership
• Check of the allocation for each ownership
• Addition of LDEV(s) (addition of ECC/CV operation): allocation of LDEV ownership
• External VOL ADD LU: allocation of External VOL ownership
• Definition of JNLG: allocation of JNL ownership
• Configuration change (pair operation et cetera): movement of each ownership
• Setting/release of the Auto Allocation Disable MPU
• Performance tuning: allocation of each ownership


2.4.2 Resource Allocation Policy


1. Both user-specific and automation allocation are based on a common policy: allocate resources to each MPU equally.

Figure 2-13 Resource allocation Policy (1)
(Diagram: resources are distributed so that each MPU holds an equal number of resources.)

2. Additionally, user-specific allocation can consider the weight of each device.

Figure 2-14 Resource allocation Policy (2)
(Diagram: when weights are considered, the totals per MPU are balanced by weight rather than by count.)
But, automation allocation cannot consider the weight of each device.


2.4.2.1 Automation Allocation


Resources are allocated to each MPU equally and independently for each resource type.

Table 2-26 Automation allocation


Ownership Device type Unit Leveling
LDEV SAS ECC Gr. Number of LDEVs
SSD/FMD LDEV Number of LDEVs
DP VOL LDEV Number of LDEVs
External VOL ̶ Ext. VOL Number of Ext. VOLs
JNLG ̶ JNLG Number of JNLGs.
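A minimal sketch of the leveling rule in Table 2-26, assuming resources are assigned one at a time to the MPU that currently owns the fewest units of that resource type. This only illustrates the policy; it is not the firmware algorithm, and the data structures are assumptions.

    from collections import defaultdict

    def assign_resource(owned_counts, mpus, resource_type):
        """Pick the MPU with the fewest resources of this type (leveling by count)."""
        return min(mpus, key=lambda mpu: owned_counts[(mpu, resource_type)])

    owned_counts = defaultdict(int)
    mpus = ["MPU-10", "MPU-20"]

    # Distribute 5 external volumes: they alternate between the two MPUs,
    # matching the "number of Ext. VOLs" leveling unit in Table 2-26.
    for ext_vol in range(5):
        target = assign_resource(owned_counts, mpus, "ExtVOL")
        owned_counts[(target, "ExtVOL")] += 1
        print(f"E-vol #{ext_vol} -> {target}")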

Figure 2-15 Automation allocation
(Diagram: SAS, SSD/FMD, and DP VOL resources are each leveled across the MPUs independently.)


1. Automation allocation (SAS)


Unit : ECC Gr.
Leveling : Number of LDEVs

Figure 2-16 Automation allocation (SAS)
(Diagram: ECC groups are distributed so that the total number of LDEVs per MPU is even. Example: ECC Gr.1-1 (3D+1P, 5 LDEVs) and ECC Gr.1-5 (7D+1P, 6 LDEVs) on one MPU, ECC Gr.1-3 (7D+1P, 6 LDEVs) and ECC Gr.1-6 (3D+1P, 5 LDEVs) on the other, giving a total of 11 LDEVs each.)

2. Automation allocation (SSD, FMD/DP VOL)


Unit : ECC Gr.
Leveling : Number of LDEVs

Figure 2-17 Automation allocation (SSD, FMD/DP VOL)
(Diagram: SSD/FMD LDEVs and DP VOLs are distributed one by one so that the number of LDEVs per MPU stays even.)


3. Automation allocation (Ext. VOL)


Unit : Ext. VOL
Leveling : Number of Ext. VOLs (not Ext. LDEVs)

Figure 2-18 Automation allocation (Ext. VOL)
(Diagram: external volumes are leveled by the number of Ext. VOLs per MPU, regardless of how many LDEVs each Ext. VOL contains.)

4. Automation allocation (JNLG)


Unit : JNLG
Leveling : Number of JNLGs (not JNL VOLs)

Figure 2-19 Automation allocation (JNLG)
(Diagram: journal groups are leveled by the number of JNLGs per MPU, regardless of the number of JNL VOLs in each JNLG.)


2.4.3 MPU Block

Figure 2-20 MPU block
(Diagram: the host issues I/O for LDEV #0 through the FE IFPK port to the MPU that holds the LDEV #0 ownership; the owner MPU processes it using its PM, and the LDEV #0 control information is held in SM on both controllers.)


2.4.3.1 MPU Block for Maintenance


Step1. Make the ownership the moving transient state.

Figure 2-21 MPU block for maintenance (1)
(Diagram: the LDEV #0 ownership on the source MPU is put into the moving transient state; I/O is still processed by the source MPU.)


Step2. Switch MPU, to which I/O is issued, to the target MPU (to which the ownership is moved).

Figure 2-22 MPU block for maintenance (2)
(Diagram: the FE IFPK now issues I/O for LDEV #0 to the target MPU to which the ownership is being moved.)


Step3. Complete the ongoing processing in the source MP whose ownership is moved.
(New processing is not performed in the source MP.)
Figure 2-23 MPU block for maintenance (3)
<Target MPU>
• I/O is issued to the target MPU, but processing waits until the ownership is completely moved.
<Source MPU>
• Monitor until all ongoing processing is completed. After it is completed, go on to Step 4.
• When a Time Out is detected, terminate the ongoing processing forcibly.

Step4. Disable PM information in the source MP whose ownership is moved.


Figure 2-24 MPU block for maintenance (4)
<Source MPU>
• When disabling the PM information, only the representative information is rewritten, so the processing time is less than 1 ms.


Step5. Moving ownership is completed and the processing starts in the target MPU.
Figure 2-25 MPU block for maintenance (5)
<Target MPU>
• Immediately after processing is started, SM is accessed, so access performance to the control information is degraded compared to before moving the ownership.
• Along with the progress of the PM information collection, access performance to the control information is improved.

Step6. Perform Step1. to Step5. for all resources under the MPU blocked and after they are completed,
block MPU.
Figure 2-26 MPU block for maintenance (6)
<Moving ownership>
• Move resources that are related to ShadowImage, UR, and TI synchronously.
• If they are all moved at a time, performance would be affected significantly, so move them in phases as long as the maintenance time is permissible.


2.4.3.2 MPU Block due to Failure


Step1. Detect that all MPs in the MPU are blocked and decide MPU that takes over the ownership.

Figure 2-27 MPU blocked due to failure (1)
(Diagram: all MPs in the failed MPU are detected as blocked, and the MPU that takes over the LDEV #0 ownership is decided.)


Step2. Switch MPU, to which I/O is issued, to MPU that takes over the ownership.

Figure 2-28 MPU blocked due to failure (2)
(Diagram: the FE IFPK switches the destination of I/O for LDEV #0 to the MPU that takes over the ownership.)


Step3. Perform WCHK1 processing at the initiative of MPU that takes over the ownership.
Figure 2-29 MPU blocked due to failure (3)
<WCHK1 processing>
• Issue a request to all MPs to cancel the processing requests received from the WCHK1 MPU.
• Issue an abort instruction for data transfers started from the WCHK1 MPU.
• The WCHK1 MPU performs post-processing (JOB FRR) of ongoing JOBs.

Step4. WCHK1 processing is completed, and the processing starts in the target MPU.
Figure 2-30 MPU blocked due to failure (4)
<Target MPU>
• Immediately after processing is started, SM is accessed, so access performance to the control information is degraded compared to before moving the ownership.
• As the process of importing information into PM progresses, access performance to the control information is improved.


2.5 Cache Architecture


2.5.1 Physical Addition of Controller/DIMM
Figure 2-31 Physical addition of Controller/DIMM
• VSP G370, VSP G350, VSP G130 (*1) and VSP F370, VSP F350: Controller 1 and Controller 2 each have one module group (MG#0) of two DIMMs.
*1: For VSP G130, one DIMM can be installed in each controller.
• VSP G900, VSP G700, VSP F900, VSP F700 and VSP E990: Controller 1 and Controller 2 each have two module groups (MG#0 and MG#1) of four DIMMs each; the addition unit is DIMM 4 × 2 = 8.


2.5.2 Maintenance/Failure Blockade Specification


2.5.2.1 Blockade Unit
Blockade at the time of maintenance or a failure is performed per side (surface unit); blockade management per MG unit is not performed.

Figure 2-32 Blockade Unit
(Diagram, all models: the DIMMs (MG#0 and MG#1) of each controller form one side; the power boundary lies between Controller 1 and Controller 2, and blockade is managed per side. MG: Module Group)


2.5.3 Cache Control


2.5.3.1 Cache Directory PM Read and PM/SM Write

Figure 2-33 Cache Directory PM read and PM/SM write
(Diagram: each controller's MPU holds the cache directory (SGCBs and LRU queue) in PM; a copy of the cache directory is written to SM on both controllers, and the user data is held in CM. SGCB: SeGment Control Block)


2.5.3.2 Cache Segment Control Image

Figure 2-34 Cache Segment Control Image
(Diagram: the SGCBs in the PM of each controller's MPU and the corresponding cache segments in each controller's CM.)


2.5.3.3 Initial Setting (Cache Volatilization)

Figure 2-35 Initial Setting (Cache Volatilization)
(Diagram: the SGCB entries in PM and the cache segments in CM of each controller immediately after the initial setting.)


2.5.3.4 Ownership Right Movement

Figure 2-36 Ownership movement (1)
(Diagram: the cache directories for C-VDEV#0 to C-VDEV#2 and the SGCBs in the PM of each MPU and in each CM before the ownership movement.)


Figure 2-37 Ownership movement (2)
(Diagram: the cache directories and SGCBs after the ownership movement.)


2.5.3.5 Cache Load Balance

Figure 2-38 Cache Load balance (1)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states on a high-workload MPU and a low-workload MPU.)


Figure 2-39 Cache Load balance (2)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states on the high-workload MPU of Controller 1 and the low-workload MPU of Controller 2.)


Figure 2-40 Cache Load balance (3)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states after the cache load is rebalanced between the high-workload and low-workload MPUs.)


2.5.3.6 Controller Replacement


Figure 2-41 Controller Replacement (1)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states in the PM and CM of Controller 1 and Controller 2 at this stage of the replacement.)


Figure 2-42 Controller Replacement (2)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states in the PM and CM of Controller 1 and Controller 2 at this stage of the replacement.)


Figure 2-43 Controller Replacement (3)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states in the PM and CM of Controller 1 and Controller 2 at this stage of the replacement.)


Figure 2-44 Controller Replacement (4)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states in the PM and CM of Controller 1 and Controller 2 at this stage of the replacement.)


Figure 2-45 Controller Replacement (5)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states in the PM and CM of Controller 1 and Controller 2 at this stage of the replacement.)


Figure 2-46 Controller Replacement (6)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states in the PM and CM of Controller 1 and Controller 2 at this stage of the replacement.)


Figure 2-47 Controller Replacement (7)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states in the PM and CM of Controller 1 and Controller 2 at this stage of the replacement.)


Figure 2-48 Controller Replacement (8)
(Diagram: Dirty (D), Clean (C), and Free (F) segment states in the PM and CM of Controller 1 and Controller 2 after the replacement completes.)


2.5.3.7 Queue/Counter Control

Figure 2-49 Queue/Counter Control (1)
(Diagram: each MPU has a free queue, an MPU free bitmap, a dirty queue, and Free, Clean, and ALL counters per CLPR (CLPR0, CLPR1); an unallocated bitmap is shared above the MPUs.)


Figure 2-50 Queue/Counter Control (2)
(Diagram: dynamic Cache assignment from the unallocated bitmap, discarding of Cache data, data access, and destage/staging update the Free, Clean, and ALL counters of each CLPR on each MPU.)


2.6 CVS Option Function


2.6.1 Customized Volume Size (CVS) Option
2.6.1.1 Overview
When two or more files to which I/Os are frequently applied exist in the same volume, contention for the logical volume occurs. If this occurs, the files mentioned above are stored separately in different logical volumes so that contention for access is avoided. (Otherwise, a means of preventing the I/Os from being generated is required.)
However, the work of adjusting the file arrangement in consideration of the access characteristics of each file is a burden on users of the DKC and is not welcomed by them.
To solve this problem, the Customized Volume Size (CVS) option is provided. (Hereinafter, it is abbreviated to CVS.)
CVS provides a function for freely defining the logical volume size.
By doing this, even in a Storage System with the same capacity, the number of volumes can be increased easily. As a result, a file with a high I/O frequency can easily be allocated to an independent volume. That is, the trouble of considering which files to combine in a volume can be saved.


2.6.1.2 Features
• The capacity of the ECC group can be fully used.

Figure 2-51 Overview of CVS Option function
(Diagram: a RAID5 (3D+1P) ECC group of OPEN-V base volumes (16 LDEVs) is divided into custom volumes CV #1 to CV #5 of user-defined sizes, mapped onto the LDEVs/PDEVs of the ECC group; unused area remains in the ECC group. The host accesses the CVs through its HBAs.)


2.6.1.3 Specifications
The CVS option consists of a function to provide variable capacity volumes.

1. Function to provide variable capacity volumes


This function can create the capacity volume as required by the users.
You can set the data by Mbytes or Logical Blocks.

Table 2-27 CVS Specifications


Parameter Content
Track format OPEN-V
Emulation type OPEN-V
Maximum number of LDEVs 2,048 for one parity group
(Base volume and CVS) per VDEV
Maximum number of LDEVs VSP G130: 2,048
(Base volume and CVS) per Storage System VSP G350, VSP F350: 16,384
VSP G370, VSP F370: 32,768
VSP G700, VSP F700: 49,152
VSP G900, VSP F900: 65,280
Size increment for CV 1 MB
Disk location for CVS Volume Anywhere


2.6.1.4 Maintenance Functions


A feature of the maintenance functions of the CVS option is that they allow execution not only of the conventional maintenance operations instructed from the Maintenance PC but also of maintenance operations instructed from the SVP. (Refer to Item No. 2 to 5 in Table 2-28.)
Unlike the conventional LDEV addition or reduction, no operation on the ECC group is necessary, so the volumes can be operated from the SVP. In the case of a configuration that does not contain the SVP, the maintenance can be executed from Command Control Interface.

Table 2-28 Maintenance Function List


Item No. Maintenance function CE User Remarks
1 Concurrent addition or deletion of  — Same as the conventional addition or
CVs at the time of addition or removal removal of LDEVs. (*2)
of ECC group
2 Addition of CVs only   Addition of CVs in the free area. (*1)
3 Conversion of normal volumes to CV   (*1), (*2)
4 Conversion of CV to normal volumes   (*1), (*2)
5 Deletion of CVs only   No removal of ECC group is
involved. (*2)
*1: LDEV format operates as an extension of maintenance.
Since the deleted volume data is lost, the customer's approval is required for execution.
*2: The pending data on the Cache is also discarded together with the data on the volume to be deleted.
As with *1, the customer's approval is required for execution.

Figure 2-52 Maintenance Execution Route when CVS Is Used
(Diagram: maintenance routes to the DKCs. The CE uses the Maintenance PC (MPC) over the LAN, from which all DKC maintenance functions in the table can be executed; the user uses the SVP, from which the maintenance functions of Item No. 2, 3, 4, and 5 in the table can be executed (*1); the TSD connects over a TEL line.)
*1: Operated from Command Control Interface in the case of the configuration that does not contain the SVP.


2.7 PDEV Erase


2.7.1 Overview
When the specified system option (*1) is set, the DKC deletes the data on the PDEV automatically in the cases shown in Table 2-30.
When the SOM for Media Sanitization is set to on, Media Sanitization is prioritized.
*1: Please contact T.S.D.

Table 2-29 Overview


No. Item Content
1 Maintenance PC Operation Select system option from “Install”.
2 Status DKC only reports on SIM of starting the function. The progress
status is not displayed.
3 Result DKC reports on SIM of normality or abnormal complete.
4 Recovery procedure at failure Re-Erase of PDEV that terminates abnormally is impossible.
Please exchange it for new service parts.
5 P/S off or B/K off The Erase processing fails. It doesn’t restart after P/S on.
6 How to stop the “PDEV Erase” Please execute Replace from the Maintenance screen of the
Maintenance PC operation, and exchange PDEV that Erase wants
to stop for new service parts.
7 Data Erase Pattern Data Erase Pattern is zero data.

Table 2-30 PDEV Erase execution case


No. Execution case
1 PDEV is blocked according to Drive Copy completion.


2.7.2 Rough Estimate of Erase Time


The Erase time is determined by the capacity and the rotational speed of the PDEV.
Typical times are shown below. (The times are guidelines, and the operation might take until the TOV.)

Table 2-31 PDEV Erase completion expectation time (1/2)


Type of PDEV / 480 GB / 600 GB / 960 GB / 1.2 TB / 1.9 TB / 2.4 TB / 3.5 TB (3.2 TiB) / 3.8 TB
SAS (7.2 krpm) − − − − − − − −
SAS (10 krpm) − 70 M − 140 M − 190 M − −
Flash Drive (SAS SSD) 1 to 10 M − 1 to 20 M − 1 to 40 M − − 1 to 90 M
Flash Module Drive − − − − − − 1M −
NVMe SSD − − − − 2M − − 4M

Table 2-32 PDEV Erase completion expectation time (2/2)


Type of PDEV 6.0 TB 7.0 TB 7.6 TB 10 TB 14 TB 15 TB 30 TB
SAS (7.2 krpm) 590 M − − 880 M 1145 M − −
SAS (10 krpm) − − − − − − −
Flash Drive (SAS SSD) − − 1 to 140 M − − 1 to 220 M 1 to 490 M
Flash Module Drive − 1M − − 1M − −
NVMe SSD − − 8M − − 16 M −

Table 2-33 PDEV Erase TOV (1/2)


Type of PDEV / 480 GB / 600 GB / 960 GB / 1.2 TB / 1.9 TB / 2.4 TB / 3.5 TB (3.2 TiB) / 3.8 TB
SAS (7.2 krpm) − − − − − − − −
SAS (10 krpm) − 150 M − 255 M − 330 M − −
Flash Drive (SAS SSD) 60 M − 75 M − 105 M − − 180 M
Flash Module Drive − − − − − − 9M −
NVMe SSD − − − − 34 M − − 38 M

Table 2-34 PDEV Erase TOV (2/2)


Type of PDEV 6.0 TB 7.0 TB 7.6 TB 10 TB 14 TB 15 TB 30 TB
SAS (7.2 krpm) 930 M − − 1365 M 1765 M − −
SAS (10 krpm) − − − − − − −
Flash Drive (SAS SSD) − − 255 M − − 375 M 780 M
Flash Module Drive − 9M − − 9M − −
NVMe SSD − − 46 M − − 62 M −


2.7.3 Influence in Combination with Other Maintenance Operation


The influence on the maintenance operation during executing PDEV Erase becomes as follows.

Table 2-35 PDEV Replace


No. Object part Influence Countermeasure
1 Replace from Maintenance PDEV Erase terminates —
PC as for PDEV that does abnormally.
PDEV Erase.
2 Replace from Maintenance Nothing —
PC as for PDEV that does
not PDEV Erase.
3 User Replace Please do not execute the user Please execute it after completing
replacement during PDEV Erase. PDEV Erase.

Table 2-36 DKB Replace


No. Object part Influence Countermeasure
1 DKB connected with [SVP4198W] may be displayed. <SIM4c2xxx/4c3xxx about this
PDEV that is executed The DKB replacement might PDEV is not reported>
PDEV Erase fail by [ONL2412E] when the Please replace PDEV (to which Erase
password is entered. (*2) is done) to new service parts. (*1)
The DKB replacement might fail by
[ONL2412E] when the password is
entered. (*2)
2 DKB other than the above Nothing Nothing

Table 2-37 I/F Board Replace/I/F Board Removal


No. Object part Influence Countermeasure
1 I/F Board that is executed [SVP4198W] may be displayed. <SIM4c2xxx/4c3xxx about this
PDEV Erase The I/F Board replacement might PDEV is not reported>
fail by [ONL2412E] when the Please replace PDEV (to which Erase
password is entered. (*2) is done) to new service parts. (*1)
The I/F Board replacement might fail
by [ONL2412E] when the password
is entered. (*2)
2 I/F Board other than the Nothing Nothing
above


Table 2-38 ENC Replace


No. Object part Influence Countermeasure
1 ENC connected with DKB [SVP4198W] may be displayed. <SIM4c2xxx/4c3xxx about this
connected with HDD that The ENC replacement might fail PDEV is not reported>
does PDEV Erase by [ONL2788E] [ONL3395E] Please replace PDEV (to which Erase
when the password is entered. is done) to new service parts. (*1)
(*2) The ENC replacement might fail by
[ONL2788E][ONL3395E] when the
password is entered. (*2)
2 ENC other than the above Nothing Nothing

Table 2-39 PDEV Addition/Removal


No. Object part Influence Countermeasure
1 ANY Addition/Removal might fail by Please wait for the Erase completion
[SVP739W]. or replace PDEV (to which Erase is
done) to new service parts. (*1)

Table 2-40 Exchanging microcode


No. Object part Influence Countermeasure
1 DKC MAIN [SVP0732W] may be displayed. Please wait for the Erase completion
Microcode exchanging might or replace PDEV (to which Erase is
fail by [SMT2433E], when the done) to new service parts. (*1)
password is entered. (*2)
2 HDD [SVP0732W] may be displayed. Please wait for the Erase completion
Microcode exchanging might or replace PDEV (to which Erase is
fail by [SMT2433E], when the done) to new service parts. (*1)
password is entered. (*2)

Table 2-41 LDEV Format


No. Object part Influence Countermeasure
1 ANY There is a possibility that PATH- Please wait for the Erase completion
Inline fails. There is a possibility or replace PDEV (to which Erase is
that the cable connection cannot done) to new service parts. (*1)
be checked when the password is
entered.


Table 2-42 PATH-Inline


No. Object part Influence Countermeasure
1 DKB connected with There is a possibility of detecting Please wait for the Erase completion
PDEV that is executed the trouble by PATH-Inline. or replace PDEV (to which Erase is
PDEV Erase done) to new service parts. (*1)

Table 2-43 PS/OFF


No. Object part Influence Countermeasure
1 ANY PDEV Erase terminates <SIM4c2xxx/4c3xxx about this
abnormally. PDEV is not reported>
Please wait for the Erase completion
or replace PDEV (to which Erase is
done) to new service parts. (*1)

*1: When PDEV that stops PDEV Erase is installed into DKC again, it might fail by Spin-up failure.
*2: It is not likely to be able to maintain it when failing because of concerned MSG until PDEV Erase
is completed or terminates abnormally.


2.7.4 Notes of Various Failures


Notes on failures that occur during PDEV Erase are as follows.

No. Failure Object part Notice Countermeasure


1 B/K OFF/ Drive BOX There is a possibility that PDEV Erase Please replace PDEV of the
Black Out (DB) fails due to the failure. Erase object to new service
parts after P/S on.
2 DKC Because monitor JOB of Erase Please replace PDEV of the
disappears, it is not possible to report on Erase object to new service
normality/abnormal termination SIM of parts after P/S on.
Erase.
3 MP failure I/F Board [E/C 9470 is reported at the MP failure] Please replace PDEV of the
JOB of the Erase monitor is reported on Erase object to new service
E/C 9470 when Abort is done due to the parts after the recovery of MP
MP failure and completes processing. In failure.
this case, it is not possible to report on
normality/abnormal termination SIM of
Erase.
4 [E/C 9470 is not reported at the MP Please replace PDEV to new
failure] service parts after judging the
It becomes impossible to communicate Erase success or failure after
with the Controller who is doing Erase it waits while TOV of PDEV
due to the MP failure. In this case, it Erase after the recovery of MP.
becomes TOV of monitor JOB with E/C
9450, and reports abnormal SIM.


2.8 Cache Management


Since the DKC requires no through operation, its Cache system is implemented with two memory areas called Cache A and Cache B so that write data can be duplexed.
To prevent data loss due to power failures, Cache is made non-volatile by backing it up to the SSD mounted on the Cache PCB. This dispenses with the need for the conventional NVS.
The minimum unit of Cache is the segment. Cache is destaged in segment units.
Depending on the emulation Disk type, one or four segments make up one slot.
The read and write slots are always controlled in pairs.
Cache data is enqueued and dequeued usually in slot units.
In practice, the segments of the same slot are not always stored in a contiguous area in Cache, but may be stored in discrete areas. These segments are controlled using CACHE-SLCB and CACHE-SGCB so that the segments belonging to the same slot are seemingly stored in a contiguous area in Cache.

Figure 2-53 Cache Data Structure
(Diagram: records R0, R1, ... RL with HA/C/K/D fields map onto cache slots; a cache slot consists of segments, where 32 blocks = 1 SEG (64 KB), 4 subblocks = 1 block, and the block size is 2 KB.)

For increased directory search efficiency, a single virtual device (VDEV) is divided into 16-slot groups which
are controlled using VDEV-GRPP and CACHE-GRPT.

1 Cache segment = 32 blocks = 128 subblocks = 64 KB
1 slot = 1 stripe = 4 segments = 256 KB
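These size relations can be checked with simple arithmetic; the constants below are derived only from the relations stated above.

    SEGMENT_BYTES = 64 * 1024          # 1 Cache segment = 64 KB
    BLOCKS_PER_SEGMENT = 32            # 32 blocks = 1 segment
    SUBBLOCKS_PER_BLOCK = 4            # 4 subblocks = 1 block
    SEGMENTS_PER_SLOT = 4              # 1 slot = 1 stripe = 4 segments

    BLOCK_BYTES = SEGMENT_BYTES // BLOCKS_PER_SEGMENT                 # 2 KB per block
    SUBBLOCKS_PER_SEGMENT = BLOCKS_PER_SEGMENT * SUBBLOCKS_PER_BLOCK  # 128 subblocks
    SLOT_BYTES = SEGMENTS_PER_SLOT * SEGMENT_BYTES                    # 256 KB per slot

    assert BLOCK_BYTES == 2048 and SUBBLOCKS_PER_SEGMENT == 128 and SLOT_BYTES == 256 * 1024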

The directories VDEV-GRPP, CACHE-GRPT, CACHE-SLCB, and CACHE-SGCB are used to identify the
Cache hit and miss conditions. These control tables are stored in the shared memory.


In addition to the Cache hit and miss control, the shared memory is used to classify and control the data in
Cache according to its attributes. Queues are something like boxes that are used to classify data according to
its attributes.

Basically, queues are controlled in slot units (some queues are controlled in segment units). Like SLCB-
SGCB, queues are controlled using a queue control table so that queue data of the seemingly same attribute
can be controlled as a single data group. These control tables are briefly described below.


1. Cache control tables (directories)

Figure 2-54
(Diagram: LDEV-DIR points to VDEV-GRPP, which points to GRPT entries for 16-slot groups; each GRPT entry points to an SLCB, whose chained SGCBs (RSEG1ADR to RSEG4ADR, WSEG1ADR to WSEG4ADR) give the read and write segment addresses in Cache.)

LDEV-DIR (Logical DEV-directory):


Contains the shared memory addresses of VDEV-GRPPs for an LDEV. LDEV-DIR is located in
the local memory in the CHB.
VDEV-GRPP (Virtual DEV-group Pointer):
Contains the shared memory addresses of the GRPTs associated with the group numbers in the
VDEV.
GRPT (Group Table):
A table that contains the shared memory address of the SLCBs for 16 slots in the group. Slots are
grouped to facilitate slot search and to reduce the space for the directory area.
SLCB (Slot Control Block):
Contains the shared memory addresses of the starting and completing SGCBs in the slot. One
or more SGCBs are chained. The SLCB also stores slot status and points to the queue that is
connected to the slot. The status transitions of clean and dirty queues occur in slot units. The
processing tasks reserve and release Cache areas in this unit.
SGCB (Segment Control Block):
Contains the control information about a Cache segment. It contains the Cache address of the
segment. It is used to control the staged subblock bit map, dirty subblock bitmap and other
information. The status transitions of only free queues occur in segment units.

2. Cache control table access method (hit/miss identification procedure)

Figure 2-55 Overview of Cache Control Table Access
(Diagram: the search path LDEV-DIR to VDEV-GRPP to GRPT (slot group) to SLCB to SGCB, corresponding to steps (1) to (4) below.)

(1) The current VDEV-GRPP is referenced through the LDEV-DIR to determine the hit/miss condition
of the VDEV-groups.
(2) If a VDEV-group hits, CACHE-GRPT is referenced to determine the hit/miss condition of the
slots.
(3) If a slot hits, CACHE-SLCB is referenced to determine the hit/miss condition of the segments.
(4) If a segment hits, CACHE-SGCB is referenced to access the data in Cache.

If a miss occurs at any point during the searches (1) through (4), the target data is treated as a Cache miss.
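A schematic of the four-level search in steps (1) through (4), written as a plain dictionary walk. The layout is a simplification for illustration only; the real directories reside in shared memory and are not Python objects.

    def lookup_cache(ldev_dir, vdev_no, group_no, slot_no, segment_no):
        """Walk LDEV-DIR -> VDEV-GRPP -> GRPT -> SLCB -> SGCB; None means a Cache miss."""
        grpp = ldev_dir.get(vdev_no)              # (1) VDEV-group hit/miss
        if grpp is None:
            return None
        grpt = grpp.get(group_no)                 # (2) slot-group hit/miss
        if grpt is None:
            return None
        slcb = grpt.get(slot_no)                  # (3) slot hit/miss
        if slcb is None:
            return None
        return slcb.get(segment_no)               # (4) SGCB gives the Cache address, or miss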

Definition of VDEV number


Since the host processor recognizes addresses only by LDEV, it is unaware of the device address of the
parity device. Accordingly, the RAID system is provided with a VDEV address which identifies the
parity device associated with an LDEV. Since VDEVs are used to control data devices and parity devices
systematically, their address can be computed using the following formulas:
Data VDEV number = LDEV number
Parity VDEV number = 1024 + LDEV number

From the above formulas, the VDEV number ranges from 0 to 2047.
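The two numbering formulas translate directly into code; the following is only a restatement of the formulas above.

    def data_vdev_number(ldev_number):
        return ldev_number                  # Data VDEV number = LDEV number

    def parity_vdev_number(ldev_number):
        return 1024 + ldev_number           # Parity VDEV number = 1024 + LDEV number

    # The text states the VDEV number ranges from 0 to 2047, which implies LDEV numbers
    # 0 to 1023 in this addressing scheme.
    assert parity_vdev_number(1023) == 2047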


3. Queue structures
The DKC and DB uses 10 types of queues to control data in Cache segments according to its attributes.
These queues are described below.

• CACHE-GRPT free queue


This queue is used to control segments that are currently not used by CACHE-GRPT (free segments)
on an FIFO (First-In, First-Out) basis. When a new table is added to CACHE-GRPT, the segment that
is located by the head pointer of the queue is used.
• CACHE-SLCB free queue
This queue is used to control segments that are currently not used by CACHE-SLCB (free segments)
on an FIFO basis. When a new slot is added to CACHE-SLCB, the segment that is located by the head
pointer of the queue is used.
• CACHE-SGCB free queue
This queue is used to control segments that are currently not used by CACHE-SGCB (free segments)
on an FIFO basis. When a new segment is added to CACHE-SGCB, the segment that is located by the
head pointer of the queue is used.
• Clean queue
This queue is used to control the segments that are reflected on the Drive on an LRU basis.
• Bind queue
This queue is defined when the bind mode is specified and used to control the segments of the bind
attribute on an LRU basis.
• Error queue
This queue controls the segments that are no longer reflected on the Drive due to some error (pinned
data) on an LRU basis.
• Parity in-creation queue
This queue controls the slots (segments) that are creating parity on an LRU basis.
• DFW queue (host dirty queue)
This queue controls the segments that are not reflected on the Drive in the DFW mode on an LRU
basis.
• CFW queue (host dirty queue)
This queue controls the segments that are not reflected on the Drive in the CFW mode on an LRU
basis.
• PDEV queue (physical dirty queue)
This queue controls the data (segments) that are not reflected on the Drive and that occur after a parity
is generated. Data is destaged from this queue onto the physical DEV. There are 32 PDEV queues per
physical DEV.

The control table for these queues is located in the shared memory and points to the head and tail
segments of the queues.


4. Queue status transitions


Figure 2-56 shows the status transitions of the queues used in. A brief description of the queue status
transitions follows.
• Status transition from a free queue
When a read miss occurs, the pertinent segment is staged and enqueued to a clean queue. When a write
miss occurs, the pertinent segment is temporarily staged and enqueued to a host dirty queue.
• Status transition from a clean queue
When a write hit occurs, the segment is enqueued to a host dirty queue. Transition from clean to free
queues is performed on an LRU basis.
• Status transition from a host dirty queue
The host dirty queue contains data that reflects no parity. When parity generation is started, a status
transition occurs to the parity in-creation queue.
• Status transition from the parity in-creation queue
The parity in-creation queue contains parity in-creation data. When parity generation is completed, a
transition to a physical dirty queue occurs.
• Status transition from a physical dirty queue
When a write hit occurs in a data segment that is enqueued in a physical dirty queue, the segment is enqueued into the host dirty queue again. When destaging of the data segment is completed, the segment is enqueued into a free queue (destaging of data segments occurs asynchronously on an LRU basis).
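The transitions listed above (and shown in Figure 2-56) can be summarized as a small state table. This is a sketch for illustration, assuming one segment is tracked at a time; the names are not firmware identifiers.

    from enum import Enum, auto

    class Q(Enum):
        FREE = auto()
        CLEAN = auto()
        HOST_DIRTY = auto()
        PARITY_IN_CREATION = auto()
        PHYSICAL_DIRTY = auto()

    # (from_state, event) -> to_state, following the bullet list above
    TRANSITIONS = {
        (Q.FREE, "read_miss"): Q.CLEAN,
        (Q.FREE, "write_miss"): Q.HOST_DIRTY,
        (Q.CLEAN, "write_hit"): Q.HOST_DIRTY,
        (Q.CLEAN, "lru_release"): Q.FREE,
        (Q.HOST_DIRTY, "parity_generation_start"): Q.PARITY_IN_CREATION,
        (Q.PARITY_IN_CREATION, "parity_generation_complete"): Q.PHYSICAL_DIRTY,
        (Q.PHYSICAL_DIRTY, "write_hit"): Q.HOST_DIRTY,
        (Q.PHYSICAL_DIRTY, "destage_complete"): Q.FREE,
    }

    def next_state(state, event):
        return TRANSITIONS.get((state, event), state)   # e.g. a read hit leaves the state as-is

    print(next_state(Q.HOST_DIRTY, "parity_generation_start"))   # -> Q.PARITY_IN_CREATION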

Figure 2-56 Queue Segment Status Transition Diagram
(Diagram: Free queue to Clean queue on RD MISS, or to Host dirty queue on WR MISS; Clean queue to Host dirty queue on WR HIT, or back to Free queue on an LRU basis; Host dirty queue to Parity in-creation queue when parity creation starts; Parity in-creation queue to Physical dirty queue when parity creation completes; Physical dirty queue to Host dirty queue on WR HIT, or to Free queue when destaging completes; RD HIT does not change the queue.)


5. Cache usage in the read mode

Figure 2-57 Cache Usage in the Read Mode
(Diagram: read data staged from the DRIVE through the CHB is placed in either CACHE A or CACHE B, not in both.)

The Cache area to be used for destaging read data is determined depending on whether the result of
evaluating the following expression is odd or even:
(CYL# x 15 + HD#) / 16
The read data is destaged into area A if the result is even and into area B if the result is odd.

Read data is not duplexed and its destaging Cache area is determined by the formula shown in Figure
2-57. Staging is performed not only on the segments containing the pertinent block but also on the
subsequent segments up to the end of track (for increased hit ratio). Consequently, one track equivalence
of data is prefetched starting at the target block. This formula is introduced so that the Cache activity
ratios for areas A and B are even. The staged Cache area is called the Cache area and the other area NVS
area.
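The area-selection rule can be written as a one-line check. Integer division is assumed for the expression, and the function name is illustrative only.

    def read_cache_area(cyl, hd):
        """Destage read data to area A when (CYL# x 15 + HD#) / 16 is even, else area B."""
        return "A" if ((cyl * 15 + hd) // 16) % 2 == 0 else "B"

    print(read_cache_area(2, 3))    # (2*15+3)//16 = 2, even  -> 'A'
    print(read_cache_area(1, 2))    # (1*15+2)//16 = 1, odd   -> 'B'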


6. Cache usage in the write mode

Figure 2-58 Cache Usage in the Write Mode
(Diagram: write data is written to both CACHE A and CACHE B (2); old data is staged from the data Disk (1) and old parity from the parity Disk (3); the DRR generates the new parity (4), which is written to both Cache areas (5) and later destaged to the data and parity Disks.)

This system handles write data (new data) and read data (old data) in separate segments as shown in
Figure 2-58 (not overwritten as in the conventional systems), whereby compensating for the write
penalty.

(1) If the write data in question causes a Cache miss, the data from the block containing the target
record up to the end of the track is staged into a read data slot.
(2) In parallel with Step (1), the write data is transferred when the block in question is established in
the read data slot.
(3) The parity data for the block in question is checked for a hit or miss condition and, if a Cache miss
condition is detected, the old parity is staged into a read parity slot.
(4) When all the data necessary for generating the new parity is established, the new parity is created by the DRR processing of the CPU.
(5) When the new parity is completed, the DRR transfers it into the write parity slots for Cache A and
Cache B (the new parity is handled in the same manner as the write data).

The reason for writing the write data into both Cache areas is that the data would be lost if a Cache error occurred while it is not yet written on the Disk.

Although two Cache areas are used as described above, only the write data (including parity) is duplexed; the read data (including parity) is staged into either Cache A or Cache B (in the same manner as in the read mode).
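For reference, the new parity generated in step (4) follows the standard RAID5 read-modify-write relation, new parity = old data XOR new data XOR old parity. The sketch below is generic RAID5 arithmetic, not DRR-specific code.

    def raid5_new_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
        """Standard RAID5 small-write parity update: P_new = D_old ^ D_new ^ P_old."""
        assert len(old_data) == len(new_data) == len(old_parity)
        return bytes(od ^ nd ^ op for od, nd, op in zip(old_data, new_data, old_parity))

    # With a 3D+1P stripe, updating one data block only needs that block's old image and
    # the old parity, which is why steps (1) and (3) stage the old data and the old parity.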


7. CFW-inhibited write-operation (with Cache single-side error)


Non-RAID Disk systems write data directly onto Disk storage (Cache-through), without performing DFW, when a Cache error occurs. In this system, data must always pass through Cache, which makes the through operation impossible. Consequently, the write data is duplexed, and a CFW-inhibited write operation is performed; that is, when one Cache Storage System goes down, the end-of-processing status is not reported until the data write in the other Cache Storage System is completed. This process is called the CFW-inhibited write operation.

The control information necessary for controlling Cache is stored in the shared memory.


2.9 Destaging Operations


1. Cache management in the destage mode (RAID5)
Destaging onto a Drive is deferred until parity generation is completed. Data and parity slot transitions in
the destage mode occur as shown in Figure 2-59.

Figure 2-59 Cache Operation in the Destage Mode
(Diagram: data slot and parity slot, each with a Cache area and an NVS area; write segments are switched to read segments before destaging.)

Data slot:
(1) The write data is copied from the NVS area into the read area.
(2) The write segment in the Cache area is released.
(3) Simultaneously, the segment in the NVS area is switched from a write segment to a read segment.
(4) Destaging.
(5) The read segment in the NVS area is released.

Parity slot:
(1) A parity generation correction read (old parity) occurs.
(2) New parity is generated.
(3) The old parity in the read segment is released.
(4) The segments in the Cache and NVS areas are switched from write segments to read segments.
(5) Destaging.
(6) The read segment in the NVS area is released.

Write data is stored in write segments before parity is generated but stored in read segments after parity
is generated. When Drive data is stored, therefore, the data from the read segment is transferred.


2. Cache management in the destage mode (RAID1)


Data slot is destaged to primary/secondary Drive.

Figure 2-60 RAID1 asynchronous destage
(Diagram: the data slot has a Cache area and an NVS area; the read segment is destaged to the primary Drive and then to the secondary Drive.)

(1) Destage to primary Drive.


(2) Destage to secondary Drive.
(3) The data read segment in the NVS area is released.


3. Blocked data write


The purpose of blocked data write is to reduce the number of accesses to the Drive during destaging,
whereby increasing the Storage System performance. There are three modes of blocked data write:
single-stripe blocking, multiple-stripe blocking and Drive blocking. These modes are briefly described
below.

• Single-stripe blocking
Two or more dirty segments in a stripe are combined into a single dirty data block. Contiguous dirty
blocks are placed in a single area. If an unloaded block exists between dirty blocks, the system destages
the dirty blocks separately at the unloaded block. If a clean block exists between dirty blocks, the
system destages the blocks including the clean block.
• Multiple-stripe blocking
The sequence of stripes in a parity group are blocked to reduce the number of write penalties. This
mode is useful for sequential data transfer.
• Drive blocking
In the Drive blocking mode, blocks to be destaged are written in a block with a single Drive command
if they are contiguous when viewed from a physical Drive to shorten the Drive's latency time.

The single- and multiple-stripe blocking modes are also called in-Cache blocking modes. The DMP
determines which mode to use. The Drive blocking mode is identified by the DSP.
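The single-stripe blocking rule (merge contiguous dirty blocks, split at an unloaded block, and carry a clean block that lies between dirty blocks) can be sketched as follows. The block-state encoding and the function are assumptions for illustration only.

    def single_stripe_blocks(states):
        """states: per-block state in one stripe, 'D'=dirty, 'C'=clean (staged), 'U'=unloaded.
        Returns (start, end) index ranges to destage, following the rules above."""
        ranges, start = [], None
        for i, s in enumerate(states + ["U"]):        # sentinel forces the last range to close
            if s == "U":                              # unloaded block: destage separately
                if start is not None:
                    ranges.append((start, i - 1))
                    start = None
            elif s == "D" and start is None:
                start = i                             # open a range at the first dirty block
            # clean blocks between dirty blocks are simply carried along in the open range
        trimmed = []
        for a, b in ranges:                           # trim trailing clean blocks so each
            while states[b] != "D":                   # range ends on a dirty block
                b -= 1
            trimmed.append((a, b))
        return trimmed

    print(single_stripe_blocks(list("DCDUDDC")))      # -> [(0, 2), (4, 5)]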


2.10 Power-on Sequences


2.10.1 IMPL Sequence
The IMPL sequence, which is executed when power is turned on, comprises the following four modules:

1. BIOS
The BIOS starts the other MP cores after a ROM boot. Subsequently, the BIOS expands the OS loader from the flash memory into the local memory, and the OS loader is executed.

2. OS loader
The OS loader performs the minimum necessary amount of initializations, tests the hardware resources,
then loads the Real Time OS modules into the local memory and the Real Time OS is executed.

3. Real Time OS modules


The Real Time OS module is a root task that initializes the tables in the local memory that are used for intertask communications. The Real Time OS also initializes the network environment and creates the DKC task.

4. DKC task
When the DKC task is created, it executes initialization routines, which initialize most of the environment that the DKC task uses. When the environment is established so that the DKC task can start scanning, the DKC task notifies the Maintenance PC of a power event log. Subsequently, the DKC task turns on the power for the physical Drives and, when the logical Drives become ready, notifies the host processor of an NRTR.

The control flow of IMPL processing is shown in Figure 2-61.


Figure 2-61 IMPL Sequence
Power On
1. BIOS: start the MP cores; load the OS loader.
2. OS loader: MP register initialization; CUDG for the BSP; CUDG for each MP core; load the Real Time OS.
3. Real Time OS modules: set the IP address; network initialization; load the DKC task.
4. DKC task: CUDG; initialize LM/CM; FCDG; send the Power event log; start up the physical Drives.
5. SCAN


2.10.2 Planned Power Off


When a power-off is specified by maintenance personnel, this Storage System checks for termination of tasks that are blocked or running on all logical devices. When all the tasks are terminated, this Storage System disables the CHL and executes emergency destaging. If a track for which destaging fails (pinned track) occurs, this Storage System stores the pin information in shared memory.
Subsequently, this Storage System saves the configuration data and the pin information (used as hand-over information) in the flash memory of the I/F Boards, and saves all SM data (used for the non-volatile power on) in the SSD memory of the CPCs. It then sends a Power Event Log to the Maintenance PC and notifies the hardware of the grant to turn off the power.

The hardware turns off main power when power-off grants for all processors are presented.

Sequence (performed by the MP, reported to Storage Navigator):
1. PS-off detected.
2. Disable the Channel.
3. Execute emergency destaging.
4. Collect ORM information.
5. Turn off Drive power.
6. Store SM data to the SSD memory of the CPCs.
7. Store configuration data in the FM of the I/F Boards.
8. Store pin information in the FM of the I/F Boards.
9. Send the Power Event Log to Storage Navigator.
10. Grant PS-off.
11. DKC PS off.


2.11 Data Guarantee


DW850 makes unique reliability improvements and performs unique preventive maintenance.


2.11.1 Data Check Using LA (Logical Address) (LA Check) (Common to SAS Drives and
SSD)
When data is transferred, the LA value of the target BLK (the LA expectation value) and the LA value of the actually transferred data (the read LA value) are compared to guarantee the data. This data guarantee is called the LA check.
With the LA check, it is possible to check whether data is read from the correct BLK location.

Table 2-44 LA check method


Write
1. Receive Write requirement from Host.
2. CHB stores data on Cache and, at the same time, adds an LA value, which is a check code, to each BLK.
   (The LA value is calculated based on the logical address of each BLK.)
3. DKB stores data on HDD.

Read
1. DKB calculates the LA expectation value based on the logical address of the BLK to read.
2. Perform read from HDD.
3. Check whether the LA expectation value and the LA value of the read data are consistent. (When the
   LBA to read is wrong, the LA values would be inconsistent, and the error can be detected. In such a case,
   a correction read is performed to restore data.)
4. CHB transfers data to Host by removing the LA field.
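
The following Python fragment is a minimal conceptual sketch of the LA check described above (it is not the DKC firmware): a check code derived from the logical block address is appended to each block on write and compared with the expected value on read. The block size, the 4-byte LA field and the helper names are assumptions made only for this illustration.

```python
# Conceptual sketch of the LA check (not the actual DKC implementation).
# Each block gets a 4-byte LA field derived from its logical block address;
# on read, the stored LA is compared with the LA expected for that address.
import struct

BLOCK_SIZE = 512          # assumed user data size per BLK
LA_FIELD = 4              # assumed size of the LA check code

def la_value(lba: int) -> bytes:
    """Check code calculated from the logical address of the BLK."""
    return struct.pack(">I", lba & 0xFFFFFFFF)

def write_block(storage: dict, lba: int, data: bytes) -> None:
    """Write path: store data with the LA field appended (CHB/DKB roles combined)."""
    assert len(data) == BLOCK_SIZE
    storage[lba] = data + la_value(lba)

def read_block(storage: dict, lba: int) -> bytes:
    """Read path: compare the LA expectation value with the LA of the read data."""
    raw = storage[lba]
    data, stored_la = raw[:BLOCK_SIZE], raw[BLOCK_SIZE:]
    if stored_la != la_value(lba):
        # Wrong BLK location was read; the real system starts a correction read.
        raise IOError(f"LA mismatch at LBA {lba}: correction read required")
    return data            # the LA field is removed before transfer to the host

store = {}
write_block(store, 37, b"\x00" * BLOCK_SIZE)
assert read_block(store, 37) == b"\x00" * BLOCK_SIZE
```

If a block were read from the wrong LBA, the stored LA would no longer match the expectation value, which is the condition that triggers a correction read in the table above.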

THEORY02-11-20
Hitachi Proprietary DW850
Rev.4 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-12-10

2.12 Encryption License Key


When the firmware version is 88-03-21-x0/xx or later, Encryption License Key can be used through Storage
Navigator or REST API. However, REST API does not support the following functions:
• Connection to a key management server
• Enabling/disabling encryption on already created parity groups
• Editing the password policy of the password used for file backup
• Rekeying certificate encryption keys
When the firmware version is earlier than 88-03-21-x0/xx, Encryption License Key can be used only through
Storage Navigator. To use Storage Navigator, the SVP needs to be installed.

2.12.1 Overview of Encryption


Data stored in volumes in the storage system can be encrypted by using Encryption License Key.
Encrypting data prevents data from being leaked even when the storage system or data drives in the storage
system are replaced or stolen.
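
As a general illustration of AES-256 encryption (the algorithm class listed in the specifications below), the following Python sketch uses the third-party cryptography package. The choice of AES-GCM and every name in the sketch are assumptions made only for this example; the storage system's own encryption mode, key generation and key handling are internal and differ from this sketch.

```python
# General AES-256 illustration using the third-party "cryptography" package
# (pip install cryptography). The storage system's own encryption mode and
# key handling are internal and differ from this sketch.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit data encryption key
nonce = os.urandom(12)                      # unique per encryption operation
plaintext = b"user data block"

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext
```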

2.12.2 Specifications of Encryption


Table 2-45 shows the specifications of encryption with Encryption License Key.

Table 2-45 Specifications of encryption with Encryption License Key


Item                              Specifications
Hardware spec                     Encryption algorithm: AES 256 bit
Volume to encrypt                 Volume type: Open volumes
                                  Emulation type: OPEN-V
Encryption key management         Unit of creating encryption key: Drive
                                  Number of encryption keys:
                                    VSP G350/G370, VSP F350/F370: 1,024
                                    VSP G700, VSP F700, VSP G900, VSP F900: 4,096
                                  Unit of setting encryption: RAID group
Converting non-encrypted data/    Encryption of existing data: Convert non-encrypted data/encrypted data for a
encrypted data                    RAID group in which encryption is set by using the existing functions
                                  (Volume Migration, ShadowImage et cetera.).

THEORY02-12-10
Hitachi Proprietary DW850
Rev.4 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-12-20

2.12.3 Notes on Using Encryption License Key


Please note the following when using Encryption License Key.

Table 2-46 Notes on Encryption License Key


#  Item                        Content
1  Volumes to encrypt          Only internal volumes in the Storage System can be encrypted with Encryption
                               License Key. External volumes cannot be encrypted.
2  LDEV format                 You cannot perform high-speed format (Drive format) for Disks under a RAID
                               group in which encryption is set.
                               Time required for LDEV format performed for the RAID group in which
                               encryption is set depends on the number of RAID groups.
3  Encryption/non-encryption   When encrypting user data, set encryption for all volumes in which the data is
   status of volumes           stored in order to prevent the data from being leaked.
                               Example:
                               • In the case a copy function is used, and when encryption is set for P-VOL, set
                                 it also for S-VOL.
                                 When encryption is set for P-VOL, and non-encryption is set for S-VOL (or
                                 vice versa), you cannot prevent data on the non-encrypted volume from being
                                 leaked.
4  Switch encryption setting   When you switch the encryption setting of a RAID group, you need to perform
                               LDEV format again. To switch the encryption setting, back up data as necessary.
5  Protecting the Key          The KEK (Key Encryption Key) is stored in the key management server. Note
   Encryption Key at the key   the following points:
   management server           • The key management server must consist of two clustered servers.
                               • The communication between the SVP and the key management server must be
                                 available when the storage system is powered on because the SVP obtains the
                                 key from the key management server. Before powering on the storage system,
                                 make sure that the communication between the SVP and the key management
                                 server is available.

THEORY02-12-20
Hitachi Proprietary DW850
Rev.4 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-12-30

2.12.4 Creation of Encryption Key


An encryption key is used for data encryption and decryption. Up to 1,024 encryption keys can be created
in the Storage System for VSP G350/G370 and VSP F350/F370, and 4,096 for VSP G700, VSP G900, VSP
F700, and VSP F900.
Only customer security administrators are able to create encryption keys.

In the following case, however, creation of an encryption key is inhibited to avoid data corruption.
• Due to a failure in the Storage System, the Storage System does not have any encryption key but it has a
RAID group in which encryption is set.
In this case, restore the backed up encryption key.

2.12.5 Backup of Encryption Key


There are two types of encryption key backup: the primary backup to store the key in the Cache Flash
Memory in the Storage System and the secondary backup to store the key in the management client (client
PC to use Storage Navigator or REST API) or the key management server.

• Primary backup
Encryption key created on SM is backed up in the Cache Flash Memory in the Storage System.
Encryption key is automatically backed up within the Storage System when it is created or deleted, or
when its status is changed.

• Secondary backup
Encryption key created on SM is backed up in the management client (client PC to use Storage Navigator
or REST API) or the key management server of the user.
The secondary backup is performed from Storage Navigator or REST API by direction of the security
administrator.

THEORY02-12-30
Hitachi Proprietary DW850
Rev.4 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-12-40

2.12.6 Restoration of Encryption Key


There are two types of encryption key restoration; restoration from primary backup and restoration from
secondary backup.

• Restoration from primary backup


When the encryption key on SM cannot be used, the encryption key primary backup is restored.
Restoration from primary backup is automatically performed in the Storage System.

• Restoration from secondary backup


When encryption keys including the encryption key primary backup cannot be used in the Storage System,
the encryption key secondary backup is restored.
Restoration from secondary backup is performed when requested by the security administrator from
Storage Navigator or REST API.

2.12.7 Setting and Releasing Encryption


You can set and release encryption by specifying a RAID group. Set and release encryption in the Parity
Group list window in Storage Navigator.

NOTE:
• Encryption can be set and released only when all volumes that belong to the RAID group are
blocked, or when there is no volume in the RAID group.
When the RAID group contains at least one volume that is not blocked, you cannot set and
release encryption.
• When you switch the encryption setting, you need to perform LDEV format again. Therefore set
encryption before formatting the entire RAID group when installing RAID groups et cetera.

THEORY02-12-40
Hitachi Proprietary DW850
Rev.4 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-12-50

2.12.8 Encryption Format


To format a RAID group in which encryption is set, write encrypted 0 data to the entire Disk area. This is
called Encryption format.
When encryption is set for a RAID group, encryption format is needed before user data is written. When
encryption is released, normal format is needed before user data is written.
NOTE: Encryption format can be performed only when all volumes in the RAID group can be formatted.
When at least one volume cannot be formatted, encryption format cannot be performed.

2.12.9 Converting Non-encrypted Data/Encrypted Data


To encrypt existing data, create a RAID group in which encryption is set in advance and use a copy Program
Product, such as Volume Migration and ShadowImage, to convert data. Data conversion is performed per
LDEV.
The specifications of converting non-encrypted data/encrypted data comply with the specifications of the
copy function (Volume Migration, ShadowImage et cetera.) used for conversion.

2.12.10 Deleting Encryption Keys


Only customer security administrators are able to delete encryption keys. You cannot delete the encryption
key which is allocated to drives or DKBs. You can delete the encryption key whose attribute is Free key.

2.12.11 Reference of Encryption Setting


You can check the encryption setting (Encryption: Enable/Disable) per RAID group from Parity Groups
screen of Web Console.

THEORY02-12-50
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-13-10

2.13 Operations Performed when Drive Errors Occur


2.13.1 I/O Operations Performed when Drive Failures Occur
This system can recover target data using parity data and data stored on normal Disk storage even when it
cannot read data due to failures occurring on physical Drives. This feature ensures non-disruptive processing
of applications in case of Drive failures. This system can also continue processing for the same reason in case
failures occur on physical Drives while processing write requirements.
Figure 2-62 shows the overview of data read processing in case a Drive failure occurs.

Figure 2-62 Overview of Data Read Processing (Requirement for read data B)

1. Normal time
   The requested data B is read directly from the Disk on which it is stored, in a RAID5/RAID6 parity group
   or in a RAID1 (2D+2D) RAID pair.

2. When a Disk failure occurs
   In RAID5/RAID6, the data B on the failed Disk is regenerated from the data and the parity data stored
   on the remaining Disks of the parity group. In the case of RAID 6, even when two Disk Drives fail, data
   can be restored through use of data stored in the rest of the Disk Drives. In RAID1 (2D+2D), the data B
   is read from the other Disk of the RAID pair.

A, B, C ...: Data
P: Parity data
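
As a minimal illustration of the recovery principle shown in the figure, the following hedged Python sketch rebuilds a missing RAID5 stripe member from the surviving members and the XOR parity. The stripe layout and block contents are invented for the example; the actual DKC stripe mapping, and the second parity that allows RAID6 to survive two Drive failures, are not modeled here.

```python
# Conceptual RAID5 sketch: parity is the XOR of the data members, so any one
# missing member can be regenerated from the surviving members and the parity.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks column by column."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]          # 3D
parity = xor_blocks(data)                    # +1P

# The Disk holding "BBBB" fails; regenerate it from the rest plus the parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == b"BBBB"
```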

THEORY02-13-10
Hitachi Proprietary DW850
Rev.3 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-13-20

2.13.2 Data Guarantee at the Time of Drive Failures


This system uses spare Disk Drives: any Drive that is blocked due to a failure, or whose failure count exceeds
a specified limit value, is reconstructed onto a spare Disk. (Drives belonging to a Parity Group with no
LDEVs defined are not reconstructed.)
Since this processing is executed in the background of host processing, this system can continue to accept I/O
requirements. The data saved on the spare Disk is copied back to the original location after the failed Drive is
replaced with a new one. However, when the copy back mode is set to Disable and the data was copied to a
spare Disk of the same capacity, the copy back is not performed.

1. Dynamic sparing
This system keeps track of the number of failures that occur on each Drive when it executes normal
read or write processing. If the number of failures occurring on a certain Drive exceeds a predetermined
value, this system considers that the Drive is likely to cause unrecoverable failures and automatically
copies data from that Drive to a spare Disk. This function is called dynamic sparing. In the RAID1 method,
this operation is the same as RAID5 dynamic sparing.

Figure 2-63 Overview of Dynamic Sparing

(The DKC copies the data of the physical device whose failure count exceeded the predetermined value to a
spare Disk while remaining ready to accept I/O requirements.)
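
The trigger condition for dynamic sparing can be pictured as a simple per-Drive failure counter, as in the sketch below. The threshold value, the counter granularity and the function names are illustrative assumptions only and do not reflect the actual firmware logic.

```python
# Conceptual sketch of dynamic sparing: when the failure count of a Drive
# exceeds a predetermined value, its data is copied to a spare Disk in the
# background while host I/O continues to be accepted.
from collections import defaultdict

FAILURE_THRESHOLD = 32          # illustrative value only

failure_count = defaultdict(int)

def record_drive_error(drive_id: str) -> bool:
    """Count recoverable errors seen during normal read/write processing.
    Returns True when dynamic sparing should be started for the Drive."""
    failure_count[drive_id] += 1
    return failure_count[drive_id] > FAILURE_THRESHOLD

if record_drive_error("HDD00-05"):
    # start copying the data of HDD00-05 to a spare Disk in the background
    pass
```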

THEORY02-13-20
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-13-30

2. Correction copy
When this system cannot read or write data from or to a Drive due to a failure occurring on that Drive,
it regenerates the original data for that Drive using data from the other Drives and the parity data and
copies it onto a spare Disk.
• In the RAID1 method, this system copies data from the other Drive of the pair to a spare Disk.
• In the case of RAID 6, the correction copy can be made to up to two Disk Drives in a parity group.

Figure 2-64 Overview of Correction Copy

(For RAID5/RAID6, the DKC regenerates the data of the failed physical device from the remaining devices of
the parity group and the parity data and copies it onto the spare Disk; for RAID1 (2D+2D), the data is copied
from the other device of the RAID pair. In both cases the DKC remains ready to accept I/O requirements.)

3. Allowable number of copying operations

Table 2-47 Allowable number of copying operations


RAID level Allowable number of copying operations
RAID1 Either the dynamic sparing or correction copy can be executed within a RAID pair.
RAID5 Either the dynamic sparing or correction copy can be executed within a parity group.
RAID6 The dynamic sparing and/or correction copy can be executed up to a total of twice
within a parity group.

THEORY02-13-30
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-14-10

2.14 Data Guarantee at the Time of a Power Failure due to Power Outage and Others
If a power failure due to a power outage or other causes occurs, refer to 5. Battery of 4.6.4 Hardware
Component .

THEORY02-14-10
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-15-10

2.15 Overview of DKC Compression


2.15.1 Capacity Saving and Accelerated Compression
The following two functions are available for using the capacity of virtual volumes effectively.
• Capacity Saving (Compression and Deduplication)
Capacity Saving is a function to reduce the bit-cost by data compression (Compression) and data
deduplication (Deduplication), which the storage system controller performs on the stored data.
The post process mode or the inline mode can be selected as the mode for writing new data.
• Accelerated Compression
Accelerated Compression is a function to expand the drive capacity and reduce the bit-cost while
maintaining the high data access performance of the storage system by using Compression on the drive.

THEORY02-15-10
Hitachi Proprietary DW850
Rev.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-15-20

2.15.2 Capacity Saving


Capacity Saving is a function to perform Compression and Deduplication of stored data by using the storage
system controller. Reducing the data capacity allows more data than the total drive capacity of the system to
be stored. Capacity Saving increases the empty area of a pool, so the users can decrease the purchase cost of
drives in the product lifecycle. Capacity Saving is available for all types of drives and can be used with the
encryption function.

The following describes each function.

(The figure illustrates the Capacity Saving settings of virtual volumes:
• Compression: data written from the host is compressed before it is stored.
• Deduplication and Compression: when the same data is written to different addresses, the duplicate data is
  deleted; the original data is stored in the deduplication system data volume (data store), and the information
  required for retrieving the duplicate data is stored in the deduplication system data volume (fingerprint).)

When the Capacity Saving function is enabled, the pool capacity is consumed because the entire capacity
of metadata and garbage data is stored. The capacity to be consumed is equivalent to the physical capacity
of about 10% of the LDEV capacity that is processed by Capacity Saving. The pool capacity is dynamically
consumed according to usage of the Capacity Saving process. When the amount of data writes from the host
increases, the consumed capacity might exceed 10% of the pool capacity temporarily. When the amount
of data writes decreases, the used capacity becomes about 10% of the pool capacity due to the garbage
collection operation.

THEORY02-15-20
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-15-30

2.15.2.1 Compression
Compression is a function to convert data to a different, smaller size by encoding it without reducing the
amount of information. LZ4 is used as the data compression algorithm for Compression. Set this function
for each virtual volume for Dynamic Provisioning.
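
For reference, the effect of the LZ4 algorithm named above can be tried with the widely used third-party lz4 Python package; this is only a sketch of the algorithm itself, not of the controller implementation, and it assumes the package is installed (pip install lz4).

```python
# Illustration of LZ4 compression/decompression using the third-party
# "lz4" package; not the storage system controller implementation.
import lz4.frame

original = b"ABCABCABC" * 1000                 # highly repetitive sample data
compressed = lz4.frame.compress(original)
restored = lz4.frame.decompress(compressed)

assert restored == original
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```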

2.15.2.2 Deduplication
Deduplication is a function to retain data on a single address and delete the duplicated data on other addresses
if the same data is written on different addresses. Deduplication is set for each of the virtual volumes of the
Dynamic Provisioning. When Deduplication is enabled, duplicated data among virtual volumes associated
with a pool is deleted. When virtual volumes with Deduplication enabled are created, system data volumes
for Deduplication (fingerprint) and system data volumes for Deduplication (data store) are created. The
system data volume for Deduplication (fingerprint) stores a table to search for duplicated data among data
stored in the pool. Four system data volumes for Deduplication (fingerprint) are created per pool. The system
data volume for Deduplication (data store) stores the original data of the duplicated data. Four system data
volumes for Deduplication (data store) are created per pool.
When the settings of [Deduplication and Compression] of all virtual volumes are changed to [Disable],
system data volumes for Deduplication are automatically deleted.
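
A minimal way to picture fingerprint-based deduplication is sketched below: a hash of each written block is looked up in a fingerprint table, and only the first copy of each unique block is stored while duplicates are mapped to it. The hash function, table layout and names are assumptions for illustration; the storage system keeps this information in the deduplication system data volumes described above.

```python
# Conceptual sketch of deduplication: a fingerprint (hash) table detects blocks
# whose content is already stored, so duplicates only keep a reference.
import hashlib

fingerprint_table = {}   # fingerprint -> key of the stored (original) block
data_store = {}          # stored unique blocks (the "data store" role)
volume_map = {}          # (volume, lba) -> key of the block holding the data

def write(volume: str, lba: int, block: bytes) -> None:
    fp = hashlib.sha256(block).hexdigest()
    if fp not in fingerprint_table:
        key = f"{volume}:{lba}"
        data_store[key] = block                         # first occurrence: store it
        fingerprint_table[fp] = key
    volume_map[(volume, lba)] = fingerprint_table[fp]   # duplicates share the block

def read(volume: str, lba: int) -> bytes:
    return data_store[volume_map[(volume, lba)]]

write("VOL1", 0, b"same data")
write("VOL2", 8, b"same data")             # duplicate: not stored a second time
assert read("VOL2", 8) == b"same data"
assert len(data_store) == 1
```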

THEORY02-15-30
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-16-10

2.16 Media Sanitization


2.16.1 Overview
Media Sanitization erases data in a drive by overwriting it. Data in the drive that caused Dynamic Sparing
(hereinafter referred to as DS) to be started is overwritten by the defined erase pattern data when DS ends.
Then, the data in the drive is compared with the erase pattern data and the data erase is completed.
Only CBLHN supports Media Sanitization.
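
The overwrite-and-compare idea can be sketched as follows. This is a conceptual illustration only; the block size, the in-memory drive and the returned status strings are assumptions and do not correspond to the actual drive commands or SIM reporting.

```python
# Conceptual sketch of Media Sanitization: every LBA is overwritten with the
# 0-data erase pattern, then read back and compared with the pattern.
BLOCK = 512
ERASE_PATTERN = b"\x00" * BLOCK

def read_lba(drive, lba):
    """Stand-in for a drive read; returns None when the read is unsuccessful."""
    return drive[lba]

def sanitize(drive):
    """drive: a list with one block of bytes per LBA. Returns the end status."""
    for lba in range(len(drive)):
        drive[lba] = ERASE_PATTERN            # write the erase pattern
    unreadable = 0
    for lba in range(len(drive)):
        data = read_lba(drive, lba)           # a real drive read may fail here
        if data is None:
            unreadable += 1                   # write succeeded but read failed
        elif data != ERASE_PATTERN:
            return "abnormal end"             # inconsistency with erase pattern
    return "end with warning" if unreadable else "normal end"

print(sanitize([b"\xff" * BLOCK for _ in range(8)]))   # -> normal end
```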

Table 2-48 Overview


No. Item Description
1 Erase specifications See Table 2-49.
2 Execution method See Table 2-50.
3 Execution process See Table 2-51.
4 Check of result SIM indicating end of Media Sanitization is reported (normal end, abnormal
end, or end with warning). For details, see 2.16.3 Checking Result of
Erase .
5 Recovery from failure Replacement with a new drive
6 Stopping method Replacement of the drive for which the data erase needs to be stopped with
a new drive by using Maintenance Utility

Table 2-49 Erase Specifications


No. Item Description
1 Number of erases One erase for an entire drive (all LBAs) (for flash drives, excluding over
provisioning space)
2 Erase pattern 0 data
3 Check of erase Drive data after write of the erase pattern data is read to compare it with the
erase pattern data.
4 Erase time See 2.16.2 Estimated Erase Time .
5 LED action on drive In process of erase: The green LED is blinking.
After completion of erase: The red LED is lit. (The red LED might not light
up depending on the drive failure type.)

Table 2-50 Execution Method


No. Description
1 Setting the dedicated SOM to on is necessary. Contact the Technical Support Division.
When the SOM for PDEV erase is set to on, the SOM for Media Sanitization is prioritized.

Table 2-51 Execution Process


No. Description
1 After completion of DS, Media Sanitization is automatically started.
When erase is started, SIM is reported.

THEORY02-16-10
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-16-20

2.16.2 Estimated Erase Time


Estimated time required for erase is shown below.
Erase time might significantly exceed the estimated time due to the load on the storage system and a drive
error occurring during erase.

Table 2-52 Estimated Erase Time


Type of drive 1.9 TB 3.8 TB 7.6 TB 15 TB
NVMe SSD 4h 7h30m 14h30m 28h30m

THEORY02-16-20
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-16-30

2.16.3 Checking Result of Erase


2.16.3.1 SIMs Indicating End of Media Sanitization
Check the result of erase by referring to the following SIM list.

Table 2-53 SIMs Indicating End of Erase


No. SIM (*1) Type of end Result
1 4e4xxx Normal end Data erase by writing the erase pattern data to an entire drive
(all LBAs) ends normally (for flash drives, excluding over
provisioning space).
2 4e6xxx Abnormal end Data erase ends abnormally because either of the following
erase errors occurs:
• Writing the erase pattern data fails.
• In process of data comparison after the erase pattern data
is written, an inconsistency with the erase pattern data is
detected.

Tell the customer that user data might remain in the drive.
When the customer has the DRO agreements, give the faulty drive to the customer and recommend
destroying it physically or using other similar methods.
When the customer does not have the DRO agreements, bring the faulty drive back with you after
making the customer understand that user data might remain in the drive.
(If the customer does not allow you to bring out the drive, explain that the customer needs to use a
data erasing service or to make the DRO agreements.)
3 4e8xxx End with warning Data erase ends with warning because reading some areas of
the drive is unsuccessful while writing the erase pattern data
is successful (for flash drives, excluding over provisioning
space).
Tell the customer that writing the erase pattern data to an entire
drive is completed but data in some areas cannot be read.
Then, ask the customer whether he or she wants you to bring
out the drive.
For how to check the number of the areas (LBAs) where data
cannot be read, see 2.16.3.2 Checking Details of End with
Warning .
*1: The SIM indicating drive port blockade (see (SIMRC02-110)) might be also reported when the
SIM indicating end of Media Sanitization is reported. In such a case, prioritize the SIM indicating
end of Media Sanitization.

THEORY02-16-30
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-16-40

2.16.3.2 Checking Details of End with Warning


Factors of end with warning are shown below.

Table 2-54 Factors of End with Warning


No. Factor
1 In the erase process, the write by using the erase pattern data succeeds but the read fails.

Check SIMs indicating end with warning and related SSBs to know factors of end with warning as follows:

[1] In the Maintenance Utility window, select the [Alerts] tab and click the alert ID on the row of the SIM
indicating end with warning (reference code).

[2] The alert details are displayed. Check the concerned alert#.

[3] In the [Alerts] tab, select [View Internal Alerts - DKC].

THEORY02-16-40
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-16-50

[4] In the [SSB] tab, select the alert ID of the concerned alert# checked in previous steps.

THEORY02-16-50
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-16-60

[5] SSBs related to SIMs indicating end with warning


Details of each field of SSB are shown below.
Check the details of each field and consult a security administrator to determine whether it is possible to
bring out the target drive for data erase.

Table 2-55 Internal Information of SSB Related to SIM indicating End with Warning
Field Details
(a) Total number of LBAs on the target drive for data erase
(Field size: 6 bytes)
(a) = (b) + (c)
(b) The number of LBAs for which data erase is complete on the target drive for data erase
(Field size: 6 bytes)
(c) The number of LBAs for which the write by using the erase pattern data is successful and the read is
unsuccessful on the target drive for data erase
(Field size: 6 bytes)
(d) DB# and RDEV# of the target drive for data erase
(Lower 1 byte: DB#, upper 1 byte: RDEV#)


THEORY02-16-60
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-16-70

2.16.4 Influence between Media Sanitization and Maintenance Work


The following table shows whether each maintenance work is possible or not when Media Sanitization is in
process.

No Item 1 Item 2 Maintenance work is possible or not possible Media Sanitization action affected by maintenance work
1 Replacement CTL/CM Possible None
2 LANB Possible None
3 CHB Possible None
4 Power supply Possible None
5 Maintenance PC Possible None
6 ENC/SAS Cable Possible (*2) None
7 DKB Possible None
8 PDEV Possible (*2) Media Sanitization ends abnormally
if you replace a drive in process of
it.
9 CFM Possible None
10 BKM/BKMF Possible Media Sanitization ends abnormally.
11 FAN Possible None
12 Battery Possible None
13 SFP Possible None
14 Addition/ CM Possible None
15 Removal SM Not possible (*3) None
16 CHB Possible (*2) None
17 Maintenance PC Possible None
18 DKB Possible (*2) None
19 PDEV Possible None
20 CFM Possible None
21 Parity Group Addition: Possible None
Removal: Possible (*3)
22 Spare drive Possible None
23 Drive Box (DB) Addition: Possible (*2) None
Removal: Possible (*3)
(To be continued)

THEORY02-16-70
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-16-71

(Continued from preceding page)


No Item 1 Item 2 Maintenance work is possible or not possible Media Sanitization action affected by maintenance work
24 Firmware update Online (not Possible (*1) When either of the following is met,
including the firmware update with a firmware
HDD firmware) version that does not support Media
sanitization is not possible.
• The SOM for Media Sanitization is
set to on.
• Media Sanitization is in process.
25 Online Not possible (*2) None
(including the
HDD firmware)
26 Offline Possible (*2) For the DKCMAIN firmware
update, Media Sanitization ends
abnormally.
27 Maintenance PC Possible None
only
28 LDEV Blockade Possible None
29 maintenance Restore Possible None
30 Format Possible None
31 Verify Possible None

*1: The operation is suppressed with a message displayed. However, you can perform the operation
from Forcible task without safety check .
*2: The operation is suppressed with a message displayed when the copy back mode is disabled.
However, you can retry the operation by checking the checkbox for Forcibly run without safety
checks .
*3: The operation is suppressed with a message displayed when the copy back mode is disabled.
Perform either (1) or (2).
(1) If you want to prioritize the maintenance work, restore the blocked drive for which Media
Sanitization is being executed, and then retry the operation. However, if you restore the
blocked drive, Media Sanitization ends abnormally and cannot be executed again.
(2) If you want to prioritize Media Sanitization, wait until Media Sanitization ends, and then
perform the maintenance work.

THEORY02-16-71
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-16-80

2.16.5 Notes when Errors Occur


The following table shows influence on Media Sanitization when each failure occurs.

No. Item 1 Item 2 Influence on Media Sanitization


1 DKC Power outage Media Sanitization ends abnormally.
2 Disk Board Failure Media Sanitization might end abnormally due to disconnection
of the path to the target drive for Media Sanitization.
3 Drive Box (DB) Power outage Media Sanitization ends abnormally.
4 ENC Failure Media Sanitization might end abnormally due to disconnection
of the path to the target drive for Media Sanitization.

THEORY02-16-80
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY03-01-10

3. Specifications for the Operations of DW850


3.1 Precautions When Stopping the Storage System
Pay attention to the following instructions concerning operation while the Storage System is stopped, that is,
immediately after the PDU breaker is turned on and after the power-off of the Storage System is completed.

3.1.1 Precautions in a Power-off Mode


• Even immediately after the PDU breaker is turned on, and even when the Storage System is in the power-off
status after the power-off processing is completed, the Storage System is in a standby mode.
• In this standby mode, the AC input is supplied to the Power Supply in the Storage System, and power is
supplied to the FANs, the Cache Memory modules and some boards (I/F Boards, Controller Boards, ENCs
or the like).
• Because standby electricity is therefore consumed in the Storage System under this condition, execute the
following process when the standby electricity must be controlled. See Table 3-1 for the standby electricity.

1. When the Storage System is powered on


Turn on the breaker of each PDU just before the power on processing.

2. When the Storage System is powered off


After the power off processing is completed, turn each PDU breaker off.
• When turning each PDU breaker off, make sure that the power off processing is completed in advance.
• When the breaker is turned off during the power off processing, the battery power is consumed and the
power on time may take a little longer at the next power on processing depending on the battery charge,
because the Storage System shifts to the emergency SSD transfer processing of the data by using the
battery.
• Moreover, before turning each PDU breaker off, make sure that the AC cables of other Storage Systems
are not connected to the PDU that is turned off.
• The management information stored in the memory is transferred to the Cache Flash Memory (CFM)
in the Controller Board during the power off process; therefore, it is not necessary to leave the breaker of
the PDUs on for data retention.

Table 3-1 Maximum Standby Electricity per Controller Chassis and Drive Box
Controller Chassis/Drive Box etc Maximum Standby Electricity [VA]
DBS (SFF Drive Box) 200
DBL (LFF Drive Box) 140
DB60 (3.5-inch Drive Box) 560
DBF (Flash Module Drive Box) 410
DBN (NVMe Drive Box) 500
CBL (Controller Chassis) 230
CBSS (Controller Chassis) 370
CBSL (Controller Chassis) 310
CBXSS (Controller Chassis) 230
CBXSL (Controller Chassis) 170
CHBB (Channel Board Box) 180
THEORY03-01-10
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY03-01-20

3.1.2 Operations When a Distribution Board Is Turned off


When the distribution board breaker or the PDU breaker is turned off, execute the operation after confirming
that the Power Supply of the Storage System was normally turned off according to the Power OFF Event Log
(refer to 1.5.2 Storage System Power Off (Planned Shutdown) in INSTALLATION SECTION) on the
Maintenance PC.

If the Power OFF Event Log cannot be confirmed, suspend the operation and request the customer to restart
the Storage System so that it can be confirmed that the PS is normally turned off.

NOTICE: Request the customer to thoroughly perform the following operations if the
distribution board breaker or the PDU cannot be kept in the on-status after the
power of the Storage System is turned off.

1. Point to be checked before turning the breaker off


Request the customer to make sure that the power off processing of the
Storage System is completed (READY lamp and ALARM lamp are turned off)
before turning off the breaker.

2. Operation when the breaker is turned off for two weeks or more


The built-in battery spontaneously discharges when the distribution board
breaker or PDU breaker is turned off after the Storage System is powered off.
Therefore, when the breaker is turned off for two weeks or more, charging the
built-in battery to full will take a maximum of 4.5 hours. Accordingly, request
the customer to charge the battery prior to restarting the Storage System.

THEORY03-01-20
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY03-02-10

3.2 Precautions When Installing Flash Drives and Flash Module Drives
For precautions when installing Flash Drives and Flash Module Drives, refer to INSTALLATION SECTION
 1.3.4 Notes for Installing Flash Module Drive Boxes .

THEORY03-02-10
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY03-03-10

3.3 Notes on Maintenance during LDEV Format/Drive Copy Operations


This section describes whether maintenance operations can be performed when Dynamic Sparing, Correction
Copy, Copy Back, Correction Access or LDEV Format is running or when data copying to a spare Disk is
complete.
If Correction Copy runs due to a Drive failure, or Dynamic Sparing runs due to preventive maintenance on
large-capacity Disk Drives or Flash Drives, it may take a long time to copy data. In the case of low-speed
LDEV Format performed due to volume addition, it may take time depending on the I/O frequency because
host I/Os are prioritized. In such cases, based on the basic maintenance policy, it is recommended to perform
operations such as replacement, addition, and removal after Dynamic Sparing, LDEV Format et cetera. is
completed, but the following maintenance operations are available.

THEORY03-03-10
Hitachi Proprietary DW850
Rev.5 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY03-03-20

Table 3-2 Correlation List of Storage System Statuses and Maintenance Available Parts
Storage System status
Maintenance operation Dynamic Correction Copy Back Correction Copied to LDEV
Sparing Copy Access spare Disk Format
Replacement CTL/CM Depending Depending Depending Possible Possible Impossible
on firmware on firmware on firmware (*1) (*8) (*6)
version (*19) version (*19) version (*19)
LANB Depending Depending Depending Possible Possible Impossible
on firmware on firmware on firmware (*1) (*8) (*6)
version (*19) version (*19) version (*19)
CHB Possible Possible Possible Possible Possible Impossible
(*1) (*8) (*6)
Power supply Possible Possible Possible Possible Possible Possible
SVP Possible Possible Possible Possible Possible Possible
ENC/SAS Possible Possible Possible Possible Possible Impossible
Cable (*1) (*8) (*6)
DKB Possible Possible Possible Possible Possible Impossible
(*1) (*8) (*6)
PDEV Possible Possible Possible Possible Possible Possible
(*15) (*15) (*15) (*1) (*8) (*4)
CFM Possible Possible Possible Possible Possible Possible
BKM/BKMF Possible Possible Possible Possible Possible Possible
(*1)
FAN Possible Possible Possible Possible Possible Possible
Battery Possible Possible Possible Possible Possible Possible
(*1)
SFP Possible Possible Possible Possible Possible Possible
PCIe Cable Possible Possible Possible Possible Possible Impossible
(*1) (*8) (*6)
PCIe channel Possible Possible Possible Possible Possible Impossible
Board (*1) (*8) (*6)
Channel Possible Possible Possible Possible Possible Possible
Board Box
Switch Depending Depending Depending Possible Possible Impossible
Package on firmware on firmware on firmware (*1) (*8) (*6)
version (*19) version (*19) version (*19)
PCIe Cable Possible Possible Possible Possible Possible Impossible
Connection (*1) (*8) (*6)
Package
(To be continued)

THEORY03-03-20
Hitachi Proprietary DW850
Rev.7 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY03-03-21

(Continued from the preceding page)


Storage System status
Maintenance operation Dynamic Correction Copy Back Correction Copied to LDEV
Sparing Copy Access spare Disk Format
Addition/ CM/SM Impossible Impossible Impossible Possible Possible Impossible
Removal (*7) (*7) (*7) (*1) (*8) (*16) (*6)
CHB Impossible Impossible Impossible Possible Possible Impossible
(*7) (*7) (*7) (*1) (*8) (*17) (*6)
SVP Possible Possible Possible Possible Possible Possible
DKB Impossible Impossible Impossible Possible Possible Impossible
(*7) (*7) (*7) (*1) (*8) (*17) (*6)
PDEV Impossible Impossible Impossible Possible Possible Impossible
(*7) (*7) (*7) (*1) (*8) (*2) (*18) (*6)
CFM Impossible Impossible Impossible Possible Possible Impossible
(*7) (*7) (*7) (*1) (*8) (*6)
Parity Group Impossible Impossible Impossible Possible Possible Impossible
(*7) (*7) (*7) (*1) (*8) (*2) (*6)
Spare Impossible Impossible Impossible Possible Possible Impossible
(*7) (*7) (*7) (*1) (*8) (*2) (*6)
Channel Impossible Impossible Impossible Impossible Possible Impossible
Board Box (*7) (*7) (*7) (*8) (*6)
Firmware Online Possible Possible Possible Possible Possible Impossible
exchange (HDD (*1) (*8) (*1) (*9) (*6)
firmware-
program
exchange is
not included.)
Online Impossible Impossible Impossible Possible Impossible Impossible
(HDD (*1) (*8) (*6)
firmware-
program
exchange is
included.)
Offline Impossible Impossible Impossible Impossible Possible Impossible
(*7) (*7) (*7) (*8) (*6)
SVP only Possible Possible Possible Possible Possible Possible
LDEV Blockade Possible Possible Possible Possible Possible Possible
maintenance (*5) (*9) (*5) (*9) (*5) (*9) (*5) (*9) (*10)
(*10) (*10) (*10)
Restore Possible Possible Possible Possible Possible Possible
(*5) (*9) (*5) (*9) (*5) (*9) (*5) (*9) (*11)
(*11) (*11) (*11)
Format Possible Possible Possible Possible Possible Impossible
(*5) (*5) (*5) (*5) (*9) (*12)
(*13)
Verify Impossible Impossible Impossible Possible Possible Impossible
(*10) (*10) (*10) (*9) (*14) (*10)

THEORY03-03-21
Hitachi Proprietary DW850
Rev.7 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY03-03-30

*1: It is prevented with the message. However, it is possible to perform it by checking the checkbox of
Perform forcibly without safety check .
*2: It is impossible to remove a RAID group in which data is migrated to a spare Disk and the spare
Disk.
*3: (Blank)
*4: It is impossible when high-speed LDEV Format is running. When low-speed LDEV Format is
running, it is possible to replace PDEV in a RAID group in which LDEV Format is not running.
*5: It is possible to perform LDEV maintenance for LDEV defined in a RAID group in which
Dynamic Sparing, Correction Copy, Copy Back or Correction Access is not running.
*6: It is prevented with message [30762-208158]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*7: It is prevented with message [30762-208159]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*8: It is prevented with message [33361-203503:33462-200046]. However, a different message might
be displayed depending on the occurrence timing of the state regarded as a prevention condition.
*9: It is prevented with the message. However, it is possible to perform it from Forcible task without
safety check .
*10: It is prevented with message [03005-002095]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*11: It is prevented with message [03005-202002]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*12: It is prevented with message [03005-202001]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*13: It is prevented with message [03005-202005]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*14: It is prevented with message [03005-002011]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*15: It is prevented with message [30762-208159].
• When the RAID group to which the maintenance target PDEV belongs and the RAID group
whose Dynamic Sparing / Correction Copy / Copy Back is operating are not identical, it is
possible to perform it by checking the checkbox of “Perform forcibly without safety check”.
• When the RAID group to which the maintenance target PDEV belongs and the RAID group
whose Dynamic Sparing / Correction Copy / Copy Back is operating are identical and the RAID
level is RAID 6, it is possible to perform it by checking the checkbox of “Perform forcibly
without safety check” depending on the status of the PDEV other than the maintenance target.
However, a different message might be displayed depending on the occurrence timing of the state
regarded as a prevention condition.

THEORY03-03-30
Hitachi Proprietary DW850
Rev.5 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY03-03-31

*16: • For the firmware version earlier than 88-02-04-x0/xx, increasing or decreasing SM is suppressed
with message [30762-208180]. Resolve the blockade, then retry the operation.
• For the firmware version earlier than 88-02-04-x0/xx, increasing or decreasing CM is suppressed
with message [30762-208180]. To perform the operation, enable “Perform forcibly without safety
check” by checking its checkbox.
*17: For the firmware version earlier than 88-02-04-x0/xx, adding or removing a CHB/DKB is
suppressed with message [30762-208180]. To perform the operation, enable Perform forcibly
without safety check by checking its checkbox.
*18: For the firmware version earlier than 88-02-04-x0/xx, removing a PDEV is suppressed with
message [30762-208180]. To perform the operation, enable Perform forcibly without safety
check by checking its checkbox. Adding a PDEV is not suppressed.
*19: • For the firmware version 88-03-29-x0/xx or later
The maintenance operation is possible.
• For firmware versions other than the above
The maintenance operation is prevented with message [30762-208159]. However, a different
message might be displayed depending on the occurrence timing of the state regarded as a
prevention condition.

THEORY03-03-31
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY03-04-10

3.4 Inter Mix of Drives


Table 3-3 shows permitted coexistence of RAID levels and HDD types respectively.

Table 3-3 Specifications for Coexistence of Elements


Item                          Specification                                                       Remarks
Coexistence of RAID levels    RAID1 (2D+2D, 4D+4D), RAID5 (3D+1P, 4D+1P, 6D+1P, 7D+1P), and
                              RAID6 (6D+2P, 12D+2P, 14D+2P) can exist in the system.
Drive type                    Different drive types can be mixed for each parity group.
Spare drive                   When the following conditions 1 and 2 are met, the drives can be
                              used as spare drives.
                              1. Capacity of the spare drives is the same as or larger than the
                                 drives in operation.
                              2. The type of the drives in operation and the type of the spare
                                 drives fulfill the following conditions.

Type of Drive in Operation Type of Usable Spare Drive


HDD (7.2 krpm) HDD (7.2 krpm)
HDD (10 krpm) HDD (10 krpm)
HDD (15 krpm) HDD (15 krpm)
SAS SSD (*1) SAS SSD (*1)
NVMe SSD NVMe SSD
FMD (xRyFN) (*2) FMD (xRyFN, xRyFP) (*2)
FMD (xRyFP) (*2) FMD (xRyFP) (*2)

*1: When the drive in operation is 1R9MGM, 1T9MGM cannot be


used as a spare drive.
*2: x and y are an arbitrary number. Some drives do not contain
the number of y (e.g. 14RFP).
The numbers (x, y) of Type of Drive in Operation need not be the
same as those of Type of Usable Spare Drive.
For example, when the drives in operation are 1R6FN, the drives
of 1R6FN, 7R0FP, etc. can be used as spare drives.
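
The two spare-drive conditions above can be summarized as a small check, sketched below under the assumption that each drive is described by a simplified type tag and a capacity value. The tags and the compatibility mapping merely restate the table; this is an illustration, not a supported tool.

```python
# Hedged sketch of the spare-drive eligibility rules in Table 3-3.
# A spare must be at least as large as the drive in operation, and its type
# must be allowed by the compatibility rules (FMD xRyFN may also use xRyFP).
COMPATIBLE_SPARES = {
    "HDD7.2K": {"HDD7.2K"},
    "HDD10K": {"HDD10K"},
    "HDD15K": {"HDD15K"},
    "SAS_SSD": {"SAS_SSD"},      # exception in *1 (1R9MGM/1T9MGM) not modeled
    "NVME_SSD": {"NVME_SSD"},
    "FMD_FN": {"FMD_FN", "FMD_FP"},
    "FMD_FP": {"FMD_FP"},
}

def is_usable_spare(in_op_type, in_op_capacity, spare_type, spare_capacity):
    """Condition 1: capacity; condition 2: type compatibility."""
    return (spare_capacity >= in_op_capacity
            and spare_type in COMPATIBLE_SPARES[in_op_type])

assert is_usable_spare("FMD_FN", 1.6, "FMD_FP", 7.0)      # 1R6FN -> 7R0FP allowed
assert not is_usable_spare("FMD_FP", 7.0, "FMD_FN", 7.0)  # FP cannot use FN spares
```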

THEORY03-04-10
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-01-10

4. Appendixes
4.1 DB Number - C/R Number Matrix
In the case of VSP G900, VSP F900, VSP G700, VSP F700, VSP G370, VSP F370, VSP G350, VSP F350,
VSP G130
For 12-bit DB#/RDEV# indicated in the PLC (Parts Location Code) of ACC and the SIM-RC, the relation
between the contents of bits and HDD location# is shown below. The correspondence between DB# and
CDEV# for each storage system model is also shown.

1. Relation between DB#, RDEV#, and HDD location#


DB# and RDEV# are indicated in the following format.

• DB#/RDEV# format
X (4 bit) Y (4 bit) Z (4 bit)
x x x x y y y y z z z z
DB# (6 bit) RDEV# (6 bit)

Example: In the case of XYZ = 5A5 (Hex) (Hex: Hexadecimal, Dec: Decimal)
5 A 5
0 1 0 1 1 0 1 0 0 1 0 1
DB# = 16 (Hex) RDEV# = 25 (Hex)
= 22 (Dec) = 37 (Dec)

The relation between DB#, RDEV#, and HDD location# is shown below.
• HDDxx-yy
RDEV# (Dec)
DB# (Dec)

Example: In the case of XYZ = 5A5 (Hex)


HDD22-37

The following is the relation between 12-bit DB#/RDEV#, DB#, RDEV# (R#), and HDD location# for
DB-00. For DB-01 or later, the relation between DB#/RDEV#, DB#, RDEV#, and HDD location# is the
same as that for DB-00.
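
The bit split described above can also be expressed as a short calculation. The following hedged Python sketch decodes a 12-bit DB#/RDEV# value into the HDD location number exactly as in the XYZ = 5A5 example.

```python
# Decode a 12-bit DB#/RDEV# value (XYZ) into the HDD location number.
# Upper 6 bits = DB#, lower 6 bits = RDEV#, both printed in decimal.
def hdd_location_12bit(xyz: int) -> str:
    db = (xyz >> 6) & 0x3F      # DB# (6 bits)
    rdev = xyz & 0x3F           # RDEV# (6 bits)
    return f"HDD{db:02d}-{rdev:02d}"

assert hdd_location_12bit(0x5A5) == "HDD22-37"   # example above
assert hdd_location_12bit(0x000) == "HDD00-00"
```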

THEORY04-01-10
Hitachi Proprietary DW850
Rev.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-01-20

Table 4-1 DB Number - R Number Matrix (DB-00 (*1))


SIM-RC/PLC Drive Box Number RDEV# HDD location#
DB#/RDEV# (Hex) (DB#) (R#)
000 DB-00 00 HDD00-00
001 01 HDD00-01
002 02 HDD00-02
003 03 HDD00-03
004 04 HDD00-04
005 05 HDD00-05
006 06 HDD00-06
007 07 HDD00-07
008 08 HDD00-08
009 09 HDD00-09
00A 10 HDD00-10
00B 11 HDD00-11
00C 12 HDD00-12
00D 13 HDD00-13
00E 14 HDD00-14
00F 15 HDD00-15
010 16 HDD00-16
011 17 HDD00-17
012 18 HDD00-18
013 19 HDD00-19
014 20 HDD00-20
015 21 HDD00-21
016 22 HDD00-22
017 23 HDD00-23
018 24 HDD00-24
019 25 HDD00-25
01A 26 HDD00-26
01B 27 HDD00-27
01C 28 HDD00-28
01D 29 HDD00-29
01E 30 HDD00-30
01F 31 HDD00-31
(To be continued)

THEORY04-01-20
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-01-30

(Continued from the preceding page)


SIM-RC/PLC Drive Box Number RDEV# HDD location#
DB#/RDEV# (Hex) (DB#) (R#)
020 DB-00 32 HDD00-32
021 33 HDD00-33
022 34 HDD00-34
023 35 HDD00-35
024 36 HDD00-36
025 37 HDD00-37
026 38 HDD00-38
027 39 HDD00-39
028 40 HDD00-40
029 41 HDD00-41
02A 42 HDD00-42
02B 43 HDD00-43
02C 44 HDD00-44
02D 45 HDD00-45
02E 46 HDD00-46
02F 47 HDD00-47
030 48 HDD00-48
031 49 HDD00-49
032 50 HDD00-50
033 51 HDD00-51
034 52 HDD00-52
035 53 HDD00-53
036 54 HDD00-54
037 55 HDD00-55
038 56 HDD00-56
039 57 HDD00-57
03A 58 HDD00-58
03B 59 HDD00-59
*1: In the case of CBXSS/CBXSL/CBSS/CBSL, DB-00 is contained in DKC.

THEORY04-01-30
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-01-40

2. Matrix of correspondence between DB# and CDEV# (C#)


The correspondence between DB# and C# differs depending on storage system models as shown in the
following matrix table.
NOTE: The number of drive boxes that can be connected depends on storage system models.

Table 4-2 DB Number - C Number Matrix


• VSP G900, VSP F900
DB# C# DB# C# DB# C# DB# C# DB# C# DB# C#
(Dec) (Hex) (Dec) (Hex) (Dec) (Hex) (Dec) (Hex) (Dec) (Hex) (Dec) (Hex)
DB-00 00 DB-08 02 DB-16 04 DB-24 40 DB-32 42 DB-40 44
DB-01 10 DB-09 12 DB-17 14 DB-25 50 DB-33 52 DB-41 54
DB-02 20 DB-10 22 DB-18 24 DB-26 60 DB-34 62 DB-42 64
DB-03 30 DB-11 32 DB-19 34 DB-27 70 DB-35 72 DB-43 74
DB-04 01 DB-12 03 DB-20 05 DB-28 41 DB-36 43 DB-44 45
DB-05 11 DB-13 13 DB-21 15 DB-29 51 DB-37 53 DB-45 55
DB-06 21 DB-14 23 DB-22 25 DB-30 61 DB-38 63 DB-46 65
DB-07 31 DB-15 33 DB-23 35 DB-31 71 DB-39 73 DB-47 75

• VSP G700, VSP F700


DB# C# DB# C# DB# C# DB# C# DB# C# DB# C#
(Dec) (Hex) (Dec) (Hex) (Dec) (Hex) (Dec) (Hex) (Dec) (Hex) (Dec) (Hex)
DB-00 00 DB-08 04 DB-16 08 DB-24 20 DB-32 24 DB-40 28
DB-01 10 DB-09 14 DB-17 18 DB-25 30 DB-33 34 DB-41 38
DB-02 01 DB-10 05 DB-18 09 DB-26 21 DB-34 25 DB-42 29
DB-03 11 DB-11 15 DB-19 19 DB-27 31 DB-35 35 DB-43 39
DB-04 02 DB-12 06 DB-20 0A DB-28 22 DB-36 26 DB-44 2A
DB-05 12 DB-13 16 DB-21 1A DB-29 32 DB-37 36 DB-45 3A
DB-06 03 DB-14 07 DB-22 0B DB-30 23 DB-38 27 DB-46 2B
DB-07 13 DB-15 17 DB-23 1B DB-31 33 DB-39 37 DB-47 3B

• VSP G370, VSP F370, VSP G350, VSP F350, VSP G130
DB# C# DB# C#
(Dec) (Hex) (Dec) (Hex)
DB-00 00 DB-08 17
(*1)
DB-01 10 DB-09 18
DB-02 11 DB-10 19
DB-03 12 DB-11 1A
DB-04 13
DB-05 14
DB-06 15
DB-07 16
*1: DB-00 is contained in the DKC.
THEORY04-01-40
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-01-50

In the case of VSP E990


For 13-bit DB#/HDD# indicated in the PLC (Parts Location Code) of ACC and the SIM-RC, the relation
between the contents of bits and HDD location# is shown below.

SIM Reference Code (SIM-RC) format


W (4bit) X (4bit) Y (4bit) Z (4bit)
w w w w x x x x y y y y z z z z
A (8bit) B (5bit)

The relation between DB#, HDD#, and HDD location# is shown below.
• HDDxx-yy
HDD# (Decimal) (*2)
DB# (Decimal) (*1)

*1: DB# can be calculated with the following calculating formula:


DB# = A (RC 6~13bit)/2 (Omit decimals)
*2: HDD# can be calculated with the following calculating formula:
A (r) = The remainder of (A (RC 6~13bit)/2)
HDD# = A (r) × 12 + B (RC 0~5bit)

Example:SIM : eb75a5 (Case of Correction access occurred (eb6xxx/eb7xxx))


SIM : eb75a5
ebWXYZ = 75a5

7 5 a 5
0 1 1 1 0 1 0 1 1 0 1 0 0 1 0 1
A = AD (Hexadecimal) B = 05 (Hexadecimal)
173 (Decimal) 5 (Decimal)

DB# = 173 (A) /2 = 86 (Omit decimals)


A (r) = The remainder of (173 (A)/2) = 1
HDD# = 1 (A (r)) × 12 + 5 (B) = 17

The HDD location number is HDD86-17.
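
The same calculation for VSP E990 can be sketched as follows. The function takes the WXYZ part of the reference code as a 16-bit value; the bit positions used for A and B are inferred from the worked example above and checked against Table 4-3.

```python
# Decode the VSP E990 13-bit DB#/HDD# from the WXYZ part of the reference code.
# B = lower 5 bits, A = next 8 bits; DB# = A // 2, HDD# = (A % 2) * 12 + B.
def hdd_location_e990(wxyz: int) -> str:
    b = wxyz & 0x1F             # B (5 bits)
    a = (wxyz >> 5) & 0xFF      # A (8 bits)
    db = a // 2                 # omit decimals
    hdd = (a % 2) * 12 + b
    return f"HDD{db:02d}-{hdd:02d}"

assert hdd_location_e990(0x75A5) == "HDD86-17"   # SIM eb75a5 example above
assert hdd_location_e990(0x0020) == "HDD00-12"   # matches Table 4-3
```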

THEORY04-01-50
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-01-60

Table 4-3 DB Number - C/R Number Matrix (VSP E990)


Reference Code/PLC Drive Box CDEV# (HEX) RDEV# (HEX) HDD Location Number
Number (DB#)
0000 DB-00 0 0 HDD00-00
0001 1 HDD00-01
0002 2 HDD00-02
0003 3 HDD00-03
0004 4 HDD00-04
0005 5 HDD00-05
0006 6 HDD00-06
0007 7 HDD00-07
0008 8 HDD00-08
0009 9 HDD00-09
000A A HDD00-10
000B B HDD00-11
0020 4 0 HDD00-12
0021 1 HDD00-13
0022 2 HDD00-14
0023 3 HDD00-15
0024 4 HDD00-16
0025 5 HDD00-17
0026 6 HDD00-18
0027 7 HDD00-19
0028 8 HDD00-20
0029 9 HDD00-21
002A A HDD00-22
002B B HDD00-23
0040 DB-01 1 0 HDD01-00
0041 1 HDD01-01
0042 2 HDD01-02
0043 3 HDD01-03
0044 4 HDD01-04
0045 5 HDD01-05
0046 6 HDD01-06
0047 7 HDD01-07
0048 8 HDD01-08
0049 9 HDD01-09
004A A HDD01-10
004B B HDD01-11
0060 5 0 HDD01-12
0061 1 HDD01-13
0062 2 HDD01-14
0063 3 HDD01-15
0064 4 HDD01-16
0065 5 HDD01-17
0066 6 HDD01-18
0067 7 HDD01-19
0068 8 HDD01-20
0069 9 HDD01-21
006A A HDD01-22
006B B HDD01-23
(To be continued)
THEORY04-01-60
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-01-70

(Continued from the preceding page)


Reference Code/PLC Drive Box CDEV# (HEX) RDEV# (HEX) HDD Location Number
Number (DB#)
0080 DB-02 2 0 HDD02-00
0081 1 HDD02-01
0082 2 HDD02-02
0083 3 HDD02-03
0084 4 HDD02-04
0085 5 HDD02-05
0086 6 HDD02-06
0087 7 HDD02-07
0088 8 HDD02-08
0089 9 HDD02-09
008A A HDD02-10
008B B HDD02-11
00A0 6 0 HDD02-12
00A1 1 HDD02-13
00A2 2 HDD02-14
00A3 3 HDD02-15
00A4 4 HDD02-16
00A5 5 HDD02-17
00A6 6 HDD02-18
00A7 7 HDD02-19
00A8 8 HDD02-20
00A9 9 HDD02-21
00AA A HDD02-22
00AB B HDD02-23
00C0 DB-03 3 0 HDD03-00
00C1 1 HDD03-01
00C2 2 HDD03-02
00C3 3 HDD03-03
00C4 4 HDD03-04
00C5 5 HDD03-05
00C6 6 HDD03-06
00C7 7 HDD03-07
00C8 8 HDD03-08
00C9 9 HDD03-09
00CA A HDD03-10
00CB B HDD03-11
00E0 7 0 HDD03-12
00E1 1 HDD03-13
00E2 2 HDD03-14
00E3 3 HDD03-15
00E4 4 HDD03-16
00E5 5 HDD03-17
00E6 6 HDD03-18
00E7 7 HDD03-19
00E8 8 HDD03-20
00E9 9 HDD03-21
00EA A HDD03-22
00EB B HDD03-23

THEORY04-01-70
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-02-10

4.2 Comparison of Pair Status on Storage Navigator, Command Control Interface (CCI)
Table 4-4 Comparison of Pair Status on Storage Navigator, CCI
No. Event Status on CCI Status on Storage Navigator
1 Simplex Volume P-VOL: SMPL P-VOL: SMPL
S-VOL: SMPL S-VOL: SMPL
2 Copying LU Volume P-VOL: PDUB P-VOL: PDUB
Partly completed (SYNC only) S-VOL: PDUB S-VOL: PDUB
3 Copying Volume P-VOL: COPY P-VOL: COPY
S-VOL: COPY S-VOL: COPY
4 Pair volume P-VOL: PAIR P-VOL: PAIR
S-VOL: PAIR S-VOL: PAIR
5 Pairsplit operation to P-VOL P-VOL: PSUS P-VOL: PSUS (S-VOL by operator)
S-VOL: SSUS S-VOL: PSUS (S-VOL by operator)/
SSUS
6 Pairsplit operation to S-VOL P-VOL: PSUS P-VOL: PSUS (S-VOL by operator)
S-VOL: PSUS S-VOL: PSUS (S-VOL by operator)
7 Pairsplit -P operation (*1) P-VOL: PSUS P-VOL: PSUS (P-VOL by operator)
(P-VOL failure, SYNC only) S-VOL: SSUS S-VOL: PSUS (by MCU)/SSUS
8 Pairsplit -R operation (*1) P-VOL: PSUS P-VOL: PSUS(Delete pair to RCU)
S-VOL: SMPL S-VOL: SMPL
9 P-VOL Suspend (failure) P-VOL: PSUE P-VOL: PSUE (S-VOL failure)
S-VOL: SSUS S-VOL: PSUE (S-VOL failure)/
SSUS
10 S-VOL Suspend (failure) P-VOL: PSUE P-VOL: PSUE (S-VOL failure)
S-VOL: PSUE S-VOL: PSUE (S-VOL failure)
11 PS ON failure P-VOL: PSUE P-VOL: PSUE (MCU IMPL)
S-VOL: — S-VOL: —
12 Copy failure (P-VOL failure) P-VOL: PSUE P-VOL: PSUE (Initial copy failed)
S-VOL: SSUS S-VOL: PSUE (Initial copy failed)/
SSUS
13 Copy failure (S-VOL failure) P-VOL: PSUE P-VOL: PSUE (Initial copy failed)
S-VOL: PSUE S-VOL: PSUE (Initial copy failed)
14 RCU accepted the notification of P-VOL: — P-VOL: —
MCU’s P/S-OFF S-VOL: SSUS S-VOL: PSUE (MCU P/S OFF)/
SSUS
15 MCU detected the failure of RCU P-VOL: PSUE P-VOL: PSUS (by RCU)/PSUE
S-VOL: PSUE S-VOL: PSUE (S-VOL failure)

*1: Operation on CCI

THEORY04-02-10
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-03-10

4.3 Parts Number of Correspondence Table

Table 4-5 Relationship between Cluster # and CTL #


Cluster CTL Cluster# (HEX) CTL# (HEX)
Location Name
Cluster-1 CTL1 0x00 0x00
Cluster-2 CTL2 0x01 0x01

Table 4-6 Relationship between Cluster # and MPU#, MP#


Cluster CTL MPU# VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
Location Name (HEX) MP# (HEX) MP# (HEX) MP# (HEX) MP# (HEX) MP# (HEX)
Cluster-1 CTL1 0x00 0x00, 0x01 0x00 ~ 0x05 0x00 ~ 0x09 0x00 ~ 0x0B 0x00 ~ 0x13
Cluster-2 CTL2 0x01 0x04, 0x05 0x08 ~ 0x0D 0x20 ~ 0x29 0x20 ~ 0x2B 0x20 ~ 0x33

Cluster CTL MPU# VSP E990


Location Name (HEX) MP# (HEX)
Cluster-1 CTL1 0x00 0x00 ~ 0x1B
Cluster-2 CTL2 0x01 0x1C ~ 0x37

Table 4-7 Correspondence Table of Cluster # and MP # of VSP G130 and a Variety of
Numbering
Cluster CTL MP# PK#
Location Name Hardware Internal MPU# MP in MPPK# MPPK in
Part MP# MP#
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
Cluster-2 CTL2 0x02 0x04 0x01 0x00 0x01 0x00
0x03 0x05 0x01 0x01

THEORY04-03-10
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-03-20

Table 4-8 Correspondence Table of Cluster # and MP # of VSP G350, VSP F350 and a
Variety of Numbering
Cluster CTL MP# PK#
Location Name Hardware Internal MPU# MP in MPPK# MPPK in
Part MP# MP#
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
Cluster-2 CTL2 0x06 0x08 0x01 0x00 0x01 0x00
0x07 0x09 0x01 0x01
0x08 0x0A 0x02 0x02
0x09 0x0B 0x03 0x03
0x0A 0x0C 0x04 0x04
0x0B 0x0D 0x05 0x05

Table 4-9 Correspondence Table of Cluster # and MP # of VSP G370, VSP F370 and a
Variety of Numbering
Cluster CTL MP# PK#
Location Name Hardware Internal MPU# MP in MPPK# MPPK in
Part MP# MP#
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
0x06 0x06 0x06 0x06
0x07 0x07 0x07 0x07
0x08 0x08 0x08 0x08
0x09 0x09 0x09 0x09
Cluster-2 CTL2 0x0A 0x20 0x01 0x00 0x01 0x00
0x0B 0x21 0x01 0x01
0x0C 0x22 0x02 0x02
0x0D 0x23 0x03 0x03
0x0E 0x24 0x04 0x04
0x0F 0x25 0x05 0x05
0x10 0x26 0x06 0x06
0x11 0x27 0x07 0x07
0x12 0x28 0x08 0x08
0x13 0x29 0x09 0x09

THEORY04-03-20
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-03-30

Table 4-10 Correspondence Table of Cluster # and MP # of VSP G700, VSP F700 and a
Variety of Numbering
Cluster CTL MP# PK#
Location Name Hardware Internal MPU# MP in MPPK# MPPK in
Part MP# MP#
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
0x06 0x06 0x06 0x06
0x07 0x07 0x07 0x07
0x08 0x08 0x08 0x08
0x09 0x09 0x09 0x09
0x0A 0x0A 0x0A 0x0A
0x0B 0x0B 0x0B 0x0B
Cluster-2 CTL2 0x0C 0x20 0x01 0x00 0x01 0x00
0x0D 0x21 0x01 0x01
0x0E 0x22 0x02 0x02
0x0F 0x23 0x03 0x03
0x10 0x24 0x04 0x04
0x11 0x25 0x05 0x05
0x12 0x26 0x06 0x06
0x13 0x27 0x07 0x07
0x14 0x28 0x08 0x08
0x15 0x29 0x09 0x09
0x16 0x2A 0x0A 0x0A
0x17 0x2B 0x0B 0x0B

THEORY04-03-30
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-03-40

Table 4-11 Correspondence Table of Cluster # and MP # of VSP G900, VSP F900 and a
Variety of Numbering
Cluster CTL MP# PK#
Location Name Hardware Internal MPU# MP in MPPK# MPPK in
Part MP# MP#
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
0x06 0x06 0x06 0x06
0x07 0x07 0x07 0x07
0x08 0x08 0x08 0x08
0x09 0x09 0x09 0x09
0x0A 0x0A 0x0A 0x0A
0x0B 0x0B 0x0B 0x0B
0x0C 0x0C 0x0C 0x0C
0x0D 0x0D 0x0D 0x0D
0x0E 0x0E 0x0E 0x0E
0x0F 0x0F 0x0F 0x0F
0x10 0x10 0x10 0x10
0x11 0x11 0x11 0x11
0x12 0x12 0x12 0x12
0x13 0x13 0x13 0x13
Cluster-2 CTL2 0x14 0x20 0x01 0x00 0x01 0x00
0x15 0x21 0x01 0x01
0x16 0x22 0x02 0x02
0x17 0x23 0x03 0x03
0x18 0x24 0x04 0x04
0x19 0x25 0x05 0x05
0x1A 0x26 0x06 0x06
0x1B 0x27 0x07 0x07
0x1C 0x28 0x08 0x08
0x1D 0x29 0x09 0x09
0x1E 0x2A 0x0A 0x0A
0x1F 0x2B 0x0B 0x0B
0x20 0x2C 0x0C 0x0C
0x21 0x2D 0x0D 0x0D
0x22 0x2E 0x0E 0x0E
0x23 0x2F 0x0F 0x0F
0x24 0x30 0x10 0x10
0x25 0x31 0x11 0x11
0x26 0x32 0x12 0x12
0x27 0x33 0x13 0x13
THEORY04-03-40
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-03-41

Table 4-12 Correspondence Table of Cluster # and MP # of VSP E990 and a Variety of
Numbering
Cluster CTL MP# PK#
Location Name Hardware Internal MPU# MP in MPPK# MPPK in
Part MP# MP#
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
0x06 0x06 0x06 0x06
0x07 0x07 0x07 0x07
0x08 0x08 0x08 0x08
0x09 0x09 0x09 0x09
0x0A 0x0A 0x0A 0x0A
0x0B 0x0B 0x0B 0x0B
0x0C 0x0C 0x0C 0x0C
0x0D 0x0D 0x0D 0x0D
0x0E 0x0E 0x0E 0x0E
0x0F 0x0F 0x0F 0x0F
0x10 0x10 0x10 0x10
0x11 0x11 0x11 0x11
0x12 0x12 0x12 0x12
0x13 0x13 0x13 0x13
0x14 0x14 0x14 0x14
0x15 0x15 0x15 0x15
0x16 0x16 0x16 0x16
0x17 0x17 0x17 0x17
0x18 0x18 0x18 0x18
0x19 0x19 0x19 0x19
0x1A 0x1A 0x1A 0x1A
0x1B 0x1B 0x1B 0x1B
(To be continued)

THEORY04-03-41

(Continued from the preceding page)


Cluster      CTL     MP#                              PK#
Location     Name    Hardware Part    Internal        MPU#    MP# in MPU    MPPK#    MP# in MPPK
Cluster-2 CTL2 0x1C 0x1C 0x01 0x00 0x01 0x00
0x1D 0x1D 0x01 0x01
0x1E 0x1E 0x02 0x02
0x1F 0x1F 0x03 0x03
0x20 0x20 0x04 0x04
0x21 0x21 0x05 0x05
0x22 0x22 0x06 0x06
0x23 0x23 0x07 0x07
0x24 0x24 0x08 0x08
0x25 0x25 0x09 0x09
0x26 0x26 0x0A 0x0A
0x27 0x27 0x0B 0x0B
0x28 0x28 0x0C 0x0C
0x29 0x29 0x0D 0x0D
0x2A 0x2A 0x0E 0x0E
0x2B 0x2B 0x0F 0x0F
0x2C 0x2C 0x10 0x10
0x2D 0x2D 0x11 0x11
0x2E 0x2E 0x12 0x12
0x2F 0x2F 0x13 0x13
0x30 0x30 0x14 0x14
0x31 0x31 0x15 0x15
0x32 0x32 0x16 0x16
0x33 0x33 0x17 0x17
0x34 0x34 0x18 0x18
0x35 0x35 0x19 0x19
0x36 0x36 0x1A 0x1A
0x37 0x37 0x1B 0x1B

THEORY04-03-42

Table 4-13 Relationship between Cluster # and CHB#, DKB#


Cluster CHB/DKB VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
Location Name VSP F350 VSP F370 VSP F700 VSP F900
VSP E990
CHB# DKB# CHB# DKB# CHB# DKB# CHB# DKB# CHB# DKB#
Cluster-1 CHB-1A 0x00 − 0x00 − 0x00 − 0x00 − 0x00 −
CHB-1B − − 0x01 − 0x01 − 0x01 − 0x01 −
CHB-1C / DKB-1C − 0x00 − 0x00 − 0x00 0x02 − 0x02 −
CHB-1D − − − − − − 0x03 − 0x03 −
CHB-1E / DKB-1E − − − − − − 0x04 − 0x04 0x02
CHB-1F / DKB-1F − − − − − − 0x05 − 0x05 0x03
CHB-1G / DKB-1G − − − − − − 0x06 0x00 0x06 0x00
CHB-1H / DKB-1H − − − − − − 0x07 0x01 0x07 0x01
CHB-1J (*1) − − − − − − − − 0x08 −
CHB-1K (*1) − − − − − − − − 0x09 −
CHB-1L (*1) − − − − − − − − 0x0A −
CHB-1M (*1) − − − − − − − − 0x0B −
Cluster-2 CHB-2A 0x01 − 0x02 − 0x10 − 0x10 − 0x10 −
CHB-2B − − 0x03 − 0x11 − 0x11 − 0x11 −
CHB-2C / DKB-2C − 0x04 − 0x04 − 0x04 0x12 − 0x12 −
CHB-2D − − − − − − 0x13 − 0x13 −
CHB-2E / DKB-2E − − − − − − 0x14 − 0x14 0x06
CHB-2F / DKB-2F − − − − − − 0x15 − 0x15 0x07
CHB-2G / DKB-2G − − − − − − 0x16 0x04 0x16 0x04
CHB-2H / DKB-2H − − − − − − 0x17 0x05 0x17 0x05
CHB-2J (*1) − − − − − − − − 0x18 −
CHB-2K (*1) − − − − − − − − 0x19 −
CHB-2L (*1) − − − − − − − − 0x1A −
CHB-2M (*1) − − − − − − − − 0x1B −
*1: When the Channel Board Box is mounted.

THEORY04-03-50

Table 4-14 Relationship between Cluster # and Channel Port#


Cluster CHB VSP G130 VSP G350, VSP G370, VSP G700, VSP G900,
Location Name Channel Port# VSP F350 VSP F370 VSP F700 VSP F900,
(HEX) Channel Port# Channel Port# Channel Port# VSP E990
(HEX) (HEX) (HEX) Channel Port#
(HEX)
Cluster-1 CHB-1A 0x00, 0x01 0x00 to 0x03 0x00 to 0x03 0x00 to 0x03 0x00 to 0x03
CHB-1B − 0x04 to 0x07 0x04 to 0x07 0x04 to 0x07 0x04 to 0x07
CHB-1C − − − 0x08 to 0x0B 0x08 to 0x0B
CHB-1D − − − 0x0C to 0x0F 0x0C to 0x0F
CHB-1E − − − 0x10 to 0x13 0x10 to 0x13
CHB-1F − − − 0x14 to 0x17 0x14 to 0x17
CHB-1G − − − 0x18 to 0x1B 0x18 to 0x1B
CHB-1H − − − 0x1C to 0x1F 0x1C to 0x1F
CHB-1J (*1) − − − − 0x20 to 0x23
CHB-1K (*1) − − − − 0x24 to 0x27
CHB-1L (*1) − − − − 0x28 to 0x2B
CHB-1M (*1) − − − − 0x2C to 0x2F
Cluster-2 CHB-2A 0x04, 0x05 0x08 to 0x0B 0x40 to 0x43 0x40 to 0x43 0x40 to 0x43
CHB-2B − 0x0C to 0x0F 0x44 to 0x47 0x44 to 0x47 0x44 to 0x47
CHB-2C − − − 0x48 to 0x4B 0x48 to 0x4B
CHB-2D − − − 0x4C to 0x4F 0x4C to 0x4F
CHB-2E − − − 0x50 to 0x53 0x50 to 0x53
CHB-2F − − − 0x54 to 0x57 0x54 to 0x57
CHB-2G − − − 0x58 to 0x5B 0x58 to 0x5B
CHB-2H − − − 0x5C to 0x5F 0x5C to 0x5F
CHB-2J (*1) − − − − 0x60 to 0x63
CHB-2K (*1) − − − − 0x64 to 0x67
CHB-2L (*1) − − − − 0x68 to 0x6B
CHB-2M (*1) − − − − 0x6C to 0x6F
*1: When the Channel Board Box is mounted.
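The port numbers in Table 4-14 follow directly from the CHB# in Table 4-13: the ports of a CHB start at CHB# × 4. The following Python sketch is a reference illustration only (the function name is assumed); it reproduces the Table 4-14 values.

    # Reference sketch (illustration only): channel ports of a CHB are numbered
    # CHB# * 4 through CHB# * 4 + (ports per CHB - 1), matching Table 4-14.
    def channel_ports(chb_number: int, ports_per_chb: int = 4) -> list[int]:
        base = chb_number * 4                 # e.g. CHB# 0x10 (CHB-2A) -> base 0x40
        return [base + i for i in range(ports_per_chb)]

    # VSP G900 CHB-2A has CHB# 0x10, so its ports are 0x40 to 0x43.
    print([hex(p) for p in channel_ports(0x10)])                   # ['0x40', '0x41', '0x42', '0x43']
    # VSP G130 CHBs have two ports each: CHB-2A (CHB# 0x01) -> 0x04 and 0x05.
    print([hex(p) for p in channel_ports(0x01, ports_per_chb=2)])  # ['0x04', '0x05']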

THEORY04-03-60

Table 4-15 Relationship between Cluster # and SASCTL#, SAS Port#


Cluster Location   DKB Name   VSP G370/G350, VSP F370/F350, VSP G130   VSP G700, VSP F700     VSP G900, VSP F900, VSP E990
                              SASCTL#   SASPort#                       SASCTL#   SASPort#     SASCTL#/PSW#   SASPort#/NVMePort#
Cluster-1 DKB-1C 0x00 0x00 to − − − −
0x01
DKB-1E − − − − 0x02 0x04 to
0x05
DKB-1F − − − − 0x03 0x06 to
0x07
DKB-1G − − 0x00 0x00 to 0x00 0x00 to
0x01 0x01
DKB-1H − − 0x01 0x02 to 0x01 0x02 to
0x03 0x03
Cluster-2 DKB-2C 0x04 0x08 to − − − −
0x09
DKB-2E − − − − 0x06 0x0C to
0x0D
DKB-2F − − − − 0x07 0x0E to
0x0F
DKB-2G − − 0x04 0x08 to 0x04 0x08 to
0x09 0x09
DKB-2H − − 0x05 0x0A to 0x05 0x0A to
0x0B 0x0B
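Similarly, each SAS controller in Table 4-15 exposes two ports numbered SASCTL# × 2 and SASCTL# × 2 + 1. A minimal Python sketch (reference only, function name assumed):

    # Reference sketch (illustration only): SAS port numbers from the SASCTL# in Table 4-15.
    def sas_ports(sasctl_number: int) -> tuple[int, int]:
        return sasctl_number * 2, sasctl_number * 2 + 1

    # VSP G900 DKB-2F uses SASCTL# 0x07, so its ports are 0x0E and 0x0F.
    print(tuple(hex(p) for p in sas_ports(0x07)))    # ('0xe', '0xf')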

THEORY04-03-70

4.4 Connection Diagram of DKC

Figure 4-1 VSP G130/G350/G370, VSP F350/F370

• VSP G350/G370, VSP F350/F370
[Block diagram: CTL1 and CTL2 each contain CHB-xA and CHB-xB, DIMM00/DIMM01, an MPU (MPU#0 in CTL1, MPU#1 in CTL2), CFM-x, BKM-x, and DKB-xC with its ENC; the two controllers are connected by the I Path.]

• VSP G130
[Block diagram: CTL1 and CTL2 each contain CHB-xA, DIMM00, an MPU (MPU#0 in CTL1, MPU#1 in CTL2), CFM-x, BAT-x, and DKB-xC with its ENC; the two controllers are connected by the I Path.]

THEORY04-04-10

Figure 4-2 VSP G700, VSP F700

[Block diagram: CTL1 and CTL2 each contain CHB-xA to CHB-xD, DIMM00 to DIMM03 and DIMM10 to DIMM13, an MPU (MPU#0 in CTL1, MPU#1 in CTL2), CFM-x0 and CFM-x1, BKMF-x0 to BKMF-x3, DKB-xG and DKB-xH, and CHB-xE/CHB-xF (CHB-xG/CHB-xH); the two controllers are connected by I Path#0 and I Path#1.]

Figure 4-3 VSP G900, VSP F900, VSP E990

[Block diagram: CTL1 and CTL2 each contain CHB-xA to CHB-xD, DIMM00 to DIMM03 and DIMM10 to DIMM13, an MPU (MPU#0 in CTL1, MPU#1 in CTL2), CFM-x0 and CFM-x1, BKMF-x0 to BKMF-x3, and DKB-xE to DKB-xH (CHB-xE to CHB-xH); the two controllers are connected by I Path#0 and I Path#1.]

THEORY04-04-20

4.5 Channel Interface (Fiber and iSCSI)


4.5.1 Basic Functions
The basic specifications of the Fibre Channel Board and iSCSI Interface board are shown in Table 4-16.

Table 4-16 Basic specifications


Item   Specification (FC)   Specification (iSCSI)
Max. # of Ports VSP G130 : 4 VSP G130 : 4
VSP G370/G350, VSP F370/F350 : VSP G370/G350, VSP F370/F350 :
16 8
VSP G700, VSP F700 : VSP G700, VSP F700 :
Host 48 (HDD less : 64) 24 (HDD less : 32)
Channel VSP G900, VSP F900, VSP E990 : VSP G900, VSP F900, VSP E990 :
48 (HDD less : 64) 24 (HDD less : 32)
64 (HDD less : 80) (*1) 32 (HDD less : 40) (*1)
Max. # of concurrent 256 255
paths/Port
Data transfer DW-F800-4HF32R: 4, 8, 16, 32 Gbps DW-F800-2HS10S: 10 Gbps
DW-F850-CTLXSFA: 4, 8, 16 Gbps DW-F800-2HS10B: 1, 10 Gbps
DW-F850-CTLXSSA: 10 Gbps
DW-F850-CTLXSCA: 1, 10 Gbps
RAID level RAID6/RAID5/RAID1
RAID configuration RAID6
RAID5
RAID1
*1: When the Channel Board Box is mounted.

THEORY04-05-10

4.5.2 Glossary
• iSCSI(Internet Small Computer Systems Interface)
This is a technology that transmits and receives SCSI block data over an IP network.

• iSNS(Internet Storage Name Service)


This is a technology for discovering iSCSI devices on an IP network.

• NIC(Network Interface Card)


This is an interface card for network communication that is installed in a server or PC.
It is also called a network card, LAN card, or LAN board. The term also covers network ports built into
the motherboard of the server or PC.

• CNA(Converged Network Adapter)


This is an integrated network adapter which supports LAN (TCP/IP) and iSCSI at a 10 Gbps Ethernet
speed.

• IPv4(Internet Protocol version 4)


This is an IP address of 32-bit address length.

• IPv6(Internet Protocol version 6)


This is an IP address of 128-bit address length.

• VLAN(Virtual LAN)
This is a technology to create a virtual LAN segment.

• CHAP(Challenge Handshake Authentication Protocol)


This is a user authentication method in which a handshake is performed using an encoded user name and secret.
In the [CHAP] authentication, the iSCSI target authenticates the iSCSI initiator.
Furthermore, in the bidirectional CHAP authentication, the iSCSI target and the iSCSI initiator authenticate
each other.
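As a rough illustration of the CHAP handshake described above (RFC 1994 style, not the storage system's implementation), a one-way CHAP response is an MD5 digest over the identifier, the shared secret, and the challenge issued by the iSCSI target:

    import hashlib
    import os

    # Rough illustration of one-way CHAP (RFC 1994 style); not the product implementation.
    # The target sends an identifier and a random challenge; the initiator answers with
    # MD5(identifier || secret || challenge), which the target verifies with its copy of the secret.
    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    secret = b"initiator-secret"       # shared secret configured on both initiator and target
    challenge = os.urandom(16)         # random challenge issued by the iSCSI target
    response = chap_response(1, secret, challenge)
    assert response == chap_response(1, secret, challenge)   # target-side check succeeds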

• iSCSI Digest
Two digests, the iSCSI Header Digest and the iSCSI Data Digest, check data consistency end to end.
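The iSCSI digests are CRC32C checksums (RFC 3720). The bit-by-bit Python sketch below is an illustration only; real implementations use lookup tables or CPU instructions.

    # Illustrative CRC32C (Castagnoli) calculation, the checksum used by the iSCSI digests.
    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
        return crc ^ 0xFFFFFFFF

    # Commonly cited check value: CRC32C of b"123456789" is 0xE3069283.
    print(hex(crc32c(b"123456789")))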

• iSCSI Name
The iSCSI node has an iSCSI name consisting of a maximum of 223 characters for node identification.
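For example, names in the common IQN format can be checked against the 223-character limit mentioned above. The Python sketch below is illustrative only; the sample name is hypothetical.

    # Illustrative check of an iSCSI name: at most 223 characters, and in one of the
    # common formats ("iqn." or "eui." prefix). The sample name below is hypothetical.
    def is_valid_iscsi_name(name: str) -> bool:
        return len(name) <= 223 and (name.startswith("iqn.") or name.startswith("eui."))

    print(is_valid_iscsi_name("iqn.2020-01.com.example:storage.target01"))   # True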

THEORY04-05-20

4.5.3 Interface Specifications


4.5.3.1 Fibre Channel Physical Interface Specifications
The physical interface specification supported for Fibre Channel (FC) is shown in Table 4-17 to Table 4-18.

Table 4-17 Fibre Channel Physical specification


No. Item Specification Remarks
1 Host interface Physical interface Fibre Channel FC-PH,FC-AL
Logical interface SCSI-3 FCP,FC-PLDA
Fibre FC-AL
2 Data Transfer Optic Fibre cable 4, 8, 16, 32 (*1) Gbps ̶
Rate
3 Cable Length Optic single mode Fibre 10km Longwave laser
Optic multi mode Fibre 500 m/400 m/190 m/125m/ Shortwave laser
100 m (*1)
4 Connector Type LC ̶
5 Topology NL-Port (FC-AL) ̶
F-Port
FL-Port
6 Service class 3 ̶
7 Protocol FCP ̶
8 Transfer code 4, 8 Gbps : 8B/10B encoding ̶
16, 32 Gbps : 64B/66B encoding
9 Number of hosts 255/Path ̶
10 Number of host groups 255/Path ̶
11 Maximum number of LUs 2048/Path ̶
12 PORT/PCB 4 Port ̶
2 Port (VSP G130 only)
*1: See Table 4-43.
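The transfer code in item 8 above affects usable bandwidth: 8B/10B spends 10 line bits per 8 data bits (20% overhead), while 64B/66B spends 66 per 64 (about 3%). The following back-of-the-envelope Python sketch uses nominal line rates for illustration; actual Fibre Channel baud rates differ slightly.

    # Back-of-the-envelope payload bandwidth for the transfer codes in Table 4-17.
    # Nominal line rates only; actual Fibre Channel baud rates differ slightly.
    def payload_gbps(line_rate_gbps: float, data_bits: int, coded_bits: int) -> float:
        return line_rate_gbps * data_bits / coded_bits

    print(payload_gbps(8.0, 8, 10))     # 8 Gbps link with 8B/10B   -> 6.4 Gbps of payload bits
    print(payload_gbps(32.0, 64, 66))   # 32 Gbps link with 64B/66B -> about 31.0 Gbps of payload bits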

THEORY04-05-30

Table 4-18 Fibre Channel Port name


Cluster CHB/DKB VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
Location VSP F350 VSP F370 VSP F700 VSP F900
VSP E990
CHB# DKB# CHB# DKB# CHB# DKB# CHB# DKB# CHB# DKB#
Cluster-1 CHB-1A 0x00 − 0x00 − 0x00 − 0x00 − 0x00 −
CHB-1B − − 0x01 − 0x01 − 0x01 − 0x01 −
CHB-1C / DKB-1C − 0x00 − 0x00 − 0x00 0x02 − 0x02 −
CHB-1D − − − − − − 0x03 − 0x03 −
CHB-1E / DKB-1E − − − − − − 0x04 − 0x04 0x02
CHB-1F / DKB-1F − − − − − − 0x05 − 0x05 0x03
CHB-1G / DKB-1G − − − − − − 0x06 0x00 0x06 0x00
CHB-1H / DKB-1H − − − − − − 0x07 0x01 0x07 0x01
CHB-1J (*1) − − − − − − − − 0x08 −
CHB-1K (*1) − − − − − − − − 0x09 −
CHB-1L (*1) − − − − − − − − 0x0A −
CHB-1M (*1) − − − − − − − − 0x0B −
Cluster-2 CHB-2A 0x01 − 0x02 − 0x10 − 0x10 − 0x10 −
CHB-2B − − 0x03 − 0x11 − 0x11 − 0x11 −
CHB-2C / DKB-2C − 0x04 − 0x04 − 0x04 0x12 − 0x12 −
CHB-2D − − − − − − 0x13 − 0x13 −
CHB-2E / DKB-2E − − − − − − 0x14 − 0x14 0x06
CHB-2F / DKB-2F − − − − − − 0x15 − 0x15 0x07
CHB-2G / DKB-2G − − − − − − 0x16 0x04 0x16 0x04
CHB-2H / DKB-2H − − − − − − 0x17 0x05 0x17 0x05
CHB-2J (*1) − − − − − − − − 0x18 −
CHB-2K (*1) − − − − − − − − 0x19 −
CHB-2L (*1) − − − − − − − − 0x1A −
CHB-2M (*1) − − − − − − − − 0x1B −
*1: When the Channel Board Box is mounted.

THEORY04-05-40

4.5.3.2 iSCSI Physical Interface Specifications


The physical interface specification supported for iSCSI is shown in Table 4-19 to Table 4-20.

Table 4-19 iSCSI Physical specification


No. Item Specification Remarks
1 Host interface Physical interface 10Gbps : 10Gbps SFP+ —
Logical interface RFC3720 —
2 Data Transfer Optic Fibre cable 10Gbps

Rate
3 Cable Length Optic single mode Fibre (Not supported) —
Optic multi mode Fibre OM2:82m, OM3:100m —
4 Connector Type Optic LC ̶
5 Topology — ̶
6 Service class — ̶
7 Protocol TCP/IP, iSCSI ̶
8 Transfer code — ̶
9 Number of hosts 255/Port ̶
10 Number of host groups 255/Port A target in case of
iSCSI
11 Maximum number of LUs 2048/Path (Same as Fibre)
12 PORT/PCB 2 Port ̶

THEORY04-05-50

An iSCSI CHB provides two ports per CHB; the configuration therefore does not include the ports 5x, 6x, 7x, and 8x
that are installed for the Fibre Channel ports.

Table 4-20 CHB/DKB Location Name and Corresponding CHB/DKB #


Cluster CHB/DKB VSP G130 VSP G350 VSP G370 VSP G700 VSP G900
Location VSP F350 VSP F370 VSP F700 VSP F900
VSP E990
CHB# DKB# CHB# DKB# CHB# DKB# CHB# DKB# CHB# DKB#
Cluster-1 CHB-1A 0x00 − 0x00 − 0x00 − 0x00 − 0x00 −
CHB-1B − − 0x01 − 0x01 − 0x01 − 0x01 −
CHB-1C / DKB-1C − 0x00 − 0x00 − 0x00 0x02 − 0x02 −
CHB-1D − − − − − − 0x03 − 0x03 −
CHB-1E / DKB-1E − − − − − − 0x04 − 0x04 0x02
CHB-1F / DKB-1F − − − − − − 0x05 − 0x05 0x03
CHB-1G / DKB-1G − − − − − − 0x06 0x00 0x06 0x00
CHB-1H / DKB-1H − − − − − − 0x07 0x01 0x07 0x01
CHB-1J (*1) − − − − − − − − 0x08 −
CHB-1K (*1) − − − − − − − − 0x09 −
CHB-1L (*1) − − − − − − − − 0x0A −
CHB-1M (*1) − − − − − − − − 0x0B −
Cluster-2 CHB-2A 0x01 − 0x02 − 0x10 − 0x10 − 0x10 −
CHB-2B − − 0x03 − 0x11 − 0x11 − 0x11 −
CHB-2C / DKB-2C − 0x04 − 0x04 − 0x04 0x12 − 0x12 −
CHB-2D − − − − − − 0x13 − 0x13 −
CHB-2E / DKB-2E − − − − − − 0x14 − 0x14 0x06
CHB-2F / DKB-2F − − − − − − 0x15 − 0x15 0x07
CHB-2G / DKB-2G − − − − − − 0x16 0x04 0x16 0x04
CHB-2H / DKB-2H − − − − − − 0x17 0x05 0x17 0x05
CHB-2J (*1) − − − − − − − − 0x18 −
CHB-2K (*1) − − − − − − − − 0x19 −
CHB-2L (*1) − − − − − − − − 0x1A −
CHB-2M (*1) − − − − − − − − 0x1B −
*1: When the Channel Board Box is mounted.

THEORY04-05-60

4.5.4 Volume Specification (Common to Fibre/iSCSI)


1. Volume Specification
The disk drive model numbers and the supported RAID levels are shown in Table 4-21.

Table 4-21 List of DW850 Model number


Model Number Disk Drive model (type name RAID Level
displayed in MPC window) (*1)
DKC-F810I-600JCMC DKR5D-J600SS/DKS5E-J600SS/ RAID1 (2D+2D/4D+4D)
DKR5G-J600SS/DKS5H-J600SS/ RAID5
DKS5K-J600SS/DKS5L-J600SS (3D+1P/4D+1P/6D+1P/7D+1P)
DKC-F810I-1R2JCMC DKR5E-J1R2SS/DKS5F-J1R2SS/ RAID6
DKR5G-J1R2SS/DKS5H-J1R2SS/ (6D+2P/12D+2P/14D+2P)
DKS5K-J1R2SS/DKS5L-J1R2SS
DKC-F810I-1R2J7MC DKR5E-J1R2SS/DKR5G-J1R2SS/
DKS5H-J1R2SS/DKS5K-J1R2SS/
DKS5L-J1R2SS
DKC-F810I-2R4JGM DKS5K-J2R4SS/DKS5L-J2R4SS
DKC-F810I-2R4J8M DKS5K-J2R4SS/DKS5L-J2R4SS
DKC-F810I-480MGM SLB5F-M480SS/SLB5G-M480SS
DKC-F810I-960MGM SLB5F-M960SS/SLB5G-M960SS
DKC-F810I-1R9MGM SLB5E-M1R9SS/SLB5G-M1R9SS
DKC-F810I-1T9MGM SLB5I-M1T9SS/SLM5B-M1T9SS
DKC-F810I-3R8MGM SLB5F-M3R8SS/SLB5G-M3R8SS/
SLR5E-M3R8SS/SLR5F-M3R8SS/
SLM5A-M3R8SS/SLM5B-M3R8SS
DKC-F810I-7R6MGM SLB5G-M7R6SS/SLR5E-M7R6SS/
SLR5F-M7R6SS/SLM5A-M7R6SS/
SLM5B-M7R6SS
DKC-F810I-15RMGM SLB5H-M15RSS/SLR5G-M15RSS/
SLM5B-M15RSS
DKC-F810I-30RMGM SLM5A-M30RSS/SLM5B-M30RSS
DKC-F810I-3R2FN NFHAE-Q3R2SS
DKC-F810I-7R0FP NFHAF-Q6R4SS/NFHAH-Q6R4SS/
NFHAJ-Q6R4SS/NFHAK-Q6R4SS/
NFHAL-Q6R4SS/NFHAM-Q6R4SS
DKC-F810I-14RFP NFHAF-Q13RSS/NFHAH-Q13RSS/
NFHAJ-Q13RSS/NFHAK-Q13RSS/
NFHAM-Q13RSS
DKC-F810I-6R0H9M DKS2F-H6R0SS/DKR2G-H6R0SS/
DKS2H-H6R0SS/DKS2M-H6R0SS
DKC-F810I-6R0HLM DKS2F-H6R0SS/DKR2G-H6R0SS/
DKS2H-H6R0SS/DKS2M-H6R0SS
DKC-F810I-10RH9M DKR2H-H10RSS/DKS2K-H10RSS/
DKS2N-H10RSS
DKC-F810I-10RHLM DKR2H-H10RSS/DKS2J-H10RSS/
DKS2K-H10RSS/DKS2N-H10RSS
DKC-F810I-14RH9M DKS2K-H14RSS/DKS2N-H14RSS
DKC-F810I-14RHLM DKS2K-H14RSS/DKS2N-H14RSS
(To be continued)
THEORY04-05-70

(Continued from the preceding page)


Model Number Disk Drive model (type name RAID Level
displayed in MPC window) (*1)
DKC-F910I-1R9RVM SNR5A-R1R9NC/SNB5A-R1R9NC/ RAID1 (2D+2D/4D+4D)
SNB5B-R1R9NC/SNM5A-R1R9NC RAID5
DKC-F910I-3R8RVM SNR5A-R3R8NC/SNB5A-R3R8NC/ (3D+1P/4D+1P/6D+1P/7D+1P)
SNB5B-R3R8NC/SNM5A-R3R8NC RAID6
DKC-F910I-7R6RVM SNR5A-R7R6NC/SNB5A-R7R6NC/ (6D+2P/12D+2P/14D+2P)
SNB5B-R7R6NC/SNM5A-R7R6NC
DKC-F910I-15RRVM SNB5A-R15RNC/SNB5B-R15RNC/
SNN5A-R15RNC/SNM5A-R15RNC

*1: The disk drive type name displayed in the MPC window might differ from the one on the drive. In
such a case, refer to "INSTALLATION SECTION 1.2.2 Disk Drive Model".
NOTE: • For RAID1, two parity groups can be concatenated (8 HDDs).
In this case, the number of required volumes is doubled.
For RAID5 (7D+1P), two-group and four-group concatenations of RAID groups
(16 HDDs and 32 HDDs) are possible.
In this case, the number of volumes becomes two or four times larger.
When OPEN-V is set in a parity group of the above-mentioned concatenated
configuration, the maximum volume size remains the parity cycle size of the source
(2D+2D) or (7D+1P); it does not become two or four times larger.
• The Storage System capacity differs from the capacity shown on the Maintenance PC
because of the 1 GB = 1,000 MB calculation.
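The difference noted above can be made concrete: the same number of 512-byte logical blocks yields different figures depending on whether decimal units (1 GB = 10^9 bytes) or binary units (1 GiB = 2^30 bytes) are used. The Python sketch below is an illustration; the block count is taken from the 2D+2D row for 600 GB drives in Table 4-28.

    # The same volume size in 512-byte blocks, expressed in decimal GB and binary GiB.
    BLOCK_SIZE = 512
    blocks = 2_251_536_384                         # 2D+2D parity group of 600 GB SAS drives (Table 4-28)

    print(round(blocks * BLOCK_SIZE / 10**9, 1))   # 1152.8 GB  (decimal, 1 GB = 1,000 MB)
    print(round(blocks * BLOCK_SIZE / 2**30, 1))   # 1073.6 GiB (binary)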

THEORY04-05-80

2. List of emulation types


NOTE: In a RAID group with Accelerated Compression enabled, a logical volume whose capacity
is equal to or larger than the drive capacity can be created, depending on the compression
ratio of the data to be stored. For details, see the Provisioning Guide.
(1) VSP G900 and VSP F900
NOTE: DBS and DBF are the Drive Boxes that can be mounted on VSP F900.

Table 4-22 List of emulation types


RAID Level
Storage capacity 2D+2D (RAID1) 3D+1P (RAID5) 4D+1P (RAID5) 6D+1P (RAID5)
(GB/volume)
DBS 600JCMC PG 1 - 287 1 - 287 1 - 229 1 - 164
Capacity 1,152.7 - 330,824.9 1,729.1 - 496,251.7 2,305.5 - 528,881.7 3,458.3 - 565,679.1
1R2JCMC PG 1 - 287 1 - 287 1 - 229 1 - 164
Capacity 2,305.5 - 661,678.5 3,458.3 - 992,532.1 4,611.1 - 1,057,786.3 6,916.7 - 1,131,374.5
2R4JGM PG 1 - 287 1 - 287 1 - 229 1 - 164
Capacity 4,611.1 - 1,323,385.7 6,916.7 - 1,985,092.9 9,222.2 - 2,115,572.7 13,833.4 - 2,262,749.0
960MGM PG 1 - 287 1 - 287 1 - 229 1 - 164
Capacity 1,890.4 - 542,544.8 2,835.6 - 813,817.2 3,780.9 - 867,338.5 5,671.3 - 927,662.6
1R9MGM/ PG 1 - 287 1 - 287 1 - 229 1 - 164
1T9MGM Capacity 3,780.9 - 1,085,118.3 5,671.3 - 1,627,663.1 7,561.8 - 1,734,676.9 11,342.7 - 1,855,341.6
3R8MGM PG 1 - 287 1 - 287 1 - 229 1 - 164
Capacity 7,561.8 - 2,170,236.6 11,342.7 - 3,255,354.9 15,123.6 - 3,469,353.8 22,685.5 - 3,710,699.6
7R6MGM PG 1 - 287 1 - 287 1 - 229 1 - 164
Capacity 15,123.6 - 4,340,473.2 22,685.5 - 6,510,738.5 30,247.3 - 6,938,730.6 45,371.0 - 7,421,399.3
15RMGM PG 1 - 287 1 - 287 1 - 229 1 - 164
Capacity 30,096.9 - 8,637,810.3 45,145.4 - 12,956,729.8 60,193.9 - 13,808,480.7 90,290.9 - 14,769,011.5
30RMGM PG 1 - 287 1 - 287 1 - 229 1 - 164
Capacity 60,191.8 - 17,275,046.6 90,287.7 - 25,912,569.9 120,383.6 - 27,615,997.8 180,575.4 - 29,536,976.1
DBL 6R0H9M PG 1 - 143 1 - 143 1 - 114 1 - 81
Capacity 11,748.4 - 1,680,021.2 17,622.6 - 2,520,031.8 23,496.8 - 2,683,334.6 35,245.2 - 2,864,931.3
10RH9M PG 1 - 143 1 - 143 1 - 114 1 - 81
Capacity 19,580.7 - 2,800,040.1 29,371.0 - 4,200,053.0 39,161.4 - 4,472,231.9 58,742.1 - 4,774,893.6
14RH9M PG 1 - 143 1 - 143 1 - 114 1 - 81
Capacity 27,413.0 - 3,920,059.0 41,119.5 - 5,880,088.5 54,826.0 - 6,261,129.2 82,239.0 - 6,684,855.9
DB60 1R2J7MC PG 1 - 359 1 - 359 1 - 287 1 - 205
Capacity 2,305.5 - 827,674.5 3,458.3 - 1,241,529.7 4,611.1 - 1,323,385.7 6,916.7 - 1,415,947.3
2R4J8M PG 1 - 359 1 - 359 1 - 287 1 - 205
Capacity 4,611.1 - 1,655,384.9 6,916.7 - 2,483,095.3 9,222.2 - 2,646,771.4 13,833.4 - 2,831,894.6
6R0HLM PG 1 - 359 1 - 359 1 - 287 1 - 205
Capacity 11,748.4 - 4,217,675.6 17,622.6 - 6,326,513.4 23,496.8 - 6,743,581.6 35,245.2 - 7,215,195.9
10RHLM PG 1 - 359 1 - 359 1 - 287 1 - 205
Capacity 19,580.7 - 7,029,471.3 29,371.0 - 10,544,189.0 39,161.4 - 11,239,321.8 58,742.1 - 12,025,347.0
14RHLM PG 1 - 359 1 - 359 1 - 287 1 - 205
Capacity 27,413.0 - 9,841,267.0 41,119.5 - 14,761,900.5 54,826.0 - 15,735,062.0 82,239.0 - 16,835,498.1
DBF 3R2FN PG 1 - 143 1 - 143 1 - 114 1 - 81
Capacity 7,036.8 - 1,006,262.4 10,555.3 - 1,509,407.9 14,073.7 - 1,607,216.5 21,110.6 - 1,715,990.2
7R0FP PG 1 - 143 1 - 143 1 - 114 1 - 81
Capacity 14,073.7 - 2,012,546.0 21,110.6 - 3,018,815.8 28,147.4 - 3,214,433.1 42,221.2 - 3,431,980.4
14RFP PG 1 - 143 1 - 143 1 - 114 1 - 81
Capacity 28,147.5 - 4,025,091.9 42,221.2 - 6,037,631.6 56,294.9 - 6,428,877.6 84,442.4 - 6,863,960.8

(To be continued)

THEORY04-05-90

(Continued from the preceding page)


RAID Level
Storage capacity 7D+1P (RAID5) 6D+2P (RAID6) 12D+2P (RAID6) 14D+2P (RAID6)
(GB/volume)
DBS 600JCMC PG 1 - 143 1 - 143 1 - 81 1 - 71
Capacity 4,034.7 - 576,962.1 3,458.3 - 494,536.9 6,916.7 - 562,228.9 8,069.5 - 572,934.5
1R2JCMC PG 1 - 143 1 - 143 1 - 81 1 - 71
Capacity 8,069.5 - 1,153,938.5 6,916.7 - 989,088.1 13,833.4 - 1,124,457.8 16,139.0 - 1,145,869.0
2R4JGM PG 1 - 143 1 - 143 1 - 81 1 - 71
Capacity 16,139.0 - 2,307,877.0 13,833.4 - 1,978,176.2 27,666.8 - 2,248,915.6 32,278.0 - 2,291,738.0
960MGM PG 1 - 143 1 - 143 1 - 81 1 - 71
Capacity 6,616.6 - 946,173.8 5,671.3 - 810,995.9 11,342.7 - 921,999.5 13,233.2 - 939,557.2
1R9MGM/ PG 1 - 143 1 - 143 1 - 81 1 - 71
1T9MGM Capacity 13,233.2 - 1,892,347.6 11,342.7 - 1,622,006.1 22,685.5 - 1,844,007.1 26,466.4 - 1,879,114.4
3R8MGM PG 1 - 143 1 - 143 1 - 81 1 - 71
Capacity 26,466.4 - 3,784,695.2 22,685.5 - 3,244,026.5 45,371.0 - 3,688,014.1 52,932.9 - 3,758,235.9
7R6MGM PG 1 - 143 1 - 143 1 - 81 1 - 71
Capacity 52,932.9 - 7,569,404.7 45,371.0 - 6,488,053.0 90,742.1 - 7,376,036.4 105,865.8 - 7,516,471.8
15RMGM PG 1 - 143 1 - 143 1 - 81 1 - 71
Capacity 105,339.4 - 15,063,534.2 90,290.9 - 12,911,598.7 180,581.8 - 14,678,720.6 210,678.8 - 14,958,194.8
30RMGM PG 1 - 143 1 - 143 1 - 81 1 - 71
Capacity 210,671.3 - 30,125,995.9 180,575.4 - 25,822,282.2 361,150.8 - 29,356,400.7 421,342.7 - 29,915,331.7
DBL 6R0H9M PG 1 - 71 1 - 71 1 - 40 1 - 35
Capacity 41,119.5 - 2,919,484.5 35,245.2 - 2,502,409.2 70,490.5 - 2,829,690.1 82,239.0 - 2,878,365.0
10RH9M PG 1 - 71 1 - 71 1 - 40 1 - 35
Capacity 68,532.5 - 4,865,807.5 58,742.1 - 4,170,689.1 117,484.3 - 4,716,155.5 137,065.0 - 4,797,275.0
14RH9M PG 1 - 71 1 - 71 1 - 40 1 - 35
Capacity 95,945.5 - 6,812,130.5 82,239.0 - 5,838,969.0 164,478.0 - 6,602,616.9 191,891.0 - 6,716,185.0
DB60 1R2J7MC PG 1 - 179 1 - 179 1 - 102 1 - 89
Capacity 8,069.5 - 1,444,440.5 6,916.7 - 1,238,089.3 13,833.4 - 1,409,030.6 16,139.0 - 1,436,371.0
2R4J8M PG 1 - 179 1 - 179 1 - 102 1 - 89
Capacity 16,139.0 - 2,888,881.0 13,833.4 - 2,476,178.6 27,666.8 - 2,818,061.2 32,278.0 - 2,872,742.0
6R0HLM PG 1 - 179 1 - 179 1 - 102 1 - 89
Capacity 41,119.5 - 7,360,390.5 35,245.2 - 6,308,890.8 70,490.5 - 7,179,960.9 82,239.0 - 7,319,271.0
10RHLM PG 1 - 179 1 - 179 1 - 102 1 - 89
Capacity 68,532.5 - 12,267,317.5 58,742.1 - 10,514,835.9 117,484.3 - 11,966,615.1 137,065.0 - 12,198,785.0
14RHLM PG 1 - 179 1 - 179 1 - 102 1 - 89
Capacity 95,945.5 - 17,174,244.5 82,239.0 - 14,720,781.0 164,478.0 - 16,753,259.1 191,891.0 - 17,078,299.0
DBF 3R2FN PG 1 - 71 1 - 71 1 - 40 1 - 35
Capacity 24,629.0 - 1,748,659.0 21,110.6 - 1,498,852.6 42,221.2 - 1,694,879.6 49,258.1 - 1,724,033.5
7R0FP PG 1 - 71 1 - 71 1 - 40 1 - 35
Capacity 49,258.1 - 3,497,325.1 42,221.2 - 2,997,705.2 84,442.4 - 3,389,759.2 98,516.2 - 3,448,067.0
14RFP PG 1 - 71 1 - 71 1 - 40 1 - 35
Capacity 98,516.2 - 6,994,650.2 84,442.4 - 5,995,410.4 168,884.9 - 6,779,522.4 197,032.4 - 6,896,134.0

THEORY04-05-100

(2) VSP G700 and VSP F700


NOTE: DBS and DBF are the Drive Boxes that can be mounted on VSP F700.
Table 4-23 List of emulation types
RAID Level
Storage capacity 2D+2D (RAID1) 3D+1P (RAID5) 4D+1P (RAID5) 6D+1P (RAID5)
(GB/volume)
DBS 600JCMC PG 1 - 215 1 - 215 1 - 172 1 - 122
Capacity 1,152.7 - 247,830.5 1,729.1 - 371,756.5 2,305.5 - 396,084.9 3,458.3 - 423,394.7
1R2JCMC PG 1 - 215 1 - 215 1 - 172 1 - 122
Capacity 2,305.5 - 495,682.5 3,458.3 - 743,534.5 4,611.1 - 792,187.0 6,916.7 - 846,801.7
2R4JGM PG 1 - 215 1 - 215 1 - 172 1 - 122
Capacity 4,611.1 - 991,386.5 6,916.7 - 1,487,090.5 9,222.2 - 1,584,374.0 13,833.4 - 1,693,603.4
480MGM PG 1 - 215 1 - 215 1 - 172 1 - 122
Capacity 945.2 - 203,218.0 1,417.8 - 304,827.0 1,890.4 - 324,770.7 2,835.6 - 347,158.5
960MGM PG 1 - 215 1 - 215 1 - 172 1 - 122
Capacity 1,890.4 - 406,436.0 2,835.6 - 609,654.0 3,780.9 - 649,558.6 5,671.3 - 694,329.2
1R9MGM/ PG 1 - 215 1 - 215 1 - 172 1 - 122
1T9MGM Capacity 3,780.9 - 812,893.5 5,671.3 - 1,219,329.5 7,561.8 - 1,299,117.2 11,342.7 - 1,388,670.6
3R8MGM PG 1 - 215 1 - 215 1 - 172 1 - 122
Capacity 7,561.8 - 1,625,787.0 11,342.7 - 2,438,680.5 15,123.6 - 2,598,234.5 22,685.5 - 2,777,353.4
7R6MGM PG 1 - 215 1 - 215 1 - 172 1 - 122
Capacity 15,123.6 - 3,251,574.0 22,685.5 - 4,877,382.5 30,247.3 - 5,196,486.1 45,371.0 - 5,554,706.7
15RMGM PG 1 - 215 1 - 215 1 - 172 1 - 122
Capacity 30,096.9 - 6,470,833.5 45,145.4 - 9,706,261.0 60,193.9 - 10,341,312.0 90,290.9 - 11,054,185.9
30RMGM PG 1 - 215 1 - 215 1 - 172 1 - 122
Capacity 60,191.8 - 12,941,237.0 90,287.7 - 19,411,855.5 120,383.6 - 20,681,902.5 180,575.4 - 22,107,588.3
DBL 6R0H9M PG 1 - 107 1 - 107 1 - 85 1 - 61
Capacity 11,748.4 - 1,257,078.8 17,622.6 - 1,885,618.2 23,496.8 - 2,006,626.7 35,245.2 - 2,139,887.1
10RH9M PG 1 - 107 1 - 107 1 - 85 1 - 61
Capacity 19,580.7 - 2,095,134.9 29,371.0 - 3,142,697.0 39,161.4 - 3,344,383.6 58,742.1 - 3,566,484.6
14RH9M PG 1 - 107 1 - 107 1 - 85 1 - 61
Capacity 27,413.0 - 2,933,191.0 41,119.5 - 4,399,786.5 54,826.0 - 4,682,140.4 82,239.0 - 4,993,082.1
DB60 1R2J7MC PG 1 - 299 1 - 299 1 - 239 1 - 170
Capacity 2,305.5 - 689,344.5 3,458.3 - 1,034,031.7 4,611.1 - 1,102,052.9 6,916.7 - 1,178,803.3
2R4J8M PG 1 - 299 1 - 299 1 - 239 1 - 170
Capacity 4,611.1 - 1,378,718.9 6,916.7 - 2,068,093.3 9,222.2 - 2,204,105.8 13,833.4 - 2,357,606.6
6R0HLM PG 1 - 299 1 - 299 1 - 239 1 - 170
Capacity 11,748.4 - 3,512,771.6 17,622.6 - 5,269,157.4 23,496.8 - 5,615,735.2 35,245.2 - 6,006,789.1
10RHLM PG 1 - 299 1 - 299 1 - 239 1 - 170
Capacity 19,580.7 - 5,854,629.3 29,371.0 - 8,781,929.0 39,161.4 - 9,359,574.6 58,742.1 - 10,011,332.2
14RHLM PG 1 - 299 1 - 299 1 - 239 1 - 170
Capacity 27,413.0 - 8,196,487.0 41,119.5 - 12,294,730.5 54,826.0 - 13,103,414.0 82,239.0 - 14,015,875.3
DBF 3R2FN PG 1 - 107 1 - 107 1 - 85 1 - 61
Capacity 7,036.8 - 752,937.6 10,555.3 - 1,129,417.1 14,073.7 - 1,201,894.0 21,110.6 - 1,281,715.0
7R0FP PG 1 - 107 1 - 107 1 - 85 1 - 61
Capacity 14,073.7 - 1,505,891.0 21,110.6 - 2,258,834.2 28,147.4 - 2,403,788.0 42,221.2 - 2,563,430.0
14RFP PG 1 - 107 1 - 107 1 - 85 1 - 61
Capacity 28,147.5 - 3,011,782.1 42,221.2 - 4,517,668.4 56,294.9 - 4,807,584.5 84,442.4 - 5,126,860.0

(To be continued)

THEORY04-05-110

(Continued from the preceding page)


RAID Level
Storage capacity 7D+1P (RAID5) 6D+2P (RAID6) 12D+2P (RAID6) 14D+2P (RAID6)
(GB/volume)
DBS 600JCMC PG 1 - 107 1 - 107 1 - 61 1 - 53
Capacity 4,034.7 - 431,712.9 3,458.3 - 370,038.1 6,916.7 - 419,942.5 8,069.5 - 427,683.5
1R2JCMC PG 1 - 107 1 - 107 1 - 61 1 - 53
Capacity 8,069.5 - 863,436.5 6,916.7 - 740,086.9 13,833.4 - 839,885.0 16,139.0 - 855,367.0
2R4JGM PG 1 - 107 1 - 107 1 - 61 1 - 53
Capacity 16,139.0 - 1,726,873.0 13,833.4 - 1,480,173.8 27,666.8 - 1,679,770.0 32,278.0 - 1,710,734.0
480MGM PG 1 - 107 1 - 107 1 - 61 1 - 53
Capacity 3,308.3 - 353,988.1 2,835.6 - 303,409.2 5,671.3 - 344,328.9 6,616.6 - 350,679.8
960MGM PG 1 - 107 1 - 107 1 - 61 1 - 53
Capacity 6,616.6 - 707,976.2 5,671.3 - 606,829.1 11,342.7 - 688,663.9 13,233.2 - 701,359.6
1R9MGM/ PG 1 - 107 1 - 107 1 - 61 1 - 53
1T9MGM Capacity 13,233.2 - 1,415,952.4 11,342.7 - 1,213,668.9 22,685.5 - 1,377,333.9 26,466.4 - 1,402,719.2
3R8MGM PG 1 - 107 1 - 107 1 - 61 1 - 53
Capacity 26,466.4 - 2,831,904.8 22,685.5 - 2,427,348.5 45,371.0 - 2,754,667.9 52,932.9 - 2,805,443.7
7R6MGM PG 1 - 107 1 - 107 1 - 61 1 - 53
Capacity 52,932.9 - 5,663,820.3 45,371.0 - 4,854,697.0 90,742.1 - 5,509,341.8 105,865.8 - 5,610,887.4
15RMGM PG 1 - 107 1 - 107 1 - 61 1 - 53
Capacity 105,339.4 - 11,271,315.8 90,290.9 - 9,661,126.3 180,581.8 - 10,963,895.0 210,678.8 - 11,165,976.4
30RMGM PG 1 - 107 1 - 107 1 - 61 1 - 53
Capacity 210,671.3 - 22,541,829.1 180,575.4 - 19,321,567.8 361,150.8 - 21,927,012.9 421,342.7 - 22,331,163.1
DBL 6R0H9M PG 1 - 53 1 - 53 1 - 30 1 - 26
Capacity 41,119.5 - 2,179,333.5 35,245.2 - 1,867,995.6 70,490.5 - 2,104,644.9 82,239.0 - 2,138,214.0
10RH9M PG 1 - 53 1 - 53 1 - 30 1 - 26
Capacity 68,532.5 - 3,632,222.5 58,742.1 - 3,113,331.3 117,484.3 - 3,507,745.5 137,065.0 - 3,563,690.0
14RH9M PG 1 - 53 1 - 53 1 - 30 1 - 26
Capacity 95,945.5 - 5,085,111.5 82,239.0 - 4,358,667.0 164,478.0 - 4,910,843.1 191,891.0 - 4,989,166.0
DB60 1R2J7MC PG 1 - 149 1 - 149 1 - 85 1 - 74
Capacity 8,069.5 - 1,202,355.5 6,916.7 - 1,030,588.3 13,833.4 - 1,171,886.6 16,139.0 - 1,194,286.0
2R4J8M PG 1 - 149 1 - 149 1 - 85 1 - 74
Capacity 16,139.0 - 2,404,711.0 13,833.4 - 2,061,176.6 27,666.8 - 2,343,773.2 32,278.0 - 2,388,572.0
6R0HLM PG 1 - 149 1 - 149 1 - 85 1 - 74
Capacity 41,119.5 - 6,126,805.5 35,245.2 - 5,251,534.8 70,490.5 - 5,971,552.4 82,239.0 - 6,085,686.0
10RHLM PG 1 - 149 1 - 149 1 - 85 1 - 74
Capacity 68,532.5 - 10,211,342.5 58,742.1 - 8,752,572.9 117,484.3 - 9,952,598.6 137,065.0 - 10,142,810.0
14RHLM PG 1 - 149 1 - 149 1 - 85 1 - 74
Capacity 95,945.5 - 14,295,879.5 82,239.0 - 12,253,611.0 164,478.0 - 13,933,636.3 191,891.0 - 14,199,934.0
DBF 3R2FN PG 1 - 53 1 - 53 1 - 30 1 - 26
Capacity 24,629.0 - 1,305,337.0 21,110.6 - 1,118,861.8 42,221.2 - 1,260,604.4 49,258.1 - 1,280,710.6
7R0FP PG 1 - 53 1 - 53 1 - 30 1 - 26
Capacity 49,258.1 - 2,610,679.3 42,221.2 - 2,237,723.6 84,442.4 - 2,521,208.8 98,516.2 - 2,561,421.2
14RFP PG 1 - 53 1 - 53 1 - 30 1 - 26
Capacity 98,516.2 - 5,221,358.6 84,442.4 - 4,475,447.2 168,884.9 - 5,042,420.6 197,032.4 - 5,122,842.4

THEORY04-05-120

(3) VSP G370 and VSP F370


NOTE: DBS is the only Drive Box that can be mounted on VSP F370.
Table 4-24 List of emulation types
RAID Level
Storage capacity 2D+2D (RAID1) 3D+1P (RAID5) 4D+1P (RAID5) 6D+1P (RAID5)
(GB/volume)
DBS 600JCMC PG 1 - 71 1 - 71 1 - 57 1 - 40
(*1) Capacity 1,152.7 - 81,841.7 1,729.1 - 122,766.1 2,305.5 - 130,491.3 3,458.3 - 138,826.0
1R2JCMC PG 1 - 71 1 - 71 1 - 57 1 - 40
Capacity 2,305.5 - 163,690.5 3,458.3 - 245,539.3 4,611.1 - 260,988.3 6,916.7 - 277,656.1
2R4JGM PG 1 - 71 1 - 71 1 - 57 1 - 40
Capacity 4,611.1 - 327,388.1 6,916.7 - 491,085.7 9,222.2 - 521,976.5 13,833.4 - 555,312.2
480MGM PG 1 - 71 1 - 71 1 - 57 1 - 40
Capacity 945.2 - 67,109.2 1,417.8 - 100,663.8 1,890.4 - 106,996.6 2,835.6 - 113,829.1
960MGM PG 1 - 71 1 - 71 1 - 57 1 - 40
Capacity 1,890.4 - 134,218.4 2,835.6 - 201,327.6 3,780.9 - 213,998.9 5,671.3 - 227,662.2
1R9MGM/ PG 1 - 71 1 - 71 1 - 57 1 - 40
1T9MGM Capacity 3,780.9 - 268,443.9 5,671.3 - 402,662.3 7,561.8 - 427,997.9 11,342.7 - 455,328.4
3R8MGM PG 1 - 71 1 - 71 1 - 57 1 - 40
Capacity 7,561.8 - 536,887.8 11,342.7 - 805,331.7 15,123.6 - 855,995.8 22,685.5 - 910,660.8
7R6MGM PG 1 - 71 1 - 71 1 - 57 1 - 40
Capacity 15,123.6 - 1,073,775.6 22,685.5 - 1,610,670.5 30,247.3 - 1,711,997.2 45,371.0 - 1,821,321.6
15RMGM PG 1 - 71 1 - 71 1 - 57 1 - 40
Capacity 30,096.9 - 2,136,879.9 45,145.4 - 3,205,323.4 60,193.9 - 3,406,974.7 90,290.9 - 3,624,534.7
30RMGM PG 1 - 71 1 - 71 1 - 57 1 - 40
Capacity 60,191.8 - 4,273,617.8 90,287.7 - 6,410,426.7 120,383.6 - 6,813,711.8 180,575.4 - 7,248,812.5
DBL 6R0H9M PG 1 - 35 1 - 35 1 - 28 1 - 20
(*2) Capacity 11,748.4 - 411,194.0 17,622.6 - 616,791.0 23,496.8 - 653,211.0 35,245.2 - 689,798.9
10RH9M PG 1 - 35 1 - 35 1 - 28 1 - 20
Capacity 787.6 - 27,566.0 1,181.5 - 41,352.5 1,575.3 - 43,793.3 58,742.1 - 1,149,666.8
14RH9M PG 1 - 35 1 - 35 1 - 28 1 - 20
Capacity 27,413.0 - 959,455.0 41,119.5 - 1,439,182.5 54,826.0 - 1,524,162.8 82,239.0 - 1,609,534.7
DB60 1R2J7MC PG 1 - 92 1 - 92 1 - 73 1 - 52
Capacity 2,305.5 - 212,106.0 3,458.3 - 318,163.6 4,611.1 - 338,454.7 6,916.7 - 360,656.5
2R4J8M PG 1 - 92 1 - 92 1 - 73 1 - 52
Capacity 4,611.1 - 424,221.2 6,916.7 - 636,336.4 9,222.2 - 676,909.5 13,833.4 - 721,313.0
6R0HLM PG 1 - 92 1 - 92 1 - 73 1 - 52
Capacity 11,748.4 - 1,080,852.8 17,622.6 - 1,621,279.2 23,496.8 - 1,724,665.1 35,245.2 - 1,837,785.4
10RHLM PG 1 - 92 1 - 92 1 - 73 1 - 52
Capacity 19,580.7 - 1,801,424.4 29,371.0 - 2,702,132.0 39,161.4 - 2,874,446.8 58,742.1 - 3,062,980.9
14RHLM PG 1 - 92 1 - 92 1 - 73 1 - 52
Capacity 27,413.0 - 2,521,996.0 41,119.5 - 3,782,994.0 54,826.0 - 4,024,228.4 82,239.0 - 4,288,176.4

*1: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G370 (CBSS2)).
*2: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G370 (CBSL2)).
(To be continued)

THEORY04-05-130

(Continued from the preceding page)


RAID Level
Storage capacity 7D+1P (RAID5) 6D+2P (RAID6) 12D+2P (RAID6) 14D+2P (RAID6)
(GB/volume)
DBS 600JCMC PG 1 - 35 1 - 35 1 - 20 1 - 17
(*1) Capacity 4,034.7 - 141,214.5 3,458.3 - 121,040.5 6,916.7 - 135,369.7 8,069.5 - 137,181.5
1R2JCMC PG 1 - 35 1 - 35 1 - 20 1 - 17
Capacity 8,069.5 - 282,432.5 6,916.7 - 242,084.5 13,833.4 - 270,739.4 16,139.0 - 274,363.0
2R4JGM PG 1 - 35 1 - 35 1 - 20 1 - 17
Capacity 16,139.0 - 564,865.0 13,833.4 - 484,169.0 27,666.8 - 541,478.8 32,278.0 - 548,726.0
480MGM PG 1 - 35 1 - 35 1 - 20 1 - 17
Capacity 3,308.3 - 115,790.5 2,835.6 - 99,246.0 5,671.3 - 110,995.4 6,616.6 - 112,482.2
960MGM PG 1 - 35 1 - 35 1 - 20 1 - 17
Capacity 6,616.6 - 231,581.0 5,671.3 - 198,495.5 11,342.7 - 221,992.8 13,233.2 - 224,964.4
1R9MGM/ PG 1 - 35 1 - 35 1 - 20 1 - 17
1T9MGM Capacity 13,233.2 - 463,162.0 11,342.7 - 396,994.5 22,685.5 - 443,987.6 26,466.4 - 449,928.8
3R8MGM PG 1 - 35 1 - 35 1 - 20 1 - 17
Capacity 26,466.4 - 926,324.0 22,685.5 - 793,992.5 45,371.0 - 887,975.3 52,932.9 - 899,859.3
7R6MGM PG 1 - 35 1 - 35 1 - 20 1 - 17
Capacity 52,932.9 - 1,852,651.5 45,371.0 - 1,587,985.0 90,742.1 - 1,775,952.5 105,865.8 - 1,799,718.6
15RMGM PG 1 - 35 1 - 35 1 - 20 1 - 17
Capacity 105,339.4 - 3,686,879.0 90,290.9 - 3,160,181.5 180,581.8 - 3,534,243.8 210,678.8 - 3,581,539.6
30RMGM PG 1 - 35 1 - 35 1 - 20 1 - 17
Capacity 210,671.3 - 7,373,495.5 180,575.4 - 6,320,139.0 361,150.8 - 7,068,237.1 421,342.7 - 7,162,825.9
DBL 6R0H9M PG 1 - 17 1 - 17 1-9 1-8
(*2) Capacity 41,119.5 - 699,031.5 35,245.2 - 599,168.4 70,490.5 - 654,554.6 82,239.0 - 657,912.0
10RH9M PG 1 - 17 1 - 17 1-9 1-8
Capacity 68,532.5 - 1,165,052.5 58,742.1 - 998,615.7 117,484.3 - 1,090,925.6 137,065.0 - 1,096,520.0
14RH9M PG 1 - 17 1 - 17 1-9 1-8
Capacity 95,945.5 - 1,631,073.5 82,239.0 - 1,398,063.0 164,478.0 - 1,480,302.0 191,891.0 - 1,535,128.0
DB60 1R2J7MC PG 1 - 46 1 - 46 1 - 26 1 - 22
Capacity 8,069.5 - 367,162.3 6,916.7 - 314,709.9 13,833.4 - 353,739.8 16,139.0 - 359,092.8
2R4J8M PG 1 - 46 1 - 46 1 - 26 1 - 22
Capacity 16,139.0 - 734,324.5 13,833.4 - 629,419.7 27,666.8 - 707,479.6 32,278.0 - 718,185.5
6R0HLM PG 1 - 46 1 - 46 1 - 26 1 - 22
Capacity 41,119.5 - 1,870,937.3 35,245.2 - 1,603,656.6 70,490.5 - 1,802,542.8 82,239.0 - 1,829,817.8
10RHLM PG 1 - 46 1 - 46 1 - 26 1 - 22
Capacity 68,532.5 - 3,118,228.8 58,742.1 - 2,672,765.6 117,484.3 - 3,004,241.4 137,065.0 - 3,049,696.3
14RHLM PG 1 - 46 1 - 46 1 - 26 1 - 22
Capacity 95,945.5 - 4,365,520.3 82,239.0 - 3,741,874.5 164,478.0 - 4,276,428.0 191,891.0 - 4,221,602.0

*1: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G370 (CBSS2)).
*2: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G370 (CBSL2)).

THEORY04-05-140

(4) VSP G350 and VSP F350


NOTE: DBS is the only Drive Box that can be mounted on VSP F350.
Table 4-25 List of emulation types
RAID Level
Storage capacity 2D+2D (RAID1) 3D+1P (RAID5) 4D+1P (RAID5) 6D+1P (RAID5)
(GB/volume)
DBS 600JCMC PG 1 - 47 1 - 47 1 - 37 1 - 26
(*1) Capacity 1,152.7 - 54,176.9 1,729.1 - 81,267.7 2,305.5 - 86,225.7 3,458.3 - 91,397.9
1R2JCMC PG 1 - 47 1 - 47 1 - 37 1 - 26
Capacity 2,305.5 - 108,358.5 3,458.3 - 162,540.1 4,611.1 - 172,455.1 6,916.7 - 182,798.5
2R4JGM PG 1 - 47 1 - 47 1 - 37 1 - 26
Capacity 4,611.1 - 216,721.7 6,916.7 - 325,084.9 9,222.2 - 344,910.3 13,833.4 - 365,597.0
480MGM PG 1 - 47 1 - 47 1 - 37 1 - 26
Capacity 945.2 - 44,424.4 1,417.8 - 66,636.6 1,890.4 - 70,701.0 2,835.6 - 74,940.9
960MGM PG 1 - 47 1 - 47 1 - 37 1 - 26
Capacity 1,890.4 - 88,848.8 2,835.6 - 133,273.2 3,780.9 - 141,405.7 5,671.3 - 149,884.4
1R9MGM/ PG 1 - 47 1 - 47 1 - 37 1 - 26
1T9MGM Capacity 3,780.9 - 177,702.3 5,671.3 - 266,551.1 7,561.8 - 282,811.3 11,342.7 - 299,771.4
3R8MGM PG 1 - 47 1 - 47 1 - 37 1 - 26
Capacity 7,561.8 - 355,404.6 11,342.7 - 533,106.9 15,123.6 - 565,622.6 22,685.5 - 599,545.4
7R6MGM PG 1 - 47 1 - 47 1 - 37 1 - 26
Capacity 15,123.6 - 710,809.2 22,685.5 - 1,066,218.5 30,247.3 - 1,131,249.0 45,371.0 - 1,199,090.7
15RMGM PG 1 - 47 1 - 47 1 - 37 1 - 26
Capacity 30,096.9 - 1,414,554.3 45,145.4 - 2,121,833.8 60,193.9 - 2,251,251.9 90,290.9 - 2,386,259.5
30RMGM PG 1 - 47 1 - 47 1 - 37 1 - 26
Capacity 60,191.8 - 2,829,014.6 90,287.7 - 4,243,521.9 120,383.6 - 4,502,346.6 180,575.4 - 4,772,349.9
DBL 6R0H9M PG 1 - 23 1 - 23 1 - 18 1 - 13
(*2) Capacity 11,748.4 - 270,213.2 17,622.6 - 405,319.8 23,496.8 - 427,641.8 35,245.2 - 448,117.5
10RH9M PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 19,580.7 - 450,356.1 29,371.0 - 675,533.0 39,161.4 - 712,737.5 58,742.1 - 746,863.8
14RH9M PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 27,413.0 - 630,499.0 41,119.5 - 945,748.5 54,826.0 - 997,833.2 82,239.0 - 1,045,610.1
DB60 1R2J7MC PG 1 - 62 1 - 62 1 - 49 1 - 35
Capacity 2,305.5 - 142,941.0 3,458.3 - 214,414.6 4,611.1 - 227,788.3 6,916.7 - 242,084.5
2R4J8M PG 1 - 62 1 - 62 1 - 49 1 - 35
Capacity 4,611.1 - 285,888.2 6,916.7 - 428,835.4 9,222.2 - 455,576.7 13,833.4 - 484,169.0
6R0HLM PG 1 - 62 1 - 62 1 - 49 1 - 35
Capacity 11,748.4 - 728,400.8 17,622.6 - 1,092,601.2 23,496.8 - 1,160,741.9 35,245.2 - 1,233,582.0
10RHLM PG 1 - 62 1 - 62 1 - 49 1 - 35
Capacity 19,580.7 - 1,214,003.4 29,371.0 - 1,821,002.0 39,161.4 - 1,934,573.2 58,742.1 - 2,055,973.5
14RHLM PG 1 - 62 1 - 62 1 - 49 1 - 35
Capacity 27,413.0 - 1,699,606.0 41,119.5 - 2,549,409.0 54,826.0 - 2,708,404.4 82,239.0 - 2,878,365.0

*1: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G350 (CBSS1)).
*2: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G350 (CBSL1)).
(To be continued)

THEORY04-05-150

(Continued from the preceding page)


RAID Level
Storage capacity 7D+1P (RAID5) 6D+2P (RAID6) 12D+2P (RAID6) 14D+2P (RAID6)
(GB/volume)
DBS 600JCMC PG 1 - 23 1 - 23 1 - 13 1 - 11
(*1) Capacity 4,034.7 - 92,798.1 3,458.3 - 79,540.9 6,916.7 - 87,940.9 8,069.5 - 88,764.5
1R2JCMC PG 1 - 23 1 - 23 1 - 13 1 - 11
Capacity 8,069.5 - 185,598.5 6,916.7 - 159,084.1 13,833.4 - 175,881.8 16,139.0 - 177,529.0
2R4JGM PG 1 - 23 1 - 23 1 - 13 1 - 11
Capacity 16,139.0 - 371,197.0 13,833.4 - 318,168.2 27,666.8 - 351,763.6 32,278.0 - 355,058.0
480MGM PG 1 - 23 1 - 23 1 - 13 1 - 11
Capacity 3,308.3 - 76,090.9 2,835.6 - 65,218.8 5,671.3 - 72,106.5 6,616.6 - 72,782.6
960MGM PG 1 - 23 1 - 23 1 - 13 1 - 11
Capacity 6,616.6 - 152,181.8 5,671.3 - 130,439.9 11,342.7 - 144,214.3 13,233.2 - 145,565.2
1R9MGM/ PG 1 - 23 1 - 23 1 - 13 1 - 11
1T9MGM Capacity 13,233.2 - 304,363.6 11,342.7 - 260,882.1 22,685.5 - 288,429.9 26,466.4 - 291,130.4
3R8MGM PG 1 - 23 1 - 23 1 - 13 1 - 11
Capacity 26,466.4 - 608,727.2 22,685.5 - 521,766.5 45,371.0 - 576,859.9 52,932.9 - 582,261.9
7R6MGM PG 1 - 23 1 - 23 1 - 13 1 - 11
Capacity 52,932.9 - 1,217,456.7 45,371.0 - 1,043,533.0 90,742.1 - 1,153,721.0 105,865.8 - 1,164,523.8
15RMGM PG 1 - 23 1 - 23 1 - 13 1 - 11
Capacity 105,339.4 - 2,422,806.2 90,290.9 - 2,076,690.7 180,581.8 - 2,295,968.6 210,678.8 - 2,317,466.8
30RMGM PG 1 - 23 1 - 23 1 - 13 1 - 11
Capacity 210,671.3 - 4,845,439.9 180,575.4 - 4,153,234.2 361,150.8 - 4,591,774.5 421,342.7 - 4,634,769.7
DBL 6R0H9M PG 1 - 11 1 - 11 1-6 1-5
(*2) Capacity 41,119.5 - 452,314.5 35,245.2 - 387,697.2 70,490.5 - 412,872.9 82,239.0 - 411,195.0
10RH9M PG 1 - 11 1 - 11 1-6 1-5
Capacity 68,532.5 - 753,857.5 58,742.1 - 646,163.1 117,484.3 - 688,122.3 137,065.0 - 685,325.0
14RH9M PG 1 - 11 1 - 11 1-6 1-5
Capacity 95,945.5 - 1,055,400.5 82,239.0 - 904,629.0 164,478.0 - 963,371.1 191,891.0 - 959,455.0
DB60 1R2J7MC PG 1 - 31 1 - 31 1 - 17 1 - 15
Capacity 8,069.5 - 246,119.8 6,916.7 - 210,959.4 13,833.4 - 235,167.8 16,139.0 - 238,050.3
2R4J8M PG 1 - 31 1 - 31 1 - 17 1 - 15
Capacity 16,139.0 - 492,239.5 13,833.4 - 421,918.7 27,666.8 - 470,335.6 32,278.0 - 476,100.5
6R0HLM PG 1 - 31 1 - 31 1 - 17 1 - 15
Capacity 41,119.5 - 1,254,144.8 35,245.2 - 1,074,978.6 70,490.5 - 1,198,338.5 82,239.0 - 1,213,025.3
10RHLM PG 1 - 31 1 - 31 1 - 17 1 - 15
Capacity 68,532.5 - 2,090,241.3 58,742.1 - 1,791,634.1 117,484.3 - 1,997,233.1 137,065.0 - 2,021,708.8
14RHLM PG 1 - 31 1 - 31 1 - 17 1 - 15
Capacity 95,945.5 - 2,926,337.8 82,239.0 - 2,508,289.5 164,478.0 - 2,796,126.0 191,891.0 - 2,830,392.3

*1: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G350 (CBSS1)).
*2: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G350 (CBSL1)).

THEORY04-05-160

(5) VSP G130

Table 4-26 List of emulation types


RAID Level
Storage capacity 2D+2D (RAID1) 3D+1P (RAID5) 4D+1P (RAID5) 6D+1P (RAID5)
(GB/volume)
DBS 600JCMC PG 1 - 23 1 - 23 1 - 18 1 - 13
(*1) Capacity 1,152.7 - 26,512.1 1,729.1 - 39,769.3 2,305.5 - 41,960.1 3,458.3 - 43,969.8
1R2JCMC PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 2,305.5 - 53,026.5 3,458.3 - 79,540.9 4,611.1 - 83,922.0 6,916.7 - 87,940.9
2R4JGM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 4,611.1 - 106,055.3 6,916.7 - 159,084.1 9,222.2 - 167,844.0 13,833.4 - 175,881.8
480MGM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 945.2 - 21,739.6 1,417.8 - 32,609.4 1,890.4 - 34,405.3 2,835.6 - 36,052.6
960MGM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 1,890.4 - 43,479.2 2,835.6 - 65,218.8 3,780.9 - 68,812.4 5,671.3 - 72,106.5
1R9MGM/ PG 1 - 23 1 - 23 1 - 18 1 - 13
1T9MGM Capacity 3,780.9 - 86,960.7 5,671.3 - 130,439.9 7,561.8 - 137,624.8 11,342.7 - 144,214.3
3R8MGM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 7,561.8 - 173,921.4 11,342.7 - 260,882.1 15,123.6 - 275,249.5 22,685.5 - 288,429.9
7R6MGM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 15,123.6 - 347,842.8 22,685.5 - 521,766.5 30,247.3 - 550,500.9 45,371.0 - 576,859.9
15RMGM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 30,096.9 - 692,228.7 45,145.4 - 1,038,344.2 60,193.9 - 1,095,529.0 90,290.9 - 1,147,984.3
30RMGM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 60,191.8 - 1,384,411.4 90,287.7 - 2,076,617.1 120,383.6 - 2,190,981.5 180,575.4 - 2,295,887.2
DBL 6R0H9M PG 1 - 23 1 - 23 1 - 18 1 - 13
(*2) Capacity 11,748.4 - 270,213.2 17,622.6 - 405,319.8 23,496.8 - 427,641.8 35,245.2 - 448,117.5
10RH9M PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 19,580.7 - 450,356.1 29,371.0 - 675,533.0 39,161.4 - 712,737.5 58,742.1 - 746,863.8
14RH9M PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 27,413.0 - 630,499.0 41,119.5 - 945,748.5 54,826.0 - 997,833.2 82,239.0 - 1,045,610.1

*1: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G130 (CBXSS)).
*2: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G130 (CBXSL)).
(To be continued)

THEORY04-05-161

(Continued from the preceding page)


RAID Level
Storage capacity 7D+1P (RAID5) 6D+2P (RAID6) 12D+2P (RAID6) 14D+2P (RAID6)
(GB/volume)
DBS 600JCMC PG 1 - 11 1 - 11 1-6 1-5
(*1) Capacity 4,034.7 - 44,381.7 3,458.3 - 38,041.3 6,916.7 - 40,512.1 8,069.5 - 40,347.5
1R2JCMC PG 1 - 11 1 - 11 1-6 1-5
Capacity 8,069.5 - 88,764.5 6,916.7 - 76,083.7 13,833.4 - 81,024.2 16,139.0 - 80,695.0
2R4JGM PG 1 - 11 1 - 11 1-6 1-5
Capacity 16,139.0 - 177,529.0 13,833.4 - 152,167.4 27,666.8 - 162,048.4 32,278.0 - 161,390.0
480MGM PG 1 - 11 1 - 11 1-6 1-5
Capacity 3,308.3 - 36,391.3 2,835.6 - 31,191.6 5,671.3 - 33,217.6 6,616.6 - 33,083.0
960MGM PG 1 - 11 1 - 11 1-6 1-5
Capacity 6,616.6 - 72,782.6 5,671.3 - 62,384.3 11,342.7 - 66,435.8 13,233.2 - 66,166.0
1R9MGM/ PG 1 - 11 1 - 11 1-6 1-5
1T9MGM Capacity 13,233.2 - 145,565.2 11,342.7 - 124,769.7 22,685.5 - 132,872.2 26,466.4 - 132,332.0
3R8MGM PG 1 - 11 1 - 11 1-6 1-5
Capacity 26,466.4 - 291,130.4 22,685.5 - 249,540.5 45,371.0 - 265,744.4 52,932.9 - 264,664.5
7R6MGM PG 1 - 11 1 - 11 1-6 1-5
Capacity 52,932.9 - 582,261.9 45,371.0 - 499,081.0 90,742.1 - 531,489.4 105,865.8 - 529,329.0
15RMGM PG 1 - 11 1 - 11 1-6 1-5
Capacity 105,339.4 - 1,158,733.4 90,290.9 - 993,199.9 180,581.8 - 1,057,693.4 210,678.8 - 1,053,394.0
30RMGM PG 1 - 11 1 - 11 1-6 1-5
Capacity 210,671.3 - 2,317,384.3 180,575.4 - 1,986,329.4 361,150.8 - 2,115,311.8 421,342.7 - 2,106,713.5
DBL 6R0H9M PG 1 - 11 1 - 11 1-6 1-5
(*2) Capacity 41,119.5 - 452,314.5 35,245.2 - 387,697.2 70,490.5 - 412,872.9 82,239.0 - 411,195.0
10RH9M PG 1 - 11 1 - 11 1-6 1-5
Capacity 68,532.5 - 753,857.5 58,742.1 - 646,163.1 117,484.3 - 688,122.3 137,065.0 - 685,325.0
14RH9M PG 1 - 11 1 - 11 1-6 1-5
Capacity 95,945.5 - 1,055,400.5 82,239.0 - 904,629.0 164,478.0 - 963,371.1 191,891.0 - 959,455.0

*1: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G130 (CBXSS)).
*2: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G130 (CBXSL)).

THEORY04-05-162

(6) VSP E990

Table 4-27 List of emulation types


RAID Level
Storage capacity 2D+2D (RAID1) 3D+1P (RAID5) 4D+1P (RAID5) 6D+1P (RAID5)
(GB/volume)
DBN 1R9RVM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 3,780.9 - 86,960.7 5,671.3 - 130,439.9 7,561.8 - 137,624.8 11,342.7 - 144,214.3
3R8RVM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 7,561.8 - 173,921.4 11,342.7 - 260,882.1 15,123.6 - 275,249.5 22,685.5 - 288,429.9
7R6RVM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 15,123.6 - 347,842.8 22,685.5 - 521,766.5 30,247.3 - 550,500.9 45,371.0 - 576,859.9
15RRVM PG 1 - 23 1 - 23 1 - 18 1 - 13
Capacity 30,096.9 - 692,228.7 45,145.4 - 1,038,344.2 60,193.9 - 1,095,529.0 90,290.9 - 1,147,984.3

RAID Level
Storage capacity 7D+1P (RAID5) 6D+2P (RAID6) 12D+2P (RAID6) 14D+2P (RAID6)
(GB/volume)
DBN 1R9RVM PG 1 - 11 1 - 11 1-6 1-5
Capacity 13,233.2 - 145,565.2 11,342.7 - 124,769.7 22,685.5 - 132,872.2 26,466.4 - 132,332.0
3R8RVM PG 1 - 11 1 - 11 1-6 1-5
Capacity 26,466.4 - 291,130.4 22,685.5 - 249,540.5 45,371.0 - 265,744.4 52,932.9 - 264,664.5
7R6RVM PG 1 - 11 1 - 11 1-6 1-5
Capacity 52,932.9 - 582,261.9 45,371.0 - 499,081.0 90,742.1 - 531,489.4 105,865.8 - 529,329.0
15RRVM PG 1 - 11 1 - 11 1-6 1-5
Capacity 105,339.4 - 1,158,733.4 90,290.9 - 993,199.9 180,581.8 - 1,057,693.4 210,678.8 - 1,053,394.0
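The per-parity-group capacities (the lower bounds of each row above) are simply the usable capacity of one drive multiplied by the number of data drives in the RAID configuration; the table figures appear to be truncated to one decimal place. A reference Python sketch for the 1R9RVM rows, using the block count from Table 4-28:

    # Per-parity-group capacity in Table 4-27 = usable drive capacity x number of data drives.
    BLOCK = 512
    drive_blocks = 7_384_615_424 // 2            # one 1.9 TB NVMe drive (from the 2D+2D row of Table 4-28)
    drive_gb = drive_blocks * BLOCK / 10**9      # about 1,890.5 GB per drive (decimal GB)

    data_drives = {"2D+2D": 2, "3D+1P": 3, "4D+1P": 4, "6D+1P": 6,
                   "7D+1P": 7, "6D+2P": 6, "12D+2P": 12, "14D+2P": 14}
    for raid, n in data_drives.items():
        # Truncate to one decimal like the table; e.g. 2D+2D -> 3780.9, 7D+1P -> 13233.2
        print(raid, int(drive_gb * n * 10) / 10)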

THEORY04-05-163

3. The number of Drives and blocks for each RAID level

Table 4-28 The number of Drives and blocks for each RAID level
RAID Level   Drive Type   Capacity (MB)   Capacity (Logical Blocks)
RAID1 2D+2D DKR5x-J600SS/DKS5x-J600SS 1,099,383 2,251,536,384
DKR5x-J1R2SS/DKS5x-J1R2SS 2,198,767 4,503,073,792
DKS5x-J2R4SS 4,397,534 9,006,148,608
DKS2x-H6R0SS/DKR2x-H6R0SS 11,204,177 22,946,153,472
DKR2x-H10RSS/DKS2x-H10RSS 18,673,627 38,243,589,120
DKS2x-H14RSS 26,143,079 53,541,025,792
SLB5x-M480SS 901,442 1,846,153,216
SLB5x-M960SS 1,802,884 3,692,307,456
SLB5x-M1R9SS/SLB5x-M1T9SS/ 3,605,769 7,384,614,912
SLM5x-M1T9SS
SLB5x-M3R8SS/SLR5x-M3R8SS/ 7,211,538 14,769,230,848
SLM5x-M3R8SS
SLB5x-M7R6SS/SLR5x-M7R6SS/ 14,423,077 29,538,461,696
SLM5x-M7R6SS
SLB5x-M15RSS/SLM5x-M15RSS 28,702,715 58,783,161,344
SLM5x-M30RSS 57,403,387 117,562,137,600
NFHAx-Q3R2SS 6,710,884 13,743,889,408
NFHAx-Q6R4SS 13,421,772 27,487,788,032
NFHAx-Q13RSS 26,843,543 54,975,577,088
SNB5x-R1R9NC/SNR5x-R1R9NC/ 3,605,769 7,384,615,424
SNM5x-R1R9NC
SNB5x-R3R8NC/SNR5x-R3R8NC/ 7,211,538 14,769,230,848
SNM5x-R3R8NC
SNB5x-R7R6NC/SNR5x-R7R6NC/ 14,423,077 29,538,461,696
SNM5x-R7R6NC
SNB5x-R15RNC/SNN5x-R15RNC/ 28,702,715 58,783,161,856
SNM5x-R15RNC
x: A, B, C, ...
(To be continued)

THEORY04-05-170

(Continued from the preceding page)


RAID Level   Drive Type   Capacity (MB)   Capacity (Logical Blocks)
RAID5 3D+1P DKR5x-J600SS/DKS5x-J600SS 1,649,075 3,377,304,576
DKR5x-J1R2SS/DKS5x-J1R2SS 3,298,150 6,754,610,688
DKS5x-J2R4SS 6,596,300 13,509,222,912
DKS2x-H6R0SS/DKR2x-H6R0SS 16,806,265 34,419,230,208
DKR2x-H10RSS/DKS2x-H10RSS 28,010,441 57,365,383,680
DKS2x-H14RSS 39,214,618 80,311,538,688
SLB5x-M480SS 1,352,163 2,769,229,824
SLB5x-M960SS 2,704,326 5,538,461,184
SLB5x-M1R9SS/SLB5x-M1T9SS/ 5,408,653 11,076,922,368
SLM5x-M1T9SS
SLB5x-M3R8SS/SLR5x-M3R8SS/ 10,817,307 22,153,846,272
SLM5x-M3R8SS
SLB5x-M7R6SS/SLR5x-M7R6SS/ 21,634,616 44,307,692,544
SLM5x-M7R6SS
SLB5x-M15RSS/SLM5x-M15RSS 43,054,073 88,174,742,016
SLM5x-M30RSS 86,105,081 176,343,206,400
NFHAx-Q3R2SS 10,066,325 20,615,834,112
NFHAx-Q6R4SS 20,132,657 41,231,682,048
NFHAx-Q13RSS 40,265,315 82,463,365,632
SNB5x-R1R9NC/SNR5x-R1R9NC/ 5,408,653 11,076,923,136
SNM5x-R1R9NC
SNB5x-R3R8NC/SNR5x-R3R8NC/ 10,817,307 22,153,846,272
SNM5x-R3R8NC
SNB5x-R7R6NC/SNR5x-R7R6NC/ 21,634,615 44,307,692,544
SNM5x-R7R6NC
SNB5x-R15RNC/SNN5x-R15RNC/ 43,054,073 88,174,742,784
SNM5x-R15RNC
x: A, B, C, ...
(To be continued)

THEORY04-05-171

(Continued from the preceding page)


RAID Level   Drive Type   Capacity (MB)   Capacity (Logical Blocks)
RAID5 4D+1P DKR5x-J600SS/DKS5x-J600SS 2,198,766 4,503,072,768
DKR5x-J1R2SS/DKS5x-J1R2SS 4,397,533 9,006,147,584
DKS5x-J2R4SS 8,795,067 18,012,297,216
DKS2x-H6R0SS/DKR2x-H6R0SS 22,408,353 45,892,306,944
DKR2x-H10RSS/DKS2x-H10RSS 37,347,255 76,487,178,240
DKS2x-H14RSS 52,286,158 107,082,051,584
SLB5x-M480SS 1,802,884 3,692,306,432
SLB5x-M960SS 3,605,769 7,384,614,912
SLB5x-M1R9SS/SLB5x-M1T9SS/ 7,211,538 14,769,229,824
SLM5x-M1T9SS
SLB5x-M3R8SS/SLR5x-M3R8SS/ 14,423,077 29,538,461,696
SLM5x-M3R8SS
SLB5x-M7R6SS/SLR5x-M7R6SS/ 28,846,154 59,076,923,392
SLM5x-M7R6SS
SLB5x-M15RSS/SLM5x-M15RSS 57,405,431 117,566,322,688
SLM5x-M30RSS 114,806,775 235,124,275,200
NFHAx-Q3R2SS 13,421,767 27,487,778,816
NFHAx-Q6R4SS 26,843,543 54,975,576,064
NFHAx-Q13RSS 53,687,087 109,951,154,176
SNB5x-R1R9NC/SNR5x-R1R9NC/ 7,211,538 14,769,230,848
SNM5x-R1R9NC
SNB5x-R3R8NC/SNR5x-R3R8NC/ 14,423,077 29,538,461,696
SNM5x-R3R8NC
SNB5x-R7R6NC/SNR5x-R7R6NC/ 28,846,154 59,076,923,392
SNM5x-R7R6NC
SNB5x-R15RNC/SNN5x-R15RNC/ 57,405,431 117,566,323,712
SNM5x-R15RNC
x: A, B, C, ...
(To be continued)

THEORY04-05-180

(Continued from the preceding page)


RAID Level   Drive Type   Capacity (MB)   Capacity (Logical Blocks)
RAID5 6D+1P DKR5x-J600SS/DKS5x-J600SS 3,298,149 6,754,609,152
DKR5x-J1R2SS/DKS5x-J1R2SS 6,596,300 13,509,221,376
DKS5x-J2R4SS 13,192,601 27,018,445,824
DKS2x-H6R0SS/DKR2x-H6R0SS 33,612,530 68,838,460,416
DKR2x-H10RSS/DKS2x-H10RSS 56,020,882 114,730,767,360
DKS2x-H14RSS 78,429,237 160,623,077,376
SLB5x-M480SS 2,704,326 5,538,459,648
SLB5x-M960SS 5,408,653 11,076,922,368
SLB5x-M1R9SS/SLB5x-M1T9SS/ 10,817,307 22,153,844,736
SLM5x-M1T9SS
SLB5x-M3R8SS/SLR5x-M3R8SS/ 21,634,615 44,307,692,544
SLM5x-M3R8SS
SLB5x-M7R6SS/SLR5x-M7R6SS/ 43,269,231 88,615,385,088
SLM5x-M7R6SS
SLB5x-M15RSS/SLM5x-M15RSS 86,108,146 176,349,484,032
SLM5x-M30RSS 172,210,162 352,686,412,800
NFHAx-Q3R2SS 20,132,651 41,231,668,224
NFHAx-Q6R4SS 40,265,315 82,463,364,096
NFHAx-Q13RSS 80,530,630 164,926,731,264
SNB5x-R1R9NC/SNR5x-R1R9NC/ 10,817,307 22,153,846,272
SNM5x-R1R9NC
SNB5x-R3R8NC/SNR5x-R3R8NC/ 21,634,615 44,307,692,544
SNM5x-R3R8NC
SNB5x-R7R6NC/SNR5x-R7R6NC/ 43,269,231 88,615,385,088
SNM5x-R7R6NC
SNB5x-R15RNC/SNN5x-R15RNC/ 86,108,147 176,349,485,568
SNM5x-R15RNC
x: A, B, C, ...
(To be continued)

THEORY04-05-181

(Continued from the preceding page)


RAID Level   Drive Type   Capacity (MB)   Capacity (Logical Blocks)
RAID5 7D+1P DKR5x-J600SS/DKS5x-J600SS 3,847,841 7,880,377,344
DKR5x-J1R2SS/DKS5x-J1R2SS 7,695,683 15,760,758,272
DKS5x-J2R4SS 15,391,367 31,521,520,128
DKS2x-H6R0SS/DKR2x-H6R0SS 39,214,618 80,311,537,152
DKR2x-H10RSS/DKS2x-H10RSS 65,357,696 133,852,561,920
DKS2x-H14RSS 91,500,776 187,393,590,272
SLB5x-M480SS 3,155,047 6,461,536,256
SLB5x-M960SS 6,310,095 12,923,076,096
SLB5x-M1R9SS/SLB5x-M1T9SS/ 12,620,191 25,846,152,192
SLM5x-M1T9SS
SLB5x-M3R8SS/SLR5x-M3R8SS/ 25,240,384 51,692,307,968
SLM5x-M3R8SS
SLB5x-M7R6SS/SLR5x-M7R6SS/ 50,480,770 103,384,615,936
SLM5x-M7R6SS
SLB5x-M15RSS/SLM5x-M15RSS 100,459,504 205,741,064,704
SLM5x-M30RSS 200,911,856 411,467,481,600
NFHAx-Q3R2SS 23,488,092 48,103,612,928
NFHAx-Q6R4SS 46,976,200 96,207,258,112
NFHAx-Q13RSS 93,952,402 192,414,519,808
SNB5x-R1R9NC/SNR5x-R1R9NC/ 12,620,192 25,846,153,984
SNM5x-R1R9NC
SNB5x-R3R8NC/SNR5x-R3R8NC/ 25,240,384 51,692,307,968
SNM5x-R3R8NC
SNB5x-R7R6NC/SNR5x-R7R6NC/ 50,480,769 103,384,615,936
SNM5x-R7R6NC
SNB5x-R15RNC/SNN5x-R15RNC/ 100,459,505 205,741,066,496
SNM5x-R15RNC
x: A, B, C, ...
(To be continued)

THEORY04-05-190

(Continued from the preceding page)


RAID Level   Drive Type   Capacity (MB)   Capacity (Logical Blocks)
RAID6 6D+2P DKR5x-J600SS/DKS5x-J600SS 3,298,149 6,754,609,152
DKR5x-J1R2SS/DKS5x-J1R2SS 6,596,300 13,509,221,376
DKS5x-J2R4SS 13,192,601 27,018,445,824
DKS2x-H6R0SS/DKR2x-H6R0SS 33,612,530 68,838,460,416
DKR2x-H10RSS/DKS2x-H10RSS 56,020,882 114,730,767,360
DKS2x-H14RSS 78,429,237 160,623,077,376
SLB5x-M480SS 2,704,326 5,538,459,648
SLB5x-M960SS 5,408,653 11,076,922,368
SLB5x-M1R9SS/SLB5x-M1T9SS/ 10,817,307 22,153,844,736
SLM5x-M1T9SS
SLB5x-M3R8SS/SLR5x-M3R8SS/ 21,634,615 44,307,692,544
SLM5x-M3R8SS
SLB5x-M7R6SS/SLR5x-M7R6SS/ 43,269,231 88,615,385,088
SLM5x-M7R6SS
SLB5x-M15RSS/SLM5x-M15RSS 86,108,146 176,349,484,032
SLM5x-M30RSS 172,210,162 352,686,412,800
NFHAx-Q3R2SS 20,132,651 41,231,668,224
NFHAx-Q6R4SS 40,265,315 82,463,364,096
NFHAx-Q13RSS 80,530,630 164,926,731,264
SNB5x-R1R9NC/SNR5x-R1R9NC/ 10,817,307 22,153,846,272
SNM5x-R1R9NC
SNB5x-R3R8NC/SNR5x-R3R8NC/ 21,634,615 44,307,692,544
SNM5x-R3R8NC
SNB5x-R7R6NC/SNR5x-R7R6NC/ 43,269,231 88,615,385,088
SNM5x-R7R6NC
SNB5x-R15RNC/SNN5x-R15RNC/ 86,108,147 176,349,485,568
SNM5x-R15RNC
x: A, B, C, ...
(To be continued)

THEORY04-05-191

(Continued from the preceding page)


RAID Level   Drive Type   Capacity (MB)   Capacity (Logical Blocks)
RAID6 12D+2P DKR5x-J600SS/DKS5x-J600SS 6,596,298 13,509,218,304
DKR5x-J1R2SS/DKS5x-J1R2SS 13,192,599 27,018,442,752
DKS5x-J2R4SS 26,385,201 54,036,891,648
DKS2x-H6R0SS/DKR2x-H6R0SS 67,225,059 137,676,920,832
DKR2x-H10RSS/DKS2x-H10RSS 112,041,765 229,461,534,720
DKS2x-H14RSS 156,858,474 321,246,154,752
SLB5x-M480SS 5,408,652 11,076,919,296
SLB5x-M960SS 10,817,307 22,153,844,736
SLB5x-M1R9SS/SLB5x-M1T9SS/ 21,634,614 44,307,689,472
SLM5x-M1T9SS
SLB5x-M3R8SS/SLR5x-M3R8SS/ 43,269,231 88,615,385,088
SLM5x-M3R8SS
SLB5x-M7R6SS/SLR5x-M7R6SS/ 86,538,462 177,230,770,176
SLM5x-M7R6SS
SLB5x-M15RSS/SLM5x-M15RSS 172,216,293 352,698,968,064
SLM5x-M30RSS 344,420,325 705,372,825,600
NFHAx-Q3R2SS 40,265,301 82,463,336,448
NFHAx-Q6R4SS 80,530,629 164,926,728,192
NFHAx-Q13RSS 161,061,261 329,853,462,528
SNB5x-R1R9NC/SNR5x-R1R9NC/ 21,634,615 44,307,692,544
SNM5x-R1R9NC
SNB5x-R3R8NC/SNR5x-R3R8NC/ 43,269,231 88,615,385,088
SNM5x-R3R8NC
SNB5x-R7R6NC/SNR5x-R7R6NC/ 86,538,462 177,230,770,176
SNM5x-R7R6NC
SNB5x-R15RNC/SNN5x-R15RNC/ 172,216,294 352,698,971,136
SNM5x-R15RNC
x: A, B, C, ...

THEORY04-05-200

(Continued from the preceding page)


RAID Level   Drive Type   Capacity (MB)   Capacity (Logical Blocks)
RAID6 14D+2P DKR5x-J600SS/DKS5x-J600SS 7,695,681 15,760,754,688
DKR5x-J1R2SS/DKS5x-J1R2SS 15,391,366 31,521,516,544
DKS5x-J2R4SS 30,782,735 63,043,040,256
DKS2x-H6R0SS/DKR2x-H6R0SS 78,429,236 160,623,074,304
DKR2x-H10RSS/DKS2x-H10RSS 130,715,392 267,705,123,840
DKS2x-H14RSS 183,001,553 374,787,180,544
SLB5x-M480SS 6,310,094 12,923,072,512
SLB5x-M960SS 12,620,191 25,846,152,192
SLB5x-M1R9SS/SLB5x-M1T9SS/ 25,240,383 51,692,304,384
SLM5x-M1T9SS
SLB5x-M3R8SS/SLR5x-M3R8SS/ 50,480,769 103,384,615,936
SLM5x-M3R8SS
SLB5x-M7R6SS/SLR5x-M7R6SS/ 100,961,539 206,769,231,872
SLM5x-M7R6SS
SLB5x-M15RSS/SLM5x-M15RSS 200,919,008 411,482,129,408
SLM5x-M30RSS 401,823,712 822,934,963,200
NFHAx-Q3R2SS 46,976,185 96,207,225,856
NFHAx-Q6R4SS 93,952,401 192,414,516,224
NFHAx-Q13RSS 187,904,804 384,829,039,616
SNB5x-R1R9NC/SNR5x-R1R9NC/ 25,240,384 51,692,307,968
SNM5x-R1R9NC
SNB5x-R3R8NC/SNR5x-R3R8NC/ 50,480,769 103,384,615,936
SNM5x-R3R8NC
SNB5x-R7R6NC/SNR5x-R7R6NC/ 100,961,539 206,769,231,872
SNM5x-R7R6NC
SNB5x-R15RNC/SNN5x-R15RNC/ 200,919,010 411,482,132,992
SNM5x-R15RNC
x: A, B, C, ...
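As a cross-check of the figures above (an illustrative sketch only, not part of the specification), the Python snippet below converts the logical-block counts to the MB values listed and shows how the usable capacity scales with the number of data drives. It assumes 512-byte logical blocks and treats the MB column as units of 2^20 bytes; both assumptions are consistent with every row shown.

    BLOCK_SIZE = 512                     # bytes per logical block (assumption)
    MIB = 1024 * 1024                    # the "MB" column is treated as 2^20 bytes

    def blocks_to_mb(blocks):
        # Convert a logical-block count into the MB figure used in the tables.
        return blocks * BLOCK_SIZE // MIB

    # RAID6 6D+2P with DKR5x-J600SS/DKS5x-J600SS (row from the 6D+2P table above).
    blocks_6d2p = 6_754_609_152
    per_data_drive = blocks_6d2p // 6    # usable blocks contributed by each data drive

    # Usable capacity scales with the number of data drives (parity drives excluded).
    blocks_12d2p = per_data_drive * 12   # 13,509,218,304 (matches the 12D+2P row)
    blocks_14d2p = per_data_drive * 14   # 15,760,754,688 (matches the 14D+2P row)

    print(blocks_to_mb(blocks_6d2p))     # 3298149
    print(blocks_to_mb(blocks_12d2p))    # 6596298
    print(blocks_to_mb(blocks_14d2p))    # 7695681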

THEORY04-05-201
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-05-210

4.5.5 SCSI Commands


4.5.5.1 Common to Fibre/iSCSI
The DASD commands defined under the SCSI-3 standards and those supported by the DKC are listed in
Table 4-29.

Table 4-29 SCSI-3 DASD commands and DKC-supported commands


Group Op Code Name of Command Type ✓: Supported Remarks
0 00H Test Unit Ready CTL/SNS ✓ —
(00H -1FH) 01H Rezero Unit CTL/SNS Nop —
03H Request Sense CTL/SNS ✓ —
04H Format Unit DIAG Nop —
07H Reassign Blocks DIAG Nop —
08H Read (6) RD/WR ✓ —
0AH Write (6) RD/WR ✓ —
0BH Seek (6) CTL/SNS Nop —
12H Inquiry CTL/SNS ✓ —
15H Mode Select (6) CTL/SNS ✓ —
16H Reserve CTL/SNS ✓ —
17H Release CTL/SNS ✓ —
1AH Mode Sense (6) CTL/SNS ✓ —
1BH Start/Stop Unit CTL/SNS Nop —
1CH Receive Diagnostic Results DIAG — —
1DH Send Diagnostic DIAG Nop Supported only for self-test.
1 25H Read Capacity (10) CTL/SNS ✓ —
(20H -3FH) 28H Read (10) RD/WR ✓ —
2AH Write (10) RD/WR ✓ —
2BH Seek (10) CTL/SNS Nop —
2EH Write And Verify (10) RD/WR ✓ Supported only for Write.
2FH Verify (10) RD/WR Nop —
35H Synchronize Cache (10) CTL/SNS Nop —
37H Read Defect Data (10) DIAG — No defect is always reported.
3BH Write Buffer DIAG ✓ —
3CH Read Buffer DIAG ✓ —
2 42H Unmap CTL/SNS ✓ —
(40H -5FH) 4DH Log Sense CTL/SNS ✓ —
55H Mode Select (10) CTL/SNS ✓ —
56H Reserve (10) CTL/SNS ✓ —
57H Release (10) CTL/SNS ✓ —
5AH Mode Sense (10) CTL/SNS ✓ —
5EH Persistent Reserve IN CTL/SNS ✓ —
5FH Persistent Reserve OUT CTL/SNS ✓ —
(To be continued)

THEORY04-05-210
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-05-220
(Continued from preceding page)
Group Op Code Name of Command Type ✓: Supported Remarks
3 83H/00H Extended Copy CTL/SNS ✓ —
(80H -9FH) 83H/11H Write Using Token CTL/SNS ✓ —
84H/03H Receive Copy Result CTL/SNS ✓ —
84H/07H Receive ROD Token Information CTL/SNS ✓ —
88H Read (16) RD/WR ✓ —
89H Compare and Write RD/WR ✓ —
8AH Write (16) RD/WR ✓ —
8EH Write And Verify (16) RD/WR ✓ Supported only for Write.
8FH Verify (16) RD/WR Nop —
91H Synchronize Cache (16) CTL/SNS Nop —
93H Write Same (16) RD/WR ✓ —
9EH/10H Read Capacity (16) CTL/SNS ✓ —
9EH/12H Get LBA Status CTL/SNS ✓ —
4 A0H Report LUN CTL/SNS ✓ —
(A0H -BFH) A3H/05H Report Device Identifier CTL/SNS ✓ —
A3H/0AH Report Target Port Groups CTL/SNS ✓ —
A3H/0BH Report Aliases CTL/SNS — —
A3H/0CH Report Supported Operation Codes CTL/SNS — —
A3H/0DH Report Supported Task Management Functions CTL/SNS — —
A3H/0EH Report Priority CTL/SNS — —
A3H/0FH Report Timestamp CTL/SNS — —
A4H/XXH Maintenance OUT CTL/SNS — —
A4H/06H Set Device Identifier CTL/SNS — —
A4H/0AH Set Target Port Groups CTL/SNS ✓ —
A4H/0BH Change Aliases CTL/SNS — —
A4H/0EH Set Priority CTL/SNS — —
A4H/0FH Set Timestamp CTL/SNS — —
A8H Read (12) RD/WR ✓ —
AAH Write (12) RD/WR ✓ —
AEH Write And Verify (12) RD/WR ✓ Supported only for Write.
AFH Verify (12) RD/WR Nop —
B7H Read Defect Data (12) CTL/SNS ✓ No defect is always reported.
5 E8H Read With Skip Mask (IBM-unique) CTL/SNS — —
(E0H -FFH) EAH Write With Skip Mask (IBM-unique) CTL/SNS — —
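For illustration only, the sketch below shows how a host-side tool might assemble two of the CDBs listed in Table 4-29, Inquiry (12H) and Read Capacity (16) (9EH/10H). The field layouts follow the general SCSI-3 definitions, not DKC internals, and the function names are hypothetical; actually issuing the CDBs would additionally require an OS-specific pass-through interface, which is omitted here.

    # Minimal sketch (not DKC code): building two CDBs listed in Table 4-29.
    # Sending them to a LUN requires an OS pass-through mechanism (not shown).

    def inquiry_cdb(alloc_len=96):
        # INQUIRY (op code 12H), standard inquiry data (EVPD = 0).
        return bytes([
            0x12,                     # operation code
            0x00,                     # EVPD = 0 -> standard inquiry data
            0x00,                     # page code (unused when EVPD = 0)
            (alloc_len >> 8) & 0xFF,  # allocation length (MSB)
            alloc_len & 0xFF,         # allocation length (LSB)
            0x00,                     # control
        ])

    def read_capacity_16_cdb(alloc_len=32):
        # READ CAPACITY (16): op code 9EH with service action 10H.
        cdb = bytearray(16)
        cdb[0] = 0x9E                              # operation code
        cdb[1] = 0x10                              # service action
        cdb[10:14] = alloc_len.to_bytes(4, "big")  # allocation length
        return bytes(cdb)

    print(inquiry_cdb().hex())           # 120000006000
    print(read_capacity_16_cdb().hex())  # 9e100000000000000000000000200000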

THEORY04-05-220
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-10

4.6 Outline of Hardware


DW850 is a new high-end midrange-class Storage System that offers enterprise-class performance.
Like the existing midrange-class Storage Systems, DW850 consists of a Controller Chassis and Drive Boxes, which are installed in a 19-inch rack.

Drives can be installed in the Controller Chassis and the Drive Box.
The maximum number of installable drives is shown below.
• VSP G130 : 96 (CBXSS + DBS x 3)
• VSP G350 : 264 (CBSS + DB60 x 4)
• VSP G370 : 384 (CBSS + DB60 x 6)
• VSP G700 : 1,200 (DB60 x 20)
• VSP G900 : 1,440 (DB60 x 24)
• VSP F350 : 192 (CBSS + DBS x 7)
• VSP F370 : 288 (CBSS + DBS x 11)
• VSP F700 : 864 (DBS x 36)
• VSP F900 : 1,152 (DBS x 48)
• VSP E990 : 96 (DBN x 4)
The Dual Controller configuration is adopted in the controller part that is installed in the Controller Chassis.
The Channel I/F supports only open systems and does not support Mainframe.
The Power Supply is single-phase AC 100 V/200 V for the VSP G130, G350 and G370 models, and single-phase AC 200 V for the VSP G700 and G900 models and the DB60.

Figure 4-4 DW850 Storage System

Drive Box

Controller Chassis (DKC)

For information about the service processor used with HDS VSP storage systems, refer to the Service
Processor (SVP) Technical Reference (FE-94HM8036).
THEORY04-06-10
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-20

4.6.1 Outline Features


1. Scalability
DW850 provides a variety of storage system configurations according to the types and the numbers of
selected options: Channel Boards, Cache Memory, Disk Drive, Flash Drive and Flash Module Drive
(FMD).
• Number of installed Channel options: 12 to 20
(Number of installed Channel option is 20 only when Channel Board Box is installed.)
• Capacity of Cache Memory: 32 GiB to 1,024 GiB
• Number of HDDs (VSP G900) : Up to 1,440 (When using DB60)

2. High-performance
• DW850 supports three types of high-speed Disk Drives with rotational speeds of 7,200 min-1, 10,000 min-1 and 15,000 min-1.
• DW850 supports Flash Drives with ultra high-speed response.
• DW850 supports Flash Module Drives (FMD) with ultra high-speed response and high capacity.
• High-speed data transfer between the DKB and the HDDs is achieved at a rate of 12 Gbps over the SAS interface.
• DW850 uses the latest Intel processors, which perform as well as those of the enterprise-class DKC810I.

3. Large Capacity
• DW850 supports Disk Drives with capacities of 600 GB, 1.2 TB, 2.4 TB, 6 TB, 10 TB and 14 TB.
• DW850 supports Flash Drives with capacities of 480 GB, 960 GB, 1.9 TB, 3.8 TB, 7.6 TB, 15 TB and
30 TB.
• DW850 supports Flash Module Drive (FMD) with capacities of 3.5 TB, 7 TB and 14 TB.
• DW850 supports Flash Drive (NVMe SSD) with capacities of 1.9 TB, 3.8 TB, 7.6 TB and 15 TB.
• DW850 controls up to 65,280 logical volumes and up to 1,440 Disk Drives and provides a physical
Disk capacity of approximately 14,098 TB per Storage System.

4. Flash Module Drive (FMD)


The FMD is a large-capacity Flash Drive realized by adopting a Hitachi original package.
Its interface is 12 Gbps SAS, the same as that of the HDD/SSD.
The FMD uses MLC/TLC NAND Flash Memory and features high performance, a long service life and superior cost performance by virtue of its original control methods.

THEORY04-06-20
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-30

5. Connectivity
DW850 supports the OSs of various UNIX servers and PC servers, so it can be used in heterogeneous system environments in which those various OSs coexist.
The platforms that can be connected are shown in the following table.

Table 4-30 Support OS Type


Manufacturer OS
HPE HP-UX
Tru64
OpenVMS
Oracle Solaris
IBM AIX 5L
Microsoft Windows
NOVELL NetWare
SUSE Linux
Red Hat Red Hat Linux
VMware ESX Server

A Channel interface supported by the DW850 is shown below.


• Fibre Channel
• iSCSI

6. High reliability
• DW850 supports RAID6 (6D+2P/12D+2P/14D+2P), RAID5 (3D+1P, 4D+1P, 6D+1P, 7D+1P) and
RAID1 (2D+2D/4D+4D).
• Main components are implemented in a duplexed or redundant configuration, so the Storage System can continue operating even when a single point of failure occurs in a component.
• However, while a failure of a Controller Board containing Cache Memory is being handled, the Channel ports and the Drive ports of the cluster concerned are blocked.

7. Non-disruptive maintenance
• Main components can be added, removed and replaced without shutting down the Storage System while it is in operation.
However, when the addition of the Cache Memory is executed, the Channel ports and the Drive ports
of the cluster concerned are blocked.
• The firmware can be upgraded without shutting down the Storage System.

THEORY04-06-30
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-40

4.6.2 External View of Hardware


DW850 consists of the Controller Chassis installed with Control Boards and the Drive Box to be installed
with various types of HDDs. The Controller Chassis and the Drive Box are mounted in a 19-inch rack.

1. VSP G130, G350, G370, G700, G900, VSP E990 models


There are five types of Drive Boxes.
• The Drive Box (DBS) can be installed with up to 24 2.5-inch HDDs.
• The Drive Box (DBL) can be installed with up to 12 3.5-inch HDDs.
• The Drive Box (DB60) can be installed with up to 60 3.5-inch HDDs. (Not supported on VSP G130.)
• The Drive Box (DBF) can be installed with up to 12 FMDs. (only VSP G700 and G900)
• The Drive Box (DBN) can be installed with up to 24 flash drives (NVMe SSD). (only VSP E990)

• The Drive Box (DBS), the Drive Box (DBL), the Drive Box (DB60) and the Drive Box (DBF) can be
mixed in the Storage System.
• The number of installable Drives changes depending on the Storage System models and Drive Boxes.

The size of each unit is as shown below.


• VSP G130, G350, G370 Controller Chassis : 2U
• VSP G700, G900, VSP E990 Controller Chassis : 4U
• Drive Boxes (DBS/DBL/DBF/DBN) : 2U
• Drive Box (DB60) : 4U
• The minimum configuration of the Storage System consists of one Controller Chassis and one Drive Box because DW850 allows free allocation of HDDs to make a RAID group. However, to configure the Storage System for the best performance, it is recommended to install or add the Drive Box (DBS/DBL/DBF) in units of four at a time or the Drive Box (DB60/DBN) in units of two at a time per Controller Chassis.

Figure 4-5 DW850 Configuration

(The figure shows a 19-inch rack with a 32U Drive Box addition area above the Controller Chassis: 2U for the VSP G130, G350, G370 models and 4U for the VSP G700, G900, VSP E990 models.)

THEORY04-06-40
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-50

This page is for editorial purpose only.

THEORY04-06-50
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-60

4.6.3 Hardware Architecture


1. Controller Chassis (DKC)
• The VSP G130 Controller Chassis (DKC) consists of Controller Board (CTL), Power Supply (DKCPS)
and Drives.
• The VSP G350, G370 Controller Chassis (DKC) consists of Controller Board (CTL), Channel Board
(CHB), Power Supply (DKCPS), Backup Module (BKM), Cache Flash Memory (CFM) and Drives.
• The VSP G700, G900, VSP E990 Controller Chassis (DKC) consists of Controller Board (CTL),
Channel Board (CHB), Disk Board (DKB), Power Supply (DKCPS), Backup Module (BKMF) and
Cache Flash Memory (CFM).
• The Cache Memory is installed in the Controller Boards.
• The Battery and the Cache Flash Memory are also installed in the Controller Boards to prevent data
loss in case of a power outage or the like.
• The Storage System continues to operate when a single point of failure occurs, by adopting a duplexed
configuration for each Controller Board (CTL, LANB, CHB, DKB) and the Power Supply Unit,
and a redundant configuration for the Power Supply Unit and the cooling fan. The addition and the
replacement of the components and the upgrading of the firmware can be processed while the Storage
System is in operation. However, when performing the maintenance and replacement of the Controller
Boards, the Channel Boards and the Disk Boards in the cluster are blocked.

2. Drive Box (DBS)


• The Drive Box (DBS) is a chassis to install the 2.5-inch Disk Drives and the 2.5-inch Flash Drives, and
it consists of ENC and the integrated cooling fan power supply.
• The duplex configuration is adopted in ENC and Power Supply Unit, and the redundant configuration
is adopted in Power Supply Unit and the cooling fan. All the components can be replaced and added
while the Storage System is in operation.

3. Drive Box (DBL)


• The Drive Box (DBL) is a chassis to install the 3.5-inch Disk Drives and it consists of ENC and the
integrated cooling fan power supply.
• The duplex configuration is adopted in ENC and Power Supply Unit, and the redundant configuration
is adopted in Power Supply Unit and the cooling fan. All the components can be replaced and added
while the Storage System is in operation.

4. Drive Box (DB60)


• The Drive Box (DB60) is a chassis to install the 2.5/3.5-inch Disk Drives and it consists of ENC and
the integrated cooling fan power supply.
• The duplex configuration is adopted in ENC and Power Supply Unit, and the redundant configuration
is adopted in Power Supply Unit and the cooling fan. All the components can be replaced and added
while the Storage System is in operation.

THEORY04-06-60
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-70

5. Drive Box (DBF)


• The Drive Box (DBF) is a chassis to install the Flash Module Drives (FMD), and it consists of ENC and the integrated cooling fan power supply.
• The duplex configuration is adopted in ENC and Power Supply Unit, and the redundant configuration
is adopted in Power Supply Unit and the cooling fan. All the components can be replaced and added
while the Storage System is in operation.

6. Drive Box (DBN)


• The Drive Box (DBN) is a chassis to install the 2.5-inch Flash Drives with NVMe interface, and it
consists of two ENCs and two integrated cooling fan power supplies.
• The duplex configuration is adopted in ENC and Power Supply Unit, and the redundant configuration
is adopted in Power Supply Unit and the cooling fan. All the components can be replaced and added
while the Storage System is in operation.

7. Channel Board Box (CHBB)


• The Channel Board Box (CHBB) is a chassis to install the channel options, and consists of PCIe-cable
Connecting Package (PCP), Power Supply and Switch Package (SWPK).
• Channel Board Box (CHBB) can connect only to VSP G900 model, VSP F900 model and VSP E990
model.
• The duplex configuration is adopted in SWPK and Power Supply Unit, and the redundant configuration
is adopted in Power Supply Unit. All the components can be replaced and added while the storage
system is in operation.

THEORY04-06-70
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-80

Figure 4-6 DW850 Hardware Configuration Overviews (Back-end SAS)

(The figure shows the Controller Chassis connected to Drive Boxes 00 to 03 over SAS drive paths (four paths each, 12 Gbps per port), with links continuing to the next Drive Boxes in the chain. Each Drive Box contains HDDs, duplexed ENCs and duplexed Power Supply Units fed from the PDUs (AC input). The Controller Chassis contains duplexed Controller Boards (CTL with DIMMs, BKM/BKMF and CFM), LANBs, DKB-1/DKB-2, CHBs, GCTL+GUM and duplexed Power Supply Units; hosts connect through the Fibre Channel/iSCSI interfaces on the CHBs.)

THEORY04-06-80
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-81

Figure 4-7 DW850 Hardware Configuration Overviews (Back-end NVMe)

(The figure shows the Controller Chassis connected to Drive Boxes 00 and 01 over NVMe drive paths (four paths each, 8 Gbps per port). Each Drive Box contains drives, duplexed ENCs and duplexed Power Supply Units fed from the PDUs (AC input). The Controller Chassis contains duplexed Controller Boards (CTL with DIMMs, BKM/BKMF and CFM), LANBs, DKB-1/DKB-2, CHBs, GCTL+GUM and duplexed Power Supply Units; hosts connect through the Fibre Channel/iSCSI interfaces on the CHBs.)

THEORY04-06-81
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-90

8. Drive Path
(1) When using 2.5-inch HDD (SFF)
DW850 controls 1,152 HDDs with eight paths.

Figure 4-8 Drive Path Connection Overview when using 2.5-inch Drives

(The figure shows DB00 to DB23 (Basic) and DB24 to DB47 (Optional), each with 24 HDDs per DB, connected to the CBL.)

(2) When using 3.5-inch HDD (LFF)


DW850 controls 576 HDDs with eight paths.

Figure 4-9 Drive Path Connection Overview when using 3.5-inch Drives

(The figure shows DB00 to DB23 (Basic) and DB24 to DB47 (Optional), each with 12 HDDs per DB, connected to the CBL.)

THEORY04-06-90
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-100

(3) When using 3.5-inch HDD


DW850 controls 1,440 HDDs with eight paths.

Figure 4-10 Drive Path Connection Overview when using 3.5-inch Drives

(The figure shows DB00 to DB11 (Basic) and DB12 to DB23 (Optional), each with 60 HDDs per DB, connected to the CBL.)

NOTICE: Up to six DB60 can be installed in a rack. Up to five DB60 can be installed in a rack
when a DKC (H model) is installed there.
Install the DB60 at a height of 1,300 mm or less above the ground (at a range
between 2U and 26U).

(4) When using Flash Module Drive (FMD) (DBF)


DW850 controls 576 FMDs with eight paths.

Figure 4-11 Drive Path Connection Overview when using FMDs (DBF)

(The figure shows DB00 to DB23 (Basic) and DB24 to DB47 (Optional), each with 12 FMDs per DB, connected to the CBL.)

THEORY04-06-100
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-101

(5) When using Flash Drive (NVMe SSD) (DBN)


DW850 controls 96 Flash Drives (NVMe SSD) with 16 paths.

Figure 4-12 Drive Path Connection Overview when using Flash Drives (NVMe SSD) (DBN)

(The figure shows DB00 and DB01 (Basic) and DB02 and DB03 (Optional), each with 24 SSDs per DB, connected to the CBL.)
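The maximum drive counts quoted in (1) to (5) above follow from the number of Drive Boxes on the back-end paths multiplied by the drives per Drive Box. The short sketch below is an illustrative restatement of that arithmetic only.

    # Worked check of the drive counts quoted in (1) to (5) above.
    # maximum drives = number of Drive Boxes x drives per Drive Box

    configs = {
        "2.5-inch HDD (DBS, 8 paths)":  (48, 24),   # DB00-DB47, 24 HDDs/DB -> 1,152
        "3.5-inch HDD (DBL, 8 paths)":  (48, 12),   # DB00-DB47, 12 HDDs/DB -> 576
        "3.5-inch HDD (DB60, 8 paths)": (24, 60),   # DB00-DB23, 60 HDDs/DB -> 1,440
        "FMD (DBF, 8 paths)":           (48, 12),   # DB00-DB47, 12 FMDs/DB -> 576
        "NVMe SSD (DBN, 16 paths)":     (4, 24),    # DB00-DB03, 24 SSDs/DB -> 96
    }

    for name, (boxes, drives_per_box) in configs.items():
        print(f"{name}: {boxes * drives_per_box} drives")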

THEORY04-06-101
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-110

4.6.4 Hardware Component


1. Controller Chassis (DKC)
(1) VSP G130 Model
The Controller Chassis for the VSP G130 model has a Controller Board (CTL), Power Supply (PS)
and Disk Drives.

Figure 4-13 Controller Chassis (CBXSS)


Front View Rear View

Controller Board
HDD

Controller Chassis LAN Controller Chassis


Channel SAS PS

Figure 4-14 Controller Chassis (CBXSL)

Front View Rear View

Controller Board
HDD

Controller Chassis LAN Controller Chassis


Channel SAS PS

THEORY04-06-110
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-111

(2) VSP G350, G370 Model


The Controller Chassis for the VSP G350, G370 model has a Controller Board (CTL), Channel
Board (CHB), Power Supply (PS), Cache Flash Memories (CFMs) and Disk Drives.
• Channel Boards (CHB) is installed two or more. The addition unit of Channel Boards (CHB) is
two. A maximum of four Channel Boards is installable.

Figure 4-15 Controller Chassis (CBSS1/CBSS2)


Front View Rear View

Battery
CHB
HDD

CFM
Controller Chassis LAN Controller Chassis
UPS SAS PS

Figure 4-16 Controller Chassis (CBSL1/CBSL2)

Front View Rear View

Battery
CHB
HDD

CFM
Controller Chassis LAN Controller Chassis
UPS SAS PS

THEORY04-06-111
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-120

(3) VSP G700, G900, VSP E990 Model


The Controller Chassis of the VSP G700, G900, VSP E990 model installs Controller Board (CTL),
Channel Board (CHB), Power Supply (PS) and Cache Flash Memories (CFMs).
• When installing Disk Drives (HDD) in the VSP G700, G900 models, Disk Boards (DKB) need to be installed in the Controller Chassis. In a disk-less configuration, Disk Boards (DKB) are not required.
• Up to four Disk Boards (DKBs) (for the VSP G700 model) or eight DKBs (for the VSP G900
model) can be installed in the Disk Board slots. For the VSP G900 model, the DKBs are installed
in increments of two boards per Controller Board (four boards per Controller Chassis).
• For the VSP E990 model, up to eight Disk Boards (DKBNs) can be installed in the Disk Board
slots. The DKBNs are installed in increments of two boards per Controller Board (four boards per
Controller Chassis).
• For the VSP G700, G900, and VSP E990 models, up to 12 Channel Boards (CHBs) can be
installed. In the disk-less configuration of VSP G700 and G900 models, up to 16 CHBs (up to 20
CHBs when the Channel Board Box is installed) can be installed.

Figure 4-17 Controller Chassis (VSP G700, G900, VSP E990 model)

Front View Rear View

Controller Board 2
CHB-2
PS-1

Controller Chassis Controller Chassis


CHB-1 PS-2
Controller Board 1 CFM DKB VSP G700 model

CHB-2

PS-1

CHB-1 Controller Chassis


PS-2
DKB VSP G900, VSP E990 model

THEORY04-06-120
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-130

Figure 4-18 Channel Board Box (CHBB)

PCP1 PCP2
CHBBPS2

Rear view of CHBB


CHBBPS1

THEORY04-06-130
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-140

2. Controller Boards (CTL)


The Controller Board has Cache Memories (DIMMs) and Cache Flash Memories (CFMs).

Table 4-31 Controller Boards Specifications


Support Model VSP G130 VSP G350 VSP G370
Model Name DW-F850- DW-F850- DW-F850- DW-F850- DW-F850-
CTLXS CTLS CTLSE CTLSH CTLSHE
Number of PCB 1 1 1
Necessary number of 2 2 2
PCB per Controller
Chassis
Number of DIMM 1 2 2
slot
Cache Memory 16 GiB 32 GiB to 64 GiB 64 GiB to 128 GiB
Capacity
Data encryption Not supported Not supported Supported Not supported Supported

Support Model VSP G700 VSP G900 VSP E990


Model Name DW-F850- DW-F850- DW-F850-
CTLM CTLH CTLHN
Number of PCB 1 1 1
Necessary number of 2 2 2
PCB per Controller
Chassis
Number of DIMM 8 8 8
slot
Cache Memory 64 GiB to 256 128 GiB to 512 128 GiB to 512
Capacity GiB GiB GiB
Data encryption - - -

THEORY04-06-140
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-141

Figure 4-19 Top of Controller Board (VSP G130 Model)

(Top view of the Controller Board, showing DIMM location DIMM00.)
• DIMM00 belongs to CMG0 (Cache Memory Group 0).
• Make the other Controller Board be the same addition configuration.

Figure 4-20 Top of Controller Board (VSP G350, G370 Model)

(Top view of the Controller Board, showing DIMM locations DIMM00 and DIMM01.)
• DIMM00 and DIMM01 belong to CMG0 (Cache Memory Group 0).
• Be sure to install the DIMM in CMG0.
• Install the same capacity of DIMMs by a set of two.
• Furthermore, make the other Controller Board be the same addition configuration.

THEORY04-06-141
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-150

Figure 4-21 Top of Controller Board (VSP G700, G900, VSP E990 Model)

(Top view of the Controller Board, showing DIMM locations DIMM00 to DIMM03 and DIMM10 to DIMM13.)
• The DIMM with the DIMM location number DIMM0x belongs to CMG0 (Cache Memory Group 0) and the DIMM with DIMM1x belongs to CMG1 (Cache Memory Group 1).
• Be sure to install the DIMM in CMG0.
• Install the same capacity of DIMMs by a set of four.
• CMG1 is a slot for adding DIMMs.
• Furthermore, make the other Controller Board be the same addition configuration.

THEORY04-06-150
Hitachi Proprietary DW850
Rev.6.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-160

CFM/Battery addition for the VSP G700, G900 and VSP E990 models is as shown below.


• VSP G700 model :
When the DIMM capacity per CTL is 256 GiB or more, add CFMs to CFM-11/21.
• VSP G900, VSP E990 model :
With 32 GiB DIMMs, when the DIMM capacity per CTL is 256 GiB or more, add CFMs to CFM-11/21.
With 64 GiB DIMMs, when the DIMM capacity per CTL is 512 GiB or more, add CFMs to CFM-11/21.

The installable CFMs are shown below by models.


• VSP G130 model : BM05
• VSP G350,G370 model : BM15
• VSP G700 model : BM35
• VSP G900 model : BM35 or BM45
• VSP E990 model : BM55, BM65, BM5E or BM6E
NOTE : • It is necessary to match the type (model name) of CFM-10/20 and CFM-11/21
(addition side).
When adding Cache Memories, check the model name of CFM-10/20 and add the
same model.
• When replacing Cache Flash Memories, it is necessary to match the type (model
name) defined in the configuration information.
Example: When the configuration information is defined as BM35, replacing to
BM45 is impossible.

Table 4-32 Controller Board (VSP G130 Model)


Model | DIMM Capacity (GiB) | Number of DIMMs/CTL | Capacity of DIMMs (GiB)/CTL | Capacity of DIMMs (GiB)/System | Types of CFM installed in CFM-1/2 | Number of Batteries Installed in System (BAT-1/2)
VSP G130 16 1 16 32 BM05 2

Figure 4-22 Controller Board (VSP G130 Model)

Cover Controller Board

CFM

FAN

Battery

THEORY04-06-160
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-161

Table 4-33 Correspondence List of DIMM Capacity and CFM, BKM (VSP G350, G370
model)
Model | DIMM Capacity (GiB) | Number of DIMMs/CTL | Capacity of DIMMs (GiB)/CTL | Capacity of DIMMs (GiB)/System | Types of CFM installed in CFM-1/2 | Number of Batteries Installed in System (BAT-1/2)
VSP G370 64 2 128 256 BM15 2
32 2 64 128 BM15 2
VSP G350 32 2 64 128 BM15 2
16 2 32 64 BM15 2

Figure 4-23 Controller Board (VSP G350, G370 Model)


FAN

Controller Board

Battery
CFM

THEORY04-06-161
Hitachi Proprietary DW850
Rev.6.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-170

Table 4-34 Correspondence List of DIMM Capacity and CFM, BKMF (VSP G700, G900, VSP
E990 model)
Model | DIMM Capacity (GiB) | Number of DIMMs/CTL | Capacity of DIMMs (GiB)/CTL | Capacity of DIMMs (GiB)/System | Types of CFM installed in CFM-10/20 (*2) | Types of CFM installed in CFM-11/21 (*2) | Number of Batteries Installed in System (*1)
VSP E990 64 8 512 1,024 BM65/BM6E BM65/BM6E 6
4 256 512 BM65/BM6E - 6
32 8 256 512 BM55/BM5E BM55/BM5E 6
4 128 256 BM55/BM5E - 6
VSP G900 64 8 512 1,024 BM45 BM45 6
4 256 512 BM45 - 6
32 8 256 512 BM35 BM35 6
4 128 256 BM35 - 6
VSP G700 32 8 256 512 BM35 BM35 6
4 128 256 BM35 - 6
16 8 128 256 BM35 - 6
4 64 128 BM35 - 6
*1 : (BKMF-x1/x2/x3)
*2 : • It is necessary to match the type (model name) of CFM-10/20 and CFM-11/21 (additional side).
When adding Cache Memories, check the model name of CFM-10/20 and add the same model.
• When replacing Cache Memories, it is necessary to match the type (model name) defined in the
configuration information.
Example: When the configuration information is defined as BM35, replacing to BM45 is
impossible.

NOTICE: Adding a battery for BKMF-10 and BKMF-20 is impossible.
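As a reading aid only (not shipped maintenance logic), the sketch below restates the CFM addition rule above and Table 4-34 in code; the model-to-CFM mapping and the capacity thresholds are taken directly from this section, and the function name is hypothetical.

    # Illustrative encoding of the CFM addition rule and Table 4-34 (reading aid only).

    CFM_TYPE = {                      # CFM model per (storage model, DIMM size in GiB)
        ("VSP G700", 16): "BM35",
        ("VSP G700", 32): "BM35",
        ("VSP G900", 32): "BM35",
        ("VSP G900", 64): "BM45",
        ("VSP E990", 32): "BM55/BM5E",
        ("VSP E990", 64): "BM65/BM6E",
    }

    def cfm_plan(model, dimm_gib, dimms_per_ctl):
        capacity_per_ctl = dimm_gib * dimms_per_ctl
        cfm = CFM_TYPE[(model, dimm_gib)]
        if model == "VSP G700":
            add_cfm_11_21 = capacity_per_ctl >= 256
        else:  # VSP G900 / VSP E990: threshold depends on the DIMM size in use
            add_cfm_11_21 = capacity_per_ctl >= (256 if dimm_gib == 32 else 512)
        return {
            "capacity per CTL (GiB)": capacity_per_ctl,
            "CFM-10/20": cfm,
            "CFM-11/21": cfm if add_cfm_11_21 else None,   # None: slot left empty
        }

    print(cfm_plan("VSP G900", 64, 8))   # 512 GiB/CTL -> BM45 in both CFM slots
    print(cfm_plan("VSP G700", 16, 8))   # 128 GiB/CTL -> BM35 in CFM-10/20 only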

THEORY04-06-170
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-180

Figure 4-24 Controller Board (VSP G700, G900, VSP E990 Model)

Controller Board

Battery
BKMF
CFM

THEORY04-06-180
Hitachi Proprietary DW850
Rev.7.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-190

3. Cache Memory (DIMM)


DW850 can use three types of DIMM capacity.

Table 4-35 Cache Memory Specifications


Capacity Component Model Number
16 GiB 16 GiB RDIMM × 1 DW-F850-CM16G
32 GiB 32 GiB RDIMM × 1 DW-F850-CM32G
64 GiB 64 GiB RDIMM × 1 DW-F850-CM64G
64 GiB LRDIMM × 1 DW-F850-CM64GL

4. Cache Flash Memory (CFM)


The Cache Flash Memory saves the Cache Memory data when a power failure occurs.

5. Battery
(1) The battery for the data saving is installed on each Controller Board in DW850.
• When the power failure continues for more than 20 milliseconds, the Storage System uses power
from the batteries to back up the Cache Memory data and the Storage System configuration data
onto the Cache Flash Memory.
• Environmentally friendly nickel hydride battery is used for the Storage System.

Figure 4-25 Data Backup Process

(The figure shows the data backup sequence: a power failure occurs; when it continues for 20 ms, the failure is detected and the Storage System switches from the operating state to the data backup mode (*1). The Cache Memory data and the Storage System configuration data are then backed up onto the Cache Flash Memory. Backing up the data takes a maximum of 8.5 minutes on the VSP G130, G350, G370 models and a maximum of 13 minutes on the VSP G700, G900 models.)

*1: The data backup processing is continued when the power outage is restored while the data is being backed up.

THEORY04-06-190
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-200

(2) Installation Location of Battery


The Storage System has the Batteries shown in Figure 4-26, Figure 4-27 and Figure 4-28.

Figure 4-26 Battery Location (CBLH/CBLHN)

Controller Board 2

Controller Board 1
Front view of CBLH

Controller Board

Battery

BKMF

Figure 4-27 Battery Location (CBSS/CBSL)

Controller Board 1 Controller Board 2

Rear view of CBSS/CBSL

Battery

Controller Board

BKM

THEORY04-06-200
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-210

Figure 4-28 Battery Location (CBXSS/CBXSL)

Controller Board 1 Controller Board 2

Rear view of CBXSS/CBXSL

Cover Controller Board

Battery

(3) Battery lifetime


The battery lifetime is affected by the battery temperature. Because the battery temperature changes depending on the intake temperature and installation height of the Storage System, the configuration and operation of the Controllers and Drives (VSP G130, G350, G370 only), the charge-discharge count, variation in parts and other factors, the battery lifetime ranges from two to five years.
The battery lifetime (expected value) in the standard environment is as shown below.

Storage System Intake Temperature   CBLH/CBLHN   CBXSS/CBSS   CBXSL/CBSL
Up to 24 degrees Celsius            5 years      5 years      5 years
Up to 30 degrees Celsius            5 years      5 years      4 years
Up to 34 degrees Celsius            4 years      4 years      3 years
Up to 40 degrees Celsius            3 years      3 years      2 years

THEORY04-06-210
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-220

(4) Relation between Battery Charge Level and System Startup Action
No. Power Status Battery Charge Level System Startup Action
1 PS ON <Case1> The system does not start up until the battery
The battery charge level of both charge level of either or both of the Controller
the Controller Boards is below Boards becomes 30% or more. (It takes a maximum
30%. of 90 minutes (*2).) (*1)
2 <Case2> SIM that shows the lack of battery charge is
The battery charge level of both reported and the system starts up.
the Controller Boards is below I/O is executed by the pseudo through operation
50%. until the battery charge level of either or both of the
(In the case other than Case1) Controller Boards becomes 50% or more. (It takes
a maximum of 60 minutes (*2).)
3 <Case3> The system starts up normally.
Other than <Case1>, <Case2>. If the condition changed from Case2 to Case3
(The battery charge level of during startup, SIM that shows the completion of
either or both of the Controller battery charge is reported.
Boards is 50% or more.)
*1: Action when System Option Mode 837 is off (default setting).
*2: Battery charge time: 4.5 hours to charge from 0% to 100%.
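As a reading aid only (not the actual DKC firmware logic), the sketch below restates the startup decision of the table above in code. It assumes the default setting in which System Option Mode 837 is off, as noted in (*1), and the function name is hypothetical.

    # Illustrative restatement of the startup rules above (not actual DKC logic).
    # Assumes System Option Mode 837 is off (default), per footnote *1.

    def startup_action(charge_ctl1, charge_ctl2):
        # Return the system startup action for the given battery charge levels (%).
        if charge_ctl1 < 30 and charge_ctl2 < 30:
            # Case 1: wait (up to about 90 minutes) until either board reaches 30%.
            return "wait until a Controller Board reaches 30% charge, then start"
        if charge_ctl1 < 50 and charge_ctl2 < 50:
            # Case 2: report the low-charge SIM; run I/O in pseudo through operation
            # (up to about 60 minutes) until either board reaches 50%.
            return "start with SIM report; pseudo through operation until 50% charge"
        # Case 3: at least one Controller Board is at 50% or more.
        return "normal startup"

    print(startup_action(25, 20))  # Case 1
    print(startup_action(45, 35))  # Case 2
    print(startup_action(60, 20))  # Case 3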

(5) Relation between Power Status and SM/CM Data Backup Methods
No. Power Status SM/CM Data Backup Methods Data Restore Methods during
Restart
1 PS OFF (planned power off) SM data (including CM directory SM data is restored from CFM.
information) is stored in CFM If CM data was stored, CM data is
before PS OFF is completed. also restored from CFM.
If PIN data exists, all the CM data
including PIN data is also stored.
2 When power Instant power If power is recovered in a moment, SM/CM data in memory is used.
outage occurs outage SM/CM data remains in memory
and is not stored in CFM.
3 Power outage All the SM/CM data is stored in All the SM/CM data is restored
while the CFM. from CFM.
system is in If a power outage occurred after If CM data was not stored, only
operation the system started up in the CM data is volatilized and the
condition of Case2 (the battery system starts up.
charge level of both the Controller
Boards had been below 50%),
only SM data is stored.
4 Power outage Data storing in CFM is not done. The data that was stored in the
while the (The latest backup data that was latest power off operation or
system is successfully stored remains.) power outage is restored from
starting up CFM.

THEORY04-06-220
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-230

(6) Action When CFM Error Occurs


No. DKC Status Description of Error Action When Error Occurs
1 In operation CFM error or data comparing error • CFM Failure SIMRC = 30750x
was detected at the time of CFM (Environmental error: CFM Failure) is
health check (*1). output.
2 Planed power CFM error was detected, and • DKC power off process is executed.
off or power moreover, retry failed four times • Blockage occurs in Controller Board or
outage during data storing. CMG in Controller Board depending
• Data storing error is managed in a on the location of the failed memory.
per module group (MG) basis and For details, refer to “2.5.2 Maintenance/
is classified into data storing error Failure Blockade Specification”.
only in the MG concerned and data
storing error in all the MG depending
on the location of the failed memory.
3 When powered CFM error or protection cord (*2) • Blockage occurs in Controller Board or
on -1 error occurred during data restoring. CMG in Controller Board depending on
(In the case that the location of the failed memory.
data storing was • If the failed memory is in CMG0, the
successfully Controller Board concerned becomes
done in No.2) blocked. If the failed memory is in CMG1,
the CACHE concerned is volatilized and
the system starts up.
(If data in the other Controller Board can
be restored, the data is not lost.)
4 When powered — • Blockage occurs in Controller Board or
on -2 CMG in Controller Board depending on
(In the case that the location of data storing error. (Same as
data storing described in No.2.)
failed in No.2)
*1: CFM health check: Function that executes the test of read and write of a certain amount of data at
specified intervals to CFM while the DKC is in operation.
*2: Protection code: The protection code (CRC) is generated and saved onto CFM at the time of data
storing in CFM and is checked at the time of data restoring.
NOTE: CFM handles only the data in the Controller Board in which it is installed.

e.g.: Cache data in CTL1 is not stored in CFM which is installed in CTL2.
Similarly, CFM data in the CTL1 is not restored to Cache Memory in CTL2.

(7) Notes during Planned Power Off (PS OFF)


Removing the Controller Board while the system is off and the breakers on the PDU are on may result in <Case1> of (4) because of the loss of battery charge.
Therefore, to remove the board and the battery, replace them while the system is on, or remove them after the breakers on the PDU are powered off.

THEORY04-06-230
Hitachi Proprietary DW850
Rev.7 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-240

6. Disk Board (DKB, DKBN, EDKBN)


The Disk Board (DKB) controls data transfer between the Drives and the Cache Memory. Controllers 1 and 2 should have the same Disk Board configuration.

Table 4-36 Disk Board Specifications


Model Number VSP G700, G900
DKB
DW-F800-BS12G DW-F800-BS12GE
Number of PCB 1 1
Necessary number of PCB per Controller Chassis 1 1
Data Encryption Not Supported Supported
Performance of SAS Port 12 Gbps 12 Gbps
Performance of NVMe Port - -

Model Number VSP E990


DKBN EDKBN
DKC-F910I-BN8G DKC-F910I-BN8GE
Number of PCB 1 1
Necessary number of PCB per Controller Chassis 1 1
Data Encryption Not Supported Supported
Performance of SAS Port - -
Performance of NVMe Port 8 Gbps 8 Gbps

Table 4-37 The Number of Installed DKBs and SAS Ports by Model
Item VSP G130, G350, G370 VSP G700 VSP G900
Number of DKB Built into CTL 2 piece / cluster 2, 4 piece / cluster
(4 piece / system) (4, 8 piece / system)
Number of DKBN/ - - -
EDKBN
Number of SAS Port 1 port / cluster 4 port / cluster 4, 8 port / cluster
(2 port / system) (8 port / system) (8, 16 port / system)
Number of NVMe Port - - -

Item VSP E990


Number of DKB -
Number of DKBN/ 2, 4 piece / cluster
EDKBN (4, 8 piece / system)
Number of SAS Port -
Number of NVMe Port 4, 8 port / cluster
(8, 16 port / system)
The VSP G700, G900 and VSP E990 model also supports the HDD-less configuration without DKB
installed.
THEORY04-06-240
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-250

7. Channel Board (CHB)


The Channel Board controls data transfer between the upper host and the Cache Memory.
The following CHBs are supported. The addition is common to the VSP G350, G370, G700, G900 and VSP E990 models.
The number and types of installed CHBs must be the same between clusters.
The Channel Board (CHB) for the VSP G130 is not an independent package board (PCB). It is integrated
in the Controller Board of the VSP G130.

Table 4-38 Types CHB


Type Option Name
32 G 4Port FC DW-F800-4HF32R
10 G 2Port iSCSI (Optic) DW-F800-2HS10S
10 G 2Port iSCSI (Copper) DW-F800-2HS10B

Table 4-39 Types CHB Function for VSP G130


Type Option Name
16G 2Port FC DW-F850-CTLXSFA
10G 2Port iSCSI (Optic) DW-F850-CTLXSSA
10G 2Port iSCSI (Copper) DW-F850-CTLXSCA

The number of installable CHBs is shown below.

Table 4-40 The Number of Installable CHBs by Model (VSP G130, G350, G370)
Item VSP G130 VSP G350, G370
Minimum installable Built into CTL 1 piece/cluster
number 2 port/cluster (2 piece/system)
Maximum installable (4 port/system) 2 piece/cluster
number (HDD) (4 piece/system)
Maximum installable 2 piece/cluster
number (HDD less) (4 piece/system)

Table 4-41 The Number of Installable CHBs by Model (VSP G700)


Item VSP G700
Minimum installable 1 piece/cluster
number (2 piece/system)
Maximum installable 6 piece/cluster
number (HDD) (12 piece/system)
Maximum installable 8 piece/cluster
number (HDD less) (16 piece/system)

THEORY04-06-250
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-260

Table 4-42 The Number of Installable CHBs by Model (VSP G900, VSP E990)
Item VSP G900, VSP E990
CHBB is not installed CHBB is installed
Minimum installable 1 piece/cluster 2 piece/cluster
number (2 piece/system) (4 piece/system)
Maximum installable 4 piece/cluster 6 piece/cluster
number (HDD) (8 piece/system) (*1) (12 piece/system) (*1)
6 piece/cluster 8 piece/cluster
(12 piece/system) (16 piece/system)
Maximum installable 8 piece/cluster 10 piece/cluster
number (HDD less) (16 piece/system) (20 piece/system)
*1: When installing four DKBs per cluster.

THEORY04-06-260
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-270

The CHB for Fibre Channel connection can support Shortwave or Longwave on a per-port basis by selecting the transceiver installed in each port.
Note that each CHB port is fitted with a Shortwave transceiver as standard.
To change a port to Longwave, the addition of DKC-F810I-1PL16 (SFP for 16 Gbps Longwave) is required.

Table 4-43 Maximum cable length (Fibre Channel, Shortwave)


Item Maximum cable length
Data Transfer Rate | OM2 (50/125 µm multi-mode fibre) | OM3 (50/125 µm laser optimized multi-mode fibre) | OM4 (50/125 µm laser optimized multi-mode fibre)
400 MB/s 150 m 380 m 400 m
800 MB/s 50 m 150 m 190 m
1600 MB/s 35 m 100 m 125 m
3200 MB/s 20 m 70 m 100 m

Table 4-44 Maximum cable length (iSCSI, Shortwave)


Item Maximum cable length
Data Transfer Rate | OM2 (50/125 µm multi-mode fibre) | OM3 (50/125 µm laser optimized multi-mode fibre) | OM4 (50/125 µm laser optimized multi-mode fibre)
1000 MB/s 82 m 300 m 550 m

Table 4-45 Maximum cable length ( iSCSI (Copper))


Item Maximum cable length
Data Transfer Rate | Cable type: Category 5e LAN cable (corresponding transmission band: 1000BASE-T) | Cable type: Category 6a LAN cable (corresponding transmission band: 10GBASE-T)
100 MB/s 100 m 100 m
1000 MB/s - 50 m
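For quick reference, the sketch below encodes Table 4-43 (Fibre Channel, Shortwave) as a simple lookup; the figures are exactly those listed above, and the structure itself is illustrative only.

    # Illustrative lookup of Table 4-43 (Fibre Channel, Shortwave), in metres.

    MAX_CABLE_LENGTH_M = {
        # data transfer rate (MB/s): {fibre grade: maximum cable length in m}
        400:  {"OM2": 150, "OM3": 380, "OM4": 400},
        800:  {"OM2": 50,  "OM3": 150, "OM4": 190},
        1600: {"OM2": 35,  "OM3": 100, "OM4": 125},
        3200: {"OM2": 20,  "OM3": 70,  "OM4": 100},
    }

    def max_length(rate_mb_s, fibre):
        return MAX_CABLE_LENGTH_M[rate_mb_s][fibre]

    print(max_length(3200, "OM3"))   # 70 m for a 3200 MB/s link over OM3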

THEORY04-06-270
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-280

8. Drive Box (DBS)


The Drive Box (DBS) is a chassis to install the 2.5-inch Disk Drives and the 2.5-inch Flash Drives, and
consists of two ENCs and two Power Supplies with a built-in cooling fan.

Figure 4-29 Drive Box (DBS)


Front View Rear View

Power Supply with a


SFF HDD
ENC built-in cooling fan

24 SFF HDDs can be installed. ENC and Power Supply take a duplex configuration.

9. Drive Box (DBL)


The Drive Box (DBL) is a chassis to install the 3.5-inch Disk Drives and consists of two ENCs and two
Power Supplies with a built-in cooling fan.

Figure 4-30 Drive Box (DBL)


Front View Rear View

Power Supply with a


LFF HDD ENC
built-in cooling fan

12 LFF HDDs can be installed. ENC and Power Supply take a duplex configuration.

THEORY04-06-280
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-290

10. Drive Box (DB60)


The Drive Box (DB60) is a chassis to install the 2.5/3.5-inch Disk Drives and consists of two ENCs and two Power Supplies with a built-in cooling fan.

Figure 4-31 Drive Box (DB60)


Front View Rear View

LFF HDD

Power Supply with a


ENC built-in cooling fan
60 LFF HDDs can be installed. ENC and Power Supply take a duplex configuration.

11. Drive Box (DBF)


The Drive Box (DBF) is a chassis to install the Flash Module Drives (FMD), and consists of two ENCs
and two Power Supplies with a built-in cooling fan.

Figure 4-32 Drive Box (DBF)


Front View Rear View

ENC

Power Supply with a


FMD built-in cooling fan
12 FMDs can be installed. ENC and Power Supply take a duplex configuration.

THEORY04-06-290
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-300

12. Drive Box (DBN)


The Drive Box (DBN) is a chassis to install the 2.5-inch NVMe-interface Flash Drives, and consists of
two ENCs and two Power Supplies with a built-in cooling fan.

Figure 4-33 Drive Box (DBN)

Front View Rear View

Power Supply with a


SFF Drive ENC
built-in cooling fan

24 SFF Drives can be installed. ENC and Power Supply take a duplex configuration.

13. Channel Board Box (CHBB)


The Channel Board Box (CHBB) is a chassis to install the Channel Board (CHB), and consists of two
PCIe-cable Connecting Packages (PCP), two power supplies and two Switch Packages (SWPK).

Figure 4-34 Channel Board Box (CHBB)

Front View Rear View

PCP
SWPK CHB
CHBBPS
8 CHBs can be installed.

THEORY04-06-300
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-310

14. Disk Drive, Flash Drive and Flash Module Drive


The Disk Drives, Flash Drives and Flash Module Drives supported by DW850 are shown below.

Table 4-46 Disk Drive, Flash Drive and Flash Module Drive Support Type
Revolution
Maximum
Speed (min-1)
Transfer
Group I/F Size (inch) or Capacity
Rate
Flash Memory
(Gbps)
Type
Disk Drive (HDD) SAS 2.5 (SFF) 6 10,000 600 GB, 1.2 TB
12 10,000 600 GB, 1.2 TB, 2.4 TB
SAS 3.5 (LFF) 6 10,000 1.2 TB
12 10,000 1.2 TB, 2.4 TB
12 7,200 6 TB, 10 TB, 14 TB
Flash Drive SAS 2.5 (SFF) 12 MLC/TLC 480 GB, 960 GB, 1.9 TB, 3.8 TB,
(SAS SSD) 7.6 TB, 15 TB, 30 TB
Flash Module Drive SAS ̶ 12 MLC 3.5 TB
(FMD) MLC/TLC 7 TB, 14 TB
Flash Drive NVMe 2.5 (SFF) 8 TLC 1.9 TB, 3.8 TB, 7.6 TB, 15 TB
(NVMe SSD)

THEORY04-06-310
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-320

Table 4-47 LFF Disk Drive Specifications


Item DKC-F810I-1R2J7MC DKC-F810I-2R4J8M
Disk Drive Seagate DKS5H-J1R2SS/ DKS5K-J2R4SS
Model Name DKS5K-J1R2SS
HGST DKR5E-J1R2SS/ ̶
DKR5G-J1R2SS
User Capacity 1152.79 GB 2305.58 GB
Number of heads DKS5H : 6 8
DKS5K : 4
DKR5E : 8
DKR5G : 6
Number of Disks DKS5H : 3 4
DKS5K : 2
DKR5E : 4
DKR5G : 3
Seek Time (ms) Average DKS5H : 4.4/4.8 4.4/4.8
(Read/Write) (*1) DKS5K : 4.2/4.6
DKR5E : 4.6/5.0
DKR5G : 3.5/4.2
Average latency time (ms) DKS5H : 2.9 2.9
DKS5K : 2.9
DKR5E : 3.0
DKR5G : 2.85
Revolution speed (min-1) 10,000 10,000
Data transfer rate (Gbps) DKS5H : 12 12
DKS5K : 12
DKR5E : 6
DKR5G : 12
Internal data transfer rate DKS5H : - ̶
(MB/s) DKS5K : -
DKR5E : Max. 279
DKR5G : Max. 357.4
(To be continued)
*1: The Controller Board overhead is excluded.

THEORY04-06-320
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-330

(Continued from preceding page)


DKC-F810I-6R0H9M/ DKC-F810I-10RH9M/ DKC-F810I-14RH9M/
Item
DKC-F810I-6R0HLM DKC-F810I-10RHLM DKC-F810I-14RHLM
Disk Drive Seagate DKS2F-H6R0SS/ DKS2J-H10RSS/ DKS2K-H14RSS/
Model Name DKS2H-H6R0SS/ DKS2K-H10RSS/ DKS2N-H14RSS
DKS2M-H6R0SS DKS2N-H10RSS
HGST DKR2G-H6R0SS DKR2H-H10RSS ̶
User Capacity 5874.22 GB 9790.36 GB 13706.50 GB
Number of heads DKS2F : 12 DKR2H : 14 DKS2K : 16
DKS2H : 10 DKS2J : 14 DKS2N : 18
DKS2M : 8 DKS2K : 13
DKR2G : 10 DKS2N : 16
Number of Disks DKS2F : 6 DKR2H : 7 DKS2K : 8
DKS2H : 5 DKS2J : 7 DKS2N : 9
DKS2M : 4 DKS2K : 8
DKR2G : 5 DKS2N : 9
Seek Time (ms) Average DKS2F : 8.5/9.5 DKR2H : 8.0/8.6 8.5/9.5
(Read/Write) (*1) DKS2H : 8.5/9.5 DKS2J : 8.5/9.5
DKS2M : 8.5/9.5 DKS2K : 8.5/9.5
DKR2G : 7.6/8.0 DKS2N : 8.5/9.5
Average latency time (ms) DKS2F : 4.16 4.16 4.16
DKS2H : 4.16
DKS2M : 4.16
DKR2G : 4.2
Revolution speed (min-1) 7,200 7,200 7,200
Data transfer rate (Gbps) 12 12 12
Internal data transfer rate DKS2F : Max. 226 DKR2H : Max. 277.75 DKS2K : Max 354.1
(MB/s) DKS2H : Max. 304 DKS2J : Max. 266.2 DKS2N : Max 346.5
DKS2M : Max. 356 DKS2K : Max. 354.1
DKR2G : Max. 270 DKS2N : Max 346.5
*1: The Controller Board overhead is excluded.

THEORY04-06-330
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-340

Table 4-48 SFF Disk Drive Specifications


Item DKC-F810I-600JCMC DKC-F810I-1R2JCMC
Disk Drive Seagate DKS5E-J600SS/ DKS5F-J1R2SS/
Model Name DKS5H-J600SS/ DKS5H-J1R2SS/
DKS5K-J600SS DKS5K-J1R2SS
HGST DKR5D-J600SS/ DKR5E-J1R2SS/
DKR5G-J600SS DKR5G-J1R2SS
User Capacity 576.39 GB 1152.79 GB
Number of heads DKS5E : 4 DKS5F : 8
DKS5H : 3 DKS5H : 6
DKS5K : 2 DKS5K : 4
DKR5D : 4 DKR5E : 8
DKR5G : 3 DKR5G : 6
Number of Disks DKS5E : 2 DKS5F : 4
DKS5H : 2 DKS5H : 3
DKS5K : 1 DKS5K : 2
DKR5D : 2 DKR5E : 4
DKR5G : 2 DKR5G : 3
Seek Time (ms) Average DKS5E : 3.6/4.1 DKS5F : 3.7/4.3
(Read/Write) (*1) DKS5H : 4.2/4.6 DKS5H : 4.4/4.8
DKS5K : 4.2/4.6 DKS5K : 4.2/4.6
DKR5D : 3.8/4.2 DKR5E : 4.6/5.0
DKR5G : 3.3/3.8 DKR5G : 3.5/4.2
Average latency time (ms) DKS5E : 2.9 DKS5F : 3.0
DKS5H : 2.9 DKS5H : 2.9
DKS5K : 2.9 DKS5K : 2.9
DKR5D : 3.0 DKR5E : 3.0
DKR5G : 2.85 DKR5G : 2.85
Revolution speed (min-1) 10,000 10,000
Interface data transfer rate DKS5E : 6 DKS5F : 6
(Gbps) DKS5H : 12 DKS5H : 12
DKS5K : 12 DKS5K : 12
DKR5D : 6 DKR5E : 6
DKR5G : 12 DKR5G : 12
Internal data transfer rate DKS5E : Max. 293.8 DKS5F : Max. 293.8
(MB/s) DKS5H : - DKS5H : -
DKS5K : - DKS5K : -
DKR5D : Max. 279 DKR5E : Max. 279
DKR5G : Max. 357.4 DKR5G : Max. 357.4
(To be continued)
*1: The Controller Board overhead is excluded.

THEORY04-06-340
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-350

(Continued from preceding page)


Item DKC-F810I-2R4JGM
Disk Drive Seagate DKS5K-J2R4SS
Model Name HGST ̶
User Capacity 2305.58 GB
Number of heads 8
Number of Disks 4
Seek Time (ms) Average 4.4/4.8
(Read/Write) (*1)
Average latency time (ms) 2.9

Revolution speed (min-1) 10,000


Interface data transfer rate 12
(Gbps)
Internal data transfer rate ̶
(MB/s)
*1: The Controller Board overhead is excluded.

THEORY04-06-350
Hitachi Proprietary DW850
Rev.7.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-360

Table 4-49 SFF Flash Drive (SAS SSD) Specifications


Item DKC-F810I-480MGM DKC-F810I-960MGM DKC-F810I-1R9MGM
Flash Drive Toshiba SLB5F-M480SD/ SLB5F-M960SD/ SLB5E-M1R9SD/
Model Name SLB5G-M480SD SLB5G-M960SS SLB5G-M1R9SS
HGST ̶ ̶ ̶
Form Factor 2.5 inch 2.5 inch 2.5 inch
User Capacity 472.61 GB 945.23 GB 1890.46 GB
Flash memory type SLB5F: MLC SLB5F: MLC SLB5E: MLC
SLB5G: TLC SLB5G: TLC SLB5G: TLC
Interface data transfer rate 12 12 12
(Gbps)

Item DKC-F810I-1T9MGM DKC-F810I-3R8MGM DKC-F810I-7R6MGM


Flash Drive Toshiba SLB5I-M1T9SS SLB5F-M3R8SS/ SLB5G-M7R6SS
Model Name SLB5G-M3R8SS
HGST ̶ SLR5E-M3R8SS/ SLR5E-M7R6SS/
SLR5F-M3R8SS SLR5F-M7R6SS
Samsung SLM5B-M1T9SS SLM5A-M3R8SS/ SLM5A-M7R6SS/
SLM5B-M3R8SS SLM5B-M7R6SS
Form Factor 2.5 inch 2.5 inch 2.5 inch
User Capacity 1890.46 GB 3780.92 GB 7561.85 GB
Flash memory type TLC SLB5F: MLC TLC
SLB5G: TLC
SLR5E: TLC
SLR5F: TLC
SLM5A: TLC
SLM5B: TLC
Interface data transfer rate 12 12 12
(Gbps)

Item DKC-F810I-15RMGM DKC-F810I-30RMGM


Flash Drive Toshiba SLB5H-M15RSS ̶
Model Name HGST SLR5G-M15RSS ̶
Samsung SLM5B-M15RSS SLM5A-M30RSS/
SLM5B-M30RSS
Form Factor 2.5 inch 2.5 inch
User Capacity 15048 GB 30095 GB
Flash memory type TLC TLC
Interface data transfer rate 12 12
(Gbps)

THEORY04-06-360
Hitachi Proprietary DW850
Rev.5.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-370

Table 4-50 Flash Module Drive Specifications


Item DKC-F810I-3R2FN DKC-F810I-7R0FP DKC-F810I-14RFP
Flash Module Drive Model NFHAE-Q3R2SS NFHAF-Q6R4SS/ NFHAF-Q13RSS/
Name NFHAH-Q6R4SS/ NFHAH-Q13RSS/
NFHAJ-Q6R4SS/ NFHAJ-Q13RSS/
NFHAK-Q6R4SS/ NFHAK-Q13RSS/
NFHAL-Q6R4SS/ NFHAM-Q13RSS
NFHAM-Q6R4SS
Form Factor ̶ ̶ ̶
User Capacity 3518.43 GB 7036.87 GB 14073.74 GB
Flash memory type MLC NFHAF: MLC NFHAF: MLC
NFHAH: TLC NFHAH: TLC
NFHAJ: TLC NFHAJ: TLC
NFHAK: TLC NFHAK: TLC
NFHAL: MLC NFHAM: TLC
NFHAM: TLC
Interface data transfer rate 12 12 12
(Gbps)

THEORY04-06-370
Hitachi Proprietary DW850
Rev.8 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-06-380

Table 4-51 SFF Flash Drive (NVMe SSD) Specifications


Item DKC-F910I-1R9RVM DKC-F910I-3R8RVM DKC-F910I-7R6RVM
Flash Drive Model Name SNR5A-R1R9NC/ SNR5A-R3R8NC/ SNR5A-R7R6NC/
SNB5A-R1R9NC/ SNB5A-R3R8NC/ SNB5A-R7R6NC/
SNB5B-R1R9NC/ SNB5B-R3R8NC/ SNB5B-R7R6NC/
SNM5A-R1R9NC SNM5A-R3R8NC SNM5A-R7R6NC
Form Factor 2.5 inch 2.5 inch 2.5 inch
User Capacity 1890.46 GB 3780.92 GB 7561.85 GB
Flash memory type TLC TLC TLC
Interface data transfer rate 8 8 8
(Gbps)

Item DKC-F910I-15RRVM
Flash Drive Model Name SNB5A-R15RNC/
SNB5B-R15RNC/
SNN5A-R15RNC/
SNM5A-R15RNC
Form Factor 2.5 inch
User Capacity 15048.49 GB
Flash memory type TLC
Interface data transfer rate 8
(Gbps)

THEORY04-06-380
Hitachi Proprietary DW850
Rev.2 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-07-10

4.7 Mounted Numbers of Drive Box and the Maximum Mountable Number of Drive
Table 4-52 Mounted numbers of Drive Box and the maximum mountable number of drive
(VSP G130)
Number of mounted Drive Box (*1) Maximum mountable number of drives (*2)
Model name
DBS DBL DBS+DBL
VSP G130 3 0 96
(CBXSS) 2 2 96
1 4 96
0 6 96
VSP G130 3 1 96
(CBXSL) 2 3 96
1 5 96
0 7 96
*1: The maximum number of boxes that can be installed per PATH
VSP G130 : 7
*2: VSP G130 includes the drive to be installed in Controller Chassis.

Table 4-53 Mounted numbers of Drive Box and the maximum mountable number of drive
(VSP G350, G370, G700, G900)
Number of mounted Drive Box (*1) Maximum mountable number of drives (*2)
Model name
DBS/DBL/DBF DB60 DBS+DB60 DBL/DBF+DB60
VSP G350 7 0 192 108
(CBSS1/ 5 1 204 144
CBSS1E) (*3) 3 2 216 180
1 3 228 216
0 4 264 264
VSP G350 7 0 180 96
(CBSL1/ 5 1 192 132
CBSL1E) (*3) 3 2 204 168
1 3 216 204
0 4 252 252
VSP G370 11 0 288 156
(CBSS2/ 9 1 300 192
CBSS2E) (*3) 7 2 312 228
5 3 324 264
3 4 336 300
1 5 348 336
0 6 384 384
VSP G370 11 0 276 144
(CBSL2/ 9 1 288 180
CBSL2E) (*3) 7 2 300 216
5 3 312 252
3 4 324 288
1 5 336 324
0 6 372 372
(To be continued)

THEORY04-07-10
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-07-20

(Continued from preceding page)


Number of mounted Drive Box (*1) Maximum mountable number of drives (*2)
Model name
DBS/DBL/DBF DB60 DBS+DB60 DBL/DBF+DB60
VSP G700 36 0 864 432
33 1 852 456
32 2 888 504
29 3 876 528
28 4 912 576
25 5 900 600
24 6 936 648
21 7 924 672
20 8 960 720
17 9 948 744
16 10 984 792
13 11 972 816
12 12 1,008 864
9 13 996 888
8 14 1,032 936
5 15 1,020 960
4 16 1,056 1,008
1 17 1,044 1,032
0 18 1,080 1,080
0 19 1,140 1,140
0 20 1,200 1,200
(To be continued)

THEORY04-07-20
Hitachi Proprietary DW850
Rev.6 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-07-30

(Continued from preceding page)


Number of mounted Drive Box (*1) Maximum mountable number of drives (*2)
Model name
DBS/DBL/DBF DB60 DBS+DB60 DBL/DBF+DB60
VSP G900 48 0 1,152 576
45 1 1,140 600
44 2 1,176 648
41 3 1,164 672
40 4 1,200 720
37 5 1,188 744
36 6 1,224 792
33 7 1,212 816
32 8 1,248 864
29 9 1,236 888
28 10 1,272 936
25 11 1,260 960
24 12 1,296 1,008
21 13 1,284 1,032
20 14 1,320 1,080
17 15 1,308 1,104
16 16 1,344 1,152
13 17 1,332 1,176
12 18 1,368 1,224
9 19 1,356 1,248
8 20 1,392 1,296
5 21 1,380 1,320
4 22 1,416 1,368
1 23 1,404 1,392
0 24 1,440 1,440
*1: The maximum number of boxes that can be installed per PATH
VSP G350 : 7
VSP G370 : 11
VSP G700 : 12
VSP G900 : 6
*2: VSP G350, G370 includes the drive to be installed in Controller Chassis.
*3: The DBF cannot be connected.
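The maximum mountable drive counts in Tables 4-52 and 4-53 can be reproduced as the drives built into the Controller Chassis plus the drives per connected Drive Box. The sketch below is an illustrative check of a few rows only; the internal drive counts are inferred from the table rows themselves.

    # Worked check of a few rows of Tables 4-52 and 4-53 (illustrative only).

    INTERNAL_DRIVES = {          # drives installable in the Controller Chassis itself
        "VSP G130 (CBXSS)": 24, "VSP G130 (CBXSL)": 12,
        "VSP G350 (CBSS1)": 24, "VSP G350 (CBSL1)": 12,
        "VSP G370 (CBSS2)": 24, "VSP G370 (CBSL2)": 12,
        "VSP G700": 0, "VSP G900": 0,
    }
    DRIVES_PER_BOX = {"DBS": 24, "DBL": 12, "DBF": 12, "DB60": 60}

    def max_drives(model, boxes):
        return INTERNAL_DRIVES[model] + sum(
            DRIVES_PER_BOX[box] * count for box, count in boxes.items())

    print(max_drives("VSP G350 (CBSS1)", {"DBS": 7}))              # 192
    print(max_drives("VSP G350 (CBSS1)", {"DBS": 5, "DB60": 1}))   # 204
    print(max_drives("VSP G700", {"DBS": 36}))                     # 864
    print(max_drives("VSP G900", {"DB60": 24}))                    # 1440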

Table 4-54 Mounted numbers of Drive Box and the maximum mountable number of drive
(VSP E990 models)
Number of mounted Drive Box (*1) Maximum mountable number of drives
Model name
DBN DBN (SSD)
VSP E990 4 96
*1: The maximum number of boxes that can be installed per PATH
VSP E990 : 1

THEORY04-07-30
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-07-40

Table 4-55 Mounted numbers of Drive Box and the maximum mountable number of drive
(VSP F350, F370, F700, F900 models)
Number of mounted Drive Box (*1) Maximum mountable number of drives
Model name
DBS DBF DBS (SSD) DBF (FMD)
VSP F350 7 ̶ 192 ̶
VSP F370 11 ̶ 288 ̶
VSP F700 36 ̶ 864 ̶
̶ 36 ̶ 432
VSP F900 48 ̶ 1,152 ̶
̶ 48 ̶ 576
*1: The maximum number of boxes that can be installed per PATH
VSP F350 : 7
VSP F370 : 11
VSP F700 : 12
VSP F900 : 6

THEORY04-07-40
Hitachi Proprietary DW850
Rev.7 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-08-10

4.8 Storage System Physical Specifications


Table 4-56 Storage System Physical Specifications
# | Model Number | Weight (kg) | Heat Output (W) | Power Consumption (VA) (*1) | Dimensions (mm): Width / Depth / Height | Air Flow (m3/min)
1 DW850-CBL (VSP E990) 55.2 580 594 483.0 808.1 174.3 6.0
2 DW850-CBL (VSP G900) 55.2 453 493 483.0 808.1 174.3 6.0
3 DW850-CBL (VSP G700) 55.2 338 363 483.0 808.1 174.3 6.0
4 DW800-CBSS (VSP G370) 31.9 218 226 483.0 813.0 88.0 4.0
5 DW800-CBSS (VSP G350) 31.9 218 226 483.0 813.0 88.0 4.0
6 DW800-CBSS (VSP G130) 31.9 196 200 483.0 813.0 88.0 2.3
7 DW800-CBSL (VSP G370) 31.7 192 200 483.0 813.0 88.0 3.5
8 DW800-CBSL (VSP G350) 31.7 192 200 483.0 813.0 88.0 3.5
9 DW800-CBSL (VSP G130) 31.7 191 195 483.0 813.0 88.0 1.6
10 DW-F800-DBSC/ 17.0 116 126 482.0 565.0 88.2 2.2
DW-F800-DBSE
11 DW-F800-DBLC/ 17.4 124 144 482.0 565.0 88.2 2.2
DW-F800-DBLE
12 DW-F800-DB60C 36.0 184 191 482.0 1029.0 176.0 5.1
13 DW-F850-DBF 19.3 120 130 483.0 762.0 87.0 1.6
14 DW-F850-DBN 15.5 171 180 482.0 455.0 86.0 4.1
15 DW-F800-CHBB 33.2 222 230 483.0 891.7 88.0 2.0
16 DW-F800-SCQ1 0.2 ̶ ̶ ̶ ̶ ̶ ̶
17 DW-F800-SCQ1F 0.2 ̶ ̶ ̶ ̶ ̶ ̶
18 DW-F800-SCQ3 0.45 ̶ ̶ ̶ ̶ ̶ ̶
19 DW-F800-SCQ5 0.6 ̶ ̶ ̶ ̶ ̶ ̶
20 DW-F800-SCQ10A 0.2 ̶ ̶ ̶ ̶ ̶ ̶
21 DW-F800-SCQ30A 0.4 ̶ ̶ ̶ ̶ ̶ ̶
22 DW-F800-SCQ1HA 1.0 ̶ ̶ ̶ ̶ ̶ ̶
23 DW-F850-NMC1F 0.15 ̶ ̶ ̶ ̶ ̶ ̶
24 DW-F800-BS12G 0.5 16 17.2 ̶ ̶ ̶ ̶
25 DW-F800-BS12GE 0.5 16 17.2 ̶ ̶ ̶ ̶
26 DKC-F910I-BN8G 0.5 16.2 17.1 ̶ ̶ ̶ ̶
27 DKC-F910I-BN8GE 0.6 16.2 17.1 ̶ ̶ ̶ ̶
28 DW-F850-CM16G 0.022 4 4.2 ̶ ̶ ̶ ̶
29 DW-F850-CM32G 0.054 4 4.2 ̶ ̶ ̶ ̶
30 DW-F850-CM64G 0.054 4.8 5.0 ̶ ̶ ̶ ̶
31 DW-F850-CM64GL 0.054 4.8 5.0 ̶ ̶ ̶ ̶
32 DW-F850-BM15 0.15 5 5.2 ̶ ̶ ̶ ̶
33 DW-F850-BM35 0.2 5 5.2 ̶ ̶ ̶ ̶
34 DW-F850-BM45 0.2 6.5 6.8 ̶ ̶ ̶ ̶
35 DW-F850-BM55 0.2 5 5.2 ̶ ̶ ̶ ̶
(To be continued)
THEORY04-08-10
Hitachi Proprietary DW850
Rev.7.1 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY04-08-20

(Continued from preceding page)


# | Model Number | Weight (kg) | Heat Output (W) | Power Consumption (VA) (*1) | Dimensions (mm): Width / Depth / Height | Air Flow (m3/min)
36 DW-F850-BM5E 0.2 5 5.2 ̶ ̶ ̶ ̶
37 DW-F850-BM65 0.2 6.5 6.8 ̶ ̶ ̶ ̶
38 DW-F850-BM6E 0.2 6.5 6.8 ̶ ̶ ̶ ̶
39 DW-F800-BAT 0.6 24.4 25.7 ̶ ̶ ̶ ̶
40 DW-F800-4HF32R 0.5 17.9 19.9 ̶ ̶ ̶ ̶
41 DW-F800-2HS10S 0.5 18.0 18.9 ̶ ̶ ̶ ̶
42 DW-F800-2HS10B 0.5 28.5 30.0 ̶ ̶ ̶ ̶
43 DKC-F810I-1PL16 0.02 0.79 0.88 ̶ ̶ ̶ ̶
44 DKC-F810I-1PS16 0.02 0.94 1.05 ̶ ̶ ̶ ̶
45 DKC-F810I-1PS32 0.02 1.29 1.43 ̶ ̶ ̶ ̶
46 DKC-F810I-600JCMC 0.3 8.0 8.4 ̶ ̶ ̶ ̶
47 DKC-F810I-1R2JCMC 0.3 8.3 8.7 ̶ ̶ ̶ ̶
48 DKC-F810I-1R2J7MC 0.4 8.3 8.7 ̶ ̶ ̶ ̶
49 DKC-F810I-2R4JGM 0.3 9.0 9.4 ̶ ̶ ̶ ̶
50 DKC-F810I-2R4J8M 0.4 9.0 9.4 ̶ ̶ ̶ ̶
51 DKC-F810I-6R0H9M 0.85 12.9 13.5 ̶ ̶ ̶ ̶
52 DKC-F810I-6R0HLM 0.96 12.9 13.5 ̶ ̶ ̶ ̶
53 DKC-F810I-10RH9M 0.73 12.9 13.5 ̶ ̶ ̶ ̶
54 DKC-F810I-10RHLM 0.84 12.9 13.5 ̶ ̶ ̶ ̶
55 DKC-F810I-14RH9M 0.77 12.9 13.5 ̶ ̶ ̶ ̶
56 DKC-F810I-14RHLM 0.88 12.9 13.5 ̶ ̶ ̶ ̶
57 DKC-F810I-480MGM 0.23 6.7 7.0 ̶ ̶ ̶ ̶
58 DKC-F810I-960MGM 0.23 6.7 7.0 ̶ ̶ ̶ ̶
59 DKC-F810I-1R9MGM 0.23 6.7 7.0 ̶ ̶ ̶ ̶
60 DKC-F810I-1T9MGM 0.23 6.7 7.0 ̶ ̶ ̶ ̶
61 DKC-F810I-3R8MGM 0.23 6.7 7.0 ̶ ̶ ̶ ̶
62 DKC-F810I-7R6MGM 0.23 7.9 8.3 ̶ ̶ ̶ ̶
63 DKC-F810I-15RMGM 0.23 7.9 8.3 ̶ ̶ ̶ ̶
64 DKC-F810I-30RMGM 0.23 7.9 8.3 ̶ ̶ ̶ ̶
65 DKC-F810I-3R2FN 1.4 25.0 26.0 ̶ ̶ ̶ ̶
66 DKC-F810I-7R0FP 1.4 25.0 26.0 ̶ ̶ ̶ ̶
67 DKC-F810I-14RFP 1.4 25.0 26.0 ̶ ̶ ̶ ̶
68 DKC-F910I-1R9RVM 0.22 19 20 ̶ ̶ ̶ ̶
69 DKC-F910I-3R8RVM 0.22 19 20 ̶ ̶ ̶ ̶
70 DKC-F910I-7R6RVM 0.22 19 20 ̶ ̶ ̶ ̶
71 DKC-F910I-15RRVM 0.22 19 20 ̶ ̶ ̶ ̶
*1: Actual values under a typical I/O condition (random read and write, 50 IOPS for HDD, 2,500
IOPS for SSD, data length: 8 kbytes, all fans rotating at normal speed). These values may increase for
future compatible drives.


4.8.1 Environmental Specifications


The environmental specifications are shown in the following table.

1. Environmental Conditions

Table 4-57 Usage Environment Conditions

Condition: Operating (*1) (*5)
 Item                               CBL/CBSS2/CBSL2/CBSS1/   DBS/DBL          DBF              DB60/DBN
                                    CBSL1/CBXSS/CBXSL/CHBB
 Temperature range (ºC)             10 to 40                 10 to 40         10 to 40         10 to 35
 Relative humidity (%) (*4)         8 to 80                  8 to 80          8 to 80          8 to 80
 Maximum wet-bulb temperature (ºC)  29                       29               29               29
 Temperature gradient (ºC/hour)     10                       10               10               10
 Dust (mg/m3)                       0.15 or less             0.15 or less     0.15 or less     0.15 or less
 Gaseous contaminants (*7)          G1 classification levels
 Altitude (m)                       (*8) ~3,050              (*8) ~3,050      (*8) ~3,050      (*9) ~3,050
 (Ambient temperature)              (10 ºC ~ 28 ºC)          (10 ºC ~ 28 ºC)  (10 ºC ~ 28 ºC)  (10 ºC ~ 28 ºC)
                                    ~950                     ~950             ~950             ~950
                                    (10 ºC ~ 40 ºC)          (10 ºC ~ 40 ºC)  (10 ºC ~ 40 ºC)  (10 ºC ~ 35 ºC)
 Noise Level (Recommended)          90 dB or less (*6)

Condition: Non-Operating (*2)
 Item                               CBL/CBSS2/CBSL2/CBSS1/   DBS/DBL          DBF              DB60/DBN
                                    CBSL1/CBXSS/CBXSL/CHBB
 Temperature range (ºC)             -10 to 50                -10 to 50        -10 to 50        -10 to 50
 Relative humidity (%) (*4)         8 to 90                  8 to 90          8 to 90          8 to 90
 Maximum wet-bulb temperature (ºC)  29                       29               29               29
 Temperature gradient (ºC/hour)     10                       10               10               10
 Dust (mg/m3)                       —                        —                —                —
 Gaseous contaminants (*7)          G1 classification levels
 Altitude (m)                       -60 to 12,000            -60 to 12,000    -60 to 12,000    -60 to 12,000


Condition: Transportation, Storage (*3)
 Item                               CBL/CBSS2/CBSL2/CBSS1/   DBS/DBL          DBF              DB60/DBN
                                    CBSL1/CBXSS/CBXSL/CHBB
 Temperature range (ºC)             -30 to 60                -30 to 60        -30 to 60        -30 to 60
 Relative humidity (%) (*4)         5 to 95                  5 to 95          5 to 95          5 to 95
 Maximum wet-bulb temperature (ºC)  29                       29               29               29
 Temperature gradient (ºC/hour)     10                       10               10               10
 Dust (mg/m3)                       —                        —                —                —
 Gaseous contaminants (*7)          —
 Altitude (m)                       -60 to 12,000            -60 to 12,000    -60 to 12,000    -60 to 12,000
*1: Storage system which is ready for being powered on
*2: Including packed and unpacked storage systems
*3: Storage system packed for shipping
*4: No dew condensation is allowed.
*5: The system monitors the intake temperature and the internal temperatures of the Controller and the
Power Supply, and executes the operations described in (1) to (6) below in accordance with the
measured temperatures.
*6: Fire suppression systems and acoustic noise:
Some data center inert gas fire suppression systems, when activated, release gas from pressurized
cylinders; the gas moves through the pipes at very high velocity and exits through multiple
nozzles in the data center. The release through the nozzles could generate high-level acoustic
noise. Similarly, pneumatic sirens could also generate high-level acoustic noise. These acoustic
noises may cause vibrations to the hard disk drives in the storage systems, resulting in I/O
errors, performance degradation, and, to some extent, damage to the hard disk drives. Hard
disk drive (HDD) noise level tolerance may vary among different models, designs, capacities,
and manufacturers. The acoustic noise level of 90 dB or less in the operating environment table
represents the current operating environment guidelines in which Hitachi storage systems are
designed and manufactured for reliable operation when placed 2 meters from the source of the
noise.
Hitachi does not test storage systems and hard disk drives for compatibility with fire suppression
systems and pneumatic sirens. Hitachi also does not provide recommendations or claim
compatibility with any fire suppression systems and pneumatic sirens. Customers are responsible for
following their local or national regulations.
To prevent unnecessary I/O errors or damage to the hard disk drives in the storage systems, Hitachi
recommends the following options:
(1) Install noise-reducing baffles to mitigate the noise to the hard disk drives in the storage
systems.


(2) Consult the fire suppression system manufacturers on noise reduction nozzles to reduce the
acoustic noise to protect the hard disk drives in the storage systems.
(3) Locate the storage system as far as possible from noise sources such as emergency sirens.
(4) If it can be safely done without risk of personal injury, shut down the storage systems to avoid
data loss and damage to the hard disk drives in the storage systems.
DAMAGE TO HARD DISK DRIVES FROM FIRE SUPPRESSION SYSTEMS OR
PNEUMATIC SIRENS WILL VOID THE HARD DISK DRIVE WARRANTY.
*7: See ANSI/ISA-71.04-2013 Environmental Conditions for Process Measurement and Control
Systems: Airborne Contaminants.
*8: Meets the highest allowable temperature conditions and complies with ASHRAE (American
Society of Heating, Refrigerating and Air-Conditioning Engineers) 2011 Thermal Guidelines Class
A3. The maximum value of the ambient temperature and the altitude is from 40 degrees C at an
altitude of 950 meters (3000 feet) to 28 degrees C at an altitude of 3050 meters (10000 feet).
The allowable ambient temperature is decreased by 1 degree C for every 175-meter increase in
altitude above 950 meters.
*9: Meets the highest allowable temperature conditions and complies with ASHRAE (American
Society of Heating, Refrigerating and Air-Conditioning Engineers) 2011 Thermal Guidelines Class
A2. The maximum value of the ambient temperature and the altitude is from 35 degrees C at an
altitude of 950 meters (3000 feet) to 28 degrees C at an altitude of 3050 meters (10000 feet).
The allowable ambient temperature is decreased by 1 degree C for every 300-meter increase in
altitude above 950 meters.
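
The altitude derating rules in (*8) and (*9) can be expressed as a simple formula. The following is a minimal sketch (not taken from the manual) that computes the maximum allowable ambient temperature at a given altitude under the stated Class A3 and Class A2 rules.

    def max_ambient_temp_c(altitude_m, base_temp_c, derate_step_m):
        """Maximum allowable ambient temperature per the derating rules in (*8)/(*9):
        the base temperature applies up to 950 m, then decreases by 1 degree C for
        every derate_step_m of additional altitude, up to 3,050 m."""
        if altitude_m <= 950:
            return base_temp_c
        if altitude_m > 3050:
            raise ValueError("above the supported altitude range")
        return base_temp_c - (altitude_m - 950) / derate_step_m

    # Class A3 chassis (*8): 40 degrees C at 950 m, minus 1 degree C per 175 m
    print(max_ambient_temp_c(3050, 40, 175))  # -> 28.0
    # Class A2 chassis (DB60/DBN) (*9): 35 degrees C at 950 m, minus 1 degree C per 300 m
    print(max_ambient_temp_c(3050, 35, 300))  # -> 28.0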

(1) VSP G130, G350, G370


• If the use environment temperature rises to 43 degrees C or higher, the external temperature
warning (SIM-RC = af110x) is notified.
• If the use environment temperature rises to 58 degrees C or higher or the Controller Board
internal temperature rises to 96 degrees C or higher, the external temperature alarm (SIM-RC =
af120x) is notified.
If both Controller Boards are alarmed, the system executes the power-off processing (planned
power off) automatically.
• If the use environment temperature is 5 degrees C or lower, the external temperature warning
(SIM-RC = af110x) is notified.
• If the temperature of the CPU exceeds its operation guarantee value, the MP temperature
abnormality warning (SIM-RC = af100x) is notified.
• If the temperature of the Controller Board exceeds its operation guarantee value, the thermal
monitor warning (SIM-RC = af130x) is notified.
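
For illustration, the numeric thresholds in (1) can be summarized as a simple decision table. The following is a minimal sketch, not the actual firmware logic; it models only the thresholds with values stated above (the af100x and af130x conditions depend on operation guarantee values that are not listed here).

    def evaluate_temperatures(ambient_c, ctl_internal_c):
        """Sketch of the VSP G130/G350/G370 thresholds described in (1).
        Returns a list of (SIM-RC, meaning) tuples."""
        sims = []
        if ambient_c >= 58 or ctl_internal_c >= 96:
            # Alarm level: if both Controller Boards report this, a planned power-off follows.
            sims.append(("af120x", "external temperature alarm"))
        elif ambient_c >= 43:
            sims.append(("af110x", "external temperature warning (high)"))
        if ambient_c <= 5:
            sims.append(("af110x", "external temperature warning (low)"))
        return sims

    print(evaluate_temperatures(ambient_c=45, ctl_internal_c=70))   # warning only
    print(evaluate_temperatures(ambient_c=60, ctl_internal_c=70))   # alarm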


(2) VSP G700, G900, VSP E990


• If the use environment temperature rises to 43 degrees C or higher, the external temperature
warning (SIM-RC = af110x) is notified.
• If the use environment temperature rises to 50 degrees C or higher or the Controller Board
internal temperature rises to 69 degrees C or higher, the external temperature alarm (SIM-RC =
af120x) is notified.
If both Controller Boards are alarmed, the system executes the power-off processing (planned
power off) automatically.
• If the use environment temperature is 5 degrees C or lower, the external temperature warning
(SIM-RC = af110x) is notified.
• If the temperature of the CPU exceeds its operation guarantee value, the MP temperature
abnormality warning (SIM-RC = af100x) is notified.
• If the temperature of the Controller Board exceeds its operation guarantee value, the thermal
monitor warning (SIM-RC = af130x) is notified.

(3) DBS/DBL
• If the internal temperature of the Power Supply rises to 55 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 64.5 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.

(4) DBF
• If the internal temperature of the Power Supply rises to 62 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 78 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.

(5) DB60/DBN
• If the internal temperature of the Power Supply rises to 60 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 70 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.

(6) CHBB
• If the use environment temperature rises to 43 degrees C or higher, the CHBB temperature
warning (SIM-RC = af46xx) is notified.


2. Mechanical Environmental Conditions


It is recommended to install the storage system in a computer room (*3) in a data center or the like,
where the effects of train vibration and continuous vibration from air-conditioner outdoor units are
almost eliminated. Equipment for earthquake resistance or seismic isolation might be required at a
customer site so that the mechanical environmental conditions are met.

Table 4-58 Mechanical Environmental Conditions


 Item                                     In operating           In non-operating
 Guaranteed value to vibration (*1) (*2)  0.25 Grms, 5-500 Hz    0.6 Grms, 3-500 Hz
 Guaranteed value to impact (*2)          —                      5 G, 11 ms, half sine, three-axis direction,
                                                                 10 G, 6 ms, half sine, three-axis direction, and
                                                                 10 G, 11 ms, half sine, falling direction
*1: Vibration that is constantly applied to the storage system due to construction works and so on
*2: Guaranteed value for each chassis of the storage system. If vibration or impact exceeding the
specified value is imposed, the acceleration value to which the storage system is subjected needs
to be reduced to the specified value or lower by the equipment for earthquake resistance or seismic
isolation so that the storage system can operate continuously. For general 19-inch racks, the lateral
vibration amplitude tends to be larger at the upper installation locations. Therefore, it is recommended
to install the chassis in order from the bottom of the rack without leaving a vacant space. If the
rack frame and storage system are moved while the storage system is operating, the operation is
not guaranteed.
*3: The definition of computer room is as follows:
• A room where servers in which highly valuable information assets are stored operate
• A separate room. Not an area of a general office room.
• Security devices such as security cameras and burglar alarms are equipped according to
importance of information.
• A few designated doors with locks are used.
• To achieve stable operation 24 hours a day and 365 days a year, room temperature is optimized.
• To achieve stable operation 24 hours a day and 365 days a year, an emergency power system is
installed in case of a power outage.


4.9 Power Specifications


4.9.1 Storage System Current
DW850 input power specifications are shown below.
Model Rated Power
CBL (VSP G700, G900) 1600 VA
CBSS/CBSL 800 VA
DBS 480 VA
DBL 380 VA
DB60 1200 VA
DBF 520 VA
DBN 800 VA
CHBB 560 VA

DW850 input currents are shown below for each Power Supply.

Table 4-59 Input Power Specifications

(Input Power: Single phase, AC200V to AC240V)
                                 Input Current (Rating) (*1)               Inrush Current
 Item                            When one PS     When two PSs    Leakage   1st (0-p)   2nd (0-p)   1st (0-p) Time   Power Cord
                                 is operating    are operating   Current                           (-25%)           Plug Type
 DKC (VSP G700, G900) PS         8.0 A           4.0 A           1.75 mA   30 A        20 A        25 ms
 DKC (VSP G130, G350, G370) PS   4.0 A           2.0 A           1.75 mA   30 A        28 A        25 ms
 DBS/DBL PS                      2.4 A           1.2 A           1.75 mA   30 A        25 A        25 ms
 DB60 PS                         6.0 A           3.0 A           1.75 mA   45 A        35 A        25 ms
 DBF PS                          2.6 A           1.3 A           1.75 mA   20 A        15 A        80 ms
 DBN PS                          4.0 A           2.0 A           1.75 mA   24 A        18 A        25 ms
 CHBB PS                         4.0 A           2.0 A           1.75 mA   30 A        28 A        25 ms

(Input Power: Single phase, AC100V to AC120V)
                                 Input Current (Rating) (*1)               Inrush Current
 Item                            When one PS     When two PSs    Leakage   1st (0-p)   2nd (0-p)   1st (0-p) Time   Power Cord
                                 is operating    are operating   Current                           (-25%)           Plug Type
 DKC (VSP G130, G350, G370) PS   8.0 A           4.0 A           1.75 mA   30 A        28 A        25 ms
 DBS/DBL PS                      4.8 A           2.4 A           1.75 mA   30 A        25 A        25 ms
 DB60 PS                         −               −               −         −           −           −
 DBF PS                          5.2 A           2.6 A           1.75 mA   20 A        15 A        80 ms
*1: When two power supplies are operating, each power supply provides about half of the required
power for the storage system. When only one of the two power supplies is operating, the power
supply provides all the required power for the storage system. Therefore, provide a power source that
meets the rated input current for the case in which only one power supply is operating.
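
The rated-current values above are consistent with the rated power values listed in 4.9.1. The following is a rough cross-check sketch; the 200 V operating point is an assumption used only for the arithmetic, and only models whose rated power is listed in 4.9.1 are shown.

    # Hypothetical cross-check: rated input current when one PS carries the full load.
    # Assumes a 200 V input for the AC200V-to-AC240V group; VA values come from 4.9.1.
    rated_power_va = {
        "CBL (VSP G700, G900)": 1600,   # -> 1600 / 200 = 8.0 A (matches the DKC PS row)
        "DBS": 480,                     # -> 2.4 A
        "DB60": 1200,                   # -> 6.0 A
        "DBF": 520,                     # -> 2.6 A
        "DBN": 800,                     # -> 4.0 A
    }

    for model, va in rated_power_va.items():
        one_ps = va / 200       # one PS operating: it carries the full load
        two_ps = one_ps / 2     # two PSs operating: roughly half each (see *1)
        print(f"{model}: {one_ps:.1f} A (one PS) / {two_ps:.1f} A (two PSs)")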


Figure 4-35 Power Supply Locations

[Figure: power cord routing from each chassis to the PDUs. Each chassis has two power supplies, and each
 power supply is connected through its C14 inlet and a power cord (with plug) to a PDU; the two feeds are
 AC0 and AC1 (*1). Panels:
 1. Controller Chassis (VSP G700, G900 and VSP E990): DKCPS-1 and DKCPS-2 (CTL1/CTL2)
 2. Controller Chassis (VSP G130, G350, G370 (AC model)): DKC PS-1 and DKC PS-2 (CTL1/CTL2)
 3. Drive Box (DBS/DBL/DBF/DBN (AC model)): DBPS-1 and DBPS-2 (with ENCs)
 4. Drive Box (DB60): DBPS-1 and DBPS-2 (with ENCs)
 5. Channel Board Box (CHBB): CHBB PS1 and PS2 (SWPK1/SWPK2)]

*1: It is necessary to separate AC0 and AC1 for AC redundant.

4.9.2 Input Voltage and Frequency


The following shows the electric power system specifications for feeding to the Storage System.

1. Input Voltage and Frequency


The following shows the input voltage and frequency to be supported.

• CBLH1/CBLH2/DB60/DBN
  Input Voltage      Voltage Tolerance   Frequency     Wire Connection
  200V to 240V       +10% or -11%        50Hz ± 2Hz    1 Phase 2 Wire + Ground
                                         60Hz ± 2Hz

• CBXSS/CBXSL/CBSS1/CBSL1/CBSS2/CBSL2/DBS/DBL/DBF/CHBB
  Input Voltage (AC)   Voltage Tolerance   Frequency     Wire Connection
  100V to 120V/        +10% or -11%        50Hz ± 2Hz    1 Phase 2 Wire + Ground
  200V to 240V                             60Hz ± 2Hz
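
The tolerance of +10% or -11% translates into the following permissible voltage windows. A minimal sketch of the arithmetic (not from the manual):

    def voltage_window(nominal_min, nominal_max, plus_pct=10, minus_pct=11):
        """Permissible input-voltage window for a nominal range with +10%/-11% tolerance."""
        low = nominal_min * (1 - minus_pct / 100)
        high = nominal_max * (1 + plus_pct / 100)
        return round(low, 1), round(high, 1)

    print(voltage_window(200, 240))  # -> (178.0, 264.0) V for the 200V to 240V inputs
    print(voltage_window(100, 120))  # -> (89.0, 132.0) V for the 100V to 120V inputs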

2. PDU specifications
The two types of the PDU (Power Distribution Unit) are a vertical PDU mounted on a rack frame post
and a horizontal PDU of 1U size. Order the required number of PDUs together with the PDU AC cables
in accordance with the configuration of the device to be mounted on the rack frame.

Table 4-60 PDU Device Specifications


 Item                   Vertical Type                        Horizontal Type
 Model Name  PDU        A-F6516-PDU6                         A-F4933-PDU6
             AC cable   A-F6516-P620
 AC Input               AC200V
 PDU Output             6 outlets: a circuit breaker is available for every three outlets (10 rated amperes)
 Remarks                Two-set configuration per model name for both PDU/AC cable options

For information about the Hitachi Universal V2 rack used with HDS VSP storage systems, refer to the
Hitachi Universal V2 Rack Reference Guide, MK-97RK000-00.


Figure 4-36 PDU Specifications

[Figure: PDU and AC cable / PDU rack mounting image diagram.
 • A-F4933-PDU6 (horizontal type): occupies 1U per PDU; two groups of 3 outlets/8A each. Connect the
   devices so that the total for each group of three outlets does not exceed 8A.
 • A-F6516-PDU6 (vertical type): a maximum of three sets can be mounted on the post of the RKU rack.
 • A-F6516-P620: AC cable, length 4.5 m.]

When using AC100V, request the customer to prepare the PDU.

The following shows the specifications of the PDU power cords and connectors.
The available cable lengths of the PDU power cords differ according to the installation location of the
PDU.
 PDU Location   Available Cable   Plug                                             Receptacle
                Length (*1)       Rating   Manufacturer              Parts No.     Manufacturer   Parts No.
 Upper PDU      2.7 m             20A      AMERICAN DENKI CO.,LTD.   L6-20P        —              L6-20R (*2)
 Mid PDU        3.2 m
 Lower PDU      3.7 m
*1 : This is a length outside the rack chassis.
*2 : When the receptacle is L6-30R, select P630 as an option.
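
When assigning devices to PDU outlets, the per-group limit shown in Figure 4-36 (a total of no more than 8 A per three-outlet group) can be checked against the rated input currents in Table 4-59. The following is a minimal sketch of such a check; the device-to-outlet assignments are hypothetical examples, not recommended layouts.

    # Hypothetical check that each three-outlet PDU group stays within the 8 A guideline
    # from Figure 4-36. Currents are the one-PS-operating ratings from Table 4-59 (AC200V).
    GROUP_LIMIT_A = 8.0

    groups = {
        "group A": [2.4, 2.4, 2.4],   # three DBS/DBL PSs -> 7.2 A, within the limit
        "group B": [4.0, 2.4, 2.6],   # DKC PS + DBS/DBL PS + DBF PS -> 9.0 A, over the limit
    }

    for name, currents in groups.items():
        total = sum(currents)
        verdict = "OK" if total <= GROUP_LIMIT_A else "exceeds the 8 A guideline"
        print(f"{name}: {total:.1f} A -> {verdict}")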


4.9.3 Efficiency and Power Factor of Power Supplies


The efficiency and power factor of the power supplies installed in Controller Chassis and Drive Boxes are
shown in the following table:

Table 4-61 Efficiency and Power Factor of Power Supplies


                         Efficiency                                                       Power factor
 Model name              Load rate 10%   Load rate 20%   Load rate 50%   Load rate 100%   Load rate 50%
 CBSS/CBSL/CHBB/CBXS     82 %            91 %            93 %            92 %             0.95
 CBLH                    90 %            94 %            95 %            93 %             0.95
 DBS/DBL                 87 %            92 %            95 %            94 %             0.95
 DB60                    83 %            90 %            92 %            89 %             0.95
 DBF                     82 %            89 %            93 %            92 %             0.95
 DBN                     85 %            91 %            94 %            94 %             0.95
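
Efficiency and power factor relate the DC load on a power supply to the AC input it draws: input W = output W / efficiency, and input VA = input W / power factor. A minimal sketch of this estimate follows; the 500 W load value is a hypothetical example, not a measured figure.

    def ac_input(dc_output_w, efficiency, power_factor):
        """Estimate the AC input of a power supply from its DC output, efficiency, and power factor."""
        input_w = dc_output_w / efficiency        # real power drawn from the AC line
        input_va = input_w / power_factor         # apparent power that the feed must supply
        return input_w, input_va

    # Hypothetical example: a DBS/DBL power supply delivering 500 W at roughly 50% load
    # (efficiency 95%, power factor 0.95 from Table 4-61).
    w, va = ac_input(500, 0.95, 0.95)
    print(f"input ~ {w:.0f} W, {va:.0f} VA")   # -> input ~ 526 W, 554 VA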


4.10 Locations where Configuration Information is Stored and Timing of Information Update


The locations where configuration information is stored and the timing of information updates are
described below.

[Figure: locations where configuration information is stored. Inside the CTL: the MP, shared memory,
 CFM, and GUM. Outside the storage system: the management client, the MPC, and backup media
 (e.g., Media). Numbered arrows (1) to (7) between these locations correspond to the table below.]

 No.  Locations                       Update/Save/Load
 (1)  MP ⇔ Shared memory              • The configuration information is updated due to the configuration change
                                        by operators for VOL creation, LUNM setting, and the like.
                                      • The configuration information is updated due to the change of resource
                                        allocation.
 (2)  Shared memory → MP              If the storage system starts up when the shared memory is not volatile, the
                                      configuration information in the shared memory is loaded into MPs.
 (3)  Shared memory → CFM             • When the storage system is powered off, the configuration information is
                                        saved into the CFM.
                                      • When the configuration information is updated, it is saved into the CFM by
                                        online configuration backup.
 (4)  CFM → Shared memory             If the storage system starts up when the shared memory is volatile, the
                                      configuration information saved into the CFM in (3) is loaded into the shared
                                      memory.
 (5)  CFM → GUM                       The configuration information in the CFM is saved into the GUM by
                                      periodic backup (once per day).
 (6)  GUM → Management client         The configuration information in the GUM is saved into the management
                                      client, according to the operation settings for configuration information
                                      download.
 (7)  Backup media (e.g., Media) →    The configuration information saved in the backup media is loaded into
      (MPC) → Shared memory           the shared memory, according to the operation settings for configuration
                                      information restore in the MPC window.
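
The table above describes a tiered flow of configuration information. The following is a minimal sketch (a simplified model for illustration only, not the actual firmware or GUM implementation) that captures which transfer happens on which trigger.

    # Simplified model of the configuration-information flows (1) to (7) described above.
    # Directions and triggers follow the table; everything else is illustrative.
    FLOWS = [
        ("(1)", "MP <-> Shared memory",                 "configuration change (VOL creation, LUNM setting, resource allocation)"),
        ("(2)", "Shared memory -> MP",                  "startup with non-volatile shared memory"),
        ("(3)", "Shared memory -> CFM",                 "power-off, or online configuration backup after an update"),
        ("(4)", "CFM -> Shared memory",                 "startup with volatile shared memory"),
        ("(5)", "CFM -> GUM",                           "periodic backup (once per day)"),
        ("(6)", "GUM -> Management client",             "configuration information download"),
        ("(7)", "Backup media -> MPC -> Shared memory", "configuration information restore from the MPC window"),
    ]

    def flows_for(trigger_keyword):
        """Return the flows whose trigger description mentions the given keyword."""
        return [(no, path) for no, path, trigger in FLOWS if trigger_keyword in trigger]

    print(flows_for("startup"))   # -> flows (2) and (4)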
