THEORY OF OPERATION SECTION
Contents
1. Storage System Overview of DW850
 1.1 Overview
 1.2 Features of Hardware
 1.3 Storage System Configuration
  1.3.1 Hardware Configuration
  1.3.2 Software Configuration
   1.3.2.1 Software to Perform Data I/O
   1.3.2.2 Software to Manage the Storage System
   1.3.2.3 Software to Maintain the Storage System
 1.4 Specifications by Model
  1.4.1 Storage System Specifications
4. Appendixes
 4.1 DB Number - C/R Number Matrix
 4.2 Comparison of Pair Status on Storage Navigator, Command Control Interface (CCI)
 4.3 Parts Number of Correspondence Table
 4.4 Connection Diagram of DKC
 4.5 Channel Interface (Fibre and iSCSI)
  4.5.1 Basic Functions
  4.5.2 Glossary
  4.5.3 Interface Specifications
   4.5.3.1 Fibre Channel Physical Interface Specifications
   4.5.3.2 iSCSI Physical Interface Specifications
  4.5.4 Volume Specification (Common to Fibre/iSCSI)
  4.5.5 SCSI Commands
   4.5.5.1 Common to Fibre/iSCSI
 4.6 Outline of Hardware
  4.6.1 Outline Features
  4.6.2 External View of Hardware
  4.6.3 Hardware Architecture
  4.6.4 Hardware Component
 4.7 Mounted Numbers of Drive Box and the Maximum Mountable Number of Drive
 4.8 Storage System Physical Specifications
  4.8.1 Environmental Specifications
 4.9 Power Specifications
  4.9.1 Storage System Current
  4.9.2 Input Voltage and Frequency
  4.9.3 Efficiency and Power Factor of Power Supplies
 4.10 Locations where Configuration Information is Stored and Timing of Information Update
NOTICE: Unless otherwise stated, firmware versions in this section refer to the DKCMAIN firmware.
1.1 Overview
The DW850 models are 19-inch rack mount storage systems. They consist of a controller chassis, which controls the drives, and drive boxes in which the drives are installed.
The controller chassis is the hardware that plays the central role in the storage system, and it controls the drive boxes. The chassis contains two clustered controllers and provides a redundant configuration in which all major components, such as processors, memory, and power supplies, are duplicated.
When a failure occurs on one controller, processing continues on the other controller. When load concentrates on one controller, processing performance is accelerated by distributing the load across the CPUs of both controllers.
Furthermore, each component and the firmware can be replaced or updated while the system is operating, which minimizes the impact of maintenance work on system operation.
Five types of drive boxes are available, and the number and size of the drive boxes can be expanded depending on the usage purpose. Like the controller chassis, the major components of the drive boxes are duplicated for redundancy.
1.2 Features of Hardware
High Performance
• Processing is distributed across the clustered controllers
• High-speed processing is achieved by a large-capacity cache memory
• High-speed I/O processing is achieved by flash drives and FMDs
• High-speed data transfer is achieved by 16/32 Gbps Fibre Channel and 10 Gbps iSCSI interfaces
High Availability
• Continuous operation through duplication of the major components
• RAID1/5/6 are supported (RAID6 supports up to 14D+2P)
• Data is maintained during a power failure by saving it to cache flash memory
• Files can be shared between different types of servers
1.3 Storage System Configuration
(Figure: the storage system mounted in a 19-inch rack, managed via Maintenance Utility.)
1.3.1 Hardware Configuration
(Figure: hardware configuration — a Controller Chassis (CBXSS/CBXSL/CBSS1/CBSL1/CBSS2/CBSL2/CBLH1/CBLH2) connected to Drive Boxes (DBS/DBL/DBF and DB60).)
Figure 1-3 shows the system hardware configuration of the storage system.
(Figure 1-3: the Controller Chassis (containing GCTL + GUM, with LAN ports) connects to Drive Boxes 00 to 03 via SAS (12 Gbps/port); each Drive Box contains duplicated PDUs with AC inputs, power supply units, ENCs, and HDDs.)
(Figure: system hardware configuration for the NVMe drive box model — the Controller Chassis (containing GCTL + GUM, with LAN ports) connects to Drive Boxes 00 and 01 via NVMe (8 Gbps/port); each Drive Box contains duplicated PDUs with AC inputs, power supply units, ENCs, and drives.)
[Controller Chassis]
It consists of a controller board, a channel board (CHB), a disk board (DKB), and a power supply that supplies power to them.
[Drive Box]
It consists of ENCs, drives, and cooling-fan-integrated power supplies.
Five types of drive boxes are available: DBS, DBL, DBF, DBN, and DB60.
For the maximum number of drive boxes that can be installed for each model, see Table 1-1 Storage System Specifications (VSP G130, G350, G370, G700, G900 Models), Table 1-2 Storage System Specifications (VSP F350, F370, F700, F900 models), or Table 1-3 Storage System Specifications (VSP E990 models).
Figure 1-5 shows the installation configurations in a rack. DBS, DBL, DBF, and DB60 can be mixed in the same system.
(Figure 1-5: example installation configurations in a rack — six cases combining DBS/DBL/DBF and DB60 drive boxes above the DKC (with or without a CHBB), and four cases of DKC + DBN configurations; unused locations are left blank.)
1.3.2 Software Configuration
1.3.2.1 Software to Perform Data I/O
In the storage system, the data area is divided into blocks, and data is accessed by the address assigned to the head of each block.
1.3.2.2 Software to Manage the Storage System
• Storage Navigator
It is the storage management software for managing the hardware (setting the configuration information, defining the logical devices, and displaying the status) and for performance management (tuning) of the storage system. Storage Navigator is installed on the SVP; when Storage Navigator is installed, Storage Device List is also installed. Because it is a Web application, the storage system can be operated from a Web browser on a PC connected to the LAN.
If the following conditions are met, Storage Navigator for the DW800 storage system (VSP G200, G/F400, G/F600, and G/F800) can be installed on the SVP for the DW850 storage system (VSP G130, G/F350, G/F370, G/F700, G/F900, and VSP E990).

Item: SVP
Conditions: [Version] displayed in the upper right of the Storage Device List window is 88-03-03-00/xx or later.
For the DW850 storage system (VSP G130, G/F350, G/F370, G/F700, and G/F900), use SVP installation media version 88-03-03-x0/xx or later. For the DW850 storage system (VSP E990), use the SVP installation media for VSP E990 (any media version is allowed).
Note that the following restrictions apply:
• TLS1.0/1.1 cannot be enabled as the communication protocol between the SVP and the client PC or the storage system. (TLS1.0/1.1 can be enabled on the SVP for the DW800 storage system (VSP G200, G/F400, G/F600, and G/F800).)
• The Log Dump automation function (dumps are automatically collected when specific SIMs are generated) can be used only for the DW800 storage system.

Item: Storage Navigator software
Conditions: 83-03-21-x0/xx or later. Installation is performed from SVP installation media version 83-03-21-x0/xx or later for the DW800 storage system.

NOTE: The storage management software for the DW850 storage system (VSP G130, G/F350, G/F370, G/F700, G/F900, and VSP E990) cannot be installed on the SVP for the DW800 storage system (VSP G200, G/F400, G/F600, and G/F800).
1.3.2.3 Software to Maintain the Storage System
• Maintenance Utility
It is the Web application used for failure monitoring of the storage system, parts replacement, firmware upgrades, and installation of program products.
Maintenance Utility is incorporated into the GUM (Gateway for Unified Management) controller mounted in the controller chassis, so no installation is required.
Maintenance Utility is started by specifying the IP address of the CTL in a Web browser, or from the Web Console window or the MPC window on the Maintenance PC. Note that Maintenance Utility can be accessed even while the storage system power is turned off, because the GUM keeps operating as long as the controller chassis is supplied with power.
1.4 Specifications by Model
1.4.1 Storage System Specifications
Table 1-1 Storage System Specifications (VSP G130, G350, G370, G700, G900 Models)
(Values are listed in the order VSP G900 / VSP G700 / VSP G370 / VSP G350 / VSP G130.)
• Number of HDDs: Minimum: 4 (disk-in model) / 0 (diskless model); Maximum: 1,440 / 1,200 / 384 / 264 / 96
• Number of Flash Drives: Minimum: 4 (disk-in model) / 0 (diskless model); Maximum: 1,152 / 864 / 288 / 192 / 96
• Number of Flash Module Drives: Minimum: 4 (disk-in model); Maximum: 576 / 432 / − / − / −
• RAID Level: RAID6/RAID5/RAID1
• RAID Group Configuration: RAID6: 6D+2P, 12D+2P, 14D+2P; RAID5: 3D+1P, 4D+1P, 6D+1P, 7D+1P; RAID1: 2D+2D, 4D+4D (*9)
• Maximum Number of Spare Disk Drives: 64 (*1) / 48 (*1) / 24 (*1) / 16 (*1) / 16 (*1)
• Maximum Number of Volumes: 65,280 / 49,152 / 32,768 / 16,384 / 2,048
• Maximum Storage System Capacity (Physical Capacity):
  - 2.4 TB 2.5 HDD used: 2,656 TB / 1,992 TB / 664 TB / 443 TB / 221 TB
  - 14 TB 3.5 HDD used: 19,737 TB / 16,447 TB / 5,263 TB / 3,618 TB / 1,315 TB
  - 15 TB 2.5 SSD used: 17,335 TB / 13,001 TB / 4,333 TB / 2,889 TB / 1,444 TB
  - 14 TB FMD used: 8,106 TB / 6,080 TB / − / − / −
• Maximum External Configuration: 255 PiB / 192 PiB / 128 PiB / 64 PiB / 8 PiB
• Maximum Number of DBs (*6): DBS/DBL/DBF: 48, DB60: 24 / DBS/DBL/DBF: 36, DB60: 20 / DBS/DBL: 11, DB60: 6 / DBS/DBL: 7, DB60: 4 / DBS: 3, DBL: 7
• Cache Memory Capacity: 256 GiB to 1,024 GiB / 128 GiB to 512 GiB / 128 GiB to 256 GiB / 64 GiB to 128 GiB / 32 GiB
• Cache Flash Memory Type: BM35/BM45 / BM35 / BM15 / BM15 / BM05
(To be continued)
Table 1-2 Storage System Specifications (VSP F350, F370, F700, F900 Models)
(Values are listed in the order VSP F900 / VSP F700 / VSP F370 / VSP F350.)
• Number of Flash Drives: Minimum: 4 (disk-in model); Maximum: 1,152 / 864 / 288 / 192
• Number of Flash Module Drives: Minimum: 4 (disk-in model); Maximum: 576 / 432 / − / −
• RAID Level: RAID6/RAID5/RAID1
• RAID Group Configuration: RAID6: 6D+2P, 12D+2P, 14D+2P; RAID5: 3D+1P, 4D+1P, 6D+1P, 7D+1P; RAID1: 2D+2D, 4D+4D (*8)
• Maximum Number of Spare Disk Drives: 64 (*1) / 48 (*1) / 24 (*1) / 16 (*1)
• Maximum Number of Volumes: 64 k / 48 k / 32 k / 16 k
• Maximum Storage System Capacity (Physical Capacity):
  - 15 TB 2.5 SSD used: 17,335 TB / 13,001 TB / 4,333 TB / 2,889 TB
  - 14 TB FMD used: 8,106 TB / 6,080 TB / − / −
• Maximum External Configuration: 255 PiB / 192 PiB / 128 PiB / 64 PiB
• Maximum Number of DBs (*5): DBS/DBF: 48 / DBS/DBF: 36 / DBS: 11 / DBS: 7
• Cache Memory Capacity: 256 GiB to 1,024 GiB / 128 GiB to 512 GiB / 128 GiB to 256 GiB / 64 GiB to 128 GiB
• Cache Flash Memory Type: BM35/BM45 / BM35 / BM15 / BM15
• Storage I/F:
  - DKC-DB Interface: SAS/Dual Port
  - Data Transfer Rate: 12 Gbps
  - Maximum Number of Drives per SAS I/F: 24
• Device I/F:
  - Number of DKB PCBs: 8 / 4 / − / −
  - Supported Channel Type: Fibre Channel Shortwave (*2) / iSCSI (Optic/Copper)
  - Data Transfer Rate: Fibre Channel: 400/800/1600/3200 MB/s; iSCSI: 1000 MB/s (Optic), 100/1000 MB/s (Copper)
• Maximum Number of CHBs:
  - When CHBB is not installed: 12 (16 when DKB slots are used) / 12 (16 when DKB slots are used) / 4 / 4
  - When CHBB is installed: 16 (20 when DKB slots are used) / − / − / −
(To be continued)
*7: It is recommended to install the storage system in a computer room, such as in a data center. The storage system can also be installed in a general office; however, take measures against noise as required.
In particular, when you replace an old Hitachi storage system with a new one in a general office, note the following and take measures against noise: the cooling fans in the storage system have been downsized to achieve higher density, and the fan rotation speed is therefore higher than before to maintain the cooling performance. As a result, high-frequency content accounts for a large proportion of the noise.
*8: RAID1 (4D+4D) is a concatenation of two RAID1 (2D+2D) groups.
RAID6
Overview: Data blocks are divided among multiple Disks in the same way as in RAID5, and two parity Disks, P and Q, are set in each row (data Disks + parity Disks P and Q; the diagram shows the 4D+2P configuration, with data blocks A, B, C, D and parities P0, Q0 per row). Therefore, data can be assured even when failures occur in up to two Disk Drives in a parity group.
Advantage: RAID6 is far more reliable than RAID1 and RAID5 because it can restore data even when failures occur in up to two Disks in a parity group.
Disadvantage: Because the parity data P and Q must be updated when data is updated, RAID6 incurs a heavier write penalty than RAID5, and random write performance is lower than that of RAID5 when the number of Drives is the bottleneck.
(To be continued)
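As background for the advantage noted above, the following is a minimal sketch of the generic dual-parity idea behind RAID6 (P as XOR parity and Q as Reed-Solomon parity over GF(2^8) — the standard textbook construction, assumed here, not the DW850's actual on-disk encoding). Because P and Q give two independent equations per stripe, any two lost blocks in a parity group can be solved for.

# Minimal sketch of generic RAID6 dual parity (assumption: standard
# GF(2^8) with polynomial 0x11D and generator 2; not Hitachi-specific).

def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D          # reduction by x^8 + x^4 + x^3 + x^2 + 1
        b >>= 1
    return p

def gf_pow(a: int, n: int) -> int:
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def raid6_parity(data_blocks: list[bytes]) -> tuple[bytes, bytes]:
    """Return (P, Q) for one stripe, e.g. 4D+2P when given four blocks."""
    p = bytearray(len(data_blocks[0]))
    q = bytearray(len(data_blocks[0]))
    for idx, block in enumerate(data_blocks):
        coeff = gf_pow(2, idx)                 # per-Disk coefficient g^idx
        for j, byte in enumerate(block):
            p[j] ^= byte                       # P: plain XOR parity
            q[j] ^= gf_mul(coeff, byte)        # Q: weighted XOR parity
    return bytes(p), bytes(q)

# Losing any two of the six blocks in a 4D+2P stripe leaves two equations
# (P and Q) in two unknowns, which is why two Disk failures are survivable.
p, q = raid6_parity([b"\x01", b"\x02", b"\x04", b"\x08"])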
(Figure: read operations compared by RAID level — on the left, a mirrored configuration with primary and secondary Drives; on the right, a parity configuration with data and parity (P0, P1) striped across Drives. In both cases, a single read from the host is served by a single Drive read, and multiple host requests are served by one Drive read each; for example, four host requests for D0 to D3 result in four Drive reads.)
(To be continued)
(Figure: random write processing — in the mirrored configuration, the write data is written to both the primary and secondary Drives; in the RAID5 configuration, a host write involves (1) old data read, (2) old parity read, (3) new data write, and (4) new parity write, that is, four Drive I/Os in total.)
• In case of RAID6:
In addition to the RAID5 case, an old parity read and a new parity write are performed for the second parity, so the I/O is operated six times in total.
(To be continued)
(Figure: sequential (full-stripe) write processing — when a full stripe of write data (e.g. D0, D1, D2) is present in Cache, parity is created from the write data alone and the data and parity are written to the Drives.)
• In case of RAID6:
In addition to the RAID5 case, the second parity is created, and the data from the host and the two sets of parity are written to the Drives. For example, in the case of 6D+2P, two sets of parity are created for a host write of six sets of data, and Drive write is performed eight times for the data and parity combined.
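For reference, the Drive I/O counts stated above can be collected into a small sketch; the counting model below simply restates the text and is not a performance formula.

# Sketch of the Drive I/O counts described above (assumed counting model).

def random_write_ios(raid_level: str) -> int:
    """Drive I/Os per host write in read-modify-write mode."""
    return {"RAID1": 2,    # write to primary and secondary Drives
            "RAID5": 4,    # old data read + old parity read + 2 writes
            "RAID6": 6}[raid_level]  # RAID5 plus 2nd-parity read and write

def full_stripe_write_ios(data_drives: int, parity_drives: int) -> int:
    """Drive writes for a full-stripe write, e.g. 6D+2P -> 6 + 2 = 8."""
    return data_drives + parity_drives

assert full_stripe_write_ios(6, 2) == 8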
4. Reliability
Table 2-6 shows the reliability related to each RAID level.
(1) This enables multiplatform system users to share the highly reliable, high-performance resources realized by the DKC.
• The SCSI interface complies with ANSI SCSI-3, a standard interface for various peripheral devices for open systems. Thus, the DKC can be easily connected to various open-market Fibre host systems (e.g. workstation servers and PC servers).
• The DW850 can be connected to open systems via the Fibre interface by installing a Fibre Channel Board (DW-F800-4HF32R). Fibre connectivity is provided as a channel option of the DW850. The Fibre Channel Board can be installed in any CHB location of the DW850.
• The iSCSI interface transmits and receives block data by SCSI over the IP network. For this reason, you can configure and operate an IP-SAN (IP Storage Area Network) at low cost using existing network devices. The iSCSI interface board (DW-F800-2HS10S/DW-F800-2HS10B) can be installed in any of the DW-F800 CHB slots.
1. Before the LUN path configuration is changed, Fibre I/O on the related Fibre port must be stopped.
2. Before a Fibre Channel Board or an LDEV is removed, the related LUN paths must be removed.
3. Before a Fibre Channel Board is replaced, the related Fibre I/O must be stopped.
4. When the Fibre-Topology information is changed, pull out the Fibre cable between the port and the SWITCH before changing the Fibre-Topology information, and put it back after the change is completed.
The precautions for iSCSI interface maintenance work are as follows.
1. Before changing the LUN path definition, the iSCSI interface port I/O needs to be stopped.
2. Before removing the iSCSI interface board or an LDEV, the LUN path definition needs to be removed.
3. Before replacing the iSCSI interface board, the I/O needs to be stopped.
2.2.3 Configuration
2.2.3.1 System Configuration
1. All Fibre Configuration
The DKC can also have an All Fibre configuration in which only CHB adapters are installed.
An example of the All Fibre configuration is shown below.
(Figure: All Fibre configuration example — Fibre hosts with Fibre Channel adapters connect over the Fibre I/F through FC ports to the CHBs; open volumes in the DKC and the 1st DB are accessed via the DKB.)
(Figure: host group example — on Storage System port CL1-A, host group 01 (hg-hpux) contains the hosts hpux01 and hpux02; LUN0 and LUN1 are mapped to LDEVs 02:01 and 02:02.)
ID*: Each has a different ID number within a range of 0 through EF on a bus.
LUN: 0, 1, 2, 3, 4, 5, 6, 7, ... up to 2047
2. iSCSI interface
For the iSCSI interface, an IPv4 or IPv6 address is specified and assigned to the iSCSI port.
Up to 16 virtual ports can be added to one iSCSI physical port. Use Command Control Interface (CCI) when adding virtual ports.
*1: “0” is added to the emulation type of the V-VOLs (e.g. OPEN-0V).
This flexible LU and LDEV mapping scheme enables the same logical volume to be set to multiple paths so that the host systems can configure a shared volume configuration such as a High Availability (HA) configuration. In the shared volume environment, however, some lock mechanism needs to be provided by the host systems.
(Figure: LU-LDEV mapping example — two hosts access up to 2,048 LUNs per Fibre/iSCSI port, including a shared volume; LUs are mapped to LDEVs identified as CU#:LDEV# (CU#0:LDEV#00 through CU#2:LDEV#13) behind a DKB pair.)
3. LUN Security
(1) Overview
This function segregates the various types of servers that are connected to a Fibre/iSCSI port via a switch into a secure environment, and thus enables the storage and the servers to be used in a SAN environment.
The MCU (initiator) port of TrueCopy does not support this function.
(Figure: LUN Security example — Host 1 and Host 2 access the same port through a switch (SW). Without LUN Security, both hosts see LUN 0 through 2047; with LUN Security, each host sees only its own LU group, e.g. Host 1 sees LU group 1 and Host 2 sees LU group 2.)
2. LUN setting
- LUN setting:
• Select the CHB, the Fibre port, and the LUN, and select the CU# and LDEV# to be allocated to the LUN.
• Repeat the above procedure as needed.
The MCU port (Initiator port) of the TrueCopy function does not support this setting.
*1: The contents that are already set can be referred to on the Maintenance PC display.
*2: The above settings can be made online.
*3: Setting duplicated access paths from different hosts to the same LDEV is allowed. This provides a means to share the same volume among host computers. It is, however, the hosts' responsibility to manage exclusive control of the shared volume.
Refer to MAINTENANCE PC SECTION 4.1.3 Allocating the Logical Devices of a Storage System to a Host for more detailed procedures.
*1: There are no functional differences between host mode 01 and 21. When you first connect a host, it
is recommended that you set host mode 21.
*2: There are no functional differences between host mode 0C and 2C. When you first connect a host,
it is recommended that you set host mode 2C.
4. Non-volatile Cache
Batteries and Cache Flash Memories (CFMs) are installed in the Controller Boards in a DKC. Once data has been written into Cache, the data is retained even if a power interruption occurs, because it is transferred to the CFM.
(Figure: hot-standby configuration — an active host and a standby host (AP: application program, FS: file system, HW: hardware) are connected via LAN and access LU0 in the DKC/DB through CHB0/CHB1.)
• The HA software under the hot-standby configuration operates in the following sequence:
(1) The HA software within the active host monitors the operational status of its own system by using a monitoring agent and sends the results to the standby host through the monitoring communication line (this process is referred to as heartbeat transmission). The HA software within the standby host monitors the operational status of the active host based on the received information.
(2) If an error message is received from the active host, or if no message is received, the HA software of the standby host judges that a failure has occurred in the active host. As a result, it transfers the management of the IP addresses, Disks, and other common resources to the standby host (this process is referred to as fail-over).
(3) The HA software starts the application program concerned within the standby host to take over the processing on behalf of the active host.
• Use of the HA software allows a processing requirement from a client to be taken over. In the case of some specific application programs, however, it appears to the client as if the host that was processing the task had been rebooted due to the host switching. To ensure continued processing, therefore, a login to the application program within the host or a resend of the processing requirement may need to be executed once again.
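As an illustration of steps (1) to (3), the following is a minimal sketch of heartbeat monitoring and fail-over; the timeout value and the helpers take_over_shared_resources() and start_application() are hypothetical stand-ins, not the behavior of any particular HA software product.

# Minimal sketch of heartbeat monitoring and fail-over (steps (1)-(3) above).
import time

HEARTBEAT_TIMEOUT = 5.0   # seconds of silence before declaring a failure

def take_over_shared_resources() -> None:
    print("taking over IP addresses, Disks, and other common resources")

def start_application() -> None:
    print("starting the application program on the standby host")

class StandbyMonitor:
    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self, status: str) -> None:
        # (1) The active host periodically reports its own status.
        self.last_heartbeat = time.monotonic()
        if status == "ERROR":       # (2) explicit error message received
            self.fail_over()

    def check(self) -> None:
        # (2) No message within the timeout also counts as a failure.
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.fail_over()

    def fail_over(self) -> None:
        # (2)-(3) Transfer the common resources, then take over processing.
        take_over_shared_resources()
        start_application()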
(Figure: mutual standby configuration — both hosts operate as active hosts (AP: application program), each accessing its own LU0 in the DKC/DB via Fibre/iSCSI through CHB0/CHB1, with the hosts connected via LAN.)
• In the mutual standby configuration, since both hosts operate as active hosts, no resources exist that are unnecessary during normal processing. On the other hand, during a backup operation there are disadvantages: performance deteriorates and the software configuration becomes complex.
• This Storage System is scheduled to support Oracle SUN CLUSTER, Symantec Cluster Server, Hewlett-Packard MC/ServiceGuard, IBM HACMP, and so on.
(Figure: path switching configuration — a host connected via LAN uses alternate paths through CHB0 and CHB1 to LU0 in the Storage System.)
The path switching function enables processing to be continued without host switching in the event of a
failure in the adapter, Fibre/iSCSI cable, Storage System or other components.
2.2.6.2 Specifications
1. General
(1) The LUN addition function supports the Fibre interface.
(2) LUN addition can be executed from the Maintenance PC or from the Web Console.
(3) Some operating systems require a reboot operation to recognize the newly added volumes.
(4) When new LDEVs should be installed for LUN addition, install the LDEVs from the Maintenance PC first. Then add the LUNs by LUN addition from the Maintenance PC or the Web Console.
2. Platform support
Host platforms supported for LUN addition are shown in Table 2-8.
2.2.6.3 Operations
1. Operations
Step 1: Execute LUN addition from the Maintenance PC.
Step 2: Check with Table 2-8 whether or not the platform of the Fibre port supports LUN recognition.
Support (A): Execute the LUN recognition procedures in Table 2-8.
Not support (B): Reboot the host and execute the normal installation procedure.
2. Host operations
Host operations for LUN recognition are shown in Table 2-9.
2.2.7.2 Specifications
1. General
(1) LUN removal can be used only for the ports on which LUNs already exist.
(2) LUN removal can be executed from the Maintenance PC or from the Web Console.
(3) When LUNs are to be removed, stop the Host I/O of the concerned LUNs.
(4) If necessary, execute a backup of the concerned LUNs.
(5) Remove the concerned LUNs from the HOST.
(6) In the case of AIX, release the reserve of the concerned LUNs.
(7) In the case of HP-UX, do not remove LUN=0 under an existing target ID.
NOTE: If LUN removal is done without stopping the Host I/O or releasing the reserve, it fails. In that case, stop the HOST I/O or release the reserve of the concerned LUNs and try again. If LUN removal still fails after stopping the Host I/O or releasing the reserve, a health check command may have been issued from the HOST.
In that case, wait about three minutes and try again.
2. Platform support
Host platforms supported for LUN removal are shown in Table 2-10.
2.2.7.3 Operations
1. Operations
Step 1: Confirm with Table 2-10 whether or not the platform supports LUN removal.
Support: Go to Step 2.
Not support: Go to Step 3.
Step 2: If the HOST MODE of the port is not 00, 04, or 07, go to Step 4.
Step 3: Stop the Host I/O of the concerned LUNs.
Step 4: If necessary, execute a backup of the concerned LUNs.
Step 5: Remove the concerned LUNs from the HOST.
Step 6: In the case of AIX, release the reserve of the concerned LUNs.
If not, go to Step 7.
Step 7: Execute LUN removal from the Maintenance PC.
2. Host operations
Host operations for the LUN removal procedures are shown in Table 2-11.
The Prioritized Port Control option has two different control targets: the fibre port and the open-systems host's World Wide Name (WWN). The fibre ports used on production servers are called prioritized ports, and the fibre ports used on development servers are called non-prioritized ports. Similarly, the WWNs used on production servers are called prioritized WWNs, and the WWNs used on development servers are called non-prioritized WWNs.
The Prioritized Port Control option cannot be used simultaneously for both the ports and the WWNs of the same DKC. Up to 80 ports or 2,048 WWNs can be controlled for each DKC.
*: When the number of installed ports in the storage system is less than this value, the maximum number is the number of installed ports in the storage system.
The Prioritized Port Control option monitors the I/O rate and the transfer rate of the fibre ports or WWNs. The monitored data (I/O rate and transfer rate) is called the performance data, and it can be displayed in graphs. You can use the performance data to estimate the threshold and upper limit for the ports or WWNs, and optimize the total performance of the DKC.
(1) The Port/WWN Real Time Mode is recommended if you want to monitor the port or WWN performance for a specific period of time (within 24 hours) of a day and check the performance in real time.
(2) The Port/WWN Offline Mode is recommended if you want to collect a certain amount of port or WWN performance data (a maximum of one week) and check the performance in non-real time.
To determine a preliminary upper limit and threshold, run the development server using the performance data collected beforehand from the production server, and check whether the performance of a prioritized port changes. If the performance of the prioritized port does not change, increase the upper limit of the non-prioritized port. After that, recollect and analyze the performance data. Repeat these steps to determine the optimized upper limit and threshold.
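The iterative adjustment described above can be written as a simple loop. In the sketch below, set_upper_limit() and prioritized_performance_unchanged() are hypothetical stand-ins for the actual setting and monitoring operations.

# Sketch of the upper-limit tuning loop (helpers are hypothetical stand-ins).

def set_upper_limit(value: int) -> None:
    print(f"non-prioritized upper limit set to {value} IO/s")

def prioritized_performance_unchanged() -> bool:
    # In practice: recollect the performance data and compare the
    # prioritized ports' (WWNs') I/O rate with the previous measurement.
    return True

def tune_upper_limit(initial: int, step: int, ceiling: int) -> int:
    limit = initial
    while limit + step <= ceiling:
        set_upper_limit(limit + step)
        if not prioritized_performance_unchanged():
            break           # prioritized ports affected: keep current limit
        limit += step       # performance unchanged: raise the limit further
    set_upper_limit(limit)
    return limit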
(Flowchart, step (6) Determining an upper limit: "Is this upper limit of non-prioritized ports (WWNs) the maximum value that does not affect the performance of prioritized ports (WWNs)?" — if No, repeat the adjustment; if Yes, the upper limit is determined.)
1. HDD
The formatting time of an HDD does not depend on the number of logical volumes; it is determined by the capacity and the rotational speed of the HDD.
(1) High-speed LDEV formatting
The high-speed format times are indicated as follows.
The standard time required is only a guideline, and the actual formatting time may differ depending on the RAID GROUP and the Drive type.
2. SAS SSD
SAS SSDs do not have the self LDEV format function.
LDEV formatting is performed by the slow LDEV format only.
The rough formatting time per 1 TB/1 PG without host I/O is indicated as follows (*2) (*4).
3. FMD
The formatting time of an FMD does not depend on the number of ECC groups; it is determined by the capacity of the FMD.
(1) High-speed LDEV formatting
The high-speed format times are indicated as follows.
The standard time required is only a guideline, and the actual formatting time may differ depending on the RAID GROUP and the Drive type.
4. NVMe SSD
NVMe SSDs do not have the self LDEV format function.
LDEV formatting is performed by the slow LDEV format only.
The rough formatting time per 1 TB/1 PG without host I/O is indicated as follows (*2) (*4).
The formatting time is the same for up to 16 SSDs because the transfer of the format data does not reach the bandwidth limit of the path.
*1: After the standard formatting time has elapsed, the display on the Web Console shows 99% until the monitoring time is reached. Because the Drive itself performs the format and the progress rate against the total capacity cannot be obtained, the ratio of the elapsed time from the start of formatting to the required formatting time is displayed.
*2: If there is I/O operation, the formatting time can be more than six times the listed value, depending on the I/O load.
*3: The formatting time varies with the generation of the Drive within the range of the standard time.
NOTE: The formatting time when mixing the Drive types and configurations described in (1) High speed LDEV formatting and (2) Slow LDEV formatting divides into the following cases.
(a) When only Drives capable of high-speed formatting (1. HDD, 3. FMD) are mixed:
The formatting time is the formatting time of the Drive type and configuration with the maximum standard time.
(b) When only Drives limited to low-speed formatting (2. SAS SSD) are mixed:
The formatting time is the formatting time of the Drive type and configuration with the maximum standard time.
(c) When Drives capable of high-speed formatting (1. HDD, 3. FMD) and Drives limited to low-speed formatting (2. SAS SSD) are mixed:
(1) The maximum standard time among the high-speed-capable Drive configurations is the maximum high-speed formatting time.
(2) The maximum standard time among the low-speed-only Drive configurations is the maximum low-speed formatting time.
The formatting time is the sum of the above formatting times (1) and (2).
When high-speed-capable Drives and low-speed-only Drives are mixed in one formatting process, the low-speed formatting starts after the high-speed formatting is completed. Even after the high-speed formatting is completed, the logical volumes for which the high-speed formatting has completed cannot be used until the low-speed formatting is completed.
In all cases (a), (b), and (c), the time required before the logical volumes can be used is longer than when high-speed-capable Drives and low-speed-only Drives are not mixed.
Therefore, when formatting multiple Drive types and configurations, we recommend dividing the formatting work and starting it individually from the Drive type and configuration with the shorter standard time.
*4: The time required to format the drive might increase by up to approximately 20% for DBs at the rear stage of a cascade connection.
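The combination rule of case (c) in the NOTE can be stated compactly as follows; this is a minimal sketch of the rule only, with made-up example times, not a formatting-time calculator for real Drives.

# Sketch of the mixed-format timing rule in NOTE (c): total time =
# max(high-speed times) + max(low-speed times). Example values are made up.

HIGH_SPEED = {"HDD", "FMD"}   # Drives with high-speed LDEV formatting
LOW_SPEED = {"SAS SSD"}       # Drives with slow LDEV formatting only

def total_format_time(standard_times: dict[str, float]) -> float:
    """standard_times maps a Drive type to its standard time (hours)."""
    high = [t for d, t in standard_times.items() if d in HIGH_SPEED]
    low = [t for d, t in standard_times.items() if d in LOW_SPEED]
    return max(high, default=0.0) + max(low, default=0.0)

# Example: HDD 5 h and FMD 2 h format first (max 5 h), then the SAS SSD
# low-speed formatting (8 h) runs, so the volumes become usable after
# 5 + 8 = 13 hours.
print(total_format_time({"HDD": 5.0, "FMD": 2.0, "SAS SSD": 8.0}))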
• When Quick Format is executed on parity groups with different Drive capacities at the same time, calculate the time based on the parity group with the largest capacity.
2.4.1.1 Requirement #1
Requirement
Maximize system performance by using the MPUs effectively.
Issue
How to distribute resources to balance the load of each MPU.
How to realize
(1) The user directly allocates resources to each MPU.
(2) The user does not allocate resources; resources are allocated to each MPU automatically.
Case
(A) At the time of initial construction (Auto-Define-Configuration)
Target resource: LDEV
Setting IF: Maintenance PC
2.4.1.2 Requirement #2
Requirement
Maximize system performance by using the MPUs effectively.
Issue
How to move resources to balance the load of each MPU.
How to realize
The user directly requests to move resources.
Case
Performance tuning
Target resources: LDEV / External VOL / JNLG
Setting IF: Storage Navigator / CLI / RMLib
2.4.1.3 Requirement #3
Requirement
Troubleshooting in the case of problems related to ownership.
Issue
How to move the resources required for solving problems.
How to realize
Maintenance personnel directly request to move resources.
Case
Troubleshooting
Target resources: LDEV / External VOL / JNLG
Setting IF: Storage Navigator / CLI / RMLib
2.4.1.4 Requirement #4
Requirement
Confirm the resources allocated to each MPU.
Issue
How to reference the resources allocated to each MPU.
How to realize
The user directly requests to reference resources.
Case
(A) Before ownership management resources are added
Target resources: LDEV / External VOL / JNLG
Referring IF: Storage Navigator / CLI / Report (XPDT) / RMLib
(C) Troubleshooting
Target resources: LDEV / External VOL / JNLG
Referring IF: Storage Navigator / CLI / Report (XPDT) / RMLib
2.4.1.5 Requirement #5
Requirement
Maintain the performance of resources allocated to a specific MPU.
Issue
How to move resources allocated to each MPU automatically, and how to prevent the movement of resources during the addition of an MPU.
How to realize
Resources are NOT automatically allocated or moved to an MPU for which the user specified Auto Allocation Disable.
Case
(A) When adding ownership management resources, allocation of resources to the Auto Allocation Disable MPU is prevented.
(Figure: a resource to be added is allocated to the Auto Allocation Enable MPU, not to the Auto Allocation Disable MPU.)
(Figure: when LDEVs are added (addition of ECC/CV operation), LDEV ownership is allocated.)
(Figure: automatic allocation example — LDEVs are allocated to the MPUs so that the resource totals of the MPUs are balanced.)
However, automatic allocation cannot consider the weight (load) of each device, as the count-based sketch below illustrates.
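A count-based allocation of this kind can be sketched as follows; the MPU names and data layout are hypothetical, and the sketch deliberately ignores per-device load, which is exactly the limitation noted above.

# Sketch of count-based automatic ownership allocation (names hypothetical).

def allocate(resource: str, owned: dict[str, list[str]],
             auto_disabled: set[str]) -> str:
    """Give the resource to the eligible MPU owning the fewest resources."""
    candidates = [mpu for mpu in owned if mpu not in auto_disabled]
    target = min(candidates, key=lambda mpu: len(owned[mpu]))
    owned[target].append(resource)
    return target

owned = {"MPU-10": ["LDEV#0", "LDEV#2"], "MPU-20": ["LDEV#1"]}
print(allocate("LDEV#3", owned, auto_disabled=set()))  # -> "MPU-20"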
(Figure: allocation example with SAS and SSD/FMD parity groups and a DP VOL distributed across the MPUs.)
(Figure: allocation examples — ECC Gr.1-1 (3D+1P, 5 LDEVs) and ECC Gr.1-3 (7D+1P, 6 LDEVs) allocated across the MPUs by LDEV count, and LDEV/DP VOL allocations balanced by totals.)
(Figure: external volume (E-vol #2 to #5) and LDEV (#4 to #8) ownership allocation balanced across the MPUs.)
(Figure: ownership and I/O flow — host I/O for LDEV #0 enters through the FE IFPK port and is processed by an MP of the owner MPU; the PM of each MPU holds the ownership information (LDEV #0, LDEV #1, ...), and the SM of each CTL holds the LDEV #0 control information.)
(Figure: MPU ownership movement, Step 1.)
Step 2. Switch the MPU to which I/O is issued to the target MPU (to which the ownership is moved).
(Figure: MPU ownership movement, Step 2.)
Step 3. Complete the ongoing processing in the source MP whose ownership is moved.
(New processing is not performed in the source MP.)
(Figure 2-23: MPU block for maintenance (3).)
Step 5. The movement of the ownership is completed and the processing starts in the target MPU.
(Figure 2-25: MPU block for maintenance (5).)
Step 6. Perform Step 1 to Step 5 for all resources under the MPU to be blocked; after they are completed, block the MPU.
(Figure 2-26: MPU block for maintenance (6).)
(Figure: MPU blocked due to failure, Step 1 — ownership and I/O flow before switching.)
Step 2. Switch the MPU to which I/O is issued to the MPU that takes over the ownership.
(Figure: MPU blocked due to failure (2).)
Step 3. Perform WCHK1 processing at the initiative of the MPU that takes over the ownership.
(Figure 2-29: MPU blocked due to failure (3).)
Step 4. WCHK1 processing is completed, and the processing starts in the target MPU.
(Figure 2-30: MPU blocked due to failure (4).)
(Figure: DIMM installation layouts — for VSP G370, VSP G350, VSP G130 (*1), VSP F370, and VSP F350: Controller 1 and Controller 2; for VSP G900, VSP G700, VSP F900, VSP F700, and VSP E990: Controller 1 and Controller 2.
*1: For VSP G130, one DIMM can be installed in each controller.)
(Figure: all models — Controller 1 and Controller 2 are separated by a power boundary; MG: Module Group.)
(Figure: Controller 1 and Controller 2 — each MPU contains MPs and a PM, with a CM in each controller.)
(Figure: SGCB allocation — the PM of each MPU and the CM of each controller hold SGCBs identified by hexadecimal numbers.)
(Figure: SGCB allocation after a change — the SGCB numbers held in the PMs and CMs of Controller 1 and Controller 2.)
(Figure: Cache DIR and SGCBs — the PM of each MPU holds a Cache DIR with C-VDEV#0 to C-VDEV#2 entries pointing to SGCBs in the CMs.)
(Figure: Cache DIR and SGCBs after a change in the C-VDEV assignment.)
(Figures: a sequence of diagrams showing how Dirty (D), Clean (C), and Free (F) segments in the SGCBs and CMs are rebalanced between a high-workload MPU and a low-workload MPU, and between the CMs of Controller 1 and Controller 2.)
(Figure: free segment management — each MPU has a free queue, a free bitmap, and an unallocated bitmap; Free-counters, Clean-counters, and ALL-counters are updated on data access, discard of Cache data, and destage/staging.)
THEORY02-05-200
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-06-10
THEORY02-06-10
Hitachi Proprietary DW850
Rev.0 Copyright © 2018, 2020, Hitachi, Ltd.
THEORY02-06-20
2.6.1.2 Features
• The capacity of the ECC group can be fully used.
(Figure: Custom Volume (CV) example — a RAID5 (3D+1P) ECC group with 16 OPEN-V LDEVs; CV #1 (regular volume size, 150 LBA) through CV #5 (OPEN-V) are mapped onto an OPEN-V base volume, leaving unused area; the physical and logical images of the ECC group are shown.)
2.6.1.3 Specifications
The CVS option consists of a function to provide variable capacity volumes.
(Figure: maintenance access paths — the TSD connects to the SVP and the user via a TEL line; the CE uses the LAN and the MPC for the maintenance functions of items No. 2, 3, 4, and 5 in the table (*1); all DKC maintenance functions in the table are available on the DKCs.)
*1: Operated from Command Control Interface in the case of a configuration that does not contain the SVP.
*1: When a PDEV for which PDEV Erase was stopped is installed into the DKC again, it might fail with a Spin-up failure.
*2: Maintenance for a failure reported by the concerned MSG might not be possible until PDEV Erase is completed or terminates abnormally.
(Figure: track format — the home address (HA) and records R0, R1, ..., RL, each consisting of Count (C), Key (K), and Data (D) fields.)
For increased directory search efficiency, a single virtual device (VDEV) is divided into 16-slot groups which
are controlled using VDEV-GRPP and CACHE-GRPT.
The directories VDEV-GRPP, CACHE-GRPT, CACHE-SLCB, and CACHE-SGCB are used to identify the
Cache hit and miss conditions. These control tables are stored in the shared memory.
In addition to the Cache hit and miss control, the shared memory is used to classify and control the data in Cache according to its attributes. Queues are something like boxes that are used to classify data according to its attributes.
Basically, queues are controlled in slot units (some queues are controlled in segment units). Like SLCBs and SGCBs, queues are controlled using a queue control table so that queue data of the same attribute can be controlled as a single data group. These control tables are briefly described below.
(Figure 2-54: Cache directory structure — the LDEV-DIR points to VDEV-GRPP entries (up to VDEV#51 shown); a VDEV-GRPP entry points to a CACHE-GRPT covering a 16-slot group; each SLCB holds the Cache addresses of its read segments (RSEG1ADR to RSEG4ADR) and write segments (WSEG1ADR to WSEG4ADR), which locate the RD data and WR data segments in Cache via the SGCBs.)
(1) The current VDEV-GRPP is referenced through the LDEV-DIR to determine the hit/miss condition of the VDEV groups.
(2) If a VDEV group hits, the CACHE-GRPT is referenced to determine the hit/miss condition of the slots.
(3) If a slot hits, the CACHE-SLCB is referenced to determine the hit/miss condition of the segments.
(4) If a segment hits, the CACHE-SGCB is referenced to access the data in Cache.
If a search miss occurs during the searches from (1) through (4), the target data causes a Cache miss.
From the above formulas, the VDEV number ranges from 0 to 2047.
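The four-level search of steps (1) through (4) can be sketched as follows; the dictionary-based structures are simplifications for illustration, not the actual shared-memory layout of the directories.

# Sketch of the directory search: LDEV-DIR -> VDEV-GRPP -> CACHE-GRPT
# (slot group) -> SLCB -> SGCB. A miss at any level is a Cache miss.

SLOTS_PER_GROUP = 16  # a VDEV is divided into 16-slot groups

def cache_lookup(ldev_dir: dict, vdev: int, slot: int, seg: int):
    grpp = ldev_dir.get(vdev)                    # (1) VDEV-group hit/miss
    if grpp is None:
        return None                              # Cache miss
    grpt = grpp.get(slot // SLOTS_PER_GROUP)     # (2) slot-group table
    if grpt is None:
        return None
    slcb = grpt.get(slot)                        # (3) slot hit/miss (SLCB)
    if slcb is None:
        return None
    return slcb.get(seg)                         # (4) segment (SGCB) or miss

# Example (hypothetical directory contents):
ldev_dir = {7: {0: {3: {1: "segment data in Cache"}}}}
print(cache_lookup(ldev_dir, vdev=7, slot=3, seg=1))   # hit
print(cache_lookup(ldev_dir, vdev=7, slot=19, seg=0))  # miss (None)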
3. Queue structures
The DKC and DB use 10 types of queues to control data in Cache segments according to its attributes.
These queues are described below.
The control table for these queues is located in the shared memory and points to the head and tail
segments of the queues.
[Figure: queue transitions — segments move from the free queue to the clean queue on RD MISS/WR MISS and stay there on RD HIT; on WR HIT they move to the dirty queues, passing from the parity-not-reflected dirty status through the parity in-creation status (parity creation starts) to the parity-created dirty status (parity creation complete); when destaging of the WR/RD segments completes, segments return to the free queue on an LRU basis]
[Figure 2-57: data flow between the CHB, Cache areas A and B, and the Drive]
The Cache area used for staging read data is determined by whether the result of evaluating the following expression is odd or even:
(CYL# x 15 + HD#) / 16
The read data is staged into area A if the result is even and into area B if the result is odd.
Read data is not duplexed, and its staging Cache area is determined by the formula shown in Figure 2-57. Staging is performed not only on the segments containing the pertinent block but also on the subsequent segments up to the end of the track (for an increased hit ratio). Consequently, one track's worth of data is prefetched starting at the target block. The formula is introduced so that the Cache activity ratios of areas A and B are evenly balanced. The staged Cache area is called the Cache area, and the other area is called the NVS area.
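For illustration, the area-selection rule can be written as follows (cyl and hd stand for CYL# and HD#; the function name and integer-division reading of the expression are illustrative assumptions):

```python
def staging_cache_area(cyl: int, hd: int) -> str:
    """Select the Cache area for staging read data (Figure 2-57 rule):
    even quotient -> area A, odd quotient -> area B."""
    quotient = (cyl * 15 + hd) // 16   # integer division of the expression
    return "A" if quotient % 2 == 0 else "B"

print(staging_cache_area(0, 0))    # A (quotient 0 is even)
print(staging_cache_area(1, 1))    # A ((15 + 1) // 16 = 1 -> odd? no: prints B)
```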
[Figure 2-58: duplexed write — the write data is transferred into write slots of both Cache A and Cache B (2); old data and old parity are staged into read slots (1), (3); the new parity is written into the write parity slots of both Cache areas (5)]
This system handles write data (new data) and read data (old data) in separate segments as shown in Figure 2-58 (they are not overwritten as in conventional systems), thereby mitigating the write penalty.
(1) If the write data in question causes a Cache miss, the data from the block containing the target record up to the end of the track is staged into a read data slot.
(2) In parallel with Step (1), the write data is transferred when the block in question is established in the read data slot.
(3) The parity data for the block in question is checked for a hit or miss condition and, if a Cache miss condition is detected, the old parity is staged into a read parity slot.
(4) When all data necessary for generating the new parity is established, the DRR processing of the CPU creates the new parity.
(5) When the new parity is completed, the DRR transfers it into the write parity slots for Cache A and Cache B (the new parity is handled in the same manner as the write data).
The reason for writing the write data into both Cache areas is that the data would be lost if a Cache error occurred before it is written to the Disk.
Although two Cache areas are used as described above, only the write data (including parity) is duplexed; the read data (including parity) is staged into either Cache A or Cache B (in the same manner as in the read mode).
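For reference, the parity arithmetic the DRR performs in Step (4) is the standard RAID5 read-modify-write rule, which is why the old data and old parity must be staged first. A minimal sketch (names illustrative):

```python
def new_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """RAID5 read-modify-write: new parity = old data XOR new data XOR old parity."""
    return bytes(od ^ nd ^ op for od, nd, op in zip(old_data, new_data, old_parity))

# usage: updating one data block only needs the old block and old parity
assert new_parity(b"\x0f", b"\xf0", b"\x0f") == b"\xf0"
```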
The control information necessary for controlling Cache is stored in the shared memory.
[Figure: WR/RD segment transitions — (3) segment released, (4) switch to read area]
Data destaging:
(1) The write data is copied from the NVS area into the read area.
(2) The write segment in the Cache area is released.
(3) Simultaneously, the segment in the NVS area is switched from a write segment to a read segment.
(4) Destaging is performed.
(5) The read segment in the NVS area is released.
Parity destaging:
(1) A correction read for parity generation (old parity) occurs.
(2) New parity is generated.
(3) The old parity in the read segment is released.
(4) The segments in the Cache and NVS areas are switched from write segments to read segments.
(5) Destaging is performed.
(6) The read segment in the NVS area is released.
Write data is stored in write segments before parity is generated, but in read segments after parity is generated. When data is written to the Drive, therefore, it is transferred from the read segment.
[Figure: data slot — WR and RD segments in the Cache area and the NVS area]
• Single-stripe blocking
Two or more dirty segments in a stripe are combined into a single dirty data block. Contiguous dirty blocks are placed in a single area. If an unloaded block exists between dirty blocks, the system destages the dirty blocks separately at the unloaded block. If a clean block exists between dirty blocks, the system destages the blocks including the clean block.
• Multiple-stripe blocking
A sequence of stripes in a parity group is blocked to reduce the number of write penalties. This mode is useful for sequential data transfer.
• Drive blocking
In the Drive blocking mode, blocks to be destaged are written with a single Drive command if they are contiguous when viewed from the physical Drive, to shorten the Drive's latency time.
The single- and multiple-stripe blocking modes are also called in-Cache blocking modes. The DMP determines which mode to use. The Drive blocking mode is identified by the DSP.
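A minimal sketch of the single-stripe rule (the block states and all names are illustrative; this is not the DMP's actual algorithm):

```python
def destage_runs(stripe):
    """Coalesce destage candidates within one stripe: split at unloaded
    blocks, keep clean blocks that lie between dirty blocks, and trim
    clean blocks at the edges of each run.
    `stripe` is a list of 'dirty' / 'clean' / 'unloaded' block states;
    returns lists of block indexes destaged together."""
    runs, current = [], []
    for i, state in enumerate(stripe + ["unloaded"]):  # sentinel closes last run
        if state == "unloaded":
            while current and stripe[current[0]] == "clean":
                current.pop(0)                         # trim leading clean blocks
            while current and stripe[current[-1]] == "clean":
                current.pop()                          # trim trailing clean blocks
            if current:
                runs.append(current)
            current = []
        else:
            current.append(i)                          # dirty or clean joins the run
    return runs

# dirty-clean-dirty is destaged as one block; the unloaded block splits runs
print(destage_runs(["dirty", "clean", "dirty", "unloaded", "clean", "dirty"]))
# -> [[0, 1, 2], [5]]
```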
1. BIOS
The BIOS starts the other MP cores after a ROM boot. Subsequently, the BIOS expands the OS loader from the flash memory into the local memory, and the OS loader is executed.
2. OS loader
The OS loader performs the minimum necessary initializations, tests the hardware resources, then loads the Real Time OS modules into the local memory, and the Real Time OS is executed.
4. DKC task
When the DKC task is created, it executes initialization routines, which initialize most of the environment that the DKC task uses. When the environment is established so that the DKC task can start scanning, the DKC task notifies the Maintenance PC of a power event log. Subsequently, the DKC task turns on the power for the physical Drives and, when the logical Drives become ready, notifies the host processor of an NRTR.
Power On
BIOS
• Start MP core
• Load OS loader
OS loader
• MP register initialization
• CUDG for BSP
• CUDG for each MP core
• Load Real Time OS
DKC task
• CUDG
• Initialize LM/CM
• FCDG
• Send Power event log
• Start up physical Drives
SCAN
The hardware turns off main power when power-off grants for all processors are presented.
[Figure: power-off sequence between Storage Navigator and the MP — PS-off detected, PS-off granted, DKC PS off]
2.11.1 Data Check Using LA (Logical Address) (LA Check) (Common to SAS Drives and SSD)
When data is transferred, the LA value of the target BLK (LA expectation value) and the LA value of the actually transferred data (read LA value) are compared to guarantee the data. This data guarantee is called the LA check.
With the LA check, it is possible to check whether data is read from the correct BLK location.
Write processing:
1. Receive a Write requirement from the Host.
2. The CHB stores the data on Cache and, at the same time, adds an LA value, which is a check code, to each BLK. (The LA value is calculated based on the logical address of each BLK.)
3. The DKB stores the data on the HDD.
Read processing:
1. The DKB calculates the LA expectation value based on the logical address of the BLK to read.
2. The read from the HDD is performed.
3. Check whether the LA expectation value and the LA value of the read data are consistent. (When the LBA to read is wrong, the LA values are inconsistent, and the error can be detected. In such a case, a correction read is performed to restore the data.)
4. The CHB transfers the data to the Host after removing the LA field.
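A minimal sketch of the LA check follows. The FakeHDD class, la_value function, and 32-bit check-code width are illustrative assumptions, not the actual on-disk format:

```python
class FakeHDD:
    """Stand-in for the drive: stores (data, LA value) per LBA."""
    def __init__(self):
        self.blocks = {}
    def write(self, lba, data, la):
        self.blocks[lba] = (data, la)
    def read(self, lba):
        return self.blocks[lba]

def la_value(lba: int) -> int:
    """Illustrative check code derived from the block's logical address."""
    return lba & 0xFFFFFFFF

def write_block(hdd: FakeHDD, lba: int, data: bytes) -> None:
    # CHB side: attach the LA value (check code) to the BLK before storing.
    hdd.write(lba, data, la_value(lba))

def read_block(hdd: FakeHDD, lba: int) -> bytes:
    # DKB side: compare the LA expectation value with the LA value read back.
    data, stored_la = hdd.read(lba)
    if stored_la != la_value(lba):
        raise IOError("LA mismatch: wrong BLK read; perform a correction read")
    return data

hdd = FakeHDD()
write_block(hdd, 100, b"payload")
assert read_block(hdd, 100) == b"payload"   # LA values consistent -> data guaranteed
```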
In the following case, however, creation of an encryption key is inhibited to avoid data corruption.
• Due to a failure in the Storage System, the Storage System does not have any encryption key but has a RAID group in which encryption is set.
In this case, restore the backed-up encryption key.
• Primary backup
The encryption key created on SM is backed up in the Cache Flash Memory in the Storage System.
The encryption key is automatically backed up within the Storage System when it is created or deleted, or when its status changes.
• Secondary backup
The encryption key created on SM is backed up on the management client (the client PC used for Storage Navigator or the REST API) or on the key management server of the user.
The secondary backup is performed from Storage Navigator or the REST API at the direction of the security administrator.
NOTE:
• Encryption can be set and released only when all volumes that belong to the RAID group are blocked, or when there is no volume in the RAID group.
When the RAID group contains at least one volume that is not blocked, you cannot set or release encryption.
• When you switch the encryption setting, you need to perform the LDEV format again. Therefore, set encryption before formatting the entire RAID group, for example when installing RAID groups.
Figure 2-62 Overview of Data Read Processing (Requirement for read data B)
1. Normal time
[Figure: parity groups with data blocks (A through I) and parities (P, A, C), (P, D, F), (P, G, L) striped across the Drives; read data B is staged into the duplexed Cache and returned to the host]
1. Dynamic sparing
This system keeps track of the number of failures that occurred on each Drive during normal read or write processing. If the number of failures on a certain Drive exceeds a predetermined value, the system considers that the Drive is likely to cause unrecoverable failures and automatically copies the data from that Drive to a spare Disk. This function is called dynamic sparing. In the RAID1 method, dynamic sparing works in the same way as in RAID5.
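A minimal sketch of the trigger logic (the threshold value and all names are illustrative; the actual predetermined value is internal to the system):

```python
FAILURE_THRESHOLD = 64                 # illustrative "predetermined value"
failure_count: dict[str, int] = {}     # per-drive failure counters

def start_dynamic_sparing(drive: str) -> None:
    print(f"dynamic sparing: copying data from {drive} to a spare Disk")

def record_failure(drive: str) -> None:
    """Count failures seen during normal read/write processing; once a
    Drive exceeds the threshold, copy its data to a spare Disk."""
    failure_count[drive] = failure_count.get(drive, 0) + 1
    if failure_count[drive] == FAILURE_THRESHOLD + 1:   # trigger exactly once
        start_dynamic_sparing(drive)
```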
2. Correction copy
When this system cannot read or write data from or to a Drive due to a failure occurring on that Drive, it regenerates the original data for that Drive using the data from the other Drives and the parity data, and copies it onto a spare Disk.
• In the RAID1 method, this system copies the data from the other Drive to a spare Disk.
• In the case of RAID6, the correction copy can be made to up to two Disk Drives in a parity group.
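For RAID5, the regeneration rule is the XOR of the corresponding blocks on the surviving Drives; a minimal sketch (names illustrative):

```python
from functools import reduce

def regenerate_block(surviving: list[bytes]) -> bytes:
    """Rebuild the failed Drive's block from the corresponding blocks of
    the remaining data Drives plus the parity block (RAID5 XOR rule)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*surviving))

# usage: with parity = d0 XOR d1 XOR d2, any lost block is the XOR of the rest
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0e"
parity = regenerate_block([d0, d1, d2])
assert regenerate_block([d0, d2, parity]) == d1   # d1 recovered
```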
2.14 Data Guarantee at the Time of a Power Failure due to Power Outage and Others
If a power failure due to a power outage and others occurs, refer to 5. Battery in 4.6.4 Hardware Component.
[Figure: Capacity Saving overview — hosts write data A, B, C to LDEVs in the Storage System. With the LDEV setting Capacity Saving: Compression, the written data is compressed before being stored. With the LDEV setting Capacity Saving: Deduplication and Compression, duplicated data is deleted as well: the information required for retrieving duplicate data is stored in the deduplication system data volume (fingerprint), and the duplicate source data is stored in the deduplication system data volume (data store). Legend: written data; deleted duplicate data; data flow]
When the Capacity Saving function is enabled, pool capacity is consumed because the entire capacity of the metadata and garbage data is stored. The capacity consumed is equivalent to a physical capacity of about 10% of the LDEV capacity processed by Capacity Saving. The pool capacity is consumed dynamically according to the usage of the Capacity Saving process. When the amount of data written from the host increases, the consumed capacity might temporarily exceed 10% of the pool capacity. When the amount of data writes decreases, the used capacity returns to about 10% of the pool capacity due to the garbage collection operation.
2.15.2.1 Compression
Compression is a function that converts data to a smaller size by encoding it, without losing any information. LZ4 is used as the data compression algorithm. Set this function for each virtual volume of Dynamic Provisioning.
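For illustration, the lossless behavior of LZ4 can be reproduced as follows; a minimal sketch assuming the third-party Python lz4 package (not the Storage System's own implementation):

```python
# pip install lz4
import lz4.frame

original = b"ABCABCABC" * 1000              # repetitive data compresses well
compressed = lz4.frame.compress(original)
restored = lz4.frame.decompress(compressed)

assert restored == original                 # lossless: no information is reduced
print(len(original), "->", len(compressed), "bytes")
```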
2.15.2.2 Deduplication
Deduplication is a function that, when the same data is written at different addresses, retains the data at a single address and deletes the duplicated data at the other addresses. Deduplication is set for each of the virtual volumes of Dynamic Provisioning. When Deduplication is enabled, duplicated data among the virtual volumes associated with a pool is deleted. When virtual volumes with Deduplication enabled are created, system data volumes for Deduplication (fingerprint) and system data volumes for Deduplication (data store) are created. The system data volume for Deduplication (fingerprint) stores a table used to search for duplicated data among the data stored in the pool. Four system data volumes for Deduplication (fingerprint) are created per pool. The system data volume for Deduplication (data store) stores the original data of the duplicated data. Four system data volumes for Deduplication (data store) are created per pool.
When the settings of [Deduplication and Compression] of all virtual volumes are changed to [Disable], the system data volumes for Deduplication are automatically deleted.
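A minimal sketch of the fingerprint idea (a hash table keyed by a digest stands in for the fingerprint volume, a list stands in for the data store; SHA-256 and all names are illustrative assumptions):

```python
import hashlib

fingerprint_table: dict[bytes, int] = {}   # digest -> address in the data store
data_store: list[bytes] = []

def write_dedup(block: bytes) -> int:
    """Deduplicated write (sketch): look the block up by fingerprint;
    store it only if unseen, otherwise reference the existing copy."""
    digest = hashlib.sha256(block).digest()
    if digest not in fingerprint_table:
        data_store.append(block)
        fingerprint_table[digest] = len(data_store) - 1
    return fingerprint_table[digest]       # address of the single retained copy

a1 = write_dedup(b"same data")
a2 = write_dedup(b"same data")
assert a1 == a2 and len(data_store) == 1   # duplicate deleted, one copy kept
```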
Tell the customer that user data might remain in the drive.
When the customer has the DRO agreement, hand the faulty drive over to the customer and recommend destroying it physically or by similar means.
When the customer does not have the DRO agreement, bring the faulty drive back with you after making the customer understand that user data might remain in the drive.
(If the customer does not allow you to take the drive away, explain that the customer needs to use a data-erasing service or conclude the DRO agreement.)
3 4e8xxx End with warning: Data erase ends with a warning because reading some areas of the drive is unsuccessful while writing the erase pattern data is successful (for flash drives, excluding the over-provisioning space).
Tell the customer that writing the erase pattern data to the entire drive is completed but data in some areas cannot be read. Then, ask the customer whether he or she wants you to take the drive away.
For how to check the number of areas (LBAs) where data cannot be read, see 2.16.3.2 Checking Details of End with Warning.
*1: The SIM indicating drive port blockade (see (SIMRC02-110)) might also be reported when the SIM indicating the end of Media Sanitization is reported. In such a case, prioritize the SIM indicating the end of Media Sanitization.
Check the SIMs indicating end with warning and the related SSBs to identify the factors of the end with warning, as follows:
[1] In the Maintenance Utility window, select the [Alerts] tab and click the alert ID on the row of the SIM indicating end with warning (reference code).
[2] The alert details are displayed. Check the concerned alert#.
[4] In the [SSB] tab, select the alert ID of the concerned alert# checked in the previous steps.
Table 2-55 Internal Information of SSB Related to SIM indicating End with Warning
Field Details
(a) Total number of LBAs on the target drive for data erase
(Field size: 6 bytes)
(a) = (b) + (c)
(b) The number of LBAs for which data erase is complete on the target drive for data erase
(Field size: 6 bytes)
(c) The number of LBAs for which the write by using the erase pattern data is successful and the read is
unsuccessful on the target drive for data erase
(Field size: 6 bytes)
(d) DB# and RDEV# of the target drive for data erase
(Lower 1 byte: DB#, upper 1 byte: RDEV#)
No. | Item 1 | Item 2 | Maintenance work is possible or not | Media Sanitization action affected by maintenance work
1 Replacement CTL/CM Possible None
2 LANB Possible None
3 CHB Possible None
4 Power supply Possible None
5 Maintenance PC Possible None
6 ENC/SAS Cable Possible (*2) None
7 DKB Possible None
8 PDEV Possible (*2) Media Sanitization ends abnormally if you replace a drive while it is in process.
9 CFM Possible None
10 BKM/BKMF Possible Media Sanitization ends abnormally.
11 FAN Possible None
12 Battery Possible None
13 SFP Possible None
14 Addition/ CM Possible None
15 Removal SM Not possible (*3) None
16 CHB Possible (*2) None
17 Maintenance PC Possible None
18 DKB Possible (*2) None
19 PDEV Possible None
20 CFM Possible None
21 Parity Group Addition: Possible None
Removal: Possible (*3)
22 Spare drive Possible None
23 Drive Box (DB) Addition: Possible (*2) None
Removal: Possible (*3)
*1: The operation is suppressed with a message displayed. However, you can perform the operation from “Forcible task without safety check”.
*2: The operation is suppressed with a message displayed when the copy back mode is disabled. However, you can retry the operation by checking the checkbox for “Forcibly run without safety checks”.
*3: The operation is suppressed with a message displayed when the copy back mode is disabled.
Perform either (1) or (2).
(1) If you want to prioritize the maintenance work, restore the blocked drive for which Media
Sanitization is being executed, and then retry the operation. However, if you restore the
blocked drive, Media Sanitization ends abnormally and cannot be executed again.
(2) If you want to prioritize Media Sanitization, wait until Media Sanitization ends, and then
perform the maintenance work.
Table 3-1 Maximum Standby Electricity per Controller Chassis and Drive Box
Controller Chassis/Drive Box etc Maximum Standby Electricity [VA]
DBS (SFF Drive Box) 200
DBL (LFF Drive Box) 140
DB60 (3.5-inch Drive Box) 560
DBF (Flash Module Drive Box) 410
DBN (NVMe Drive Box) 500
CBL (Controller Chassis) 230
CBSS (Controller Chassis) 370
CBSL (Controller Chassis) 310
CBXSS (Controller Chassis) 230
CBXSL (Controller Chassis) 170
CHBB (Channel Board Box) 180
When the Power OFF Event Log cannot be confirmed, suspend the operation and request the customer to restart the Storage System so that you can confirm that the PS is normally turned off.
NOTICE: Request the customer to perform the following operations thoroughly if the distribution board breaker or the PDU cannot be kept in the on-status after the power of the Storage System is turned off.
3.2 Precautions When Installing Flash Drives and Flash Module Drives
For precautions when installing Flash Drives and Flash Module Drives, refer to INSTALLATION SECTION 1.3.4 Notes for Installing Flash Module Drive Boxes.
Table 3-2 Correlation List of Storage System Statuses and Maintenance Available Parts
Maintenance operation (Replacement) | Dynamic Sparing | Correction Copy | Copy Back | Correction Access | Copied to spare Disk | LDEV Format
CTL/CM | Depending on firmware version (*19) | Depending on firmware version (*19) | Depending on firmware version (*19) | Possible (*1) | Possible (*8) | Impossible (*6)
LANB | Depending on firmware version (*19) | Depending on firmware version (*19) | Depending on firmware version (*19) | Possible (*1) | Possible (*8) | Impossible (*6)
CHB | Possible | Possible | Possible | Possible (*1) | Possible (*8) | Impossible (*6)
Power supply | Possible | Possible | Possible | Possible | Possible | Possible
SVP | Possible | Possible | Possible | Possible | Possible | Possible
ENC/SAS Cable | Possible | Possible | Possible | Possible (*1) | Possible (*8) | Impossible (*6)
DKB | Possible | Possible | Possible | Possible (*1) | Possible (*8) | Impossible (*6)
PDEV | Possible (*15) | Possible (*15) | Possible (*15) | Possible (*1) | Possible (*8) | Possible (*4)
CFM | Possible | Possible | Possible | Possible | Possible | Possible
BKM/BKMF | Possible | Possible | Possible | Possible (*1) | Possible | Possible
FAN | Possible | Possible | Possible | Possible | Possible | Possible
Battery | Possible | Possible | Possible | Possible (*1) | Possible | Possible
SFP | Possible | Possible | Possible | Possible | Possible | Possible
PCIe Cable | Possible | Possible | Possible | Possible (*1) | Possible (*8) | Impossible (*6)
PCIe channel Board | Possible | Possible | Possible | Possible (*1) | Possible (*8) | Impossible (*6)
Channel Board Box | Possible | Possible | Possible | Possible | Possible | Possible
Switch Package | Depending on firmware version (*19) | Depending on firmware version (*19) | Depending on firmware version (*19) | Possible (*1) | Possible (*8) | Impossible (*6)
PCIe Cable Connection Package | Possible | Possible | Possible | Possible (*1) | Possible (*8) | Impossible (*6)
(To be continued)
*1: It is prevented with the message. However, it is possible to perform it by checking the checkbox of “Perform forcibly without safety check”.
*2: It is impossible to remove a RAID group in which data is migrated to a spare Disk and the spare
Disk.
*3: (Blank)
*4: It is impossible when high-speed LDEV Format is running. When low-speed LDEV Format is
running, it is possible to replace PDEV in a RAID group in which LDEV Format is not running.
*5: It is possible to perform LDEV maintenance for LDEV defined in a RAID group in which
Dynamic Sparing, Correction Copy, Copy Back or Correction Access is not running.
*6: It is prevented with message [30762-208158]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*7: It is prevented with message [30762-208159]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*8: It is prevented with message [33361-203503:33462-200046]. However, a different message might
be displayed depending on the occurrence timing of the state regarded as a prevention condition.
*9: It is prevented with the message. However, it is possible to perform it from “Forcible task without safety check”.
*10: It is prevented with message [03005-002095]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*11: It is prevented with message [03005-202002]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*12: It is prevented with message [03005-202001]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*13: It is prevented with message [03005-202005]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*14: It is prevented with message [03005-002011]. However, a different message might be displayed depending on the occurrence timing of the state regarded as a prevention condition.
*15: It is prevented with message [30762-208159].
• When the RAID group to which the maintenance target PDEV belongs and the RAID group
whose Dynamic Sparing / Correction Copy / Copy Back is operating are not identical, it is
possible to perform it by checking the checkbox of “Perform forcibly without safety check”.
• When the RAID group to which the maintenance target PDEV belongs and the RAID group
whose Dynamic Sparing / Correction Copy / Copy Back is operating are identical and the RAID
level is RAID 6, it is possible to perform it by checking the checkbox of “Perform forcibly
without safety check” depending on the status of the PDEV other than the maintenance target.
However, a different message might be displayed depending on the occurrence timing of the state
regarded as a prevention condition.
*16: • For the firmware version earlier than 88-02-04-x0/xx, increasing or decreasing SM is suppressed
with message [30762-208180]. Resolve the blockade, then retry the operation.
• For the firmware version earlier than 88-02-04-x0/xx, increasing or decreasing CM is suppressed
with message [30762-208180]. To perform the operation, enable “Perform forcibly without safety
check” by checking its checkbox.
*17: For the firmware version earlier than 88-02-04-x0/xx, adding or removing a CHB/DKB is suppressed with message [30762-208180]. To perform the operation, enable “Perform forcibly without safety check” by checking its checkbox.
*18: For the firmware version earlier than 88-02-04-x0/xx, removing a PDEV is suppressed with message [30762-208180]. To perform the operation, enable “Perform forcibly without safety check” by checking its checkbox. Adding a PDEV is not suppressed.
*19: • For the firmware version 88-03-29-x0/xx or later
The maintenance operation is possible.
• For firmware versions other than the above
The maintenance operation is prevented with message [30762-208159]. However, a different
message might be displayed depending on the occurrence timing of the state regarded as a
prevention condition.
4. Appendixes
4.1 DB Number - C/R Number Matrix
In the case of VSP G900, VSP F900, VSP G700, VSP F700, VSP G370, VSP F370, VSP G350, VSP F350,
VSP G130
For the 12-bit DB#/RDEV# indicated in the PLC (Parts Location Code) of ACC and the SIM-RC, the relation between the contents of the bits and the HDD location# is shown below. The correspondence between DB# and CDEV# for each storage system model is also shown.
• DB#/RDEV# format
The 12-bit value consists of three hexadecimal digits X, Y, Z (4 bits each): xxxx yyyy zzzz. The upper 6 bits are the DB# and the lower 6 bits are the RDEV#.
Example: In the case of XYZ = 5A5 (Hex) (Hex: Hexadecimal, Dec: Decimal)
5A5 (Hex) = 0101 1010 0101 (binary)
DB# = upper 6 bits (010110) = 16 (Hex) = 22 (Dec)
RDEV# = lower 6 bits (100101) = 25 (Hex) = 37 (Dec)
The relation between DB#, RDEV#, and HDD location# is shown below.
• HDD location# format: HDDxx-yy, where xx is the DB# (Dec) and yy is the RDEV# (Dec).
The following is the relation between 12-bit DB#/RDEV#, DB#, RDEV# (R#), and HDD location# for
DB-00. For DB-01 or later, the relation between DB#/RDEV#, DB#, RDEV#, and HDD location# is the
same as that for DB-00.
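A minimal sketch of the decoding (the function name is illustrative):

```python
def decode_db_rdev(xyz: int) -> tuple[int, int]:
    """Split a 12-bit DB#/RDEV# value into DB# (upper 6 bits) and
    RDEV# (lower 6 bits), per the format shown above."""
    db = (xyz >> 6) & 0x3F
    rdev = xyz & 0x3F
    return db, rdev

db, rdev = decode_db_rdev(0x5A5)
print(db, rdev)   # 22 37 -> HDD location HDD22-37
```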
• VSP G370, VSP F370, VSP G350, VSP F350, VSP G130
DB# (Dec)    C# (Hex)    DB# (Dec)    C# (Hex)
DB-00 (*1)   00          DB-08        17
DB-01        10          DB-09        18
DB-02        11          DB-10        19
DB-03        12          DB-11        1A
DB-04        13
DB-05        14
DB-06        15
DB-07        16
*1: DB-00 is contained in the DKC.
The relation between DB#, HDD#, and HDD location# is shown below.
• HDD location# format: HDDxx-yy, where xx is the DB# (Decimal) (*1) and yy is the HDD# (Decimal) (*2).
Example: 75A5 (Hex) = 0111 0101 1010 0101 (binary); A = AD (Hexadecimal) = 173 (Decimal), B = 05 (Hexadecimal) = 5 (Decimal)
4.2 Comparison of Pair Status on Storage Navigator, Command Control Interface (CCI)
Table 4-4 Comparison of Pair Status on Storage Navigator, CCI
No. Event Status on CCI Status on Storage Navigator
1 Simplex Volume P-VOL: SMPL P-VOL: SMPL
S-VOL: SMPL S-VOL: SMPL
2 Copying LU Volume P-VOL: PDUB P-VOL: PDUB
Partly completed (SYNC only) S-VOL: PDUB S-VOL: PDUB
3 Copying Volume P-VOL: COPY P-VOL: COPY
S-VOL: COPY S-VOL: COPY
4 Pair volume P-VOL: PAIR P-VOL: PAIR
S-VOL: PAIR S-VOL: PAIR
5 Pairsplit operation to P-VOL P-VOL: PSUS P-VOL: PSUS (S-VOL by operator)
S-VOL: SSUS S-VOL: PSUS (S-VOL by operator)/
SSUS
6 Pairsplit operation to S-VOL P-VOL: PSUS P-VOL: PSUS (S-VOL by operator)
S-VOL: PSUS S-VOL: PSUS (S-VOL by operator)
7 Pairsplit -P operation (*1) P-VOL: PSUS P-VOL: PSUS (P-VOL by operator)
(P-VOL failure, SYNC only) S-VOL: SSUS S-VOL: PSUS (by MCU)/SSUS
8 Pairsplit -R operation (*1) P-VOL: PSUS P-VOL: PSUS(Delete pair to RCU)
S-VOL: SMPL S-VOL: SMPL
9 P-VOL Suspend (failure) P-VOL: PSUE P-VOL: PSUE (S-VOL failure)
S-VOL: SSUS S-VOL: PSUE (S-VOL failure)/
SSUS
10 S-VOL Suspend (failure) P-VOL: PSUE P-VOL: PSUE (S-VOL failure)
S-VOL: PSUE S-VOL: PSUE (S-VOL failure)
11 PS ON failure P-VOL: PSUE P-VOL: PSUE (MCU IMPL)
S-VOL: — S-VOL: —
12 Copy failure (P-VOL failure) P-VOL: PSUE P-VOL: PSUE (Initial copy failed)
S-VOL: SSUS S-VOL: PSUE (Initial copy failed)/
SSUS
13 Copy failure (S-VOL failure) P-VOL: PSUE P-VOL: PSUE (Initial copy failed)
S-VOL: PSUE S-VOL: PSUE (Initial copy failed)
14 RCU accepted the notification of P-VOL: — P-VOL: —
MCU’s P/S-OFF S-VOL: SSUS S-VOL: PSUE (MCU P/S OFF)/
SSUS
15 MCU detected the failure of RCU P-VOL: PSUE P-VOL: PSUS (by RCU)/PSUE
S-VOL: PSUE S-VOL: PSUE (S-VOL failure)
Table 4-7 Correspondence Table of Cluster # and MP # of VSP G130 and a Variety of
Numbering
Cluster Location | CTL Name | MP# (Hardware Part) | MP# (Internal) | MPU# | MP# in MPU | MPPK# | MP# in MPPK
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
Cluster-2 CTL2 0x02 0x04 0x01 0x00 0x01 0x00
0x03 0x05 0x01 0x01
Table 4-8 Correspondence Table of Cluster # and MP # of VSP G350, VSP F350 and a
Variety of Numbering
Cluster Location | CTL Name | MP# (Hardware Part) | MP# (Internal) | MPU# | MP# in MPU | MPPK# | MP# in MPPK
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
Cluster-2 CTL2 0x06 0x08 0x01 0x00 0x01 0x00
0x07 0x09 0x01 0x01
0x08 0x0A 0x02 0x02
0x09 0x0B 0x03 0x03
0x0A 0x0C 0x04 0x04
0x0B 0x0D 0x05 0x05
Table 4-9 Correspondence Table of Cluster # and MP # of VSP G370, VSP F370 and a
Variety of Numbering
Cluster Location | CTL Name | MP# (Hardware Part) | MP# (Internal) | MPU# | MP# in MPU | MPPK# | MP# in MPPK
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
0x06 0x06 0x06 0x06
0x07 0x07 0x07 0x07
0x08 0x08 0x08 0x08
0x09 0x09 0x09 0x09
Cluster-2 CTL2 0x0A 0x20 0x01 0x00 0x01 0x00
0x0B 0x21 0x01 0x01
0x0C 0x22 0x02 0x02
0x0D 0x23 0x03 0x03
0x0E 0x24 0x04 0x04
0x0F 0x25 0x05 0x05
0x10 0x26 0x06 0x06
0x11 0x27 0x07 0x07
0x12 0x28 0x08 0x08
0x13 0x29 0x09 0x09
Table 4-10 Correspondence Table of Cluster # and MP # of VSP G700, VSP F700 and a
Variety of Numbering
Cluster Location | CTL Name | MP# (Hardware Part) | MP# (Internal) | MPU# | MP# in MPU | MPPK# | MP# in MPPK
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
0x06 0x06 0x06 0x06
0x07 0x07 0x07 0x07
0x08 0x08 0x08 0x08
0x09 0x09 0x09 0x09
0x0A 0x0A 0x0A 0x0A
0x0B 0x0B 0x0B 0x0B
Cluster-2 CTL2 0x0C 0x20 0x01 0x00 0x01 0x00
0x0D 0x21 0x01 0x01
0x0E 0x22 0x02 0x02
0x0F 0x23 0x03 0x03
0x10 0x24 0x04 0x04
0x11 0x25 0x05 0x05
0x12 0x26 0x06 0x06
0x13 0x27 0x07 0x07
0x14 0x28 0x08 0x08
0x15 0x29 0x09 0x09
0x16 0x2A 0x0A 0x0A
0x17 0x2B 0x0B 0x0B
Table 4-11 Correspondence Table of Cluster # and MP # of VSP G900, VSP F900 and a
Variety of Numbering
Cluster Location | CTL Name | MP# (Hardware Part) | MP# (Internal) | MPU# | MP# in MPU | MPPK# | MP# in MPPK
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
0x06 0x06 0x06 0x06
0x07 0x07 0x07 0x07
0x08 0x08 0x08 0x08
0x09 0x09 0x09 0x09
0x0A 0x0A 0x0A 0x0A
0x0B 0x0B 0x0B 0x0B
0x0C 0x0C 0x0C 0x0C
0x0D 0x0D 0x0D 0x0D
0x0E 0x0E 0x0E 0x0E
0x0F 0x0F 0x0F 0x0F
0x10 0x10 0x10 0x10
0x11 0x11 0x11 0x11
0x12 0x12 0x12 0x12
0x13 0x13 0x13 0x13
Cluster-2 CTL2 0x14 0x20 0x01 0x00 0x01 0x00
0x15 0x21 0x01 0x01
0x16 0x22 0x02 0x02
0x17 0x23 0x03 0x03
0x18 0x24 0x04 0x04
0x19 0x25 0x05 0x05
0x1A 0x26 0x06 0x06
0x1B 0x27 0x07 0x07
0x1C 0x28 0x08 0x08
0x1D 0x29 0x09 0x09
0x1E 0x2A 0x0A 0x0A
0x1F 0x2B 0x0B 0x0B
0x20 0x2C 0x0C 0x0C
0x21 0x2D 0x0D 0x0D
0x22 0x2E 0x0E 0x0E
0x23 0x2F 0x0F 0x0F
0x24 0x30 0x10 0x10
0x25 0x31 0x11 0x11
0x26 0x32 0x12 0x12
0x27 0x33 0x13 0x13
Table 4-12 Correspondence Table of Cluster # and MP # of VSP E990 and a Variety of
Numbering
Cluster Location | CTL Name | MP# (Hardware Part) | MP# (Internal) | MPU# | MP# in MPU | MPPK# | MP# in MPPK
Cluster-1 CTL1 0x00 0x00 0x00 0x00 0x00 0x00
0x01 0x01 0x01 0x01
0x02 0x02 0x02 0x02
0x03 0x03 0x03 0x03
0x04 0x04 0x04 0x04
0x05 0x05 0x05 0x05
0x06 0x06 0x06 0x06
0x07 0x07 0x07 0x07
0x08 0x08 0x08 0x08
0x09 0x09 0x09 0x09
0x0A 0x0A 0x0A 0x0A
0x0B 0x0B 0x0B 0x0B
0x0C 0x0C 0x0C 0x0C
0x0D 0x0D 0x0D 0x0D
0x0E 0x0E 0x0E 0x0E
0x0F 0x0F 0x0F 0x0F
0x10 0x10 0x10 0x10
0x11 0x11 0x11 0x11
0x12 0x12 0x12 0x12
0x13 0x13 0x13 0x13
0x14 0x14 0x14 0x14
0x15 0x15 0x15 0x15
0x16 0x16 0x16 0x16
0x17 0x17 0x17 0x17
0x18 0x18 0x18 0x18
0x19 0x19 0x19 0x19
0x1A 0x1A 0x1A 0x1A
0x1B 0x1B 0x1B 0x1B
(To be continued)
[Figure: connection diagram of the DKC — CTL1 and CTL2, each with MPU#0/#1, DIMM00/DIMM01, CFM-1/CFM-2, BKM-1/BKM-2, and DKB-1C/DKB-2C (ENC), connected by the I Path]
• VSP G130
[Figure: CTL1 and CTL2, each with CHB-1A/CHB-2A, CFM-1/CFM-2, BAT-1/BAT-2, and DKB-1C/DKB-2C (ENC), connected by the I Path]
[Figure: CTL1 and CTL2, each with MPU#0/#1 and DIMM00-DIMM03/DIMM10-DIMM13, connected by I Path#0 and I Path#1 (two configurations shown)]
4.5.2 Glossary
• iSCSI (Internet Small Computer Systems Interface)
A technology to transmit and receive block data by SCSI over an IP network.
• VLAN (Virtual LAN)
A technology to create a virtual LAN segment.
• iSCSI Digest
The iSCSI Header Digest and iSCSI Data Digest check the data consistency end to end.
• iSCSI Name
An iSCSI node has an iSCSI name consisting of a maximum of 223 characters for node identification.
The iSCSI configuration provides two ports per CHB; the ports 5x, 6x, 7x, and 8x installed for the Fibre Channel port are not present.
*1: The disk drive type name displayed in the MPC window might differ from the one on the drive. In such a case, refer to INSTALLATION SECTION 1.2.2 Disk Drive Model.
NOTE: • As for RAID1, the concatenation of two parity groups is possible (8 HDDs). In this case, the number of volumes required is doubled.
Two-concatenation and four-concatenation (16 HDDs and 32 HDDs) of RAID groups are possible for RAID5 (7D+1P). In this case, the number of volumes becomes two or four times as large.
When OPEN-V is set in a parity group of the above-mentioned concatenation configuration, the maximum volume size becomes the parity cycle size of the source (2D+2D) or (7D+1P); it does not become two or four times as large.
• The Storage System capacity differs from the one shown on the Maintenance PC because the capacity is calculated as 1 GB = 1,000 MB.
*1: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G370 (CBSS2)).
*2: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G370 (CBSL2)).
*1: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G350 (CBSS1)).
*2: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G350 (CBSL1)).
*1: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G130 (CBXSS)).
*2: The number of parity groups includes the Disk Drives installed in the Controller Chassis (VSP
G130 (CBXSL)).
Storage capacity (GB/volume) by RAID Level
Drive Box | Drive type | Item | 7D+1P (RAID5) | 6D+2P (RAID6) | 12D+2P (RAID6) | 14D+2P (RAID6)
DBN | 1R9RVM | PG | 1-11 | 1-11 | 1-6 | 1-5
DBN | 1R9RVM | Capacity | 13,233.2 - 145,565.2 | 11,342.7 - 124,769.7 | 22,685.5 - 132,872.2 | 26,466.4 - 132,332.0
DBN | 3R8RVM | PG | 1-11 | 1-11 | 1-6 | 1-5
DBN | 3R8RVM | Capacity | 26,466.4 - 291,130.4 | 22,685.5 - 249,540.5 | 45,371.0 - 265,744.4 | 52,932.9 - 264,664.5
DBN | 7R6RVM | PG | 1-11 | 1-11 | 1-6 | 1-5
DBN | 7R6RVM | Capacity | 52,932.9 - 582,261.9 | 45,371.0 - 499,081.0 | 90,742.1 - 531,489.4 | 105,865.8 - 529,329.0
DBN | 15RRVM | PG | 1-11 | 1-11 | 1-6 | 1-5
DBN | 15RRVM | Capacity | 105,339.4 - 1,158,733.4 | 90,290.9 - 993,199.9 | 180,581.8 - 1,057,693.4 | 210,678.8 - 1,053,394.0
Table 4-28 The number of Drives and blocks for each RAID level
RAID Level | Drive Type | Capacity (MB) | Capacity (Logical Blocks)
RAID1 2D+2D DKR5x-J600SS/DKS5x-J600SS 1,099,383 2,251,536,384
DKR5x-J1R2SS/DKS5x-J1R2SS 2,198,767 4,503,073,792
DKS5x-J2R4SS 4,397,534 9,006,148,608
DKS2x-H6R0SS/DKR2x-H6R0SS 11,204,177 22,946,153,472
DKR2x-H10RSS/DKS2x-H10RSS 18,673,627 38,243,589,120
DKS2x-H14RSS 26,143,079 53,541,025,792
SLB5x-M480SS 901,442 1,846,153,216
SLB5x-M960SS 1,802,884 3,692,307,456
SLB5x-M1R9SS/SLB5x-M1T9SS/ 3,605,769 7,384,614,912
SLM5x-M1T9SS
SLB5x-M3R8SS/SLR5x-M3R8SS/ 7,211,538 14,769,230,848
SLM5x-M3R8SS
SLB5x-M7R6SS/SLR5x-M7R6SS/ 14,423,077 29,538,461,696
SLM5x-M7R6SS
SLB5x-M15RSS/SLM5x-M15RSS 28,702,715 58,783,161,344
SLM5x-M30RSS 57,403,387 117,562,137,600
NFHAx-Q3R2SS 6,710,884 13,743,889,408
NFHAx-Q6R4SS 13,421,772 27,487,788,032
NFHAx-Q13RSS 26,843,543 54,975,577,088
SNB5x-R1R9NC/SNR5x-R1R9NC/ 3,605,769 7,384,615,424
SNM5x-R1R9NC
SNB5x-R3R8NC/SNR5x-R3R8NC/ 7,211,538 14,769,230,848
SNM5x-R3R8NC
SNB5x-R7R6NC/SNR5x-R7R6NC/ 14,423,077 29,538,461,696
SNM5x-R7R6NC
SNB5x-R15RNC/SNN5x-R15RNC/ 28,702,715 58,783,161,856
SNM5x-R15RNC
x: A, B, C, ...
(To be continued)
(Continued from preceding page)
Group | Op Code | Name of Command | Type | Supported | Remarks   (○: Supported, —: Not supported)
3 (80H-9FH) | 83H/00H | Extended Copy | CTL/SNS | ○ | —
3 | 83H/11H | Write Using Token | CTL/SNS | ○ | —
3 | 84H/03H | Receive Copy Result | CTL/SNS | ○ | —
3 | 84H/07H | Receive ROD Token Information | CTL/SNS | ○ | —
3 | 88H | Read (16) | RD/WR | ○ | —
3 | 89H | Compare and Write | RD/WR | ○ | —
3 | 8AH | Write (16) | RD/WR | ○ | —
3 | 8EH | Write And Verify (16) | RD/WR | ○ | Supported only Write.
3 | 8FH | Verify (16) | RD/WR | ○ | Nop
3 | 91H | Synchronized Cache (16) | CTL/SNS | ○ | Nop
3 | 93H | Write Same (16) | RD/WR | ○ | —
3 | 9EH/10H | Read Capacity (16) | CTL/SNS | ○ | —
3 | 9EH/12H | Get LBA Status | CTL/SNS | ○ | —
4 (A0H-BFH) | A0H | Report LUN | CTL/SNS | ○ | —
4 | A3H/05H | Report Device Identifier | CTL/SNS | ○ | —
4 | A3H/0AH | Report Target Port Groups | CTL/SNS | ○ | —
4 | A3H/0BH | Report Aliases | CTL/SNS | — | —
4 | A3H/0CH | Report Supported Operation Codes | CTL/SNS | — | —
4 | A3H/0DH | Report Supported Task Management Functions | CTL/SNS | — | —
4 | A3H/0EH | Report Priority | CTL/SNS | — | —
4 | A3H/0FH | Report Timestamp | CTL/SNS | — | —
4 | A4H/XXH | Maintenance OUT | CTL/SNS | — | —
4 | A4H/06H | Set Device Identifier | CTL/SNS | — | —
4 | A4H/0AH | Set Target Port Groups | CTL/SNS | ○ | —
4 | A4H/0BH | Change Aliases | CTL/SNS | — | —
4 | A4H/0EH | Set Priority | CTL/SNS | — | —
4 | A4H/0FH | Set Timestamp | CTL/SNS | — | —
4 | A8H | Read (12) | RD/WR | ○ | —
4 | AAH | Write (12) | RD/WR | ○ | —
4 | AEH | Write And Verify (12) | RD/WR | ○ | However, only the Write operation.
4 | AFH | Verify (12) | RD/WR | ○ | Nop
4 | B7H | Read Defect Data (12) | CTL/SNS | ○ | It always reports No defect.
5 (E0H-FFH) | E8H | Read With Skip Mask (IBM-unique) | CTL/SNS | — | —
5 | EAH | Write With Skip Mask (IBM-unique) | CTL/SNS | — | —
Drives can be installed in the Controller Chassis and the Drive Boxes.
The maximum number of installable drives is shown below.
• VSP G130 : 96 (CBXSS + DBS x 3)
• VSP G350 : 264 (CBSS + DB60 x 4)
• VSP G370 : 384 (CBSS + DB60 x 6)
• VSP G700 : 1,200 (DB60 x 20)
• VSP G900 : 1,440 (DB60 x 24)
• VSP F350 : 192 (CBSS + DBS x 7)
• VSP F370 : 288 (CBSS + DBS x 11)
• VSP F700 : 864 (DBS x 36)
• VSP F900 : 1,152 (DBS x 48)
• VSP E990 : 96 (DBN x 4)
The Dual Controller configuration is adopted in the controller part installed in the Controller Chassis.
The Channel I/F supports only open systems; Mainframe is not supported.
The Power Supply is single-phase AC 100 V/200 V for the VSP G130, G350, and G370 models, and single-phase AC 200 V for the VSP G700 and G900 models and the DB60.
For information about the service processor used with HDS VSP storage systems, refer to the Service Processor (SVP) Technical Reference (FE-94HM8036).
2. High performance
• DW850 supports three types of high-speed Disk Drives, at speeds of 7,200 min-1, 10,000 min-1, and 15,000 min-1.
• DW850 supports Flash Drives with ultra-high-speed response.
• DW850 supports the Flash Module Drive (FMD) with ultra-high-speed response and high capacity.
• High-speed data transfer between the DKB and HDDs at a rate of 12 Gbps is achieved with the SAS interface.
• DW850 uses an Intel processor with brand-new technology whose performance is as excellent as that of the enterprise device DKC810I.
3. Large Capacity
• DW850 supports Disk Drives with capacities of 600 GB, 1.2 TB, 2.4 TB, 6 TB, 10 TB, and 14 TB.
• DW850 supports Flash Drives with capacities of 480 GB, 960 GB, 1.9 TB, 3.8 TB, 7.6 TB, 15 TB, and 30 TB.
• DW850 supports the Flash Module Drive (FMD) with capacities of 3.5 TB, 7 TB, and 14 TB.
• DW850 supports the Flash Drive (NVMe SSD) with capacities of 1.9 TB, 3.8 TB, 7.6 TB, and 15 TB.
• DW850 controls up to 65,280 logical volumes and up to 1,440 Disk Drives, and provides a physical Disk capacity of approximately 14,098 TB per Storage System.
5. Connectivity
DW850 supports OSs for various UNIX servers and PC servers, so that it conforms to heterogeneous system environments in which those various OSs coexist.
The platforms that can be connected are shown in the following table.
6. High reliability
• DW850 supports RAID6 (6D+2P/12D+2P/14D+2P), RAID5 (3D+1P, 4D+1P, 6D+1P, 7D+1P) and RAID1 (2D+2D/4D+4D).
• Main components are implemented with a duplex or redundant configuration, so even when a single component failure occurs, the Storage System can continue operation.
• However, while a failure of the Controller Board containing the Cache Memory is being addressed, the Channel ports and the Drive ports of the cluster concerned are blocked.
7. Non-disruptive maintenance
• Main components can be added, removed, and replaced without shutting down the device while the Storage System is in operation.
However, when Cache Memory is added, the Channel ports and the Drive ports of the cluster concerned are blocked.
• The firmware can be upgraded without shutting down the Storage System.
• The Drive Box (DBS), Drive Box (DBL), Drive Box (DB60), and Drive Box (DBF) can be mixed in the Storage System.
• The number of installable Drives changes depending on the Storage System model and the Drive Boxes.
Controller Chassis height:
VSP G130, G350, G370 : 2U
VSP G700, G900, VSP E990 : 4U
[Figure: Drive Boxes 00-03, each with duplicated PDU/Power Supply Units (AC INPUT), ENCs, and HDDs, connected to the Controller Chassis (LANB, GCTL + GUM) through SAS paths (12 Gbps/port)]
[Figure: Drive Boxes 00-01, each with duplicated PDU/Power Supply Units (AC INPUT), ENCs, and HDDs, connected to the Controller Chassis (LANB, GCTL + GUM) through NVMe paths (8 Gbps/port)]
8. Drive Path
(1) When using 2.5-inch HDD (SFF)
DW850 controls 1,152 HDDs with eight paths.
Figure 4-8 Drive Path Connection Overview when using 2.5-inch Drives
[Figure: CBL connected by eight SAS paths to chains of Drive Boxes, 24 HDDs/DB]
Figure 4-9 Drive Path Connection Overview when using 3.5-inch Drives
[Figure: CBL connected by eight SAS paths to chains of Drive Boxes, 12 HDDs/DB]
Figure 4-10 Drive Path Connection Overview when using 3.5-inch Drives
[Figure: CBL connected to chains of Drive Boxes, 60 HDDs/DB]
NOTICE: Up to six DB60 can be installed in a rack. Up to five DB60 can be installed in a rack
when a DKC (H model) is installed there.
Install the DB60 at a height of 1,300 mm or less above the ground (at a range
between 2U and 26U).
Figure 4-11 Drive Path Connection Overview when using FMDs (DBF)
[Figure: CBL connected by eight SAS paths to chains of Drive Boxes, 12 FMDs/DB]
Figure 4-12 Drive Path Connection Overview when using Flash Drives (NVMe SSD) (DBN)
[Figure: CBL connected to four Drive Boxes (DBN), 24 SSDs/DB]
[Figure: two Controller Boards, each connected to the HDDs]
[Figure: two Controller Chassis configurations — each Controller Chassis contains a Battery, CHB, CFM, and HDDs, with LAN and SAS connections, powered through a UPS and PS]
Figure 4-17 Controller Chassis (VSP G700, G900, VSP E990 model)
[Figure: Controller Board 2 with CHB-2 and PS-1]
[Figure: Channel Board Box — PCP1, PCP2, CHBB PS2]
Figure 4-21 Top of Controller Board (VSP G700, G900, VSP E990 Model)
[Figure: top view of the Controller Board showing DIMM locations DIMM00-DIMM03 and DIMM10-DIMM13]
DIMM Location
• The DIMMs with the DIMM location numbers DIMM0x belong to CMG0 (Cache Memory Group 0), and the DIMMs with DIMM1x belong to CMG1 (Cache Memory Group 1).
• Be sure to install the DIMMs in CMG0.
• Install the same capacity of DIMMs by a set of four.
• CMG1 is a slot group for adding DIMMs.
• Furthermore, make the other Controller Board have the same addition configuration.
Table 4-33 Correspondence List of DIMM Capacity and CFM, BKM (VSP G350, G370 model)
Model | DIMM Capacity (GiB) | Number of DIMMs/CTL | Capacity of DIMMs (GiB)/CTL | Capacity of DIMMs (GiB)/System | Types of CFM installed in CFM-1/2 | Number of Batteries Installed in System (BAT-1/2)
VSP G370 | 64 | 2 | 128 | 256 | BM15 | 2
VSP G370 | 32 | 2 | 64 | 128 | BM15 | 2
VSP G350 | 32 | 2 | 64 | 128 | BM15 | 2
VSP G350 | 16 | 2 | 32 | 64 | BM15 | 2
[Figure: Controller Board showing the Battery and CFM locations]
Table 4-34 Correspondence List of DIMM Capacity and CFM, BKMF (VSP G700, G900, VSP E990 model)
Model | DIMM Capacity (GiB) | Number of DIMMs/CTL | Capacity of DIMMs (GiB)/CTL | Capacity of DIMMs (GiB)/System | Types of CFM installed in CFM-10/20 (*2) | Types of CFM installed in CFM-11/21 (*2) | Number of Batteries Installed in System (*1)
VSP E990 | 64 | 8 | 512 | 1,024 | BM65/BM6E | BM65/BM6E | 6
VSP E990 | 64 | 4 | 256 | 512 | BM65/BM6E | - | 6
VSP E990 | 32 | 8 | 256 | 512 | BM55/BM5E | BM55/BM5E | 6
VSP E990 | 32 | 4 | 128 | 256 | BM55/BM5E | - | 6
VSP G900 | 64 | 8 | 512 | 1,024 | BM45 | BM45 | 6
VSP G900 | 64 | 4 | 256 | 512 | BM45 | - | 6
VSP G900 | 32 | 8 | 256 | 512 | BM35 | BM35 | 6
VSP G900 | 32 | 4 | 128 | 256 | BM35 | - | 6
VSP G700 | 32 | 8 | 256 | 512 | BM35 | BM35 | 6
VSP G700 | 32 | 4 | 128 | 256 | BM35 | - | 6
VSP G700 | 16 | 8 | 128 | 256 | BM35 | - | 6
VSP G700 | 16 | 4 | 64 | 128 | BM35 | - | 6
*1 : (BKMF-x1/x2/x3)
*2 : • It is necessary to match the type (model name) of CFM-10/20 and CFM-11/21 (additional side).
When adding Cache Memories, check the model name of CFM-10/20 and add the same model.
• When replacing Cache Memories, it is necessary to match the type (model name) defined in the
configuration information.
Example: When the configuration information is defined as BM35, replacing to BM45 is
impossible.
Figure 4-24 Controller Board (VSP G700, G900, VSP E990 Model)
[Figure: Controller Board with Battery, BKMF, and CFM]
5. Battery
(1) A battery for data saving is installed on each Controller Board in DW850.
• When a power failure continues for more than 20 milliseconds, the Storage System uses power from the batteries to back up the Cache Memory data and the Storage System configuration data onto the Cache Flash Memory.
• Environmentally friendly nickel-metal hydride batteries are used in the Storage System.
*1: The data backup processing is continued even if the power outage is resolved while the data is being backed up.
(Front view of CBLH, showing Controller Board 1 and Controller Board 2; figure callouts: Controller Board, Battery, BKMF, BKM)
(4) Relation between Battery Charge Level and System Startup Action

No. | Power Status | Battery Charge Level | System Startup Action
1 | PS ON | <Case1> The battery charge level of both Controller Boards is below 30%. | The system does not start up until the battery charge level of either or both of the Controller Boards becomes 30% or more. (This takes a maximum of 90 minutes (*2).) (*1)
2 | PS ON | <Case2> The battery charge level of both Controller Boards is below 50% (cases other than Case1). | A SIM indicating the lack of battery charge is reported, and the system starts up. I/O is executed by the pseudo through operation until the battery charge level of either or both of the Controller Boards becomes 50% or more. (This takes a maximum of 60 minutes (*2).)
3 | PS ON | <Case3> Other than <Case1> and <Case2>. (The battery charge level of either or both of the Controller Boards is 50% or more.) | The system starts up normally. If the condition changes from Case2 to Case3 during startup, a SIM indicating the completion of battery charge is reported.

*1: Action when System Option Mode 837 is off (default setting).
*2: Battery charge time: 4.5 hours to charge from 0% to 100%.
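The three cases reduce to comparing the two Controller Boards' charge levels against the 30% and 50% thresholds. A minimal sketch of that decision, assuming the case logic above (the function name and return strings are illustrative):

```python
# Illustrative decision logic for the startup-action table above.
# Case1: both CTLs below 30% -> wait for charge (max. 90 minutes).
# Case2: both CTLs below 50% -> start up with SIM, pseudo through I/O.
# Case3: otherwise           -> normal startup.
def startup_action(ctl1_pct: float, ctl2_pct: float) -> str:
    if ctl1_pct < 30 and ctl2_pct < 30:
        return "Case1: wait until either CTL reaches 30% (max. 90 min)"
    if ctl1_pct < 50 and ctl2_pct < 50:
        return "Case2: report battery-charge SIM; pseudo through I/O until 50%"
    return "Case3: normal startup"

print(startup_action(25, 28))  # Case1
print(startup_action(35, 45))  # Case2
print(startup_action(35, 60))  # Case3
```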
(5) Relation between Power Status and SM/CM Data Backup Methods

No. | Power Status | SM/CM Data Backup Methods | Data Restore Methods during Restart
1 | PS OFF (planned power off) | SM data (including CM directory information) is stored in CFM before PS OFF is completed. If PIN data exists, all the CM data including PIN data is also stored. | SM data is restored from CFM. If CM data was stored, CM data is also restored from CFM.
2 | Power outage: instant power outage | If power is recovered in a moment, SM/CM data remains in memory and is not stored in CFM. | SM/CM data in memory is used.
3 | Power outage while the system is in operation | All the SM/CM data is stored in CFM. If a power outage occurred after the system started up in the condition of Case2 (the battery charge level of both Controller Boards had been below 50%), only SM data is stored. | All the SM/CM data is restored from CFM. If CM data was not stored, only the CM data is volatilized and the system starts up.
4 | Power outage while the system is starting up | Data storing in CFM is not done. (The latest backup data that was successfully stored remains.) | The data stored in the latest power-off operation or power outage is restored from CFM.
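The backup methods can likewise be read as a small decision over the power status. A minimal sketch, with illustrative status names, that returns what is written to CFM in each case:

```python
# Illustrative mapping from power status to what is written to CFM,
# following the SM/CM backup-method table above.
def cfm_backup(status: str, started_in_case2: bool = False) -> str:
    if status == "planned_ps_off":
        return "SM data (and all CM data if PIN data exists)"
    if status == "instant_outage":
        return "nothing: SM/CM data stays in memory"
    if status == "outage_in_operation":
        # If startup happened under Case2 (both batteries below 50%),
        # only SM data is stored.
        return "SM data only" if started_in_case2 else "all SM/CM data"
    if status == "outage_during_startup":
        return "nothing: the latest successfully stored backup remains"
    raise ValueError(f"unknown power status: {status}")

print(cfm_backup("outage_in_operation", started_in_case2=True))
```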
e.g.: Cache data in CTL1 is not stored in the CFM installed in CTL2.
Similarly, CFM data in CTL1 is not restored to the Cache Memory in CTL2.
Table 4-37 The Number of Installed DKBs and SAS Ports by Model

Item | VSP G130, G350, G370 | VSP G700 | VSP G900
Number of DKBs | Built into CTL | 2 piece/cluster (4 piece/system) | 2 or 4 piece/cluster (4 or 8 piece/system)
Number of DKBN/EDKBN | - | - | -
Number of SAS Ports | 1 port/cluster (2 port/system) | 4 port/cluster (8 port/system) | 4 or 8 port/cluster (8 or 16 port/system)
Number of NVMe Ports | - | - | -
Table 4-40 The Number of Installable CHBs by Model (VSP G130, G350, G370)

Item | VSP G130 | VSP G350, G370
Minimum installable number | Built into CTL: 2 port/cluster (4 port/system) | 1 piece/cluster (2 piece/system)
Maximum installable number (HDD) | Built into CTL: 2 port/cluster (4 port/system) | 2 piece/cluster (4 piece/system)
Maximum installable number (HDD less) | Built into CTL: 2 port/cluster (4 port/system) | 2 piece/cluster (4 piece/system)
Table 4-42 The Number of Installable CHBs by Model (VSP G900, VSP E990)

Item | CHBB is not installed | CHBB is installed
Minimum installable number | 1 piece/cluster (2 piece/system) | 2 piece/cluster (4 piece/system)
Maximum installable number (HDD), with four DKBs per cluster (*1) | 4 piece/cluster (8 piece/system) | 6 piece/cluster (12 piece/system)
Maximum installable number (HDD), otherwise | 6 piece/cluster (12 piece/system) | 8 piece/cluster (16 piece/system)
Maximum installable number (HDD less) | 8 piece/cluster (16 piece/system) | 10 piece/cluster (20 piece/system)

*1: When installing four DKBs per cluster.
The CHB for Fibre Channel connection can support Shortwave or Longwave on a per-port basis,
depending on the transceiver installed in each port.
Note that each CHB port is fitted with a Shortwave transceiver as standard.
To change a port to Longwave support, the addition of DKC-F810I-1PL16 (SFP for 16 Gbps
Longwave) is required.
Up to 24 SFF HDDs can be installed. The ENCs and Power Supplies take a duplex configuration.
Up to 12 LFF HDDs can be installed. The ENCs and Power Supplies take a duplex configuration.
Up to 24 SFF Drives can be installed. The ENCs and Power Supplies take a duplex configuration.
(Figure callouts: PCP, SWPK, CHB, CHBB PS)
Up to 8 CHBs can be installed.
Table 4-46 Disk Drive, Flash Drive and Flash Module Drive Support Type

Group | I/F | Size (inch) | Maximum Transfer Rate (Gbps) | Revolution Speed (min-1) or Flash Memory Type | Capacity
Disk Drive (HDD) | SAS | 2.5 (SFF) | 6 | 10,000 | 600 GB, 1.2 TB
Disk Drive (HDD) | SAS | 2.5 (SFF) | 12 | 10,000 | 600 GB, 1.2 TB, 2.4 TB
Disk Drive (HDD) | SAS | 3.5 (LFF) | 6 | 10,000 | 1.2 TB
Disk Drive (HDD) | SAS | 3.5 (LFF) | 12 | 10,000 | 1.2 TB, 2.4 TB
Disk Drive (HDD) | SAS | 3.5 (LFF) | 12 | 7,200 | 6 TB, 10 TB, 14 TB
Flash Drive (SAS SSD) | SAS | 2.5 (SFF) | 12 | MLC/TLC | 480 GB, 960 GB, 1.9 TB, 3.8 TB, 7.6 TB, 15 TB, 30 TB
Flash Module Drive (FMD) | SAS | ̶ | 12 | MLC | 3.5 TB
Flash Module Drive (FMD) | SAS | ̶ | 12 | MLC/TLC | 7 TB, 14 TB
Flash Drive (NVMe SSD) | NVMe | 2.5 (SFF) | 8 | TLC | 1.9 TB, 3.8 TB, 7.6 TB, 15 TB
Item | DKC-F910I-15RRVM
Flash Drive Model Name | SNB5A-R15RNC / SNB5B-R15RNC / SNN5A-R15RNC / SNM5A-R15RNC
Form Factor | 2.5 inch
User Capacity | 15048.49 GB
Flash Memory Type | TLC
Interface Data Transfer Rate (Gbps) | 8
4.7 Mounted Numbers of Drive Boxes and the Maximum Mountable Number of Drives
Table 4-52 Mounted numbers of Drive Boxes and the maximum mountable number of drives (VSP G130)

Model name | Number of mounted DBS (*1) | Number of mounted DBL (*1) | Maximum mountable number of drives, DBS+DBL (*2)
VSP G130 (CBXSS) | 3 | 0 | 96
VSP G130 (CBXSS) | 2 | 2 | 96
VSP G130 (CBXSS) | 1 | 4 | 96
VSP G130 (CBXSS) | 0 | 6 | 96
VSP G130 (CBXSL) | 3 | 1 | 96
VSP G130 (CBXSL) | 2 | 3 | 96
VSP G130 (CBXSL) | 1 | 5 | 96
VSP G130 (CBXSL) | 0 | 7 | 96

*1: The maximum number of boxes that can be installed per PATH
VSP G130 : 7
*2: VSP G130 includes the drives to be installed in the Controller Chassis.
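The totals in Table 4-52 are consistent with a simple slot count: 24 drives per DBS, 12 per DBL, plus the drives in the Controller Chassis itself (24 for CBXSS, 12 for CBXSL). A short sketch that reproduces the table rows under that reading (the helper is illustrative, not from the manual):

```python
# Illustrative slot arithmetic reproducing the Table 4-52 totals,
# assuming 24 drives per DBS, 12 per DBL, and 24 (CBXSS) or 12 (CBXSL)
# drive slots inside the Controller Chassis.
def g130_max_drives(chassis: str, dbs: int, dbl: int) -> int:
    internal = {"CBXSS": 24, "CBXSL": 12}[chassis]
    return internal + 24 * dbs + 12 * dbl

assert g130_max_drives("CBXSS", 3, 0) == 96
assert g130_max_drives("CBXSS", 0, 6) == 96
assert g130_max_drives("CBXSL", 3, 1) == 96
assert g130_max_drives("CBXSL", 0, 7) == 96
```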
Table 4-53 Mounted numbers of Drive Boxes and the maximum mountable number of drives (VSP G350, G370, G700, G900)

Model name | Number of mounted DBS/DBL/DBF (*1) | Number of mounted DB60 (*1) | Maximum mountable drives, DBS+DB60 (*2) | Maximum mountable drives, DBL/DBF+DB60 (*2)
VSP G350 (CBSS1/CBSS1E) (*3) | 7 | 0 | 192 | 108
VSP G350 (CBSS1/CBSS1E) (*3) | 5 | 1 | 204 | 144
VSP G350 (CBSS1/CBSS1E) (*3) | 3 | 2 | 216 | 180
VSP G350 (CBSS1/CBSS1E) (*3) | 1 | 3 | 228 | 216
VSP G350 (CBSS1/CBSS1E) (*3) | 0 | 4 | 264 | 264
VSP G350 (CBSL1/CBSL1E) (*3) | 7 | 0 | 180 | 96
VSP G350 (CBSL1/CBSL1E) (*3) | 5 | 1 | 192 | 132
VSP G350 (CBSL1/CBSL1E) (*3) | 3 | 2 | 204 | 168
VSP G350 (CBSL1/CBSL1E) (*3) | 1 | 3 | 216 | 204
VSP G350 (CBSL1/CBSL1E) (*3) | 0 | 4 | 252 | 252
VSP G370 (CBSS2/CBSS2E) (*3) | 11 | 0 | 288 | 156
VSP G370 (CBSS2/CBSS2E) (*3) | 9 | 1 | 300 | 192
VSP G370 (CBSS2/CBSS2E) (*3) | 7 | 2 | 312 | 228
VSP G370 (CBSS2/CBSS2E) (*3) | 5 | 3 | 324 | 264
VSP G370 (CBSS2/CBSS2E) (*3) | 3 | 4 | 336 | 300
VSP G370 (CBSS2/CBSS2E) (*3) | 1 | 5 | 348 | 336
VSP G370 (CBSS2/CBSS2E) (*3) | 0 | 6 | 384 | 384
VSP G370 (CBSL2/CBSL2E) (*3) | 11 | 0 | 276 | 144
VSP G370 (CBSL2/CBSL2E) (*3) | 9 | 1 | 288 | 180
VSP G370 (CBSL2/CBSL2E) (*3) | 7 | 2 | 300 | 216
VSP G370 (CBSL2/CBSL2E) (*3) | 5 | 3 | 312 | 252
VSP G370 (CBSL2/CBSL2E) (*3) | 3 | 4 | 324 | 288
VSP G370 (CBSL2/CBSL2E) (*3) | 1 | 5 | 336 | 324
VSP G370 (CBSL2/CBSL2E) (*3) | 0 | 6 | 372 | 372
(To be continued)
Table 4-54 Mounted numbers of Drive Boxes and the maximum mountable number of drives (VSP E990 models)

Model name | Number of mounted DBN (*1) | Maximum mountable number of drives, DBN (SSD)
VSP E990 | 4 | 96

*1: The maximum number of boxes that can be installed per PATH
VSP E990 : 1
Table 4-55 Mounted numbers of Drive Boxes and the maximum mountable number of drives (VSP F350, F370, F700, F900 models)

Model name | Number of mounted DBS (*1) | Number of mounted DBF (*1) | Maximum mountable drives, DBS (SSD) | Maximum mountable drives, DBF (FMD)
VSP F350 | 7  | ̶  | 192   | ̶
VSP F370 | 11 | ̶  | 288   | ̶
VSP F700 | 36 | ̶  | 864   | ̶
VSP F700 | ̶  | 36 | ̶     | 432
VSP F900 | 48 | ̶  | 1,152 | ̶
VSP F900 | ̶  | 48 | ̶     | 576

*1: The maximum number of boxes that can be installed per PATH
VSP F350 : 7
VSP F370 : 11
VSP F700 : 12
VSP F900 : 6
1. Environmental Conditions

Condition: Non-Operating (*2)

Item | CBL/CBSS2/CBSL2/CBSS1/CBSL1/CBXSS/CBXSL/CHBB | DBS/DBL | DBF | DB60/DBN
Temperature range (ºC) | -10 to 50 | -10 to 50 | -10 to 50 | -10 to 50
Relative humidity (%) (*4) | 8 to 90 | 8 to 90 | 8 to 90 | 8 to 90
Maximum wet-bulb temperature (ºC) | 29 | 29 | 29 | 29
Temperature gradient (ºC/hour) | 10 | 10 | 10 | 10
Dust (mg/m3) | — | — | — | —
Gaseous contaminants (*7) | G1 classification levels (all models)
Altitude (m) | -60 to 12,000 | -60 to 12,000 | -60 to 12,000 | -60 to 12,000
Condition: Transportation, Storage (*3)

Item | CBL/CBSS2/CBSL2/CBSS1/CBSL1/CBXSS/CBXSL/CHBB | DBS/DBL | DBF | DB60/DBN
Temperature range (ºC) | -30 to 60 | -30 to 60 | -30 to 60 | -30 to 60
Relative humidity (%) (*4) | 5 to 95 | 5 to 95 | 5 to 95 | 5 to 95
Maximum wet-bulb temperature (ºC) | 29 | 29 | 29 | 29
Temperature gradient (ºC/hour) | 10 | 10 | 10 | 10
Dust (mg/m3) | — | — | — | —
Gaseous contaminants (*7) | — (all models)
Altitude (m) | -60 to 12,000 | -60 to 12,000 | -60 to 12,000 | -60 to 12,000
*1: Storage system that is ready to be powered on.
*2: Includes packed and unpacked storage systems.
*3: Storage system packed for shipping.
*4: No dew condensation is allowed.
*5: The system monitors the intake temperature and the internal temperatures of the Controller and the
Power Supply, and executes the following operations in accordance with those temperatures.
*6: Fire suppression systems and acoustic noise:
Some data center inert gas fire suppression systems, when activated, release gas from pressurized
cylinders that moves through the pipes at very high velocity. The gas exits through multiple
nozzles in the data center, and the release through the nozzles can generate high-level acoustic
noise. Similarly, pneumatic sirens can generate high-level acoustic noise. This acoustic
noise may cause vibrations in the hard disk drives in the storage systems, resulting in I/O
errors, performance degradation, and in some cases damage to the hard disk drives. Hard
disk drive (HDD) noise level tolerance may vary among different models, designs, capacities,
and manufacturers. The acoustic noise level of 90 dB or less in the operating environment table
represents the current operating environment guidelines under which Hitachi storage systems are
designed and manufactured for reliable operation when placed 2 meters from the source of the
noise.
Hitachi does not test storage systems and hard disk drives for compatibility with fire suppression
systems and pneumatic sirens. Hitachi also does not provide recommendations or claim
compatibility with any fire suppression systems or pneumatic sirens. Customers are responsible
for following their local or national regulations.
To prevent unnecessary I/O errors or damage to the hard disk drives in the storage systems, Hitachi
recommends the following options:
(1) Install noise-reducing baffles to mitigate the noise to the hard disk drives in the storage
systems.
(2) Consult the fire suppression system manufacturers about noise-reduction nozzles to reduce the
acoustic noise and protect the hard disk drives in the storage systems.
(3) Locate the storage system as far as possible from noise sources such as emergency sirens.
(4) If it can be done safely without risk of personal injury, shut down the storage systems to avoid
data loss and damage to the hard disk drives in the storage systems.
DAMAGE TO HARD DISK DRIVES FROM FIRE SUPPRESSION SYSTEMS OR
PNEUMATIC SIRENS WILL VOID THE HARD DISK DRIVE WARRANTY.
*7: See ANSI/ISA-71.04-2013 Environmental Conditions for Process Measurement and Control
Systems: Airborne Contaminants.
*8: Meets the highest allowable temperature conditions and complies with ASHRAE (American
Society of Heating, Refrigerating and Air-Conditioning Engineers) 2011 Thermal Guidelines Class
A3. The maximum allowable ambient temperature ranges from 40 degrees C at an altitude of
950 meters (3,000 feet) down to 28 degrees C at an altitude of 3,050 meters (10,000 feet).
The allowable ambient temperature decreases by 1 degree C for every 175-meter increase in
altitude above 950 meters.
*9: Meets the highest allowable temperature conditions and complies with ASHRAE (American
Society of Heating, Refrigerating and Air-Conditioning Engineers) 2011 Thermal Guidelines Class
A2. The maximum allowable ambient temperature ranges from 35 degrees C at an altitude of
950 meters (3,000 feet) down to 28 degrees C at an altitude of 3,050 meters (10,000 feet).
The allowable ambient temperature decreases by 1 degree C for every 300-meter increase in
altitude above 950 meters.
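The derating in notes *8 and *9 is linear in altitude, so the allowable ambient temperature can be computed directly. A worked sketch under those stated rules:

```python
# Worked example of the ASHRAE derating in notes *8 and *9:
# Class A3: 40 degrees C at 950 m, minus 1 degree C per 175 m above 950 m.
# Class A2: 35 degrees C at 950 m, minus 1 degree C per 300 m above 950 m.
# Both bottom out at 28 degrees C at 3,050 m.
def max_ambient_c(altitude_m: float, ashrae_class: str = "A3") -> float:
    base, meters_per_degree = {"A3": (40.0, 175.0), "A2": (35.0, 300.0)}[ashrae_class]
    if altitude_m <= 950:
        return base
    return max(28.0, base - (altitude_m - 950) / meters_per_degree)

print(max_ambient_c(950))         # 40.0
print(max_ambient_c(3050))        # 28.0 (40 - 2100/175)
print(max_ambient_c(3050, "A2"))  # 28.0 (35 - 2100/300)
```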
(3) DBS/DBL
• If the internal temperature of the Power Supply rises to 55 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 64.5 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.
(4) DBF
• If the internal temperature of the Power Supply rises to 62 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 78 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.
(5) DB60/DBN
• If the internal temperature of the Power Supply rises to 60 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 70 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.
(6) CHBB
• If the operating environment temperature rises to 43 degrees C or higher, the CHBB temperature
warning (SIM-RC = af46xx) is notified.
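These thresholds amount to a per-box lookup of warning and alarm temperatures against the SIM reference codes listed above. A compact, illustrative sketch:

```python
# Power Supply internal-temperature thresholds per Drive Box type, as
# listed above: (warning degrees C, alarm degrees C). Warning raises
# SIM-RC af7000; alarm raises SIM-RC af7100.
THRESHOLDS = {
    "DBS/DBL": (55.0, 64.5),
    "DBF": (62.0, 78.0),
    "DB60/DBN": (60.0, 70.0),
}

def db_temperature_sim(box: str, temp_c: float):
    warn, alarm = THRESHOLDS[box]
    if temp_c >= alarm:
        return "af7100"  # DB external temperature alarm
    if temp_c >= warn:
        return "af7000"  # DB external temperature warning
    return None  # within normal range

assert db_temperature_sim("DBF", 65.0) == "af7000"
assert db_temperature_sim("DB60/DBN", 71.0) == "af7100"
```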
(Figure: DKC power cable connections — DKCPS-1 and DKCPS-2 for CTL1/CTL2, with AC0/AC1 (*1) inputs connected to the PDUs via C14 plugs)
(Figure: DB and CHBB power cable connections — duplexed ENCs and DB PS connected to the PDUs; SWPK1 and CHBB PS1 with AC0/AC1 (*1) inputs connected to the PDUs via C14 plugs)
• CBLH1/CBLH2/DB60/DBN
Input Voltage (AC) | Voltage Tolerance | Frequency | Wire Connection
200V to 240V | +10% or -11% | 50Hz ± 2Hz / 60Hz ± 2Hz | 1 Phase 2 Wire + Ground

• CBXSS/CBXSL/CBSS1/CBSL1/CBSS2/CBSL2/DBS/DBL/DBF/CHBB
Input Voltage (AC) | Voltage Tolerance | Frequency | Wire Connection
100V to 120V / 200V to 240V | +10% or -11% | 50Hz ± 2Hz / 60Hz ± 2Hz | 1 Phase 2 Wire + Ground
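The "+10% or -11%" tolerance translates into an acceptable input window; for the 200V to 240V nominal range this works out to roughly 178 V to 264 V. A minimal sketch of that arithmetic (the helper is illustrative):

```python
# Illustrative check of the input-voltage window implied by the
# "+10% or -11%" tolerance on a nominal input range.
def voltage_window(nominal_low: float, nominal_high: float) -> tuple:
    return nominal_low * (1 - 0.11), nominal_high * (1 + 0.10)

low, high = voltage_window(200, 240)
print(f"acceptable input: {low:.0f} V to {high:.0f} V")  # 178 V to 264 V
```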
2. PDU specifications
There are two types of PDU (Power Distribution Unit): a vertical PDU mounted on a rack frame post
and a horizontal PDU of 1U size. Order the required number of PDUs together with the PDU AC cables
in accordance with the configuration of the devices to be mounted on the rack frame.
For information about the Hitachi Universal V2 rack used with HDS VSP storage systems, refer to the
Hitachi Universal V2 Rack Reference Guide, MK-97RK000-00.
• A-F4933-PDU6 (horizontal type): occupies 1U per PDU; 3 outlets / 8 A per PDU.
• A-F6516-PDU6 (vertical type): a maximum of three sets can be mounted on the post of the RKU rack.
• A-F6516-P620: PDU power cord (length 4.5 m).
The following shows the specifications of the PDU power cords and connectors.
The available cable lengths of the PDU power cords differ according to the installation location of the
PDU.

PDU Location | Available Cable Length (*1) | Rating | Plug (Manufacturer / Parts No.) | Receptacle (Manufacturer / Parts No.)
Upper PDU | 2.7 m | 20 A | AMERICAN DENKI CO., LTD. / L6-20P | ̶ / L6-20R (*2)
Mid PDU | 3.2 m | 20 A | AMERICAN DENKI CO., LTD. / L6-20P | ̶ / L6-20R (*2)
Lower PDU | 3.7 m | 20 A | AMERICAN DENKI CO., LTD. / L6-20P | ̶ / L6-20R (*2)

*1: This is the length outside the rack chassis.
*2: When the receptacle is L6-30R, select P630 as an option.
(Figure: locations where configuration information is stored, numbered ① to ⑦ — the MP and Shared Memory in the CTL, the CFM, the GUM, the MPC, the management client, and backup media)