THE1853
Certification Exam Preparation vILT: Foundations Modular
Management Tools
• Hitachi Basic Operating System
• Hitachi Basic Operating System V
• Hitachi Resource Manager™ utility package
• Modular Volume Migration software
• LUN Manager/LUN Expansion
• Network Data Management Protocol (NDMP) agents
• Logical Unit Size Expansion (LUSE)
• Cache Partition Manager feature
• Cache Residency Manager feature
• Storage Navigator program
• Storage Navigator Modular program
• Storage Navigator Modular 2 program
Replication Software
Remote Replication:
• Hitachi Universal Replicator software
• Hitachi TrueCopy® Heterogeneous Remote Replication software bundle
• Hitachi TrueCopy® Remote Replication software bundle (for modular systems)
Other Software
• Hitachi Backup and Recovery software, powered by CommVault®
• Hitachi Backup Services Manager software, powered by APTARE®
• Hitachi Business Continuity Manager software
• Hitachi Command Control Interface (CCI) software
• Hitachi Dynamic Provisioning software
• Hitachi Storage Resource Management Solutions
• Hitachi Volume Migration software
• Hi-Track® Monitor
INTRODUCTION ............................................................................... IX
Hitachi Data Systems Certified Professional Program 2009.......................ix
Framework................................................................................................... x
Exams.......................................................................................................... x
Website........................................................................................................xi
Foundations Track......................................................................................xii
Program Elements..................................................................................... xiii
Test Development ..................................................................................... xiii
Certification Exam Preparation vILT: Foundations Modular .....................xiv
3. SECTION 3 ...............................................................................3-1
Business Continuity ...............................................................................3-2
Business Continuity Solutions...................................................3-3
RAID Manager (CCI) .................................................................3-5
ShadowImage Software ............................................................3-6
TrueCopy Remote Replication Software...................................3-7
TrueCopy Extended Distance ...................................................3-8
Hitachi ShadowImage® Replication Software ......................................3-9
Overview ................................................................................ 3-10
Applications for ShadowImage Replication Software ............ 3-11
Overview ................................................................................ 3-12
Internal ShadowImage Replication Software Operation ........ 3-13
Overview ................................................................................ 3-14
Differential Management ........................................................ 3-15
ShadowImage Replication software Copy Operations........... 3-16
ShadowImage Replication Software Commands................... 3-18
ShadowImage Replication Software Operations ................... 3-20
Hitachi Copy-on-Write Snapshot Software ....................................... 3-21
Overview ................................................................................ 3-22
Operation Scenarios............................................................... 3-24
Hitachi TrueCopy® Remote Replication Software ............................ 3-25
Disaster Recovery .................................................................. 3-26
TrueCopy Specifications ........................................................ 3-27
Configurations ........................................................................ 3-28
TrueCopy and Copy-on-Write Snapshot Configurations........ 3-29
TrueCopy Extended Distance ............................... 3-30
Functional Overview............................................................... 3-31
Concurrent Use with Other Copy Products ............................ 3-32
GLOSSARY
EVALUATING THIS COURSE
• Overview
– The Hitachi Data Systems Academy has a fundamental role to play in the
future of Hitachi Data Systems. Certification is a key component in
education as it validates skills and knowledge for partners and Hitachi
Data Systems personnel.
Framework
Certification and qualification tracks:
• Integration (HDS personnel and I&C Partners): Hitachi Data Systems Certified Integration Professional
• Implementation (HDS personnel and Authorized Partners), tiered credentials: Hitachi Data Systems Certified Implementer and Hitachi Data Systems Certified Implementation Specialist
• Storage Manager (HDS personnel and Customers): Hitachi Data Systems Certified Storage Manager; HDS Storage Manager Expert (SNIA exam required)
• Architect (HDS personnel and Authorized Partners): Hitachi Data Systems Certified Storage Architect; HDS Architect Expert (SNIA exam required)
• Sales (HDS personnel and Authorized Partners): Hitachi Data Systems Qualified Sales Professional
Exams
Sales Qualification
Hitachi Data Systems Sales Foundation Qualification Exam (HDS-SQ100)
Website
www.hds.com/certification
Foundations Track
• Enterprise: supporting course THI0517, a 4-day ILT, Hitachi Data Systems Storage Foundations - Enterprise. Exam cost is $200 in the U.S. and Canada, $225 outside the U.S. and Canada.
• Modular: supporting course THI0515, a 3-day ILT, Hitachi Data Systems Storage Foundations - Modular. Exam cost is $200 in the U.S. and Canada, $225 outside the U.S. and Canada.
Program Elements
• Strategic Plan
• Market and Audience Research
• Program Assessment, Gap Analysis and Development
• Operations
• Job Task Analysis
• Curriculum review, Gap Analysis and Development
• Marketing Plan
• Execution, Measurements and Evaluation
Test Development
• Course Goal
– This virtual instructor-led course helps learners prepare for and take the
Hitachi Data Systems Foundations Modular Certification exam (HH0-120).
This refresher focuses on key areas of expertise for the Hitachi Data
Systems Professional (Foundations – Modular Track) credential. The training
is applicable to those with experience with Hitachi Data Systems modular
products and technology.
• Certification Exam
– There is no online Prometric test available at the end of this session.
– Learners will have to take their exams at a Prometric test site near where
they live.
• Course Structure
– Section 1
• Hitachi Adaptable Modular Storage 1000 Family Architecture
• Hitachi Adaptable Modular Storage 2000 Family Architecture and Administration
• Active-Active I/O Architecture
– Section 2
• Hitachi Adaptable Modular Storage Software
• Storage Navigator Modular 2 Program
• Hitachi Essential NAS Platform
• Hitachi Dynamic Link Manager and Hitachi Global Link Availability Manager Software
– Section 3
• Business Continuity
• Hitachi ShadowImage® Replication Software
• Hitachi Copy-on-Write Snapshot Software
• Hitachi TrueCopy® Remote Replication Software
• RAID Manager and Command Control Interface
– Section 4
• Services Oriented Storage Solutions from Hitachi Data Systems
• Hitachi Device Manager Software
• Hitachi Tuning Manager Software
• Hitachi Content Archive Platform
• Virtual Tape Library Solutions by Hitachi Data Systems and Hitachi Data Protection Suite Solutions
[Diagram: the modular product line arranged by scalability, with upgrade paths up the line from the Hitachi Workgroup Modular Storage 100 and the Hitachi Adaptable Modular Storage 200. Key specifications shown: 1 or 2 CTL, up to eight 4Gb front-end ports, up to eight 2Gb back-end ports, and up to 450 HDDs maximum.]
Adaptable Modular Storage and Workgroup Modular Storage product lines consist
of four products:
The Workgroup Modular Storage 100 replaces the Thunder 9520V™ workgroup
modular storage. It is an all-SATA device designed for the SMB/SME market and as
an archive platform for tiered storage. The Workgroup Modular Storage 100 is not
upgradeable to the Adaptable Modular Storage line.
The Adaptable Modular Storage 200 replaces the Thunder 9530V™ entry-level
storage deck. It supports both SATA and Fibre Channel drives and is intended for
the lower end of the modular market. The model Adaptable Modular Storage 200
can be upgraded to the model Adaptable Modular Storage 500.
The Adaptable Modular Storage 500 replaces the Thunder 9570V™ high-end
modular storage. It also supports SATA and Fibre Channel drives and is intended
for the middle to high end of the modular market.
The Adaptable Modular Storage 1000 replaces the Thunder 9585V™ ultra high-end
modular storage system. The Adaptable Modular Storage 1000 system offers the best
midrange performance on the market.
The Adaptable Modular Storage and Workgroup Modular Storage families have
more functionality, capacity, reliability and performance than the Thunder series.
They use the same architecture as the Thunder (legacy system) series and customers
familiar with those products will have an easy time migrating to the new systems.
From a product speeds-and-feeds perspective, Hitachi competes effectively against
its primary competitors. The Adaptable Modular Storage and Workgroup Modular
Storage are positioned to be 25 to 40 percent less expensive than leading
competitors’ comparable products while being more scalable. Customers should
find this especially appealing as Hitachi Data Systems is known for providing a high
level of quality and among the best customer satisfaction ratings.
The Network Storage Controller, model NSC55 is differentiated from Adaptable
Modular Storage and Workgroup Modular Storage by having the Universal Star
Network architecture (Adaptable Modular Storage and Workgroup Modular
Storage continues to use the High Performance architecture). In addition, the NSC55,
unlike the Adaptable Modular Storage and Workgroup Modular Storage families,
supports heterogeneous storage and OS390 FICON or ESCON ports.
Note: The Adaptable Modular Storage/Workgroup Modular Storage families can
store OS390 volumes when attached as external storage to a Universal Storage
Platform or Network Storage Controller.
The combination of the new cost effective Adaptable Modular Storage and
Workgroup Modular Storage midrange storage systems with scalable capacity and
the Universal Storage Platform and Network Storage Controller enables an
intelligent tiered storage network that will ultimately reduce cost and complexity
within the data center.
Features
The Adaptable Modular Storage systems scale higher and offer significant
performance boosts over their Thunder 9500 V Series systems predecessors.
The Adaptable Modular Storage and Workgroup Modular Storage families are
RoHS (Restriction of Hazardous Substances) compliant, meeting strict EU guidelines
for reducing the use of certain hazardous substances in electrical and electronic
equipment in order to protect human and animal health and the environment.
The Adaptable Modular Storage 200 is almost identical to the Workgroup Modular
Storage 100, except that it supports Fibre Channel (FC) drives, offering somewhat
better performance and availability, and it includes two FC-AL back-end paths.
A minimum of two (2) Fibre Channel drives is required for the Adaptable Modular
Storage 200, and customers may not mix SATA and Fibre Channel drives in the
same shelf.
The Adaptable Modular Storage 200 can be upgraded to an Adaptable Modular
Storage 500 (this is a disruptive upgrade).
*RAID-0 is available for Fibre Channel drives only.
The Adaptable Modular Storage 500 replaces the Thunder 9570V system by offering
a significant improvement in performance and scalability. For customers who would
have purchased a Thunder 9585V system but do not need 8 ports, the Adaptable
Modular Storage 500 will easily meet most performance and capacity requirements,
at a much lower price. The Adaptable Modular Storage 500 supports 4Gb/sec front-
end ports for customers with 4Gb/s switches and fabric.
LUN access on the back end uses, per controller, one path for a SATA tray and two
paths for an FC tray.
Note: 1Gbit and 2Gbit workloads are supported with the 4Gb/sec front end.
A minimum of two (2) Fibre Channel drives is required for the Adaptable Modular
Storage 200, and customers may not mix SATA and Fibre Channel drives in the
same shelf.
* As in the Adaptable Modular Storage 200, RAID-0 (no parity) is supported for
Fibre Channel drives only.
The Adaptable Modular Storage 1000 replaces the Thunder 9585V system by
offering a significant improvement in performance and scalability. The Adaptable
Modular Storage 1000 supports eight 4Gb/sec front-end ports for customers with
4Gb/s switches and fabric.
• Delivers application-specific performance, availability, and protection across systems, from a few terabytes to more than 330TB, with both Serial ATA (SATA) and Fibre Channel drives
• Use advanced features, such as Cache Partition Manager and RAID-6, to help improve performance, reliability, and usability
• Partition and dedicate cache to maximize performance of high-I/O applications
• Support outstanding performance for virtually any workload with 4,096 logical units (LUNs)
• Choose between SATA intermix and Fibre Channel to host any workload on the most economical storage system
Note: 1Gbit and 2Gbit workloads are supported with the 4Gb/sec front end.
*RAID-0 (no parity) is supported for Fibre Channel drives only.
Product Description
[Diagram: the Hitachi modular product line positioned by price versus performance, connectivity, and functionality: Simple Modular Storage 100 and 110 at the entry level, with upgrade paths through Adaptable Modular Storage 2100, 2300, and 2500.]
Features
Specifications
Host Interface Options
• 2100: 4 Fibre Channel (FC), auto-sensing 1/2/4Gbps; 4 iSCSI 1000Base-T copper Ethernet
• 2300: 8 Fibre Channel (FC), auto-sensing 1/2/4Gbps; 4 iSCSI 1000Base-T copper Ethernet
• 2500: 16 Fibre Channel (FC), auto-sensing 1/2/4Gbps; 8 iSCSI 1000Base-T copper Ethernet
Drive Interface
• 2100: 16 Serial Attached SCSI (SAS), 4x4 wide link, 3Gbps switched
• 2300: 16 Serial Attached SCSI (SAS), 4x4 wide links, 3Gbps switched
• 2500: 32 Serial Attached SCSI (SAS), 4x8 wide links, 3Gbps switched
Expansion Unit
• 3U expansion unit on the 2100, 2300, and 2500
• Back end: full duplex, 3Gb/s; 16 (4x4) SAS wide links on the 2100 and 2300, 32 (4x4) SAS wide links on the 2500
17
Enable/Disable Security
[Diagram: host groups behind ports 0A, 0B, 1A, and 1B on CTL0 and CTL1, with security enabled or disabled per port. HG0 is always present (for example, platform option Solaris, security by WWN X, LUNs 0 and 1 mapped); HG1 is optional (for example, platform option Windows, security by WWN Y, LUNs 8 and 789 mapped). The LUN Management key is required to add host groups or change host group settings. The mapped LUN number, or HLUN, is the LUN number as seen by the host. Recommended LUN mapping: use a different mapping configuration if the host requires LUN 0 or cannot handle a high LUN number.]
18
• A Host Group contains one or more LUNs that can be configured to be accessed by a particular host operating system environment. It exists behind a host interface port.
• With Host Groups, a server that is granted access sees a virtual storage unit configured specifically for the software environment running on that server. This is achieved by setting platform-specific options for each Host Group.
• Access security is organized by filtering the traffic to a particular Host Group and only allowing traffic with a specific Fibre Channel World Wide Name (WWN) coming from the Host Bus Adapter (HBA) through which a server accesses the Host Group.
• In addition to using Host Groups, the cache can be partitioned, allowing for a complete segregation of the workloads generated by different servers. Cache partitioning will prevent an application from monopolizing an Adaptable Modular Storage 2000 system.
• Host Group 0 (or Host Storage Domain 0) is always present behind a host interface port. Additional HGs can be configured when the LUN Management key has been added.
Highlights
19
20
There are two back-end paths on the Adaptable Modular Storage 2100 and 2300, with SAS wide links to the drives on the back end.
21
A system can hold a mix of both high-speed (usually more expensive) HDDs for
performance and slower (cheaper) drives for capacity.
• Performance: used for online transactions, and more
• Capacity: used for audio and video streaming, backups, and more
22
Online capacity upgrade: HDDs and expansion units can be added online;
Controllers and Cache Memory cannot.
• Hi-Track Monitor
23
Storage Navigator Modular 2 is shipped with the array. The build center/CTO will
install and enable feature keys for certain basic Software Features.
24
25
26
Specifications
• Capacity (2100 / 2300 / 2500)
  – HDDs per base unit: 15 / 15 / 0
  – Maximum HDDs: 120 / 240 / 480
  – Expansion units (15 HDDs per tray; SAS/SATA intermix, SAS required for system area drives 0–4): 7 / 15 / 32 maximum
  – Supported drives (all models): 146GB/15K SAS, 300GB/15K SAS, 400GB/10K SAS, 450GB/15K SAS, 500GB SATA, 750GB SATA, 1TB SATA
  – Maximum RAID groups: 50 / 75 / 100
  – RAID levels: 6 / 5 / 0+1 / 1 (SAS and SATA), 0 (SAS only)
  – Maximum LUs: 2048 / 4096 / 4096
  – Maximum LU size: 60TB
27
28
Back-end Architecture
[Diagram: SAS switched back-end architecture. CTL0 and CTL1 each contain a DCTL and a SAS protocol chip connected through SAS switches to the drives in the base unit and in the SAS/SATA-intermix expansion units, giving two paths to every drive. Moving from a loop to a switched topology improves failure diagnostics.]
29
[Diagram: each SAS back-end path is a 4-wide link running at 3Gbps per link (12Gbps aggregate). The wide links are dynamically allocated to any of the disks through the SAS switches (expanders), and SATA drives attach through an AAMux so that both controllers can access them.]
FC: Fibre Channel, SAS: Serial Attached SCSI, SATA: Serial Advanced Technology Attachment, AAMux: Active-Active Multiplexer
30
[Diagram: base unit and expansion unit layout with the field replaceable units (FRUs). Failed part and corresponding FRU: 1. Controller (including the SAS protocol chip, expander, and so on): the controller FRU, which also contains the cache and host interfaces; 2. ENC (including the expander): ENC and cable; 3. Cable between chassis: cable; 4. SAS drive: SAS drive; 5. SATA drive (includes the AAMux): SATA drive.]
31
32
Note that the SATA enclosures use a different connection method from the FC
enclosures.
An AAMux (SATA controller) chip is installed on every SATA disk.
33
Cross-controller Communication
[Diagram: in the Adaptable Modular Storage 2000 family the controllers (CTL0 and CTL1) are linked by PCI-Express, so cross-controller communication overhead has been reduced drastically compared with the previous design.]
34
Adaptable Modular Storage 1000 Family systems have the "data-share mode," which enables the non-owner controller to receive I/Os for the target LU. However, I/O performance is much reduced compared to the owner controller, so it is used only temporarily, for example as an alternate path if the main path fails.
In the Adaptable Modular Storage 2000 family, I/O performance through the non-owner controller is drastically improved. This "cross path" can be used as the normal I/O path with regard to performance.
In the diagram and following slides, Adaptable Modular Storage 1000 Family
represents previous Hitachi modular storage, including Adaptable Modular Storage
models 200, 500, and 1000, and Workgroup Modular Storage 100.
Internal Transaction
• Enables the MPU to access the other controller's CS/DS and devices, such as the FC protocol chip, directly. Cross-path I/O is greatly improved.
35
LU Ownership
[Diagram: on the Adaptable Modular Storage 1000 Family the administrator sets the owner CTL of each LU; on the Adaptable Modular Storage 2000 Family the microprogram decides the owner CTL of each created LU automatically, so users do not need to consider the owner CTL for each LU.]
36
The user does not need to consider which controller should be the owner when creating each LU, or for any other operations of the array.
Therefore the non-owner controller of the target LU may receive I/O commands from hosts, but this is not a problem because such commands are processed by the "high performance cross path."
The manual setting mode (like previous modular systems) is also available in the
Storage Navigator Modular 2 GUI.
37
Hosts can send commands to storage via any path on either controller for the purpose of path load balancing. This is possible because cross-path I/O is high performance and the ownership of each LU is stable.
In previous modular systems, ownership moved back and forth. If a path failed, a temporary cross-controller path was established for a predetermined period, such as one minute. After that, ownership changed to the other controller, sometimes described as "LU ping-pong."
[Diagram: before and after views of the paths through the switches (SW) to LU1.]
38
The load balancing function can be enabled and disabled. It should be disabled when using the Cache Partition Manager so that the partition setting for each LU is not changed automatically.
Microcode Updates
• Benefits
– Non-disruptive firmware updates are easily and quickly accomplished.
– Firmware can be updated without interrupting I/O.
[Diagram: microcode update comparison. On the Adaptable Modular Storage 1000 Family the user must change paths with the path manager before updating a controller; on the Adaptable Modular Storage 2000 Family there is no requirement to change paths because commands are transferred internally to the owning controller of the LUN. This is unique in the midrange.]
39
TrueCopy Extended Distance software creates remote copies of production LUNs (all models). The LUN Expansion feature is now in the base product (no program product keys required) and is configured with Storage Navigator Modular 2.
Note: The highlighted features on this slide are optional software features; they are an additional cost and require a key.
[Diagram: hosts/applications and LUs over time, illustrating faster or slower I/O depending on the configuration.]
Configuring the cache for partitions is a static adjustment that will not dynamically
change afterwards.
[Diagram: HDD usage with the default LU stripe size of 64KB compared with the I/O-optimized LU stripe size of 256KB.]
• 64KB stripe size: high throughput with concurrent I/Os to the HDDs; good for applications with transaction I/Os (database systems)
• 256KB stripe size: lower overhead because of fewer HDD I/Os; good for applications with sustained I/Os
10
By selecting the most appropriate stripe size, the number of HDD I/Os can be kept to a minimum, which improves performance.
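For example, a single 256KB sequential I/O against the default 64KB stripe size is spread across four HDDs (256KB / 64KB = 4 disk I/Os), while the same I/O against a 256KB stripe size is satisfied by one disk I/O; this is why the larger stripe size suits sustained, large-block workloads, while the default suits transaction-style I/O.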
Partitioning Cache
11
• Although proper use of the Cache Partition Manager can contribute to improving an application's performance, an incorrect configuration can easily achieve the opposite effect.
• One partition can be used by one or more LUNs.
12
Functionality
[Diagram: cache layout on Controller #0 and Controller #1 for LU0, LU1, and LU2. Each controller's cache contains an area for other LUs plus resident LU cache areas (Controller #0, LU0 and Controller #1, LU1), and the write data of a resident LU is duplicated in the other controller's cache.]
Hi-Track Monitor
• Hi-Track Monitor
– Is included with every Service contract
• Monitors the operation of the Adaptable Modular Storage system and Workgroup
Modular Storage systems at all times
– Is a JAVA software application
– Requires a customer PC or SUN workstation running JAVA runtime environment
– Can FTP or dial out via modem
– Interrogates Workgroup Modular Storage and Adaptable Modular Storage systems on
a timed interval for error monitoring (user configurable)
• Reports status every 24 hours by default, even if there are no error conditions
– Also supports Thunder 9200 modular storage system, Thunder 9500 V Series modular
storage systems, and various Fibre Channel switches
– Collects hardware status and error data
– The Hitachi Data Systems Support Center analyzes the data and implements corrective
action as needed
Hi-Track Monitor monitors the operation of the Adaptable Modular Storage and
Workgroup Modular Storage systems at all times, collects hardware
status and error data, and transmits this data via modem or FTP to the Hitachi Data
Systems Support Center. The Support Center analyzes the data and implements
corrective action as needed. In the unlikely event of a component failure, Hi-Track
service calls the Hitachi Data Systems Support Center immediately to report the
failure without requiring any action on the part of the user. Hi-Track Monitor
enables most problems to be identified and fixed prior to actual failure, and the
advanced redundancy features enable the system to remain operational even if one
or more components fail.
Hi-Track requires a customer PC running Microsoft Windows XP Professional,
Windows 2000 Professional, or Windows Server 2003, or a Sun workstation
running Solaris 8 or Solaris 9. The workstation needs to run 24/7 in order to
properly perform the Hi-Track Monitor function. Other programs can run
concurrently on the Hi-Track server.
TCP/IP connectivity from the Workgroup Modular Storage 100, Adaptable Modular Storage 200, Adaptable Modular Storage 500, and Adaptable Modular Storage 1000 systems to the Hi-Track Monitor workstation is required.
Note: Hi-Track Monitor does not have access to any user data stored on the Adaptable Modular Storage and Workgroup Modular Storage systems.
22
Module Objectives
23
Architecture
• Web GUI
• Client-server design
• Storage Navigator Modular 2 server: server software and database
• Server access via the web GUI (Internet Explorer or Firefox)
24
Storage Navigator Modular 2 runs from your primary management server or client
PC. It is designed on common web-based client-server technology using a standard
IP network. In other words, you can attach your model 2100 or 2300 and Storage
Navigator Modular 2 primary management server to your existing LAN
environment. Storage Navigator Modular 2 communicates with the storage system
through a web browser. If client PCs are attached to the network, they can connect
to the Storage Navigator Modular 2 primary management server and remotely
configure the storage system.
Installation Requirements
• Others: an optical drive, to install Storage Navigator Modular 2 from CD-ROM
• JRE* (Java Runtime Environment) 1.6.0: http://java.sun.com/products/archive/
25
Verify that your PC and operating system meet these basic requirements. These are standard for most of today's applications. In addition, the Release Notes and the User's Guide have current information.
The Java JRE 1.6.0 can be downloaded from the Sun website at the link above.
Online Help
26
27
Since this is the first time you are running Storage Navigator Modular 2, the Add
Array wizard appears, and prompts you to add your storage system.
Configure
28
Account Authentication
29
• Unification of LUNs
– Expand the size of a LUN and create a single unified LUN
– Maximum number of LUNs that can be unified is 128
• Re-Unification Available
– Further unification
• Release (Separation)
30
31
32
[Diagram: two S/390 systems connected over an ESCON/FICON network and an IP network.]
33
34
[Diagram: file access versus block access. UNIX and Wintel servers use the SAN for block access, while file access is provided over NAS protocols (NFS, CIFS, FTP, HTTP, and so on). The servers provide data sharing, application, web, Exchange, print, backup, database, terminal, security, user, and virus-scanning services.]
35
File Access
36
• Load Balancing
  – Dynamic Link Manager software distributes the storage accesses across multiple paths and improves the I/O performance with load balancing.
  – On the modular storage 1000 Family, Dynamic Link Manager software does not allow load balancing through two controllers; only through the same controller.
  – On the modular storage 2000 Family, Dynamic Link Manager software does allow load balancing through two controllers.
[Diagram: Controller0 and Controller1 with LU0, LU1, and LU2.]
37
Dynamic Link Manager software performs load balancing between owner paths.
When you set an LU, you determine the owner controller for the LU. Since the
owner controller varies depending on the LU, the owner path also varies depending
on the LU. A non-owner path is a path that uses a channel adapter other than the
owner controller (a non-owner controller). To prevent performance in the entire
system from deteriorating, Dynamic Link Manager software does not perform load
balancing between owner paths and non-owner paths. When some owner paths
cannot be used due to a problem such as a failure, load balancing is performed
among the remaining usable owner paths.
• Dynamic Link Manager software does not perform load balancing between
owner paths and non-owner paths.
– Owner path is the path to the logical unit number (LUN) through the controller
to which the logical unit (LU) currently is assigned on modular storage
systems
– Non-owner path is the path to the LUN through the other controller on
modular storage systems
– On enterprise storage systems, all paths are owner paths as there is no
concept of LU ownership
38
Dynamic Link Manager software does not perform load balancing between owner
paths and non-owner paths. It only uses owner paths for load balancing even if non-
owner paths are available. If no owner paths exist, then Dynamic Link Manager
software will perform load balancing between non-owner paths.
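As an illustrative sketch only (assumed here; exact options and output vary by Dynamic Link Manager version and platform), the dlnkmgr command line can be used to check path status and control load balancing:
dlnkmgr view -path               # list the configured paths and their status
dlnkmgr view -sys                # show function settings, including load balancing
dlnkmgr set -lb on -lbtype rr    # enable load balancing with the round-robin algorithm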
[Diagram: if I/O through Path 0 fails, Dynamic Link Manager software retries the I/O through Path 1 to the same controller (LU2).]
39
40
Online(E) — An error has occurred on the path and no path, among the paths accessing the same LU, has the Online status. If all the paths accessing the same LU have an Offline status, then one of the paths is changed to the Online(E) status so that the LU can still be accessed through at least one path.
The (E) indicates the error attribute, which indicates that an error occurred in the path.
Offline(E) — The status in which I/O cannot be performed because an error
occurred in the path.
The (E) indicates the error attribute, which indicates that an error occurred in the
path.
• Options Window
  – Dynamic Link Manager software version
  – Error management function settings
  – Select the severity of log and trace levels
41
42
43
Event notification
Alerts generated by Dynamic Link Manager software are displayed by Global Link Availability Manager software in near real time.
Path Management
Global view of paths and host devices (HDevs) for all Dynamic Link Manager software instances. Management capabilities are based on user role definitions.
Host Management
Centrally manages configuration of all Dynamic Link Manager software instances.
Host group management
A customized grouping of hosts created by an individual user.
Resource group management
Administrator controls user’s access to a specific group of hosts (subset of Dynamic
Link Manager software instances).
Access control
User role definitions control operational and host resource access.
[Diagram: the Global Link Manager software server is accessed by Global Link Availability Manager software clients through a web browser GUI. Hosts running Dynamic Link Manager software 5.2 to 5.7 are managed through Device Manager software agent 3.5 or later; hosts running Dynamic Link Manager software 5.8 or later are managed directly. The hosts connect to the storage subsystems over the SAN.]
44
Business Continuity
[Diagram: the Hitachi Storage Command Suite business continuity portfolio on Hitachi modular storage systems, backed by Hitachi Data Systems Continuity Services.
Local – High Availability: point-in-time clones and snapshots with Hitachi ShadowImage In-System Replication software and Hitachi Copy-on-Write Snapshot software.
Remote – Disaster Protection: point-in-time clones and snapshots with ShadowImage software and Copy-on-Write Snapshot software.]
On the left side of the graphic are examples of the Hitachi Data Systems high-
availability solutions that are built on the foundation of the high-end Hitachi Storage
systems and their 100% availability. On the right side, the focus is placed on remote
data protection technologies and solutions, and in essence, Disaster Recovery
solutions components.
Disaster Recovery is the planning and the processes associated with recovering your
data/information. Disaster Protection is usually focused on providing the ability to
duplicate key components of the IT infrastructure at a remote location, in the event
that the primary IT site is unavailable for a prolonged period of time. Disaster
protection solutions can also be used to minimize the duration of "planned" outages by providing an alternate processing facility while software or hardware maintenance or a technology refresh is performed at the primary site.
A Disaster Recovery environment is typically characterized by:
• Servers far apart
• Servers have separate resources
• Recovery from large-scale outage
• Major disruption
• Difficult to return to normal
• Recovery
– Note:
• DF700 must use RAID Manager (CCI)
• DF800
– must use RAID Manager (CCI) when replicating to/from DF700,
– can use RAID Manager (CCI) when replicating to/from DF800,
– can also use SNM2 GUI/CLI when replicating to/from DF800.
ShadowImage Software
• Features
  – Full copy of a volume at a point in time
  – No host processing cycles required
  – No dependence on operating system, file system, or database
  – Copy is RAID protected
  – Create up to three concurrent copies of the original LU
• Benefits
  – Protects data availability
  – Simplifies and increases disaster recovery testing
  – Eliminates the backup window
  – Reduces testing and development cycles
  – Enables non-disruptive sharing of critical information
[Diagram: production volume and copy of production volume; normal processing continues unaffected while the point-in-time copy is used for parallel processing.]
• Features
  – Models WMS100, AMS200, AMS500, AMS1000, and AMS 2000
  – Synchronous support
  – Asynchronous support in conjunction with ShadowImage software
  – Support for open environments
  – Installed in high-profile disaster recovery sites around the world
• Benefits
  – Provides fast recovery with no data loss
  – Distributes time-critical information to remote sites
  – Reduces downtime of customer-facing applications
  – Increases the availability of revenue-producing applications
[Diagram: P-VOL replicated to S-VOL.]
• Features
  – Models AMS500, AMS1000, and AMS 2000
  – Asynchronous replication
• Benefits
  – Does not affect host performance
  – Enables longer-distance disaster recovery and data protection
  – Can be used in lower-speed networks
[Diagram: a local AMS P-VOL and pool replicated through extenders to a remote AMS S-VOL and pool.]
Overview
10
Overview
11
[Diagram: asynchronous write to the S-VOL.]
12
Overview
System: WMS100, AMS200, AMS500, AMS1000
13
Differential Management
• DM-LU Overview
– DM-LU is used for saving cache resident ShadowImage Replication software
management information
– At Shutdown: Writes the management information from cache to DM-LU
– At Boot: Reads the management information from DM-LU to cache
[Diagram: at shutdown, the ShadowImage management metadata is copied from cache to the DM-LU.]
14
[Diagram: initial copy. All data is copied from the P-VOL to the S-VOL while the P-VOL remains available to the host for read/write I/O operations; a differential data bitmap tracks P-VOL changes.]
• Updates the S-VOL after the initial copy
• Write I/O to the P-VOL during the initial copy is duplicated to the S-VOL by the update copy after the initial copy
[Diagram: update copy. The P-VOL remains available to the host for read/write I/O operations while the differential data is copied to the S-VOL.]
The ShadowImage Replication software update copy operation, updates the S-VOL
of a ShadowImage Replication software pair after the initial copy operation is
complete. Update copy operations take place only for duplex pairs (status = PAIR).
As write I/Os are performed on a duplex P-VOL, the system stores a map of the P-
VOL differential data, and then performs update copy operations periodically based
on the amount of differential data present on the P-VOL as well as the elapsed time
between update copy operations. The update copy operations are not performed for
pairs with the following status: COPY(PD) (pending duplex), COPY(SP) (split
pending), PSUS(SP) (quick split pending), PSUS (split), COPY(RS) (resync),
COPY(RS-R) (resync-reverse), PSUE (suspended)
[Diagram: update copy example. Host I/O marks tracks 10, 15, 18, and 29 dirty on the P-VOL; further host I/O marks tracks 10, 19, and 23 dirty; the combined set of dirty tracks (10, 15, 18, 19, 23, and 29) is sent from the P-VOL to the S-VOL.]
19, 23, and 29 marked as dirty. These tracks are sent from the P-VOL to the S-VOL as part of an update copy operation.
3. Once the update copy operation in step 2 is complete, the P-VOL and S-VOL are declared as a PAIR.
[Diagram: application and backup (BKUP) activity over time.]
18
Overview
[Diagram: the P-VOL is linked to a pool.]
Configuration
[Diagram: pair configurations of 1:3 (a P-VOL with S-VOLs) and 1:15 (a P-VOL with V-VOLs).]
20
Operation Scenarios
21
Now the data block on the P-VOL needs to be written to. However, before the actual write is executed, the block is copied to the pool area. The set of pointers that actually represents the V-VOL is updated, and if there is now a request for the original block through a V-VOL, the block is physically read from the pool.
From the host's perspective, the V-VOL (snapshot image) has not changed, which is the intent.
If the Pool areas become full, all snapshots will be deleted. Pool utilization has to be
monitored.
22
Disaster Recovery
23
TrueCopy Specifications
[Diagram: TrueCopy pairs between volumes with different RAID configurations, for example a RAID-10 (2D+2D) P-VOL with a RAID-5 (5D+1P) S-VOL, or a RAID-5 (4D+1P) volume with a RAID-5 (8D+1P) volume.]
24
Configurations
[Diagram: TrueCopy Synchronous between a local P-VOL and a remote P-VOL over extenders (initiator INI to RCU target RCUT), with ShadowImage S-VOLs at both sites.] This configuration allows you to use ShadowImage software to provide multiple backup copies of a single TrueCopy software P-VOL at local as well as remote sites.
25
[Diagram: TrueCopy Synchronous between a local P-VOL and a remote P-VOL over extenders (initiator INI to RCU target RCUT), with Copy-on-Write Snapshot V-VOLs at both sites.] This configuration allows you to use Copy-on-Write software to provide multiple backup copies of a single TrueCopy software P-VOL at local as well as remote sites.
26
27
Functional Overview
[Diagram: a P-VOL on the local Adaptable Modular Storage system replicated to an S-VOL on the remote Adaptable Modular Storage system.]
28
29
[Diagram: TrueCopy Extended Distance pairs organized into consistency groups (CTG0 and CTG1), with Copy-on-Write Snapshot software creating V-VOLs of the P-VOLs and S-VOLs on the local and remote systems for each TrueCopy Extended Distance pair.]
31
32
RAID Manager CCI configures and manages the following replication products:
ShadowImage Replication software, Copy-on-Write Snapshot software, and
TrueCopy Remote Replication software.
RAID Manager CCI is also used for configuration of a few other products in the
enterprise area.
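As a minimal sketch of how a ShadowImage pair is typically driven from RAID Manager CCI (assuming two HORCM instances are already configured on the server and a device group named ora1 is defined in their HORCM_DEV sections, as in the configuration examples that follow):
horcmstart.sh 0 1                    # start HORCM instances 0 and 1
export HORCMINST=0                   # direct the following commands to instance 0
paircreate -g ora1 -vl               # create the pairs in group ora1; the local volumes become the P-VOLs
pairevtwait -g ora1 -s pair -t 300   # wait for the pairs to reach PAIR status (with a timeout)
pairdisplay -g ora1                  # display pair and volume status
pairsplit -g ora1                    # split the pairs to freeze point-in-time copies on the S-VOLs
pairresync -g ora1                   # resynchronize the pairs when the split copies are no longer needed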
33
HORCM
[Diagram: two servers, each running application software and a RAID Manager HORCM instance: instance 0 defined by HORCM0.conf and instance 1 defined by HORCM1.conf. Commands are issued to the instances, the instances communicate with each other over the LAN, and each instance accesses the storage system through a command device.]
34
HORCM_DEV
• Multi-path configuration
– LUN 9 is Mapped to CL1-B and CL2-B
HORCM_DEV
#dev_group dev_name port # target ID LUN# MU#
ora1 ora_tab1 CL1-B 0 9 0
Or
ora1 ora_tab1 CL2-B 0 9 0
9
35
HORCM_MON
#ip_address service poll(ms) timeout(ms)
SVR1 horcm0 6000 3000
HORCM_CMD
#dev_name
/dev/rdsk/c2t1d1s2 # Solaris
\\.\Physicaldrive2 # Windows NT, 2000 and 2003
\\.\Volume{f66c6208-6da0-11da-912a-505054503030} # Windows 2000 and 2003
HORCM_DEV
#dev_group dev_name port # target ID LUN# MU#
oradb1 disk1 CL1-A 3 1 0
HORCM_INST
#dev_group ip_address service
oradb1 SVR1 horcm1
36
HORCM_MON describes:
• ip_address: the IP address or host name of the server running instance 0.
• service (local service): the /etc/services file port name entry for instance 0. The port can be thought of as a "socket" number used to communicate with instance 1, which is also defined in the /etc/services file, and vice versa.
• poll interval in milliseconds (1000 milliseconds = 1 second): how often the HORCM daemon checks the command device for pair status. When this number is higher, the HORCM daemon overhead on the running server is reduced. 1000ms is the default value.
• timeout value in milliseconds (1000 milliseconds = 1 second): how long the HORCM daemon waits for status from instance 1 before timing out. In ShadowImage Replication software mode, this applies to communication between the two instances running on one server, when applicable.
HORCM_CMD describes the path to the raw device serving as the command device.
HORCM_DEV describes the source LUNs:
• dev_group: the group name that associates all LUNs to be controlled as a group and manipulated from one command.
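For completeness, a hypothetical matching horcm1.conf for the second instance might look like the following; the command device path and the S-VOL port, target ID, and LUN values shown here are placeholders, not values taken from this course:
HORCM_MON
#ip_address service poll(ms) timeout(ms)
SVR1 horcm1 6000 3000
HORCM_CMD
#dev_name
/dev/rdsk/c2t1d1s2 # Solaris command device (placeholder path)
HORCM_DEV
#dev_group dev_name port # target ID LUN# MU#
oradb1 disk1 CL2-A 3 2 0 # S-VOL location (placeholder values)
HORCM_INST
#dev_group ip_address service
oradb1 SVR1 horcm0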
37
38
Module Objectives
39
[Diagram: Replication Manager provides replication monitoring and management for open and mainframe (M/F) volumes. It relies on Storage Navigator and Device Manager for configuration and storage management, Business Continuity (BC) Manager for mainframe volumes, and RAID Manager for replication management.]
40
• Replication Manager software provides monitoring for both RAID series (open and mainframe volumes) and DF series storage subsystems (open volumes)
• Replication Manager software requires (is dependent on) Device Manager and uses RAID Manager command control interface (CCI) and the Device Manager agent for monitoring open volumes
  – Device Manager provides volume configuration management
  – RAID Manager (CCI) is used by Replication Manager for pair status watching
• Replication Manager software requires (is dependent on) Business Continuity Manager (BCM) or the Mainframe agent for monitoring mainframe volumes
Chart Legend
• TC stands for TrueCopy
• SI stands for ShadowImage
• UR stands for Universal Replicator
• CoW stands for Copy-on-Write
[Diagram: Replication Manager (HRpM) server and Device Manager (HDvM) server connected over an IP network to hosts running the Host Agent (agent base with HRpM and HDvM plug-ins) and RAID Manager (CCI). RAID Manager communicates with the storage systems through command (CMD) devices over the FC-SAN, and the servers also communicate with the storage system SVPs. A web browser provides the client interface.]
41
Product Notations
agent module. One agent install on the server works for Device Manager,
Replication Manager, and Provisioning Manager.
RAID Manager (CCI): The Replication Manager requires RAID Manager to
manage replication pair volumes. The servers on which the RAID Manager
software is installed must have a Host Agent so that Replication Manager can
recognize and manage the pair volume instances.
3. Pair Management Server (Mainframes):
BCM (Business Continuity Manager): BCM is the software product that works
on the mainframe and manages replication pair volumes assigned for the
mainframe computers. Business Continuity Manager 5.0 or later or Mainframe
Agent 6.0 or later can be used. The Replication Manager can monitor the
mainframe replication volumes by communicating with the BCM. Although
the Replication Manager V6.0 can create/modify/delete the open replication pair
volumes, it cannot create/modify/delete mainframe pair volumes even through
the BCM.
4. Host (Production Server) : A host runs application programs. The installation of
Device Manager agent is optional. Replication Manager can acquire the host
information (host name, IP address, and mount point) if the agent is installed on
it.
Types of Install
• New Installation
– Device Manager Server v6.0 and Device Manager Agent v6.0 are prerequisite products
• Upgrade Installation
– Upgrade from Replication Monitor v5.0 or later is supported
– Replication Monitor is replaced by Replication Manager
42
43
44
Applications are the critical driver of business process and decision making,
impacting organizational growth, risk, and profitability
Backup / DR Archiving
• Applications are the link between business and Information Technology (IT).
By focusing on applications and addressing their unique storage
requirements, Hitachi Data Systems can help organizations address their key
business challenges.
Services Oriented Storage Solutions is a business-centric framework for aligning IT
storage resources with constantly changing business requirements. It provides a
dynamic, flexible platform of integrated storage services enabling organizations and
users to optimize storage infrastructure while reducing cost and complexity.
[Diagram: most storage vendors focus on performance and capacity, often only from the storage perspective (storage view); Services Oriented Storage Solutions add the application view. Services Oriented Storage Solutions are comprised of hardware, software, and services.]
4
[Diagram: Services Oriented Storage Solutions architecture. Applications (email, CRM, file/print, database, ERP, ECM) sit on top of a storage platform consisting of object services (index, search, classification, security), file services (virtualization, replication, migration, de-duplication, security, encryption, archiving), and block services (virtualization, discovery, partitioning, provisioning, volume management, replication, migration, security, metering), running on tiered physical storage (FC, SATA, tape, archive). Storage practices alongside the stack include QoS, storage economics, SLA, data classification, RPO, RTO, risk analysis, compliance and archiving, chargeback, and consolidation and utilization with tiered storage.]
deliver because Service Oriented Storage Solutions are built upon a dynamic,
flexible platform of integrated storage services enabling customers to optimize
storage infrastructure while reducing cost and complexity. The platform is both
powerful and simple:
The architecture summary illustrates that the Services Oriented Storage Solutions
are comprised of an integrated stack of services including:
• Block Services, which include volume virtualization, discovery, provisioning, partitioning, volume management, replication, migration, security, and metering
• File Services, which include file virtualization, replication, migration, security, encryption, and archiving
• Object Services, which include content services including index, search, classification, and security
Solutions Focus
IT
7
Key Objective: Illustrate the link between customer business challenges and our
solution focus areas.
Key Points:
1. As illustrated on the previous slide Service Oriented Storage is a platform of
integrated services which used in conjunction create Services Oriented Storage
Solutions.
2. In addition to hardware and software services components, Services Oriented Storage Solutions offer professional consulting, design, and implementation services to ensure customers maximize their investment in Hitachi solutions.
3. Hitachi Data Systems' solution approach is to understand the customer's key business and Information Technology challenges, and then to deploy the appropriate solutions to address their needs.
Benefits
• Improved productivity of IT resources
• Integrated data center and enterprise
operations
• Utilization of enterprise storage assets
• Risk mitigation
• Proactive alerts on storage arrays to
prevent outages
• Disaster recovery management to
minimize downtime
Device Manager software manages all Hitachi Data Systems arrays - Thunder,
Lightning, and Universal Storage Platform — with the same interface. It can also
manage multiple arrays in a network environment. Targeted for users managing
multiple storage arrays in open or shared environments, Device Manager software
quickly discovers the key configuration attributes of storage systems and allows
users to begin proactively managing complex and heterogeneous storage
environments quickly and effectively using an easy-to-use browser-based Graphical
User Interface (GUI). Device Manager software enables remote storage management
over secure IP connections and does not have to be direct-attached to the storage
system.
[Diagram: the Hitachi Storage Management Suite arranged by functional layer. Application modules: business and application QoS modules (Oracle, Exchange, Sybase, SQL Server), QoS for file servers, Protection Manager, and Hitachi Dynamic Link Manager (path failover and failback, load balancing). Operations modules: Storage Services Manager, Chargeback, Path Provisioning, Global Reporter, Backup Services Manager, Replication Monitor, Tiered Storage Manager, and Tuning Manager (path management, capacity monitoring, performance monitoring). Array modules: configuration, reporting, provisioning, and replication through the HDS API and CIM/SMI-S services, plus Hitachi Resource Manager and Hitachi Performance Maximizer.]
10
This graphic represents a view of the Storage Management Suite laid out according
to functional layer. Light blue modules support heterogeneous environments. Dark blue modules are Hitachi storage system specific.
This is not a top-down dependency chart, although there are some top-down
dependencies here. Rather it is sorted into rows according to what the
purpose/benefit of the product is aimed at.
• The first layer at the bottom is Hitachi storage system-specific modules for supporting and interfacing with Hitachi arrays to get the most out of Hitachi Data Systems storage.
• The second layer is made up of products that support storage systems on an operational basis – things that make efficient and reliable management of storage possible.
• The top layer consists of modules that are application-specific tools to improve application-to-storage service levels.
11
[Diagram: Device Manager architecture. The production server (host) and the storage systems connect to the SAN; the Device Manager server (with HBase) and the management console (client) connect to the storage systems over the management LAN.]
12
[Diagram: provisioning flow. On the storage subsystem, allocate storage by selecting the optimal LDEVs from the storage pool; on the host, create the device file and then create the file system.]
14
15
• Functional View
[Diagram: the same Storage Management Suite functional-layer view as shown earlier, with Device Manager highlighted at the array layer alongside the HDS API and CIM/SMI-S services, Hitachi Resource Manager, and Performance Maximizer.]
16
This graphic is a view of the Storage Management Suite laid out according to
functional layer. Light blue modules support heterogeneous environments. Dark blue modules are Hitachi storage system specific.
This is not a top-down dependency chart, although there are some top-down
dependencies here. Rather it is sorted into rows according to what the
purpose/benefit of the product is aimed at.
• The first layer at the bottom is Hitachi storage system-specific modules for supporting and interfacing with Hitachi arrays to get the most out of Hitachi Data Systems storage.
• The second layer is made up of products that support storage systems on an operational basis, things that make efficient and reliable management of storage possible.
• The top layer is modules that are application-specific tools to improve application-to-storage service levels.
• Gather data from servers, databases, switches, and storage systems with
device-specific tools, then consolidate, analyze, and correlate data that is
presented in different formats.
[Diagram: data is gathered from the server (application), the SAN, and the storage system.]
17
Troubleshooting requires a view of the path from the application to the storage
system. Without a tool that consolidates and normalizes all of the data, the system
administrator has difficulty distinguishing between possible sources. When a
performance problem occurs or the "database (DB) application response time exceeds acceptable levels", the administrator must quickly determine whether the problem is in the application server.
Server/App Analysis — is the problem caused by trouble on the server? (DB, file
system, and HBA)
Fabric Analysis — is there a SAN switch problem? (Port, ISL, and more)
Storage Analysis — is the storage system a bottleneck?
All of the data from the components of the Storage network must be gathered by
different device-specific tools and interpreted, correlated and integrated manually,
including the timestamps, in order to find the root cause of a problem.
Some customers achieve this by exporting (CSV format) lots of data to spreadsheets
and then manually sorting and manipulating the data.
Performance Reporter
[Conceptual diagram: clients connect over the LAN to the Tuning Manager server. Agents for the platform (Sun Solaris, HP-UX, AIX, Windows), the Agent for RAID, and the Oracle, SAN, NAS, and SQL agents collect data from the monitored resources.]
Tuning Manager software consists of agents and a server. The agents collect
performance and capacity data for each monitored resource, and the server manages
the agents. This diagram shows an example system configuration.
Agents can run multiple instances to collect metrics from multiple application
instances, fabrics, and storage systems.
The instances of the Agent for RAID collect metrics from enterprise storage systems using an in-band Fibre Channel connection to the CMD device in the array. Modular storage is accessed via LAN using the DAMP utility to collect metric data.
In small environments, the Tuning Manager server can run concurrently with business applications on Sun Solaris and Microsoft Windows. The maximum number of resources manageable by one Tuning Manager server is 16,000, and in that case Tuning Manager requires installation on a dedicated server. To manage as many resources as possible with good performance, carefully consider the Tuning Manager system requirements.
Hitachi modular storage includes Adaptable Modular Storage and Thunder series storage.
Universal Storage Platform is Universal Storage Platform™.
Lightning 9900 is Lightning 9900™ Series enterprise storage systems.
Lightning 9900 V is Lightning 9900 V™ Series enterprise storage systems.
Tuning Manager
Advanced application-to-spindle reporting, analysis and troubleshooting for all Hitachi storage systems
Performance Monitor: detailed point-in-time reporting of individual Hitachi storage systems
Storage Services Manager (QoS modules): visibility from the application, HBA/host, and switch to the storage system
[Diagram: the I/O path from the application through the HBA/host and switch to the storage system, where Performance Monitor covers internal components such as the CHP, cache, ACP/DKC, array ports, parity groups, and disks.]
19
This is a visualization of how these products work, and what they cover.
Storage Services Manager software provides visibility to performance within the
storage network, from the application to the storage system port. It does not provide
insight within the storage system. It is useful when a SAN includes storage systems
from multiple vendors.
Performance Monitor provides in-depth, point-in-time information about
performance within a Hitachi storage system. It does not provide any information
about the network, the host, or the application. Nor does it provide any correlation
to that information, if used in conjunction with a product such as Storage Services
Manager software.
Tuning Manager software provides end-to-end visibility for storage performance.
Though limited to Hitachi storage systems, it provides the most thorough view of
the system, tracking an I/O from an application to the disk. This ability to correlate
this information, and link from step-to-step in the I/O path provides the most
efficient solution to identifying performance bottlenecks.
I/O response time, both host side and array side:
• 4.0 adds the ability to monitor the round trip response time for troubleshooting and proactive service level error condition alerting results in improved
20
The ability to view all SAN-attached servers, databases, file systems, switches,
storage systems, logical volumes, disk array groups, and their relationships to
each other
• Forecasting data can easily be extracted by logging in with "User" level security
• Alerts can trigger sending an email message, an SNMP trap message, or running a shell script/batch file
Collection Manager
[Diagram: the Collection Manager, agents (AGT-DB), and clients (Internet Explorer or Netscape Navigator) connect over the LAN.]
21
Performance Reporter does not display data from the Tuning Manager database; it displays data from the agent database directly.
22
Customers demand a product that assures the retention of authentic, fixed content
in an immutable form that provides the scalability needed to address ever-
increasing volumes of new content and the associated growth in storage capacity. It
must provide the reliability required to meet customer DR/BC policies as well as
SLAs needed to ensure content is accessible when needed. The Hitachi Data Systems
product delivers on all of these with the most robust platform for fixed content
archiving.
[Diagram: content sources and applications feeding the archive through ISV partners: digital video, satellite, biotechnology, medical, legal records, and email applications.]
24
Applications (for example, email and digital imaging) do not typically interact directly with the archive. They typically interface with a “middleware” ISV application that provides additional functionality before the data is passed to the archive. This additional functionality can include setting retention times, search, timed deletion of data, and replication. Once this middleware has processed the data, it passes the data, along with the metadata produced by this pre-ingestion grooming, to the archive for storage.
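A minimal sketch of this pre-ingestion grooming step is shown below, assuming a hypothetical middleware layer that attaches retention and integrity metadata before handing the object to the archive over HTTP. The endpoint URL, header name, and metadata fields are illustrative assumptions, not the Content Archive Platform interface.

    # Minimal sketch of pre-ingestion grooming by archive middleware. The archive
    # URL, header name, and metadata fields are assumptions for illustration only.
    import datetime
    import hashlib
    import json
    import urllib.request

    def groom_and_archive(payload: bytes, name: str, retention_days: int) -> None:
        """Attach retention and integrity metadata, then pass object and metadata on."""
        metadata = {
            "object_name": name,
            "retain_until": (datetime.date.today()
                             + datetime.timedelta(days=retention_days)).isoformat(),
            "sha256": hashlib.sha256(payload).hexdigest(),
        }
        request = urllib.request.Request(
            "http://archive.example.com/ingest/" + name,   # assumed endpoint
            data=payload,
            method="PUT",
            headers={"X-Object-Metadata": json.dumps(metadata)},
        )
        with urllib.request.urlopen(request) as response:
            print("Archive responded:", response.status)

    groom_and_archive(b"scanned contract", "contract-0001.pdf", retention_days=2555)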
Our ISV program is critical because it certifies the various ISV partners' middleware with our solution to ensure a seamless solution. We offer two levels of certification: compatible and integrated. Compatible means that the ISV middleware software works with our solution. Integrated means that the ISV partner's software has been modified to integrate more closely with our Content Archive Platform, allowing it to take advantage of advanced features such as retention time, shredding, and single instance.
Hitachi Data Systems ISV partners supporting the Content Archive Platform cover several application categories for content archiving: email, Enterprise Content Management, file system, and database archiving.
[Diagram: three solution options, selected according to the functionality demanded: the Content Archive Platform, or the diskless HCAP DL model supported with a USP V, or the diskless HCAP DL model supported with an NSC55.]
[Diagram: Content Archive Platform configuration. A cell package includes two nodes (labeled Cardiff in the diagram) and one model WMS100; the base model includes two cells starting at 4.8TB capacity; multiple cells can be combined through a network switch to form a larger system with a single archive. Each cell presents the archive over SMTP, CIFS, NFS, HTTP, and WebDAV.]
All Content Archive Platform cells in a system must be the same size; this applies both to the initial purchase and to later upgrades.
[Diagram: software components, including scheduling, media management, policy management, a user interface, and an index.]
• Feature:
– Storage array, host, and application capacity reporting
• Capability:
– Identify overused, underused, or wasted storage resources
– Provide capacity forecasting or predictive analysis (a simple forecasting sketch follows this list)
• Business Value:
– End-to-end storage capacity view from the host perspective, complementing the storage array-side views provided by other Storage Command Suite products
– Helps ensure the availability and performance of mission-critical business applications
– Easily deployable within a customer’s SAN environment
• What Makes This Unique?
– Application-level storage reporting without the need to install host-based agents
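The forecasting capability can be illustrated with a simple linear trend over historical capacity samples; the weekly figures and the 90 percent threshold below are invented for this sketch and are not produced by the product.

    # Illustrative linear-trend forecast of used capacity. The weekly samples and
    # the 90 percent alert threshold are invented for this sketch.
    weekly_used_tb = [40.0, 41.2, 42.1, 43.5, 44.2, 45.6]   # assumed history
    provisioned_tb = 60.0

    n = len(weekly_used_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(weekly_used_tb) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, weekly_used_tb))
             / sum((x - mean_x) ** 2 for x in xs))           # TB of growth per week

    threshold_tb = 0.9 * provisioned_tb
    weeks_to_threshold = (threshold_tb - weekly_used_tb[-1]) / slope

    print(f"Growth rate: {slope:.2f} TB/week")
    print(f"Weeks until 90% of provisioned capacity: {weeks_to_threshold:.1f}")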
• CLARiiON
• Symmetrix
• DMX
• XP Series
• FAS6000 Series
• FAS3100 Series
• FAS3000 Series
• FAS2000 Series
ACC — Action Code. A SIM (System Information Message) will produce an ACC, which takes an engineer to the correct fix procedures in the ACC directory in the MM (Maintenance Manual).
ACE (Access Control Entry) — Stores access rights for a single user or group within the Windows security model.
ACL (Access Control List) — Stores a set of ACEs, so it describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP (Array Control Processor) — Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back end, it controls data transfer between cache and the hard drives.
ACP PAIR — Physical disk access control logic. Each ACP consists of two DKA PCBs to provide eight loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory
ADC — Accelerated Data Copy
ADP — Adapter
ADS — Active Directory Service
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
AIX — IBM UNIX
AL (Arbitrated Loop) — A network in which nodes contend to send data and only one node at a time is able to send data.
AMS — Adaptable Modular Storage
APID — An ID to identify a command device.
APF (Authorized Program Facility) — In z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or "request"
Array Domain — All functions, paths, and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI and/or LU configurations.
ARRAY UNIT — A group of hard disk drives in one RAID structure. Same as Parity Group.
ASIC — Application-specific integrated circuit
ASSY — Assembly
Asymmetric virtualization — See Out-of-band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress.
ATA — Short for Advanced Technology Attachment, a disk drive implementation that integrates the controller on the disk drive itself; also known as IDE (Integrated Drive Electronics). Advanced Technology Attachment is a standard designed to connect hard and removable disk drives.
Authentication — The process of identifying an individual, usually based on a username and password.
IPSEC — IP security
iSCSI (Internet SCSI) — Pronounced "eye skuzzy." Short for Internet SCSI, an IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks. iSCSI supports a Gigabit Ethernet interface at the physical layer, which allows systems supporting iSCSI interfaces to connect directly to standard Gigabit Ethernet switches and/or IP routers. When an operating system receives a request, it generates the SCSI command and then sends an IP packet over an Ethernet connection. At the receiving end, the SCSI commands are separated from the request, and the SCSI commands and data are sent to the SCSI controller and then to the SCSI storage device. iSCSI will also return a response to the request using the same protocol. iSCSI is important to SAN technology because it enables a SAN to be deployed in a LAN, WAN, or MAN.
iSER — iSCSI Extensions for RDMA
ISL — Inter-Switch Link
iSNS — Internet Storage Name Service
ISPF — Interactive System Productivity Facility
ISC — Initial shipping condition
ISOE — iSCSI Offload Engine
ISP — Internet service provider
—K—
kVA — Kilovolt-ampere
kW — Kilowatt
—L—
LACP — Link Aggregation Control Protocol
LAG — Link Aggregation Groups
LAN — Local Area Network
LBA (logical block address) — A 28-bit value that maps to a specific cylinder-head-sector address on the disk.
LC (Lucent connector) — Fibre Channel connector that is smaller than a simplex connector (SC).
LCDG — Link Processor Control Diagnostics
LCM — Link Control Module
LCP (Link Control Processor) — Controls the optical links. The LCP is located in the LCM.
LCU — Logical Control Unit
LD — Logical Device
LDAP — Lightweight Directory Access Protocol
LDEV (Logical Device) — A set of physical disk partitions (all or portions of one or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage; also called a volume. An LDEV has a specific and unique address.
In recent years, automated storage provisioning, also called auto-provisioning, programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
Protocol — A convention or standard that enables the communication between two computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be
RAID-1 — Mirrored array and duplexing
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers
RAID-5 — Striped array with typically rotating parity, optimized for short, multi-threaded transfers
6. Under Attachments, click the Class Eval link. The Class Evaluation form opens. Complete the form and submit it.