
Certification Exam Preparation vILT: Foundations Modular
THE1853

Courseware Version 1.0


Notice: This document is for informational purposes only, and does not set forth any warranty, express or
implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This
document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data
Systems being in effect, and that may be configuration-dependent, and features that may not be currently
available. Contact your local Hitachi Data Systems sales office for information on feature and product
availability.
Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited
warranties. To see a copy of these terms and conditions prior to purchase or license, please call your local
sales representative to obtain a printed copy. If you purchase or license the product, you are deemed to have
accepted these terms and conditions.
THE INFORMATION CONTAINED IN THIS MANUAL IS DISTRIBUTED ON AN "AS IS" BASIS
WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NONINFRINGEMENT. IN NO EVENT WILL HDS BE LIABLE TO THE END USER OR ANY THIRD PARTY
FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS MANUAL, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL OR LOST DATA,
EVEN IF HDS IS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.
Hitachi Data Systems is registered with the U.S. Patent and Trademark Office as a trademark and service
mark of Hitachi, Ltd. The Hitachi Data Systems logotype is a trademark and service mark of Hitachi, Ltd.
The following terms are trademarks or service marks of Hitachi Data Systems Corporation in the United
States and/or other countries:

Hitachi Data Systems Registered Trademarks


Essential NAS Platform Hi-Track ShadowImage TrueCopy

Hitachi Data Systems Trademarks


HiCard HiPass Hi-PER Architecture Hi-Star
Universal Star Network Universal Storage Platform
All other trademarks, trade names, and service marks used herein are the rightful property of their respective
owners.
NOTICE:
Notational conventions: 1KB stands for 1,024 bytes, 1MB for 1,024 kilobytes, 1GB for 1,024 megabytes, and
1TB for 1,024 gigabytes, as is consistent with IEC (International Electrotechnical Commission) standards for
prefixes for binary and metric multiples.
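For illustration only (this sketch is not part of the original courseware), the binary multiples defined above can be applied in a few lines; the `to_bytes` helper name is an assumption, not an HDS tool:

```python
# Binary multiples as defined in the notational conventions above:
# 1KB = 1,024 bytes, 1MB = 1,024KB, 1GB = 1,024MB, 1TB = 1,024GB.
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB

def to_bytes(value, unit):
    """Convert a value in KB/MB/GB/TB to bytes using binary multiples."""
    units = {"KB": KB, "MB": MB, "GB": GB, "TB": TB}
    return value * units[unit]

print(to_bytes(1, "GB"))   # 1,073,741,824 bytes
print(to_bytes(2, "TB"))   # 2,199,023,255,552 bytes
```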
©2009, Hitachi Data Systems Corporation. All Rights Reserved
HDS Academy 0029

Contact Hitachi Data Systems at www.hds.com.

Page ii HDS Confidential: For distribution only to authorized parties.


Product Names mentioned in courseware:

Enterprise Storage Systems


• Hitachi Universal Storage Platform™ V
• Hitachi Universal Storage Platform™ VM
• Hitachi Universal Storage Platform™
• Hitachi Network Storage Controller
Legacy Products:
• Hitachi Lightning 9900™ V Series enterprise storage systems
• Hitachi Lightning 9900™ Series enterprise storage systems

Modular Storage Systems


• Hitachi Adaptable Modular Storage
• Hitachi Workgroup Modular Storage
• Hitachi Simple Modular Storage
• Hitachi Adaptable Modular Storage 2000 Family
Legacy Products:
• Hitachi Thunder 9500™ Series modular storage systems
• Hitachi Thunder 9200V™ entry-level storage

Management Tools
• Hitachi Basic Operating System
• Hitachi Basic Operating System V
• Hitachi Resource Manager™ utility package
  – Modular Volume Migration software
  – LUN Manager/LUN Expansion
  – Network Data Management Protocol (NDMP) agents
  – Logical Unit Size Expansion (LUSE)
  – Cache Partition Manager feature
  – Cache Residency Manager feature
  – Storage Navigator program
  – Storage Navigator Modular program
  – Storage Navigator Modular 2 program

Replication Software
Remote Replication:
• Hitachi Universal Replicator software
• Hitachi TrueCopy® Heterogeneous Remote Replication software bundle
• Hitachi TrueCopy® Remote Replication software bundle (for modular systems)



• Hitachi TrueCopy® Synchronous software
• Hitachi TrueCopy® Asynchronous software
• Hitachi TrueCopy® Extended Distance software
In-System Replication:
• Hitachi ShadowImage® Heterogeneous Replication software (for enterprise systems)
• Hitachi ShadowImage® Replication software (for modular systems)
• Hitachi Copy-on-Write Snapshot software

Hitachi Storage Command Software Suite


• Hitachi Chargeback software
• Hitachi Device Manager software
• Hitachi Dynamic Link Manager software
• Hitachi Global Link Manager software
• Hitachi Global Reporter software
• Hitachi Path Provisioning software
• Hitachi Protection Manager software
• Hitachi QoS for File Servers software
• Hitachi QoS for Oracle software
• Hitachi Replication Monitor software
• Hitachi Storage Services Manager software
• Hitachi Tiered Storage Manager software
• Hitachi Tuning Manager software

Other Software
• Hitachi Backup and Recovery software, powered by CommVault®
• Hitachi Backup Services Manager software, powered by APTARE®
• Hitachi Business Continuity Manager software
• Hitachi Command Control Interface (CCI) software
• Hitachi Dynamic Provisioning software
• Hitachi Storage Resource Management Solutions
• Hitachi Volume Migration software
• Hi-Track® Monitor

Other Solutions and Terms


• Hitachi Content Archive Platform
• Hitachi Essential NAS Platform®
• Hitachi High-performance NAS Platform, powered by BlueArc®
• Hi-Star™ crossbar switch architecture
• Hitachi Universal Star Network™ V



Contents

INTRODUCTION ............................................................................... IX
Hitachi Data Systems Certified Professional Program 2009.......................ix
Framework................................................................................................... x
Exams.......................................................................................................... x
Website........................................................................................................xi
Foundations Track......................................................................................xii
Program Elements..................................................................................... xiii
Test Development ..................................................................................... xiii
Certification Exam Preparation vILT: Foundations Modular .....................xiv

1. SECTION 1 ............................................................................... 1-1


Hitachi Adaptable Modular Storage 1000 Family Architecture ......... 1-2
Hitachi Data Systems Midrange Storage Offerings ....................... 1-3
Features ......................................................................................... 1-5
Workgroup Modular Storage 100................................................... 1-7
Adaptable Modular Storage 200 .................................................... 1-8
Adaptable Modular Storage 500 .................................................... 1-9
Adaptable Modular Storage 1000 ................................................ 1-10
Hitachi Adaptable Modular Storage 2000 Family Architecture
and Administration........................................................................ 1-11
Product Description...................................................................... 1-12
Product Line Positioning .............................................................. 1-14
Features ....................................................................................... 1-15
Specifications ............................................................................... 1-16
Software and Firmware Offerings ................................................ 1-18
External Design and Connections................................................ 1-19
Host Storage Domains (Host Groups) ......................................... 1-20
Highlights ..................................................................................... 1-21
Model 2100 Controller Architecture ............................................. 1-26
Model 2300 Controller Architecture ............................................. 1-27
Model 2500 Controller Architecture ............................................. 1-28
Specifications ............................................................................... 1-29
Back-end Architecture.................................................................. 1-31
Disk Expansion Tray .................................................................... 1-33
Active-Active I/O Architecture ............................................................ 1-34
Cross-controller Communication.................................................. 1-35
Internal Transaction ..................................................................... 1-36
LU Ownership .............................................................................. 1-37
Controller Load Balancing............................................................ 1-39
Microcode Updates ...................................................................... 1-40

2. SECTION 2 ............................................................................... 2-1


Hitachi Adaptable Modular Storage Software..................................... 2-2
Software Feature Overview ........................................................... 2-3
Launch Advanced Settings ............................................................ 2-4
Cache Partition Manager Feature.................................................. 2-5
Advantage of Selectable Segment Size – Small I/O...................... 2-6
Advantage of Selectable Segment Size – Large I/O ..................... 2-7
Advantage of Global Cache ........................................................... 2-8
Advantage of Partitioned Cache .................................................... 2-9
Advantage of Selectable Stripe Size ........................................... 2-10
Partitioning Cache........................................................................ 2-11
Installing Cache Residency Manager Feature............................. 2-12
Functionality ................................................................................. 2-13
Overview of Performance Monitor Feature.................................. 2-14


Enabling Performance Data Collection ....................................... 2-15


Overview of Modular Volume Migration ...................................... 2-17
Migration From SAS Drives to SATA Drives ............................... 2-18
Migrating Volumes for Performance............................................ 2-19
Volume Migration Setup .............................................................. 2-20
Hi-Track Monitor .......................................................................... 2-21
Storage Navigator Modular 2 Program ............................................. 2-22
Module Objectives ....................................................................... 2-23
Architecture ................................................................................. 2-24
Installation Requirements ............................................................ 2-25
Online Help ........................................................................... 2-26
Start From Web Browser ........................................................ 2-27
Configure............................................................................... 2-28
Account Authentication ......................................................... 2-29
LUN Expansion Overview ..................................................... 2-30
Overview of LUN Concatenation........................................... 2-31
Hitachi Essential NAS Platform ......................................................... 2-32
Hitachi Enterprise Storage System Connectivity History ...... 2-33
Essential NAS Platform Introduction..................................... 2-34
Essential NAS Server ........................................................... 2-35
Hitachi Dynamic Link Manager and Hitachi Global Link
Availability Manager Software..................................................... 2-36
Dynamic Link Manager Features .......................................... 2-37
Dynamic Link Manager Software GUI .................................. 2-42
Problems and Solutions ........................................................ 2-43
Global Link Availability Manager Software Features ............ 2-44
Dynamic Link Manager Software and Global Link
Availability Manager Working Together.......................... 2-45

3. SECTION 3 ...............................................................................3-1
Business Continuity ...............................................................................3-2
Business Continuity Solutions...................................................3-3
RAID Manager (CCI) .................................................................3-5
ShadowImage Software ............................................................3-6
TrueCopy Remote Replication Software...................................3-7
TrueCopy Extended Distance ...................................................3-8
Hitachi ShadowImage® Replication Software ......................................3-9
Overview ................................................................................ 3-10
Applications for ShadowImage Replication Software ............ 3-11
Overview ................................................................................ 3-12
Internal ShadowImage Replication Software Operation ........ 3-13
Overview ................................................................................ 3-14
Differential Management ........................................................ 3-15
ShadowImage Replication software Copy Operations........... 3-16
ShadowImage Replication Software Commands................... 3-18
ShadowImage Replication Software Operations ................... 3-20
Hitachi Copy-on-Write Snapshot Software ....................................... 3-21
Overview ................................................................................ 3-22
Operation Scenarios............................................................... 3-24
Hitachi TrueCopy® Remote Replication Software ............................ 3-25
Disaster Recovery .................................................................. 3-26
TrueCopy Specifications ........................................................ 3-27
Configurations ........................................................................ 3-28
TrueCopy and Copy-on-Write Snapshot Configurations........ 3-29
TrueCopy Extended Distance ............................................... 3-30
Functional Overview............................................................... 3-31
Concurrent Use with Other Copy Products ............................ 3-32


Examples of Supported Configurations .................................. 3-33


RAID Manager and Command Control Interface............................... 3-34
Command Control Interface.................................................... 3-35
HORCM_DEV ......................................................................... 3-37
Horcm0.conf Managing One Volume ..................................... 3-38
Some CCI Commands............................................................ 3-40
Managing Replication with the Hitachi Replication
Manager Software......................................................................... 3-41
Module Objectives ....................................................................... 3-42
Positioning of Replication Manager ............................................ 3-43
Architecture of Replication Manager in an Open Systems
and Mainframe Environment................................................ 3-44
Types of Install............................................................................. 3-46
Installation and Configuration of Prerequisite Software .............. 3-47
Concept of Resource Groups ...................................................... 3-48

4. SECTION 4 ........................................................................................ 4-1


Services Oriented Storage Solutions from Hitachi Data Systems ................ 4-2
Applications are the Link ............................................................... 4-3
Services Oriented Storage Solutions............................................. 4-4
Services Oriented Storage Solutions: One Platform for All Data .. 4-5
Services Oriented Storage Solutions: Architecture Summary....... 4-6
Solutions Focus ............................................................................. 4-8
Hitachi Device Manager Software ..................................................................... 4-9
Device Manager Software Value Proposition.............................. 4-10
Hitachi Storage Management Suite Products ............................. 4-11
Device Manager Software Value Proposition.............................. 4-12
Device Manager Software Components...................................... 4-13
Device Manager Software Provisioning Assistant....................... 4-14
Device Manager Command Line Interface .................................. 4-15
Hitachi Tuning Manager Software .................................................................. 4-16
Storage Management Suite Products.......................................... 4-17
The Performance and Capacity Management Challenge of a
Networked Storage Environment........................................ 4-18
Tuning Manager Agents .............................................................. 4-19
Hitachi Performance Monitoring and Reporting Products ........... 4-20
Capacity and Performance Management .................................... 4-22
Tuning Manager Performance Reporter...................................... 4-24
Hitachi Content Archive Platform ................................................................... 4-25
What an Active Archive Solution Must Deliver ............................ 4-26
Value of ISV Partner Ecosystem ................................................. 4-27
Three Solutions............................................................................ 4-28
Packaging and Configuration ...................................................... 4-29
Virtual Tape Library Solutions by Hitachi Data Systems And
Hitachi Data Protection Suite Solutions .................................................. 4-30
Virtual Tape Library ..................................................................... 4-31
Hitachi Data Protection Suite Platform ........................................ 4-32
Hitachi Data Protection Suite....................................................... 4-33
Hitachi Storage Capacity Reporter Introduction .......................... 4-34
Features, Capabilities, and Value................................................ 4-35
Supported Storage Arrays ........................................................... 4-36

GLOSSARY
EVALUATING THIS COURSE



Introduction
Hitachi Data Systems Certified Professional Program 2009

• Overview
– The Hitachi Data Systems Academy has a fundamental role to play in the
future of Hitachi Data Systems. Certification is a key component in
education as it validates skills and knowledge for partners and Hitachi
Data Systems personnel.

– The Hitachi Data Systems Certified Professional Program is designed to meet the following goals:
• To provide validation of skills and knowledge to meet the business
needs of Hitachi Data Systems
• To increase customer and partner satisfaction leading to revenue
growth
• To create lifelong advocates of Hitachi Data Systems through
continuing education, certification and qualification programs
• To lower support costs by increasing the technical competencies of
partners and Hitachi Data Systems personnel




Framework

All tracks build on the Foundations level (Hitachi Certified Professional). Integration, Implementation, Storage Manager, and Architect are certification tracks; Sales is a qualification track.

• Integration — HDS personnel and I&C Partners — Hitachi Data Systems Certified Integration Professional
• Implementation — HDS personnel and Authorized Partners — Tiered credentials: Hitachi Data Systems Certified Implementer; Hitachi Data Systems Certified Implementation Specialist
• Storage Manager — Customers — Hitachi Data Systems Certified Storage Manager; HDS Storage Manager Expert (SNIA exam required)
• Architect — HDS personnel and Authorized Partners — Hitachi Data Systems Certified Storage Architect; HDS Architect Expert (SNIA exam required)
• Sales — HDS personnel and Authorized Partners — Hitachi Data Systems Qualified Sales Professional

Exams

Current Certification Exams


Hitachi Data Systems Storage Foundations - Enterprise exam (HH0-110)
Hitachi Data Systems Storage Foundations - Modular exam (HH0-120)
Hitachi Data Systems Implementation - Enterprise exam (HH0-210)
Hitachi Data Systems Implementation - Modular exam (HH0-220)
Hitachi Data Systems Implementation Specialist - Business Continuity Exam (HH0-270)
Hitachi Data Systems Implementation Specialist - Storage Management Exam (HH0-280)
Hitachi Data Systems Storage Manager - Business Continuity Enterprise exam (HH0-330)
Hitachi Data Systems Storage Manager - Business Continuity Modular exam (HH0-340)
Hitachi Data Systems Storage Manager - Storage Management exam (HH0-380)
Hitachi Data Systems Architect - Business Continuity exam (HH0-400)
Hitachi Data Systems Architect - Performance/Virtualization exam (HH0-440)
Hitachi Data Systems AMS/WMS I&C Authorization Exam (EXM0005)
Hitachi Data Systems AMS 2000 I&C Authorization Exam (EXM0200)

Planned for 2009


Hitachi Data Systems Implementation - File Services - High-Performance NAS exam (HH0-250)
Hitachi Data Systems Implementation - File Services - Essential NAS exam (HH0-255)
Hitachi Data Systems Architect - File Services - NAS exam (HH0-450)

Sales Qualification
Hitachi Data Systems Sales Foundation Qualification Exam (HDS-SQ100)


Website

www.hds.com/certification


Foundations Track

Recommended prerequisite: Hitachi Data Systems Basic Storage Course

Hitachi Data Systems Certified Professional credential (all technical audiences): pass one of the two following Foundations exams:
• Hitachi Data Systems Storage Foundations - Enterprise exam (HH0-110), or
• Hitachi Data Systems Storage Foundations - Modular exam (HH0-120)

Both tests are available worldwide at Prometric testing centers. To schedule, go to www.2test.com. Cost is $200 in the U.S. and Canada, $225 outside the U.S. and Canada.

Supporting courses:
• Enterprise exam: THI0517, 4-day ILT, Hitachi Data Systems Storage Foundations - Enterprise
• Modular exam: THI0515, 3-day ILT, Hitachi Data Systems Storage Foundations - Modular


Program Elements

• Strategic Plan
• Market and Audience Research
• Program Assessment, Gap Analysis and Development
• Operations
• Job Task Analysis
• Curriculum review, Gap Analysis and Development
• Marketing Plan
• Execution, Measurements and Evaluation

Test Development

• Leverages Job Task Analysis research


• Subject Matter Experts, primarily from the field, write the exam questions
• Exam development workshops are led by a psychometrician
• All exam items have a documented source
• Exam development goes through several stages of technical review
• Exam goes through beta process
• Cut scores determined by psychometrics
• Exams are updated in a scheduled timeline


Certification Exam Preparation vILT: Foundations Modular

• Course Goal
– This virtual instructor-led course helps learners prepare for and take the
Hitachi Data Systems Foundations Modular Certification exam (HH0-120).
This refresher provides a focus on key areas of expertise for the Hitachi Data
Systems Professional (Foundations – Modular Track) credential. The training
is applicable for those with experience with Hitachi Data Systems Modular
products and technology.
• Certification Exam
– There is no online Prometric test available at the end of this session.
– Learners must take the exam at a Prometric test site near where they live.

• Course Structure
– Section 1
• Hitachi Adaptable Modular Storage 1000 Family Architecture
• Hitachi Adaptable Modular Storage 2000 Family Architecture and Administration
• Active-Active I/O Architecture
– Section 2
• Hitachi Adaptable Modular Storage Software
• Storage Navigator Modular 2 Program
• Hitachi Essential NAS Platform
• Hitachi Dynamic Link Manager and Hitachi Global Link Availability Manager Software
– Section 3
• Business Continuity
• Hitachi ShadowImage® Replication Software
• Hitachi Copy-on-Write Snapshot Software
• Hitachi TrueCopy® Remote Replication Software
• RAID Manager and Command Control Interface
– Section 4
• Services Oriented Storage Solutions from Hitachi Data Systems
• Hitachi Device Manager Software
• Hitachi Tuning Manager Software
• Hitachi Content Archive Platform
• Virtual Tape Library Solutions by Hitachi Data Systems And Hitachi Data Protection Suite
Solutions



1. Section 1
Hitachi Adaptable Modular Storage 1000 Family Architecture

Hitachi Adaptable Modular Storage 2000 Family Architecture

Active-Active I/O Architecture



Hitachi Adaptable Modular Storage 1000 Family Architecture




Hitachi Data Systems Midrange Storage Offerings

The product positioning slide plots the midrange line by scalability and performance: Hitachi Workgroup Modular Storage 100, then Hitachi Adaptable Modular Storage 200, 500, and 1000, with upgrade paths up the line and the Hitachi Network Storage Controller model NSC55 at the top.

• All lines easily managed from an integrated software suite
• All models compatible for tiered storage
• Remote copy of data between lines

Adaptable Modular Storage:
• 1 or 2 CTL
• Up to 8 4Gb front-end ports
• Up to 8 2Gb back-end ports
• Up to 450 HDDs maximum

Adaptable Modular Storage and Workgroup Modular Storage product lines consist
of four products:
The Workgroup Modular Storage 100 replaces the Thunder 9520V™ workgroup
modular storage. It is an all-SATA device designed for the SMB/SME market and as
an archive platform for tiered storage. The Workgroup Modular Storage 100 is not
upgradeable to the Adaptable Modular Storage line.
The Adaptable Modular Storage 200 replaces the Thunder 9530V™ entry-level
storage deck. It supports both SATA and Fibre Channel drives and is intended for
the lower end of the modular market. The model Adaptable Modular Storage 200
can be upgraded to the model Adaptable Modular Storage 500.
The Adaptable Modular Storage 500 replaces the Thunder 9570V™ high-end
modular storage. It also supports SATA and Fibre Channel drives and is intended
for the middle to high end of the modular market.
The Adaptable Modular Storage 1000 replaces the Thunder 9585V™ ultra high-end
modular storage system. The Adaptable Modular Storage 1000 system offers the best
midrange performance on the market.
The Adaptable Modular Storage and Workgroup Modular Storage families have
more functionality, capacity, reliability and performance than the Thunder series.
They use the same architecture as the Thunder (legacy system) series and customers


familiar with those products will have an easy time migrating to the new systems.
From a product speeds and feeds perspective Hitachi competes effectively against
its primary competitors. The Adaptable Modular Storage and Workgroup Modular
Storage are positioned to be 25 to 40 percent less expensive than their leading
competitors’ comparable products while being more scalable. Customers should
find this especially appealing as Hitachi Data Systems is known for providing a high
level of quality and among the best customer satisfaction ratings.
The Network Storage Controller, model NSC55 is differentiated from Adaptable
Modular Storage and Workgroup Modular Storage by having the Universal Star
Network architecture (Adaptable Modular Storage and Workgroup Modular
Storage continues to use the High Performance architecture). In addition, the NSC55,
unlike the Adaptable Modular Storage and Workgroup Modular Storage families,
supports heterogeneous storage and OS390 FICON or ESCON ports.
Note: The Adaptable Modular Storage/Workgroup Modular Storage families can
store OS390 volumes when attached as external storage to a Universal Storage
Platform or Network Storage Controller.
The combination of the new cost effective Adaptable Modular Storage and
Workgroup Modular Storage midrange storage systems with scalable capacity and
the Universal Storage Platform and Network Storage Controller enables an
intelligent tiered storage network that will ultimately reduce cost and complexity
within the data center.


Features

• RAID-6 (“n” data and two parity)


– Greater reliability (especially for SATA)
– More reliable rebuild in event of disk drive failure
• Cache Partition Manager
– Partition cache, set segment size
– Specify unique stripe sizes
• HDD Roaming - select 'any' disk
• Sparing “no copy back”
• Native NAS (option)
– CIFS and NFS
– Linux Kernel
– Integrated software
• Native iSCSI (option)
• Logical Unit Migration software
• Better performance and greater scalability
• RoHS compliant
• Many software improvements

Several new features are available in the Adaptable Modular Storage/Workgroup Modular Storage models:
RAID-6 (“n” data, 2 parity) allows for greater availability by providing an additional
parity disk, avoiding a single point of failure during RAID group rebuild in
the event of an HDD failure. This is especially important for large-density drives (e.g.,
500GB SATA), where a parity RAID rebuild can take many hours.
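As a hedged illustration (not from the courseware; the group size and helper name are assumptions), the capacity cost of that second parity drive can be computed for an n+2 RAID-6 group versus an n+1 RAID-5 group:

```python
def usable_capacity(total_drives, parity_drives, drive_gb):
    """Usable capacity of a parity RAID group in GB.

    RAID-5 dedicates one drive's worth of capacity to parity,
    RAID-6 dedicates two; the rest holds data.
    """
    data_drives = total_drives - parity_drives
    return data_drives * drive_gb

group = 8    # drives in the RAID group (hypothetical example)
drive = 500  # GB per drive, matching the 500GB SATA case above

raid5 = usable_capacity(group, 1, drive)  # 7 data + 1 parity -> 3500 GB
raid6 = usable_capacity(group, 2, drive)  # 6 data + 2 parity -> 3000 GB
print(raid5, raid6)
```

The 500GB of capacity given up buys tolerance of a second drive failure during the long rebuild window the notes describe.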
HDS now offers HDD roaming, in which hot spares (Dynamic Sparing) can be
allocated flexibly anywhere within the system and allocated very quickly in the
event of a failure. Copy back is no longer required (the spare becomes part of the
rebuilt RAID group, and the failed HDD can be replaced with a new spare).
Cache Partition Manager is available to enable customers to better segment random
and sequential I/O applications to cache partitions.
Hitachi Data Systems offers a native Network Attached Storage (NAS) feature and
an iSCSI feature. Customers with IP-based storage networks will be able to take
advantage of the Adaptable Modular Storage and Workgroup Modular Storage
features. Note: Except for the Adaptable Modular Storage 1000, a particular product
will support only a single protocol: Fibre Channel, iSCSI, or NAS.
Logical Unit Migration software enables volumes to be moved from one RAID
group to another without impacting the application server.

The Adaptable Modular Storage systems scale higher and offer significant
performance boosts over their Thunder 9500 V Series predecessors.
The Adaptable Modular Storage and Workgroup Modular Storage families are
RoHS (Restriction of Hazardous Substances) compliant, meeting strict EU guidelines
for reducing the use of certain hazardous substances in electrical and electronic
equipment in order to protect human and animal health and the environment.

Workgroup Modular Storage 100

• Single or dual controller models
– Cache size: 512MB to 2GB
• SATA disks only (3U trays, 15 disk drives/tray)
– Base (incl. 1st tray) with up to six additional trays
– 105 total disk drives = 78.75TB
• 250GB, 500GB, 750GB SATA
– RAID levels: 6, 5, 1+0, and 1
– Maximum LUNs: 512
• Flexible connectivity:
– Default: two Fibre Channel ports with 4 host connectors (dual controller)
• 1 or 2Gb/s, mini-hub architecture (mounted on the motherboard)
– With upgrade: four independent Fibre Channel ports
• 1, 2, or 4Gb/s
– 512 virtual ports with Host Storage Domains
– Embedded NAS or iSCSI connectivity
– Two 2Gb/s FC-AL backend paths (one per controller)
– Replaces the Thunder 9520V system (4U controller with 15 disk drives)
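The 78.75TB maximum-capacity figure above follows from simple arithmetic. An illustrative sketch (variable names are ours):

```python
# Maximum raw capacity of the Workgroup Modular Storage 100:
trays = 1 + 6             # base tray plus up to six additional trays
drives_per_tray = 15
largest_sata_gb = 750     # largest supported SATA drive

total_drives = trays * drives_per_tray            # 105 disk drives
raw_capacity_tb = total_drives * largest_sata_gb / 1000

print(total_drives, raw_capacity_tb)  # 105 drives, 78.75 TB
```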

The Workgroup Modular Storage 100 is available desk-side or rack-mounted
(standard 19-inch rack) and uses 230V power. There is no single point of failure for
dual-controller models. The mini-hub refers to the same ASIC (chip) controlling both
the front-end and back-end processes within the box. This allows adequate performance
for the SMB and archive platform at a very low price.
In all Adaptable Modular Storage/Workgroup Modular Storage trays, power
supplies are duplicated to prevent a tray going down from a power supply failure. A tray
can run on one power supply without any problem.
Note: Unlike the Thunder 9520V system, the Workgroup Modular Storage 100 is
SATA only. The mini-hub architecture yields a very good price-to-performance ratio
and runs the front end and back end of the storage system from a single processor.
Target applications:
• Storage consolidation
• Microsoft Exchange
• Backup and data protection
• Tiered storage (as archive)
• Tape replacement

Adaptable Modular Storage 200

• Single or dual controller models
– Cache size: 1GB to 4GB
• Fibre Channel and SATA disk intermix (3U trays, 15 disk drives/tray)
– Base (incl. 1st tray) with up to six additional trays
– 105 total disk drives = 72TB
• 73GB, 146GB, or 300GB FC
• 250GB, 500GB, and 750GB SATA
– RAID levels: 6, 5, 1+0, 1, 0*
– Maximum LUNs: 512
• Flexible connectivity:
– Default: two Fibre Channel ports with 4 host connectors (4U controller with 15 disk drives)
• 1 or 2Gb/s, mini-hub architecture (mounted on the motherboard)
– With upgrade: four independent Fibre Channel ports
• 1, 2, or 4Gb/s
– 512 virtual ports with Host Storage Domains
– Embedded NAS or iSCSI
– Two 2Gb/s FC-AL backend paths
• Replaces the Thunder 9530V system (*RAID-0 available on FC drives only)

The Adaptable Modular Storage 200 is almost identical to the Workgroup Modular
Storage 100, except that it supports Fibre Channel (FC) drives, therefore offering
somewhat better performance and availability, and it includes two FC-AL backend
paths.
A minimum of two (2) Fibre Channel drives is required for the Adaptable
Modular Storage 200, and customers may not mix SATA and Fibre Channel drives in the
same shelf.
The Adaptable Modular Storage 200 can be upgraded to an Adaptable Modular
Storage 500 (this is a disruptive upgrade).
*RAID-0 is available for Fibre Channel drives only.

Adaptable Modular Storage 500

• Single or dual controller models


– Cache size: 2GB to 8GB
• Fibre Channel and SATA disk intermix
– Base (incl. 1st tray) with up to 14 additional trays
– 225 disk drives (maximum configuration)
• Max. capacity is 162TB
• 73GB, 146GB, or 300GB FC
• 250GB, 500GB and 750GB SATA
– RAID levels: 6, 5, 1+0, 1, 0 (FC only)
– Maximum LUNs: 2048
• Flexible connectivity:
– Four Fibre Channel Ports
• 1, 2, or 4Gb/s
– Four 1Gb/s iSCSI or NAS
– 512 virtual ports with Host Storage Domains
– Four 2Gb/s FC-AL backend paths (two per controller)
• Replaces the Thunder 9570V system

The Adaptable Modular Storage 500 replaces the Thunder 9570V system by offering
a significant improvement in performance and scalability. For customers who would
have purchased a Thunder 9585V system but do not need 8 ports, the Adaptable
Modular Storage 500 will easily meet most performance and capacity requirements,
at a much lower price. The Adaptable Modular Storage 500 supports 4Gb/sec front-
end ports for customers with 4Gb/s switches and fabric.
On the back-end, LUN access uses one path per controller for a SATA tray and two
paths per controller for an FC tray.
Note: 1Gbit and 2Gbit workloads are supported with the 4Gb/sec front end.
A minimum of two (2) Fibre Channel drives is required, and customers may not mix
SATA and Fibre Channel drives in the same shelf.
* As in the Adaptable Modular Storage 200, RAID-0 (no parity) is supported for
Fibre Channel drives only.

Adaptable Modular Storage 1000

• Dual controller configuration only
– Cache size: 4GB to 16GB
• Fibre Channel and SATA disk intermix
– Base with up to 30 additional trays
– Maximum-capacity configuration is 1 FC tray + 28 SATA trays
– 450 disk drives (maximum config)
• Max. capacity 319.5TB
• 73GB, 146GB, or 300GB FC
• 250GB, 500GB, and 750GB SATA
– RAID levels: 6, 5, 1+0, 1, 0* (FC only)
– Maximum LUNs: 4096
• Flexible connectivity:
– Eight Fibre Channel ports
• 1, 2, or 4Gb/s
– Four 1Gb/s iSCSI ports
– Eight 1Gb/s NAS ports
– 1024 virtual ports with Host Storage Domains
– Eight 2Gb/s FC-AL backend paths
• Best price and performance in class!
• Replaces the Thunder 9585V system

The Adaptable Modular Storage 1000 replaces the Thunder 9585V system by
offering a significant improvement in performance and scalability. The Adaptable
Modular Storage 1000 supports eight 4Gb/sec front-end ports for customers with
4Gb/s switches and fabric.
The Adaptable Modular Storage 1000:
• Delivers application-specific performance, availability, and protection across
systems, from a few terabytes to more than 330TB, with both Serial ATA (SATA) and
Fibre Channel drives
• Uses advanced features, Cache Partition Manager and RAID-6, to help improve
performance, reliability, and usability
• Partitions and dedicates cache to maximize performance of high-I/O applications
• Supports outstanding performance for virtually any workload with 4,096 logical
units (LUNs)
• Offers a choice between SATA intermix and Fibre Channel to host any workload on the
most economical storage system
Note: 1Gbit and 2Gbit workloads are supported with the 4Gb/sec front end.
*RAID-0 (no parity) is supported for Fibre Channel drives only.

Hitachi Adaptable Modular Storage 2000 Family Architecture and Administration

Product Description

• High capacity, high-performance modular storage array


• Serial Attached SCSI (SAS) back-end architecture
– SAS and SATA II drives
• Fibre Channel or iSCSI front-end host ports
– Single type of front-end interface with models 2100 and 2300
– Two concurrent types of front-end interface with model 2500
• NAS through Fibre Channel "Gateway" offerings
• Active/Active Symmetric High Availability Dual Controller functionality
• Straightforward installation and configuration
• Intuitive storage management GUI (Hitachi Storage Navigator Modular 2
program)


• Simple maintenance and troubleshooting


– Improved backend diagnostics due to new serial backend
• Online firmware upgrade (no path failover software required)
• Replication software:
– Hitachi Copy-on-Write Snapshot software
– Hitachi ShadowImage Replication software
– Hitachi TrueCopy Remote Replication software
– Hitachi TrueCopy Extended Distance software
• Easy data migration to and from previous Adaptable Modular Storage
systems using TrueCopy software


Previous Adaptable Modular Storage systems include:
• Adaptable Modular Storage 100
• Adaptable Modular Storage 500
• Adaptable Modular Storage 1000

Product Line Positioning

[Diagram: product line positioning by price (horizontal axis) and performance/connectivity/functionality (vertical axis) across Bands 2, 3, and 4. From low to high: Simple Modular Storage 100, Simple Modular Storage 110, Adaptable Modular Storage 2100, Adaptable Modular Storage 2300, and Adaptable Modular Storage 2500, with upgrade paths between the Adaptable Modular Storage models.]

Upgrades are data-in-place upgrades.

Features

• All models provide:


– High-speed response
– Continuous data availability
– Scalable connectivity
– Expandable capacity

• Competitive Features and Functionality


– Microsoft® environments such as Virtual Disk Service (VDS) and Microsoft
Volume Shadow Copy Service (VSS) provider
– Complete Longhorn Server support
– Native Multipath I/O (MPIO, MPxIO, and more) support
– Functional enhancements to the ShadowImage software features to enable
competitive VSS behavior in Exchange environments
– LUN Shrink/Grow feature (coming soon)
– 60TB LUN support

Specifications

Adaptable Modular Storage model 2100:
• Specs: Dual controller, 4GB cache on each controller, 15 drives internal, Symmetric A/A, dual battery, dual redundant power supplies
• Host interface options: 4 Fibre Channel (FC) auto-sensing 1/2/4Gbps; 4 iSCSI 1000Base-T copper Ethernet
• Drive interface: 16 Serial Attached SCSI (SAS); 4x4 wide link, 3Gbps switched

Adaptable Modular Storage model 2300:
• Specs: Dual controller, 8GB cache on each controller, 15 drives internal, Symmetric A/A, dual battery, dual redundant power supplies
• Host interface options: 8 Fibre Channel (FC) auto-sensing 1/2/4Gbps; 4 iSCSI 1000Base-T copper Ethernet
• Drive interface: 16 Serial Attached SCSI (SAS); 4x4 wide links, 3Gbps switched

Adaptable Modular Storage model 2500:
• Specs: Dual controller, 16GB cache on each controller, 0 internal drives, Symmetric A/A, dual battery, dual redundant power supplies
• Host interface options: 16 Fibre Channel (FC) auto-sensing 1/2/4Gbps; 8 iSCSI 1000Base-T copper Ethernet
• Drive interface: 32 Serial Attached SCSI (SAS); 4x8 wide links, 3Gbps switched

Note: All feature and function information is subject to change.


Models (2100 / 2300 / 2500):
• RAID levels: RAID 1, 0+1, 5, 6 (SAS and SATA II drives); RAID 0 (SAS drives only)
• Max # of RAID groups: 50 / 75 / 100
• Max # of LUs: 2048 / 4096 / 4096
• Max LU size: 60TB
• Supported drives: 146GB/15K, 300GB/15K, 400GB/10K, 450GB/15K SAS; 500GB/7200, 1TB/7200 SATA II
• Upgrades: model 2100 to 2300, and 2300 to 2500, via controller, data in place; remote mirroring interoperable with Adaptable Modular Storage
• Expansion units/disk trays (optional, based on capacity): 15 HDD/tray (SAS/SATA II intermix); up to 7 trays (120 drives total) / up to 15 trays (240 drives total) / up to 32 trays (480 drives total, no HDDs in the controllers)
• Maximum capacity: 118TB / 236TB / 472TB

Software and Firmware Offerings

• Storage Management Software


– Hitachi Storage Navigator Modular 2
– Hitachi Storage Command Suite
• Bundled Storage Functions
– Account Authentication
– Audit Logging
– LUN Manager
– LUN Grow/LUN Shrink (coming soon)
– Cache Residency Manager
– Cache Partition Manager
– Modular Volume Migration
– SNMP Agent Support Function
– Performance Monitor
• RAID Group expansion coming soon

External Design and Connections

[Diagram: external design and connections. Expansion units are 3U; controllers are 4U, rack-installed. The back-end is full duplex, 3Gb/s SAS Wide Link: 16 (4x4) links on models 2100 and 2300, 32 links on model 2500.
Model 2100: cache 8GB; host ports: 4 FC or 4 iSCSI.
Model 2300: cache 16GB; host ports: 8 FC or 4 iSCSI.
Model 2500: cache 32GB; host ports: 16 FC or 8 iSCSI, or a mixture.]

Host Storage Domains (Host Groups)

[Diagram: a Windows server (WWN = Y) and a Solaris server (WWN = X) connect through an FC fabric switch to ports 0A/0B on CTL0 and 1A/1B on CTL1. Behind a port, Host Group HG0 is always present (here: Opt = Windows, Security = WWN Y, LUNs mapped: 0 and 1) and HG1 is optional (here: Opt = Solaris, Security = WWN X, LUNs mapped: 8 and 789); security can be enabled or disabled. The LUN Management key is required to add Host Groups or change the HG settings. The mapped LUN number ("HLUN", as seen by the host) is translated to the internal LUN number ("LUN"); the recommended configuration uses a different mapping if the host requires a LUN 0 or cannot handle a high LUN number.]

• A Host Group contains one or more LUNs that can be configured to be accessed
by a particular host operating system environment. It exists behind a host
interface port.
• With Host Groups, the server that is granted access sees a virtual storage unit
configured specifically for the software environment running on that server. This
is achieved by setting platform-specific options for each Host Group.
• Access security is enforced by filtering the traffic to a particular Host Group
and only allowing traffic with a specific Fibre Channel World Wide Name
(WWN) coming from the Host Bus Adapter (HBA) through which a server
accesses the Host Group.
• In addition to using Host Groups, the cache can be partitioned, allowing for a
complete segregation of the workloads generated by different servers. Cache
partitioning will prevent an application from monopolizing an Adaptable Modular
Storage 2000.
• Host Group 0 (or Host Storage Domain 0) is always present behind a host
interface port. Additional HGs can be configured when the LUN Management
key has been added.
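The Host Group behavior described above (WWN filtering plus HLUN-to-internal-LUN translation) can be sketched as a toy model. This is purely illustrative, not the array firmware or any HDS API; all class and field names are invented:

```python
# Hypothetical model of a Host Group behind a front-end port: the WWN
# filter admits only the registered HBA, and the HLUN map presents
# host-friendly LUN numbers that translate to internal LUNs.
class HostGroup:
    def __init__(self, name, platform, allowed_wwn, lun_map):
        self.name = name                  # e.g. "HG1"
        self.platform = platform          # platform option, e.g. "Solaris"
        self.allowed_wwn = allowed_wwn    # WWN security filter
        self.lun_map = lun_map            # HLUN (host view) -> internal LUN

    def resolve(self, initiator_wwn, hlun):
        if initiator_wwn != self.allowed_wwn:
            raise PermissionError("WWN not permitted on " + self.name)
        return self.lun_map[hlun]         # internal LUN actually accessed

hg1 = HostGroup("HG1", "Solaris", "WWN_X", {0: 8, 1: 789})
print(hg1.resolve("WWN_X", 1))  # host sees HLUN 1; array uses internal LUN 789
```

The point of the translation is visible here: the Solaris host can be given LUN 0 even though the internal LUN numbers are 8 and 789.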

Highlights

• Consolidated data storage
• High performance
• Capacity scalability
• Flexibility versus Reliability
• Maintainability versus Serviceability

This slide highlights:
• Heterogeneous Multi-Host Connection
• High Availability Environment

Support for Heterogeneous Multi-Host Connection is accomplished by setting
host-specific parameters for any individual group of LUNs (Host Group).
A High Availability environment (no single point of failure) requires the same level
of redundancy in the hosts, the Storage Area Network, and the storage device.

HDS Confidential: For distribution only to authorized parties. Page 1-21


Section 1
Highlights

• Consolidated data storage
• High performance
• Capacity scalability
• Flexibility versus Reliability
• Maintainability versus Serviceability

This slide highlights:
• High-throughput back-end with up to 32 (4x8) SAS Wide Links
• Cache Residency Manager feature

There are two back-end paths on the Adaptable Modular Storage 2100 and 2300, with
SAS Wide Links to the drives on the back-end.


• Consolidated data storage
• High performance
• Capacity scalability
• Flexibility versus Reliability
• Maintainability versus Serviceability

This slide highlights:
• Up to 480 HDDs per storage system
• Capacity: up to 32 expansion units (model 2500)

A system can hold a mix of both high-speed (usually more expensive) HDDs for
performance and slower (cheaper) drives for capacity.
• Performance: used for online transactions, and more
• Capacity: used for audio and video streaming, backups, and more


• Consolidated data storage
• High performance
• Capacity scalability
• Flexibility versus Reliability
• Maintainability versus Serviceability

This slide highlights:
• Online capacity upgrade
• RAID levels supported: 0, 1, 5, 6, 1+0
• Cache Partition Manager
• HDD Roaming
• Up to 30 global spare drives (Adaptable Modular Storage 2300 and 2500)
• Online Verify and Dynamic Sparing
• LUN Mapping, Host Group Mode, and HG Security
• 8-byte Data Assurance Code

Online capacity upgrade: HDDs and expansion units can be added online;
Controllers and Cache Memory cannot.


• Consolidated data storage
• High performance
• Capacity scalability
• Flexibility versus Reliability
• Maintainability versus Serviceability

This slide highlights:
• Storage Navigator Modular 2
• Web tool
• SNMP
• Support – Web Portal
• Hot replacement for most major components (CTL and ENC require GSC assist)
• Hi-Track Monitor

Storage Navigator Modular 2 is shipped with the array. The build center/CTO will
install and enable feature keys for certain basic Software Features.

Model 2100 Controller Architecture

Model 2300 Controller Architecture

Model 2500 Controller Architecture

Specifications

• Capacity (back-end), models 2100 / 2300 / 2500:
– HDDs per base unit: 15 / 15 / 0
– Max HDDs: 120 / 240 / 480
– Expansion: 15 HDD/tray (SAS/SATA intermix; SAS required for system area drives 0-4); 7 / 15 / 32 expansion units maximum
– Supported drives (all models): 146GB/15K SAS, 300GB/15K SAS, 400GB/10K SAS, 450GB/15K SAS, 500GB SATA, 750GB SATA, 1TB SATA
– Max RAID groups: 50 / 75 / 100
– RAID levels: 6 / 5 / 0+1 / 1 (SAS and SATA); 0 (SAS only)
– Max LUs: 2048 / 4096 / 4096
– Max LU size: 60TB


• Controller host interface, models 2100 / 2300 / 2500:
– FC interface: 1/2/4 Gbps (all models)
– FC ports (SFP): max 2/CTL (4/system) / max 4/CTL (8/system) / max 8/CTL (16/system)
– iSCSI interface: 1Gbps (1000Base-T copper) Ethernet (all models)
– iSCSI ports: max 2/CTL (4/system) / max 2/CTL (4/system) / max 4/CTL (8/system)

On the Adaptable Modular Storage 2500, in the case of an FC/iSCSI front-end intermix,
you will have four FC host connectors and two iSCSI host connectors per controller
in the storage system.

Back-end Architecture
SAS switch architecture advantages:
1. SAS Switch Architecture: point-to-point connection between the SAS protocol chip and each drive (new topology: loop changes to switch). Four Wide Links are configured in one SAS back-end path.
2. Simple and easy back-end cabling: cabling is simplified because SAS and SATA drives coexist in the same (common) expansion unit chassis.
3. Steady transactions by changing the back-end access path: when a back-end path failure occurs, access path failover is executed via the DCTL interface.
4. Improved back-end failure diagnostics: failed-parts detection is improved by the switch device (Expander) functions.


[Diagram: SAS switched back-end. Each SAS back-end path is configured as four Wide Links at 3Gbps each (12Gbps total) from the SAS controller to the switch devices (Expanders); each link is dynamically allocated to any of the disks. SATA drives attach through an AAMux.
FC: Fibre Channel, SAS: Serial Attached SCSI, SATA: Serial Advanced Technology Attachment, AAMux: Active-Active Multiplexer]

[Diagram: base unit and expansion unit internals. Each controller's SAS_CTL (8 ports) connects through 4 Wide Links to 24-port Expanders; expansion units attach through ENCs and cables, and SATA drives attach via an AAMux.]

Field Replaceable Units (FRUs):
1. Controller (including SAS protocol chip, Expander, etc.): Controller (the new controller FRU consists of cache and host interfaces)
2. ENC (including Expander): ENC and cable
3. Cable between chassis: cable
4. SAS drive: SAS drive
5. SATA drive (includes AAMux): SATA drive (includes AAMux)

WL: Wide Link, AAMux: Active-Active Multiplexer

Disk Expansion Tray

Note that the SATA enclosures use a different connection method from the FC
enclosures.
An AAMux (SATA Ctrl) chip is installed on every SATA disk.

Active-Active I/O Architecture

Cross-controller Communication

• Previous modular systems use “data-share mode.”


• Adaptable Modular Storage 2000 Family “cross-path” communication is
improved.

[Diagram: in the Adaptable Modular Storage 1000 Family, I/O arriving at a non-owner controller's port crosses between CTL0 and CTL1 with communication overhead. In the Adaptable Modular Storage 2000 Family, the controllers are linked by PCI-Express, and the communication overhead has been reduced drastically.]

Adaptable Modular Storage 1000 Family systems have a “data-share mode”
that enables the non-owner controller to receive I/Os for the target LU. But the I/O
performance is much reduced compared to the owner controller, so it is used only
temporarily, for example as an alternate path if the main path fails.
In the Adaptable Modular Storage 2000 family, I/O performance directed to non-
owner controller is drastically improved. This “cross-path” can be used as the
normal I/O path with regards to performance.
In the diagram and following slides, Adaptable Modular Storage 1000 Family
represents previous Hitachi modular storage, including Adaptable Modular Storage
models 200, 500, and 1000, and Workgroup Modular Storage 100.

Internal Transaction

• Enables the MPU to access the other controller's CS/DS and devices, such as the
FC protocol chip, directly. Cross-path I/O is greatly improved.

[Diagram: sequence of a read command. In the Adaptable Modular Storage 1000 Family, inter-CTL communication via cache memory is required both to receive the command and to start the data transfer. In the Adaptable Modular Storage 2000 Family, hardware improvements enable "cross-path I/O" with little overhead: (1) the host command is directly transferred via CS/DS, and (2) the MPU of CTL1 can control FC ports on CTL0 directly (FC only, not iSCSI).]

LU Ownership

• Owner controller of LUs or operations is not an issue. The microprogram


assigns ownership.

[Diagram: in the Adaptable Modular Storage 1000 Family, the administrator creates each LU on a specific controller (for example, LU0 and LU1 on CTL0, LU2 and LU3 on CTL1). In the Adaptable Modular Storage 2000 Family, the microprogram decides the owner CTL of each created LU automatically, and users do not need to choose the owner CTL for each LU.]

The user need not consider which controller should be the owner when creating
each LU, or for any operations on the array.
The non-owner controller of the target LU may therefore receive I/O commands
from hosts, but this is not a problem, because such commands are processed by the
high-performance cross-path.
A manual setting mode (as in previous modular systems) is also available in the
Storage Navigator Modular 2 GUI.
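Automatic owner assignment can be illustrated with a simple least-loaded policy. The actual microprogram policy is not documented here; this is a toy model under that assumption, with invented names:

```python
# Illustrative sketch: assign each newly created LU to whichever
# controller currently owns the fewest LUs, so ownership stays balanced
# without the administrator choosing a CTL.
owners = {"CTL0": [], "CTL1": []}

def create_lu(lu):
    ctl = min(owners, key=lambda c: len(owners[c]))  # least-loaded CTL
    owners[ctl].append(lu)
    return ctl

for lu in ["LU0", "LU1", "LU2", "LU3"]:
    create_lu(lu)
print(owners)  # {'CTL0': ['LU0', 'LU2'], 'CTL1': ['LU1', 'LU3']}
```

A count-based policy is only a stand-in; the real array also rebalances by measured processor load, as described under Controller Load Balancing.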


• LU ownership is not changed because of path failure.

[Diagram: (1) the host changes its access path because of a path failure. In the Adaptable Modular Storage 1000 Family, (2) a temporary I/O path is used, and (3) the owner CTL of LU0 is changed to CTL1 automatically after I/Os to CTL1 continue for one minute. In the Adaptable Modular Storage 2000 Family, (2) the owner CTL of LU0 is not changed.]

Hosts can send commands to storage via any path on either controller for the purpose
of path load balancing. This is possible because cross-path I/O is high performance
and the ownership of each LU is stable.
In previous modular systems, ownership moved back and forth. If a path
failed, a temporary cross-controller path was established for a predetermined period,
such as one minute. After that, ownership changed to the other controller, behavior
sometimes described as “LU ping-pong.”

Controller Load Balancing

• The owner controller of each LU may be changed automatically for the purpose of
load balancing of the processors on the two controllers.

[Diagram: before load balancing, one controller's MPU is a bottleneck because it owns most of the LUs. Load-balance monitoring automatically moves ownership of an LU to the other controller, and the bottleneck is recovered.]

The load balancing function can be enabled and disabled. It should be disabled
when using the Cache Partition Manager, so that the partition setting for
each LU is not changed automatically.

Microcode Updates

• Benefits
– Non-disruptive firmware updates are easily and quickly accomplished.
– Firmware can be updated without interrupting I/O.
[Diagram: in the Adaptable Modular Storage 1000 Family, the user must change paths with path manager software before the firmware on a controller is updated and rebooted, changing the owning controller of each LUN. In the Adaptable Modular Storage 2000 Family, no path change is required: while one controller's firmware is updating and rebooting, commands are transferred internally and LUN ownership changes automatically. Unique for midrange.]

For firmware updates:
• There is no need to use host path management software
• There is no need to change paths from the firmware-updating CTL to other CTLs

Section 2
Hitachi Adaptable Modular Storage Software

Hitachi Storage Navigator Modular 2 Program
Hitachi Essential NAS Platform
Hitachi Dynamic Link Manager Software and Hitachi Global Link Availability Manager Software

Hitachi Adaptable Modular Storage Software

Software Feature Overview

Product or feature and its usage (all features apply to all models):
• LUN Manager/LUN Expansion: Host Groups
• Data Retention Utility: protect LUNs
• Cache Residency Manager: increase LUN access performance
• Cache Partition Manager: increase host access performance
• SNMP Agent Support Function: report events and status
• Account Authentication: robust security for restricting access
• Audit Logging: audit logging of all changes performed in the array
• Performance Monitor: monitor and collect utilization statistics
• Data Shredding (future): parity group/LU data erase
• Power Saving: power down of RAID groups that are not used
• ShadowImage Replication software: create local copies of production LUNs
• TrueCopy Synchronous software: create remote copies of production LUNs
• TrueCopy Extended Distance software: create remote copies of production LUNs
• Copy-on-Write Snapshot software: create point-in-time copies of production LUNs
• Dynamic Link Manager software: path failover and load balancing

The LUN Expansion feature is now in the base product (no program product keys required)
and is configured with Storage Navigator Modular 2.
Note: Highlighted features on the original slide are optional software features; they are an additional cost
and require a key.

Launch Advanced Settings

Note: A time-out will occur after 30 minutes when working with Advanced Settings.

Cache Partition Manager Feature

• Cache Partition Manager will allow for the segregation of workloads within the system
• It will include the following:
– Selectable segment size
Customize the cache segment size for a user application
– Partitioning of cache memory
Separate workloads by dividing cache into individually managed, multiple
partitions
• A partition can then be customized to best match the I/O
characteristics of its assigned LUs
– Selectable stripe size
To increase performance by customizing the disk access size

Advantage of Selectable Segment Size – Small I/O

Cache is segmented by default in 16KB segments. Cache segmentation can be optimized for the host I/O with 8KB segments.

• Recommended segment size is the host I/O size times two.
• Setting the segment size in this example to 4KB indicates that up to two segments will be used for cache processing overhead.
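The segment-size arithmetic above can be sketched as follows. This is an illustrative calculation only (not product code), assuming that one host I/O occupies a whole number of cache segments:

```python
# Illustrative sketch: how the chosen cache segment size affects the number of
# segments touched by a single host I/O.
import math

def segments_per_io(io_size_kb, segment_size_kb):
    """Number of cache segments needed to hold one host I/O."""
    return math.ceil(io_size_kb / segment_size_kb)

# An 8KB host I/O against the default 16KB segments occupies one segment
# (half of it unused); with 8KB segments the same I/O fits exactly.
print(segments_per_io(8, 16))    # 1
print(segments_per_io(8, 8))     # 1
print(segments_per_io(128, 16))  # 8 segments for a large I/O
```

A small segment size avoids wasting cache on small I/Os, while a large I/O against small segments touches many segments (the overhead case shown on the next slide).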

Advantage of Selectable Segment Size – Large I/O

With the default 16KB cache segments, an application with a large host access size (for example, 128KB) takes overhead to handle many segments for each I/O. Cache segmentation optimized for that host I/O uses 256KB segments.

Advantage of Global Cache

• Storage system without Cache Partition Manager (or within a single cache partition):
– Global cache usage changes dynamically as required for performance.
– READ performance is shared between hosts, applications, and LUs, and shifts over time.

[Diagram: over time, global cache is redistributed among the hosts/applications and their LUs.]

Advantage of Partitioned Cache

• Selectable between Partitioned Cache and Global Cache
– Global Cache is selectable for temporary changes in performance.
– With Partitioned Cache, negative effects between faster and slower hosts, applications, and LUs will decrease.

[Diagram: hosts/applications assigned to separate cache partitions (faster or slower) versus temporarily sharing global cache.]

Configuring the cache for partitions is a static adjustment that will not dynamically change afterwards.

Advantage of Selectable Stripe Size

HDD usage with the default LU stripe size (64KB): one 128KB write to the LUN results in 2~3 writes to HDD.
• High throughput with concurrent I/Os to HDDs
• Good for applications with transaction I/Os (database systems)

HDD usage with an I/O-optimized LU stripe size (256KB): one 128KB write to the LUN results in 1~2 writes to HDD.
• Lower overhead because of fewer HDD I/Os
• Good for applications with sustained I/Os

By selecting the most appropriate stripe size, the number of HDD I/Os can be brought back to the minimum, which improves performance.
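The 2~3 versus 1~2 write counts above come from how a host write straddles stripe chunks depending on alignment. A minimal sketch of that arithmetic (illustrative only; it counts data-chunk writes and ignores parity):

```python
# Illustrative sketch: how stripe size and write alignment determine how many
# stripe chunks (and hence HDD data writes) one host write touches.
def hdd_writes(write_kb, stripe_kb, offset_kb=0):
    """Number of stripe chunks spanned by a host write starting at offset_kb."""
    first = offset_kb // stripe_kb
    last = (offset_kb + write_kb - 1) // stripe_kb
    return last - first + 1

print(hdd_writes(128, 64))                 # aligned 128KB write, 64KB stripe: 2
print(hdd_writes(128, 64, offset_kb=32))   # unaligned: 3
print(hdd_writes(128, 256))                # aligned 128KB write, 256KB stripe: 1
print(hdd_writes(128, 256, offset_kb=192)) # unaligned: 2
```

This reproduces the slide's ranges: 2~3 chunk writes with a 64KB stripe and 1~2 with a 256KB stripe for a 128KB host write.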

Partitioning Cache

• Cache can be divided into partitions that can be exclusively used by


assigned LUNs
– Maximum number of partitions:
• Model 2100: 16
• Model 2300: 16
• Model 2500: 32

– Partition 0/1 are the master partitions (fixed at 16KB only).
– Partitions 2 to n have selectable-size segments of: 4, 8, 16, 64, 256, and 512KB.
– Partition sizes are flexible (each partition has a certain minimum).

• Although proper use of the Cache Partition Manager can contribute to improving an application's performance, an incorrect configuration can easily achieve the opposite effect.
• One partition can be used by one or more LUNs.
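The partition rules above can be expressed as a simple validation routine. This is an illustrative sketch only; the constants are taken from this slide, and the function names are hypothetical, not part of any product API:

```python
# Illustrative sketch: validating a Cache Partition Manager layout against the
# rules on this slide (maximum partitions per model, fixed master partitions,
# selectable segment sizes for partitions 2..n).
MAX_PARTITIONS = {"2100": 16, "2300": 16, "2500": 32}
SELECTABLE_SEGMENT_KB = {4, 8, 16, 64, 256, 512}

def validate_partitions(model, partitions):
    """partitions: dict mapping partition id -> segment size in KB."""
    if len(partitions) > MAX_PARTITIONS[model]:
        raise ValueError("too many partitions for model " + model)
    for pid, seg_kb in partitions.items():
        if pid in (0, 1):
            if seg_kb != 16:
                raise ValueError("master partitions 0/1 are fixed at 16KB")
        elif seg_kb not in SELECTABLE_SEGMENT_KB:
            raise ValueError(f"partition {pid}: invalid segment size {seg_kb}KB")
    return True

print(validate_partitions("2100", {0: 16, 1: 16, 2: 8, 3: 256}))  # True
```

A misconfigured layout (for example, a master partition set to 8KB) would raise an error, mirroring the warning that an incorrect configuration hurts performance.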

Installing Cache Residency Manager Feature

• Cache Residency Manager feature must be installed or uninstalled using a software license key.
• The storage system will need to be rebooted in order for the Cache
Residency Manager changes to take effect, including installing,
uninstalling, enabling, or disabling.

Functionality

[Diagram: on each of Controller #0 and Controller #1, cache is divided into areas for other LUs and dedicated resident-LU cache; write data for a resident LU (for example, Controller #0, LU0) is duplicated in both controllers' caches. LU0, LU1, and LU2 are shown below the controllers.]

Overview of Performance Monitor Feature

• The Performance Monitor feature enables the operator to collect and analyze performance information from a storage system.
• Performance Monitor provides information for the following storage
system components:
– Port information
– RAID Group and Logical Unit information
– Cache information
– Processor information
– Drive information
– Drive operating information
– Back-end information
• Performance Monitor presents this information in chart and table format.

Enabling Performance Data Collection

3. Click Open Advanced Settings to open the advanced settings window.

Enabling Performance Data Collection

5. Click Set. The Performance Statistics window appears.

Overview of Modular Volume Migration

[Diagram: the administrator issues the migration command from SNM or RM over the LAN while host I/O continues; (1) data is copied from LUN 004 in RAID Group 00 to LUN 004 in RAID Group 32, then (2) the access path is changed. Online migration is supported.]
• The migration engine is based on ShadowImage software.
• Manual migration is supported; auto-migration is not supported.
• Volume migration is not a "swap" function. If there is user data in the secondary volume (S-VOL), which is the destination logical unit (LUN), the data is overwritten by the volume migration.
SNM: Storage Navigator Modular (GUI or CLI)
RM: RAID Manager Command Control Interface (CCI)

Migration From SAS Drives to SATA Drives

[Diagram: host I/O runs to LUN1 on a RAID group created on SAS HDDs; the data is copied to LUN2 on a RAID group created on SATA HDDs; when the migration finishes, host I/O continues to the migrated volume on the SATA RAID group.]

Migrating Volumes for Performance

1. Create a RAID group and a LUN of the same size as the primary volume (P-VOL) for the migration target.
2. Format the LUN (wait for "Format complete").
3. Reserve the LUN for the migration target.
4. Issue the command to start the volume migration (wait for "Migration complete").
5. Delete the migration pair.
6. Release the LUN from reserved status.

[Diagram: LUN0 on a 2D+1P SAS RAID group (RG 00) is migrated to LUN1 on a 7D+1P SATA RAID group (RG 9); when the migration finishes, the LUN numbers are exchanged between the two RAID groups.]
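The six steps above can be sketched against a toy in-memory "array" object. All class and method names here are hypothetical and exist only for illustration; the real steps are performed with Storage Navigator Modular 2 or CCI, not through this API:

```python
# Illustrative sketch of the six migration steps, modeled with a toy array.
class ToyArray:
    def __init__(self):
        self.luns, self.reserved, self.pairs = {}, set(), []
    def create_lun(self, name, size):
        self.luns[name] = {"size": size, "formatted": False}
    def format_lun(self, name):
        self.luns[name]["formatted"] = True
    def reserve(self, name):
        self.reserved.add(name)
    def start_migration(self, pvol, svol):
        # the migration target must be formatted and reserved first
        assert self.luns[svol]["formatted"] and svol in self.reserved
        # any data in the S-VOL is overwritten by the migration
        self.luns[svol]["data"] = self.luns[pvol].get("data")
        self.pairs.append((pvol, svol))
    def finish(self, pvol, svol):
        self.pairs.remove((pvol, svol))   # step 5: delete the migration pair
        self.reserved.discard(svol)       # step 6: release reserved status

array = ToyArray()
array.create_lun("LUN0", size=100)
array.luns["LUN0"]["data"] = "prod"
array.create_lun("LUN1", size=100)     # 1: same size as the P-VOL
array.format_lun("LUN1")               # 2: format the LUN
array.reserve("LUN1")                  # 3: reserve as migration target
array.start_migration("LUN0", "LUN1")  # 4: start the volume migration
array.finish("LUN0", "LUN1")           # 5 and 6
print(array.luns["LUN1"]["data"])      # prod
```

The assertion inside `start_migration` mirrors the ordering requirement: a target that is not formatted and reserved cannot be used.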

Volume Migration Setup

• To use Modular Volume Migration software, some preparations are needed. (These are similar to ShadowImage software.)
– Install the Modular Volume Migration key
– Set the differential management LUs
• To operate with the command control interface (CCI), additional
preparations are needed.
– Set the command devices
– Set the Target ID (LUN mapping)
Note: Target ID for migration cannot be set through Storage Navigator
Modular 2. Use Storage Navigator Modular original version.
– Define the configuration definition file
– Set the environment variables

Hi-Track Monitor

• Hi-Track Monitor
– Is included with every Service contract
• Monitors the operation of the Adaptable Modular Storage system and Workgroup
Modular Storage systems at all times
– Is a Java software application
– Requires a customer PC or Sun workstation running the Java runtime environment
– Can FTP or dial out via modem
– Interrogates Workgroup Modular Storage and Adaptable Modular Storage systems on
a timed interval for error monitoring (user configurable)
• Reports status every 24 hours by default, even if there are no error conditions
– Also supports Thunder 9200 modular storage system, Thunder 9500 V Series modular
storage systems, and various Fibre Channel switches
– Collects hardware status and error data
– The Hitachi Data Systems Support Center analyzes the data and implements corrective
action as needed

Hi-Track Monitor monitors the operation of the Adaptable Modular Storage and Workgroup Modular Storage systems at all times, collects hardware status and error data, and transmits this data via modem or FTP to the Hitachi Data Systems Support Center. The Support Center analyzes the data and implements corrective action as needed. In the unlikely event of a component failure, Hi-Track service calls the Hitachi Data Systems Support Center immediately to report the failure without requiring any action on the part of the user. Hi-Track Monitor enables most problems to be identified and fixed prior to actual failure, and the advanced redundancy features enable the system to remain operational even if one or more components fail.
Hi-Track requires a customer PC running Microsoft Windows XP Professional, Windows 2000 Professional, or Windows Server 2003, or a Sun workstation running Solaris 8 or Solaris 9. The workstation needs to run 24/7 in order to properly perform the Hi-Track Monitor function. Other programs can run concurrently on the Hi-Track server.
TCP/IP connectivity from the Workgroup Modular Storage 100, Adaptable Modular Storage 200, Adaptable Modular Storage 500, and Adaptable Modular Storage 1000 Series systems to the Hi-Track Monitor workstation is required.
Note: Hi-Track Monitor does not have access to any user data stored on the Adaptable Modular Storage or Workgroup Modular Storage systems.

Storage Navigator Modular 2 Program

Module Objectives

• Upon completion of this module, the learner should be able to:


– Explain the purpose and benefits of Storage Navigator Modular 2 program
– Register an Adaptable Modular Storage 2000 Family system in Storage
Navigator Modular 2
– Use the Add Array wizard
– Use the Initial Setup wizard
– Create RAID Groups and use LU wizard to create and format LUs
– Create Host Groups, enable Host Group Security and register the WWN of
attached host bus adapters
– Map internal LUNs to Host Group LUNs
– Create a LUN Expansion (Logical Unit Size Expansion) by
unifying/concatenating internal LUNs

Architecture

• Web GUI
• Client-server design

Storage Navigator Modular 2 server:
• Server software
• Database
• Server access via Web GUI (Internet Explorer or Firefox)

Storage Navigator Modular 2 client:
• Server access via Web GUI (Internet Explorer or Firefox)

[Diagram: clients and the Storage Navigator Modular 2 server communicate over the network with the Model 2000 Family storage system.]

Storage Navigator Modular 2 runs from your primary management server or client
PC. It is designed on common web-based client-server technology using a standard
IP network. In other words, you can attach your model 2100 or 2300 and Storage
Navigator Modular 2 primary management server to your existing LAN
environment. Storage Navigator Modular 2 communicates with the storage system
through a web browser. If client PCs are attached to the network, they can connect
to the Storage Navigator Modular 2 primary management server and remotely
configure the storage system.

Installation Requirements

A computer is needed as the Storage Navigator Modular 2 server and another as the Storage Navigator Modular 2 client.

Storage Navigator Modular 2 server:
• Network interface: 100BASE or 1000BASE, to communicate with the storage system and the Storage Navigator Modular 2 client
• OS: Microsoft Windows 2000 Pro (SP3 and 4), Microsoft Windows Server 2003 (SP1), or Windows XP Pro (SP2)
• RAM: 2GB or higher is recommended
• Free disk space: 1.5GB or more to install
• CPU: 1GHz minimum (2.0GHz recommended)
• Others: optical drive, to install Storage Navigator Modular 2 from CD-ROM

Storage Navigator Modular 2 client:
• Network interface: 100BASE or 1000BASE, to communicate with the Storage Navigator Modular 2 server
• RAM: 1GB or more
• Others: JRE (Java Runtime Environment) 1.6.0 (http://java.sun.com/products/archive/); video 1024x768 (recommended) or more; web browser: Microsoft Internet Explorer 6.0; mouse (or pointing device) and keyboard

Verify that your PC and operating system meet these basic requirements. These are standard for most of today's applications. In addition, the Release Notes and the User's Guide have current information.
The Java JRE 1.6.0 can be downloaded from the Sun web site at the link shown above.

Online Help

Start From Web Browser

1. Open a Web browser.


2. Access the Storage Navigator Modular 2 software from the browser.
• URL: http://<IP address of host>:23015/StorageNavigatorModular/
3. Log in.
• User Name: system
• Password: manager


Since this is the first time you are running Storage Navigator Modular 2, the Add
Array wizard appears, and prompts you to add your storage system.

Configure

1. Perform Initial Setup
   a. Set up email alerts
   b. Set up management ports
   c. Set up host ports
   d. Set up spare drives
   e. Set up date/time
2. Create and Map Logical Units to Host Servers
   a. Create RAID groups
   b. Create LUs
   c. Create host groups
   d. Map LUs to hosts and host groups
3. Enable License Keys or Install Additional

Configuring the array is done in easy steps.


1. Initial setup
2. Install any license keys (in most cases this is done at build center)
3. Create the RG/LU storage volumes
4. Format the LUs
5. Create any Host groups and setup
6. Map the LUs to your hosts

Account Authentication

• Authenticate on the storage system to continue. The default user is root and the password is storage.

LUN Expansion Overview

• Unification of LUNs
– Expand the size of a LUN and create a single unified LUN
– Maximum number of LUNs that can be unified is 128

• Re-Unification Available
– Further unification

• Release (Separation)


Unification of LUNs is also called LUSE or LU concatenation.


Note: This is just a concatenation. LUSE does not offer any I/O striping.
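Because LUSE is pure concatenation with no striping, each logical address in the unified LUN falls inside exactly one member LUN. A minimal sketch of that address mapping (illustrative only; the block counts are made up):

```python
# Illustrative sketch: a LUSE/concatenated LUN maps a flat address onto the
# main LUN followed by each sub LUN in order. No striping: one I/O lands on
# exactly one member LUN.
def locate(block, member_sizes):
    """Return (member index, offset within member) for a concatenated address."""
    for idx, size in enumerate(member_sizes):
        if block < size:
            return idx, block
        block -= size
    raise IndexError("address beyond the unified LUN")

members = [100, 50, 50]       # main LUN + two sub LUNs (toy block counts)
print(locate(10, members))    # (0, 10): inside the main LUN
print(locate(120, members))   # (1, 20): inside the first sub LUN
print(locate(199, members))   # (2, 49): last block of the unified LUN
```

This is why concatenation grows capacity but, unlike striping, does not spread a single I/O stream across members.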

Overview of LUN Concatenation

• Two offline LUNs are consolidated into one


• LUNs in RAID 1/5/6/1+0 can be concatenated (and mixed)
• LUN concatenation up to 60TB

[Diagram: a main LUN and one or more sub LUNs (for example, LUN 0 + LUN 1 + LUN 2) are concatenated into a single unified LUN 0; further concatenation (re-unification) adds more sub LUNs to the existing unified LUN.]

Hitachi Essential NAS Platform

Hitachi Enterprise Storage System Connectivity History

[Diagram: connectivity evolution: S/390 hosts attach over the ESCON/FICON network, open-systems hosts attach over Fibre Channel SANs, and NAS and iSCSI access runs over the IP network.]

Essential NAS Platform Introduction

• Two form factors with identical features and functionalities:


– NAS Gateway attached via FC to the Adaptable Modular Storage and Universal Storage Platform families.
– NAS Filer bundled and pre-installed with Adaptable Modular Storage; takes about 15 minutes to install and configure with the NAS Wizard.
• CIFS Capability:
– Full NTFS ACL enables Microsoft Windows customers to have granular
security configuration.
– Up to 24K concurrent sessions.
– Expands number of CIFS shares up to 7,500 for all models.
• IP and Block Replication (CLI with first release)
– Provides IP based remote copy utilizing the technology of Sync Image
differential snapshots.
– Provides sync replication utilizing the TrueCopy Replication software
technology.
• User friendly and intuitive NAS Manager GUI.
• Allows upgrade from NAS Blade and/or Adaptable Modular Storage
system and Workgroup Modular Storage system with NAS Option.

Essential NAS Server

[Diagram: UNIX and WinTel clients access servers over the Ethernet LAN using file-access protocols (NFS, CIFS, FTP, HTTP). The servers (data sharing, application, web, Exchange, print, backup, DB, terminal, security, user, and virus-scanning services) access JBOD and RAID storage over the SAN using block access; the NAS server provides file access directly to clients.]

Hitachi Dynamic Link Manager and Hitachi Global Link Availability Manager Software

Dynamic Link Manager Features

• Load Balancing
– Dynamic Link Manager software distributes storage accesses across multiple paths and improves I/O performance with load balancing.
– On the modular storage 1000 Family, Dynamic Link Manager software does not allow load balancing through the two controllers; only through the same controller.
– On the modular storage 2000 Family, Dynamic Link Manager software does allow load balancing through the two controllers.

[Diagram: LU0 and LU1 have owner Controller 0; LU2 has owner Controller 1.]

Dynamic Link Manager software performs load balancing between owner paths.
When you set an LU, you determine the owner controller for the LU. Since the
owner controller varies depending on the LU, the owner path also varies depending
on the LU. A non-owner path is a path that uses a channel adapter other than the
owner controller (a non-owner controller). To prevent performance in the entire
system from deteriorating, Dynamic Link Manager software does not perform load
balancing between owner paths and non-owner paths. When some owner paths
cannot be used due to a problem such as a failure, load balancing is performed
among the remaining usable owner paths.

Dynamic Link Manager Features

• Dynamic Link Manager software does not perform load balancing between
owner paths and non-owner paths.
– Owner path is the path to the logical unit number (LUN) through the controller
to which the logical unit (LU) currently is assigned on modular storage
systems
– Non-owner path is the path to the LUN through the other controller on
modular storage systems
– On enterprise storage systems, all paths are owner paths as there is no
concept of LU ownership


Dynamic Link Manager software does not perform load balancing between owner
paths and non-owner paths. It only uses owner paths for load balancing even if non-
owner paths are available. If no owner paths exist, then Dynamic Link Manager
software will perform load balancing between non-owner paths.
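The selection rule described above can be sketched in a few lines. This is an illustrative model of the policy, not Dynamic Link Manager code; the field names are hypothetical:

```python
# Illustrative sketch: load balancing uses only online owner paths; non-owner
# paths are used only when no owner path is online.
def usable_paths(paths):
    """paths: list of dicts with 'name', 'owner' (bool), and 'online' (bool)."""
    online = [p for p in paths if p["online"]]
    owners = [p for p in online if p["owner"]]
    return owners if owners else [p for p in online if not p["owner"]]

paths = [
    {"name": "path0", "owner": True,  "online": True},
    {"name": "path1", "owner": True,  "online": True},
    {"name": "path2", "owner": False, "online": True},
]
print([p["name"] for p in usable_paths(paths)])  # ['path0', 'path1']

paths[0]["online"] = paths[1]["online"] = False   # all owner paths fail
print([p["name"] for p in usable_paths(paths)])  # ['path2']
```

The second call shows the fallback: once every owner path is down, the non-owner path carries the I/O, matching the behavior described in the note above.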

Dynamic Link Manager Features

• Dynamic Link Manager software provides continuous storage access and high availability by failing over to alternate paths.

[Diagram: LU0 and LU1 have owner Controller 0; LU2 has owner Controller 1.]
1. I/O is sent to LU0 (Controller 0) through Path 1.
2. The I/O through Path 1 fails (times out).
3. The I/O is retried through Path 2 to the same controller.

Failover and Failback Using Path Switching


When the system contains multiple paths to an LU and an error occurs in the path
being used, Dynamic Link Manager software can switch to another normal path to
allow the system to continue to operate. This functionality is called failover.
When the path in which an error occurred recovers from the error, Dynamic Link
Manager software can switch the paths so that the recovered path is used. This
functionality is called failback.
Two types of failover and failback are available:
• Automatic path switching
• Manual path switching
Failover and failback change the path status and switch the paths. Path status is classified into online status and offline status. Online status allows the path to receive I/O. Offline status prevents the path from receiving I/O for the following reasons:
• An error occurred in the path.
• A user placed the path offline using the Path Management window of the Dynamic Link Manager software GUI.
• A user placed the path offline using the Show Path List sub window of the Dynamic Link Manager software Web GUI.

Dynamic Link Manager Features

• A user executed the Dynamic Link Manager software command's offline operation.
Automatic Failback
After a path recovers from an error, Dynamic Link Manager software can
automatically place the recovered path online. This functionality is called automatic
failback. When using this function, Dynamic Link Manager software monitors error
recovery on a regular basis.
When using Modular Storage Systems, Dynamic Link Manager software selects the
path to use from online owner paths, and then from online non-owner paths.
Therefore, if an owner path recovers from an error and Dynamic Link Manager
software automatically places the recovered path online while any non-owner path
is in use, the path to use is switched to the owner path.

Dynamic Link Manager Features

• Path Health Checking


– Without Path Health Checking, an error is not detected unless I/O is
performed because the system only checks the path status when I/O
is performed.
– Path Health Checking checks the status of online paths at regular
intervals, and detects errors.
– If an error is detected in a path, the status of that path is switched to
Offline(E) or Online(E).


Online(E) — An error has occurred on the path, and no path among the paths accessing the same LU has the Online status. If all the paths accessing the same LU have an Offline status, one of the paths is changed to the Online(E) status so that the LU remains accessible.
The (E) indicates the error attribute, which indicates that an error occurred in the path.
Offline(E) — The status in which I/O cannot be performed because an error
occurred in the path.
The (E) indicates the error attribute, which indicates that an error occurred in the
path.
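The Online(E)/Offline(E) rules above can be sketched as a small state update. This is an illustrative model of the described behavior, not Dynamic Link Manager code:

```python
# Illustrative sketch: a failed path normally goes Offline(E), but if every
# path to the LU would then be offline, the failed path is kept as Online(E)
# so the LU stays reachable.
def apply_failure(statuses, failed):
    """statuses: dict path name -> 'Online' / 'Offline(E)' / 'Online(E)'."""
    statuses = dict(statuses)
    statuses[failed] = "Offline(E)"
    if not any(s == "Online" for s in statuses.values()):
        statuses[failed] = "Online(E)"  # keep the LU accessible
    return statuses

s = {"path0": "Online", "path1": "Online"}
s = apply_failure(s, "path1")
print(s["path1"])  # Offline(E): another Online path still serves the LU
s = apply_failure(s, "path0")
print(s["path0"])  # Online(E): last path keeps serving I/O with the error attribute
```

Path health checking would drive these transitions on its regular interval, rather than waiting for a host I/O to hit the failed path.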

Dynamic Link Manager Software GUI

• Options window shows the Dynamic Link Manager software version.
• Basic function settings:
– Load balancing
– Path health checking
– Auto failback
– Intermittent error monitor
– Reservation level
– Remove LU
• Error management function settings: select the severity of log and trace levels.

View or change the Dynamic Link Manager software operating environment.

Problems and Solutions

Dynamic Link Manager Software – Pain Points:
• No unified GUI
• Views and configures only one instance at a time
• Each instance must be monitored independently for alert notification of path status changes

Global Link Availability Manager Software – Solution:
• Manages many servers' multi-path connections from a single console
• Simplifies storage maintenance activities
• Keeps customers informed on the status of all their multipathed links

Global Link Availability Manager Software Features

• Single unified GUI
• Manages multiple Hitachi Data Systems instances
• Simplified path maintenance activity
• Scheduling algorithms by Hidden Devices (HDev) (LU/LDEV)
• Alert notifications
• Install and upgrade Dynamic Link Manager software instances
• Provides secure resource grouping

[Diagram: multiple hosts with HBAs connect through the SAN to LUs; Global Link Availability Manager manages them over the LAN.]

Event notification
Alerts generated by Dynamic Link Manager software are displayed by Global Link Availability Manager software in near real time.
Path Management
Global view of paths and Hidden devices (HDevs) for all Dynamic Link Manager
software instances. Management capabilities are based on user role definitions.
Host Management
Centrally manages configuration of all Dynamic Link Manager software instances.
Host group management
A customized grouping of hosts created by an individual user.
Resource group management
Administrator controls user’s access to a specific group of hosts (subset of Dynamic
Link Manager software instances).
Access control
User role definitions control operational and host resource access.

Dynamic Link Manager Software and Global Link Availability Manager Working Together

[Diagram: web-browser clients (GUI) connect to the Global Link Availability Manager software server. Hosts running Dynamic Link Manager software 5.2 to 5.7 are managed through Device Manager software agent 3.5 or later; hosts running Dynamic Link Manager software 5.8 or later are managed directly. All hosts connect through the SAN to the storage subsystems.]



Section 3
Business Continuity

Hitachi ShadowImage® Replication Software

Hitachi Copy-on-Write Snapshot Software

Hitachi TrueCopy® Remote Replication Software

RAID Manager Command Control Interface (CCI)

Managing Replication with the Hitachi Replication Manager Software

Business Continuity

Business Continuity Solutions

Hitachi Storage Command Suite

Local – High Availability:
• Clusters: VERITAS Cluster Server, Microsoft MSCS; Hitachi TrueCopy Agent for VERITAS VCS
• Backup and Recovery: Data Protection Suite, powered by CommVault (Backup and Recovery, Data Migration, Data Archiver, Quick Recovery, Data Protection Monitor); VERITAS NetBackup; Hitachi Backup and Recovery; Serverless Backup Enabler
• Point-in-Time Clones and Snapshots: Hitachi ShadowImage In-System Replication software; Hitachi Copy-on-Write Snapshot software
• Path Failover: Hitachi Dynamic Link Manager software

Remote – Disaster Protection:
• Extended Clusters: VERITAS GCM, Microsoft MSCS, IBM GDPS
• Backup/Recovery: Tape Vault; VERITAS NetBackup; Hitachi Backup and Recovery; Hitachi TrueCopy® Remote Replication software; Hitachi Cross-System Copy
• Point-in-Time Clones and Snapshots: ShadowImage software; Copy-on-Write Snapshot software
• Disaster Recovery, Disaster Recovery Testing, and Planned Outages

Platforms: Hitachi Modular Storage Systems
Hitachi Data Systems Continuity Services

On the left side of the graphic are examples of the Hitachi Data Systems high-
availability solutions that are built on the foundation of the high-end Hitachi Storage
systems and their 100% availability. On the right side, the focus is placed on remote
data protection technologies and solutions, and in essence, Disaster Recovery
solutions components.
Disaster Recovery is the planning and the processes associated with recovering your
data/information. Disaster Protection is usually focused on providing the ability to
duplicate key components of the IT infrastructure at a remote location, in the event
that the primary IT site is unavailable for a prolonged period of time. Disaster
protection solutions can also be used to minimize the duration of “planned” outages
by providing an alternate processing facility while software or hardware
maintenance technology refresh is provided at the primary site.
A Disaster Recovery environment is typically characterized by:
• Servers far apart
• Servers have separate resources
• Recovery from large-scale outage
• Major disruption
• Difficult to return to normal
• Recovery

Business Continuity Solutions

High availability (HA): The practice of keeping systems up and running by exploiting technology, people, skills, and processes. High availability is usually focused on component redundancy and recovery at a local site to protect from failure of an infrastructure component.
An HA environment is characterized by:
• Co-located servers
• Shared disks and other resources
• Recovery from isolated failures
• Minor disruptions only
• Easy or few steps to return to normal
These are complementary disciplines. Business Continuity requires practicing or implementing both advanced Disaster Recovery solutions and Disaster Recovery on top of additional organizational processes.
This simple framework identifies the building blocks for Business Continuity solutions; the blocks identified here are key functional/technology components.

RAID Manager (CCI)

– Note:
• DF700 must use RAID Manager (CCI)
• DF800
– must use RAID Manager (CCI) when replicating to/from DF700,
– can use RAID Manager (CCI) when replicating to/from DF800,
– can also use SNM2 GUI/CLI when replicating to/from DF800.

• RAID Manager (CCI) is an in-band admin tool


ShadowImage Software

• Features
– Full copy of a volume at a point in time
– No host processing cycles required
– No dependence on operating system, file system, or database
– Copy is RAID protected
– Create up to three concurrent copies of the original LU
• Benefits
– Protects data availability
– Simplifies and increases disaster recovery testing
– Eliminates the backup window
– Reduces testing and development cycles
– Enables non-disruptive sharing of critical information

(Figure: the production volume continues normal processing unaffected while a point-in-time copy of the production volume supports parallel processing.)


TrueCopy Remote Replication Software

• Features
– Models WMS100, AMS200, AMS500, AMS1000 and AMS 2000
– Synchronous support
• Asynchronous support in conjunction with ShadowImage software
• Support for Open environments
– Installed in high-profile Disaster Recovery sites around the world
• Benefits
– Provides fast recovery with no data loss
– Distributes time-critical information to remote sites
– Reduces downtime of customer-facing applications
– Increases the availability of revenue-producing applications

(Figure: a P-VOL replicated to a remote S-VOL.)

TrueCopy Extended Distance

• Features
– Models AMS500, AMS1000, and AMS 2000
– Asynchronous replication
• Benefits
– Does not affect host performance
– Enables longer-distance disaster recovery and data protection
– Can be used on lower-speed networks

(Figure: local AMS P-VOL with Pool, replicated through extenders to a remote AMS S-VOL with Pool.)

Hitachi ShadowImage® Replication Software



Overview

• ShadowImage Replication software
– Replicates data within the Hitachi modular storage systems without disrupting operations
• A pair consists of a P-VOL (Primary Volume) and up to three S-VOLs (Secondary Volumes)
• A split creates a Point-in-Time (PiT) copy of the data
• A PiT copy is a full copy of the data
• Once a PiT copy is created, the data can be used for batch processing or other processes
– High performance achieved through an asynchronous copy facility to the secondary volumes

A hardware-based program, ShadowImage software is the copy facility for Hitachi
Adaptable Modular Storage, Hitachi Workgroup Modular Storage, and Thunder
9500 V Series systems; it copies volumes without disrupting operations. It enables
server-free backups, which allows customers to exceed service level agreements
(SLAs). ShadowImage software is available for all Adaptable Modular Storage,
Workgroup Modular Storage, and Thunder 9500 V Series systems.
Once copied, data can be used for data warehousing/data mining applications,
backup and recovery, or application development, allowing more complete and
frequent testing for faster deployment.
Adaptable Modular Storage and Workgroup Modular Storage systems support the
creation of up to three RAID-protected copies from each source volume. When used
in conjunction with Hitachi TrueCopy Remote Replication software, ShadowImage
software supports three copies of critical information that can reside on either local
or secondary systems located within the same data center, or at remote sites.


Applications for ShadowImage Replication Software

• Backup and recovery


• Data warehousing/data mining applications
• Application development
• Run benchmarks and reports


ShadowImage software is replication/backup and restore software that delivers the
copy flexibility customers need for meeting today’s unpredictable business
challenges. With ShadowImage Replication software, customers can:
• Execute logical backups at faster speeds and with less effort than previously possible
• Easily configure backups to execute across a storage area network (SAN)
• Manage backups from a central location
• Increase the speed of applications
• Expedite application testing and development
• Keep a copy of data for backup or testing
• Ensure data availability


Overview

• P-VOL and S-VOL start out as independent/simplex volumes
– P-VOL: Production Volume
– S-VOL: Secondary Volume
• P-VOL and S-VOL are synchronized using ShadowImage Replication software (paircreate operation)
• P-VOL and S-VOL are split, creating a Point-in-Time (PiT) copy
• The S-VOL can be used independently of the P-VOL with no performance impact on the P-VOL

(Figure: independent P-VOL and S-VOL; the paircreate operation synchronizes them; a split creates the PiT copy.)


Internal ShadowImage Replication Software Operation

(1) The host issues a write I/O to the P-VOL.
(2) Write complete is returned to the host; the data is then written asynchronously to the S-VOL(s).

(Figure: host writes land on the P-VOL and are propagated to the S-VOLs by asynchronous writes.)

• DF700: 1 P-VOL : 3 S-VOLs
• DF800: 1 P-VOL : 8 S-VOLs

ShadowImage Replication software is internally an asynchronous operation.
The host performs a write to the P-VOL and the system internally copies the data
from the P-VOL to the S-VOL asynchronously. The host does not have to wait until
the system copies the data to the S-VOL before it can do another I/O to the P-VOL;
therefore, there is no impact on host I/O performance.
Adaptable Modular Storage, Workgroup Modular Storage, and Thunder 9500 V
Series systems reply write complete to the host as soon as the data is written to cache
memory. Data in cache memory is asynchronously written to the P-VOL and S-VOL.
A maximum of three S-VOLs per P-VOL is supported on Workgroup Modular
Storage and Adaptable Modular Storage systems, and only one on Thunder 9500 V
storage subsystems.
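The acknowledge-then-copy behavior described above can be modeled with a small sketch. This is a conceptual illustration only; the class and method names are invented for this example, and real array firmware behaves far more subtly:

```python
# Illustrative model of ShadowImage's asynchronous internal copy:
# the host write is acknowledged as soon as it lands in cache, and
# a background task later propagates it to the S-VOL(s).

class Array:
    def __init__(self, num_svols=1):
        self.cache = {}                      # pending writes (block -> data)
        self.pvol = {}
        self.svols = [dict() for _ in range(num_svols)]

    def host_write(self, block, data):
        """Returns 'write complete' immediately after caching."""
        self.cache[block] = data
        self.pvol[block] = data              # destage to the P-VOL
        return "write complete"

    def background_copy(self):
        """Asynchronous update copy from cache to every S-VOL."""
        for block, data in self.cache.items():
            for svol in self.svols:
                svol[block] = data
        self.cache.clear()

array = Array(num_svols=3)                   # DF700 allows up to 3 S-VOLs
assert array.host_write(7, "data-A") == "write complete"
assert array.svols[0] == {}                  # S-VOL is still behind
array.background_copy()
assert all(s == {7: "data-A"} for s in array.svols)
```

The point of the sketch is the ordering: the host is released at the cache write, so S-VOL currency always trails the P-VOL slightly until the background copy runs.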


Overview

• Adaptable Modular Storage and Workgroup Modular Storage systems: WMS100, AMS200, AMS500, AMS1000

ShadowImage Replication software operations involve the primary and secondary
volumes in the Adaptable Modular Storage system, Hitachi Storage Navigator
software, and the Hitachi Command Control Interface (CCI).
The ShadowImage Replication software system components include:
• ShadowImage Replication software volume pairs (P-VOLs and S-VOLs)
• Storage Navigator Modular software (Adaptable Modular Storage/Workgroup Modular Storage)
• CCI software on the UNIX and/or PC-server host


Differential Management

• DM-LU Overview
– The DM-LU is used for saving cache-resident ShadowImage Replication software management information
– At shutdown: writes the management information from cache to the DM-LU
– At boot: reads the management information from the DM-LU back to cache

(Figure: ShadowImage copy metadata moves from cache to the DM-LU at shutdown and back to cache at boot.)


ShadowImage Replication software Copy Operations

• Pair Create: establishes a new ShadowImage Replication software pair(s)

Initial copy:
– All data is copied from the P-VOL to the S-VOL; a differential bitmap tracks P-VOL changes
– The P-VOL remains available to the host for read/write I/O operations throughout

Update copy:
– Updates the S-VOL after the initial copy
– Write I/O to the P-VOL during the initial copy is duplicated to the S-VOL by an update copy after the initial copy completes

The ShadowImage Replication software paircreate operation establishes the newly
specified ShadowImage Replication software pair. The volumes that will become
the P-VOL and S-VOL must both be in SMPL (simplex) state before the pair creation
process can start.
The ShadowImage Replication software initial copy operation copies all data (the
entire volume, regardless of what is on it) from the P-VOL to the associated S-VOL.
The P-VOL remains available to all hosts for read and write I/Os throughout the
initial copy operation. Write operations performed on the P-VOL during the initial
copy operation are always duplicated to the S-VOL after the initial copy is
complete (update copy).
The status of the pair is COPY(PD) (PD = pending duplex) while the initial copy
operation is in progress. The pair status changes to PAIR when the initial copy is
complete.
You can select the pace for the initial copy operation when creating pairs. The
following pace options are available: Slower, Medium, and Faster.
The slower pace minimizes the impact of ShadowImage Replication software
operations on system I/O performance, while the faster pace completes the initial
copy operation as quickly as possible. The best timing is based on the amount of
write activity on the P-VOL and the amount of time elapsed between update copies.


The ShadowImage Replication software update copy operation updates the S-VOL
of a ShadowImage Replication software pair after the initial copy operation is
complete. Update copy operations take place only for duplex pairs (status = PAIR).
As write I/Os are performed on a duplex P-VOL, the system stores a map of the P-
VOL differential data, and then performs update copy operations periodically, based
on the amount of differential data present on the P-VOL as well as the elapsed time
between update copy operations. Update copy operations are not performed for
pairs with the following status: COPY(PD) (pending duplex), COPY(SP) (split
pending), PSUS(SP) (quick split pending), PSUS (split), COPY(RS) (resync),
COPY(RS-R) (resync-reverse), or PSUE (suspended).
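The trigger rule described here can be sketched as a simple predicate. The thresholds below are invented purely for illustration; the course does not document the actual firmware values:

```python
# Update copies run only for duplex pairs (status = PAIR), triggered
# either by the amount of differential data on the P-VOL or by the
# time elapsed since the last update copy. Threshold values are
# illustrative assumptions, not real firmware parameters.

DIFF_THRESHOLD_TRACKS = 1000
ELAPSED_THRESHOLD_SEC = 60

def should_update_copy(status, diff_tracks, elapsed_sec):
    if status != "PAIR":          # COPY(PD), PSUS, PSUE, etc.: never
        return False
    return (diff_tracks >= DIFF_THRESHOLD_TRACKS
            or elapsed_sec >= ELAPSED_THRESHOLD_SEC)

assert should_update_copy("PAIR", 5000, 10) is True    # much diff data
assert should_update_copy("PAIR", 10, 120) is True     # long elapsed time
assert should_update_copy("PAIR", 10, 10) is False
assert should_update_copy("PSUS", 5000, 120) is False  # split pair: no update copy
```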


ShadowImage Replication Software Commands

• Normal Resync Illustration

10:00 AM, status = PSUS:
– P-VOL dirty tracks: 10, 15, 18, 29 (host I/O continues)
– S-VOL dirty tracks: 10, 19, 23 (host I/O continues)

10:00:01 AM, pairresync (normal):
– The merged dirty tracks 10, 15, 18, 19, 23, and 29 are sent from the P-VOL to the S-VOL

10:00:45 AM, status = PAIR:
– P-VOL updates continue to reach the S-VOL through asynchronous updates

Pairresync for split pair: When a normal pairresync operation is performed on a
split pair (status = PSUS), the system merges the S-VOL track map into the P-VOL
track map and then copies all flagged tracks from the P-VOL to the S-VOL. When a
reverse pairresync operation is performed on a split pair, the system merges the P-
VOL track map into the S-VOL track map and then copies all flagged tracks from the
S-VOL to the P-VOL. This ensures that the P-VOL and S-VOL are properly
resynchronized in the desired direction. This also greatly reduces the time needed to
resynchronize the pair.
Pairresync for suspended pair: When a normal/quick pairresync operation is
performed on a suspended pair (status = PSUE), the subsystem copies all data on the
P-VOL to the S-VOL, since all P-VOL tracks were flagged as difference data when
the pair was suspended. Reverse and quick restore pairresync operations cannot be
performed on suspended pairs. The normal pairresync operation for suspended
pairs is equivalent to, and takes as long as, the ShadowImage Replication software
initial copy operation.
1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18,
and 29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19, and
23 are marked as dirty on the track bitmap for the S-VOL.
2. At 10:00:01 AM a pairresync (normal) command is issued. The track bitmaps for
the P-VOL and S-VOL are merged. The resulting track bitmap has tracks 10, 15, 18,
19, 23, and 29 marked as dirty. These tracks are sent from the P-VOL to the S-VOL
as part of an update copy operation.
3. Once the update copy operation in step 2 is complete, the P-VOL and S-VOL are
declared a PAIR.
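The bitmap merge in these steps can be expressed as a simple set union. This is a conceptual sketch only; the array maintains these maps as cache-resident track bitmaps:

```python
# Conceptual model of a normal pairresync on a split pair: merge the
# S-VOL dirty-track map into the P-VOL map, then copy every flagged
# track from the P-VOL to the S-VOL.

def normal_pairresync(pvol_dirty, svol_dirty):
    """Return the sorted list of tracks copied P-VOL -> S-VOL."""
    return sorted(pvol_dirty | svol_dirty)     # union of both bitmaps

# Values from the illustration above:
pvol_dirty = {10, 15, 18, 29}
svol_dirty = {10, 19, 23}
assert normal_pairresync(pvol_dirty, svol_dirty) == [10, 15, 18, 19, 23, 29]
```

Because only the flagged tracks are copied, the resync finishes far faster than a full initial copy, which is exactly the benefit described above.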


ShadowImage Replication Software Operations

(Figure: a timeline of pair states while application “A” runs. During Duplex Copy after Pair Create, “A” is online on the P-VOL. After Pair Suspend (“Split”), the S-VOL holds a point-in-time image used for backup or a second application “B”. Pair Resynchronization (“Resume”) re-syncs the pair, and Pair Reverse Synchronization copies the S-VOL back to the P-VOL. All volumes retain continuous RAID protection throughout.)

ShadowImage Replication software operations include:
• PAIR-CREATE
• PAIR-SPLIT
• PAIR-RESYNCHRONIZE
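These operations move a pair through the statuses described earlier in this section. A minimal sketch of the transitions follows; it is deliberately simplified and omits statuses such as PSUE and COPY(RS-R):

```python
# Simplified pair-status transitions for the operations listed above.
# Status names follow this section: SMPL, COPY(PD), PAIR, PSUS, COPY(RS).

TRANSITIONS = {
    ("SMPL",     "paircreate"): "COPY(PD)",   # initial copy starts
    ("COPY(PD)", "copy_done"):  "PAIR",       # initial copy complete
    ("PAIR",     "pairsplit"):  "PSUS",       # PiT copy usable on the S-VOL
    ("PSUS",     "pairresync"): "COPY(RS)",   # resync flagged tracks
    ("COPY(RS)", "copy_done"):  "PAIR",
}

def run(events, status="SMPL"):
    for ev in events:
        status = TRANSITIONS[(status, ev)]
    return status

assert run(["paircreate", "copy_done"]) == "PAIR"
assert run(["paircreate", "copy_done", "pairsplit"]) == "PSUS"
assert run(["paircreate", "copy_done", "pairsplit",
            "pairresync", "copy_done"]) == "PAIR"
```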


Hitachi Copy-on-Write Snapshot Software



Overview

• Comparing ShadowImage Replication software and Copy-on-Write Snapshot software functions

ShadowImage Replication software:
– All data is saved from the Primary Volume (P-VOL) to the Secondary Volume (S-VOL)

Copy-on-Write Snapshot software:
– Only differential data is saved from the Primary Volume (P-VOL) to the Data Pool area (Pool)
– The Pool is shared by multiple Snapshot images (V-VOLs)

(Figure: with ShadowImage, the main server reads/writes the P-VOL and the backup server reads/writes the S-VOL; with Snapshot, the backup server reads/writes virtual volumes (V-VOLs) that link to the P-VOL and to the Pool, where the differential data is saved.)

ShadowImage Replication software
• The primary volumes (P-VOLs) contain the original data; the secondary
volume(s) (S-VOLs) contain the duplicate data. Since each P-VOL is paired with
its S-VOL independently, each volume can be maintained as an independent
copy set that can be split (pairsplit), resynchronized (pairresync), and released
(pairsplit -S) separately.
Copy-on-Write Snapshot software
• The Copy-on-Write Snapshot software primary volumes (P-VOLs) contain the
original data; the Snapshot images (V-VOLs) contain the Snapshot data. Since
each P-VOL is paired with its V-VOL independently, each volume can be
maintained as an independent copy set that can be created (paircreate -split) and
released (pairsplit -S) separately.
• Since there is no data movement involved, a snapshot is immediately available,
for example, for backup purposes.
• Each Copy-on-Write Snapshot software pair consists of one primary volume
(P-VOL) and one of the up to 15 Snapshot images (V-VOLs), which are located
in the same Modular Series system.

Overview

• Comparing ShadowImage Replication software and Copy-on-Write Snapshot software

Size of physical volume:
– ShadowImage: P-VOL = S-VOL
– Copy-on-Write Snapshot: P-VOL ≧ Pool for one V-VOL

Pair configuration:
– ShadowImage: 1 P-VOL : 3 S-VOLs
– Copy-on-Write Snapshot: 1 P-VOL : 15 V-VOLs

Restore:
– ShadowImage: the P-VOL can be restored from the S-VOL
– Copy-on-Write Snapshot: the P-VOL can be restored from any V-VOL

Size of Physical Volume:
The P-VOL and the S-VOL have exactly the same size in ShadowImage Replication
software. In Copy-on-Write Snapshot software, less disk space is required for
building a V-VOL image, since only part of the V-VOL is on the Pool and the rest is
still on the primary volume.
Pair Configuration:
Up to three S-VOLs can be created for every P-VOL in ShadowImage Replication
software. In Copy-on-Write Snapshot software there can be up to 15 V-VOLs per
primary volume.
Restore:
A primary volume can only be restored from the corresponding secondary volume
in ShadowImage Replication software. With Copy-on-Write Snapshot software, the
primary volume can be restored from any Snapshot image (V-VOL).


Operation Scenarios

• After Writing to the Data Block on P-VOL
– When there is a write after a Snapshot has been created, the original data is saved to the Data Pool area (Pool) first.
– This saved data in the Pool is from then on used by the V-VOL(s).
– The Pool can be shared by multiple V-VOLs; therefore only one copy of the data is required.

(Figure: 1. Snapshot V01 is created on Monday. 2. Snapshot V02 is created on Tuesday. 3. A write arrives on Tuesday. 4. The old data is saved to the Pool, and V01 and V02, which hold Snapshot images, refer to this data. To link the Pool and the Snapshot images, addresses and generations are managed in cache.)

Now the data block on the P-VOL needs to be written to. However, before the actual
write is executed, the block is copied to the Pool area. The set of pointers that
actually represents the V-VOL is updated, and if there is now a request for the
original block through a V-VOL, the block is physically taken from the Pool.
From the host's perspective, the V-VOL (Snapshot image) has not changed, which
was the plan.
If the Pool area becomes full, all snapshots will be deleted; Pool utilization therefore
has to be monitored.
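The copy-on-write sequence above can be sketched in a few lines. This is an illustrative model with invented names; the array actually manages pool addresses and generations in cache metadata:

```python
# Conceptual copy-on-write sketch: before a P-VOL block is overwritten,
# the old data is saved to the shared Pool once, and every snapshot
# (V-VOL) that needs it points at that single saved copy.

class CowVolume:
    def __init__(self, blocks):
        self.pvol = dict(blocks)
        self.pool = []                     # saved old blocks, shared by V-VOLs
        self.snapshots = []                # per V-VOL: block -> pool index

    def create_snapshot(self):
        self.snapshots.append({})          # no data is moved at creation time
        return len(self.snapshots) - 1

    def write(self, block, data):
        old = self.pvol[block]
        saved_at = None
        for snap in self.snapshots:        # all snapshots share one saved copy
            if block not in snap:
                if saved_at is None:
                    self.pool.append((block, old))
                    saved_at = len(self.pool) - 1
                snap[block] = saved_at
        self.pvol[block] = data

    def read_snapshot(self, snap_id, block):
        snap = self.snapshots[snap_id]
        if block in snap:                  # old data lives in the Pool
            return self.pool[snap[block]][1]
        return self.pvol[block]            # unchanged data stays on the P-VOL

vol = CowVolume({0: "mon-data"})
v01 = vol.create_snapshot()                # Monday snapshot
v02 = vol.create_snapshot()                # Tuesday snapshot
vol.write(0, "tue-data")                   # write saves the old block once
assert vol.read_snapshot(v01, 0) == "mon-data"
assert vol.read_snapshot(v02, 0) == "mon-data"
assert len(vol.pool) == 1                  # one shared copy for both V-VOLs
```

Note how the snapshots cost nothing at creation time and consume Pool space only as the P-VOL is overwritten, which is why Pool utilization must be watched.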


Hitachi TrueCopy® Remote Replication Software



Disaster Recovery

• RTO and RPO Examples and Disaster Recovery Technologies

(Figure: disaster recovery technologies plotted by RTO (72 hrs, 12 hrs, 4 hrs, 1 hr, seconds) and RPO (24 hrs, 1 hr, 0 hrs). At an RPO of 24 hours: tape backup with a vendor DR site, or tape backup with a hot site. At an RPO of 1 hour: database log shipping and data vaulting with a hot site. At an RPO of 0: synchronous data mirroring with a hot site, with standby servers, or with clusters, up to disaster-tolerant extended-distance clusters with bi-directional mirroring.)


TrueCopy Specifications

• One LUN can belong to only one TrueCopy software pair
– P-VOL : S-VOL = 1 : 1
• P-VOL and S-VOL size must be exactly the same
• Not dependent on RAID levels
• Does not support replicating from Fibre Channel to SATA or from SATA to Fibre Channel

(Figure: for example, a RAID10 (2D+2D) or RAID5 (4D+1P) P-VOL can be paired with a RAID5 (5D+1P) or RAID5 (8D+1P) S-VOL.)


Configurations

• TrueCopy Software and ShadowImage Software Configurations

Synchronous TrueCopy with ShadowImage at each site: this configuration allows you to use ShadowImage software to provide multiple backup copies of a single TrueCopy software P-VOL at local as well as remote sites.

Asynchronous TrueCopy cascaded with ShadowImage: this configuration allows you to use ShadowImage software and TrueCopy software to replicate data to a far remote site with no performance impact on the production application.

(Figure: in the first layout, ShadowImage S-VOLs are taken from the TrueCopy P-VOL and S-VOL at each site; in the second, a ShadowImage pair cascades into an asynchronous TrueCopy pair through extenders.)


TrueCopy and Copy-on-Write Snapshot Configurations

This configuration allows you to use Copy-on-Write Snapshot software to provide multiple backup copies of a single TrueCopy software P-VOL at local as well as remote sites.

(Figure: a synchronous TrueCopy pair between local and remote systems through extenders, with Copy-on-Write Snapshot V-VOLs of the P-VOL at both sites.)


TrueCopy Extended Distance

• Asynchronous Implementation of TrueCopy


– Allows for longer distance
– Less impact from network latency
– Secondary side will be 'behind' in time
– Less impact on the host interfaces on the primary side



Functional Overview

• Because copies are written asynchronously, data on the S-VOL may be older than the data on the primary volume (P-VOL)

1. The host writes data to the P-VOL on the local system.
2. The write is completed and the host is released.
3. Data that has not yet been transferred to the S-VOL is saved in the local Pool.
4. Update data is sent to the S-VOL.
5. The S-VOL update completes.
6. Consistent, internally determined data is saved in the remote Pool.

(Figure: a local Adaptable Modular Storage system with P-VOL and Pool, connected through extenders to a remote system with S-VOL and Pool.)
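The numbered flow can be modeled conceptually as follows. This is an illustrative sketch with invented names, not the array implementation; it shows why the S-VOL is "behind" in time until an update cycle completes:

```python
# Conceptual sketch of TrueCopy Extended Distance behavior: writes
# complete locally at once, untransferred data sits in the local pool,
# and the remote S-VOL lags until the next update cycle.

class TceMirror:
    def __init__(self):
        self.pvol = {}
        self.svol = {}
        self.local_pool = {}          # data not yet sent to the S-VOL

    def host_write(self, block, data):
        self.pvol[block] = data
        self.local_pool[block] = data # step 3: save untransferred data
        return "write complete"       # step 2: host released immediately

    def update_cycle(self):
        """Steps 4-5: send pooled updates to the remote side."""
        self.svol.update(self.local_pool)
        self.local_pool.clear()

m = TceMirror()
m.host_write(1, "v1")
assert m.svol == {}                   # S-VOL is 'behind' in time
m.update_cycle()
assert m.svol == {1: "v1"}            # caught up after the cycle
```

Because the host is never held for the long-distance transfer, latency on the replication link does not slow production I/O, which is the core benefit claimed for this product.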


Concurrent Use with Other Copy Products

TrueCopy software:
• Not supported. A TrueCopy Extended Distance volume cannot be cascaded or used together with a TrueCopy software volume.

ShadowImage software:
• TrueCopy Extended Distance and ShadowImage Heterogeneous Replication software can be used concurrently.
• A TrueCopy Extended Distance volume cannot be cascaded with a ShadowImage volume.

Copy-on-Write Snapshot software:
• A TrueCopy Extended Distance volume can be cascaded only with a Copy-on-Write Snapshot software P-VOL.


Examples of Supported Configurations

(Figure: three supported layouts. 1: two TrueCopy Extended Distance pairs in consistency groups CTG0 and CTG1 between the local and remote systems. 2: TrueCopy Extended Distance pairs whose remote S-VOLs are cascaded with Copy-on-Write Snapshot V-VOLs. 3: a TrueCopy Extended Distance pair with Copy-on-Write Snapshot V-VOLs cascaded on both the local and remote systems.)


RAID Manager and Command Control Interface



Command Control Interface

• What is RAID Manager and CCI?


– Provides a command line interface to all Hitachi Data Systems Replication
Products
• ShadowImage Replication software
• Copy-on-Write Snapshot software
• TrueCopy Remote Replication software
• TrueCopy Extended Distance
– Is executed on a host
• For what do we use RAID Manager and CCI?
– To control and/or script ShadowImage Replication software operations
– To control and/or script Copy-on-Write Snapshot software operations
– To control and/or script TrueCopy software operations
– To control and/or script TrueCopy Extended Distance
– To control and/or script Data Retention Utility tasks


RAID Manager CCI configures and manages the following replication products:
ShadowImage Replication software, Copy-on-Write Snapshot software, and
TrueCopy Remote Replication software.
RAID Manager CCI is also used for configuration of a few other products in the
enterprise area.


• RAID Manager and CCI
– The RAID Manager environment establishes a “conversation”
– Instances negotiate out of band (TCP/IP)
– In-band communication with the array
• Through the SCSI channel
• Command Device

• TrueCopy software: 2 servers, 2 HORCM instances

(Figure: each server runs its application and software plus a HORCM instance; instance 0 reads HORCM0.conf and instance 1 reads HORCM1.conf. HORCM commands are issued on either server, the two RAID Manager instances communicate with each other over the LAN, and each instance controls its array in-band through a command device; HORC control links the P-VOL on one array to the S-VOL on the other.)


HORCM_DEV

• Multi-path configuration
– LUN 9 is mapped to CL1-B and CL2-B

HORCM_DEV
#dev_group dev_name port # target ID LUN# MU#
ora1 ora_tab1 CL1-B 0 9 0
Or
ora1 ora_tab1 CL2-B 0 9 0


horcm0.conf Managing One Volume

HORCM_MON
#ip_address service poll(ms) timeout(ms)
SVR1 horcm0 6000 3000

HORCM_CMD
#dev_name
/dev/rdsk/c2t1d1s2 # Solaris
\\.\Physicaldrive2 # Windows NT, 2000 and 2003
\\.\Volume{f66c6208-6da0-11da-912a-505054503030} # Windows 2000 and 2003

HORCM_DEV
#dev_group dev_name port # target ID LUN# MU#
oradb1 disk1 CL1-A 3 1 0

HORCM_INST
#dev_group ip_address service
oradb1 SVR1 horcm1


HORCM_MON describes:
• The IP address or host name of the server running instance 0.
• service (local service) is the /etc/services file port name entry for instance 0.
The port can be understood as a “socket” number used to communicate with
instance 1, whose entry is also located in the /etc/services file, and vice versa.
• The poll interval in milliseconds (1000 milliseconds = 1 second). This sets how
often the HORCM daemon will “look at” the command device for status about
the pairs. When this number is higher, HORCM daemon overhead on the
running server is reduced. 1000 ms is the default value.
• The timeout value in milliseconds (1000 milliseconds = 1 second). This is the
time the HORCM daemon will wait for status from instance 1 before timing
out. In ShadowImage Replication software mode, this applies to
communication between the two instances running on one server, when
applicable.
HORCM_CMD describes the path to the raw device serving as the command device.
HORCM_DEV describes the source LUNs:
• dev group name associates all LUNs to be controlled as a group for manipulation
from one command.


• dev name must be unique for all devices within a group.
• Source-to-target pairs are associated by group and device name, not by line
number. PORT#, TID, and LUN# are self-explanatory.
• MU# is the mirror unit number, used when creating one source to multiple targets:
– 0 = first copy (implied)
– 1 = second copy of the same source
– 2 = third copy of the same source
Each MU number must be specified as a unique group name and device
name. The MU field is valid in the horcm0.conf file only, not in horcm1.conf.
For TrueCopy, MU is always 0 on the modular arrays, since one source can
only have one target.
HORCM_INST describes:
• The dev group field requires one entry per group specified in the HORCM_DEV
definitions.
• ip address describes the IP address or the name of the host running instance 1.
• service is the /etc/services file port name entry for instance 1.
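For reference, a matching horcm1.conf for the instance-1 side of this ShadowImage configuration might look like the following sketch. The host name, service names, command device path, and LUN number are illustrative assumptions and must match the actual environment:

```
HORCM_MON
#ip_address   service   poll(ms)   timeout(ms)
SVR1          horcm1    6000       3000

HORCM_CMD
#dev_name
\\.\Physicaldrive3        # command device as seen by instance 1 (hypothetical path)

HORCM_DEV
#dev_group   dev_name   port #   target ID   LUN#   MU#
oradb1       disk1      CL1-A    3           2      0

HORCM_INST
#dev_group   ip_address   service
oradb1       SVR1         horcm0
```

Note that the group and device names mirror those in horcm0.conf, HORCM_DEV points at the S-VOL instead of the P-VOL, and HORCM_INST points back at the horcm0 service, completing the two-instance "conversation."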


Some CCI Commands

• pairdisplay - show pair status
• paircreate - create a pair
• pairsplit - split a pair, temporarily or permanently
• pairresync - resynchronize a pair after a split
• raidscan - find LUNs and show their status
• setenv - set environment variables
• sync - flush system buffers to disk
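These commands are typically scripted against a dev_group defined in the HORCM configuration files. The sketch below only builds typical command strings; it does not execute CCI. The flags shown (-g, -vl, -fc) are common CCI usage but should be confirmed against the CCI reference for your microcode level:

```python
# Build (but do not execute) a typical ShadowImage command sequence
# for the dev_group 'oradb1' defined in horcm0.conf. Flags shown are
# common CCI usage: -g names the group, -vl marks the local instance's
# volume as the P-VOL, -fc adds copy-progress output to pairdisplay.

def cci(command, group, *flags):
    return " ".join([command, "-g", group, *flags])

sequence = [
    cci("paircreate",  "oradb1", "-vl"),   # create pair, local side is P-VOL
    cci("pairsplit",   "oradb1"),          # take the point-in-time copy
    cci("pairresync",  "oradb1"),          # resynchronize after the split
    cci("pairdisplay", "oradb1", "-fc"),   # show pair status
]

assert sequence[0] == "paircreate -g oradb1 -vl"
assert sequence[-1] == "pairdisplay -g oradb1 -fc"
```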



Managing Replication with the Hitachi Replication Manager Software


Module Objectives

• Upon completion of this module, the learner should be able to:
– Identify the positioning of Replication Manager software
– Describe the Replication Manager system configuration for Open systems and
Mainframes
– Identify the server requirements
– Identify the installation and configuration of prerequisite software
– Install and setup Replication Manager server
– Configure the Replication Manager server
– Verify the successful install of Replication Manager software
– Describe the Replication Manager modules and other dependent software
components



Positioning of Replication Manager

(Figure: Replication Manager provides replication monitoring and management across open volumes on modular storage and open and mainframe (M/F) volumes on enterprise storage. It works with Device Manager and Storage Navigator for configuration and storage management, and with RAID Manager and Business Continuity Manager for replication management. The replication technologies managed include ShadowImage (SI), TrueCopy (TC), QS, and Copy-on-Write (CoW) on modular storage, and SI, TC, and Universal Replicator (UR) on enterprise storage, both in-system and remote.)

• Replication Manager software provides monitoring for both RAID series (open
and mainframe volumes) and DF series storage subsystems (open volumes)
• Replication Manager software requires (is dependent on) Device Manager and
uses the RAID Manager command control interface (CCI) and the Device Manager
agent for monitoring open volumes
– Device Manager provides volume configuration management
– RAID Manager (CCI) is used by Replication Manager for pair status watching
• Replication Manager software requires (is dependent on) Business Continuity
Manager (BCM) or the Mainframe agent for monitoring the mainframe volumes
Chart Legend
• TC stands for TrueCopy
• SI stands for ShadowImage
• UR stands for Universal Replicator
• CoW stands for Copy-on-Write


Architecture of Replication Manager in an Open Systems and Mainframe Environment

• Standard Configuration of a Site

(Figure: a browser-based management client connects over the IP network to the management server, which runs the HRpM server, the HDvM server, and HBase. A pair management server (CCI server) runs the Host Agent (agent base with the common plug-in and HDvM agent plug-in) and RAID Manager (CCI), and reaches the DF and RAID (R400, R600/500/450) subsystems through the FC-SAN and their command devices and SVPs. Production servers run the applications, with or without an agent and CCI. A mainframe host (z/OS) runs BCM with an HTTP server.)

Product Notations
• HDvM stands for Hitachi Device Manager
• HRpM stands for Hitachi Replication Manager
• BCM stands for Business Continuity Manager
• HBase stands for HiCommand Common Component Base (HBase is a bundled
component with Device Manager and Replication Manager)
Standard system configuration of a site is comprised of:
1. Management Server: Replication Manager and Device Manager software
installed on the same server. Replication Manager depends on the Device
Manager. The HBase (HiCommand Common Component Base) is automatically
installed by the Device Manager or Replication Manager installation. It is highly
recommended to use the same version number (major and minor) for Device
Manager server and Replication Manager server.
2. Pair Management Server (Open Systems):
Host Agent : Only a single Host Agent is provided for the Device Manager and
Replication Manager. The Host Agent V6.0 is completely integrated into one

Page 3-44 HDS Confidential: For distribution only to authorized parties.


Section 3
Architecture of Replication Manager in an Open Systems and Mainframe Environment

agent module. A single agent installed on the server works for Device Manager, Replication Manager, and Provisioning Manager.
RAID Manager (CCI): Replication Manager requires RAID Manager to manage replication pair volumes. The servers on which the RAID Manager software is installed must have a Host Agent so that Replication Manager can recognize and manage the pair volume instances.
3. Pair Management Server (Mainframes):
BCM (Business Continuity Manager): BCM is the software product that runs on the mainframe and manages the replication pair volumes assigned to the mainframe computers. Business Continuity Manager 5.0 or later, or Mainframe Agent 6.0 or later, can be used. Replication Manager can monitor the mainframe replication volumes by communicating with BCM. Although Replication Manager V6.0 can create, modify, and delete open-systems replication pair volumes, it cannot create, modify, or delete mainframe pair volumes, even through BCM.
4. Host (Production Server): A host runs application programs. Installation of the Device Manager agent is optional. Replication Manager can acquire the host information (host name, IP address, and mount point) if the agent is installed.
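The recommendation above that the Device Manager and Replication Manager servers share the same major and minor version can be expressed as a small check. The version strings here are hypothetical examples.

```python
# Illustrative sketch: verify that the Device Manager (HDvM) and Replication
# Manager (HRpM) server versions match on major.minor, as the configuration
# notes recommend. Version strings are hypothetical examples.

def major_minor(version):
    """Extract (major, minor) from a dotted version string like '6.0.1'."""
    parts = version.split(".")
    return int(parts[0]), int(parts[1])

def versions_aligned(hdvm_version, hrpm_version):
    """True when both servers share the same major and minor version."""
    return major_minor(hdvm_version) == major_minor(hrpm_version)

print(versions_aligned("6.0.1", "6.0.3"))  # True: patch level may differ
print(versions_aligned("6.0.1", "5.9.0"))  # False: major.minor mismatch
```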


Types of Install

• New Installation
– Device Manager Server v6.0 and Device Manager Agent v6.0 are prerequisite products
• Upgrade Installation
– Upgrade from Replication Monitor v5.0 or later is supported
– Replication Monitor is replaced by Replication Manager


Installation and Configuration of Prerequisite Software

• Before installing Replication Manager software, install and configure the following prerequisite software:
– Install Device Manager software 6.0 on the management server
– Install the Device Manager license key
– Install CCI and Device Manager agents 6.0 on the pair management servers for open systems
– Install Business Continuity Manager or Mainframe Agent on the pair management servers for mainframes
– Add the resources to Device Manager
• Subsystems
• Hosts
• Pair Management servers
– Back up the databases of other Storage Command Suite products before installing Replication Manager software


Concept of Resource Groups

• Resource Groups provide access control functionality
– A resource group is a collection of hosts and storage subsystems that are grouped by purpose and associated with a user for controlled access by that user
– Large environments require security management for resources, for example:
• Who can access this subsystem?
• An administrator is assigned to hosts and subsystems that are grouped by a site or department
– Users can only see their allocated resources in the GUI
– A user can be associated with multiple resource groups to increase the range of operations
• Types of Resource Groups
– All Resources
• System-defined, containing all the resources in the system
– User-Defined
• Users with administrative privileges can define a resource group and add
resources, such as hosts and subsystems
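The resource-group model above can be sketched as a small access-control check: a user sees only the resources in groups associated with that user. All group, host, and subsystem names below are hypothetical.

```python
# Hypothetical sketch of the resource-group access model. "All Resources" is
# the system-defined group; the others are user-defined. Names are invented
# for illustration only.

ALL_RESOURCES = {"AMS500-01", "USP-02", "host-db1", "host-mail1"}

RESOURCE_GROUPS = {
    "All Resources": ALL_RESOURCES,             # system-defined group
    "SiteA-Oracle": {"AMS500-01", "host-db1"},  # user-defined groups
    "SiteB-Mail": {"USP-02", "host-mail1"},
}

USER_GROUPS = {
    "admin": ["All Resources"],
    "dba": ["SiteA-Oracle"],
    "ops": ["SiteA-Oracle", "SiteB-Mail"],  # multiple groups widen access
}

def visible_resources(user):
    """Union of all resources in the user's associated groups."""
    seen = set()
    for group in USER_GROUPS.get(user, []):
        seen |= RESOURCE_GROUPS[group]
    return seen

print(sorted(visible_resources("dba")))  # ['AMS500-01', 'host-db1']
```

Associating a user with an additional group simply adds that group's members to the union, which matches the "increase the range of operations" behavior described above.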



Section 4
Services Oriented Storage Solutions from Hitachi Data Systems

Hitachi Device Manager Software

Hitachi Tuning Manager Software

Hitachi Content Archive Platform

Virtual Tape Library Solutions by Hitachi Data Systems and Hitachi Data Protection Suite Solutions


Services Oriented Storage Solutions from Hitachi Data Systems


Applications are the Link

Applications are the critical driver of business process and decision making,
impacting organizational growth, risk, and profitability

Shift the conversation from terabytes to the application (because strategic applications need strategic storage)

[Figure: application types — messaging, imaging, content management, databases, ERP, backup/DR, archiving.]

Services Oriented Storage Solutions is a business-centric framework for aligning IT storage resources with constantly changing business requirements

• Applications are the link between business and Information Technology (IT).
By focusing on applications and addressing their unique storage
requirements, Hitachi Data Systems can help organizations address their key
business challenges.
Services Oriented Storage Solutions is a business-centric framework for aligning IT
storage resources with constantly changing business requirements. It provides a
dynamic, flexible platform of integrated storage services enabling organizations and
users to optimize storage infrastructure while reducing cost and complexity.


Services Oriented Storage Solutions

• Enterprises need solutions that manage and optimize their IT infrastructure to meet business requirements – an application-centric approach
[Figure: an ERP application and a static content application, each shown from both an application view and a storage view.]
– Services Oriented Storage Solutions are comprised of:
• Hardware
• Software
• Services
– Most storage vendors focus on performance and capacity – often only from the storage perspective!


Services Oriented Storage Solutions: One Platform for All Data

Structured Data (RDB, Apps):
• High-end Enterprise Application/DB Level Awareness
– Tiered storage/virtualization through the USP and NSC hardware platforms
– Common protection solutions
– Common storage management
• Midrange Application/DB Enablement
– NSC and AMS/WMS hardware platforms
– Common storage management
– Common protection solutions
Unstructured Data (Files, Metadata, Content):
• Archiving/Object/Content
– Foundation for open, scalable, and integrated content solutions
• NAS — Two Key Segments
– High Performance NAS, for high-throughput applications
– Standard NAS, for SAN/NAS consolidation and file and print services
Integrated strategy: common storage and data management, tiered storage, data protection, security, and common search

USP stands for Universal Storage Platform
NSC stands for Network Storage Controller
AMS stands for Adaptable Modular Storage
WMS stands for Workgroup Modular Storage


Services Oriented Storage Solutions: Architecture Summary

[Figure: architecture summary. Applications (email, CRM, file/print, database, ERP, ECM) sit on the Services Oriented Storage platform: Object Services (index, search, classification, security), File Services (virtualization, replication, migration, de-duplication, security, encryption, archiving), and Block Services (virtualization, discovery, partitioning, provisioning, volume management, replication, migration, security, metering) over tiered physical storage (FC, SATA, tape, archive). Sample metrics include QoS, SLA, I/O, RPO, and RTO; storage practices include storage economics, data classification, risk analysis, compliance and archiving, chargeback, and consolidation and utilization.]

Key Objective: to illustrate to the customer how Services Oriented Storage Solutions is built on an integrated platform of services and why that is important to them.
Key Points:
1. Services Oriented Storage Solutions provides a single platform for all block, file,
and object services. This eliminates the traditional silo approach to storage we
highlighted earlier in the presentation.
2. Using Services Oriented Storage Solutions customers can align their storage with
application requirements based upon metrics including quality of service (QoS),
SLA, I/O, RTO, and others. Some of these metrics are highlighted in the Sample
Metrics portion of the graphic.
3. Professional services are a key part of Services Oriented Storage Solutions.
Hitachi offers services for consulting, design, implementation, and health checks.
Some of our business-centric consulting services are highlighted in the Storage
Practices portion of the graphic.
As we have described throughout this module, the Services Oriented Storage
Solutions platform is a business-centric concept enabling organizations to closely
align their storage infrastructure with their business requirements. While many
storage vendors may claim to have business-centric strategies only Hitachi can


deliver because Service Oriented Storage Solutions are built upon a dynamic,
flexible platform of integrated storage services enabling customers to optimize
storage infrastructure while reducing cost and complexity. The platform is both
powerful and simple:
The architecture summary illustrates that the Services Oriented Storage Solutions
are comprised of an integrated stack of services including:
• Block Services, which include volume virtualization, discovery, provisioning, partitioning, volume management, replication, migration, security, and metering
• File Services, which include file virtualization, replication, migration, security, encryption, and archiving
• Object Services, which include content services such as index, search, classification, and security


Solutions Focus

• A business-centric approach for aligning IT storage resources with constantly changing business requirements, enabled by the Services Oriented Storage platform
[Figure: business drivers (compliance, cost, risk, governance, efficiency, value) aligned through Services Oriented Storage Solutions to IT solution areas: professional services, business continuity, tiered storage, storage management, virtualization, archiving, and NAS.]

Key Objective: Illustrate the link between customer business challenges and our solution focus areas.
Key Points:
1. As illustrated on the previous slide, Services Oriented Storage is a platform of integrated services which, used in conjunction, create Services Oriented Storage Solutions.
2. In addition to hardware and software services components, Services Oriented Storage Solutions offer professional consulting, design, and implementation services to ensure customers maximize their investment in Hitachi solutions.
3. Hitachi Data Systems’ solution approach is to understand the customer’s key business and Information Technology challenges, and then to deploy the appropriate solutions to address their needs.


Hitachi Device Manager Software


Device Manager Software Value Proposition

• Device Manager software centrally manages all Hitachi Data Systems storage
– A single console, single product for managing all tiers of storage
– One common interface, browser and CLI
– CIM 2.8 / SMI-S 1.1 enabled
– Discovery, configuration, monitoring, reporting, and provisioning of storage

Benefits
• Improved productivity of IT resources
• Integrated data center and enterprise
operations
• Utilization of enterprise storage assets
• Risk mitigation
• Proactive alerts on storage arrays to
prevent outages
• Disaster recovery management to
minimize downtime

Device Manager software manages all Hitachi Data Systems arrays — Thunder,
Lightning, and Universal Storage Platform — with the same interface. It can also
manage multiple arrays in a network environment. Targeted for users managing
multiple storage arrays in open or shared environments, Device Manager software
quickly discovers the key configuration attributes of storage systems and allows
users to begin proactively managing complex and heterogeneous storage
environments quickly and effectively using an easy-to-use browser-based Graphical
User Interface (GUI). Device Manager software enables remote storage management
over secure IP connections and does not have to be direct-attached to the storage
system.

NSC stands for Network Storage Controller
AMS stands for Adaptable Modular Storage
WMS stands for Workgroup Modular Storage
USP stands for Universal Storage Platform
9900V stands for Lightning 9900 V Series enterprise storage systems
9500V stands for Thunder 9500 V Series modular storage systems


Hitachi Storage Management Suite Products

[Figure: Storage Management Suite by functional layer. Top (business/application modules): Application QoS modules (Oracle, Exchange, Sybase), Protection Manager modules (Exchange, SQL Server), QoS for File Servers, SRM, and Hitachi Dynamic Link Manager (path failover and failback, load balancing). Middle (storage operations modules): Chargeback, Global Reporter, Path Provisioning, Backup Services, Tiered Storage Manager, Replication Monitor, Tuning Manager, and Storage Services Manager (path management, capacity monitoring, performance monitoring). Bottom: Device Manager software over Hitachi array services (configuration, reporting, provisioning, replication, Hitachi Resource Manager, Hitachi Performance Maximizer), exposed through the HDS API and CIM/SMI-S. Legend distinguishes new products, updated products, heterogeneous modules, and Hitachi storage-specific modules.]

This graphic represents a view of the Storage Management Suite laid out according to functional layer. Light blue modules support heterogeneous environments. Dark blue modules are Hitachi storage system-specific.
This is not a top-down dependency chart, although there are some top-down dependencies here. Rather, it is sorted into rows according to what the purpose/benefit of each product is aimed at.
• The first layer at the bottom comprises Hitachi storage system-specific modules for supporting and interfacing with Hitachi arrays to get the most out of Hitachi Data Systems storage.
• The second layer is made up of products that support storage systems on an operational basis — things that make efficient and reliable management of storage possible.
• The top layer consists of modules that are application-specific tools to improve application-to-storage service levels.


Device Manager Software Value Proposition

• Organizes and manages storage from a logical perspective, along lines of business, departments, criticality, or storage class
• Immediate view of available storage and current usage
• Consolidated control of Hitachi storage as well as externally attached Network Storage Controller or Universal Storage Platform storage
• Enables easy deployment of storage resources to meet business and application needs
[Figure: Device Manager presents logical groups (for example Finance, Email, Santa Clara, Oracle) over an FC/IP SAN, drawn from a physical storage pool tiered for high performance, 99.99% and 100% availability, general purpose, backup, and archive (Thunder 9500V, Lightning 9900V, Thunder 9500 SATA).]


Device Manager Software Components

• Device Manager software consists of the following components:
– Server and its subcomponent (on the management server)
– Host agent (on the customer production server)
– Management console (on the web browser)
[Figure: agents on AIX, HP-UX, and Windows production servers (hosts) connect over the management LAN to the management server, which runs the Device Manager server on HBase; a management console (client) accesses the server through a browser, and the storage systems attach via the SAN.]

• Host Agents allow for server-side storage provisioning (through the Provisioning Assistant/Manager) and provide the server's view of its visible storage to Device Manager.
• Installing the Host Agent software on a server allows this server to be added to the Device Manager configuration. The same operation can be achieved through scripting the CLI or running the "LUN Scan" command.


Device Manager Software Provisioning Assistant

• Device Manager’s Provisioning Assistant software functionalities are:
– Storage Pool Management
– Host Volume Management
[Figure: allocating storage selects the optimal LDEVs from the storage pool and allocates them to a host (launching HDvM); creating a file system then creates the device file, creates the file system, and mounts it on the host.]

Provisioning Assistant software provides the functionality to integrate and manage various models and types of storage subsystems as a single, logical storage pool. In Provisioning Assistant software, a storage pool refers to a managed data storage area that resides on a set of storage subsystems. A storage pool is a collection of volumes (LUs). You can use Device Manager's All Storage (My Storage) functionality to place the storage pools into hierarchies and manage a storage pool for each user group.
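The "select the optimal LDEVs from the storage pool" step can be sketched as a simple best-fit search over a pool. The pool contents, LDEV attributes, and selection criterion below are hypothetical; real Provisioning Assistant logic also weighs subsystem type, RAID level, and performance class.

```python
# Hypothetical sketch of selecting an optimal LDEV from a storage pool:
# a best-fit pick that satisfies a capacity request while wasting the least
# space. Pool entries and attribute names are invented for illustration.

POOL = [
    {"ldev": "00:10", "subsystem": "AMS500", "gb": 50, "free": True},
    {"ldev": "00:11", "subsystem": "AMS500", "gb": 100, "free": True},
    {"ldev": "01:2A", "subsystem": "USP", "gb": 200, "free": False},
    {"ldev": "01:2B", "subsystem": "USP", "gb": 120, "free": True},
]

def best_fit(pool, requested_gb):
    """Smallest free LDEV that still satisfies the request, or None."""
    candidates = [v for v in pool if v["free"] and v["gb"] >= requested_gb]
    return min(candidates, key=lambda v: v["gb"], default=None)

choice = best_fit(POOL, 80)
print(choice["ldev"])  # 00:11 — the 100 GB volume wastes less than 120 GB
```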


Device Manager Command Line Interface

• The CLI enables you to perform Device Manager software operations by issuing commands from the CLI client interactively or using a script
• The CLI communicates with, and runs as a client of, the Device Manager server; communication uses the XML/API over the HTTP (or HTTPS) protocol
• Device Manager In-Band CLI
– A new CLI which communicates with the storage system through a command device
– Provided as part of the Device Manager software bundle
– Supports Universal Storage Platform, Lightning legacy systems, Network Storage Controller, Thunder legacy systems, and HP StorageWorks XP
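The CLI-to-server exchange described above — an XML request posted over HTTP — can be sketched as follows. The endpoint, request element names, and parameter format here are hypothetical illustrations, not the actual Device Manager XML API.

```python
# Illustrative sketch of a CLI-style client posting an XML command to a
# management server over HTTP. The XML element names and any server URL
# are hypothetical — not the real Device Manager API schema.

import urllib.request
import xml.etree.ElementTree as ET

def build_request(command, **params):
    """Build a small XML command document, e.g. <Request command="...">."""
    root = ET.Element("Request", {"command": command})
    for name, value in params.items():
        ET.SubElement(root, "Param", {"name": name, "value": str(value)})
    return ET.tostring(root, encoding="unicode")

def send(server_url, xml_body, timeout=10):
    """POST the XML to the server and return the raw response body."""
    req = urllib.request.Request(
        server_url,
        data=xml_body.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode("utf-8")

body = build_request("GetStorageArray", model="AMS500", serialnum="85011234")
print(body)
```

Scripting against such an interface is what lets the CLI run both interactively and from batch jobs.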


Hitachi Tuning Manager Software


Storage Management Suite Products

• Functional View
[Figure: the Storage Management Suite functional-layer diagram shown earlier in this section, repeated here.]

This graphic is a view of the Storage Management Suite laid out according to functional layer. Light blue modules support heterogeneous environments. Dark blue modules are Hitachi storage system-specific.
This is not a top-down dependency chart, although there are some top-down dependencies here. Rather, it is sorted into rows according to what the purpose/benefit of each product is aimed at.
• The first layer at the bottom comprises Hitachi storage system-specific modules for supporting and interfacing with Hitachi arrays to get the most out of Hitachi Data Systems storage.
• The second layer is made up of products that support storage systems on an operational basis — things that make efficient and reliable management of storage possible.
• The top layer consists of modules that are application-specific tools to improve application-to-storage service levels.


The Performance and Capacity Management Challenge of a Networked Storage Environment

• Gather data from servers, databases, switches, and storage systems with device-specific tools, then consolidate, analyze, and correlate data that is presented in different formats.
[Figure: device-specific tools (a server tool, a switch tool, and a storage tool) each gather data along the application-to-SAN-to-storage path and produce separate server, switch, and storage reports.]
• Interpret each report separately
• Integrate the data manually (for example, in MS Excel):
– Synchronize time stamps
– Unify different data formats
– Correlate the various reports

Troubleshooting requires a view of the path from the application to the storage
system. Without a tool that consolidates and normalizes all of the data, the system
administrator has difficulty distinguishing between possible sources. When a
performance problem occurs or the “database (DB) application response time
exceeds acceptable levels”, they must quickly determine if the problem is in the
application server.
Server/App Analysis — is the problem caused by trouble on the server? (DB, file
system, and HBA)
Fabric Analysis — is there a SAN switch problem? (Port, ISL, and more)
Storage Analysis — is the storage system a bottleneck?
All of the data from the components of the Storage network must be gathered by
different device-specific tools and interpreted, correlated and integrated manually,
including the timestamps, in order to find the root cause of a problem.
Some customers achieve this by exporting large amounts of data (in CSV format) to spreadsheets and then manually sorting and manipulating the data.
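The manual correlation workflow described above — synchronizing time stamps across per-device CSV exports — can be sketched as follows. The file columns and metric names are hypothetical stand-ins for real tool exports.

```python
# Hypothetical sketch of manually correlating per-device metrics by time:
# each "tool export" is rows of (timestamp, value); we collapse timestamps
# to minute buckets so one row shows all layers at once. Column and metric
# names are invented for illustration.

import csv
import io
from datetime import datetime

SERVER_CSV = "time,cpu_pct\n2008-04-01 10:00:12,35\n2008-04-01 10:01:07,92\n"
STORAGE_CSV = "time,port_iops\n2008-04-01 10:00:55,4100\n2008-04-01 10:01:40,9800\n"

def by_minute(csv_text, metric):
    """Map 'YYYY-MM-DD HH:MM' -> metric value, collapsing to minute buckets."""
    out = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        ts = datetime.strptime(row["time"], "%Y-%m-%d %H:%M:%S")
        out[ts.strftime("%Y-%m-%d %H:%M")] = float(row[metric])
    return out

server = by_minute(SERVER_CSV, "cpu_pct")
storage = by_minute(STORAGE_CSV, "port_iops")

# Join the two reports on the shared minute key to see both layers at once.
for minute in sorted(server.keys() & storage.keys()):
    print(minute, server[minute], storage[minute])
```

A product like Tuning Manager automates exactly this alignment across many more sources, which is why the manual spreadsheet approach does not scale.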


Tuning Manager Agents


• The Tuning Manager server manages agents on multiple platforms
[Figure: Tuning Manager software (Main Console and Performance Reporter clients) manages, over the LAN, platform agents for Sun Solaris, HP-UX, AIX, and Windows, plus agents for Oracle, SAN, RAID, NAS, SQL, and DB2; the RAID agents monitor Hitachi modular storage, Universal Storage Platform, and Lightning 9900 and 9900 V Series systems across the SAN.]

Tuning Manager software consists of agents and a server. The agents collect performance and capacity data for each monitored resource, and the server manages the agents. This diagram shows an example system configuration.
Agents can run multiple instances to collect metrics from multiple application instances, fabrics, and storage systems.
The instances of the Agent for RAID collect metrics from enterprise storage systems using an in-band Fibre Channel connection communicating with the CMD device in the array. Modular storage is accessed via the LAN, using the DAMP utility to collect metric data.
In small environments, the Tuning Manager server can concurrently serve as a business server on Sun Solaris and Microsoft Windows. The maximum number of resources manageable by one Tuning Manager server is 16,000, and in that case Tuning Manager requires installation on a dedicated server. To be able to manage as many resources as possible with good performance, carefully consider the Tuning Manager system requirements.
Hitachi modular storage includes Adaptable Modular Storage and Thunder series storage. Universal Storage Platform refers to the Universal Storage Platform™; Lightning 9900 and Lightning 9900 V refer to the Lightning 9900™ and Lightning 9900 V™ Series enterprise storage systems.


Hitachi Performance Monitoring and Reporting Products

• Tuning Manager: advanced application-to-spindle reporting, analysis, and troubleshooting for all Hitachi storage systems
• Performance Monitor: detailed point-in-time reporting on individual Hitachi storage systems
• Storage Services Manager (QoS modules): heterogeneous path performance monitoring and capacity planning
[Figure: the I/O path from application, HBA/host, and switch to the storage system (array port, CHP, cache, ACP/DKC, parity group, disk), with each product's coverage mapped along it.]

This is a visualization of how these products work, and what they cover.
Storage Services Manager software provides visibility to performance within the
storage network, from the application to the storage system port. It does not provide
insight within the storage system. It is useful when a SAN includes storage systems
from multiple vendors.
Performance Monitor provides in-depth, point-in-time information about performance within a Hitachi storage system. It does not provide any information about the network, the host, or the application, nor does it provide any correlation to that information, even if used in conjunction with a product such as Storage Services Manager software.
Tuning Manager software provides end-to-end visibility for storage performance.
Though limited to Hitachi storage systems, it provides the most thorough view of
the system, tracking an I/O from an application to the disk. This ability to correlate
this information, and link from step-to-step in the I/O path provides the most
efficient solution to identifying performance bottlenecks.
I/O response time, both host side and array side:
• Version 4.0 adds the ability to monitor round-trip response time for troubleshooting and proactive service-level error-condition alerting, resulting in improved

application performance. On the Universal Storage Platform this ability extends to round-trip response to and from external storage.
Note there is no correlation between Storage Services Manager and Performance Monitor, so the two combined do not provide the same end-to-end performance information that Tuning Manager does.


Capacity and Performance Management

• Tuning Manager software provides:
– Monitoring and reporting on capacity and performance
– Historical reports to help optimize the current infrastructure
– Forecasting to help anticipate future growth and avoid surprises
– Alerts to provide proactive notification of potential problems
– A view of the performance of a resource at a specific past point in time
– A view of all SAN-attached devices and their relationships to each other

• Tuning Manager Feature Highlights
– Monitors storage capacity and storage performance metrics from application to device
– Helps to maintain IT service and operating level agreements
– Sets storage capacity and performance alerts
– Analyzes and forecasts future storage requirements
– Generates storage utilization and performance reports
– Automated storage management scripts
– Supports Oracle, Solaris, Windows
– Ready-made templates for analysis and trending
– User-customizable
– Notification options, including CLI support
– Historical, forecast, list, and graphical reports
– The ability to view the performance of a resource at a specific past point in time, so that you can correlate any recent configuration changes with changes in application performance or response time
– The ability to view all SAN-attached servers, databases, file systems, switches, storage systems, logical volumes, disk array groups, and their relationships to each other
• Forecasting data can easily be extracted by logging in with "User" level security
• Alerts can trigger sending an email message, sending an SNMP trap message, or running a shell script or batch file
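The alert behavior just described — compare metrics to thresholds, then dispatch a notification (email, SNMP trap, or script) — can be modeled as a small loop. The metric names, limits, and handler below are hypothetical, not Tuning Manager's actual alert engine.

```python
# Hypothetical sketch of capacity/performance alerting: evaluate samples
# against thresholds and hand each breach to a notification callback.
# Metric names and limits are invented for illustration.

THRESHOLDS = {
    "pool_used_pct": 85.0,     # capacity alert limit
    "port_response_ms": 20.0,  # performance alert limit
}

def evaluate(samples, thresholds=THRESHOLDS):
    """Return a list of (metric, value, limit) for every breached threshold."""
    return [
        (name, value, thresholds[name])
        for name, value in samples.items()
        if name in thresholds and value > thresholds[name]
    ]

def dispatch(alerts, notify):
    """Hand each alert to a callback (email sender, SNMP trap, script runner)."""
    for metric, value, limit in alerts:
        notify(f"ALERT {metric}: {value} exceeds limit {limit}")

samples = {"pool_used_pct": 91.2, "port_response_ms": 8.3}
alerts = evaluate(samples)
dispatch(alerts, notify=print)  # only the capacity threshold is breached here
```

Swapping the `notify` callback is what lets one evaluation loop drive email, SNMP, or script actions interchangeably.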


Tuning Manager Performance Reporter

• Performance Reporter Metrics Display
[Figure: the Tuning Manager server hosts the Main Console, its database, Performance Reporter, and the Collection Manager; a client running Internet Explorer or Netscape Navigator reaches them over the LAN, and Performance Reporter gathers data from the agent (AGT-DB).]
1. Launch Performance Reporter from the Main Console.
2. Select the metrics you want to see.
3. Performance Reporter gathers metric data from the Tuning Manager agent.
4. Tuning Manager software displays the data.

Performance Reporter does not display data from the Tuning Manager server database; it displays data from the agent database directly.


Hitachi Content Archive Platform


What an Active Archive Solution Must Deliver

• Fixed content retention
– Policies for preserving content for long periods
– May require Write Once, Read Many (WORM)
– Authentication
• Scalability
– Capacity — hold thousands of terabytes in a single repository
– Volume of files — ingest a growing volume of new files, such as email
• Reliability
– Be able to withstand simultaneous points of failure
– Withstand site disasters
• Accessibility
– Support and provision files to multiple applications from the same archive
– Search and access files using standard and open methods
[Figure: fixed content file types — digital video, satellite images, biotechnology, medical records, legal records, email.]

Customers demand a product that assures the retention of authentic, fixed content in an immutable form, that provides the scalability needed to address ever-increasing volumes of new content and the associated growth in storage capacity, and that provides the reliability required to meet customer DR/BC policies as well as the SLAs needed to ensure content is accessible when needed. The Hitachi Data Systems product delivers on all of these with the most robust platform for fixed content archiving.
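The WORM retention behavior described above — content may always be read, never modified, and deleted only after its retention period expires — can be sketched as a small policy model. The object model and retention rule below are hypothetical, not the platform's actual implementation.

```python
# Hypothetical sketch of WORM (Write Once, Read Many) retention policy:
# an archived object is written once, readable at any time, and deletable
# only after its retention period has expired. Illustration only.

from datetime import datetime, timedelta

class ArchivedObject:
    def __init__(self, name, content, retention_days):
        self.name = name
        self._content = content  # write once: no update method is offered
        self.retain_until = datetime.utcnow() + timedelta(days=retention_days)

    def read(self):
        """Read many: reads are always allowed."""
        return self._content

    def delete(self, now=None):
        """Deletion is refused while the object is under retention."""
        now = now or datetime.utcnow()
        if now < self.retain_until:
            raise PermissionError(f"{self.name} is under retention")
        self._content = None
        return True

obj = ArchivedObject("email-0001.eml", b"...message...", retention_days=2555)
print(obj.read() is not None)  # True: reads always succeed
try:
    obj.delete()
except PermissionError as e:
    print("refused:", e)       # deletion is blocked until retention expires
```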


Value of ISV Partner Ecosystem

[Figure: applications (digital video, satellite, biotechnology, medical, legal records, email) connect to ISV partner middleware, which in turn writes to the Content Archive Platform.]

Applications (for example, email and digital imaging) do not typically interact directly with the archive. They typically interface with a “middleware” ISV application that provides additional functionality before the data is passed to the archive. This additional functionality can include setting retention times, search, timed deletion of data, and replication. Once this middleware has processed the data, it is passed to the archive for storage of the data and the metadata produced by this pre-ingestion grooming.
Our ISV program is critical because it certifies the various ISV partners’ middleware with our solution to ensure a seamless solution. We offer two levels of certification: compatible and integrated. Compatible means that the ISV middleware software works with our solution. Integrated means that the ISV partner’s software has been modified to better integrate with our Content Archive Platform, allowing it to take advantage of some of its advanced features, such as retention time, shredding, and single instancing.
Hitachi Data Systems ISV partners supporting the Content Archive Platform cover several application categories for content archiving: email, Enterprise Content Management, file system, and database archiving.



Section 4
Three Solutions

• Hitachi Content Archive Platform — fully integrated appliance
   – HCAP: a utility device with integrated model WMS100 storage
   – Provides the highest level of security
• Hitachi Content Archive Platform — software-only
   – Customer provides storage, servers, and switches
• HCAP DL (Diskless) — appliance without embedded storage
   – Supports models WMS100, AMS200/500/1000, USP V, and NSC55
   – Provides ultimate flexibility and the ability to leverage existing Hitachi storage
(Slide graphic: Content Archive Platform support with HCAP DL functionality demanded — USP V, NSC55, AMS1000, AMS500, AMS200, WMS100)

Content Archive Platform’s fully integrated appliance includes:
• Content Archiver V2.0 software
• 1U server nodes (4GB memory) — start with two, scaling up in pairs
• Two Ethernet switches
• Two FC switches (16 port, expandable)
• Workgroup Modular Storage system array (controllers + disk; RAID-6)
• 42U rack
• Redundant connectivity, pre-cabled
Adaptable Modular Storage models:
• Adaptable Modular Storage 200
• Adaptable Modular Storage 500
• Adaptable Modular Storage 1000
Workgroup Modular Storage model:
• Workgroup Modular Storage 100
Universal Storage Platform V
Hitachi Network Storage Controller model NSC55



Section 4
Packaging and Configuration

• A cell package includes two nodes and one model WMS100 array using FC connectivity
• Multiple cells can be combined to form a larger system with a single archive namespace — supports up to 40 cells
• Base model includes two cells, starting at 4.8TB and 9.6TB usable capacity
Packaging and Configuration (slide graphic): each cell contains two “Cardiff” nodes serving SMTP, CIFS, NFS, HTTP, and WebDAV; cells connect through a network switch.

All Content Archive Platform cells must be the same size; this applies both to the
initial purchase and to upgrades.



Section 4
Virtual Tape Library Solutions by Hitachi Data Systems and Hitachi Data Protection Suite Solutions



Section 4
Virtual Tape Library

• Diligent Technologies Corporation's ProtecTIER, with HyperFactor™ technology, reduces bandwidth needs for replication
   – Innovative data de-duplication technology
      • Advanced data similarity analysis
   – Initially — reduces 30TB of tape backup capacity to 19TB of VTL storage
   – After 17 weeks, as the HyperFactor algorithms recognize more duplicate data, they reduce 287TB of tape backup to only 24TB of VTL storage
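Those two data points imply very different de-duplication ratios (logical backup capacity divided by physical VTL capacity). The arithmetic below is illustrative only, not ProtecTIER code:

```python
# Compute the de-duplication ratios implied by the slide's figures.
def dedup_ratio(logical_tb: float, physical_tb: float) -> float:
    """Logical (tape backup) capacity over physical (VTL) capacity."""
    return logical_tb / physical_tb

initial_ratio = dedup_ratio(30, 19)   # week one: roughly 1.6:1
mature_ratio = dedup_ratio(287, 24)   # after 17 weeks: roughly 12:1
```

The jump from about 1.6:1 to about 12:1 is what the slide means by the algorithms "recognizing more duplicate data" as repeated backups accumulate.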




Section 4
Hitachi Data Protection Suite Platform

Hitachi Data Protection Suite Platform (slide graphic): common platform services — scheduling, media management, policy management, user interface, index.



Section 4
Hitachi Data Protection Suite

• Unified platform comprising solutions for data backup and recovery, migration, archiving, and replication
   – Designed for disk-to-disk storage area network (SAN)–based data protection
   – Provides unified cataloging, indexing, and movement of data at various levels of granularity via a common user interface and policy engine
• Hitachi Backup and Recovery
   – Intelligent Media Agents
      • Integrate with the OS and applications to provide granular recovery
         – Windows 2000/2003 system state
         – Oracle, DB2, SQL Server tablespace or files
         – Exchange message, note, or mailbox



Section 4
Hitachi Storage Capacity Reporter Introduction

• Agentless storage capacity reporting
   – Hitachi Storage Capacity Reporter is a new reporting product that provides end-to-end storage capacity reporting, from applications and hosts to heterogeneous storage arrays, without the use of host-based agents
• Hitachi Storage Capacity Reporter
   – Storage capacity utilization reporting for storage arrays, hosts, and leading enterprise applications (Oracle, Microsoft Exchange, Microsoft SQL Server)
      • Current and historical views
      • Predictive analysis for future projected growth
   – Heterogeneous storage array support for Hitachi, EMC, Sun StorageTek, HP XP, and NetApp
   – Integrated with Hitachi Backup Services Manager
   – Web 2.0 architecture
      • Ease of use
      • GUI speed
      • Multiple reporting options



Section 4
Features, Capabilities, and Value

• Feature:
   – Storage array, host, and application capacity reporting
• Capability:
   – Identify overused, underused, or wasted storage resources
   – Provide capacity forecasting and predictive analysis
• Business Value:
   – End-to-end storage capacity view from the host perspective, complementing the storage array-side views provided by other Storage Command Suite products
   – Helps to ensure the availability and performance of mission-critical business applications
   – Easily deployable within a customer’s SAN environment
• What Makes This Unique?
   – Application-level storage reporting without the need to install host-based agents
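For a sense of what capacity forecasting and predictive analysis involve, the sketch below fits a least-squares line to historical usage samples and projects it forward. This is generic trending math, not Hitachi Storage Capacity Reporter code, and the sample data is invented:

```python
# Minimal capacity-trend sketch: least-squares fit over (week, used_tb) samples,
# then linear projection of future usage.
def linear_fit(samples):
    """Return (slope, intercept) of the least-squares line through samples."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Four weekly samples of used capacity, in TB (invented data)
history = [(0, 10.0), (1, 10.5), (2, 11.0), (3, 11.5)]
slope, intercept = linear_fit(history)
projected_week_12 = slope * 12 + intercept  # capacity expected at week 12
```

A real reporter would add confidence bounds and seasonality, but the core of a "projected growth" report is exactly this kind of trend extrapolation.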



Section 4
Supported Storage Arrays

• Hitachi Universal Storage Platform V
• Hitachi Universal Storage Platform VM
• Hitachi Universal Storage Platform
• Hitachi Network Storage Controller
• Hitachi Adaptable Modular Storage
• Hitachi Workgroup Modular Storage
• Hitachi Thunder and Lightning storage systems
• EMC CLARiiON, Symmetrix, and DMX
• HP XP Series
• Sun StorageTek 9900 Series
• NetApp FAS6000, FAS3100, FAS3000, and FAS2000 Series



Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A—
ACC — Action Code. A SIM (System Information Message) will produce an ACC, which takes an engineer to the correct fix procedures in the ACC directory in the MM (Maintenance Manual)
ACE (Access Control Entry) — Stores access rights for a single user or group within the Windows security model
ACL (Access Control List) — Stores a set of ACEs, so describes the complete set of access rights for a file system object within the Microsoft Windows security model
ACP (Array Control Processor) — Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back-end, it controls data transfer between cache and the hard drives.
ACP PAIR — Physical disk access control logic. Each ACP consists of two DKA PCBs, to provide 8 loop paths to the real HDDs
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters
AD — Active Directory
ADC — Accelerated Data Copy
ADP — Adapter
ADS — Active Directory Service
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address
AIX — IBM UNIX
AL (Arbitrated Loop) — A network in which nodes contend to send data and only one node at a time is able to send data.
AL-PA — Arbitrated Loop Physical Address
AMS — Adaptable Modular Storage
APID — An ID to identify a command device.
APF (Authorized Program Facility) — In z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
Application Management — The processes that manage the capacity and performance of applications
ARB — Arbitration or “request”
Array Domain — All functions, paths, and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI and/or LU configurations.
ARRAY UNIT — A group of Hard Disk Drives in one RAID structure. Same as Parity Group
ASIC — Application-specific integrated circuit
ASSY — Assembly
Asymmetric virtualization — See Out-of-band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress.
ATA — Short for Advanced Technology Attachment, a disk drive implementation that integrates the controller on the disk drive itself; also known as IDE (Integrated Drive Electronics). Advanced Technology Attachment is a standard designed to connect hard and removable disk drives
Authentication — The process of identifying an individual, usually based on a username and password.



Availability — Consistent direct access to information over time
—B—
B4 — A group of 4 HDU boxes that are used to contain 128 HDDs
Backend — In client/server applications, the client part of the program is often called the front-end and the server part is called the back-end.
Backup image — Data saved during an archive operation. It includes all the associated files, directories, and catalog information of the backup operation.
BATCTR — Battery Control PCB
BED — Back End Director. Controls the paths to the HDDs
Bind Mode — One of two modes available when using FlashAccess™, in which the FlashAccess™ extents hold read data for specific extents on volumes (see Priority Mode).
BST — Binary Search Tree
BTU — British Thermal Unit
Business Continuity Plan — Describes how an organization will resume partially or completely interrupted critical functions within a predetermined time after a disruption or a disaster. Sometimes also called a Disaster Recovery Plan.
—C—
CA — Continuous Access software (see HORC)
Cache — Cache Memory. Intermediate buffer between the channels and drives. It has a maximum of 64 GB (32 GB x 2 areas) of capacity. It is available and controlled as two areas of cache (cache A and cache B). It is fully battery-backed (48 hours).
Cache hit rate — When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate.
Cache partitioning — Storage management software that allows the virtual partitioning of cache and allocation of it to different applications
CAD — Computer-Aided Design
Capacity — Capacity is the amount of data that a drive can store after formatting. Most data storage companies, including HDS, calculate capacity based on the assumption that 1 megabyte = 1,000 kilobytes and 1 gigabyte = 1,000 megabytes.
CAPEX (capital expenditure) — The cost of developing or providing non-consumable parts for the product or system. For example, the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. (See OPEX.)
CAS — Column address strobe, a signal sent by the processor to a dynamic random access memory (DRAM) circuit that tells it that an associated address is a column address and activates that column address.
CCI — Command Control Interface
CE — Customer Engineer
Centralized management — Storage data management, capacity management, access security management, and path management functions accomplished by software.
CentOS — Community Enterprise Operating System
CFW — Cache Fast Write
CHA (Channel Adapter) — Provides the channel interface control functions and internal cache data transfer functions. It is used to convert the data format between CKD and FBA. The CHA contains an internal processor and 128 bytes of edit buffer memory.
CH — Channel
CHAP — Challenge-Handshake Authentication Protocol
CHF — Channel Fibre
CHIP (Client-Host Interface Processor) — Microprocessors on the CHA boards that process the channel commands from the hosts and manage host access to cache.
CHK — Check
CHN — CHannel adapter NAS
CHP — Channel Processor or Channel Path
CHPID — Channel Path Identifier
CHS — Channel SCSI
CHSN — Cache memory Hierarchical Star Network
CHT — Channel tachyon, a Fibre Channel protocol controller



CIFS protocol — Common Internet File System is a platform-independent file sharing system. A network file system access protocol primarily used by Windows clients to communicate file access requests to Windows servers.
CIM — Common Information Model
CKD (Count-key Data) — A format for encoding data on hard disk drives; typically used in the mainframe environment.
CKPT — Check Point
CL — See Cluster
CLI — Command Line Interface
CLPR (Cache Logical PaRtition) — Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cluster — A collection of computers that are interconnected (typically at high speeds) for the purpose of improving reliability, availability, serviceability and/or performance (via load balancing). Often, clustered computers have access to a common pool of storage, and run special software to coordinate the component computers' activities.
CM (Cache Memory Module) — Cache Memory. Intermediate buffer between the channels and drives. It has a maximum of 64 GB (32 GB x 2 areas) of capacity. It is available and controlled as two areas of cache (cache A and cache B). It is fully battery-backed (48 hours).
CM PATH (Cache Memory Access Path) — Access path from the processors of CHA, DKA PCB to Cache Memory.
CMD — Command
CMG — Cache Memory Group
CNAME — Canonical NAME
CPM (Cache Partition Manager) — Allows for partitioning of the cache and assigns a partition to a LU; this enables tuning of the system’s performance.
CNS — Clustered Name Space
Concatenation — A logical joining of two series of data. Usually represented by the symbol “|”. In data communications, two or more data are often concatenated to provide a unique name or reference (e.g., S_ID | X_ID). Volume managers concatenate disk address spaces to present a single larger address space.
Connectivity technology — A program or device's ability to link with other programs and devices. Connectivity technology allows programs on a given computer to run routines or access objects on another remote computer
Controller — A device that controls the transfer of data from a computer to a peripheral device (including a storage system) and vice versa.
Controller-based virtualization — Driven by the physical controller at the hardware microcode level versus at the application software layer; integrates into the infrastructure to allow virtualization across heterogeneous storage and third-party products
Corporate governance — Organizational compliance with government-mandated regulations
COW — Copy On Write Snapshot
CPS — Cache Port Slave
CPU — Central Processor Unit
CRM — Customer Relationship Management
CruiseControl — Now called Hitachi Volume Migration software
CSV — Comma Separated Value
CSW (Cache Switch PCB) — The cache switch (CSW) connects the channel adapter or disk adapter to the cache. Each of them is connected to the cache by the Cache Memory Hierarchical Star Net (C-HSN) method. Each cluster is provided with the two CSWs, and each CSW can connect four caches. The CSW switches any of the cache paths to which the channel adapter or disk adapter is to be connected through arbitration.
CU (Control Unit) — The hexadecimal number to which 256 LDEVs may be assigned
CUDG — Control Unit DiaGnostics. Internal system tests.
CV — Custom Volume
CVS (Customizable Volume Size) — Software used to create custom volume sizes. Marketed under the names Virtual LVI (VLVI) and Virtual LUN (VLUN)
—D—
DAD (Device Address Domain) — Indicates a site of the same device number automation support function. If several hosts on the same site have the same device number system, they have the same name.



DACL — Discretionary ACL. The part of a security descriptor that stores access rights for users and groups.
DAMP (Disk Array Management Program) — Renamed to Storage Navigator Modular (SNM)
DAS — Direct Attached Storage
DASD — Direct Access Storage Device
Data Blocks — A fixed-size unit of data that is transferred together. For example, the X-modem protocol transfers blocks of 128 bytes. In general, the larger the block size, the faster the data transfer rate.
Data Integrity — Assurance that information will be protected from modification and corruption.
Data Lifecycle Management — An approach to information and storage management. The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition. Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, and whatever parameters the organization defines as critical to its operations.
Data Migration — The process of moving data from one storage device to another. In this context, data migration is the same as Hierarchical Storage Management (HSM).
Data Pool — A volume containing differential data only.
Data Striping — Disk array data mapping technique in which fixed-length sequences of virtual disk data addresses are mapped to sequences of member disk addresses in a regular rotating pattern.
Data Transfer Rate (DTR) — The speed at which data can be transferred. Measured in kilobytes per second for a CD-ROM drive, in bits per second for a modem, and in megabytes per second for a hard drive. Also often called simply data rate.
DCR (Dynamic Cache Residency) — See FlashAccess™
DE — Data Exchange Software
Device Management — Processes that configure and manage storage systems
DDL — Database Definition Language
DDNS — Dynamic DNS
DFS — Microsoft Distributed File System
DFW — DASD Fast Write
DIMM — Dual In-line Memory Module
Direct Attached Storage — Storage that is directly attached to the application or file server. No other device on the network can access the stored data
Director class switches — Larger switches often used as the core of large switched fabrics
Disaster Recovery Plan (DRP) — A plan that describes how an organization will deal with potential disasters. It may include the precautions taken to either maintain or quickly resume mission-critical functions. Sometimes also referred to as a Business Continuity Plan.
Disk Administrator — An administrative tool that displays the actual LU storage configuration
Disk Array — A linked group of one or more physical independent hard disk drives generally used to replace larger, single disk drive systems. The most common disk arrays are in daisy chain configuration or implement RAID (Redundant Array of Independent Disks) technology. A disk array may contain several disk drive trays, and is structured to improve speed and increase protection against loss of data. Disk arrays organize their data storage into Logical Units (LUs), which appear as linear block spaces to their clients. A small disk array, with a few disks, might support up to 8 LUs; a large one, with hundreds of disk drives, can support thousands.
DKA (Disk Adapter) — Also called an array control processor (ACP); it provides the control functions for data transfer between drives and cache. The DKA contains DRR (Data Recover and Reconstruct), a parity generator circuit. It supports four fibre channel paths and offers 32 KB of buffer for each fibre channel path.
DKC (Disk Controller Unit) — In a multi-frame configuration, the frame that contains the front end (control and memory components).
DKCMN — Disk Controller Monitor. Monitors temperature and power status throughout the machine
DKF (fibre disk adapter) — Another term for a DKA.
DKU (Disk Unit) — In a multi-frame configuration, a frame that contains hard disk units (HDUs).
DLIBs — Distribution Libraries
DLM — Data Lifecycle Management



DMA — Direct Memory Access
DM-LU (Differential Management Logical Unit) — DM-LU is used for saving management information of the copy functions in the cache
DMP — Disk Master Program
DNS — Domain Name System
Domain — A number of related storage array groups. An “ACP Domain” or “Array Domain” means all of the array groups controlled by the same pair of DKA boards, OR the HDDs managed by one ACP PAIR (also called BED)
DR — Disaster Recovery
DRR (Data Recover and Reconstruct) — Data Parity Generator chip on DKA
DRV — Dynamic Reallocation Volume
DSB — Dynamic Super Block
DSP — Disk Slave Program
DTA — Data adapter and path to cache-switches
DW — Duplex Write
DWL — Duplex Write Line
Dynamic Link Manager — HDS software that ensures that no single path becomes overworked while others remain underused. Dynamic Link Manager does this by providing automatic load balancing, path failover, and recovery capabilities in case of a path failure.
—E—
ECC — Error Checking & Correction
ECC.DDR SDRAM — Error Correction Code Double Data Rate Synchronous Dynamic RAM Memory
ECN — Engineering Change Notice
E-COPY — Serverless or LAN-free backup
ENC (Enclosure Controller) — The units that connect the controllers in the DF700 with the Fibre Channel disks. They also allow for online extension of a system by adding RKAs
ECM — Extended Control Memory
EOF — End Of Field
EPO — Emergency Power Off
EREP — Error REporting and Printing
ERP — Enterprise Resource Planning
ESA — Enterprise Systems Architecture
ESC — Error Source Code
ESCD — ESCON Director
ESCON (Enterprise Systems Connection) — An input/output (I/O) interface for mainframe computer connections to storage devices, developed by IBM.
Ethernet — A local area network (LAN) architecture that supports clients and servers and uses twisted pair cables for connectivity.
EVS — Enterprise Virtual Server
ExSA — Extended Serial Adapter
—F—
Fabric — The hardware that connects workstations and servers to storage devices in a SAN is referred to as a “fabric.” The SAN fabric enables any-server-to-any-storage-device connectivity through the use of Fibre Channel switching technology.
Failback — The restoration of a failed system’s share of a load to a replacement component. For example, when a failed controller in a redundant configuration is replaced, the devices that were originally controlled by the failed controller are usually failed back to the replacement controller to restore the I/O balance and to restore failure tolerance. Similarly, when a defective fan or power supply is replaced, its load, previously borne by a redundant component, can be failed back to the replacement part.
Failed over — A mode of operation for failure-tolerant systems in which a component has failed and its function has been assumed by a redundant component. A system that protects against single failures operating in failed-over mode is not failure tolerant, since failure of the redundant component may render the system unable to function. Some systems (e.g., clusters) are able to tolerate more than one failure; these remain failure tolerant until no redundant component is available to protect against further failures.
Failover — A backup operation that automatically switches to a standby database server or network if the primary system fails, or is temporarily shut down for servicing. Failover is an important fault tolerance function of



mission-critical systems that rely on constant accessibility. Failover automatically and transparently to the user redirects requests from the failed or down system to the backup system that mimics the operations of the primary system.
Failure tolerance — The ability of a system to continue to perform its function, or to perform it at a reduced performance level, when one or more of its components has failed. Failure tolerance in disk subsystems is often achieved by including redundant instances of components whose failure would make the system inoperable, coupled with facilities that allow the redundant components to assume the function of failed ones.
FAIS — Fabric Application Interface Standard
FAL — File Access Library
FAT — File Allocation Table
Fault Tolerant — Describes a computer system or component designed so that, in the event of a component failure, a backup component or procedure can immediately take its place with no loss of service. Fault tolerance can be provided with software, embedded in hardware, or provided by some hybrid combination.
FBA — Fixed-block Architecture. Physical disk sector mapping.
FBA/CKD Conversion — The process of converting open-system data in FBA format to mainframe data in CKD format.
FBUS — Fast I/O Bus
FC — Fibre Channel, a technology for transmitting data between computer devices; a set of standards for a serial I/O bus capable of transferring data between two ports. Also Field-Change (microcode update).
FC-0 — Lowest layer of the Fibre Channel transport; it represents the physical media.
FC-1 — This layer contains the 8b/10b encoding scheme.
FC-2 — This layer handles framing and protocol, frame format, sequence/exchange management, and ordered set usage.
FC-3 — This layer contains common services used by multiple N_Ports in a node.
FC-4 — This layer handles standards and profiles for mapping upper-level protocols like SCSI and IP onto the Fibre Channel Protocol.
FCA — Fibre Adapter. Fibre interface card. Controls transmission of fibre packets.
FC-AL — Fibre Channel Arbitrated Loop. A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. FC-AL was designed for new mass storage devices and other peripheral devices that require very high bandwidth. Using optical fiber to connect devices, FC-AL supports full-duplex data transfer rates of 100MBps. FC-AL is compatible with SCSI for high-performance storage systems.
FC-P2P — Fibre Channel Point-to-Point
FC-SW — Fibre Channel Switched
FCC — Federal Communications Commission
FCIP — Fibre Channel over IP, a network storage technology that combines the features of Fibre Channel and the Internet Protocol (IP) to connect distributed SANs over large distances. FCIP is considered a tunneling protocol, as it makes a transparent point-to-point connection between geographically separated SANs over IP networks. FCIP relies on TCP/IP services to establish connectivity between remote SANs over LANs, MANs, or WANs. An advantage of FCIP is that it can use TCP/IP as the transport while keeping Fibre Channel fabric services intact.
FCP — Fibre Channel Protocol
FC RKAJ (Fibre Channel Rack Additional) — Acronym referring to an additional rack unit(s) that houses additional hard drives exceeding the capacity of the core RK unit of the Thunder 9500V/9200 subsystem.
FCU — File Conversion Utility
FD — Floppy Disk
FDR — Fast Dump/Restore
FE — Field Engineer
FED — Channel Front End Directors
Fibre Channel — A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. The most prominent Fibre Channel standard is Fibre Channel Arbitrated Loop (FC-AL).
FICON (Fiber Connectivity) — A high-speed input/output (I/O) interface for mainframe computer connections to storage devices. As part of IBM's S/390 server, FICON channels increase I/O capacity through the combination of a new architecture and faster



physical link rates, to make them up to eight times as efficient as ESCON (Enterprise System Connection), IBM's previous fiber optic channel standard.
Flash ACC — Flash access. Placing an entire LUN into cache
FlashAccess — HDS software used to maintain certain types of data in cache to ensure quicker access to that data.
FLGFAN — Front Logic Box Fan Assembly
FLOGIC Box — Front Logic Box
FM (Flash Memory) — Each microprocessor has FM. FM is non-volatile memory that contains microcode.
FOP — Fibre Optic Processor or fibre open
FPC — Failure Parts Code or Fibre Channel Protocol Chip
FPGA — Field Programmable Gate Array
Frames — An ordered vector of words that is the basic unit of data transmission in a Fibre Channel network.
Front-end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
FS — File System
FSA — File System Module-A
FSB — File System Module-B
FSM — File System Module
FSW (Fibre Channel Interface Switch PCB) — A board that provides the physical interface (cable connectors) between the ACP ports and the disks housed in a given disk drive.
FTP (File Transfer Protocol) — A client-server protocol that allows a user on one computer to transfer files to and from another computer over a TCP/IP network
FWD — Fast Write Differential
—G—
GARD — General Available Restricted Distribution
GB — Gigabyte
GBIC — Gigabit Interface Converter
GID — Group Identifier within the UNIX security model
GigE — Giga Bit Ethernet
GLM — Gigabyte Link Module
Global Cache — Cache memory used on demand by multiple applications; use changes dynamically, as required for READ performance between hosts/applications/LUs.
Graph-Track™ — HDS software used to monitor the performance of the Hitachi storage subsystems. Graph-Track™ provides graphical displays, which give information on device usage and system performance.
GUI — Graphical User Interface
—H—
H1F — Essentially the Floor Mounted disk rack (also called Desk Side) equivalent of the RK. (See also: RK, RKA, and H2F.)
H2F — Essentially the Floor Mounted disk rack (also called Desk Side) add-on equivalent similar to the RKA. There is a limitation of only one H2F that can be added to the core RK Floor Mounted unit. (See also: RK, RKA, and H1F.)
HLU (Host Logical Unit) — A LU that the Operating System and HDLM recognize. Each HLU includes the devices that comprise the storage LU
H-LUN — Host Logical Unit Number (See LUN)
HA — High Availability
HBA — Host Bus Adapter. An HBA is an I/O adapter that sits between the host computer's bus and the Fibre Channel loop and manages the transfer of information between the two channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
HDD (Hard Disk Drive) — A spindle of hard disks that make up a hard drive, which is a unit of physical storage within a subsystem.
HD — Hard Disk
HDev (Hidden devices) — Hitachi Tuning Manager Main Console may not display some drive letters in its resource tree, and information such as performance and capacity is not available for such invisible drives. This problem occurs if there is a physical drive with a lower PhysicalDrive number assigned that is in “damaged (SCSI Inquiry data



cannot be obtained)” or “hidden by HDLM” status.
HDS — Hitachi Data Systems
HDU (Hard Disk Unit) ― A number of hard drives (HDDs) grouped together within a subsystem.
HDLM — Hitachi Dynamic Link Manager software
Head — See read/write head
Heterogeneous — The characteristic of containing dissimilar elements. A common use of this word in information technology is to describe a product as able to contain or be part of a "heterogeneous network," consisting of different manufacturers' products that can interoperate. Heterogeneous networks are made possible by standards-conforming hardware and software interfaces used in common by different products, thus allowing them to communicate with each other. The Internet itself is an example of a heterogeneous network.
HiRDB — Hitachi Relational Database
HIS — High Speed Interconnect
HiStar — Multiple point-to-point data paths to cache
Hi-Track System — Automatic fault reporting system
HIHSM — Hitachi Internal Hierarchy Storage Management
HMDE — Hitachi Multiplatform Data Exchange
HMRCF — Hitachi Multiple Raid Coupling Feature
HMRS — Hitachi Multiplatform Resource Sharing
HODM — Hitachi Online Data Migration
Homogeneous — Of the same or similar kind
HOMRCF — Hitachi Open Multiple Raid Coupling Feature; ShadowImage is the marketing name for HOMRCF
HORC — Hitachi Open Remote Copy ― See TrueCopy
HORCM — Hitachi Open Raid Configuration Manager
Host — Also called a server. A host is basically a central computer that processes end-user applications or requests.
Host LU — See HLU
Host Storage Domains — Allow host pooling at the LUN level; the priority access feature lets administrators set service levels for applications.
HP — Hewlett-Packard Company
HPC — High Performance Computing
HRC — Hitachi Remote Copy ― See TrueCopy
HSG — Host Security Group
HSM — Hierarchical Storage Management
HSSDC — High Speed Serial Data Connector
HTTP — Hyper Text Transfer Protocol
HTTPS — Hyper Text Transfer Protocol Secure
Hub — A common connection point for devices in a network. Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports. When a packet arrives at one port, it is copied to the other ports so that all segments of the LAN can see all packets. A switching hub actually reads the destination address of each packet and then forwards the packet to the correct port.
HXRC — Hitachi Extended Remote Copy
Hub — Device to which nodes on a multi-point bus or loop are physically connected
-back to top-

—I—
IBR — Incremental Block-level Replication
IBR — Intelligent Block Replication
ID — Identifier
IDR — Incremental Data Replication
iFCP (Internet Fibre Channel Protocol) — iFCP allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company's TCP/IP infrastructure.
In-band virtualization — Refers to the location of the storage network path, between the application host servers and the storage systems. Provides both control and data along the same connection path. Also called symmetric virtualization.
Interface — The physical and logical arrangement supporting the attachment of any device to a connector or to another device.
Internal bus — Another name for an internal data bus. Also, an expansion bus is often referred to as an internal bus.


Internal data bus — A bus that operates only within the internal circuitry of the CPU, communicating among the internal caches of memory that are part of the CPU chip's design. This bus is typically rather quick and is independent of the rest of the computer's operations.
IID (Initiator ID) — Used to identify whether an LU is a NAS System LU or a User LU. If the IID is 0, the LU is a NAS System LU; if it is 1, the LU is a User LU.
IIS — Internet Information Server
I/O (Input/Output) — The term I/O (pronounced "eye-oh") is used to describe any program, operation or device that transfers data to or from a computer and to or from a peripheral device.
IML — Initial Microprogram Load
IP — Internet Protocol
IPL — Initial Program Load
IPSEC — IP Security
iSCSI (Internet SCSI) — Pronounced "eye skuzzy." An IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks. iSCSI supports a Gigabit Ethernet interface at the physical layer, which allows systems supporting iSCSI interfaces to connect directly to standard Gigabit Ethernet switches and/or IP routers. When an operating system receives a request, it generates the SCSI command and then sends an IP packet over an Ethernet connection. At the receiving end, the SCSI commands are separated from the request, and the SCSI commands and data are sent to the SCSI controller and then to the SCSI storage device. iSCSI also returns a response to the request using the same protocol. iSCSI is important to SAN technology because it enables a SAN to be deployed in a LAN, WAN or MAN.
iSER — iSCSI Extensions for RDMA
ISL — Inter-Switch Link
iSNS — Internet Storage Name Service
ISPF — Interactive System Productivity Facility
ISC — Initial shipping condition
ISOE — iSCSI Offload Engine
ISP — Internet Service Provider
-back to top-

—J—
Java (and Java applications) — Java is a widely accepted, open systems programming language. Hitachi's enterprise software products are all accessed using Java applications. This enables storage administrators to access the Hitachi enterprise software products from any PC or workstation that runs a supported thin-client internet browser application and that has TCP/IP network access to the computer on which the software product runs.
Java VM — Java Virtual Machine
JCL — Job Control Language
JBOD — Just a Bunch of Disks
JRE — Java Runtime Environment
JMP — Jumper; option setting method
-back to top-

—K—
kVA — Kilovolt-Ampere
kW — Kilowatt
-back to top-

—L—
LACP — Link Aggregation Control Protocol
LAG — Link Aggregation Group
LAN — Local Area Network
LBA (Logical Block Address) — A 28-bit value that maps to a specific cylinder-head-sector address on the disk.
LC (Lucent connector) — Fibre Channel connector that is smaller than a simplex connector (SC)
LCDG — Link Processor Control Diagnostics
LCM — Link Control Module
LCP (Link Control Processor) — Controls the optical links. The LCP is located in the LCM.
LCU — Logical Control Unit
LD — Logical Device
LDAP — Lightweight Directory Access Protocol
LDEV (Logical Device) ― A set of physical disk partitions (all or portions of one or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage; also called a volume. An LDEV has a specific and unique address
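The LBA entry above describes a linear mapping between a block number and a cylinder-head-sector (CHS) address. A minimal sketch of that arithmetic in Python; the geometry constants below are illustrative examples only, not values from any Hitachi product:

```python
# LBA <-> CHS conversion, as described in the LBA glossary entry.
# Geometry values are hypothetical examples for illustration.
HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63

def chs_to_lba(cylinder, head, sector):
    """Sectors are 1-based on disk; the LBA is 0-based and linear."""
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

def lba_to_chs(lba):
    """Invert the mapping back to a (cylinder, head, sector) triple."""
    cylinder, rem = divmod(lba, HEADS_PER_CYLINDER * SECTORS_PER_TRACK)
    head, sector0 = divmod(rem, SECTORS_PER_TRACK)
    return cylinder, head, sector0 + 1
```

The two functions are inverses of each other for any address within the chosen geometry.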


within a subsystem. LDEVs become LUNs to an open-systems host.
LDKC — Logical Disk Controller
LDM — Logical Disk Manager
LED — Light Emitting Diode
LM — Local Memory
LMODs — Load Modules
LNKLST — Link List
Load balancing — Distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. If one server starts to be swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
LOC — Locations section of the Maintenance Manual
Logical DKC (LDKC) — An internal architecture extension to the Control Unit addressing scheme that allows more LDEVs to be identified within one Hitachi enterprise storage system. The LDKC is supported only on Universal Storage Platform V/VM class storage systems. As of March 2008, only one LDKC is supported: LDKC 00. Refer to product documentation, as Hitachi has announced its intent to expand this capacity in the future.
LPAR — Logical Partition
LRU — Least Recently Used
LU — Logical Unit; mapping number of an LDEV
LUN (Logical Unit Number) ― One or more LDEVs. Used only for open systems. LVI (logical volume image) identifies a similar concept in the mainframe environment.
LUN Manager — HDS software used to map Logical Units (LUNs) to subsystem ports.
LUSE (Logical Unit Size Expansion) ― Feature used to create virtual LUs that are up to 36 times larger than the standard OPEN-x LUs.
LVDS — Low Voltage Differential Signal
LVM — Logical Volume Manager
-back to top-

—M—
MAC — Media Access Control (a MAC address is a unique identifier attached to most forms of networking equipment)
MIB — Management Information Base
MMC — Microsoft Management Console
MPIO — Multipath I/O
Mapping — Conversion between two data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses of the virtual disks presented to operating environments by control software.
Mb — Megabits
MB — Megabytes
MBUS — Multi-CPU Bus
MC — Multi Cabinet
MCU — Main Disk Control Unit; the local CU of a remote copy pair
Metadata — In database management systems, data files are the files that store the database information, whereas other files, such as index files and data dictionaries, store administrative information, known as metadata.
MFC — Main Failure Code
MIB — Management Information Base; a database of objects that can be monitored by a network management system. Both SNMP and RMON use standardized MIB formats that allow any SNMP and RMON tools to monitor any device defined by a MIB.
Microcode — The lowest-level instructions that directly control a microprocessor. A single machine-language instruction typically translates into several microcode instructions.
Microprogram — See Microcode
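The LRU (Least Recently Used) entry above names the replacement policy that caches commonly apply: when the cache is full, the entry that has gone longest without being referenced is evicted first. A small hypothetical sketch in Python, not a description of any Hitachi cache implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny illustration of Least Recently Used eviction (capacity in entries)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # a hit makes the entry most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used entry
```

For example, with capacity 2, inserting a and b, touching a, then inserting c evicts b, because b is the least recently used entry.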


Mirror Cache OFF — Increases cache efficiency over cache data redundancy.
MM — Maintenance Manual
MPA — Microprocessor adapter
MP — Microprocessor
MPU — Microprocessor Unit
Mode — The state or setting of a program or device. The term mode implies a choice: you can change the setting and put the system in a different mode.
MSCS — Microsoft Cluster Server
MS/SG — Microsoft Service Guard
MTS — Multi-Tiered Storage
MVS — Multiple Virtual Storage
-back to top-

—N—
NAS (Network Attached Storage) ― A disk array connected to a controller that gives access to a LAN transport. It handles data at the file level.
NAT — Network Address Translation
NDMP (Network Data Management Protocol) — A protocol meant to transport data between NAS devices
NetBIOS — Network Basic Input/Output System
Network — A computer system that allows sharing of resources, such as files and peripheral hardware devices
NFS protocol — Network File System is a protocol that allows a computer to access files over a network as easily as if they were on its local disks.
NIM — Network Interface Module
NIS — Network Information Service (YP)
Node ― An addressable entity connected to an I/O bus or network. Used primarily to refer to computers, storage devices, and storage subsystems. The component of a node that connects to the bus or network is a port.
Node name ― A Name_Identifier associated with a node
NTP — Network Time Protocol
NVS — Non-Volatile Storage
-back to top-

—O—
OEM — Original Equipment Manufacturer
OFC — Open Fibre Control
OID — Object Identifier
OLTP — On-Line Transaction Processing
ONODE — Object node
OPEX (Operational Expenditure) — An operating expense, operating expenditure, operational expense, or operational expenditure (OPEX) is an on-going cost for running a product, business, or system. Its counterpart is a capital expenditure (CAPEX).
Out-of-band virtualization — Refers to systems where the controller is located outside of the SAN data path. Separates control and data on different connection paths. Also called asymmetric virtualization.
ORM — Online Read Margin
OS — Operating System
-back to top-

—P—
Parity — A technique of checking whether data has been lost or written over when it is moved from one place in storage to another or when it is transmitted between computers.
Parity Group — Also called an array group; a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separates workloads in a "storage consolidated" system by dividing cache into individually managed multiple partitions. Each partition can then be customized to match the I/O characteristics of its assigned LUs.
PAT — Port Address Translation
PATA — Parallel ATA
Path — Also referred to as a transmission channel, the path between two nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway, or a sub-channel in a carrier frequency.
Path failover — See Failover
PAV — Parallel Access Volumes
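The Parity entry above describes checking whether data has been lost or overwritten. In RAID subsystems this check is commonly built from byte-wise XOR, which also lets a missing block be reconstructed from the survivors. An illustrative sketch under that common XOR scheme, not a description of any specific Hitachi implementation:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three made-up data blocks and their parity block.
data = [b"\x01\x02", b"\x0f\x00", b"\xff\x10"]
parity = xor_blocks(data)

# If one data block is lost, XOR of the surviving blocks and the parity
# block restores it exactly.
recovered = xor_blocks([data[0], data[2], parity])
```

XOR of all data blocks together with the parity block is always zero, which is what makes the consistency check possible.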


PAWS — Protect Against Wrapped Sequences
PBC — Port By-pass Circuit
PCB — Printed Circuit Board
PCI — Power Control Interface
PCI CON — Power Control Interface Connector Board
Performance — Speed of access or the delivery of information
PD — Product Detail
PDEV — Physical Device
PDM — Primary Data Migrator
PDM — Policy-based Data Migration
PGR — Persistent Group Reserve
PK — Package (see PCB)
PI — Product Interval
PIR — Performance Information Report
PiT — Point-in-Time
PL — Platter (Motherboard/Backplane); the circular disk on which the magnetic data is stored.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
P-P — Point to Point; also P2P
Priority Mode — Also PRIO mode; one of the modes of FlashAccess™ in which the FlashAccess™ extents hold read and write data for specific extents on volumes (see Bind Mode).
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
Protocol — A convention or standard that enables the communication between two computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the two. At the lowest level, a protocol defines the behavior of a hardware connection.
PS — Power Supply
PSA — Partition Storage Administrator
PSSC — Perl SiliconServer Control
PSU — Power Supply Unit
PTR — Pointer
P-VOL — Primary Volume
-back to top-

—Q—
QD — Quorum Device
QoS (Quality of Service) — In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
-back to top-

—R—
R/W — Read/Write
RAID (Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks) ― A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, improves fault tolerance either through mirroring or parity checking, and is a component of a customer's SLA.
RAID-0 — Striped array with no parity
RAID-1 — Mirrored array and duplexing
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers
RAID-5 — Striped array with typically rotating parity, optimized for short, multi-threaded transfers
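The RAID-3 through RAID-5 entries above distinguish non-rotating from rotating parity. A hypothetical sketch of a RAID-5 style layout, showing how the parity block can move to a different member disk on each stripe (the rotation pattern and disk count are illustrative, and real arrays vary):

```python
def raid5_layout(num_disks, num_stripes):
    """Map each stripe to a row of 'D' (data) and 'P' (parity) cells."""
    layout = []
    for stripe in range(num_stripes):
        # Parity rotates across disks stripe by stripe (one common pattern).
        parity_disk = (num_disks - 1 - stripe) % num_disks
        row = ["P" if disk == parity_disk else "D" for disk in range(num_disks)]
        layout.append(row)
    return layout
```

With 4 disks, parity lands on a different disk in each of the first 4 stripes, so no single disk becomes the parity bottleneck that a fixed parity disk (RAID-3/RAID-4) can be.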


RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating two physical disk failures
RAM — Random Access Memory
RAM DISK — A LUN held entirely in the cache area.
Read/Write Head — Reads and writes data to the platters; typically there is one head per platter side, and each head is attached to a single actuator shaft.
Redundant — Describes computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links, that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Reliability — Level of assurance that data will not be lost or degraded over time.
Resource Manager — The Hitachi Resource Manager™ utility package is a software suite that rolls the following four pieces of software into one package:
• Hitachi Graph-Track™ performance monitor feature
• Virtual Logical Volume Image (VLVI) Manager (optimizes capacity utilization)
• Hitachi Cache Residency Manager feature (formerly FlashAccess) (uses cache to speed data reads and writes)
• LUN Manager (reconfiguration of LUNs, or logical unit numbers)
RCHA — RAID Channel Adapter
RC — Reference Code or Remote Control
RCP — Remote Control Processor
RCU — Remote Disk Control Unit
RDMA — Remote Direct Memory Access
Redundancy — Backing up a component to help ensure high availability.
Reliability — An attribute of any computer component (software, hardware, or a network) that consistently performs according to its specifications.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain
RISC — Reduced Instruction Set Computer
RK (Rack) — Acronym referring to the main "Rack" unit, which houses the core operational hardware components of the Thunder 9500V/9200 subsystem. (See also: RKA, H1F, and H2F)
RKA (Rack Additional) — Acronym referring to "Rack Additional," namely additional rack unit(s) that house additional hard drives exceeding the capacity of the core RK unit of the Thunder 9500V/9200 subsystem. (See also: RK, H1F, and H2F)
RKAJAT — Rack Additional SATA disk tray
RLGFAN — Rear Logic Box Fan Assembly
RLOGIC BOX — Rear Logic Box
RMI (Remote Method Invocation) — A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as an RPC (remote procedure call), but with the ability to pass one or more objects along with the request.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment)
ROI — Return on Investment
ROM — Read-Only Memory
Round robin mode — A load balancing technique in which the balancing function is placed in the DNS server instead of a strictly dedicated machine, as other load balancing techniques do. Round robin works on a rotating basis: one server IP address is handed out and then moves to the back of the list; the next server IP address is handed out and then moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion. Round robin DNS is usually used for balancing the load of geographically distributed Web servers.
Router — A computer networking device that forwards data packets toward their destinations through a process known as routing.
RPO (Recovery Point Objective) — The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly
RS CON — RS232C/RS422 Interface Connector
RSD — RAID Storage Division
R-SIM — Remote Service Information Message
RTO (Recovery Time Objective) — The length of time that can be tolerated between a disaster and the recovery of data.
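The round robin entry above describes handing out server addresses in rotating, looping order. The rotation itself is simple to sketch; the IP addresses below are made-up examples from the TEST-NET-1 documentation range:

```python
from itertools import cycle

# Hypothetical server pool for illustration only.
servers = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
rotation = cycle(servers)

def next_server():
    """Hand out the next address; the pool rotates in a looping fashion."""
    return next(rotation)

# Six requests walk the pool twice, in order.
first_six = [next_server() for _ in range(6)]
```

Each address is handed out and conceptually moves to the back of the list, so the load spreads evenly across the pool.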


-back to top-

—S—
SA — Storage Administrator
SAA (Share Access Authentication) — The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SACK — Sequential Acknowledge
SACL (System ACL) — The part of a security descriptor that stores system auditing information.
SAN (Storage Area Network) ― A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SANtinel — HDS software that provides LUN security. SANtinel protects data from unauthorized access in SAN environments. It restricts server access by implementing boundaries around predefined zones and is used to map hosts in a host group to the appropriate LUNs.
SARD — System Assurance Registration Document
SAS — SAN Attached Storage; storage elements that connect directly to a storage area network and provide data access services to computer systems.
SAS (Serial Attached SCSI) — Disk drive configurations for Hitachi Simple Modular Storage 100 systems.
SATA (Serial ATA) — Serial Advanced Technology Attachment is a standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike IDE (Integrated Drive Electronics) hard drives, which use parallel signaling.
SC (simplex connector) — Fibre Channel connector that is larger than a Lucent connector (LC).
SC — Single Cabinet
SCM — Supply Chain Management
SCP — Secure Copy
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15-25 meters.
Sector — A subdivision of a track of a magnetic disk that stores a fixed amount of data.
Selectable segment size — Can be set per partition.
Selectable stripe size — Increases performance by customizing the disk access size.
Serial transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests; also called a host.
Service-level agreement (SLA) — A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISPs) provide their customers with an SLA. More recently, IT departments in major enterprises have adopted the idea of writing a service-level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers.
Some metrics that SLAs may specify include:
• What percentage of the time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
Service-level objective (SLO) — Individual performance metrics are called service-level objectives (SLOs). Although there is no hard and fast rule governing how many SLOs may be included in each SLA, it only makes sense to measure what matters. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services


SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems in their own right, and they occasionally require a firmware upgrade.
SFP (Small Form-Factor Pluggable) module connector — A specification for a generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, and offer high speed and physical compactness. They are hot-swappable.
ShadowImage® — HDS software used to duplicate large amounts of data within a subsystem without affecting the service and performance levels or timing out. ShadowImage replicates data at high speed and reduces backup time.
SHSN — Shared memory Hierarchical Star Network
SI — Hitachi ShadowImage® Replication software
SIM RC — Service (or system) Information Message Reference Code
SID — Security Identifier; user or group identifier within the Microsoft Windows security model
SIMM — Single In-line Memory Module
SIM — Storage Interface Module
SIM — Service Information Message; a message reporting an error that contains fix guidance information
SIz — Hitachi ShadowImage® Replication software
SLA — Service Level Agreement
SLPR (Storage administrator Logical PaRtition) — Storage can be divided among various users to reduce conflicts with usage.
SM (Shared Memory Module) ― Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like cache, shared memory is controlled as two areas of memory and is fully non-volatile (sustained for approximately 7 days).
SM PATH (Shared Memory Access Path) ― Access path from the processors of the CHA and DKA PCBs to shared memory.
SMB/CIFS — Server Message Block Protocol / Common Internet File System
SMC — Shared Memory Control
SM — Shared Memory
SMI-S — Storage Management Initiative Specification
SMP/E (System Modification Program/Extended) — An IBM licensed program used to install software and software changes on z/OS systems.
SMS — Hitachi Simple Modular Storage
SMTP — Simple Mail Transfer Protocol
SMU — System Management Unit
Snapshot Image — A logical duplicated volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association; an association of producers and consumers of storage networking products whose goal is to further storage networking technology and applications.
SNMP (Simple Network Management Protocol) — A TCP/IP protocol that was designed for management of networks over TCP/IP, using agents and stations.
SOAP (Simple Object Access Protocol) — A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component.
SPAN — A span is a section between two intermediate supports. See Storage pooling.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller
SpecSFS — Standard Performance Evaluation Corporation Shared File System
SSB — Sense Byte
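The Socket entry above notes that a program exchanges messages simply by reading and writing a software object. A minimal sketch using Python's standard socket module; a connected socket pair stands in for the two ends of a network connection, so no network setup is needed:

```python
import socket

# socketpair() returns two already-connected sockets. Each end is written
# and read exactly as the glossary entry describes.
left, right = socket.socketpair()
left.sendall(b"hello over a socket")
message = right.recv(1024)
left.close()
right.close()
```

The application never touches the transport itself; the operating system moves the bytes between the two endpoints.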


SSC — SiliconServer Control
SSH — Secure Shell
SSID — Subsystem Identifier
SSL — Secure Sockets Layer
SSVP — Sub Service Processor; interfaces the SVP to the DKC
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner, or the root user
STR — Storage and Retrieval Systems
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures where the consolidation of many appears as a single view.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware and/or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption
S-VOL — Secondary Volume
SVP (Service Processor) ― A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance and administration of the subsystem
Symmetric virtualization — See In-band virtualization.
Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, i.e., a successor operation does not occur until its predecessor is complete.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
Software — Switch
-back to top-

—T—
Tachyon ― A chip developed by HP and used in various devices. This chip has FC-0 through FC-2 on one chip.
TCA ― TrueCopy Asynchronous
TCO — Total Cost of Ownership
TCP/IP — Transmission Control Protocol over Internet Protocol
TCP/UDP — User Datagram Protocol is one of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
TCS — TrueCopy Synchronous
TCz — Hitachi TrueCopy® Remote Replication software
TDCONV (Trace Dump CONVerter) ― A software program that is used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows for further, more in-depth investigation of the data and failure analysis.
TGTLIBs — Target Libraries
Target — The system component that receives a SCSI I/O command; an open device that operates at the request of the initiator
THF — Front Thermostat
Thin Provisioning — Thin Provisioning allows space to be easily allocated to servers, on a just-enough and just-in-time basis.
Throughput — The amount of data transferred from one place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in kbps, Mbps and Gbps.
THR — Rear Thermostat
TID — Target ID
Tiered storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as their availability requirements change
TISC — The Hitachi Data Systems internal Technical Information Service Centre, from which microcode, user guides, ECNs, etc. can be downloaded.
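The TCP/UDP entry above says UDP programs exchange short messages called datagrams. A self-contained sketch using two UDP sockets on the loopback interface; the port is chosen by the operating system, and nothing here reflects a real deployment:

```python
import socket

# Receiver binds to an OS-assigned loopback port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender transmits one datagram. UDP preserves message boundaries,
# so recvfrom() returns exactly one whole datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram payload", addr)
payload, _ = receiver.recvfrom(2048)
sender.close()
receiver.close()
```

Unlike the connection-oriented TCP sockets described elsewhere in this glossary, no connection is established; each datagram is addressed and sent individually.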


TLS — Tape Library System
TLS — Transport Layer Security
TMP — Temporary
TOC — Table Of Contents
TOD — Time Of Day
TOE — TCP Offload Engine
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPF — Transaction Processing Facility
Transfer Rate — See Data Transfer Rate.
Track — Circular segment of a hard disk or other storage media.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the Operating System performs some action, and then returns control to the program.
TRC — Technical Resource Center
TrueCopy — HDS software that replicates data between subsystems. These systems can be located within a data center or at geographically separated data centers. The 9900V adds the capability of using TrueCopy to make copies in two different locations simultaneously.
TSC — Technical Support Center
TSO/E — Time Sharing Option/Extended
-back to top-

—U—
UFA — UNIX File Attributes
UID — User Identifier
UID — User Identifier within the UNIX security model
UPS — Uninterruptible Power Supply — A power supply that includes a battery to maintain power in the event of a power outage.
URz — Hitachi Universal Replicator software
USP — Universal Storage Platform™
USP V — Universal Storage Platform™ V
USP VM — Universal Storage Platform™ VM
-back to top-

—V—
VCS — Veritas Cluster System
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language
VHSIC — Very-High-Speed Integrated Circuit
VI — Virtual Interface, a research prototype that is undergoing active development, and the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (copying data, computing checksums, etc.) and system call overheads while still preventing one process from accidentally or maliciously tampering with or reading data being used by another.
VirtLUN — VLL. Customized volume; size chosen by user.
Virtualization — The amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup, and recovery easier and faster. Storage virtualization is usually implemented via software applications.
VLL — Virtual Logical Volume Image/Logical Unit Number
VLVI — Virtual Logic Volume Image, marketing name for CVS (custom volume size)
VOLID — Volume ID
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than one volume or for a volume to span more than one disk.
VTOC — Volume Table of Contents
V-VOL — Virtual volume
-back to top-

—W—
WAN — Wide Area Network
WDIR — Working Directory
WDIR — Directory Name Object
WDS — Working Data Set
WFILE — Working File
WFILE — File Object
WFS — Working File Set
WINS — Windows Internet Naming Service
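The Virtualization entry describes presenting multiple storage devices as a single unit. As a minimal sketch of that idea only (the class and device layout below are invented for illustration, not an HDS interface), one logical block range can be mapped onto several backing devices:

```python
class VirtualVolume:
    """Present several fixed-size backing devices as one logical block range."""

    def __init__(self, device_sizes):
        # device_sizes: number of blocks provided by each backing device, in order.
        self.device_sizes = list(device_sizes)

    def locate(self, logical_block):
        """Map a logical block number to (device_index, block_within_device)."""
        offset = logical_block
        for idx, size in enumerate(self.device_sizes):
            if offset < size:
                return idx, offset
            offset -= size
        raise IndexError("logical block beyond end of virtual volume")

vol = VirtualVolume([100, 100, 50])   # three devices, 250 logical blocks in all
print(vol.locate(0))    # (0, 0)  — first block of the first device
print(vol.locate(150))  # (1, 50) — falls in the second device
print(vol.locate(249))  # (2, 49) — last block, on the third device
```

Production virtualization layers add striping, caching, and failure handling; the point here is only the single-address-space illusion the glossary entry describes.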
WMS — Hitachi Workgroup Modular Storage system
WTREE — Working Tree
WTREE — Directory Tree Object
WWN (World Wide Name) ― A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix). The WWN is essential for defining the Hitachi Volume Security software (formerly SANtinel) parameters because it determines whether the open-system host is to be allowed or denied access to a specified LU or a group of LUs.
WWNN — World Wide Node Name ― A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN (World Wide Port Name) ― A globally unique 64-bit identifier assigned to each Fibre Channel port. Fibre Channel ports’ WWPNs are permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.
-back to top-

—X—
XAUI — "X" = 10, AUI = Attachment Unit Interface
XFI — Standard interface for connecting a 10 Gig Ethernet MAC device to an XFP interface
XFP — "X" = 10 Gigabit Small Form Factor Pluggable
XRC — Extended Remote Copy
-back to top-

—Y—
-back to top-

—Z—
Zone — A collection of Fibre Channel ports that are permitted to communicate with each other via the fabric.
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
-back to top-
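The WWN entries above describe a 64-bit identifier built from a 4-bit prefix, a 12-bit extension, and the IEEE 48-bit address. As an illustration of that field layout only (the sample WWN value is invented), the three fields can be pulled out of a 16-hex-digit WWN with bit shifts:

```python
def split_wwn(wwn_hex: str):
    """Split a 64-bit WWN into its 4-bit prefix (NAA), 12-bit extension,
    and 48-bit IEEE address, using the field widths given in the glossary."""
    value = int(wwn_hex.replace(":", ""), 16)
    prefix = value >> 60                      # top 4 bits (NAA)
    extension = (value >> 48) & 0xFFF         # next 12 bits
    ieee_address = value & 0xFFFF_FFFF_FFFF   # low 48 bits
    return prefix, extension, ieee_address

# Hypothetical IEEE Extended (NAA = 2) WWN:
naa, ext, addr = split_wwn("20:00:00:0c:29:1f:2e:3d")
print(naa, hex(ext), hex(addr))  # 2 0x0 0xc291f2e3d
```

Other NAA values select other layouts (e.g., IEEE Registered), which is the distinction the WWPN entry's mention of naming authorities refers to.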
Evaluating this Course
1. Log in to the Hitachi Data Systems Learning Center page at
https://learningcenter.hds.com
2. Select the Learning tab in the upper-left corner of the Hitachi Data Systems Learning Center page.
3. On the left panel of the Learning page, click Learning History. The Learning
History page appears.
4. From the Title column of the Learning History table, select the title of the course
in which you have enrolled. The Learning Details page for the enrolled course
appears.
5. Select the More Details tab.
6. Under Attachments, click the Class Eval link. The Class Evaluation form opens.
Complete the form and submit.
You might also like