THE LEADING SOURCE FOR ENTERPRISE STORAGE PROFESSIONALS VOLUME 11, NO. 7 JULY 2007
SPECIAL REPORTS
In the News
EMC supersizes VTLs, extends de-dupe p. 8
Hitachi ups the ante in content archiving p. 8
Brocade enters HBA market via LSI p. 10
Start-up offers IP storage clusters p. 18
Features
The real state of SRM, part 2 p. 33
Storage resource management (SRM) challenges include multi-vendor support, homegrown vs. vendor tools, and the trend toward SRM suites.
Introducing data warehouse appliances p. 35
Workload-optimized storage appliances are tuned to specific applications and I/O workloads.
Brocade upgrades SAN, FAN products
BY KEVIN KOMIEGA
Brocade is making good on its post-McData acquisition product road map with the launch of nearly a dozen enhancements to its SAN and file area network (FAN) offerings, all while tightening interoperability between the companies' respective products and attempting to shed its reputation as a block-level-only storage company.
Brocade beefed up a number of its SAN hardware and software products with new management, virtualization, interoperability, and connectivity enhancements. For example, the company announced the addition of 10Gbps Fibre Channel connectivity to its 48000 director in the form of the new FC10-6 blade, which is aimed at connecting systems between remote sites for high-performance business continuity and disaster-recovery applications.
The FR4-18i router blade and model 7500 routing platform have also been tweaked to
CONTINUED ON PAGE 10
Vendors, users grapple with power concerns
BY KEVIN KOMIEGA
Storage systems are major offenders when it comes to power consumption in the data center, but in the quest for energy-efficient technology the focus, at least so far, has been primarily on microprocessors and servers. So why has storage flown under the radar? It could be that, aside from reducing raw capacity, the industry has yet to come up with a clear-cut answer to the problem.
"There is not an obvious and straightforward approach to saving energy in the storage environment because disks are going to spin. The only way you are going to save is to stop them from spinning," says John Webster, principal IT advisor with the Illuminata research and consulting firm.
CONTINUED ON PAGE 21
Server virtualization: The case for iSCSI
BY DAVE SIMPSON
In the context of server virtualization and storage, end users and vendors agree: Separate the storage from the server. Maximizing the benefits of server virtualization (such as resource consolidation) requires shared storage, which means SANs.
According to International Data Corp., about 80% of virtual servers are connected to SANs. And, today, virtually all of them are Fibre Channel SANs. However, iSCSI-based IP SANs may have inherent advantages in the context of server virtualization environments.
According to Matt Baker, product manager, storage marketing, at Dell, the benefits of iSCSI in a virtual server environment fall into three categories:
• Reducing the complex-
CONTINUED ON PAGE 17
Tape market update: LTO's the bright spot
BY DAVE SIMPSON
P. 24
Options abound for tape, disk encryption
BY MICHELE HOPE
P. 30
SPECIAL REPORTS
JULY 2007, VOL. 11, NO. 7
NEWS ANALYSIS AND TRENDS
1 Brocade upgrades SAN, FAN products
Almost a dozen introductions span the spectrum from switches to software.
1 Vendors, users grapple with power concerns
But there's more than one way to end the power struggle.
1 Server virtualization: The case for iSCSI
Most VMs are connected to FC SANs, but IP SANs may have inherent advantages.
8 EMC supersizes VTLs, extends de-dupe
DL6000 series scales up to 1.8PB (compressed) and 2,400 disk drives. De-duplication now available for NAS, VMware.
8 Hitachi ups the ante in content archiving
Also on the supersizing front, the HCAP supports up to 20PB in an 80-node archive system.
8 NetBackup upgrade focuses on D2D
Version 6.5 of Symantec's flagship backup/recovery software works with D2D, VTL, CDP, and other environments.
10 Brocade enters HBA market via LSI
But faces an uphill battle against the Emulex-QLogic duopoly.
14 Continuity Software tackles DR testing
Start-up thinks it has a smarter approach to disaster recovery.
18 Start-up offers IP storage clusters
Pivot3's RAIGE architecture provides virtual distributed RAID.
Tape market update: LTO's the bright spot
LTO libraries accounted for more than 88% of unit shipments last year, and the LTO-4 format promises to extend the technology's dominance.
BY DAVE SIMPSON
p. 24
Options abound for tape, disk encryption
Choices include software-based encryption, switch-based encryption, drive- or library-based encryption, and dedicated appliances.
BY MICHELE HOPE
p. 30
INFOSTOR
THE LEADING SOURCE FOR ENTERPRISE STORAGE PROFESSIONALS
CORPORATE OFFICERS
Chairman Frank T. Lauinger
President and CEO Robert F. Biolchini
Chief Financial Officer Mark C. Wilmoth
1421 South Sheridan Road, Tulsa, OK 74112
Tel: (918) 835-3161, fax: (918) 831-9497
www.pennwell.com.
Founded in 1910, PennWell is an information
company with 40 magazines and related
conferences, exhibitions, and online services
for business and industry worldwide.
FEATURES
33 The real state of SRM, part 2
Storage resource management (SRM) challenges include multi-vendor support, homegrown vs. vendor tools, and the trend toward SRM suites.
35 Introducing data warehouse appliances
Workload-optimized storage appliances are tuned to specific applications and I/O workloads.
SNIA ON STORAGE
37 ILM isn't just about storage
Storage-focused implementers can learn a lot from non-storage IT disciplines.
DEPARTMENTS
6 Editorial
6 Business Briefs
38 New Products
42 Ad Index
42 Editorial Index
Simply Protected Storage keeps a big league team at bat.
No one understands the importance of keeping your bases covered better than a big league baseball team. That's why when one of the league's most technically savvy teams started digitally capturing every at-bat, they chose solutions from Overland Storage to meet their increasing data storage and protection needs. Because Overland's solutions are flexible, scalable and affordable, the team spends less time managing their data, and more time using it to perfect their game. Learn how the team cut backup times by more than half at www.overlandstorage.com/baseball or call 1-888-288-4103.
© 2007 Overland Storage, Inc.
EDITORIAL
To dupe, or not to dupe is not a question
BY DAVE SIMPSON, EDITOR-IN-CHIEF
DATA DE-DUPLICATION SPECIALIST Data Domain's successful IPO last month was a clear signal that this technology has hit the big time. The IPO occurred amidst a flurry of de-duplication-related announcements.
Quantum, for example, says that later this year it will deliver a system that supports both inline and post-process de-duplication, which would give users an option while icing the controversy between the two approaches. (Quantum got its data de-dupe technology in its acquisition of ADIC, which had acquired de-dupe pioneer Rocksoft.)
Network Appliance and others are extending data de-dupe beyond its traditional role in backup scenarios, into nearline and primary storage devices and applications (see "NetApp extends de-dupe beyond backups," InfoStor, June 2007, p. 8).
Users will have to wait until early next year to get data de-duplication functionality on EMC's disk libraries (aka virtual tape libraries, or VTLs), but the company recently announced support for data de-dupe in both its VMware and NAS platforms (see "EMC supersizes VTLs, extends de-dupe," p. 8).
How hot is the data de-duplication market? Pretty hot. The 451 Group research and consulting firm expects it to grow from $100 million last year to more than $250 million this year. At that growth rate, it could become a $1 billion market by 2009.
The rapid growth is due to the fact that data de-duplication is, for the most part, a no-brainer technology that has immediate appeal to end users. It provides a sharp reduction in required capacity as well as high-speed recovery, with few drawbacks other than having to evaluate the various approaches (inline vs. post-process, hash-based vs. byte-level, etc.).
Users also have to watch out for vendors' insane de-dupe ratio claims. Some vendors, for example, claim a 500x reduction. Reality: Results from a recent end-user survey by the 451 Group indicate that most de-duplication users experience a 15x to 20x reduction in data, although a few achieved greater than 50x. Other respondents experienced data-reduction rates of less than 5x.
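To make those ratios concrete, here is a minimal sketch (a hypothetical illustration, not any vendor's tool) of what a claimed reduction ratio means for the physical capacity you actually buy:

```python
def stored_capacity(logical_tb: float, ratio: float) -> float:
    """Physical capacity needed to hold logical_tb at a given de-dupe ratio."""
    return logical_tb / ratio

# 100TB of backup data under the survey's typical and extreme ratios:
for ratio in (5, 15, 20, 50, 500):
    print(f"{ratio:>3}x -> {stored_capacity(100, ratio):5.1f}TB on disk")
# 5x -> 20.0TB, 15x -> 6.7TB, 20x -> 5.0TB, 50x -> 2.0TB, 500x -> 0.2TB
```

The step from 20x to 500x saves far less disk than the step from 1x to 20x, which is why the headline claims matter less than the first order of magnitude.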
The July issue of InfoStor will include an in-depth look at the data de-duplication market and the various technology approaches, as well as end users' experiences and expectations.
BUSINESS BRIEFS
NEC selected Xyratex Ltd.'s E5412E SAS/SATA RAID system as the external storage solution for its IA server NEC Express5800. Separately, CorData, a storage networking solutions provider, has chosen the Xyratex F5402E 4Gbps RAID system as a component in its line of networked storage solutions.
3PAR announced support for a new storage consolidation solution that pairs its InServ Storage Server with Network Appliance's V-Series systems.
StoneFly, a subsidiary of Dynamic Network Factory (DNF), has signed more than 50 new channel partners as part of its MVP Channel Program. About 75% of its channel partners are in the US.
iStor Networks recently rolled out its inAbled Channel Partner Program, which initially includes distributors, integrators, and resellers such as Arbitech, AuraGen, Condre, RAID Inc., and Variel Technology. Other iStor channel partners include Accusys, ASUS, Axstor, Coma Zalohovaci Systemy, Gigabyte, Kano, Netweb, PDE Technology, Thinkmate, and XSI Data Solutions. Separately, iStor Networks has signed Arbitech LLC as a distribution partner. Arbitech will distribute the iS325 storage system, which combines iStor's GigaStor iSCSI network storage controller and up to 15 SATA drives. The system is available with four or eight 1Gbps Ethernet ports, or one 10Gbps Ethernet port.
Hewlett-Packard has integrated 4Gbps Fibre Channel HBAs and EZPilot software from Emulex in the new HP EVA4100 SAN Starter Kit.
Zetera's Z-SAN technology is currently shipping as the foundation for Netgear's Storage Central Turbo (SC101T).
Tek-Tools has a reseller agreement with AdviStor, a provider of data storage and data-protection solutions. AdviStor will sell and support Tek-Tools' Profiler Suite.
QLogic's SANbox 6140 intelligent storage router has been qualified with Symantec's NetBackup software.
Sonasoft has signed a sales and marketing agreement with the North American Components (NAC) business of Arrow Electronics. Arrow will sell Sonasoft's Point-Click Recovery software for Microsoft Exchange, SQL, and Windows file systems.
TimeSpring Software and LeftHand Networks have entered into a sales and technology alliance. Components include TimeSpring's TimeData continuous data protection (CDP) software and LeftHand's SAN/iQ SAN platform.
FalconStor Software and V2 Electronics have a new range of storage appliances based on V2's hardware platforms and FalconStor's data-protection software, including IPStor, VirtualTape Library, Continuous Data Protection, and Network Storage System (NSS) software.
CONTINUED ON PAGE 21
With enterprises expanding and resources shrinking, managers have come to realize that many of today's practices won't cut it tomorrow. They need less complexity and greater control. If you're considering a next-gen data center, consider this: Emulex is enabling infrastructure virtualization. Right now. Today. Our latest server, fabric and storage virtualization technologies can facilitate and accelerate the process, allowing you to move, manage and maintain information faster and more efficiently than ever. Emulex. Ready when you are. emulex.com/virtualization
TURN VIRTUALIZATION INTO REALIZATION. INTRODUCING THE NEW DATA CENTER. MADE POSSIBLE BY EMULEX-ENABLED SERVER, FABRIC AND STORAGE VIRTUALIZATION TECHNOLOGIES.
emulex.com
[Chart: InfoStor QuickVote reader survey] Q: What are your plans for virtual tape libraries (VTLs)? Will implement this year: 36%; no plans to use VTLs: 37%; have already implemented: 27%.
[Chart: CAS implementation plans for fixed content/content-addressed storage (CAS) arrays, by status: in use now; in pilot/evaluation; in near-term plan; in long-term plan; not in plan. Source: TheInfoPro]
NEWS ANALYSIS + TRENDS
EMC supersizes VTLs, extends de-dupe
BY KEVIN KOMIEGA
EMC recently rolled out a pair of large, Symmetrix-based virtual tape systems and several new software upgrades for data de-duplication, backup, and archiving.
The EMC Disk Library 6000 series scales up to 1.8PB of compressed capacity and can back up more than 11TB per hour, according to company claims. The 6000 series currently offers hardware compression, but users seeking data de-duplication capabilities will have to wait until early 2008, which is when EMC says data de-dupe will be made available across its entire family of disk libraries.
EMC boosted the scalability and performance of its new libraries by basing the DL6000 series on the Symmetrix DMX-3 platform, versus the midrange Clariion, which serves as back-end storage for the company's other virtual tape library (VTL) systems.
The DL6100 supports up to 1,440 disk drives per system and offers RAID-5 protection with a maximum uncompressed capacity of 615TB, or up to 1.845PB of compressed capacity. The DL6300 supports up to 2,400 drives and offers RAID-1 protection with a maximum uncompressed capacity of 584TB, or 1.752PB compressed.
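Both compressed figures track the uncompressed ones at a 3:1 ratio, a common compression assumption in VTL capacity claims. A quick sketch of the arithmetic (an inference from the published numbers, not EMC's stated sizing method):

```python
# Hypothetical back-of-the-envelope check: the compressed-capacity claims
# appear to assume 3:1 compression on top of the raw RAID-protected capacity.
models = {"DL6100": 615, "DL6300": 584}  # maximum uncompressed capacity, TB
for name, uncompressed_tb in models.items():
    print(f"{name}: {uncompressed_tb}TB -> {uncompressed_tb * 3 / 1000:.3f}PB at 3:1")
# DL6100: 615TB -> 1.845PB at 3:1
# DL6300: 584TB -> 1.752PB at 3:1
```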
"The DL6000 series is clearly targeted at our largest customers who have gigantic data centers and gigantic backup-and-restore problems," says Jay Krone, director of storage product marketing at EMC. "There are also a lot of Symmetrix customers who want to continue to use the array that they're familiar with."
EMC's Disk Library family touts consolidated media management that gives users control of their entire tape pool through a single application interface, thereby eliminating some of the redundant management tasks commonly associated with managing multiple VTLs in traditional deployment scenarios. The libraries also feature Active Engine Failover, which kicks in when a processor engine fails and enables automatic fail-over to a second processor engine so that the disk library is able to continue servicing the backup server or application.
Heidi Biggar, an analyst with the Enterprise Strategy Group, says EMC's transition from Clariion to Symmetrix as the foundation for DL6000 libraries boosts capacity and performance.
CONTINUED ON PAGE 23
Hitachi ups the ante in content archiving
BY KEVIN KOMIEGA
Hitachi Data Systems has unveiled a new version of its Content Archive Platform with a slew of enhanced features in replication, security, de-duplication, and compression, but the selling point could potentially be the sheer size of the system.
Version 2.0 of the Hitachi Content Archive Platform (HCAP) can support up to 20PB of storage in an 80-node archive system. A single HCAP node can scale up to 400 million objects (files, metadata, and policies), and an 80-node system can support up to 32 billion objects. Hitachi claims the platform outperforms previous-generation CAS systems by 470%.
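Those scale figures are consistent with straight linear scaling across nodes; a quick back-of-the-envelope check (my arithmetic, not Hitachi's sizing guidance):

```python
# Hypothetical sanity check of HCAP 2.0's published scale figures.
objects_per_node = 400_000_000
nodes = 80
print(f"{objects_per_node * nodes / 1e9:.0f} billion objects")   # 32 billion
print(f"{20_000 / nodes:.0f}TB of storage per node at 20PB total")  # 250TB
```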
When it comes to building out the archive, Hitachi's approach is to scale archive server nodes and storage capacity independently rather than requiring additional servers and processing power to scale storage.
The launch of HCAP 2.0 comes on the heels of the debut of Hitachi's latest high-end storage array: the Universal Storage Platform (USP) V. And it's no coincidence that the two platforms have a lot of technology in common.
"The new release of the Content Archive Platform shares the same philosophy of disaggregating servers and storage as the recently announced USP V platform," says Asim Zaheer, senior director of business development for content archiving at HDS.
The USP V touts the combination of a virtualization layer with thin-provisioning software to offer users consolidation, external storage virtualization, and the power and cooling advantages of thin provisioning.
The combination of the aforementioned technologies allows for the management of up to (theoretically) 247PB of virtualized capacity, about 670% more than the previous-generation TagmaStore USP platform. The company also claims a maximum performance of 3.5 million I/Os per second (IOPS), a 5x increase over its previous arrays.
The HCAP can attach to a virtual storage pool with the USP V, thereby acting as an archive tier of storage where aged data on primary storage can be moved. Data in the archive can be offloaded from expensive disk to less expensive ATA or Serial ATA (SATA) storage.
The previous version of Hitachi's archiving product, until now, had only been offered as an appliance based on the TagmaStore Workgroup Modular Storage model WMS100 with servers that offered software connectivity into the infrastructure. Zaheer says Hitachi will continue to offer
CONTINUED ON PAGE 14
NetBackup upgrade focuses on D2D
BY KEVIN KOMIEGA
Disk, tape, data movement: It doesn't matter. Symantec wants to unify the management of all things backup. And the company is betting that the disk-based backup support, application optimization upgrades, and new pricing model for the latest release of its flagship software, Veritas NetBackup 6.5, will attract users looking for a single product for enterprise data protection.
NetBackup 6.5 is capable of managing tape, virtual tape libraries (VTLs), disk backup, data de-duplication, continuous data protection (CDP), snapshots, and replication process-
CONTINUED ON PAGE 12
52,000 IOPS with One Drive: Revolutionary Performance
ZeusIOPS Solid State Drive: the performance of 200 HDDs in one drive.
Revolutionary performance for demanding transactional applications. Drop-in replacement for 3.5" HDD. 52,000 IOPS sustained speed.
ZeusIOPS Solid State Drive delivers the performance of 200 HDDs with just one drive. For comparable performance, a 200 HDD system will cost 2-3 times as much as a ZeusIOPS system because of additional up-front hardware as well as recurring maintenance, power and cooling costs. ZeusIOPS Solid State Drive is revolutionizing the way businesses access critical data by overcoming the performance bottleneck inherent to traditional rotating media HDD storage.
For more information, contact our SSD specialists at SSD@stec-inc.com, call 1-800-796-4645 or visit our website at www.stec-inc.com.
© STEC, Inc. The STEC name, logo, and design are trademarks of STEC, Inc. All other trademarks are the property of their respective owners.
[Chart: FC HBA market shares, Q1 2007. QLogic: 44.5%; Emulex: 35.3%; Other: 15.2%; LSI: 5%. Source: Dell'Oro Group]
Brocade enters HBA market via LSI
BY DAVE SIMPSON
This month, Brocade marked its entry into the Fibre Channel host bus adapter (HBA) market with shipments of re-branded adapters from LSI, putting the company in direct competition with market leaders Emulex and QLogic (which, combined, have a market share of more than 80%). Last month, Brocade entered the iSCSI HBA market with cards based on its own technology.
The Dell'Oro Group market research firm expects the Fibre Channel HBA market to top $1 billion this year.
But many observers conjectured that the announcement was not so much about Brocade seeking to boost its revenues via HBA sales, or about gaining market share but, rather, it was more about being able to tell a soup-to-nuts SAN infrastructure story to gain a competitive advantage over its biggest rival: Cisco. In other words, Brocade might be shooting BBs at Emulex and QLogic, but its big guns are still aimed where they've always been: at the 800-pound networking gorilla.
At least that's the take of one new-found competitor (and partner): "Brocade needs ways to compete against their biggest competitor, and this move gives them a bit of a differentiator," says Mike Smith, executive vice president of worldwide marketing at Emulex, "but it's not like there's a new player in the market. We do not expect this to have an impact on our business, and we expect to continue to partner with Brocade on bringing best-of-breed solutions to market."
Brocade's 4Gbps Fibre Channel HBAs are available in single- or dual-port models and are compatible with the PCI-Express host bus. Although Brocade's initial foray into Fibre Channel HBAs is based on LSI products, future generations (e.g., 8Gbps HBAs) will be based on Brocade's own intellectual property and will have more-competitive differentiators, according to Tom Buiocchi, Brocade's vice president of worldwide marketing.
"At 8Gbps, Brocade will have their own technology, but the LSI deal gives them a jump-start into the HBA market and will give them a good feel for whether they can crack the very strong shell that QLogic and Emulex have built around the
CONTINUED ON PAGE 16
Brocade FROM COVER
offer better performance for remote disaster-recovery applications with the addition of Fast Write acceleration technology. Fast Write improves the response times of synchronous applications over longer distances and boosts the overall throughput of data transfers over dark fiber or xWDM WAN links for more-efficient utilization of WAN bandwidth. Brocade officials claim the Fast Write feature improves data protection by accelerating the performance of remote disaster-recovery applications, such as disk mirroring, by up to 200%.
Beyond the speeds and feeds, Brocade also announced a number of enhancements to both the Enterprise Fabric Connectivity Manager (EFCM) and Fabric Manager, its dueling SAN infrastructure management applications. The company added Advanced Call Home features to EFCM and performance monitoring functionality to Fabric Manager.
The future foundation of Brocade's SAN management software will be the EFCM, formerly known as the McData EFCM. The Fabric Manager software will continue to be offered and enhanced until its functionality is built into a converged EFCM application, which is due next year.
Brocade has stated that it will sell and support existing Brocade and McData products through the end of this year with the ultimate goal of combining the best elements of the portfolio into common, integrated hardware and software products in 2008.
"One of our biggest efforts is in achieving full interoperability between McData and Brocade fabrics and products like the Brocade Access Gateway, and the enhancements to EFCM help with that interoperability," according to Truls Myklebust, senior director of product marketing for Brocade's FAN solutions.
To that end, the Brocade Access Gateway, a virtualization platform that enables interoperability between Brocade and McData switches, will now be available on the entry-level Brocade 200E switch.
The company also released the next generations of its Fabric Application Platforms for virtualization: the FA4-18 application blade for the 48000 director and the model 7600 Application Platform. The Fabric Application Platform serves as the foundation for virtualization solutions such as EMC's RecoverPoint and Invista.

FAN features for branch offices
Brocade hasn't forgotten about the file-level world. In fact, it's fast becoming a main focus of the company's overall data-center strategy. The company continues to build out its FAN portfolio for advanced file-based data management and protection from the data center to the branch office and back.
Topping the FAN upgrades is a new release of StorageX. Version 6.0 of the software features additional capabilities for file migration in both CIFS and NFS environments. The release provides tighter integration with Windows Server 2003 R2, broadens Unix platform support, and allows organizations to have more control over data movement during migration and storage load balancing procedures.
"StorageX has always been very focused on Windows [CIFS] environments and that has served us well, but obviously we see customers with environments that also have Unix [NFS]," says Myklebust, "so we're rolling out much broader coverage and support for other environments."
Brocade also unveiled a new version of its File Lifecycle Manager (FLM) product. FLM Version 4.0 touts more-efficient automated file migration and restoration without system downtime. FLM's claim to fame is its ability to create tiers of storage in Network Appliance NAS environments.
"These new products are all about increasing efficiencies in the data center," says Doug Ingraham, Brocade's senior director of SAN product management. Brocade is moving away from being just a block-level SAN company toward tackling other issues such as data management and bringing branch office data back to the data center.
Improving interoperability within its own products, moving beyond the block level to become more of a data management provider, and broadening its reach in the branch office are all important goals that Brocade must execute in order to remain a major player in the market, according to Richard Villars, vice president of storage systems research at IDC.
"Delivering interoperability between Brocade and McData products is key for Brocade from the standpoint of protecting the customer base and reassuring them they can deliver on their promises, but now that they have merged the company wants to start adding functionality to make them more valuable as a partner," says Villars.

Moving beyond block-level storage
Villars says Brocade's attempt to move beyond block-level storage is a direct reaction to two big trends in the storage industry: the massive expansion of file-level data and the move toward virtualized infrastructures via blades and virtual server solutions.
"Brocade wants to be a critical player in the data center and not just a component supplier," Villars says. "There is a need to manage file-level data while simultaneously managing block-level storage and, if you want to play in the data center, there is going to be a big opportunity in automating and creating more-effective solutions for virtualized infrastructures."
Further evidence of Brocade's plans to become a key vendor in the data center of the future was its recent entry into the Fibre Channel host bus adapter (HBA) market via a reseller deal with LSI (see "Brocade enters HBA market via LSI," above).
VENDORS MENTIONED
Brocade, EMC, LSI
Any Way You Stack It, We Keep Your Data Available
The RAID Storage Expert
Introducing the EonStor S16F-R1430, the new SAS/SATA-to-4G-FC array from Infortrend.
Balance performance vs. cost, always-on availability, massive data storage. Install the hard drives (SAS, SATA, or SAS + SATA) that exactly match your system performance and budget goals:
• High-performance SAS drives for mission-critical solutions
• Cost-effective SATA drives for near-line backup environments
• Mix and change the drive types as your needs change
• Full-featured hardware redundancy for 24/7 operation
Connect the subsystem to three S16S-J1000 JBODs populated with 750GB SATA drives for up to 48TB of RAID storage.
us.sales@infortrend.com / 408-988-5745 / Fax: 408-988-6288
www.infortrend.com/sas
NetBackup FROM PAGE 8
es across all major vendors, according to Matt Fairbanks, Symantec's senior director of product marketing.
Fairbanks says the latest version of NetBackup is referred to internally as the "disk release" due to the laundry list of new features and support for disk-based backup environments. "This is an integrated way to manage all devices and data movers," he says.
NetBackup 6.5, which will be available this summer, includes features such as native disk-based backup, data de-duplication, integration with intelligent backup appliances and VTLs, heterogeneous snapshot management, granular recovery for applications and virtual machines, and new licensing and pricing programs.
According to Fairbanks, NetBackup 6.5 provides a single approach to agents, policy management, recovery processes, security, backup reporting, and the data catalog.
Four new capabilities in version 6.5 are designed to take advantage of emerging and established disk-based data-protection technologies, including native data de-duplication that can be leveraged across the entire NetBackup environment; native disk backup capabilities, which enable pooling, sharing, and backup over a SAN to shared disk; integration with disk-based backup appliances and VTLs; and heterogeneous snapshots and CDP management.
With the software's PureDisk Deduplication Option, NetBackup 6.5 integrates Symantec's PureDisk de-dupe technology into the core of NetBackup to ensure redundant backup information is only stored once across the backup environment. In addition, the new Flexible Disk Option enables backup administrators to perform high-speed SAN backup to a shared disk pool.
The Virtual Tape Option enhances the performance and manageability of virtual tape devices by copying data directly from the VTL to tape, using a process that is controlled by NetBackup in a catalog-consistent manner.
In response to the growing popularity of virtual machines, Symantec added support for consolidated backup, granular file-level and image-level recovery, and de-duplication for VMware environments. NetBackup leverages VMware Consolidated Backup (VCB) to guarantee consistency and remove the backup from the primary VMware server. VMware backups can be performed to tape or disk and can leverage the PureDisk Deduplication Option for de-dupe and replication of VMware backups.
NetBackup 6.5 also offers database and document-level recovery from the same backup for Microsoft SharePoint, eliminating the need for multiple backups of the same system. For Exchange environments, NetBackup provides an instant-recovery feature that enables administrators to recover from a disk-based snapshot.
Symantec's focus on disk-based backup in NetBackup 6.5 is well-timed. In International Data Corp.'s (IDC) recent Disk-Based Data Protection Study, the research
firm asked IT professionals how much of their current disk storage exists to hold copies for data protection, backup, and recovery. On average, firms said that 35% of their disk capacity was for data protection, backup, and recovery.
Laura DuBois, IDC's research director for storage software, says that number is likely to rise. "In three years, we expect this to grow to an average of 40%. NetBackup 6.5's focus on disk is consistent with customer demand and provides flexibility in selecting the manner of disk-based protection that is most suitable for users' environments."
DuBois says the integration of existing NetBackup configurations with the new NetBackup PureDisk configuration is a key piece of the software. "This enables de-duplication, use of any type of disk storage, and replication to a remote disaster-recovery site without tape," she says.
Another area of change for the latest version of NetBackup is its pricing. Symantec's research shows a growing customer interest in aligning their purchasing model for data-protection software with their approach to storage hardware procurement. In response, Symantec is offering a capacity-based pricing option for NetBackup 6.5.
Customers now have the choice of licensing NetBackup based on the total amount of data being protected, or they can continue to use the traditional per-server pricing model. In addition, customers that stick with traditional server-based pricing will be offered a simplified pricing structure under which dozens of clients, agents, and modules are now grouped into three options.
Whether customers take to the new pricing model remains to be seen. "The challenge with pricing is that no matter how you offer it, some users want it one way, and others [want it] another way. We'll have to wait and see what users think about this, but I've heard some positive responses," says DuBois.
The launch of NetBackup 6.5 is the first step on the path to a new strategy for Symantec. The company simultaneously announced Storage United, an initiative designed to minimize the cost and complexity of managing storage. Storage United provides a software-oriented approach to help heterogeneous data-center environments deliver storage as a service by uniting disparate resources.
The main aim of the initiative is to provide a layer of data protection, storage management, and archiving software that supports all major server and storage systems.
Symantec's Fairbanks claims that, because Symantec has no hardware agenda, customers have more choices, flexibility, and control over their storage and server architectures and hardware purchases.
"The storage management problem is connected to the platform management problem, which is connected to administration and business problems. Right now, all of these different platforms have different management utilities," says Fairbanks. "There's a gap between what the business needs and what IT is providing. It's time to align everything to deliver storage as a service."
Continuity Software tackles DR testing
BY KEVIN KOMIEGA
Israeli start-up Continuity Software has set up shop in the U.S. and is making its disaster-recovery management software available to North American customers.
The product, dubbed RecoverGuard, offers end users visibility into remote recovery operations by detecting infrastructure gaps and configuration vulnerabilities between primary data centers and disaster-recovery sites. The main aim of RecoverGuard is to validate disaster-recovery implementations which, according to company officials, fail at an alarming rate.
"Disaster recovery doesn't work," says Gil Hecht, Continuity Software's founder and CEO. "Every time a change is made in the production environment it must be implemented in a similar way in the disaster-recovery environment. There are hundreds and thousands of changes being made without users having the ability to test them. The chances of it working are slim."
That, says Hecht, is where Continuity Software can help. RecoverGuard can identify problems or gaps between production and disaster-recovery environments. "When something gets out of sync, it immediately notifies the administrator," he says.
RecoverGuard monitors and detects configuration errors, infrastructure changes, and vulnerabilities in real time in order to eliminate the risk of data loss or corruption in the event of a disaster. The software ensures all production configuration changes are successfully applied to the remote hot site.
Continuity offers RecoverGuard in a number of different ways, including a No-Risk Assessment, which offers customers the opportunity to deploy RecoverGuard on up to 30 servers for 48 hours. At the end of the 48 hours, the customer receives a report that details the complete topology of the data center and disaster-recovery environment, a description of the risks and threats to the production and disaster-recovery environments, a list of ways to optimize certain aspects of the environment, and an SLA analysis.
The 48-hour assessment costs $15,000 for up to 30 servers, while the software is also available for an annual license of $2,000 per server.
RecoverGuard is agent-less and supports EMC Symmetrix, Clariion, SRDF, and TimeFinder, as well as Network Appliance's Data ONTAP platforms. The software also supports all major database and cluster environments, as well as Windows, HP-UX, Solaris, AIX, and Linux operating systems.
Bob Laliberte, an analyst with the Enterprise Strategy Group (ESG), says a very high percentage of disaster-recovery implementations have some kind of problem. "Take a disaster-recovery environment that is put in place today and tested. Typically it will work. Now fast-forward three months: How many moves, additions, and changes have been made in that production environment over three months?" he says. "Clearly, companies don't have time to do a disaster-recovery regression test after each change; it's just not feasible."
Laliberte says most companies test their disaster-recovery systems every six to twelve months, while other companies test more frequently. "In most cases, they fail, and the company corrects those failures only to have different ones affect them six months later," he says.
Laliberte believes there is a critical need for disaster-recovery testing systems. "Why wouldn't you invest in a system to monitor your multi-million dollar disaster-recovery environment that your business depends on? At least that way when you come in to work and see the light on, you can fix the problem immediately, instead of waiting for the next disaster-recovery test," he says.
ESG estimates that remote recovery operations currently fail at a rate of 40% to 60%.
Hitachi FROM PAGE 8
appliance-based versions of the archiving platform at various capacity points for customers who want a turnkey product, but there is also the HCAP-DL (diskless) version, which supports all of Hitachi's storage systems, including the USP V, USP (formerly branded as TagmaStore), Network Storage Controller, Adaptable Modular Storage systems, and Workgroup Modular Storage arrays.
"The salient point here is that Hitachi is divorcing the concept of what the software does from the whole hardware stack," says John Webster, principal IT advisor with the Illuminata research and consulting firm. "That makes the HCAP much more appealing to customers because now they can potentially take legacy storage devices and include them under the umbrella."
However, Webster admits, to add legacy or commodity storage to the virtual pool, end users have to put a USP V in between the HCAP-DL and the arrays. "But for USP V customers, that's great," he says. "Now they have a number of different ways to [implement the archiving platform]."
Pricing for different models of the HCAP varies considerably based on the storage platform being used on the back-end but, for example, an entry-level 5TB HCAP system is priced at approximately $70,000.
In an effort to limit the need for proprietary APIs, the HCAP uses standards-based interfaces such as NFS, CIFS, Web-based Distributed Authoring and Versioning (WebDAV), and HTTP, as well as storage management standards such as the Storage Management Initiative Specification (SMI-S), to integrate content-producing applications into the archive.
Hitachi also introduced a new encryption solution referred to as Secret Sharing. The patent-pending technology allows users to store their security key within the HCAP and share that key across multiple nodes within the archive.
"As content comes into the system we protect it with standard AES encryption, but the differentiator is our distributed key management system based on our Secret Sharing technology," says Zaheer. "Rather than having a single key in a single location we distribute pieces of the key across the environment. Users need all of the pieces of that key in order to gain access to and decrypt the data."
Secret Sharing ensures only a fully operational system with all of its nodes connected to the archive will be able to decrypt the content, metadata, and search index. Zaheer says if a server or storage device is stolen or removed from the cluster, the device would be automatically encrypted and immediately unreadable by any other device.
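What Zaheer describes is a form of secret splitting: a key divided into shares such that every share is required to reconstruct it. A minimal XOR-based sketch of that general idea (an illustration of the concept only, not Hitachi's patent-pending scheme):

```python
import secrets

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """Split key into n shares; all n are required to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:  # last share = key XOR all the random shares
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def join_key(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = secrets.token_bytes(32)              # e.g., an AES-256 key
assert join_key(split_key(key, 5)) == key  # with any share missing, the result is garbage
```

A node stolen from the cluster holds only its own share, which by itself reveals nothing about the key.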
Hitachi has thrown data de-duplication into the mix to eliminate storing redundant data in the archive. Zaheer claims Hitachi's approach to data de-duplication is "collision-proof," in that it performs both hash comparisons and binary comparisons to ensure objects are actual duplicates, therefore avoiding hash collisions where different objects could have the same cryptographic hash key. "Most de-duplication methods use a hash key to compare hash values between files, but it is sometimes possible to have the same hash key for different files. We perform a binary comparison before we collapse a file and reclaim the capacity," he says.
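The hash-then-verify flow Zaheer describes can be sketched in a few lines; the in-memory store and function name below are hypothetical, not Hitachi's implementation:

```python
import hashlib

store: dict[str, bytes] = {}  # hash digest -> canonical stored object

def dedupe_write(obj: bytes) -> str:
    """Store obj, collapsing only verified duplicates.

    A matching hash alone is not trusted: a byte-for-byte (binary)
    comparison confirms the objects really are identical before the
    new copy is discarded, so a hash collision cannot silently map
    two different objects to one stored copy.
    """
    digest = hashlib.sha256(obj).hexdigest()
    existing = store.get(digest)
    if existing is not None:
        if existing == obj:   # binary comparison: true duplicate
            return digest     # collapse: reclaim the capacity
        raise RuntimeError("hash collision: store object separately")
    store[digest] = obj
    return digest
```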
Hitachi's archiving system comprises homegrown HDS hardware and software and technology the company acquired through the purchase of digital archiving start-up Archivas last February.
Archivas' software, Archivas Cluster (ArC), simultaneously indexes metadata and content as files are written to the archive, with the built-in ability to extract text and metadata from 370 file formats. ArC also provides event-based updating of the full text and metadata index as retention status changes or as files are deleted. The ArC software is what enables HCAP to scale to 80 nodes, support a single global namespace with more than 2PB of capacity, and manage more than two billion files.
HDS teams with Bus-Tech
Hitachi Data Systems and Bus-Tech have jointly announced the availability of Hitachi's Content Archive Platform with Bus-Tech's Mainframe Data Library (MDL) and Mainframe Appliance for Storage (MAS) products, resulting in a digital archiving system for mainframes.
Bus-Tech's MDL and MAS are tape-on-disk appliances that attach directly to zSeries mainframes via FICON or ESCON I/O channels, and to disk storage systems via Gigabit Ethernet or Fibre Channel. To the mainframe, the tape-on-disk appliances emulate up to 1,024 (MDL) or 256 (MAS) 3480/3490/3590 tape drives, allowing mainframe-based applications to store tape data on Hitachi's Content Archive Platform by writing sequential files to disk as if they were standard tape devices.
The MDL and MAS attach to the Content Archive Platform via the HTTP protocol.
VENDORS MENTIONED
Continuity Software, EMC, NetApp
No Boundary... Manage your data anywhere, anytime...
UltraStor RS16 IP iSCSI: 3U, 16 drives with dual GbE ports
Expandable R4 XP2000 iSCSI: 1U, 4 drives expandable to 8 drives
UltraStor RS8 IP iSCSI: 2U, 8 drives with dual GbE ports
RAID 6 and powerful Web GUI. Certified for Microsoft Windows 2003. Up to 2GB ECC cache memory. Up to 8TB storage (8 x 1TB SATA).
The Enhance sales and support teams are just a call away to answer your questions regarding our intelligent iSCSI RAID storage systems, hardware and software features such as Snapshot and Remote Replication.
Advanced replacement options. 3-year limited warranty. Optional onsite service.
Powered by Qsan Controller (www.qsan.com.tw)
Enhance Technology, Inc., 12221 Florence Ave., Santa Fe Springs, CA 90670. Tel: 562-777-3488. 1-888-394-2355 / 1-800-808-4239. www.enhance-tech.com/infostor
© 2007 Enhance Technology, Inc. All rights reserved. Enhance Technology logo, Where Storage Begins, and UltraStor IP are registered trademarks of Enhance Technology, Inc. Enhance UltraStor IP storage systems are powered by Qsan Controllers. Qsan and Qsan logo are registered trademarks of Qsan Technology, Inc.
HBA market FROM PAGE 10
HBA market," says Arun Taneja, founder of the Taneja Group consulting firm.
Buiocchi says that Brocade's distribution strategy for its HBAs will be the same as it is for its switches: large OEMs (which could potentially include EMC, IBM, and Hewlett-Packard) and the channel (VARs and integrators). Approximately 85% of Brocade's switches go through OEMs, while 15% go through the channel. In addition, Brocade will offer the HBAs via its Website.
Brian Garrett, an analyst with the Enterprise Strategy Group, notes that Brocade's HBA play is significant not in its short-term implications but, rather, in its long-term ramifications. Besides the potential for improved pricing and a reduction in the number of vendors that customers have to deal with, "Brocade could bring a lot to the party over time," says Garrett. "Having a footprint at the server end of the wire in the form of Fibre Channel HBAs, along with an existing footprint within the fabric, provides Brocade with an end-to-end platform for the delivery of intelligent services running in the storage network, including online migration, virtualization, and replication. The intelligent ASIC technology that Brocade has honed over the years at the port level within switches can be re-purposed at the server end of the wire within HBAs."
"With that said," Garrett continues, "Brocade has a new challenge ahead as they start supporting the server end of the wire. Supporting HBA drivers is a pain for end users and vendors alike. Brocade needs to invest in a new level of infrastructure, expertise, and support services to help customers deal with the qualification, support, and upgrade of HBA driver software."
Richard Villars, vice president of storage systems research at International Data Corp. (IDC), agrees that Brocade's move goes far beyond just duking it out in the HBA space. "If Brocade were just getting into the HBA market they would be facing a rough road, but what they're really trying to do is take advantage of the emerging opportunity created by the move toward bladed architectures and the explosion of virtual servers," says Villars. "They see a confluence of things such as bladed architectures, virtual servers, and a shift toward high-speed interconnects like 10GbE and 8Gbps Fibre Channel. For Brocade to be competitive they need to be able to play in the architectures being built for those environments."
In addition to Fibre Channel HBAs, Brocade last month began shipments of iSCSI HBAs based on technology gained in the company's acquisition of Silverback Systems last year. The model 2110 iSCSI HBA initiators are compatible with Windows and Linux platforms.
Brocade also outlined plans for next-generation Intelligent Server Adapters, which company officials say will integrate HBA technology with SAN switching technology. Those products will include 8Gbps Fibre Channel HBAs and 10Gbps Ethernet adapters and will be available next year.
VENDORS MENTIONED
Brocade, Cisco, EMC, Emulex, Hewlett-Packard, IBM, LSI, QLogic
Does your SAN do this?
Intelligent IP SANs Made Simple
• Asynchronous Replication
• Integration with Microsoft's VSS*
• Built-in Reporting
• 10Gb Ethernet Support
• Storage Virtualization
• Active-Active Clustering
• Delta-Based Snapshots
• Synchronous Mirroring
Shipping since 2002, StoneFly IP SANs support all environments ranging from entry-level, disk-to-disk backup, primary storage and enterprise deployments.
Contact us: www.StoneFly.com/BuyersGuide / WorldWideSales@StoneFly.com / 1-888-STONEFLY
*Microsoft's Volume Shadow Copy Services. Microsoft, Windows, and the Windows Logo are trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries.
iSCSI FROM COVER
ity and costs associated with shared storage;
• Facilitating virtual machine (VM) mobility, which is one of the key value propositions of server virtualization; and
• Improving data protection, such as backups and disaster recovery.
"In many cases, iSCSI provides a superior fabric for server virtualization compared to Fibre Channel, not just a cheaper one," says Baker.
Praveen Asthana, Dell's marketing director, takes it one step further: "Server virtualization is a killer app for iSCSI."

Reduced complexity
iSCSI's ability to reduce the complexity and costs associated with SANs is not an advantage that's specific to virtual server environments. But reduced complexity is particularly important in virtual server environments because it feeds into the value proposition of consolidation and simplified management. In addition, many small and medium-sized companies embarking on server virtualization do not have Fibre Channel expertise, nor do they have installed Fibre Channel SANs.
In addition to reduced complexity, iSCSI lowers the entry costs for shared storage in virtualized environments because it's based on Ethernet, and companies can leverage less-expensive (compared to Fibre Channel) equipment and existing skills.

VM mobility
With a shared-storage SAN on the back-end of a virtual server environment, if one server goes down, the guest operating system (OS) and applications will transfer to another physical server automatically, usually without any disruption noticeable to users. This mobility of virtual machines and their applications is a key benefit of server virtualization, and SANs are required for mobility.
VM mobility also provides the ability to move workloads around to dynamically level out (load-balance) resources, providing applications with more horsepower on-demand.
With direct-attached storage (DAS), in contrast, if a VM fails or becomes overloaded, administrators have to manually migrate virtual machines and applications. SANs facilitate mobility, and iSCSI may provide some mobility advantages compared to Fibre Channel that are, again, related to complexity.
"Fibre Channel is a very physically oriented protocol," explains Dell's Baker. "WWNs are like MAC addresses: They're burned into the hardware [e.g., host bus adapters]. There is no logical equivalent of WWNs that you can give to a virtual machine, which means you have to create relationships upon relationships between storage and virtual platforms, and then you have to again allocate storage from the hypervisor up to the VMs."
As such, using Fibre Channel in a virtual server environment increases the number of touch points (manual configuration steps) required to manage your
CONTINUED ON PAGE 18
Start-up offers IP storage clusters
BY KEVIN KOMIEGA
Start-up Pivot3 recently made its presence known with a new approach toward cost-effective network storage.
Pivot3 has come up with a way to use off-the-shelf components and a parallelized I/O architecture to provide an IP-based storage cluster that can potentially deliver up to 5x the performance at half of the cost of some competing solutions, according to company claims.
The company's initial offering, dubbed the RAIGE (RAID Across Independent Gigabit Ethernet) Storage Cluster, is an iSCSI implementation of the Pivot3 block-level infrastructure virtualization architecture that provides a virtual distributed RAID implementation.
Jeff Bell, vice president of marketing for Pivot3, says the RAIGE Storage Cluster, which is designed for both Windows and Linux environments, breaks the performance and capacity limits of physical RAID devices by using block-level virtualization and eliminating the need for specialized RAID hardware and storage controllers.
"We designed the system without any RAID or storage controllers, which provides a better way to do RAID data protection in a clustered environment," says Bell. "Every client has direct access to the back-end storage nodes, nothing has to funnel through a controller, and the system gets faster as you build out your infrastructure."
Data protection is supported across multiple networked storage nodes, called Databanks, which are built using standard x86 servers and disk drives and are connected via Gigabit Ethernet. Databank nodes can be added to scale capacity to hundreds of terabytes. Each node adds processing power, cache, and network ports, contributing to the overall performance. Each node contains 12 500GB or 750GB drives for a raw capacity of up to 9TB.
Drives and Databanks of any size can be added non-disruptively and, unlike DAS or server-based storage, Pivot3 storage can be virtually assigned when and where it is required, without the need for re-cabling. Databanks are automatically discovered and can be assigned to a new virtual array or added to an existing virtual array.
Bell claims data recovery times are 5x to 10x faster with system-wide parallel processing and a proprietary algorithm that optimizes the rebuild process. RAIGE supports on-the-fly configuration changes, and data is continuously available through volume provisioning changes.
Pivot3 is initially targeting its RAIGE Storage Cluster product at the digital video surveillance market, which requires scalable, high-performance, low-cost storage systems, according to Bell.
A 6TB Databank node is priced at $17,499.
iSCSI FROM PAGE 17
storage, and transferring from a virtual environment to a physical machine over Fibre Channel can require extensive migration planning and reconfiguration, according to Baker.
Also, to facilitate VM mobility, fabric zoning and masking must be opened up, so that each virtual server has access to storage. To the guest operating systems, the provisioned storage looks as if it is directly connected, but the guest OS does not have a direct relationship to the storage.
Initiatives such as N_Port ID Virtualization (NPIV), which allows multiple Fibre Channel initiators to share a single physical port with multiple WWNs, may help simplify the configuration and management of Fibre Channel SANs in virtualized environments. However, NPIV can add fabric complexity and cost.
In contrast, Baker argues, iSCSI is very logically (as opposed to physically) oriented (see figure, p. 19). It runs on top of Ethernet, IP, and TCP, which gives users the
ability to abstract away from the hardware and deal with the storage configuration in a more logical way. For example, users can create a logical one-to-one relationship among VMs, applications, and storage; this is in contrast to the multiple touch points that you have to deal with in the case of Fibre Channel.
[Figure: iSCSI alleviates complexity. FC model: extensive hypervisor configuration; fabric management required; arbitrated storage only; physical/hardware control. iSCSI model: limited hypervisor configuration; minimal network management required; Storage Direct or arbitrated access; logical/VM control. Source: Dell]
[Figure: iSCSI facilitates VM mobility (Storage Direct). With iSCSI, the guest OS and hypervisor each have a direct relationship to storage via IQN, and a VM's identity (IQN name) transfers seamlessly between physical servers over the Ethernet/IP network. With Fibre Channel, the guest OS has an arbitrated relationship, the hypervisor a direct relationship via WWN, and VM mobility requires hypervisor arbitration. Source: Dell]
And iSCSI initiators are agnostic to lower-level (physical) layers, allowing a direct relationship between a guest operating system's software initiator and the
storage resources. As such, provisioning
storage through the VM hypervisor layer
(e.g., via ESX) is no longer necessary.
"The IQN [iSCSI Qualified Name, or identifier] is tied directly to the VM, which simplifies things by reducing the complexity of the relationship among a VM, its applications, and storage," says Baker. "iSCSI makes it easier to configure the virtual environment."
Dell refers to the ability to access storage directly from a VM without interference from the underlying hypervisor as Storage Direct (see the second figure above).
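A minimal sketch of what that logical mapping looks like in practice, assuming hypothetical IQNs and addresses (the iscsiadm login shown is standard open-iscsi usage, not anything Dell-specific):

# Hypothetical sketch of the one-to-one, logical mapping iSCSI
# permits: each VM's software initiator owns its own IQN and logs
# in to its own target, independent of which physical host the VM
# runs on. All names and addresses here are illustrative.

storage_map = {
    "vm-guest-1": {
        "initiator_iqn": "iqn.2007-07.com.example:vm-guest-1",
        "target_iqn":    "iqn.2007-07.com.example.array:vm1-appdata",
        "portal":        "10.0.0.50:3260",
    },
    "vm-guest-2": {
        "initiator_iqn": "iqn.2007-07.com.example:vm-guest-2",
        "target_iqn":    "iqn.2007-07.com.example.array:vm2-appdata",
        "portal":        "10.0.0.50:3260",
    },
}

def login_command(vm_name):
    """Build the open-iscsi login a guest would run; because the
    mapping travels with the VM, migrating the VM to another host
    requires no fabric rezoning or remasking."""
    m = storage_map[vm_name]
    return (f"iscsiadm -m node -T {m['target_iqn']} "
            f"-p {m['portal']} --login")

print(login_command("vm-guest-1"))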
Baker rounds out the case for iSCSI
with comments on potential benefits of
data protection:
• With iSCSI, you can perform direct
backups to tape or disk from a guest
OS (virtual machine). With Fibre
Channel, in contrast, backups have
to be managed and arbitrated in the
VM hypervisor (e.g., via ESX in the
case of VMware).
• With iSCSI, backups (and other stor-
age management applications) are di-
rectly managed in the guest OS, with
direct access to storage and full ap-
plication functionality. With Fibre
Channel, backups are managed by the
guest OS and hypervisor, which arbi-
trate access to storage and control the
relationship with the external disk ar-
ray. This approach can also limit the
functionality of the applications. In
an iSCSI implementation, integra-
tion with VSS/VDS can be ported
over directly from existing backup
methodologies. This is not possible
with the indirect relationship re-
quired with Fibre Channel, accord-
ing to Baker. "Utilization of existing backup scripts and methodologies is money in the bank for IT administrators," he says. "Moreover, you can achieve much finer-grained backup capabilities."
• With iSCSI, images and applications
developed on guest operating systems
can be migrated to a non-virtualized
(physical) server seamlessly. Migrat-
ing from virtual to physical machines
via Fibre Channel can require sig-
nificant reconfiguration by adminis-
trators and comes with the risk that
administrator errors will cause ap-
plication-level problems. The same
holds true with physical-to-virtual
and virtual-to-virtual migrations.
Nevertheless, Fibre Channel still has
two advantages: It's more mature and,
in almost all cases (or at least until
10Gbps iSCSI takes off), Fibre Channel
provides better performance. However,
in the majority of virtual server appli-
cations, iSCSI SANs may provide suf-
ficient performance.
Chris Poelker, vice president of enter-
prise solutions at FalconStor Software,
cites many of the same benefits of iSCSI
in virtual server environments as does
Dell's Baker, most notably in the areas
of lower cost, simplicity, disaster recov-
ery, and direct storage connections to
virtual machines. But Poelker adds that
performance is actually another place
where iSCSI can shine, citing not only
the advent of 10Gbps Ethernet, but also
InfiniBand.
"In larger organizations we see a migration toward leveraging iSCSI as a protocol over InfiniBand, which runs at 20Gbps, to provide RDMA access to disk," says Poelker. "So in a virtualized, large-scale grid environment using a single InfiniBand connection, you can run Fibre Channel, Ethernet, and iSCSI RDMA, which allows you to transfer to disk at 20Gbps. Although iSCSI was originally pushed to the back burner for performance reasons, it's now being used for higher-performance applications," adds Poelker.
BUSINESS BRIEFS
CONTINUED FROM PAGE 6
• iStor Networks and Fujitsu Ltd. have teamed up to demonstrate performance of 1,046MBps and 85,786 IOPS in a 10GbE iSCSI configuration. The demo included Fujitsu's Primequest servers, iStor's GigaStor ATX target subsystems, 10Gbps Ethernet adapters from Neterion, and i316-A1 disk arrays from Asustek. Hitachi High-Technologies (HHT) also participated in the demonstration.
• Designed in collaboration with DataDirect Networks (DDN), IBM has announced the DCS9550 disk system for high-performance computing (HPC) environments. The DCS9550 scales up to 96TB with Fibre Channel disk drives or up to 160TB with SATA drives. The companies claim throughput of up to 3GBps on both read and write operations in full-duplex host transfers. The SAN array also features RAID 6 for protection against the simultaneous failure of two drives in the same redundancy group.
• Storewiz has closed a $9 million
round of funding with venture capital
firm Sequoia Capital.
• Exanet has secured $18 million in its recent round of financing. Exanet's current investors include Evergreen Venture Partners, Intel Capital, Microdent, Kodak, CSK Fund (Hitachi), Dr. Giora Yaron, and others. The latest round of funding was led by Coral Capital Management and included QVT Fund LP as well as existing investors.
• Emulex's HBAnyware is now available for use in Sun Microsystems' SAN Foundation Software, which Sun developed for the Solaris OS.
• ExaGrid Systems has expanded
the availability of the ExaGrid Disk-
based Backup System through several
new distributor and VAR relationships.
Promark Technology, Synegi, USI
Corp., and Voyant Strategies are
among the latest companies to join
the ExaGrid Reseller Partner
Program.
• Brocade announced the availabil-
ity of Brocade Access Gateway for
IBM BladeCenter solutions.
Power FROM COVER
Some vendors are doing just that. Co-
pan Systems, for example, bases all of its
virtual tape library (VTL) and archiving
systems on a massive array of idle disks
(MAID) architecture. MAID technology
operates on the basic premise that not all
disks need to be spinning all of the time.
Only disks containing data being request-
ed by applications need to be powered on,
and they are turned off when not in use.
In the case of Copan, only a maximum
of 25% of the drives in a system are pow-
ered on at any one time. This approach
may be a fit for storing long-term, infre-
quently accessed data, but is less practical
for primary storage.
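A back-of-the-envelope estimate shows why the approach appeals for archives; the drive count and wattage below are illustrative assumptions, not Copan specifications, and controller and cooling overhead are ignored:

# Back-of-the-envelope MAID savings estimate. The drive count and
# per-drive wattage are illustrative assumptions, not Copan specs.

drives = 896                  # hypothetical archive shelf
watts_per_spinning_drive = 10.0
max_spinning_fraction = 0.25  # Copan powers on at most 25% at once

always_on = drives * watts_per_spinning_drive
maid = drives * max_spinning_fraction * watts_per_spinning_drive
print(f"all spinning: {always_on / 1000:.1f} kW")    # 9.0 kW
print(f"MAID (25%):   {maid / 1000:.1f} kW")         # 2.2 kW
print(f"saving:       {1 - maid / always_on:.0%}")   # 75%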
Another approach is more-efficient ca-
pacity provisioning. So-called thin-provi-
sioning technologies have been around
for several years and, according to Webster, can improve disk utilization rates from the typical 30% to 40% up to 60% or greater.
Thin provisioning lets users allocate
just enough storage to applications, there-
by reducing overall capacity requirements
and associated power and cooling costs.
Vendors such as 3PAR Data, Compellent,
EqualLogic, LeftHand Networks, Net-
work Appliance and, most recently, Hi-
tachi with its new USP V high-end array
all offer thin provisioning in some form.
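The arithmetic behind those utilization numbers is simple: to hold the same amount of live data, raw capacity (and the power that spins it) scales inversely with utilization. A quick sketch, with an assumed data set size and power density:

# How higher utilization shrinks the installed base: raw capacity
# scales as 1/utilization for the same live data. The 10TB data set
# and per-TB wattage are illustrative assumptions.

live_data_tb = 10.0
watts_per_raw_tb = 12.0      # hypothetical array power density

for utilization in (0.30, 0.40, 0.60):
    raw_tb = live_data_tb / utilization
    watts = raw_tb * watts_per_raw_tb
    print(f"{utilization:.0%} utilized -> {raw_tb:5.1f}TB raw, "
          f"{watts:5.0f}W")
# 30% utilized -> 33.3TB raw,   400W
# 40% utilized -> 25.0TB raw,   300W
# 60% utilized -> 16.7TB raw,   200W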
There are still more data-management
techniques that can be applied to reduce
overall capacity, such as data compression,
de-duplication, tiering, and archiving,
which all add up to energy savings.
Eventually, however, users will need to
buy more storage no matter how much soft-
ware they throw at the capacity problem.
"A lot of hardware vendors are finally waking up to the reality that they have to tackle the power-consumption problem, but it looks like some of the promises they are making are still a ways out," says Seth Sladek, a senior systems engineer at Cambridge Health Alliance.
Sladek manages storage on a daily ba-
sis and is constantly looking for ways to
streamline his operation.
"We're looking at archiving technologies to get data that isn't being accessed regularly off of spinning disks," says Sladek. "I'd like to see more energy-efficient drives. Today's drives are certainly more efficient than the 500GB drives of old that were two feet in diameter, but I think drive makers have just scratched the surface in that respect," he says.
But Sladek says he is not completely willing to sacrifice performance for a lower electric bill. "I'm wary of the performance trade-off. Hopefully, drives will continue to improve in performance, but at the same time become more energy-efficient. It's a double-edged sword," he adds.
Drive manufacturers are conscious
of their role in the power consumption
conundrum.
"The amount of power consumption attributed to drives is relatively insignificant in small numbers, but when you move into the data center and take an average of 8 watts per drive and multiply it by hundreds of thousands, you're talking megawatts," says Willis Whittington, senior product marketing manager for Seagate's Enterprise Compute Business.
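Whittington's multiplication is easy to make concrete (the drive counts here are illustrative):

# Modest per-drive draw becomes megawatts at data-center scale.

watts_per_drive = 8
for drive_count in (1_000, 100_000, 500_000):
    total_watts = watts_per_drive * drive_count
    print(f"{drive_count:>7,} drives -> {total_watts / 1e6:.1f} MW")
# 1,000 drives -> 0.0 MW
# 100,000 drives -> 0.8 MW
# 500,000 drives -> 4.0 MW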
However, Whittington says there is a
delicate balance between saving power
at the drive level and providing the per-
formance and capacity points users have
come to expect.
"We can save power, but it's at the expense of something else, and that something is usually performance, whether it be seek times, latency, or throughput," says Whittington. "We could say to customers, 'We can save you 20% on your electric bill if you let us take 10% off the performance of the drives.' But users want more performance."
Seagate has begun its own work on the power problem with the announcement of what the company calls PowerTrim Energy Efficiency, which is a set of features that together reduce the overall power consumption of its hard drives (see "Seagate unveils power-conscious 10K drive," below).
Whittington claims PowerTrim helps
energy-constrained data centers maxi-
mize efficiency with power consumption
rated as low as 8 watts. The result is a
drive that delivers a 34% reduction of
power in idle mode, as well as a 33% re-
duction in operating power.
The Cheetah NS, the first Seagate drive to use PowerTrim, is a 10,000rpm hard drive based on the same platform as the speedier 15,000rpm Cheetah 15K.5.
The Cheetah NS offers 400GB of capac-
ity with lower power and cooling require-
ments than the 15K.5. The trade-off, of
course, is a performance hit.
"There is no low-hanging fruit available when it comes to saving on energy costs, but there are a lot of little things that can be done. It has to be a holistic approach," says Whittington.
Whittington notes that more-efficient
power supplies and tighter integration be-
tween system workloads and drives could
also yield power savings. "If we had better cooperation between the system and the drive and got power-supply efficiency up over 80%, we could save more power," he says.
Data growth ultimately translates into
the need for more power. It's the unstoppable force versus the immovable object.
Something has to give.
Steve Duplessie, founder and senior
analyst at the Enterprise Strategy Group,
says that at least for now a hodgepodge
of space-saving technologies seems to be
the bestif not the onlyapproach to-
ward stemming the tide of power con-
sumption. However, he also expects ven-
dors and users alike will begin to reassess
how they build and implement storage
infrastructures.
"IT needs to wake up and begin treating process changes, such as information lifecycle management [ILM], as 'need to have' instead of 'nice to have,' because that's the only way they're going to get close to solving these issues in the short term," says Duplessie.
Storage vendors seem to be in catch-
up mode when it comes to addressing
power and cooling problems, slapping
the green label on existing technolo-
gies or pledging more eco-friendly prod-
ucts in the future, but most of the initial
progress is being made in other areas
of IT.
"Servers are what you mostly hear about now, but the data layer is really where the problems are," Duplessie says. "You'll see a ton of both marketing hype and real-world data over the next six months that expose the real issues around data-center power and cooling."
In an effort to develop some of that
real-world data, a non-profit consortium
called the Green Grid cropped up earlier
this year to come up with ways to trim the
data-center power drain.
The group, founded by AMD, APC,
Dell, Hewlett-Packard, IBM, Intel, Micro-
soft, Rackable Systems, SprayCool, Sun,
and VMware, is in the process of devel-
oping performance-per-watt metrics with
a fixed set of benchmarks and interoper-
ability standards for energy efficiency in
the data center.
Green Grid members have stated that
they will take a holistic approach to
addressing the entire computing eco-
system. Standards and metrics will be
developed and applied to all IT equip-
ment, including servers, networking gear,
and storage, as well as non-IT equipment
such as air conditioning units and over-
all facility design.
Several storage companies have joined
the ranks of the Green Grid since its
launch in February: Brocade, Cisco, Co-
pan, EMC, Netezza, QLogic, Quantum,
SGI, Storewiz, and Xyratex.
Seagate unveils power-conscious 10K drive
By Kevin Komiega
Seagate Technology recently announced
a new hard drive that strikes a balance be-
tween capacity and power consumption.
The 10,000rpm Cheetah NS drive is based on the same platform as the 15,000rpm Cheetah 15K.5, but includes Seagate's new PowerTrim technology for more-efficient power consumption.
The Cheetah NS also features up to 33%
more capacity at 400GB, along with a
33% reduction in power and cooling re-
quirements. This additional capacity and
reduced cooling profile in the data center
means that the Cheetah NS ultimately
delivers a lower total cost of ownership.
"We took the 15K Cheetah drive design and boosted the capacity to give us an extra 100GB without losing too much in the way of performance," says Willis Whittington, senior product marketing manager for Seagate's Enterprise Compute Business. "We haven't changed anything in the mechanics. The only thing we changed is the head design."
Power consumption for the Cheetah NS
drive is rated as low as 8 watts. The result
is a 34% reduction of power while idle, as
well as a 33% reduction in operating power
compared to other 10,000rpm drives.
Whittington says users looking for higher
IOPS-per-gigabyte transactional perfor-
mance are likely to opt for the Cheetah
15K.5, while the new Cheetah NS is de-
signed for users in search of a higher
capacity, lower-cost option.
The Cheetah NS has a seek time of 3.9ms
and is available with a choice of interfac-
es, including 3Gbps Serial Attached SCSI
(SAS) or 4Gbps Fibre Channel. The drive
is rated at a mean time between failure
(MTBF) of 1.4 million hours and has a five-
year warranty.
Seagate is now shipping the Cheetah NS to
OEM customers, and the drive is expected
to be available to the distribution channel
during the third quarter of 2007.
VENDORS MENTIONED
AMD, APC, Brocade, Cisco, Compellent,
Copan Systems, Dell, EMC, EqualLogic,
Hewlett-Packard, Hitachi, IBM, Intel,
LeftHand Networks, Microsoft, Netezza,
Network Appliance, QLogic, Quantum,
Rackable Systems, Seagate Technology,
SGI, SprayCool, Storewiz, Sun, 3PAR Data,
VMware, Xyratex
EMC FROM PAGE 8
"Users are looking for more and more capacity. ESG research confirms this, with 54% of respondents to a recent VTL survey citing scalability as a key purchasing criterion," says Biggar.
The sheer size of the 6000 series systems makes the DL6100 and DL6300 viable options for companies looking to consolidate multiple, smaller virtual libraries onto a single platform to simplify VTL management.
"VTL proliferation is a potential problem, particularly from a management perspective," says Biggar. "Consolidating onto a single larger platform does address the problem; however, it's not necessarily the only answer and it doesn't always make sense in all environments."
EMC also announced support for data de-
duplication in its VMware and NAS plat-
forms. The company debuted EMC Avamar
version 3.7 software, which supports
VMware Consolidated Backup (VCB) for
the protection and reduction of backup
times in virtual machine environments.
The Avamar backup-and-recovery soft-
ware uses data de-duplication technology
to eliminate the transmission of redun-
dant backup data over the network to sec-
ondary storage. With support for VCB,
VMware customers have a new way to
de-duplicate backup data stored in virtual
machines, in turn reducing the amount
of data backed up and minimizing the
impact on host servers.
Customers can now use de-duplication
capabilities with Celerra NAS systems via
NDMP backups. In addition, EMC's Back-
up Advisor software now supports Avamar
software, providing monitoring, analysis,
and troubleshooting, as well as diagnostics
to provide analysis of failed backup jobs.
Backup, recovery, archive
Rounding out EMC's recent backup an-
nouncements was the addition of several
new features for NetWorker, RecoverPoint,
and DiskXtender, as well as the debut of a
product for bare-metal recovery.
EMC's new HomeBase software offers
added server protection by automatically
capturing and storing point-in-time pro-
files of the server configuration required
for bare-metal recovery. HomeBase inte-
grates into the backup-and-recovery work-
flow and, at the time of recovery, applies
a source server's profile to the new target
server hardware, eliminating the need to
re-configure systems and applications in
the case of hardware failure or disaster.
The newest version of EMC's RecoverPoint continuous data protection (CDP) software supports Microsoft's Volume Shadow Copy Service (VSS). Recover-
Point is now also integrated with EMC
Replication Manager, which enables us-
ers to manage RecoverPoint-protected ap-
plications and other supported replication
technologies via a single console.
On the archiving front, the company's
DiskXtender for NAS archiving software
now provides expanded file migration in-
teroperability with support for file serv-
ers from NetApp and other vendors. The
software frees up space on primary storage,
improves application performance, and
reduces backup data sets while speeding
recovery.
[Figure: Worldwide market projections for tape libraries, 2006-2012: unit shipments (thousands) and revenue ($ millions) by format (helical scan, 8mm, half-inch cartridge, DLT/SDLT, LTO). Source: Freeman Reports]
SPECIAL REPORT
Tape market update: LTO's the bright spot
LTO libraries accounted for more than 88% of unit shipments last year, and the LTO-4 format promises to extend the technology's dominance.
BY DAVE SIMPSON
Tape market research firm Freeman Reports recently released its annual Tape Library Outlook report, a 188-page compendium that covers virtually all aspects of the tape market, and in terms of tape formats, LTO is the only growth market.
For the first time, overall revenues and unit ship-
ments of tape libraries declined in 2006. Total reve-
nue was down 15.6% (to $1.81 billion)
relative to 2005, and unit shipments de-
clined 4.5% (to 57,668 units). That fol-
lows two consecutive years of revenue
growth: 10.4% in 2005 and 13.5% in
2004.
However, that doesn't mean users are buying less tape capacity. In fact, the report notes that users purchased more than 50% more tape capacity in 2006 versus 2005. The decline in revenues and unit shipments is in part due to users' migration to lower-cost, higher-capacity libraries.
Freeman Reports expects tape library
revenues to continue to slip slightly this
year, although at a slower pace. The re-
search firm predicts revenues will slide
from $1.81 billion in 2006 to $1.77 bil-
lion this year (although the firm pre-
dicts a rebound leading to $2.15 billion
in revenues by 2012, representing a compound annual
growth rate of 2.9%).
Meanwhile, overall unit shipments are expected to
increase this year, from 57,668 units in 2006 to 60,438
units in 2007. Freeman Reports predicts a rebound
in tape library shipments through 2012, with a com-
pound annual growth rate of 5.8%.
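Those growth rates check out against the report's own baselines; the 2012 shipment figure below is implied by the stated 5.8% CAGR rather than quoted directly:

# Sanity-checking Freeman Reports' compound annual growth rates
# against the 2006 baselines quoted in the text.

def cagr(begin, end, years):
    """Compound annual growth rate between two values."""
    return (end / begin) ** (1 / years) - 1

print(f"revenue CAGR, 2006-2012: {cagr(1.81, 2.15, 6):.1%}")  # ~2.9%
# The 5.8% shipment CAGR implies roughly this 2012 unit volume:
print(f"2012 shipments: {57_668 * 1.058 ** 6:,.0f} units")    # ~80,881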
Continued interest in tape is being fueled by a number of factors, including tape's write-once, read-many (WORM) capability (which is increasingly important for records retention and compliance applications), drive-level encryption (as opposed to appliance- or software-based encryption), a shift toward tiered storage strategies such as information lifecycle management (ILM), and tape's cost-per-terabyte advantages over disk.
LTO dominates
The one bright spot in the unit shipments and reve-
nue picture was LTO tape libraries. LTO libraries ac-
counted for 88.4% of all library shipments and 58.1%
of revenue last year, up from 81% of total unit ship-
ments and 52% of revenues in 2005.
Unit shipments of LTO libraries were
almost 51,000 last year, representing
revenue of $1.053 billion.
Freeman Reports expects LTO to
continue its dominance, predicting
LTO libraries will account for more
than 96% of unit shipments, and 64%
of revenues, by 2012.
LTO tape drive manufacturers in-
clude Hewlett-Packard, IBM, Quan-
tum, and Tandberg. LTO library/auto-
loader manufacturers include Fujitsu,
HP, IBM, NEC, Overland Storage,
Qualstar, Quantum, Spectra Logic,
Sun/StorageTek, and Tandberg. (An
autoloader, also referred to as a mini library or autochanger, has only one
tape drive, as opposed to a library that
can have multiple tape drives.) LTO
media manufacturers include vendors
[Figure: Tape library technology shifts, unit shipments by format. 2006: LTO 88.4%, DLT/SDLT 6.1%, half-inch cartridge 2.7%, 8mm 2.6%, helical scan 0.4%. 2012 (projected): LTO 96.3%, half-inch cartridge 2.1%, 8mm 1.2%, helical scan 0.2%. Source: Freeman Reports]
[Figure: Key characteristics of tape drives, plotting native data-transfer rate (MBps) against native capacity (GB) for Travan, VXA-2, VXA-3, AIT-3, AIT-4, SAIT-1, SAIT-2 (07), SDLT 320, SDLT 600, DLT-S4, LTO-2, LTO-3, LTO-4, STK 9840C, STK 9940B, STK 10000, IBM 3592-J1A, and IBM TS1120. Source: Freeman Reports]
such as Fuji, Imation, Maxell, Sony, and
TDK.
The growth in the LTO space came
at the expense of all other tape formats,
most notably the DLT/SDLT and 8mm
formats.
LTO libraries are even encroaching
on the high-end half-inch cartridge sec-
tor. Half-inch cartridge libraries account-
ed for 2.7% of unit shipments and 37%
of revenues last year and are expected to
account for 2.1% of unit shipments and
35% of revenue in 2012.
DLT libraries took a big hit last year,
declining 46% in terms of unit shipments
(from 6,481 units to 3,521 units). DLT li-
braries represented 6.1% of library ship-
ments in 2006 (down from 10.7% the
previous year) and 4% of library revenue
(down from 4.6% in 2005).
Freeman Reports predicts that the DLT/SDLT share of the market will approach 0% in 2012, leading some observers to question Quantum's plans for future generations of the technology as the company increasingly emphasizes its LTO program. Quantum's market share (26.7%) of the LTO library market is second only to IBM's (29%), followed by Sun/StorageTek with a 26.2% share of the 2006 market in terms of revenues, according to Freeman Reports.
Shipments of libraries based on
the 8mm tape format (including
AIT) declined 40% in 2006. Ac-
cording to Freeman Reports, lack
of backward compatibility hin-
dered end-user adoption of AIT-
4 products, although AIT manu-
facturer Sony rebounded near the
end of last year with the introduc-
tion of the AIT-5 format, which
has a native capacity of 400GB
per cartridge (1.04TB compressed)
and a transfer rate of 24MBps. In
addition to Sony, vendors such as
Qualstar and Spectra Logic sell li-
braries based on AIT technology.
Given these market dynamics,
it's no surprise that there's been
some consolidation in the tape
library market. Last year, for ex-
ample, Quantum completed its
acquisition of ADIC, Tandberg
acquired Exabyte, and Grau dis-
continued manufacturing of tape
libraries.
Speeds and feeds
LTO-4 features a native capacity of
800GB per cartridge (1.6TB, assuming
2:1 compression), which is twice the
capacity of LTO-3 cartridges. LTO-4s na-
tive transfer rate is 120MBps (240MBps
in compression mode), vs. 80MBps for
LTO-3, which translates into a backup
rate of approximately 864GB per hour.
LTO-4 is backward read/write-com-
patible with LTO-3 tape cartridges
and is backward read-compatible with
LTO-2 cartridges.
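Note that the often-quoted 864GB-per-hour figure assumes 2:1 compressible data; the arithmetic is straightforward:

# LTO-4 backup-rate arithmetic: the 864GB/hour figure assumes 2:1
# compressible data; purely native throughput is half that.

native_mbps = 120        # LTO-4 native transfer rate
compressed_mbps = 240    # with 2:1 compression
for label, mbps in (("native", native_mbps),
                    ("2:1 compressed", compressed_mbps)):
    gb_per_hour = mbps * 3600 / 1000
    print(f"{label}: {gb_per_hour:.0f} GB/hour")
# native: 432 GB/hour
# 2:1 compressed: 864 GB/hour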
The LTO-4 specification doubles
the capacity of the LTO-CM (Car-
tridge Memory) to 8KB. The CM
memory chip uses an RF interface,
which allows remote reading of its
stored information, such as calibra-
tion information, manufacturer's
data, and initialization parameters.
The CM can store user-supplied in-
formation, such as the age of the tape
cartridge, how many loads have oc-
curred, how many temporary errors
have accumulated, etc.
As does LTO-3, LTO-4 supports
WORM functionality. (Other tape
formats that support WORM in-
clude various half-inch tape formats
from vendors such as IBM and Sun/
StorageTek, Quantum's DLT/SDLT, and Sony's AIT.) Unlike LTO-3, LTO-4 sup-
ports drive-level, 256-bit AES (Advanced
Encryption Standard) encryption.
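LTO-4 drives implement this in silicon; for readers who want to see what the same primitive looks like in software, here is a minimal sketch of AES-256 authenticated encryption (GCM mode) using the third-party Python cryptography package. It is illustrative only; a tape drive's key handling is far more involved than this.

# What a drive does in hardware, sketched in software: 256-bit AES
# authenticated encryption. Requires the third-party "cryptography"
# package; key handling here is deliberately naive.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the key a manager would escrow
nonce = os.urandom(12)                     # must be unique per record

block = b"backup record destined for tape"
ciphertext = AESGCM(key).encrypt(nonce, block, None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == block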
Drive-level encryption (which is also available in various half-inch tape drives) is hardware-based and is faster than software-based encryption and less expensive than appliance-based encryption. (For more information on encryption, see "Options abound for tape, disk encryption," p. 30.)
Spectra Logic (uniquely among tape
library manufacturers) offers hardware-
based encryption in its libraries, but the
company will support drive-level en-
cryption in its libraries configured with
LTO-4 drives.
It should be noted that the LTO-4 spec-
ification does not require encryption, and
it does not dictate a specific method of
implementing it. However, the standard
does include a media interchange speci-
fication so that users can interchange en-
crypted tapes between drives from differ-
ent LTO-4 drive vendors.
Recent product intros
Among the larger LTO vendors, IBM was early out of the gates in May with shipments of LTO-4 drives and libraries. Big Blue's LTO-4 drives use the same encryption technology (and key management functionality) that the company has been using in its higher-end TS1120 half-inch tape drives.
IBM has five products in its LTO-4
lineup:
• The TS2340 LTO-4 tape drives are
priced at $5,170 with an LVD
SCSI interface, or $5,681 for ver-
sions with a Serial Attached SCSI
(SAS) interface;
• The TS3100 tape library has one
LTO-4 drive and a choice of LVD
SCSI, 4Gbps Fibre Channel, or
3Gbps SAS interfaces. Pricing starts
at $5,770;
• The TS3200 library has one or two
LTO-4 drives and any of the three
interfaces (SCSI, Fibre Channel, or
SAS). Pricing starts at $5,770 for a
single-drive configuration;
• The TS3310 library has up to
316.8TB of capacity (30 to 396 car-
tridge slots) and up to 18 drives.
Pricing starts at $16,530; and
• The high-end TS3500 library has a
multi-path architecture and scales
up to 16 frames, 192 tape drives, and
more than 6,000 cartridges for up
to 10PB of capacity. Starting price is
$22,800. (For more information, see "IBM ships LTO-4, broadens encryption," InfoStor, June 2007, p. 1.)
Also in May, Tandberg Data began
shipments of its model 1640 LTO-4 drives.
As are IBM's, Tandberg's LTO-4 drives
are available with SCSI, SAS, or Fibre
Channel interfaces. The drives include
128-bit encryption and are priced at ap-
proximately $4,499, including a copy of
Symantec's Backup Exec QuickStart
Edition software.
This month, Tandberg began shipping
two LTO-4 libraries specifically for Mac
OS X platforms: the Magnum 224 and
448. The 2U Magnum 224 has one Fibre
Channel interface, 24 tape car-
tridge slots, and up to 19.2TB of
native capacity. The 4U Mag-
num 448 has one or two LTO-
4 drives, and 48 tape cartridge
slots for a capacity of up to
76.8TB. Single-drive configura-
tions of the Magnum 224 and
448 are priced at $8,500 and
$12,500, respectively.
Qualstar began shipping tape
libraries equipped with LTO-4
drives and media in April. LTO-
4 drives are available in Qualstar's TLS-8000, RLS-8000, and XLS Enterprise series of libraries. (Qualstar's existing LTO-1, LTO-2, and LTO-3 libraries
can be field-upgraded to LTO-4.) Capaci-
ties range from 8.8TB to 10PB with more
than 6,000 tape cartridges in the XLS
Enterprise library.
Last month, Overland Storage an-
nounced that it will incorporate Hewlett-
Packard's LTO-4 drives in its NEO and
ARCvault series of tape libraries. The
company also announced that the LTO-
4 drives will be integrated into the HP
StorageWorks MSL6000 libraries man-
ufactured by Overland under an OEM
agreement with HP.
Overland's LTO-4-based NEO libraries
range from 24TB (uncompressed) on the
NEO2000 to 400TB on a 500-slot version
of the NEO8000 library. Native trans-
fer rates range from 864GB per hour to
5.2TB per hour. Pricing starts at $16,615.
Overland was expected to begin ship-
ments of LTO-4-based ARCvault libraries
this month, with capacities ranging from
9.6TB on the ARCvault 12 to 38.4TB on
the ARCvault 48. Native transfer rates
range from 432GB per hour to 864GB
per hour.
Despite advances such as LTO-4, tape
is under constant pressure from disk-based
backup. And tape vendors have respond-
ed by decreasing prices to maintain tape's
cost-per-megabyte advantages. For exam-
ple, at the OEM level, pricing for LTO
drives has declined from about 2.8 cents
per MB in 2001 to less than 0.4 cents per
MB this year, according to Freeman Re-
ports. And compression essentially cuts
the cost-per-megabyte in half.
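Freeman Reports' price points imply a steep annual decline, which a couple of lines of arithmetic make explicit:

# The implied annual price decline for OEM LTO drives, from
# Freeman Reports' 2001 and 2007 cost-per-megabyte figures.

def annual_decline(begin, end, years):
    """Average yearly percentage drop between two price points."""
    return 1 - (end / begin) ** (1 / years)

print(f"2001-2007 decline: {annual_decline(2.8, 0.4, 6):.0%}/year")   # ~28%
print(f"effective cost with 2:1 compression: {0.4 / 2:.1f} cents/MB")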
Another factor driving down the cost-
per-megabyte of tape libraries is the ex-
treme scalability of today's libraries. Sun/StorageTek's SL8500 library, for example,
can scale to more than 300,000 tape
cartridges for a capacity of about 120PB
using LTO-3 cartridges.
Although some disk subsystem vendors
pushing their systems as alternatives to
tape claim a lower cost-per-megabyte,
tape is still less expensive than disk ar-
rays. However, the total cost of ownership
of a tape library includes the initial hard-
ware and software costs, the cost of data
conversion from previously used formats
(if applicable), initial and recurring costs
of media, costs to transport and store re-
corded media, software and hardware
upgrades, as well as service and mainte-
nance costs.
VENDORS MENTIONED
Fuji, Fujitsu, Hewlett-Packard, IBM,
Imation, Maxell, NEC, Overland Storage,
Qualstar, Quantum, Sony, Spectra Logic,
Sun/StorageTek, Symantec, Tandberg, TDK
[Figure: Data encryption implementation plans (Fall 2005, Spring 2006, Fall 2006): percentage of respondents with encryption in use now, in pilot/evaluation, in near-term plan, in long-term plan, or not in plan. Source: TheInfoPro]
SPECIAL REPORT
Options abound for tape, disk encryption
Choices include software-based encryption, switch-based encryption, drive- or library-based encryption, and dedicated appliances.
BY MICHELE HOPE
Protecting personal and private customer or
employee data has become something of a
national crusade these days. Unfortunately,
so has the need to publicly expose any company that
fails to adequately protect the sensitive data under its
care. Missing backup tapes are often the culprit, as are
laptops whose mobility can also mean sensitive data
ending up in the wrong hands.
Increasingly, companies have begun to turn to a
growing line of encryption options to help them avoid
the twin pains they might otherwise face: potential
fines for non-compliance with various state or federal
privacy regulations, and the loss of brand identity and
customer loyalty that could result from costly public
disclosure of a security breach.
In legal circles, an increasingly popular way to
combat such risks is to encrypt sensitive data wher-
ever you (or legislators) have determined there is
a good chance of unwanted exposure. Stopping a
few steps short of the Holy Grail of storage security,
encryption of data-at-rest is nonethe-
less gaining momentum as a legal safe
harbor that can simultaneously help
prove a company's good-faith efforts to comply with privacy rules while also significantly reducing the company's risk
of a data breach.
Encryption's expanding reach
Previously the bastion of highly regulat-
ed financial, government, and healthcare
industries, the need to encrypt data has now spread to
other enterprises whose daily business consists of han-
dling customer credit card numbers, employee social
security numbers, salary information, highly sensitive
intellectual property, etc.
Not surprisingly, the two areas where data encryp-
tion solutions are seeing the most activity of late in-
clude protection of data-at-rest on backup tapes and
efforts to protect data stored on laptops, according
to Jon Oltsik, senior analyst at the Enterprise Strat-
egy Group (ESG). While some vendors also offer encryption of data-at-rest for both backup tapes and disk arrays, Oltsik doesn't see array-based encryption attracting many users. "For every one company we find doing disk array-based encryption, we see 10 doing tape encryption," he says.
In his assessment of the current growth areas for
data-at-rest encryption, Oltsik is not alone. Rich
Mogull, a research vice president of information secu-
rity and risk at the Gartner Group IT consulting firm,
also cites the same two market segments (encryption of backup tapes and laptops) as those currently get-
ting the most traction among end users.
Focusing on the enterprise storage side of encryp-
tion, the remainder of this article explores how today's
IT organizations have decided to approach the many
backup-related encryption solutions now available.
Software-based encryption
Solutions for encrypting backup data can occur at sev-
eral points in the backup infrastructure. One of the
oldest methods is software-based encryption, which
is available in most backup applications.
However, analysts note that this approach has a
number of drawbacks, at least for enterprises with
large-scale backup and encryption requirements. For
example, the amount of client or server CPU cycles
needed to conduct software-based encryption could
lead to backup performance penalties, not to mention
tying up the clients or servers needed to run encryp-
tion. The performance drag from soft-
ware-based encryption can be around
20% on the CPU, says Steve Norall,
an analyst with the Taneja Group re-
search and consulting firm.
Yet that hasn't been the experi-
ence of Kevin Donnellan, assistant
CIO at the Screen Actors Guild-
Producers Pension and Health Plans
(SAGPH). A long-time Veritas Net-
Backup shop, SAGPH had investi-
[Figure: Storage security functionality importance. Q: How important are the following types of storage security functionality? (asked only of users with tape encryption, data encryption, and/or data privacy in plan but not in use; responses ranged from extremely to not at all). Functionality surveyed: access control; auditing and logging for regulatory compliance; tape-level encryption; key management; secure database access and usage; discovery of current access control settings in network/application; host data access protection; permanent deletion; data privacy management (finding confidential material); identity management; file data encryption (e-mail, Word, PowerPoint, Excel); fabric transit encryption; block data encryption (database, ERP); disk-level encryption. Source: TheInfoPro]
[Figure: Top storage security pain points. Q: What is the top pain point associated with securing storage? Leading answers: assessing performance impact; complexity of managing data encryption; workflow for access. Fewer than 10% of respondents cited key management, authentication, off-premise data access, integration, or data classification. Source: TheInfoPro]
gated encrypting its DLT backup tapes
to help protect the sensitive salary and
health information of both its famous
and not-so-famous Hollywood members.
Needing to back up close to 100 serv-
ers nightly within an already-tight, sev-
en-hour backup window, Donnellan ad-
mits he wasnt too keen on the idea of
installing NetBackups client-based en-
cryption option on each of his systems
or adding much extra time to complete
his backups.
That sentiment began to change when
he heard that Symantec was working on a
Media Server Encryption Option (MSEO)
for NetBackup that placed encryption pro-
cessing on a centralized server, as opposed
to the client. After beta testing the impact
of NetBackup MSEO on current backup
jobs, Donnellan was happy to see encryp-
tion only add what he estimates was between 5% and 8% extra overhead to the company's current backup window, as opposed to the 40% to 45% he'd seen when testing NetBackup's client-based encryption option.
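Against a seven-hour window, the difference between those two overheads is the difference between fitting inside the window and blowing well past it; a quick calculation, taking the top of each measured range:

# The measured encryption overheads applied to SAGPH's seven-hour
# nightly backup window (overhead percentages from Donnellan's
# testing; the window length is quoted in the article).

window_hours = 7.0
for label, overhead in (("client-based encryption", 0.45),
                        ("media-server MSEO", 0.08)):
    added = window_hours * overhead
    print(f"{label}: +{added:.1f} hours -> "
          f"{window_hours + added:.1f}-hour window")
# client-based encryption: +3.2 hours -> 10.2-hour window
# media-server MSEO: +0.6 hours -> 7.6-hour window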
Encryption keys are replicated to a second media server at the company's disaster-recovery site to help ease the pain of restoring previously encrypted tapes. "If we ever need to do a restore, we have the key in both of those servers. So you basically mount the tape, and NetBackup will find the key it needs to decrypt it."
Although ESG's Oltsik says he seldom sees software-based encryption of backup tapes as a solution for enterprise installations, he notes a few places this type of solution can still apply. "One exception is architectural changes with disk-to-disk backup, which then may get dumped to tape. In that area, we're starting to see more software-based encryption," he says, since backup windows tend to become less of an issue.
One customer fitting this profile is Nau-
gatuck Savings Bank, an East Coast com-
munity bank with sur-
rounding area branch
offices. According
to network manag-
er Craig Wallace, the
bank had been strug-
gling for some time
with the growing in-
efficiencies of manag-
ing backups to tape
on a wide range of its
servers at its main cor-
porate site. With no
networked tape drive
or library, backups had become a time-
consuming manual process, while still
leaving open the prospect that the tapes
could be compromised somewhere during
their frequent trips to off-site storage.
Wanting to avoid making the head-
lines for data breaches like some of its
larger bank counterparts, Naugatuck
decided to convert its infrastructure to
a disk-to-disk backup paradigm. Wal-
lace reasoned this would simultaneously
make backups more efficient and replace
the companys riskier tape process with
what he thought would be a more secure
disk-based solution.
Wallace turned to EVault's InfoStage,
an application-level software solution
with the ability to encrypt backup da-
ta from end to end in the backup cycle.
With InfoStage, users have the option to
encrypt either when they are setting up
a backup job or when it is transported
over the wire to an InfoStage server-based
vault via 128-bit encryption.
"InfoStage allowed for encryption in all phases, while doing the transport and during the backup process. Once on the backup medium, it was already encrypted. Then, if you were to archive the data off to a secondary storage system, it's encrypted there also," says Naugatuck Savings' Wallace, who claims he hasn't noticed any major overhead with backups using software-level encryption. Since EVault's system backs up only block-level changes and new files with its DeltaPro technology, Wallace didn't experience any challenges with backup windows or extra encryption processing overhead.
Other ways to encrypt
Another option for encrypting data on
tape (and disk) is to use purpose-built en-
cryption appliances from vendors such
as NeoScale and Network Appliance's Decru division. These inline appliances
can encrypt backup data at wire speed
and typically reside between the backup
server and the tape media.
Yet another option is hardware-based
encryption at the tape library lev-
el, which is offered by Spectra Logic
(see "R.C. Willey opts for library-based encryption," below).
A more recent alternative is tape drive-
level encryption, which is available on
certain half-inch tape drives from ven-
dors such as IBM and Sun/StorageTek,
as well as the more recently introduced
LTO-4 tape drives.
(For more information on LTO-4, see "Tape market update: LTO's the bright spot," p. 24.)
All of the LTO-4 tape drive manufacturers (including Hewlett-Packard, IBM, Quantum, and Tandberg) offer drive-
level encryption, as do (or will) LTO
library manufacturers such as Fujitsu,
Hewlett-Packard, IBM, NEC, Overland
Storage, Qualstar, Quantum, Spectra
Logic, Sun/StorageTek, and Tandberg.
Tape drive-level encryption, which is
implemented in hardware, is relatively
inexpensive, although it may require a
media upgrade, and does not incur the
performance penalties of software-based
encryption.
R.C. Willey opts for library-based encryption
After evaluating a number of encryption op-
tions, R.C. Willey Home Furnishings decid-
ed to go with tape library-based encryption.
Facing rapid business growth, IS director Ned Jones decided it was time to institute more enterprise-level practices to back up and protect the company's data. Accustomed to hand-carrying backup tapes to the company's small off-site location several miles away, Jones began to map out a plan for a new off-site disaster-recovery center at one of the company's facilities in another state. As part of the plan, Jones knew his hand-carrying of backup tapes would have to be replaced by less trustworthy modes of transit (mail or trucks) where he'd need to encrypt the tapes to avoid a potential data breach.
Jones ended up selecting a Spectra Logic
T950 tape library and two T120 libraries
with all encryption keys managed by Spectra
Logic's BlueScale Encryption software. The
libraries include hardware-based encryption.
Although the libraries currently use S-AIT
drives and tapes, Jones looks forward to
switching to LTO-4 drives in the coming
months for better capacity. Spectra Logic
will continue to support library-level encryp-
tion in previous LTO generations, but will
leverage drive-level encryption with LTO-4
drives. While Jones may benefit from the
added performance that drive-level encryp-
tion provides, he notes that the hardware-
based, library-level encryption solution he
now uses has met his top two concerns:
cost and minimizing the impact that en-
cryption has on existing backup jobs.
Jones compared the cost of encrypting
with the Spectra Logic library against stand-
alone encryption appliances. In the end,
he felt the standalone appliances were too
pricey, estimating they were as much as 2x
or 3x the price of the Spectra Logic solu-
tion. The impact of encryption on backup
jobs so far has also not been as much as
he had expected.
[Figure: Types of storage security solutions, percentage deployed today versus expected to be deployed in the next six months, for storage security appliances, column-level database encryption, file system-level encryption, backup applications using software-based encryption, encryption at the tape library or drive level, encryption at the disk-array level, and custom applications using software encryption. Source: Taneja Group]
Bruce Master, senior program manager for tape marketing at IBM, claims that encryption performed by an LTO-4 tape drive has a performance hit of less than 1%.
Encryption is also available from
switch vendors such as CipherMax and
Cisco. These vendors claim that encryp-
tion from within the storage fabric scales
better than growing clusters of applianc-
es. Switch vendors also argue that fab-
ric-based encryp-
tion may be an
option for com-
panies that cant
afford to swap out
their current in-
vestment in tape
systems with the
latest encryption-
enabled drives or
libraries.
CipherMax has
been delivering
encryption so-
lutions for some
time, and Cisco recently entered the fab-
ric-based encryption space when it an-
nounced in May its Storage Media En-
cryption (SME) option. At the same time,
Cisco announced a strategic alliance with
EMC's security division, RSA, to offer joint
customers the choice to manage their en-
cryption keys via RSA Key Manager or
from within Cisco's own fabric manage-
ment tool set.
Which way to go?
All of the various encryption options
have pros and cons, and advice from
analysts is mixed. Gartner's Mogull, for example, is quick to note that most encryption options are relatively new. Until the market matures and different approaches shake themselves out, he recommends ignoring the hype and says users should stick with the stuff that's already being deployed.
ESG's Oltsik tends to lean toward the use of high-speed encryption appliances, especially if you're encrypting backups under a very tight backup window. However, he notes that if you have a tight backup window and are switching out tape drives or libraries, then [you might want to] consider drive-level encryption. If you're doing disk-to-disk backup and don't need a lot of performance, you might want to consider software-based encryption.
The direction you choose for encryp-
tion of data-at-rest depends not only on
where you choose to perform the encryp-
tion process, but also on how and where
you plan to decrypt the data that has al-
ready been encrypted.
"Encryption is actually pretty easy to do," says Michele Borovac, director of marketing at Decru, "but decryption poses more of a problem."
Focus on strategy
Beyond evaluating the pros and cons and
costs of the various encryption options,
the Taneja Group's Norall recommends focusing on each solution's (and vendor's) key management strategy and implementation details.
"From a storage security end-point perspective, we've found that most environments are very heterogeneous," says Norall, referencing results from a March 2007 storage security survey the firm conducted. "We're seeing backup software-level encryption, tape drive-level encryption, and dedicated appliances, sometimes all in use at the same enterprise."
Although this diversified approach
may stop the short-term bleeding for
specific encryption needs in an organiza-
tion, Norall believes it may also cost the
enterprise in the long run.
"Yes, you need to stop the bleeding, but by just stopping the bleeding you may incur management issues down the road," he says, especially in the area of managing the growing assortment of encryption keys from each disparate solution.
Some areas to focus on with regard to
key management include the process to
archive and back up encryption keys, how
the key system will operate in case of a
disaster, and how to perform secure key
exchange or key sharing with business
partners, when needed.
This is one reason why groups such
as the IEEE 1619.3 committee and the
Trusted Computing Group (TCG) are
developing more-unified storage secu-
rity standards surrounding key manage-
ment. Its also why a variety of vendors
have begun to form strategic relation-
ships to further a future key manage-
ment world where data encryption keys
co-exist and are jointly managed with
other security keys on a more federated
scale.
Michele Hope is a freelance writ-
er covering enterprise storage and net-
working issues. She can be reached at
mhope@thestoragewriter.com.
VENDORS MENTIONED
CipherMax, Cisco, Decru, EMC, EVault,
Fujitsu, Hewlett-Packard, IBM, NEC,
NeoScale, Network Appliance, Overland
Storage, Qualstar, Quantum, Spectra Logic,
Sun/StorageTek, Symantec, Tandberg
The real state of SRM, part 2
Storage resource management (SRM) challenges include multi-vendor support, homegrown versus vendor tools, and the trend toward SRM suites.
By John Echaniz and Justin Schnauder
Part 1 in this series of articles (see June 2007, p. 31) focused on what SRM tools are good for, and where improvements are required.
In an effort to reduce costs in their storage envi-
ronments, most companies are deploying storage
arrays from multiple vendors. Competing vendors
often offer deep discounts on their products and
services to gain a foothold in a company dominated by
another vendor. This is especially true in the large en-
terprises, which can have hundreds of large storage ar-
rays and thousands of hosts. The mix of multiple array,
switch, and operating system vendors has driven the
requirement for storage resource management (SRM)
vendors to invest heavily in development efforts to
support third-party products at an expanded level.
Different SRM products have approached multi-vendor support in different ways. EMC ControlCenter, for example, emphasized homogeneous discovery, concentrating on deep, functional support of EMC's products before expanding support for other vendors' hardware.
Conversely, Hitachi Data Systems' HiCommand
Storage Services Manager was originally developed
and marketed by AppIQ and was intended from the
outset to support heterogeneous storage environments.
AppIQ targeted the most common storage hardware
from multiple vendors, but not to the same depth the
EMC ControlCenter team pursued for its own prod-
ucts. The same can be said of Symantec's Command-
Central Storage suite.
When it comes to homogeneous discovery, EMC,
Hitachi, and IBM certainly provide greater detail
on utilization, reporting, replication functionality,
and performance of their own arrays. Their abili-
ties to deliver such detail on competitive gear vary
by vendor and array model.
When you are evaluating SRM products, you
should evaluate multi-vendor capabilities in a demo
or lab environment before you make a commitment
to a product.
Equally important is evaluating the functional limi-
tations associated with different array models within
a given vendor. Array product suites have been com-
bined and consolidated over time through acquisi-
tions. SRM products have extended support to these
acquired products in a similar manner to competitive
products. Functionality of acquired products is grad-
ually incorporated into the overall framework of the
SRM tool, but users should not assume the SRM soft-
ware will provide every function that is native to the
array simply because the products share a label.
The ability to perform third-party discovery of stor-
age devices is probably the single largest deficiency in
the SRM market, due to the wide variety of array ven-
dors and models. Because of the lack of cooperation
among storage vendors and the relative immaturity of
standards such as SMI-S, the ability for one vendor to
interface with a competitive vendor's storage device
still leaves much to be desired.
Users evaluating SRM tools will find array func-
tionality for heterogeneous discovery limited to gath-
ering component data such as vendor, model, cache
size, array firmware, capacity utilization, and some lim-
ited performance metrics. Provisioning support usually
includes basic LUN masking, but little in the way of
advanced operations. Real-time monitoring is usually
not available, but limited historical examination can
be performed from the available counters, depending
upon the device. Typically, performance statistics of
this type do not provide the level of detail required to
completely analyze an array. As an example, counters
such as cache utilization and hit ratio may not be gath-
ered, but overall reads and writes may be available.
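The arithmetic behind such reports is simple; what varies is which counters a given array actually exposes. The following minimal Python sketch, with hypothetical counter names, shows how an SRM report might derive capacity utilization and a read cache hit ratio when the underlying metrics are available:

    # Minimal sketch: deriving SRM report metrics from raw array counters.
    # Counter names are hypothetical; real arrays expose vendor-specific ones.
    def utilization(used_gb, raw_gb):
        """Capacity utilization as a percentage."""
        return 100.0 * used_gb / raw_gb

    def cache_hit_ratio(read_hits, read_misses):
        """Read cache hit ratio, or None if hit counters are not exposed."""
        total = read_hits + read_misses
        return (float(read_hits) / total) if total else None

    counters = {"used_gb": 7400, "raw_gb": 10240,
                "read_hits": 91000, "read_misses": 12000}
    print("Utilization: %.1f%%" % utilization(counters["used_gb"], counters["raw_gb"]))
    ratio = cache_hit_ratio(counters["read_hits"], counters["read_misses"])
    print("Cache hit ratio: %s" % ("n/a" if ratio is None else "%.2f" % ratio))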
It is important to remember that third-party ven-
dor support is usually reliant on an intermediate
system that actually does the communicating with
the third-party arrays. For example, management of
HDS arrays by other SRM vendors requires Hitachi's
HiCommand Device Manager to do the actual data
collection from the HDS arrays.
This condition introduces several limitations, in-
cluding the following:
• Disparate software application versions must be managed carefully. An array upgrade that might seem innocuous could disable a third-party collection mechanism and/or degrade the quality of its data; and
• Scaling of the solution has the added potential to increase consumption of server resources. With separate systems required to communicate with the arrays for each third-party hardware vendor in the environment, the overall server footprint of the SRM infrastructure can grow quickly.
Key questions to ask your SRM vendor
Can the SRM infrastructure components be
distributed to multiple systems?
Can the SRM servers be clustered for high
availability?
Are the product's database views exposed for SQL
client interfacing?
Is this product fully SMI-S-compliant?
How often are service packs created?
How do SRM upgrades impact the existing
environment?
How is the product licensed? By the amount of
storage managed? By managed object?
How are other vendors' products tested for
integration with the SRM product?
For heterogeneous deployments, how many
additional systems are required to support third-
party arrays?
How are agents deployed? How much is manual
versus automated?
How large (number of servers, arrays) is the largest
deployment of the product?
What features are on the product's road map for the
next 18 months?
Does the product support virtualization of serv-
ers, storage, and NAS? If so, how are virtual entities
handled?
How is disaster recovery approached for the SRM
infrastructure?
Until the SMI-S management standard
is mature and adopted by all storage ven-
dors, bloated SRM frameworks will be re-
quired for heterogeneous discovery and
end-to-end storage management.
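For readers curious what SMI-S discovery looks like in practice, here is a minimal sketch using the open-source pywbem library. The host, credentials, and namespace are placeholders, and the classes and properties a given provider exposes vary:

    # Minimal SMI-S discovery sketch using pywbem (an open-source WBEM client).
    # Host, credentials, and namespace are placeholders; supported classes
    # and properties vary by SMI-S provider.
    import pywbem

    conn = pywbem.WBEMConnection(
        "https://smis-provider.example.com:5989",
        ("admin", "password"),
        default_namespace="root/cimv2",
    )
    # Enumerate the logical volumes the provider exposes.
    for vol in conn.EnumerateInstances("CIM_StorageVolume"):
        size = (vol["BlockSize"] or 0) * (vol["NumberOfBlocks"] or 0)
        print(vol["ElementName"], size, "bytes")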
Lack of complete solutions
A significant number of enterprises (in-
cluding several large financial institu-
tions) have chosen to forego vendor-based
SRM altogether. These companies have
deployed homegrown toolsets in response
to what they viewed as an unappealing
SRM market, which, at the time of their
in-house tool development, was immature
and unable to satisfy their requirements.
In an effort to remain vendor-neutral or
to create a data-collection structure that
meets their specific needs, these com-
panies have invested significant dollars
to create these homegrown SRM tools
utilizing storage vendors' APIs. These
unique applications often include:
• Customized zoning policies and unique processing for NAS and SAN-attached storage configuration changes, featuring rigid provisioning change windows;
• Company-specific provisioning requirements for integration into disaster-recovery and business continuance models;
• Customized SAN-attached storage and NAS provisioning requirements; and
• Storage reporting and chargeback models aligned with very specific tier definitions.
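As a flavor of what such homegrown tools compute, here is a minimal Python sketch of tier-based chargeback; the tier rates and inventory records are invented for illustration:

    # Sketch of a homegrown chargeback report: allocated capacity is priced
    # by storage tier. Tier rates and inventory records are illustrative only.
    TIER_RATES = {"tier1": 15.00, "tier2": 8.00, "tier3": 3.50}  # $/GB/month

    inventory = [
        {"host": "dbprod01", "tier": "tier1", "gb": 2048},
        {"host": "dbprod01", "tier": "tier3", "gb": 8192},
        {"host": "mail02",   "tier": "tier2", "gb": 1024},
    ]

    def chargeback(records):
        bills = {}
        for rec in records:
            cost = rec["gb"] * TIER_RATES[rec["tier"]]
            bills[rec["host"]] = bills.get(rec["host"], 0.0) + cost
        return bills

    for host, cost in sorted(chargeback(inventory).items()):
        print("%-10s $%10.2f/month" % (host, cost))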
Faith in these custom tools and the
paradigms associated with their utiliza-
tion has limited the flexibility of these
companies to implement dynamic tools
and new technologies. Extremely long
lead times are required to integrate new
storage arrays, switches, and NAS devic-
es into these custom products. Therefore,
to continue utilizing supported devices,
some of these companies have deployed
overly complex designs that feature hard-
ware that does not provide the necessary
processing power, capacity, and/or den-
sity to facilitate achievable economies of
scale. Many of the policies and procedures that were integrated to simplify day-to-day storage operations and bring stability to these infrastructures have subsequently become obstacles to performance at the array and host level.
It is thus important to consider the
long-term ramifications of custom stor-
age software development when you are
thinking about SRM tool implementa-
tion. It is quite common to find a hap-
py medium between custom software to
handle specific automation needs, and
SRM deployments to handle the more
basic elements of storage and SAN
administration.
It is in this middle ground where the
best solutions lie: Fewer storage admin-
istrators are needed overall because of
SRM control, more junior employees are
able to conduct a wide variety of adminis-
trative duties through an SRM GUI, and
more senior administrators are able to fo-
cus on the oversight of the environment,
more-advanced automation, and design
for the future.
The rise of SRM suites
While many SRM products do provide
vast amounts of information and func-
tionality to end users, there is a level of
complexity and overhead associated with
maintaining them. Niche products that
perform some, but not all, of the tasks
associated with SRM are beginning to
make their way to the forefront. Tasks
such as LUN masking of disk arrays or
monitoring block-level replication be-
tween sites do not require a full-blown
SRM toolset, or suite. The niche products
provide the vendors with an opportunity
to deploy new functionality without hav-
ing to regression-test the changes within
an SRM toolset to determine what effect the new code will have on the product as a whole.

SRM's effect on staffing
Companies are always looking for ways
to reduce IT costs. SRM tools, with their
graphical user interfaces (GUIs), can aug-
ment or replace command-line storage
administration tasks, which may tempt
management to reduce the number of ex-
perienced administrators, significantly altering an IT department's salary structure.
After implementing an enterprise SRM tool, one major telecommunications company reduced its staff of high-end (and highly paid) storage administrators from eight to four, supplementing them with less-expensive contractors and employees.
But is this a wise choice? Although the mi-
gration to SRM tools may lead a company
to believe it can get by with less-expensive
and less-experienced storage administra-
tors, the trade-offs must be considered.
Any cost relief gained by relying on inexpe-
rienced administrators should be weighed
against the risk of outages that may result.
Experienced administrators understand
the links between storage technologies
and business requirements and have
the foresight necessary to manage large
amounts of data with differing protection
requirements. Novice administrators likely
will not have this level of understanding
and may lack the ability to analyze com-
plex infrastructures. For example, perfor-
mance analysis and troubleshooting are key
disciplines for storage administrators. Using
performance-tuning applications often re-
quires an advanced level of understanding
and experience to translate and diagnose
problems.
Also, storage administrators must be able
to identify situations where the SRM data
is not current. If zoning changes are made
directly at the switch level, for instance, the
SRM software's configuration data might
not be up to date. Therefore, applying
changes to old configurations will almost
certainly result in outages.
Using an SRM tool's GUI for array and
switch management potentially lessens
the risk of employing novice storage staff.
But risks still abound. SRM tools are slow to
take advantage of new functionality when
introduced by the respective vendors at
the array, switch, and host levels. If an or-
ganization wishes to use more-advanced
functionality earlier than it is supported by
SRM, then the organization must have sig-
nificant expertise on hand to manage it.
When considering SRM adoption, IT man-
agers must consider how advanced their
operations are and weigh that against the
benefits of SRM. As with most things, the
truth is somewhere in the middle: SRM is
generally worth deploying in large environ-
ments to consolidate and streamline man-
agement, but SRM will never replace sea-
soned expertise.
Eventually, this same function-
ality will likely be included in full-blown
SRM suites. EMC's Symmetrix Management Console and IBM's Storage Manager Client are good examples of this type of point product.
These same products are also be-
ginning to find their place as disaster-
recovery toolsets.
Companies that deploy full SRM tool-
sets do not want to create a duplicate en-
vironment to perform the same tasks in
the event of a site disaster. These niche
management tools can provide a basic
GUI to perform array configuration in a
disaster-recovery location, driving down
the cost of building a disaster-tolerant
infrastructure.
Some SRM vendors have also broken
their products into distinct components
that plug into one another, again as a way
to make SRM adoption easier to handle
in phases or in limited scope. For ex-
ample, IBM's three SRM components (TotalStorage Productivity Center, or TPC, for Disk, Fabric, and Data) allow com-
panies to scale their levels of SRM imple-
mentation and ease into a new method of
storage administration.
It is clear that SRM tools provide sig-
nificant enough benefits to outweigh
the burdens of deployment and man-
agement. By consolidating management
into a small number of environments,
SRM software provides critical visibil-
ity to the storage network and its com-
ponents, for an increasingly wide variety
of activities.
In the next part of this series, we will ex-
amine specific SRM tools that can help pro-
spective users identify which products best
fit their needs.
John Echaniz is director of client solutions,
and Justin Schnauder is a technologist, with
Novus Consulting Group (NovusCG, www.novuscg.com). David Askew, client technol-
ogy executive at NovusCG, also contributed to
this article.
VENDORS MENTIONED
EMC, Hitachi Data Systems, IBM,
Symantec
Introducing data warehouse appliances

Workload-optimized storage appliances are tuned to specific applications and I/O workloads.

By Steve Norall
Despite all the marketing talk about intelli-
gence in the storage network, we still have a
long way to go. The truth is that most storage
devices today are simply not as aware as they
should be of applications, data access patterns, and
workflows.
Established vendors have built general-purpose,
block-based storage arrays capable of running a wide
spectrum of workloads. However, these systems are
not optimized for any particular workload and have
no intelligence about the application, its data formats,
and access patterns. On the other end of the spec-
trum, especially over the past five years, there has
been a trend toward more-specialized storage applianc-
es. These systems combine application intelligence, or
workload optimization functionality, with core storage
capabilities to deliver tailored solutions for particular
applications or business needs.
While NAS is probably the oldest example of spe-
cialized storage appliances replacing general-purpose
computers, more recently content-addressed storage
(CAS) has evolved into a specialized class of stor-
age focused on the requirements of archival and com-
pliance data. Also, with the growth in high-performance computing (HPC) applications, vendors such
as DataDirect Networks and Isilon have delivered stor-
age systems optimized for specific I/O profiles, such
as large-block, sequential I/O. As another example
of the trend toward specialized storage appliances, a
number of vendors, such as Teneros, are delivering ap-
pliances tailored for continuous availability in e-mail
environments.
Database drivers
This trend toward specialized storage architectures and
devices is occurring in the database space, too. In fact,
several key drivers are transforming how large-scale
databases (greater than 1TB) are stored, managed, and
scaled. Five factors are leading to the emergence of a
new class of database storage optimized for data ware-
housing and business intelligence workloads:
Users are facing a tsunami of structured data: Based on Taneja Group research, many end users' databases, particularly data warehouses, are doubling in
size every year. The primary driver for this growth in
database size comes from the line of business. Business
decision-makers recognize the value of maintaining
more historical data online longer for better analyt-
ics and decision-making purposes. A secondary driv-
er fueling the size of databases is a tightening regula-
tory and compliance environment. The need to keep
more data online longer exacerbates issues of database
performance, scalability, and management and makes
general-purpose storage approaches less attractive.
The need for speed: The need for more database per-
formance is insatiable. Database and storage admin-
istrators are being asked to manage much larger data-
bases and storage environments, while improving data
loading times and query responses and delivering deep-
er data analytics. Unfortunately, the overall performance and response time of current RDBMS systems degrade as the database size increases. This is
particularly true as databases grow beyond 1TB. Tech-
niques such as database archiving allow IT to prune the
size of a database to improve performance, but don't
necessarily allow that data to be kept online and ful-
ly query-able. IT faces huge challenges in coaxing
significant I/O throughput and response times out of
the underlying storage system to meet the insatiable
requirements of large data warehouse implementations.
Clearly, the overall throughput and response time of
the underlying storage infrastructure directly affects
what end users see in terms of response time.
Current database scalability approaches have signifi-
cant drawbacks: Three architectural approaches to
scaling database performance have emerged: Buy a
larger symmetric multi-processor (SMP) server to run
the database, implement a clustered shared-disk da-
tabase architecture such as Oracle Real Application
Clusters (RAC), or deploy a massively parallel pro-
cessing (MPP) architecture (e.g., Teradata). SMP sys-
tems are by far the most common deployment mod-
el for OLTP databases and small data warehouses or
data marts, but a high-end SMP server can cost more
than $1 million and cannot be modularly scaled on-
demand. Clustered databases offer the promise of near-
linear scalability, but require laborious partitioning to
reduce synchronization overhead and achieve opti-
mum performance for data-intensive workloads. MPP
systems that partition data and parallelize queries have
emerged as the de facto approach for large-scale data
warehouses. However, traditional MPP systems require
constant tuning and repartitioning, and as a result on-
going OPEX cost can run into the tens of millions of
dollars for a large-scale data warehouse. There is no
silver-bullet approach that offers low acquisition cost, scalability, and ease of management. (A toy sketch of the MPP approach appears at the end of this section.)
OPEX costs mount for tuning and managing large databases: As the database size grows, the administrative overhead of managing a database grows exponentially along two dimensions: database management/
tuning and storage management/tuning.
The tuning and management required to maintain a large-scale database demand highly skilled professionals. As
can be imagined, as the database grows,
the amount a business must spend to
maintain and grow it increases dramati-
cally. The cost of administering a large-
scale database does not grow linearly or
in proportion to the database size; instead,
the OPEX costs scale exponentially as
the size of the database grows. OPEX
costs can be the number-one inhibitor
to growing a very large database.
Databases and storage are becoming more
intertwined: Increasingly, storage admin-
istrators must have a working knowledge
of the database architecture, table layout,
and how the database places data on disk
to deliver the desired performance SLA.
As a result, database vendors such as Or-
acle incorporate core storage features like
automatic volume management into their
database kernels as a way to more tightly
couple storage with the database's engine.
A data warehouse appliance takes this
convergence to the ultimate endpoint:
collapsing database intelligence and
moving it closer to the physical storage
to minimize network roundtrips and gain
performance. This convergence of stor-
age design and host-level software is not
unprecedented. File systems have evolved
to the point where they are now consid-
ered extensions of the underlying storage
infrastructure. Furthermore, NAS appli-
ances subsume file systems as a key com-
ponent of a NAS system. It is natural for
databases and storage to become more
tightly coupled as the need for optimum
performance grows.
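Returning to the scalability discussion above: to make the MPP idea concrete, the toy Python sketch below hash-partitions rows by key, lets each "node" aggregate its own partition, and merges the partial results. Real MPP systems do this across separate servers with their own disks, which is why a poorly chosen partitioning key forces the expensive repartitioning described earlier:

    # Toy illustration of MPP-style query processing: hash-partition rows,
    # aggregate each partition independently, then merge partial results.
    from collections import Counter
    from multiprocessing import Pool

    ROWS = [("east", 100), ("west", 250), ("east", 75), ("north", 40)] * 50000

    def partial_sum(partition):
        """Each 'node' computes SELECT region, SUM(amount) over its partition."""
        totals = Counter()
        for region, amount in partition:
            totals[region] += amount
        return totals

    if __name__ == "__main__":
        nodes = 4
        partitions = [[] for _ in range(nodes)]
        for row in ROWS:
            partitions[hash(row[0]) % nodes].append(row)  # partition by key
        with Pool(nodes) as pool:
            partials = pool.map(partial_sum, partitions)
        result = sum(partials, Counter())                 # coordinator merge
        print(dict(result))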
Data warehouse appliances
The Taneja Group has begun tracking
how this historical trend toward special-
ized storage appliances is being applied
to structured data. We have identified an
emerging category of data warehouse ap-
pliances over the past three years. Although the term "data warehouse appliance" is recognized in DBA circles, the term has almost no meaning or mind-
share within the storage community.
However, data warehouse appliances
have far-reaching implications regarding
how structured data will be managed and
how access to that data will be scaled in
the future. Ultimately, we see data ware-
house appliances morphing into a new
class of storage in much the same way
that NAS and CAS became new types
of storage.
The origins of the term "data warehouse appliance" can be traced back to 2002 or 2003, when Foster Hinshaw, the
founder of Netezza and now founder and
CEO of Dataupia, coined the term. Es-
sentially, a data warehouse appliance is
a turnkey, fully integrated stack of CPU,
memory, storage, operating system (OS),
and RDBMS software that is purpose-
built and optimized for data warehous-
ing and business intelligence workloads.
It uses massively parallel designs, such as MPP architectures, to optimize query process-
ing. Through its knowledge of SQL and
relational data structures, a data ware-
house appliance is architected to remove
all the bottlenecks to data flow so that
the only remaining limit is the disk speed.
Through standard interfaces such as SQL
and ODBC, it is fully compatible with
existing business intelligence (BI) and
packaged third-party applications, tools,
and data.
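In practice, that compatibility means existing client code needs only a new connection target. A sketch using Python's pyodbc module, where the DSN, credentials, and schema are placeholders:

    # Because the appliance speaks standard SQL over ODBC, existing client
    # code needs only a connection-string change. The DSN, credentials, and
    # schema below are placeholders for illustration.
    import pyodbc

    conn = pyodbc.connect("DSN=warehouse_appliance;UID=analyst;PWD=secret")
    cur = conn.cursor()
    # A typical warehouse query: a long table scan with aggregation.
    cur.execute("""
        SELECT region, SUM(sale_amount)
        FROM sales_fact
        WHERE sale_date >= '2006-01-01'
        GROUP BY region
    """)
    for region, total in cur.fetchall():
        print(region, total)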
At its core, a data warehouse appliance
simplifies the deployment, scaling, and
management of the database and storage
infrastructure. Ultimately, the vision of a
data warehouse appliance is to provide a
self-managing, self-tuning, plug-and-play
database system that can be scaled out
in a modular, cost-effective manner. To
that end, data warehouse appliances are
defined by four criteria:
• Workload optimized: A data ware-
house appliance is optimized to deliver
excellent performance for large-block
reads, long table scans, complex que-
ries, and other common activities in
data warehousing;
• Extreme scalability: A data warehouse
appliance is designed to scale and per-
form well on large data sets. In fact, the
sweet spot for all data warehouse appli-
ances on the market today is databases
over 1TB in size;
• Highly reliable: A data warehouse ap-
pliance must be completely fault-toler-
ant and not be susceptible to a single
point of failure; and
• Simplicity of operation: A data ware-
house appliance must be simple to in-
stall, set up, configure, tune, and main-
tain. In fact, these appliances promise
to eliminate or significantly minimize
mundane tuning, data partitioning,
and storage provisioning tasks.
Vendor landscape
A number of vendors are shipping data
warehouse appliances. The original data
warehouse appliances came from Netez-
za. However, since Netezza's market entry,
several other firms such as DATAllegro,
Dataupia, and Kognitio have entered the
market with variations on the original concept (see table).

Data warehouse appliance comparison
Criterion | DATAllegro | Dataupia | Netezza | Kognitio
Optimized workload | Data warehouse | Data warehouse | Data warehouse | Data warehouse
Works with existing databases (e.g., Oracle, DB2, SQL Server) | No | Yes | No | No
Industry-standard component design | Yes | Yes | No | Yes
Massively parallel processing (MPP) architecture | Yes | Yes | Yes | Yes
Entry price point | $110,000 | $19,500 | $200,000 | N/A
Capacity scaling increments | 15/20TB | 2TB | 12/25TB | N/A
Although architectural approaches to
data warehouse appliances vary wide-
ly, there are four main points for assess-
ing different vendors approaches. First,
does the data warehouse appliance re-
place existing database software with it
own purpose-built kernel? Most of the
data warehouse appliances replace tra-
ditional database kernels (e.g., Oracle,
IBM DB2, and Microsoft SQL Server)
with their own optimized database kernel.
One exception is Dataupia. Unlike oth-
er data warehouse appliances, Dataupias
software interoperates with, but does not
replace, existing database systems.
Second, does the data warehouse ap-
pliance use low-cost industry-standard
building blocks or customized ASICs
and FPGAs to achieve higher levels of
scalability and performance? Netezza,
for example, uses custom ASICs and
FPGAs to increase performance and scal-
ability, while other vendors (DATAllegro,
Dataupia, and Kognitio) use industry-
standard building blocks in order to offer
the best price/performance combination. The
total cost and overall price-performance
of the solution can be directly affected by
the underlying components.
Third, does the data warehouse appli-
ance make use of a highly parallelized de-
sign to gain greater scalability and perfor-
mance? All vendors leverage some degree
of parallelism to deliver the requisite per-
formance and scalability. However, with
any highly complex product, the devil is
in the details. End users should scrutinize
and understand the various architectural
trade-offs and benefits of each approach
and assess whether the trade-offs are
well-suited to their database workload.
Fourth, what is the entry price of the so-
lution, and can users scale storage capacity
in increments that match how their data
warehouse is growing? Data warehousing
appliance vendors have widely divergent
price points. Several solutions are priced
from hundreds of thousands of dollars and
can easily top out at several million dol-
lars. Moreover, some solutions require us-
ers to purchase additional storage capac-
ity in relatively large chunks (sometimes
greater than 10TB). As a result, some ap-
pliances may be cost-prohibitive for small-
er data warehousing deployments.
Over the next few years, workload-
optimized storage appliances, such as da-
ta warehouse appliances, will become key
elements of the storage infrastructure in
most data centers, much the same way
that NAS and CAS became data-cen-
ter staples. Data warehouse appliances
represent another point in the histori-
cal trend toward more-specialized, work-
load-optimized storage systems. However,
that is not to say that general-purpose
storage devices will be replaced or ren-
dered obsolete by these optimized appli-
ances. Workload-optimized storage devic-
es will carve out specific market niches
where application-specific scaling, perfor-
mance, and management requirements
are unique and not easily met by general-
purpose storage designs.
Large-scale data warehousing repre-
sents a significant headache for IT today.
The continuing data tsunami, the need
to keep structured data online longer, and
the insatiable need for faster and more-
responsive databases are driving users to
consider new storage alternatives. Add to
the mix that current database scaling technologies are cost-prohibitive or
inflexible to meet the ever-increasing de-
mands of the business. Specialized storage approaches, such as data warehouse appliances, provide cost-effective scalability and simplified management of structured content.
End users must recognize the new requirements of structured content and be will-
ing to embrace new approaches to solve
the problems of scaling and managing
large-scale data warehouse implementa-
tions today and in the future.
Steve Norall is a senior analyst with the
Taneja Group research and consulting firm
(www.tanejagroup.com).
VENDORS MENTIONED
DataDirect Networks, DATAllegro,
Dataupia, IBM, Isilon, Kognitio, Microsoft,
Netezza, Oracle, Teneros, Teradata
SNIA on STORAGE

ILM isn't just about storage

Storage-focused implementers can learn a lot from non-storage IT disciplines.

By Bob Rogers
DEPENDING ON WHOM you talk to, information lifecycle management (ILM) is nothing more than tiered storage, or a new term for hierarchical storage management (HSM), or a new way of classifying content for corporate or regulatory governance, or something entirely different. The Storage Networking Industry Association (SNIA) defines ILM as "the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost-effective IT infrastructure from the time information is conceived through its final disposition. Information is aligned with business processes through management policies and service levels associated with applications, metadata, information, and data."
Notice that the word "storage" is not included anywhere in SNIA's definition. The definition fits nicely into several
other disciplines, including information assurance and
security, enterprise architecture, etc. But what is primar-
ily driving ILM adoption today is fear of prosecution.
Most current users of, or potential candidates for, ILM so-
lutions are companies that are implementing compliance
programs for corporate and regulatory governance. The
records management community has been operating in
this area for a very long time. Most of their emphasis is
focused on using these tools for data classification and
retention to ensure government and judicial demands
for records can be met.
The convergence of ILM data-classification tech-
niques and compliance is a serendipitous coincidence.
However, what most people in the industry were hop-
ing for from ILM was an improvement in information
management. The ability to keep up with the man-
agement of information assets has become one of the
major issues for IT today.
The principles of ILM are alive and well, just not in
the way that most of the storage vendors had imagined.
For example, the widespread adoption of practices based
on the IT Infrastructure Library (ITIL) shows there is
a major need for accountability in service management
and delivery. The ITIL methodology focuses on many
of the same areas as ILM, albeit from more of an opera-
tional aspect. The service management component of
ITIL addresses issues of availability, capacity, and perfor-
mance at a high level. These are the same attributes that
ILM principles use to differentiate data. The systems
management folks are leading the effort to produce the
enterprise core values for business process, workflow,
and application service management. One might ask if
the level of detail is sufficient for the storage folks, but it
would be wrong to assume no one in the enterprise has
embraced the necessity of understanding and defining
the service requirements of the enterprise portfolio.
If ITIL efforts are exposing service management issues
and concerns in so many data centers, then the next ques-
tion is: How does that information influence the man-
agement of data throughout its lifecycle? What tools and
techniques can be employed to fulfill the ILM role?
Server and storage virtualization, de-duplication, en-
cryption, disk-to-disk backup, and continuous data pro-
tection (CDP) are just a few examples of technologies
whose goals are to reduce the data-management burden
for storage administrators. Each of these technologies is
little more than a stopgap measure in the rising tide of
data when applied indiscriminately. However, when enterprise stakeholders collaborate to study business processes, workflows, and applications and structure them in an identifiable way, those technologies become powerful building blocks of an in-house ILM solution.
For example, a large insurance company uses virtu-
alized servers to isolate critical business processes, and
virtual tape to increase availability by mirroring the vir-
tual volumes to a disaster-recovery location before they
are written to physical tape. The company's method of classifying data may not be the most elegant (i.e., data is classified by virtual server alignment), but it has yielded signifi-
cant benefits in terms of conserving hardware resources
and improving service to users. There were no special
"ILM-ized" software or hardware components, just plain-
old storage products implemented after some analysis
and planning.
Most of the potential for ILM comes from having an-
alyzed the enterprise environment to understand who
deserves what resources. Classifying information, ap-
plying service level objectives, and understanding the
value of the information are labor-intensive activities
and generally not simple tasks.
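The mapping itself can be simple once the analysis is done; the labor lies in gathering the inputs. A minimal Python sketch, with invented service-level attributes and tier names:

    # Sketch: map an application's service-level attributes to a storage tier.
    # The attributes, thresholds, and tier names are illustrative only.
    def classify(app):
        """Pick a tier from availability, latency, and retention requirements."""
        if app["availability"] >= 99.99 or app["max_latency_ms"] <= 5:
            return "tier1-mirrored-fc"
        if app["retention_years"] >= 7:
            return "archive-worm"
        return "tier2-sata"

    apps = [
        {"name": "claims-db", "availability": 99.99,
         "max_latency_ms": 5, "retention_years": 7},
        {"name": "dev-share", "availability": 99.0,
         "max_latency_ms": 50, "retention_years": 1},
    ]
    for app in apps:
        print(app["name"], "->", classify(app))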
The storage-focused ILM folks aren't alone. ITIL pro-
ponents have made significant progress in enterprise da-
ta centers, and their objectives are virtually identical. In
addition, last year the IT Governance Institute and the
Information Systems Audit and Control Association an-
nounced the Val IT initiative. Val IT is a framework that focuses on the evaluation and selection of IT investments and their value to the business.
There are three major areas of emphasis in Val IT:
value governance, portfolio management, and invest-
ment management. The portfolio management process-
es of Val IT are particularly important to ILM because
they describe several key management practices such as
identifying resource requirements, performing gap anal-
yses, and monitoring and adjusting portfolio priorities.
A trend is clearly developing here. The early ILM
adopters were focused on compliance. The ITIL folks
are establishing best practices for operations, and the
Val IT proponents are working down the financial man-
agement path. Executive-level commitment for projects of
this caliber is not optional.
The project management responsibilities may be out of
scope for most storage administrators since all the required
up-front analysis is business-focused and well beyond is-
sues of data placement, retention, and availability. The
back-end part of ILM, which includes moving data from
disk to disk, or disk to tape, is actually the easiest part of
an ILM solution.
The key to ILM is that the technology will eventually
simplify, automate, and make provisioning new business
processes or changes to applications such a simple op-
eration that some IT administration jobs will be in jeop-
ardy; however, that day is years away. Today, most of the
value of ILM is in analyzing what goes on in IT by un-
derstanding application requirements, availability con-
siderations, and performance and capacity requirements.
There is no silver bullet, and as other disciplines beyond
storage management have discovered, understanding
the business from a service management, value, and
governance perspective is not just a desirable goal, but
is also a requisite to the health of the enterprise.
Bob Rogers was one of the founders of the SNIA ILM Initiative
(ILMI). ILMI is defining a reference architecture for ILM, includ-
ing data classification, market and product segmentation, and
requirements and use cases to drive ILM-related standards in
the SMI-S management standard. Rogers is also the CTO and
founder of Application Matrix LLC. This article was submitted
on behalf of SNIA's Data Management Forum.
STORAGE MART
NEW PRODUCTS
Silver Peak boosts
WAN performance
As the amount of data moving across WANs con-
tinues to grow, vendors keep rolling out new tools
for maximizing the performance of applications over
distances. WAN acceleration appliance vendor Sil-
ver Peak Systems is the latest vendor to enhance
its products in an effort to help companies get more
bandwidth for their IT buck.
The company has introduced version 2.0 of its Silver
Peak management software for its family of NX accel-
eration appliances and Global Management System
(GMS).
The new software includes various performance
upgrades to the company's Network Memory tech-
nology, along with a number of tools that simplify
configuration and deployment of Silver Peak's WAN
acceleration systems.
Network Memory data-reduction technology re-
duces WAN traffic and accelerates application perfor-
mance. Network Memory adds less than 1ms of laten-
cy and works with all types of applications, including
real-time data replication, interactive SQL transac-
tions, and streaming video.
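Silver Peak has not published Network Memory's internals, but data-reduction schemes of this general kind fingerprint data the appliance has already sent and transmit short references in place of repeated bytes. A generic Python sketch of the idea (not Silver Peak's actual algorithm):

    # Generic WAN data-reduction sketch: send a chunk once, then send a short
    # fingerprint whenever the same chunk repeats. Illustrative only; this is
    # not Silver Peak's algorithm.
    import hashlib

    CHUNK = 4096
    seen = set()  # fingerprints of chunks the far side already holds

    def reduce_stream(data):
        out = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha1(chunk).digest()
            if digest in seen:
                out.append(("REF", digest))  # 20 bytes instead of 4KB
            else:
                seen.add(digest)
                out.append(("RAW", chunk))
        return out

    payload = b"A" * CHUNK * 3 + b"B" * CHUNK  # repeated data reduces well
    msgs = reduce_stream(payload)
    print([kind for kind, _ in msgs])          # ['RAW', 'REF', 'REF', 'RAW']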
Silver Peak's NX Series appliances sit between
network resources and the WAN infrastructure and
employ a variety of latency and loss mitigation tech-
niques, compression, and quality of service (QoS)
to improve application performance.
The GMS platform is used to deploy, manage,
and monitor Silver Peak-enabled WANs. GMS
gives IT managers visibility into applications,
including WAN performance statistics, application
analysis, and tools for configuring and managing
NX Series appliances.
The Silver Peak 2.0 software features new pattern-recognition capabilities; faster reads and writes to disk over WANs; a centralized policy engine for easier configuration and management of QoS, routing, and optimization policies; new traffic-management policies; application-optimization techniques; and auto-configuration capabilities.
Ciprico branches out
Known primarily as a storage systems vendor fo-
cused on the entertainment market, Ciprico hopes to
branch out beyond its traditional roots into the main-
stream IT storage market. The seeds for the expan-
sion were planted last year with the company's acquisition of certain RAIDCore products from Broadcom,
including host-based RAID software.
Targeted at OEMs and systems-storage integrators
and VARs, Ciprico's RAIDCore RC5000 line of SAS/SATA RAID controllers is available in 4- or 8-port configurations for PCI-X or PCI Express hosts (Win-
dows or Linux). The company plans to ship 12- and
16-port versions in the third quarter. Also due later this
year is support for Intel and nVidia motherboards as
part of Ciprico's upcoming Universal RAIDCore soft-
ware stack.
The company claims performance of more than
1.1GBps with 16 SATA drives, based on lab results.
Using spanning algorithms, integrators can create
virtual arrays of up to 32 drives.
But Ciprico's real secret sauce may lie in its host-
based RAIDCore software stack. (Since the RAID logic
resides in the software, rather than the hardware,
Ciprico's RAID controllers could more accurately be
described as host bus adapters.)
The host software stack eliminates the need for em-
bedded processors or custom RAID chipsets on the
controller/adapter. Instead, the software-based RAID
approach uses host CPU cycles for RAID calculations.
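Those RAID calculations are largely XOR parity math, which modern host CPUs handle easily. A minimal Python sketch of RAID-5-style parity generation and single-strip rebuild:

    # RAID-5-style parity on the host CPU: parity is the XOR of the data
    # strips, and any one lost strip is recoverable by XORing the survivors.
    def xor_strips(strips):
        out = bytearray(len(strips[0]))
        for strip in strips:
            for i, b in enumerate(strip):
                out[i] ^= b
        return bytes(out)

    data = [b"\x11" * 8, b"\x22" * 8, b"\x33" * 8]  # three data strips
    parity = xor_strips(data)

    # Simulate losing strip 1 and rebuilding it from the survivors plus parity.
    rebuilt = xor_strips([data[0], data[2], parity])
    assert rebuilt == data[1]
    print("rebuilt strip matches:", rebuilt == data[1])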
Avnet and Bell Microproducts distribute RAIDCore
controllers/adapters and software. Pricing ranges
from about $219 to $299. Pricing for the upcoming
12-port and 16-port versions is expected to be $519
and $679, respectively.
Adaptec enhances
NAS software
The 4.4 version of Adaptec's Guardian-OS software, which runs on the company's Snap Server NAS appliances, includes new features such as