
A NETAPP IT EBOOK FOR STORAGE ADMINISTRATORS

7 TIPS & TRICKS TO MAKE STORAGE ADMINISTRATION FASTER & EASIER

7 Tips & Tricks to Make Storage Administration Faster & Easier

BLOGS

1: Introduction—Jeff Boni

2: Automating cDOT Configuration: How to Make a 4-Hour Process Only 5 Minutes—Ezra Tingler

3: The Importance of IO Density in Delivering Storage as a Service (Part 1)—Eduardo Rivera

4: The Role of QoS in Delivering Storage as a Service (Part 2)—Eduardo Rivera

5: The Demystification of NetApp Private Storage—Mike Frycz

6: Using AltaVault and StorageGRID to Replace Tape Backups—Pridhvi Appineni

7: How We Balanced Power & Capacity in Our Data Centers with Flash & Other Innovations—Randy Egger

8: How NetApp Products Help NetApp IT Overcome Data Protection Challenges—Dina Ayyalusamy
Introduction
JEFF BONI, VP FOUNDATIONAL SERVICES, NETAPP IT

The storage admin role is changing rapidly thanks to virtualization and automation.

The role of a storage administrator in IT is changing rapidly as virtualization and automation improve efficiency and productivity. These trends give rise to a series of new challenges: storage service levels that adjust to changing application performance/capacity requirements, and integration with other layers of the stack (compute, networking, and the cloud). The new storage environment requires a new skill set that relies on proactive data analysis and looks to a future of hyper-converged infrastructures and hybrid cloud.

This ebook examines the many issues present in today’s storage environment. NetApp IT practitioners share their experiences, with an emphasis on the best practices IT has adopted to improve its service delivery to the business:

• Automating ONTAP configurations (ONTAP®)
• Designing storage service levels (OnCommand® Insight/ONTAP)
• Adopting the hybrid cloud (Data Fabric)
• Demystifying NetApp Private Storage (NPS) for the hybrid cloud (NPS, Data Fabric)
• Replacing tape backup with cloud and object storage (AltaVault® and StorageGRID®)
• Overcoming data protection challenges (Snap products, FlexClone®)

We invite you to take the next step and ask NetApp IT experts to share their real experiences in using NetApp products and services, including All Flash FAS and OnCommand Insight, in the NetApp production environment. Ask your NetApp sales team to arrange an interactive discussion with us soon.

Jeff Boni, VP Foundational Services, NetApp IT

Automating ONTAP Configurations: How to Make a
4-Hour Process Only 5 Minutes
EZRA TINGLER, SENIOR STORAGE ENGINEER, NETAPP IT

I wrote a script to do a 4-hour storage cluster configuration in 5 minutes.

As a senior storage engineer in our Customer-1 organization, I am responsible for storage lifecycle management, including the installation, decommissioning, and capacity management of our ONTAP® and 7-Mode storage controllers. Our group is in the midst of moving all data hosted on 7-Mode storage controllers to ONTAP storage clusters.

Business Challenge

As part of our migration, we are installing additional ONTAP clusters and nodes. The configuration of each high availability (HA) pair took about four hours, spread out over 2 to 3 days. The four hours did not include the time needed to configure the cluster inter-connect switches or initialize disks; this takes 2 to 12 hours depending on disk type. Plus typical office interruptions added more time as I had to figure out where I had left off. This sporadic schedule seemed to result in some configuration inconsistencies.

The Solution

I challenged myself to see if I could automate the process to save time and reduce errors. Although I’m not a developer, I found it easy to write the script using the NetApp Software Development Kit (SDK). I run the script after the disks are initialized, cluster setup is complete, and the cluster inter-connect switches are properly configured. The script reads configuration information from a file, then applies the configuration to the cluster nodes. It does so by accessing the nodes via ZAPI calls, which is why it is fast.

The results have been amazing. The four-hour process now takes about five minutes to complete 99% of the configuration. It is now possible to install 24 nodes in two hours rather than 96 hours, a time savings of 94 hours or 2½ work weeks. Errors caused by interruptions have been eliminated. Automating this process has freed up my time to work on other projects.

If you are a storage admin, you can easily do this yourself with the SDK. I used an SDK tool called Z-Explorer that contains a complete list of all ZAPI calls for the cluster. With Z-Explorer most of the development work is done for you. It took me just three weeks to automate all the builds. This KnowledgeBase article is a good place to start.

It was a fun project because I could write the script without feeling like I had to be a developer. I wrote the scripts in Perl, but the SDK works with any language you are familiar with. I also used the SDK online forum to get advice from others. People were quick to answer my questions.
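As an illustration only (the author's scripts are in Perl and are not reproduced here), the sketch below shows the same pattern in Python: read a desired configuration from a file and push it to a node through ZAPI calls via the NetApp Manageability SDK. The NaServer/NaElement classes come from the SDK's Python bindings; the config file format, its field names, and the choice of ZAPI call shown are assumptions made for the example.

```python
# Hypothetical sketch: apply settings from a config file to a node via ZAPI.
# Assumes the NetApp Manageability SDK Python bindings are on the path;
# the ZAPI and field names are illustrative, not the author's actual script.
import json
from NaServer import NaServer
from NaElement import NaElement

def connect(mgmt_ip, user, password):
    srv = NaServer(mgmt_ip, 1, 21)        # ONTAPI major/minor version
    srv.set_transport_type("HTTPS")
    srv.set_style("LOGIN")
    srv.set_admin_user(user, password)
    return srv

def apply_config(srv, cfg):
    # Example step: create the data aggregates listed in the config file.
    for aggr in cfg.get("aggregates", []):
        req = NaElement("aggr-create")
        req.child_add_string("aggregate", aggr["name"])
        req.child_add_string("disk-count", str(aggr["disk_count"]))
        res = srv.invoke_elem(req)
        if res.results_status() != "passed":
            raise RuntimeError(res.results_reason())

if __name__ == "__main__":
    with open("cluster_day0.json") as f:   # hypothetical day-0 config file
        cfg = json.load(f)
    srv = connect(cfg["mgmt_ip"], cfg["user"], cfg["password"])
    apply_config(srv, cfg)
```

Because each step is a direct API call rather than an interactive session, the script can run unattended and is easy to re-run if it is interrupted.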


The Future

I’m now using the SDK to automate and streamline other storage tasks to save time and reduce errors. My next project is a quality assurance (QA) script that will log in to a cluster and verify that nodes are properly configured per NetApp IT standards and NetApp best practice guidelines. I plan to automate the cluster interconnect switch configuration in the same way, as well as the E-Series configuration.
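A QA check like that can be as simple as diffing the live settings against a golden standard file. The sketch below is a generic illustration of that idea; the standards file, the field names, and the stand-in for the live query are hypothetical, and this is not the author's planned script.

```python
# Hypothetical QA sketch: compare a node's live settings with the IT standard.
import json

def compare_to_standard(live: dict, standard: dict) -> list:
    """Return a list of (setting, expected, actual) mismatches."""
    mismatches = []
    for key, expected in standard.items():
        actual = live.get(key)
        if actual != expected:
            mismatches.append((key, expected, actual))
    return mismatches

if __name__ == "__main__":
    with open("node_standard.json") as f:   # hypothetical golden config
        standard = json.load(f)
    # Stand-in for settings gathered from the node (e.g., via ZAPI queries).
    live = {"autosupport": "enabled", "flowcontrol": "none"}
    for key, want, got in compare_to_standard(live, standard):
        print(f"FAIL {key}: expected {want!r}, found {got!r}")
```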
Read the story in the Tech ONTAP newsletter.
Check out the NetApp Software Development Kit.
Find the script in the NetApp Automation Store. Search: Day-0 c-mode
cluster build setup.

Ezra Tingler, Senior Storage Engineer,


NetApp IT

The Importance of IO Density in Delivering Storage as
a Service (Part 1)
EDUARDO RIVERA, SENIOR STORAGE ENGINEER, NETAPP IT

The next step in storage management is storage service design.

Can NetApp IT deliver storage as a service? NetApp IT posed this question to itself more than a year ago. Our goal was to find a new way to offer our business customers a method by which they could consume storage that not only met their capacity requirements, but also their performance requirements. At the same time, we wanted this storage consumption model to be presented as a predictive and easily consumable service. After consulting with enterprise architects for NetApp’s cloud provider services, we developed a storage service catalog leveraging two main items: IO Density and NetApp ONTAP’s QoS (quality of service).

In this first part of this two-part blog, we will discuss how NetApp OnCommand Insight’s IO Density metric played a key role in the design of our storage service catalog. (You can also hear this as a podcast.)

The Role of IO Density

IO Density is a simple, yet powerful idea. The concept itself is not new, but it is essential to building a sound storage consumption model. By definition, IO Density is the measurement of IO generated over a given amount of stored capacity, expressed as IOPS/TB. In other words, IO Density measures how much performance can be delivered by a given amount of storage capacity.

Here’s an example of how IO Density works. Suppose we have a single 7.2K RPM drive. By rule of thumb, a single drive of this type can deliver around 50 IOPS @ 20ms response time. Consider, however, that 7.2K RPM drives today can range anywhere from 1TB to 8TB in size. The ability of the drive to deliver 50 IOPS does not change with its size. Therefore, as the size of the drive increases, the IOPS/TB ratio worsens (i.e. you get 50 IOPS/TB with a 1TB drive and 6.25 IOPS/TB with an 8TB drive).

Applying the same logic, we can divide the amount of IO that an application demands from its storage by the amount of capacity that we provision to it. The difference is that at the array level, there are many other technologies and variables at play that can determine the IO throughput for a given storage volume. Elements like disk type, controller type, amount of cache, etc., affect how many IOPS a storage array can deliver. Nonetheless, the general capabilities of a known storage array configuration can be estimated with a good degree of accuracy given a set of reasonable assumptions.
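To make the drive arithmetic above concrete, here is a small illustrative calculation of IO Density for several drive sizes; the 50 IOPS figure is the rule-of-thumb number quoted in the text, not a measurement.

```python
# Worked example: IOPS/TB for a 7.2K RPM drive that delivers ~50 IOPS
# regardless of its capacity (rule-of-thumb figure from the text).
DRIVE_IOPS = 50.0

for capacity_tb in (1, 2, 4, 8):
    io_density = DRIVE_IOPS / capacity_tb   # IOPS per TB of stored capacity
    print(f"{capacity_tb} TB drive -> {io_density:.2f} IOPS/TB")

# Output:
# 1 TB drive -> 50.00 IOPS/TB
# 2 TB drive -> 25.00 IOPS/TB
# 4 TB drive -> 12.50 IOPS/TB
# 8 TB drive -> 6.25 IOPS/TB
```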


Using OnCommand Insight we were able to gather, analyze, and visualize the IO Density of all the applications that run on our storage infrastructure. Initially, what we found was surprising. Some applications that anecdotally were marked as high performance were demonstrating very low IO Density rates, and thus were essentially wasting high-performance storage capacity. We also saw the reverse, where applications were pounding the heck out of lower performance arrays because their actual IO requirements were incorrectly estimated at the time of deployment. Therefore, we started to use NetApp OnCommand Insight’s aggregated IO Density report to profile application performance across the entire infrastructure and establish a fact-based architecture.

Ultimately, OnCommand Insight’s IO Density report helped us to identify the range of service levels (defined as IOPS/TB) that the apps actually needed. With this information, we created a storage catalog based on three standard service levels:

1. Value: Services workloads requiring between 0 and 512 IOPS/TB.
2. Performance: Services workloads requiring between 512 and 2048 IOPS/TB.
3. Extreme: Services workloads requiring between 2048 and 8192 IOPS/TB.

Based on our own understanding of our application requirements (as depicted by our IO Density reports), the above three tiers would address 99 percent of our installed base. Those workloads requiring something other than these pre-defined service levels are easily dealt with on a case-by-case basis since there are so few of them.

A New Perspective on Application Performance

IO Density gave us a new perspective on how to profile and deploy our applications across our storage infrastructure. By recognizing that performance and storage capacity go hand in hand, we were able to create a storage catalog with tiers that reflected the actual requirements of our installed base.

Our next step was placing IO limits on volumes to prevent applications from stepping on the performance resources of other applications within the same storage array. Stay tuned for part two of this blog where I will discuss how we used ONTAP’s adaptive QoS feature to address this issue.

Tune into the Tech ONTAP podcast for more details.

Eduardo Rivera, Senior Storage Engineer, NetApp IT
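As a small addendum, here is an illustrative mapping from a measured IO Density value to the three service levels listed above; the tier names and boundaries are the ones from the catalog, while the function itself is just a sketch.

```python
# Map a volume's measured IO Density (IOPS/TB) to the catalog tiers above.
def service_tier(iops_per_tb: float) -> str:
    if iops_per_tb < 512:
        return "Value"        # 0 - 512 IOPS/TB
    if iops_per_tb < 2048:
        return "Performance"  # 512 - 2048 IOPS/TB
    if iops_per_tb <= 8192:
        return "Extreme"      # 2048 - 8192 IOPS/TB
    return "Custom"           # handled case by case

print(service_tier(300))    # Value
print(service_tier(1500))   # Performance
print(service_tier(4000))   # Extreme
```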

The Role of QoS in Delivering Storage as a Service
(Part 2)
EDUARDO RIVERA, SENIOR STORAGE ENGINEER, NETAPP IT

Using ONTAP QoS policies we can more efficiently manage our storage service levels.

NetApp IT is on a journey to offer its customers storage as a service. In part one of this blog, I discussed how we embraced IO Density to help us better profile and deploy our applications across our storage infrastructure. We developed a three-tier service catalog that offers storage as a predictive and easily consumable service to our customers. The second step in this journey was tapping into the power of clustered Data ONTAP®’s adaptive Quality of Service (QoS) feature to assure performance stability.

QoS—Corralling the Herd

The adoption of ONTAP’s QoS feature is a key component of our storage-as-a-service model. In a nutshell, QoS enables us to place IO limits on volumes (it can also work at the storage virtual machine (SVM) or file level) in order to keep the applications using those volumes within their IOPS “swim lane.” This prevents one application from starving other applications of performance resources within the same storage array. QoS can be implemented dynamically and without interruption to application data access.

In our storage catalog model, we assign a QoS policy per volume for all the volumes that exist within a given cluster. The QoS policies themselves enforce a particular IOPS/TB objective. Hence, if we have a volume that is consuming 1TB of capacity and the service level objective (SLO) is to provide 2048 IOPS/TB, the QoS policy for that volume would set an IOPS limit of 2048. If that same volume in the future grows to 2TB of consumed space, then the QoS policy would be adjusted to a limit of 4096 IOPS to maintain an effective 2048 IOPS/TB. In a live environment with hundreds, or even thousands, of individual volumes and where storage consumption continuously varies (as the application writes/deletes data), manually managing all the QoS policies would be close to impossible. This is where Adaptive QoS comes in.

Adaptive QoS is a tool developed by NetApp. Its sole purpose is to monitor consumption per volume and dynamically adjust each volume’s QoS policy so that it matches the desired IOPS/TB SLO. With this tool, we are able to provision volumes at will and not worry about having to manage all the necessary QoS policies.

With QoS and Adaptive QoS, we are able to easily provide predictive storage performance tiers upon which we can build the actual storage service catalog.
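The core of the adjustment is simple arithmetic: a volume's IOPS limit is its consumed capacity multiplied by the tier's IOPS/TB objective. Below is a hedged sketch of that recalculation loop; the volume inventory and the commented-out set_qos_limit call are placeholders, not the actual Adaptive QoS tool.

```python
# Illustrative only: recompute each volume's QoS IOPS limit from its
# consumed capacity and the IOPS/TB service level objective (SLO).
def target_iops(consumed_tb: float, slo_iops_per_tb: int) -> int:
    return round(consumed_tb * slo_iops_per_tb)

volumes = [                     # placeholder inventory; real data would come
    {"name": "vol_app1", "consumed_tb": 1.0, "slo": 2048},  # from the cluster
    {"name": "vol_app2", "consumed_tb": 2.0, "slo": 2048},
]

for vol in volumes:
    limit = target_iops(vol["consumed_tb"], vol["slo"])
    # set_qos_limit(vol["name"], limit)   # placeholder for the real QoS call
    print(f'{vol["name"]}: {vol["consumed_tb"]} TB -> QoS limit {limit} IOPS')

# vol_app1: 1.0 TB -> QoS limit 2048 IOPS
# vol_app2: 2.0 TB -> QoS limit 4096 IOPS
```

Running such a loop periodically keeps every volume's limit aligned with its tier as consumption grows or shrinks, which is exactly the problem that manual policy management cannot keep up with at scale.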


Building the Storage Service Catalog

With the pre-defined service levels and the ability to manage IO demand with Adaptive QoS, we were able to build a storage infrastructure that not only delivers capacity but also predicts performance. Leveraging ONTAP’s ability to cluster together controllers and disks that offer various combinations of capacity and performance, we built clusters using different FAS and AFF building blocks to deliver varying tiers of performance. Then Adaptive QoS was used to enforce the performance SLO per volume depending on where that volume resides.

Moving a volume between service levels is also quite simple using ONTAP’s vol-move feature. Adaptive QoS is smart enough to adjust the policy based on where the volume sits. By defining a service level per aggregate, we are also defining a multitude of service levels within a particular cluster through which we can move our data around. Addressing changes in performance requirements is easy; we move the volume to a higher performing high availability (HA) pair using vol-move.

Data-Driven Design

Together, IO Density and QoS have revolutionized how we view our storage. It has made us much more agile. The IO Density metric forces us to think about storage in a holistic manner because we operate according to a data-driven—not experience-based—storage model. We don’t need to look at whether we have enough capacity or performance, but can check to see if we have enough of both. If we nail it, they run out at the same time.

The same is true with the QoS service level approach. Our storage infrastructure is much simpler to manage. ONTAP gives us granular control of resources at the volume level; our QoS policies now act as the controller. Best of all, this new storage approach should enable us to deliver a storage service model that is far more cost efficient than in the past while supporting application performance requirements.

Tune into the Tech ONTAP podcast for more details.
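To illustrate the per-aggregate service level idea described earlier in this section, here is a small hypothetical sketch that picks a destination aggregate for a requested tier before a vol-move; the aggregate inventory, names, and free-space figures are invented for the example.

```python
# Hypothetical: choose a destination aggregate for a requested service tier.
# Each aggregate is pre-assigned a tier; a vol-move then relocates the volume.
AGGREGATES = [
    {"name": "aggr_value_01",   "tier": "Value",       "free_tb": 40.0},
    {"name": "aggr_perf_01",    "tier": "Performance", "free_tb": 12.5},
    {"name": "aggr_extreme_01", "tier": "Extreme",     "free_tb": 6.0},
]

def pick_aggregate(tier: str, needed_tb: float) -> str:
    candidates = [a for a in AGGREGATES
                  if a["tier"] == tier and a["free_tb"] >= needed_tb]
    if not candidates:
        raise LookupError(f"no {tier} aggregate with {needed_tb} TB free")
    return max(candidates, key=lambda a: a["free_tb"])["name"]

# e.g. promote a 2 TB volume to the Extreme tier, then issue the vol-move:
print(pick_aggregate("Extreme", 2.0))   # aggr_extreme_01
```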

Eduardo Rivera, Senior Storage Engineer,


NetApp IT

The Demystification of NetApp Private Storage (NPS)
for Cloud
MIKE FRYCZ, BUSINESS SYSTEMS ANALYST, IT SUPPORT & OPERATIONS, NETAPP IT

If you follow NetApp IT, you’ll know that we talk a lot about our hybrid cloud strategy and roadmap. But one question comes up repeatedly: How do we operationalize the cloud? How does our cloud strategy translate into operational excellence, especially when it comes to taking advantage of hyperscaler cloud resources?

Our current cloud operations are actually fairly simple. They rely on three primary elements:

• NetApp Private Storage (NPS) for Cloud enables us to access cloud resources while maintaining complete control over our data.
• Cloud-connected colocation facilities, such as Equinix, allow the data to remain private just outside the cloud.
• Hyperscalers, such as Amazon Web Services (AWS), Microsoft Azure, and others, offer flexible compute resources.

NPS for Cloud Architecture

To better understand what this all means, let’s look at the physical architecture of NetApp IT and NPS for Cloud, as shown in the graphic. NetApp’s FAS system connects to the AWS and Azure compute via a dedicated network connection within an Equinix data center. We will connect to other hyperscalers in the future.

Our FAS system is physically deployed in racks located inside a NetApp cage, similar to that shown. The minimum is two nodes for high availability. The FAS system is managed by an off-site storage team.

The FAS system connects to a layer 3 network switch, patched to an Equinix patch panel through a cross connect.

The Equinix cross-connect uses single-mode fiber (SMF) optic cables that run through a large, yellow overhead tray and down the aisles of the Equinix facility to the cloud peering switch in the AWS and Azure cages.


The cable directly connects to AWS and Azure inside their respective cages. Given the close physical proximity of the storage and data to the hyperscaler, we now can access a high bandwidth (10Gb) Ethernet connection from our data center by way of NPS to the cloud.

Our data resides in NetApp storage, but our compute is accessed in AWS or Azure. We currently operate our legal, human resources, branding, customer service support, and various other portals using NPS for Cloud.

Keeping Control of Our Data

The single most important benefit to NetApp IT of using NPS for Cloud is that we keep control of our data. We use the SnapMirror® feature of ONTAP® to replicate the data from our on-premises data centers to NPS, then to AWS or Azure. The NetApp Data Fabric enables us to connect to and switch cloud providers at any time. We avoid vendor lock-in and costly data migrations.

Is NPS for Cloud really that simple? Yes. And its benefits are numerous:

• Ability to keep control of data at all times
• High-throughput, direct connections to the cloud
• Ability to rapidly scale our compute or secure run-time resources for peak workloads
• Centralized storage intelligence using OnCommand® Insight and data management through NetApp ONTAP® software
• Compliance with the security and privacy requirements of companies and governments
• Migration flexibility so applications can be easily moved between different clouds

Our next phase is to work with Business Apps to build cloud-aware apps that take advantage of the many benefits of the cloud, such as platform-as-a-service (PaaS) and DevOps. The cloud is definitely a key part of our strategy to excel at IT service delivery inside NetApp.

Download the infographic from NetAppIT.com.

Mike Frycz, Business Systems Analyst, IT Support & Operations, NetApp IT

Using AltaVault and StorageGRID to Replace
Tape Backups
PRIDHVI APPINENI, SENIOR MANAGER, IT STORAGE SERVICES, NETAPP IT

The business case for using AltaVault and StorageGRID to replace tape backups is compelling.

One of the business challenges NetApp IT faces is archiving our legal, finance, and Sarbanes-Oxley (SOX) compliant data. Backing up this data is important for legal, HR, and tax reasons. In some cases, the data must be secured for seven years for tax purposes. Like most companies, we have relied on tape backups to secure this data. Tapes are reliable, inexpensive, and present very little risk to our operations.

My IT storage team was intrigued by the use case that NetApp® AltaVault® and NetApp StorageGRID® offered. AltaVault cloud-integrated storage functions as a cloud gateway for backup and archive applications. StorageGRID provides an enterprise-grade object storage solution that supports widely adopted cloud protocols such as Amazon S3. The combination of AltaVault and StorageGRID would enable us to efficiently back up application data while optimizing data protection and simplifying disaster recovery.

NetApp IT’s core tenet is to adopt technologies only when they satisfy a business use case. We evaluated these products and came to the conclusion that AltaVault and StorageGRID would be a powerful combination to modernize our backup procedures, reduce our costs, and, most importantly, improve the speed with which we can restore data for our customers.

Powerful Combination

Because they take advantage of the cost and scale benefits of cloud storage, AltaVault and StorageGRID are designed for an enterprise like NetApp with locations worldwide.

AltaVault delivered benefits such as 30:1 inline deduplication, compression, and encryption technology, which makes archived data easier to transport and store, and faster to retrieve. It offers complete security for the data. We can categorize that data into buckets to make it more easily retrievable. We are currently seeing 22 times deduplication savings from the data stored in AltaVault. As we push more data through AltaVault, we will benefit from even greater savings.

StorageGRID enables us to store and manage these massive datasets in a repository in the hybrid cloud. It enables us to abstract storage resources across multiple logical and/or physical data centers. We also create data policies to manage and protect our data according to our requirements.
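Because StorageGRID exposes an S3-compatible interface, an archive application can talk to it with any standard S3 client. The snippet below is a generic illustration using boto3; the endpoint URL, bucket name, file names, and credentials are placeholders, not NetApp IT's actual configuration.

```python
# Illustrative S3 upload to a StorageGRID endpoint (placeholder values only).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.com:8082",  # hypothetical gateway
    aws_access_key_id="ARCHIVE_KEY",
    aws_secret_access_key="ARCHIVE_SECRET",
)

# Buckets let us categorize archive data so it is easy to retrieve later.
s3.create_bucket(Bucket="sox-archive-2016")
s3.upload_file("finance_q3_backup.tar.gz", "sox-archive-2016",
               "finance/q3/backup.tar.gz")

for obj in s3.list_objects_v2(Bucket="sox-archive-2016").get("Contents", []):
    print(obj["Key"], obj["Size"])
```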


Changing Our Archiving Architecture

Previously, critical data from all our locations was backed up in our Sunnyvale, California data center. We used backup software to manage the flow of the data from backup storage into a tape library in Sunnyvale. We defined when and how the archiving took place. The tapes were regularly transported to an off-site location for safekeeping. When data needed to be restored, we had to order the tapes from the vendor and physically transport them back to our site, which took at least 24 hours.

Under the new target architecture, the process remains virtually the same. First, AltaVault provides a local optimized disk cache for the application to store backup data, resulting in faster restores for end users and saving bandwidth. One copy is stored in the StorageGRID nodes in Sunnyvale. Then the data is copied to StorageGRID nodes in our Raleigh, North Carolina data center, which serves as a repository for the offsite data copy. The complete cutover process took about five months.

The benefits have been numerous. We eliminated the cost of transportation, storage, tape library support, and physical tape procurement. Bringing backup in-house has enabled us to automate many of the day-to-day operational processes, resulting in a more agile service. We can also retrieve the archived data in one to six hours, depending on the data set size, or 3 times faster than before. This translates to a much faster turnaround for our customers. We anticipate improved backup speeds and significant cost savings in the future.

We also gained flexibility. We can more easily modify archive policies and add new archives on an ad-hoc basis. AltaVault allows us to scale our archive/SOX environment much faster than we could with a tape library. For example, we can spin up a virtual machine with AltaVault using existing disk shelves to gain capacity as opposed to purchasing more tape drives and a new frame for the tape library. Long term, the combined software will make it much easier to transition our backup procedures to new data centers as our data center footprint evolves.

Faster, More Reliable Data Archiving

One of the most satisfying parts of this project has been seeing firsthand the impact NetApp products can have on our operations. Not only will we improve efficiency and reduce costs, but we also will improve the data archiving services we provide to our business customers. As a team that constantly strives to use the best technologies to improve service delivery, that is the best result of all.

Pridhvi Appineni, Senior Manager, IT Storage Services, NetApp IT

How We Balanced Power & Capacity in Our Data
Centers with Flash & Other Innovations
RANDY EGGER, DATA CENTER LEAD, NETAPP IT

One of the biggest trade-offs in any data center is power and capacity, the two biggest expenses of any data center. The golden rule is that these two costs increase together—the more racks of hardware you have, the more power you need to run it. This means when you need more capacity, you need more power, which could result in a cooling issue. If you have enough cooling and power, you could run out of rack capacity.

NetApp IT was able to address the power and cooling costs in a multitude of ways. We started by making changes to the facility itself. We installed non-traditional raised floors. We introduced overhead cooling, economization, and cold aisle containment over six years ago. These changes have helped control our power and cooling costs.

Changing Relationship between Power and Capacity

A NetApp IT data center operation analysis compiled over the past decade shows that the relationship between power and capacity is evolving due to other factors as well. We are seeing that while our compute and storage capabilities are increasing, our power costs have actually been dropping. This shift is due to several reasons: the availability of the cloud, smaller form factors offering more horsepower, and virtualization, among others.

The chart illustrates this point. Our power requirements peaked in mid-2011 when we opened a new NetApp production data center, the Hillsboro Data Center (HDC). As we moved operations into HDC and closed other data centers, power consumption dropped while storage and compute increased significantly. Since then we’ve seen this trend continuing.

The following factors are contributing to this change:

• Virtualization. In the past, each app had its set of hardware and its own power supply, which translated to thousands of servers, an expensive model to maintain. Because of virtualization, the same applications can be hosted on 10 to 20 physical machines in a few racks using around 20 kilowatts (kW). NetApp IT’s compute is 75% virtualized now.

• All Flash FAS adoption. Our solid-state disks (SSD) take very little power (1-2kW as compared to 5-6kW for traditional disks per rack); our Flash hardware even less. As a result, full storage racks aren’t even close to reaching their power limits. Using Flash for all non-archival storage going forward means even lower power consumption.

• High-density storage rack design. HDC has high-density, taller-than-normal racks, 52U as opposed to traditional 42U or 47U racks, with more power (10kW/rack). Hardware that used to take four racks now takes half of a rack, thanks to higher density disks and higher IO capability clusters/filers. This unique design has reduced the number of infrastructure/connection points, shrinking the data center footprint and enabling a build-as-you-grow approach.

• FlexPod® datacenter. We have eight FlexPod systems hosting hundreds of applications in a rack connected to the Cisco fabric for networking and compute. The applications are hosted on thousands of machines, but thanks to virtualization and cloud, that doesn’t mean thousands of physical servers. Most of the applications are hosted on virtual machines. These footprints will continue to shrink as compute core processor power increases, hardware size shrinks, and power consumption requirements fall due to technology advancements.

• Smart power design. The Starline busway system supports ‘anywhere’ power and connector types, and with our smart layout we can utilize power borrowing that enables us to share power across multiple racks. We can draw power from a parallel busway if a rack needs more than 9kW. We have effectively removed power as a consideration in our hardware installations.

While our capacity is increasing, our power costs are dropping.

Our analysis shows that this shift in the relationship between storage/compute capacity and power will continue, even as we begin to take advantage of the hybrid cloud. Instead of building arrays to meet peak workloads--which translates to idle capacity--we will be able to take advantage of the cloud’s elasticity. This, in turn, will reduce operational, licensing, and management costs.

Adopting a hardware lifecycle management strategy is a key factor in reducing power consumption and improving capacity management. In our HDC migration, we were able to decommission 96 of 163 systems and 40 filers (of 2 PB of storage); more than 1,000 servers were either migrated or decommissioned. The configuration management database (CMDB), NetApp IT’s single source of truth for everything in IT operations, also plays a major role in helping us track, manage, and analyze power and capacity over time.

Each company faces its own challenges in controlling its power consumption costs while maximizing its storage and compute. However, as we have seen, adopting a hardware lifecycle management strategy and leveraging innovations in technology and power design can make a significant difference.
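As a rough illustration of the rack power arithmetic described above, the sketch below estimates how many storage racks a fixed power budget supports; the per-rack figures are the approximate numbers quoted in the text, and the 100 kW budget is an invented placeholder.

```python
# Rough arithmetic: how many full storage racks fit a given power budget,
# using the approximate per-rack figures quoted in the text.
RACK_LIMIT_KW = 10.0      # high-density HDC rack budget (~10 kW/rack)
DISK_RACK_KW = 5.5        # traditional disk shelves, roughly 5-6 kW per rack
FLASH_RACK_KW = 1.5       # SSD/flash shelves, roughly 1-2 kW per rack

def racks_supported(total_power_kw: float, per_rack_kw: float) -> int:
    return int(total_power_kw // per_rack_kw)

budget = 100.0  # hypothetical 100 kW row budget
print("disk racks: ", racks_supported(budget, DISK_RACK_KW))    # 18
print("flash racks:", racks_supported(budget, FLASH_RACK_KW))   # 66
print("headroom per flash rack:", RACK_LIMIT_KW - FLASH_RACK_KW, "kW")
```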

Randy Egger, Data Center Lead, NetApp IT

How NetApp Products Help NetApp IT Overcome
Data Protection Challenges
DINA AYYALUSAMY, LEAD DATABASE ADMINISTRATOR, NETAPP IT

As a database administrator (DBA), I face very different challenges than I did five years ago. The adoption of the hybrid cloud, rising security and backup concerns, and the expectation of non-disruptive service are rapidly changing the IT environment. Like many IT shops, NetApp IT is doing more with less, including supporting a large number of databases (400 plus) on a smaller budget.

While performance management remains a top priority, just as critical are issues such as disaster recovery, auditing, compliance, and maintenance. I rely on a variety of tools, including NetApp’s data protection products, to manage our databases. In this blog, I’ll share how our team uses NetApp products in these four areas:

• Business continuity/disaster recovery
• Performance management
• Capacity management
• Auditing & Sarbanes Oxley (SOX) compliance

How NetApp Products Help Us Overcome These Challenges

Our team uses a variety of NetApp’s data protection products, including SnapCenter®, SnapMirror®, FlexClone®, and SnapVault®, in our everyday routine. ONTAP® is the underlying software that helps automate our enterprise application lifecycle tasks. I will start with a discussion of SnapCenter, our management tool.

SnapCenter, NetApp’s data protection and clone management tool, combines management and backup of our many databases into one central platform. We use SnapCenter to simplify our storage planning, backup, and restore operations. For example, in the past we manually specified daily file backups, which was both time consuming and prone to errors. This process is now completely automated. We also use SnapCenter to:

• Provide automatic scheduling of backups at the volume level, not file level. This ensures regular and quality backups and makes it easier to scale our operations.


• Perform daily database refreshes. Using an end-to-end workflow from production through backup eliminates the many manual tasks associated with tracking and backing changes.

• Ensure automatic backups of Sarbanes Oxley (SOX) and other compliance-related data. With SnapCenter we send the backup data to the cloud using AltaVault®. (See blog.)

• Grant users the ability to manage their application-specific backup/restore/clone jobs without our intervention.

For the past eight years we have used SnapManager® for SQL Server® to run all our SQL database backups in one location. Currently, we run the SQL feature as a separate product, but we will be moving to a new SQL plug-in for SnapCenter, which means one less tool to manage and more efficient SQL server management.

NetApp IT uses NetApp data protection products in its IT operations every day.

SnapMirror, a data transfer feature of ONTAP, is a critical tool in our database management arsenal because of its ability to compress and replicate data between Point A and Point B. We use it to ensure block-level changes are replicated in the database. It is an invaluable tool for generating multiple copies for application development/test, reporting, cloning, and disaster recovery replication. We also use SnapMirror to:

• Set up automated scheduling for the recurring refreshes of critical databases, such as those that are compliance-related (SOX, audit), making the process both consistent and predictable.

• Support high availability (HA) requirements; we can recover a database in minutes instead of hours thanks to SnapMirror’s replication and compression features.

• Copy files during a data center migration; SnapMirror can copy files in a fraction of the time, reducing application downtime.

• Provide lifecycle management for database clones, accelerating application development/test.

FlexClone is a fast, efficient functionality we rely on for automation and capacity management. The thin provisioning capability delivers multiple instant, point-in-time, space-efficient copies of replicated volumes and logical unit numbers (LUNs), saving terabytes of disk space. SnapMirror and FlexClone work hand in hand to enable stored data to be used for dev/test or data mining. While SnapMirror tracks changes over time, FlexClone generates point-in-time copies that are essential for rapid application development/test. We use SnapMirror to replicate data to a different filer, then spin off copies using FlexClone. And because a clone uses only a small amount of space for metadata, it only consumes additional space as data is changed. We can use it with both Oracle and SQL databases. We use FlexClone to maintain performance and automate many of our services, including:

• Spin off a copy when we have a critical issue in a large database. The FlexClone version is ideal for troubleshooting while the production version keeps running.

• Generate copies of disaster recovery volumes for use during application testing so we don’t need to break the SnapMirror relationship with the database, eliminating the loss of data backups.

• Create a database copy from which management reports can be generated, enabling application development/test to use the untouched production database.


• Migrate very large databases without business interruption for pre-cutover testing.

• Provide a quick return to service if a server or storage goes down in our Oracle 11G environment; FlexClone’s schema-as-a-service solution enables point-in-time recovery of individual schemas.

• SnapVault, the HA backup and recovery feature of ONTAP, is used to protect data with unique requirements, such as compliance data. In the past, we had to manually move a database to storage, then move it to a vault. In the latest release we can transfer from production directly to the vault, which is much more efficient and requires no manual intervention. With SnapVault we can store data on a filer and then capture user changes over time.

SnapVault is also used for keeping multiple copies of production databases for code releases. If developers want to retrieve a database from three releases ago, they can take multiple snapshots of a database, store it in a vault, then restore it to a point-in-time as needed.

ONTAP as the Foundation

Our use of NetApp products relies on the underlying ONTAP software. ONTAP supports features such as self-provisioning to automatically handle growth in filer volumes without human intervention. It also enables transparent migration between nodes in the event of any storage changes or failures without disruption to the databases. Its non-disruptive feature is essential to ensuring continuous access to databases during updates, migrations, and other volume-related changes.

These NetApp products have been instrumental in helping our database team work more efficiently and provide fast, efficient data replication and disaster recovery. We can meet recovery-point objectives ranging from minutes to hours. We keep both the active mirror and prior backup copies to enable selective failover points in the disaster recovery copy. These products—along with rigorous work processes—help us protect our data while maximizing our database performance in a wide variety of business and IT environments.
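To illustrate the SnapMirror-plus-FlexClone pattern described above (replicate to a secondary system, then clone the replica for dev/test), here is a minimal hypothetical sketch in the same SDK/ZAPI style as the earlier configuration script. The management LIF, volume and snapshot names, and the ZAPI parameters shown are assumptions made for the example, not NetApp IT's actual workflow.

```python
# Hypothetical sketch: snapshot a SnapMirror destination volume, then carve a
# space-efficient FlexClone from that snapshot for dev/test use.
from NaServer import NaServer
from NaElement import NaElement

def invoke(srv, api, **kwargs):
    # Build a ZAPI request from keyword args (foo_bar -> foo-bar) and send it.
    req = NaElement(api)
    for name, value in kwargs.items():
        req.child_add_string(name.replace("_", "-"), str(value))
    res = srv.invoke_elem(req)
    if res.results_status() != "passed":
        raise RuntimeError(f"{api}: {res.results_reason()}")
    return res

srv = NaServer("svm-dr.example.com", 1, 21)   # placeholder SVM management LIF
srv.set_transport_type("HTTPS")
srv.set_style("LOGIN")
srv.set_admin_user("admin", "secret")

# Snapshot the replicated (destination) volume, then clone from that snapshot,
# so the SnapMirror relationship itself is never broken.
invoke(srv, "snapshot-create", volume="erp_db_dr", snapshot="devtest_base")
invoke(srv, "volume-clone-create",
       parent_volume="erp_db_dr", volume="erp_db_devtest",
       parent_snapshot="devtest_base")
```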

Dina Ayyalusamy, Lead Database


Administrator, NetApp IT

7 Tips & Tricks to Make Storage Administration
Faster & Easier

7 ways to make your storage administration faster & easier. #NetAppIT

The NetApp on NetApp Program shares its real-world IT experiences in using NetApp products and services in a global enterprise IT environment. Our subject matter experts speak with representatives from other IT organizations about common IT challenges and best practices and the business cases driving product adoption. To learn about NetApp IT or to speak with one of our subject matter experts, talk to your NetApp sales representative or visit www.NetAppIT.com.
Read our blogs.
Read our other ebooks:
- Building a Foundation for Business Apps Agility
- 7 Perspectives on the Future of IT: The Drive Toward Business Agility

Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer’s installation in accordance with published specifications.

Copyright Information

Copyright © 1994–2016 NetApp, Inc. All rights reserved. Printed in the U.S. No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).

Trademark Information

NetApp, the NetApp logo, AltaVault, ASUP, AutoSupport, Campaign Express, Cloud ONTAP, Clustered Data ONTAP, Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster, MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, RAID-TEC, SANtricity, SecureShare, Simplicity, Simulate ONTAP, SnapCenter, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore, Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, WAFL and other names are trademarks or registered trademarks of NetApp Inc., in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. A current list of NetApp trademarks is available on the Web at http://www.netapp.com/us/legal/netapptmlist.aspx.

NA-20161003

