Nimble Storage Data Migration
SAN BASED
For workloads running on FCP or iSCSI (offline migration), the CirrusData Data Migration Server (DMS)
appliance can be used to migrate data from existing storage systems into Nimble Storage over the SAN
fabric. This is achieved by inserting DMS appliances into a host's data path using patented Transparent
Data Interception technology, which requires no production downtime. The target LUNs on the Nimble
Storage are automatically provisioned by the appliance based on the source LUNs' requirements. The
appliances intercept I/O and track data changes, enabling online data migration to be performed on
production systems. For remote sites that do not have direct FC connectivity, a remote DMS appliance
can be used for migration over a WAN via TCP/IP. Quality of Service features in the DMS appliance
ensure that there is no performance impact while data is being migrated. They can be set at the
beginning of each migration job in the GUI to migrate minimally, moderately, or aggressively based on
time of day and day of week (all customizable). The appliance actively monitors the arrays being
migrated and dynamically yields to production I/O, using the array's idle time (pending I/O,
throughput, etc.) to decide when to copy.
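CirrusData does not publish an API for this throttling logic, so the following is only a conceptual Python sketch of the behaviour described above: a copier that starts from the QoS ceiling configured for the current time window and backs off when the source array looks busy. The thresholds and function names are illustrative assumptions, not DMS internals.

# Conceptual sketch only -- not CirrusData DMS code. Shows how a migration
# copier might yield to production I/O on the source array.

def copy_rate_mib_s(mode_ceiling_mib_s: float,
                    pending_io: int,
                    production_throughput_mib_s: float) -> float:
    """Pick a copy rate for the next interval.

    mode_ceiling_mib_s comes from the QoS mode configured for the current
    time window (minimal / moderate / aggressive). The busy thresholds
    below are illustrative assumptions.
    """
    array_busy = pending_io > 32 or production_throughput_mib_s > 400
    if array_busy:
        # Yield: trickle-copy so production I/O is effectively unaffected.
        return mode_ceiling_mib_s * 0.1
    # Array is idle enough: copy at the full ceiling for this window.
    return mode_ceiling_mib_s


print(copy_rate_mib_s(mode_ceiling_mib_s=256.0, pending_io=4,
                      production_throughput_mib_s=60.0))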
The Data Migration Server (DMS) appliance consists of three types of ports.
If the Nimble Storage is connected via a switch, it needs to be zoned to the DMS appliance.
There are four connection methods that determine the insertion mode of the Data Migration Server
appliances, grouped as given below:
Physical Insertion
   Host Side Insertion
   Storage Side Insertion
Logical Insertion
   VSAN Insertion
   Zoning Insertion
Physical Insertion
Host Side Insertion
This mode is ideal when there is a small number of hosts or when migration is carried out in groups
of servers. It is also well suited to large-scale hosting data centers with multitenant storage farm
environments, because host side insertion guarantees that only the hosts, and the storage assigned
to them, owned by the customer being migrated are affected. With host side insertion, the appliance
is physically chained between the host and the FC switch. All FC paths from all hosts to the FC
switches must be inserted in order to capture all of the I/O to the set of LUNs being migrated for
those hosts. The insertion process disconnects the host FC cable from the switch port and
reconnects the path through a set of DMS ports (consisting of an upstream and a downstream FC port).
Storage Side Insertion
This mode is ideal when there is a larger number of hosts or when migration is carried out in groups
of more than eight servers. Storage side insertion causes less disruption and enables all hosts
connected to the storage to be migrated in one go. With storage side insertion, the appliance is
physically chained between the source storage controller ports and the FC switch. Every storage
controller port involved in presenting the set of source LUNs being migrated (known as the "migration
set") must be inserted, so that all I/O passes through the Data Migration Server (DMS) appliances.
The insertion process disconnects one cable from a downstream port on the FC switch and then
reconnects the path through a set of DMS ports (consisting of an upstream and a downstream FC port).
Therefore, if the source storage uses four ports (two from each controller) to present the migration
set of LUNs, it is necessary to use four Nexus ports (two on each fabric) on the DMS. In a high
availability configuration, the four ports should be spread across two DMS appliances, as shown below:
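To make the port arithmetic explicit, here is a small illustrative Python sketch (not a DMS sizing tool). It assumes one Nexus port pair per inserted storage controller path, spread evenly across fabrics and, for HA, across two DMS appliances.

# Illustrative port-count arithmetic for storage side insertion.
# Assumption: one Nexus (an upstream + downstream FC port pair) is needed
# per inserted storage controller port.

def storage_side_ports(controllers: int, ports_per_controller: int,
                       fabrics: int = 2, ha_appliances: int = 2) -> dict:
    inserted_paths = controllers * ports_per_controller    # paths to intercept
    nexus_needed = inserted_paths                           # one Nexus per path
    return {
        "nexus_total": nexus_needed,
        "fc_ports_total": nexus_needed * 2,                 # upstream + downstream
        "nexus_per_fabric": nexus_needed // fabrics,
        "nexus_per_appliance": nexus_needed // ha_appliances,
    }


# The example from the text: two controllers, two ports each.
print(storage_side_ports(controllers=2, ports_per_controller=2))
# -> 4 Nexus ports in total, 2 per fabric, 2 per DMS appliance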
Logical Insertion
VSAN Insertion
When existing FC switches support virtual SAN, CDS appliances are inserted into data paths by
connecting to available switch ports. VSANs are added to include the ports to be intercepted as well
as the nexus ports to use. The desired data paths are thereby intercepted transparently, in an
identical manner to the physical intercept method. Multiple links – either host or storage – can be
intercepted by a single nexus.
This requires no physical manipulation of cables in the SAN and provides the flexibility to insert into
large environments and selectively intercept a large number of data paths with fewer Nexus ports,
reducing configuration complexity. It allows the same configuration to support host side or storage
side insertion.
The existing switches must support virtual SANs and have available (free) switch ports.
Zoning Insertion
In zoning insertion, CDS appliances are inserted between unused storage ports and unused switch
ports. Selected hosts are then moved logically to the inserted ports through zoning.
This requires no physical cabling changes on active ports, and only the hosts that need to be
intercepted are included in the intercepted paths. This mode is well suited to projects that must be
done in waves.
It requires unused, inactive ports on the storage, as well as free switch ports in the fabric. It also
requires LUN masking changes, zoning changes, and a host rescan for the new paths.
At this point, and based on the scenarios above, the DMS needs to be inserted into the fabric so that it
can discover the storage landscape. While the steps are similar for all insertion modes, host side
insertion is assumed in this scenario. Each of the FC cables needs to be inserted sequentially, with each
insertion followed by a path validation on all of the DMS appliances.
The DMS appliance automatically gathers all of the discovered initiator WWPNs, target WWPNs, and LUNs
from all Nexus ports to build a picture of the SAN environment. The host initiator WWPNs are correlated
so that all of the initiators that "see" the same LUN are assumed to belong to a single host entity (a
conceptual sketch of this correlation follows the validation list below). Once the DMS appliance is
inserted, the configuration needs to be validated. Using the DMS console, verify that:
a. All ports are connected
b. The storage controller and initiator WWPN details are correct
c. The discovered SAN configuration (initiator WWPNs, target WWPNs, and LUNs) is correct
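The correlation step mentioned above can be illustrated with a short Python sketch: initiators that see a common LUN are merged into one host entity. This is only a conceptual model of the behaviour the text describes; the data structures and function names are assumptions, not DMS code.

# Conceptual sketch: group initiator WWPNs into host entities by shared LUNs.
# Input: initiator WWPN -> set of LUN identifiers (e.g. SCSI page-0x83 WWNs)
# that the initiator can see through the inserted Nexus ports.
from collections import defaultdict


def correlate_hosts(visibility: dict[str, set[str]]) -> list[set[str]]:
    """Initiators that see at least one common LUN are assumed to be one host."""
    parent = {w: w for w in visibility}

    def find(w: str) -> str:
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w

    lun_to_initiators = defaultdict(set)
    for wwpn, luns in visibility.items():
        for lun in luns:
            lun_to_initiators[lun].add(wwpn)

    for initiators in lun_to_initiators.values():        # union-find merge
        first = next(iter(initiators))
        for other in initiators:
            parent[find(other)] = find(first)

    hosts = defaultdict(set)
    for wwpn in visibility:
        hosts[find(wwpn)].add(wwpn)
    return list(hosts.values())


# Two HBA ports seeing the same LUN are grouped into one host entity.
print(correlate_hosts({
    "10:00:00:90:fa:aa:aa:01": {"lun-serial-A"},
    "10:00:00:90:fa:aa:aa:02": {"lun-serial-A", "lun-serial-B"},
    "10:00:00:90:fa:cc:cc:01": {"lun-serial-C"},
}))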
Step 3 – System Configuration
This step comprises configuring the DMS appliance and setting the parameters required for migration.
It involves setting up user credentials, network and security settings, alerts, and the migration
calendar (the times when migration is allowed to run).
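As a rough idea of what the migration calendar amounts to, the sketch below models it as a simple Python structure mapping day-of-week and hour to a QoS mode. The mode names and layout are assumptions for illustration; the real settings are made in the DMS GUI.

# Illustrative only: a migration calendar as (day-of-week, hour) -> QoS mode.

MODES = ("off", "minimal", "moderate", "aggressive")

# Default policy: copy minimally during business hours, aggressively at
# night and on weekends.
calendar = {}
for day in ("Mon", "Tue", "Wed", "Thu", "Fri"):
    for hour in range(24):
        calendar[(day, hour)] = "minimal" if 8 <= hour < 18 else "aggressive"
for day in ("Sat", "Sun"):
    for hour in range(24):
        calendar[(day, hour)] = "aggressive"


def mode_at(day: str, hour: int) -> str:
    """Return the QoS mode to use for a given day and hour."""
    return calendar.get((day, hour), "off")


print(mode_at("Wed", 14))   # minimal (business hours)
print(mode_at("Sat", 2))    # aggressive (weekend night)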
Step 4 – Migration
A migration session is a policy that defines the source LUNs (the LUNs to be migrated), the target LUNs,
and the conditions under which migration will occur, namely the schedule and the general
aggressiveness (Quality of Service) of the data copy process. Creating a session comprises the steps
below; a sketch of such a session policy follows the list.
a. Creating a new migration session
b. Selecting the source LUNs to migrate
c. Specifying the destination type (Local, Remote, Swing Box)
d. Pairing a destination LUN with each source LUN
e. Entering a description and specifying migration options such as holiday schedule, QoS, thin
migration, and time zone
f. Specifying the schedule for migration and the mode for each time frame
g. Verifying license usage and clicking "Finish" to start the migration
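To make the shape of such a policy concrete, here is a hedged Python sketch of a migration session as a plain data structure. The field names (qos_mode, thin_migration, and so on) are illustrative assumptions drawn from the options listed above, not the DMS object model.

# Illustrative data model for a migration session policy -- an assumption
# based on the options listed above, not the actual DMS object model.
from dataclasses import dataclass, field


@dataclass
class LunPair:
    source_lun: str          # e.g. source array LUN serial / WWN
    destination_lun: str     # paired Nimble volume (auto-provisioned)


@dataclass
class MigrationSession:
    description: str
    destination_type: str                 # "Local", "Remote" or "Swing Box"
    pairs: list[LunPair] = field(default_factory=list)
    qos_mode: str = "moderate"            # minimal / moderate / aggressive
    thin_migration: bool = True           # skip unallocated blocks
    time_zone: str = "UTC"
    holiday_schedule: list[str] = field(default_factory=list)  # ISO dates
    schedule: dict = field(default_factory=dict)  # (day, hour) -> qos mode


session = MigrationSession(
    description="ERP cluster wave 1",
    destination_type="Local",
    pairs=[LunPair("600508b1-src-0001", "nimble-vol-erp-01")],
    holiday_schedule=["2024-12-25"],
)
print(session)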
LUN verification validates data on a migrated LUN by comparing it with the data on the source LUN.
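Conceptually, this verification is a block-by-block comparison of the two LUNs. The sketch below shows the idea with block devices opened as binary files; the chunk size and device paths are assumptions, not how DMS implements it.

# Conceptual sketch of LUN verification as a block-by-block comparison.
# DMS performs this internally; paths and chunk size here are assumptions.

def luns_match(source, target, chunk_size: int = 1 << 20) -> bool:
    """Compare two block devices (opened as binary file objects) chunk by chunk."""
    while True:
        a = source.read(chunk_size)
        b = target.read(chunk_size)
        if a != b:
            return False     # mismatch -> verification fails
        if not a:
            return True      # both streams exhausted -> LUNs identical


# Usage (requires read access to both devices; paths are examples only):
# with open("/dev/sdx", "rb") as s, open("/dev/sdy", "rb") as t:
#     print(luns_match(s, t))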
Once initial synchronization has completed and the data has been validated, production is almost ready
to move over to the migrated disks. A final synchronization must be performed for data that has changed
since the last synchronization. This would involve the following steps.
a. Prior to the cut-over time, check the amount of data awaiting migration and use "Sync-Delta"
manually, or schedule it periodically, to minimize the delta during cut-over.
b. Stop the application on the source system and unmount the source LUN / volume / drive.
c. Complete the migration session to trigger the last round of synchronization. This completes the
data migration.
d. Un-assign the source LUNs from the hosts and rescan to ensure they have disappeared.
e. Use the "Auto-Provision" feature to automatically have the new storage remove the destination
LUNs from the DMS system and present them to the host entities (which are also created
automatically). Create the required zoning to publish the new LUNs directly to the hosts.
f. Rescan the hosts for the new drives / LUNs (a Linux rescan sketch follows this list). Windows
should automatically assign the correct drive letters. For Linux and UNIX file systems based on
disk labels, the existing mount points should still be valid; otherwise, adjust the host's mount
configuration (e.g. /etc/fstab) for the new drives / LUNs and start the application.
g. Once the application has been validated, the former production storage can be retired or,
optionally, scrubbed using DMS (separate license required).
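For the host rescan in step f, the standard Linux mechanism is to write to the SCSI host scan files under /sys. The short Python sketch below wraps that; it assumes a Linux host with root access and is not part of the DMS workflow itself.

# Rescan all SCSI hosts on a Linux server so newly presented LUNs appear.
# Assumes root privileges; equivalent to `echo "- - -" > .../scan` per host.
import glob


def rescan_scsi_hosts() -> None:
    for scan_path in glob.glob("/sys/class/scsi_host/host*/scan"):
        with open(scan_path, "w") as f:
            f.write("- - -")          # wildcard: all channels, targets, LUNs
        print(f"rescanned {scan_path}")


if __name__ == "__main__":
    rescan_scsi_hosts()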
Details of each of these steps, and how they need to be executed on the web console, are available in
the User Guide for DMS.
VMWARE BASED
For workloads running on the VMware hypervisor, Storage vMotion can be used to migrate data from
existing arrays to the new Nimble Storage array. Storage vMotion relocates virtual machine disk files
from one shared storage location to another with zero downtime, continuous service availability, and
complete transaction integrity. Storage vMotion is fully integrated with VMware vCenter Server to
provide easy migration and monitoring.
The following provides a detailed four-step migration solution to move VMs from existing storage to
Nimble Storage across data centers, using Storage vMotion and Nimble replication.
Step 1 – Provision Nimble Volumes and Create the Datastore
Publish storage volumes from the Nimble Storage (which would be part of the swing kit) to the vSphere
cluster and configure the LUNs as a new datastore in vCenter. Ensure that there is adequate space in
the new datastore to host all of the required VMs.
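Volumes can be created in the Nimble GUI or CLI; the array also exposes a REST API. The sketch below is a rough Python illustration using requests (token auth, then a volume create and an access control record); treat the management address, port, endpoints, payload fields, and the initiator group id as assumptions to verify against the array's REST API reference.

# Rough sketch: create and present a volume on a Nimble array over REST.
# Endpoints and fields are assumptions -- check the REST API reference.
import requests
import urllib3

urllib3.disable_warnings()    # lab only: the array uses a self-signed certificate
ARRAY = "https://nimble-mgmt.example.com:5392"   # placeholder management address

# 1. Obtain a session token.
auth = requests.post(
    f"{ARRAY}/v1/tokens",
    json={"data": {"username": "admin", "password": "password"}},
    verify=False,
).json()
headers = {"X-Auth-Token": auth["data"]["session_token"]}

# 2. Create a volume sized in MiB (1 TiB here).
vol = requests.post(
    f"{ARRAY}/v1/volumes",
    json={"data": {"name": "esx-datastore-01", "size": 1048576, "online": True}},
    headers=headers,
    verify=False,
).json()
print(vol["data"]["id"])

# 3. Present the volume to the ESXi hosts' initiator group (hypothetical id),
#    then rescan storage in vCenter and create the VMFS datastore on it.
requests.post(
    f"{ARRAY}/v1/access_control_records",
    json={"data": {"vol_id": vol["data"]["id"],
                   "initiator_group_id": "ig-esx-cluster"}},
    headers=headers,
    verify=False,
)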
Step 2 – Storage vMotion
Right-click the VM that needs to be migrated and select "Migrate". On the next screen, select the
"Change datastore" option, then select the datastore configured on the Nimble Storage. Once the
configuration changes have been reviewed, click "Finish" to start the Storage vMotion.
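The same operation can also be scripted. Below is a hedged pyVmomi sketch of a Storage vMotion (relocating a VM's disks to the Nimble datastore); the vCenter address, credentials, and object names are placeholders, and error handling is omitted for brevity.

# Sketch: Storage vMotion via pyVmomi (pip install pyvmomi).
# vCenter address, credentials and object names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()


ctx = ssl._create_unverified_context()           # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")
    target_ds = find_by_name(content, vim.Datastore, "nimble-datastore-01")

    # Relocate only the storage (a datastore change is a Storage vMotion).
    spec = vim.vm.RelocateSpec(datastore=target_ds)
    task = vm.RelocateVM_Task(spec)
    print("Storage vMotion started:", task.info.key)
finally:
    Disconnect(si)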
Step 3 – Set Up Replication
Set up replication using Nimble Storage SmartReplicate between the array in the old data center and
the array in the new data center. Setting up replication between two Nimble Storage arrays is simple:
by creating a replication partner on each of the two arrays, replication is automatically set up.
Replication is automatic, based on the protection schedule assigned to the volume or volume
collection. The details of creating a replication partner can be found in the Admin Guide.
Step 4 – Handover
Once all the VMs have been migrated to the Nimble Storage datastore at the source and the data has
been replicated to the Nimble Storage in the target DC, a handover can be initiated. The Nimble
handover process automatically pauses the replication partnership and transfers volume ownership,
allowing the VMs to be brought online in the target DC.
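If the handover needs to be scripted rather than driven from the GUI, the Nimble REST API exposes actions on volume collections. The fragment below follows the same token-auth pattern as the earlier volume sketch; the handover action path, the volume collection id, and the replication partner id are assumptions to confirm against the REST API reference.

# Hedged fragment: trigger a volume collection handover via the REST API.
# The action path and payload fields are assumptions -- verify them against
# the Nimble REST API reference before use.
import requests

ARRAY = "https://nimble-target.example.com:5392"          # target DC array (placeholder)
headers = {"X-Auth-Token": "<session token from POST /v1/tokens>"}

volcoll_id = "volcoll-id-placeholder"     # placeholder volume collection id
partner_id = "partner-id-placeholder"     # placeholder replication partner id

resp = requests.post(
    f"{ARRAY}/v1/volume_collections/{volcoll_id}/actions/handover",
    json={"data": {"id": volcoll_id, "replication_partner_id": partner_id}},
    headers=headers,
    verify=False,
)
print(resp.status_code, resp.json())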