h8174 Migrating Celerra VNX Replicator WP
Abstract
This white paper provides best practices for migrating data from
an EMC file-storage array (such as a Celerra NS or VNX) to one
of EMC's new VNX2 arrays. It includes simple step-by-step
instructions on how to use VNX Replicator and the new VNX
File Migration tool to perform the migration. If you wish to
migrate data from an EMC block-storage array (such as a
CLARiiON AX or CX array), refer to the white paper Migrating
Data from an EMC CLARiiON Array to a VNX Platform Using SAN Copy.
Table of Contents
Executive summary.................................................................................................. 4
Introduction ............................................................................................................ 4
Audience ............................................................................................................................ 5
Terminology ....................................................................................................................... 5
Using VNX Replicator to migrate data from a Celerra to a VNX series array ................. 5
Appendix A: Using Replication Wizard step-by-step procedure ........................................... 7
Step 1 Verify that the Replicator and Snapsure licenses are enabled ............................. 7
Step 2 Create interfaces for access on the VNX array ...................................................... 8
Step 3 Create interfaces for the Data Mover interconnects .............................................. 8
Step 4 Use the Replication Wizard to replicate the VDM ................................................. 9
Step 5 Replicate the file systems on the VDM that you wish to migrate to the VNX ........ 31
Step 6 Verify the last synchronization time ................................................................... 38
Step 7 Change the source file system to Read Only ................................................... 38
Step 8 Create a server interface on the VNX .................................................................. 40
Step 9 Using Unisphere on the Celerra, switch over the replicated VDMs...................... 40
Step 10 Using Unisphere on the Celerra, switch over the replicated file systems .......... 42
Step 11 Verify file systems are available ...................................................................... 43
Step 12 Clean up replication sessions .......................................................................... 44
References ............................................................................................................ 87
Executive summary
EMC's VNX series of midrange storage arrays introduced new architectural
innovations such as Fully Automated Storage Tiering (FAST), unified replication,
unified management, and a SAS back end. The point-to-point architecture of the SAS
back end offers higher availability and better performance than older architectures
such as Fibre Channel. The VNX2 continues the evolution with the MCx Operating
Environment, which replaces FLARE, as well as updates to the existing FAST feature. As the
system keeps evolving to better serve your needs, there is a corresponding need for
continued support for migrating data as well.
This white paper serves as a migration guide to help you move data from an EMC
midrange file-storage array (such as a Celerra NS model or VNX) to a new VNX2 array.
The paper provides simple step-by-step best practices that show you how to use VNX
Replicator software to perform a low/no-cost migration of data to your new VNX
array.
If you need to migrate data from one of EMC's block-storage arrays (such as a
CLARiiON AX or CX model), refer to Migrating Data from an EMC CLARiiON Array to a
VNX Platform Using SAN Copy.
Introduction
To help ensure a smooth transition for customers who wish to migrate their data from
older EMC midrange arrays to the powerful evolving VNX series array, EMC supports a
wide range of data migration tools and techniques. EMC recently performed
numerous tests to pinpoint the most efficient way to use these tools to migrate data
to a new VNX array. After careful testing, EMC concluded that the most effective way
to migrate data from a block-based array (such as a CLARiiON AX or CX model) is to
utilize EMC SAN Copy, which is described in Migrating Data from an EMC CLARiiON
Array to a VNX Platform Using SAN Copy.
On the other hand, our tests showed that when migrating from a file-based array
(such as a Celerra NS model or VNX), the most effective tool is VNX Replicator. The
rest of this white paper provides a step-by-step procedure, including best practices,
to help you make a smooth transition to your new VNX array using VNX Replicator.
This procedure will cover the specific use case of migrating CIFS shares on Virtual
Data Movers (VDMs). There are variations on the procedure if you are not using VDMs
or if you are using NFS. NFS considerations are covered briefly at the end of the
document. This paper also covers the new EMC VNX File Migration tool, which greatly
simplifies migrating data for you and, in some cases, provides the ability to migrate
transparently to the end user.
Audience
This white paper is for customers who have purchased, or are considering
purchasing, an EMC VNX series array and wish to migrate their data from a previous
generation EMC file-storage array to the VNX series hardware. The paper also serves
as a guide for EMC field services and customer service employees. Familiarity with
VNX Replicator is highly recommended.
Terminology
Please note that you need to be running Celerra Replicator (V2), not the older version
(V1), to perform the migration described in this paper.
Virtual Data Movers (VDM): A VDM is a software feature on the Celerra that allows
an administrator to separate CIFS servers, replicate their CIFS environment, and
move VDMs from Data Mover to Data Mover.
VNX Replicator: VNX Replicator is licensed software that resides on the VNX
series array. It is part of the Remote Protection Suite, which must be licensed in
order to enable this feature via Unisphere.
Once the configuration is in place, you can plan the migration of the file
systems that you would like moved to the new VNX. When moving the file systems,
consider the following objects that may be involved in migrating to the
new system.
o Virtual data mover root file systems
o Production file systems
o Checkpoints
o Checkpoint schedules
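These objects can be inventoried from the source Control Station CLI before planning any sessions. The following is an illustrative sketch only; vdm1 is a placeholder VDM name, and exact option spellings should be confirmed against the CLI reference for your DART/VNX OE release.

```shell
# List all VDMs and note which physical Data Mover hosts the one to migrate
nas_server -list -vdm

# Show the CIFS servers and file systems associated with the VDM (vdm1 is a placeholder)
nas_server -info -vdm vdm1

# Production file systems and their checkpoints both appear in the file system list
nas_fs -list

# List any checkpoint schedules that will need to be re-created or migrated
nas_ckpt_schedule -list
```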
You have a few options when considering migrating using VNX Replicator.
1. Manually configure the replication sessions via the CLI or GUI.
a. This involves individual commands or GUI clicks for each piece of
setup required: commands for each configuration piece, and
replication commands for each replication session (the VDM and each
production file system).
2. Use the Replication Wizard, which can set up the interconnects and replication
sessions for the VDMs and file systems.
a. Using the wizard simplifies setting up the replication sessions for the
VDMs and production file systems; however, the configuration will still
have to be set up manually on the destination system.
b. To see a step by step walkthrough of using the Wizard, see Appendix A.
3. Use the VNX File Migration tool to complete the migration.
a. This new feature shows the evolving nature of the VNX array. It can
migrate the configuration in a few steps, as well as migrate the VDMs
and production file systems for you. Migrating a VDM takes a simple
set of three steps, and migrating a production file system takes two.
b. To see a step by step walkthrough of using the VNX File Migration tool
see Appendix B.
If your array is running a version of DART older than 6.0, we strongly encourage you to
engage Professional Services from EMC, or an EMC partner, to execute your data
migration from your Celerra to a VNX.
Step 1 Verify that the Replicator and Snapsure licenses are enabled
On your Celerra and VNX array, click the Settings tab, and click Manage Licenses for
File. Enable Replicator and SnapSure, as shown in Figure 1. Log out of the GUI and
log back in.
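The same check can be made from each Control Station CLI. A minimal sketch; the license keywords shown are the ones commonly used on Celerra/VNX File, but verify them against your release's CLI reference.

```shell
# List the licenses currently enabled on this Control Station
nas_license -list

# Enable SnapSure and Replicator (V2) if they are not shown as enabled
nas_license -create snapsure
nas_license -create replicatorV2
```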
The wizard asks you to specify the destination VNX array. If your destination VNX
array is present on the list, you may select it. If the VNX is a brand new array in your
environment, then you may need to add it by clicking the New Destination Celerra
button as shown in Figure 4.
The wizard will ask you to create the Celerra Network Server. First, give your
destination VNX array a name in the Celerra Network Server Name field. Then specify
the destination VNX array's Control Station IP address, and create a passphrase that
you will remember, as shown in Figure 5.
Next the wizard will ask you to specify the destination credentials, as illustrated in
Figure 6. Type the IP address of the destination VNX array, and then type the login
and password credentials for this array that you created in step 3. Click Next.
Next the wizard steps you through creating the peer Celerra Network Server. Type a
name for your source Celerra array, and then type the source Celerra's IP address as
shown in Figure 7. Then click Next.
Review the selections that you have made, and then click Submit as shown in Figure
8.
The network servers should be created successfully, as shown in Figure 9. Click Next
to continue.
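The same trust relationship can also be created from the CLI on each Control Station. In this sketch, the system names, IP addresses, and passphrase are placeholders; the passphrase must be identical on both sides.

```shell
# On the source Celerra: register the destination VNX Control Station
nas_cel -create vnx_dest -ip 10.2.2.10 -passphrase examplepass

# On the destination VNX: register the source Celerra with the same passphrase
nas_cel -create celerra_src -ip 10.1.1.10 -passphrase examplepass

# Confirm the peer entries on each side
nas_cel -list
```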
Now that you have successfully created your destination Celerra Network Server (the
destination VNX array entry), select it from the list, as shown in Figure 10.
Next the wizard asks you to enter the Data Mover interconnect. You will probably
have to click New Interconnect, as shown in Figure 11, to create a new Data Mover
interconnect.
Specify a name for the Data Mover interconnect. Choose the desired Data Mover,
select the Show Advanced Settings checkbox, select the desired IP address (or
addresses) for the interface to use, and click Next as shown in Figure 12.
Specify the IP address, login, and password credentials for the target VNX array. Use
the nasadmin user account and credentials that you created in step 4, and click Next.
This is shown in Figure 13.
Specify a name for the Data Mover interconnect that you are creating on the
destination VNX array. Choose the desired Data Mover, and select the Show
Advanced Settings checkbox. Select the desired IP address(es) for the interconnect,
as shown in Figure 14, then click Next.
We will not modify any of the settings in the Interconnect Bandwidth Schedule
window, so simply click Next as shown in Figure 15.
Make sure the Overview/Results window shows the values you entered. Click Submit
(as shown in Figure 16) to create your source and target interconnects.
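A CLI sketch of the same interconnect creation follows; the interconnect name, Data Mover, system name, and interface IPs are placeholders, and the syntax should be checked against your release's nas_cel reference.

```shell
# On the source Control Station: create the interconnect from server_2
# to the destination system registered earlier
nas_cel -interconnect -create src_to_dest \
  -source_server server_2 -destination_system vnx_dest \
  -destination_server server_2 \
  -source_interfaces ip=10.1.1.21 -destination_interfaces ip=10.2.2.21

# A matching interconnect must also be created in the opposite direction
# on the destination Control Station; then confirm both with:
nas_cel -interconnect -list
```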
Return to setting up the replication session. Select the source Data Mover
interconnect that you have just created from the list, and click Next as shown in
Figure 18.
Select the desired source and destination interfaces from the drop-down lists, or just
leave them set to Any to let the system decide. (This is shown in Figure 19.) It is a best
practice to segregate replication traffic from data access to the CIFS and NFS file
systems that are in use on the system. Click Next.
In the Select Source window, type a replication session name and select the desired
VDM to be replicated as shown in Figure 20. Click Next.
In the Select Destination window, you may select an existing VDM if you have one
available. If you do not have a VDM to use, select Create Using Storage Pool and
specify the desired pool or select clarsas_archive to allow the wizard to create a
target pool for you; then click Next as shown in Figure 21.
Select an update policy. You may choose to accept all of the defaults shown in
Figure 22. Click Next.
Close the window after the wizard finishes. Your session will appear under the
Replicas > Replications tab, as shown in Figure 24.
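For reference, the equivalent VDM session can be created from the CLI. The session name, VDM name, and pool are placeholders in this sketch, and the 10-minute max_time_out_of_sync mirrors the wizard's default update policy.

```shell
# Create the VDM replication session over the interconnect created earlier
nas_replicate -create rep_vdm1 -source -vdm vdm1 \
  -destination -pool clarsas_archive \
  -interconnect src_to_dest -max_time_out_of_sync 10

# Check that the session exists and is synchronizing
nas_replicate -list
nas_replicate -info rep_vdm1
```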
Step 5 Replicate the file systems on the VDM that you wish to migrate to the VNX
When file systems are on VDMs, both the VDM and the file system(s) need to be
replicated. Now that you have replicated the VDM, you need to replicate any file
systems on the VDM that you wish to migrate to the VNX. You can use the
Replication Wizard again to help you migrate the desired file system(s).
Select a Replication Type by selecting File System, then click Next as shown in Figure
25.
Select Ongoing File System Replication and click Next as shown in Figure 26.
Specify your destination array: This will be the same target VNX array you
selected when running the replication wizard for the VDM (as shown in Figure
10 on page 16), then click Next.
Select Data Mover Interconnects: Since you created Data Mover interconnects
the last time you ran this wizard for the VDMs, the available interconnect(s) will
appear in a list. Select the desired interconnect (as shown in Figure 18 on
page 24), and then click Next.
Select the replication session's interface: Accept the defaults of Any (as
shown in Figure 19 on page 25) if this is the only replication session you plan
to have running. If you have multiple sessions running, then it may prove
beneficial to select a specific interface for this replication session. Click Next.
Select Source: Give the file system replication session a name, select the
desired file system you wish to replicate from the drop-down list (as shown in
Figure 27), then click Next.
Select Create Destination File System and select a target storage pool or accept the
clarsas_archive choice for your Storage Pool and Checkpoint Storage. Then select the
VDM in which your file system resides from the VDM drop-down menu, as shown in
Figure 28. Click Next.
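A CLI sketch of the equivalent file system session follows; the session, file system, and pool names are placeholders, and note that the wizard additionally places the destination file system on the destination VDM for you, which this bare form does not show.

```shell
# Create a replication session for one production file system;
# repeat per file system on the VDM
nas_replicate -create rep_fs1 -source -fs fs1 \
  -destination -pool clarsas_archive \
  -interconnect src_to_dest -max_time_out_of_sync 10

# Confirm the session details
nas_replicate -info rep_fs1
```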
Accept the default 10-minute update policy as shown in Figure 29, unless you wish to
change it. Click Next.
Leave the checkbox Use Tape Transport? clear, and click Next as shown in Figure 30.
(Migrating really large file systems that do not easily lend themselves to replication
over the WAN is not covered in this procedure.)
Figure 30. The Use Tape Transport? checkbox should be clear in the Select Tape
Transport window
Review the choices you have made, and click Finish. When the wizard finishes
creating the file system replication session, click Close to close this window as shown
in Figure 31.
Figure 31. Closing the Replication Wizard for the file system
Step 7 Change the source file system to Read Only
Stop clients from writing to your source file system, un-map the file system, and
change the file system properties to Read Only as described below.
In Unisphere on the Celerra array, go to Storage > File Systems and click the Mounts
sub-tab. Select the file system mount that you are going to migrate and click
Properties at the bottom of the window to open the Properties window. In the
Properties window, select Read Only and click OK as shown in Figure 33.
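The same change can be made from the Control Station CLI. In this sketch, vdm1 and fs1 are placeholders for the VDM (or Data Mover) and file system in question, and clients should be disconnected first.

```shell
# Temporarily unmount, then remount the source file system read-only
server_umount vdm1 /fs1
server_mount vdm1 -option ro fs1 /fs1

# Optionally trigger a final update so the destination has the latest data
nas_replicate -refresh rep_fs1
```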
Step 9 Using Unisphere on the Celerra, switch over the replicated VDMs
The status of the session will change to Switched Over, as shown in Figure 35.
Step 10 Using Unisphere on the Celerra, switch over the replicated file systems
Once you have switched over the VDMs, you need to switch over the file system
sessions. Select the desired file system replication session(s) you wish to switch
over, and click Switchover, as shown in Figure 36. Click OK to confirm the switchover.
Figure 36. Selecting the file system replication session to switch over
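The switchover can also be issued from the CLI; rep_fs1 below is a placeholder session name, and the VDM session should be switched over before its file system sessions, as in the GUI procedure.

```shell
# Switch over each file system replication session after the VDM session
nas_replicate -switchover rep_fs1

# The sessions should now report a switched-over state
nas_replicate -list
```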
The file systems should change to Switched Over as shown in Figure 37.
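Verification and the final cleanup (Steps 11 and 12) can likewise be performed from the CLI. The session names below are placeholders, and the delete should be run only after client access to the destination has been confirmed.

```shell
# Verify the migrated file systems are mounted and sized as expected
# on the destination Data Mover
server_df server_2

# Once access is confirmed, clean up each replication session from both sides
nas_replicate -delete rep_fs1 -mode both
nas_replicate -delete rep_vdm1 -mode both
```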
Before you begin any migration operation, EMC recommends that you run the
collect_support_materials script located in the /nas/tools directory, or perform a
diagnostic collection via the Unisphere Service Manager.
Before performing a File System or VDM migration, verify that the destination
system, running VNX OE for File version 8.0, has been configured with the
appropriate File OE pools of sufficient size to support the intended
migration(s).
You must set up a bi-directional Control Station trust relationship with which to
perform the system configuration migration, if applicable. This can be done
using this GUI tool.
Additionally, the Source & Destination Data Movers must have an interface
configured to support the bi-directional Data Mover Interconnects necessary to
support DM-to-DM communications. These interfaces should not be used with
any CIFS or NFS Server instances on either system. They should be dedicated
to the interconnect which will be the data transport method for the migrations.
You must set the time on the Data Movers and Control Stations to be in sync
on each system you are migrating to and from.
You must verify that the routing for the Data Movers is correctly set for the
network that each particular system is in.
If you would like to move a file system on a physical data mover to a VDM on
the destination side, the VDM must be created and empty on the destination
side.
You must ensure the versions of code are supported; see the following table.
Source System
Destination System
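Several of these prerequisite checks can be scripted from the Control Station CLI. The sketch below uses placeholder names and an example NTP address; verify each command form against your release's man pages before use.

```shell
# Capture support materials on each Control Station before starting
/nas/tools/collect_support_materials

# Point a Data Mover's time service at NTP (server_2 and the IP are placeholders)
server_date server_2 timesvc start ntp 10.0.0.5

# Check the routing table configured on the Data Mover
server_route server_2 -list

# Record the code versions on source and destination for the support matrix
nas_version
server_version server_2
```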
This window has three functions that can be performed: Add System, Refresh Systems, and Reset All.
Add System Click this button to add Source and Destination Primary Control Stations to the VDM GUI
tool
Refresh Systems Click here to re-poll the systems to update the GUI database with any configuration
changes that may have occurred since the system was added to the VDM GUI (e.g., additional interfaces;
Interconnect changes; Remote System changes; Storage Pool changes, etc.)
Reset All Clicking this will remove all Tasks and systems from the Manage Servers window, cleaning
the database of all items for a fresh start with the GUI tool. This will not delete migrations
or abort tasks.
Click Add System, enter the Control Station IP address and the Username (e.g., nasadmin or root), and
the respective Password. EMC recommends that you add only the Source and Destination systems that
will be used for the migration activity at hand, to reduce any confusion. If you receive an error when trying
to add a system, verify that the username and password are correct and that the system responds to a
ping, and try again.
Note: Only the Celerra/VNX nasadmin and root user accounts can be used.
You will notice the Systems appear in the Manage Systems pane after they have been added. You can
click on objects in the left pane to see more information about them in the Information pane. Some of the
folders in the list can be right-clicked to perform additional tasks:
The system serial number folder will give a menu to refresh or delete that system
The Interconnects folder will give a menu to add or refresh a data mover interconnect
The Remote Systems folder will give a menu to add or refresh a remote system
Highlight the Remote Systems folder on the source system, right-click, and select Add Remote System
to generate a pop-up from which you will select the remote system from the dropdown list. The
Passphrase that you enter must be the same for each Control Station that is added. Repeat
the Add Remote System step on the destination system, which will set up the bi-directional trust
relationship.
Note: The Passphrase is similar to a password, but is generated by the user when
setting up the Control Station to Control Station communications link, and must be
the same on both systems for the bi-directional trust relationship.
Note: This dropdown list may be empty if the system you are trying to add has been
added already as a remote system. Make sure to only add the Primary Control Station.
You will see a pop-up box when you click Add Interconnect that will guide you through the creation of the
interconnect. EMC recommends entering a meaningful name for the interconnect, such as dm2source.
Then select the Source Data Mover, Source Interface, Destination System, Destination Data Mover, and
Destination Interfaces.
Optionally, select the Create Interconnect on Peer checkbox to create the bi-directional
interconnects on both systems in the same step, along with the Peer interconnect name, to make things
simpler for you. Otherwise, after completing the first interconnect, you must select the other system's
Interconnects folder and click Add Interconnect to complete the bi-directional communications link between
Source and Destination Data Movers.
Note: The name you choose can be any alpha-numeric name you would like. Spaces
and special characters are not allowed. When using the Create Interconnect on Peer
checkbox you will have to perform a Refresh Systems to have the destination
interconnect populated into the GUI.
After you create the Data Mover interconnects, they must be validated. This ensures that each interconnect
has connectivity to its opposing Data Mover. You can validate the interconnects by right-clicking the
individual interconnect name and choosing Validate Interconnect. You will notice that you can also
delete interconnects from here.
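Validation can also be performed from the CLI. In this sketch the id value is a placeholder taken from the list output; confirm the exact -validate syntax against your release's nas_cel reference.

```shell
# List interconnects to find their ids, then validate each one
nas_cel -interconnect -list
nas_cel -interconnect -validate id=20001   # 20001 is a placeholder id
```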
When you click the System Configuration Migration wizard, an information message is generated. Click
Next and enter a name for the migration of the system configuration information, then select the source
and destination systems. This wizard can be run for both the Data Mover configuration and the Cabinet
or Usermapper configuration.
Once you create a name and select the Source and Destination systems, you can choose Data Mover or
Cabinet and then click Next. EMC recommends selecting the Cabinet level first, as this will migrate the
Usermapper service and database. This is necessary if you are going to migrate the Usermapper client as
part of the Data Mover system configuration migration. Selecting the Overwrite Destination checkbox
ignores any configuration on the destination Data Mover and overwrites it. If you want the
destination Data Mover to mirror all the services of the source Data Mover, select the
Overwrite Destination option when creating the Pre-Plan, or when editing an existing Pre-Plan.
Next, select any or all of the services that you would like to migrate from the source Data Mover to the
destination Data Mover. The server_usermapper client service requires migration of the Usermapper
service, which is performed by the Cabinet-level migration. If you are migrating multiple cabinets to one
system, take care: the Usermapper databases must be manually merged if need be.
When migrating using the Cabinet level, keep in mind that if you use the Overwrite Destination flag, you
will be replacing the Usermapper database with another system's database. This can cause mapping
issues, which in turn will cause access issues. If you are not 100% sure you should overwrite the
destination, contact your local TC for assistance.
The final step is to review the pre-plan summary information; this is where you will see the options you
have selected in the wizard as well as the CLI command that will be executed.
Clicking the finish button will place this pre-plan into the tasks pane of the main VDM GUI tool screen as
is seen in the following screenshot.
To examine the Pre-plan, highlight the entry under Tasks. On the right pane, you will see Information
about the Pre-Plan, as well as Options to Migrate Configuration, Edit, or Delete the Pre-Plan for this task.
To execute the Pre-Plan, select Migrate Configuration under the Options pane. You will get a pop up
confirmation of starting the configuration migration. You can also see the progress bar update as the task
is executed and the command will complete or fail. If the System Config Pre-Plan fails, you can look at the
logs by clicking the Logs button. For more information on the Logs, refer to the GUI Logs section of this
walkthrough.
Note: You may see a System Config Pre-Plan fail if the Destination system is already
configured with certain Data Mover service settings (e.g., Params, DNS or NTP), and
you have not selected the Overwrite Destination option.
You may see the pop-up below if you are migrating server parameters, as the Data Mover will need to be
rebooted to ensure the parameters take effect. You may also see the task fail if the Usermapper service is
not correctly set up on the destination side. In the case of such a failure, you will have to run the Cabinet
(Usermapper) migration and then run the Data Mover Configuration migration.
You will notice as tasks proceed that the Event Log at the bottom of the main screen will show the status
for different options that are performed.
Note: If you see a failure during the system configuration migration you can always go
back and edit the task and re-run it. This edit can be done even if you want to change
from a data mover level to cabinet level migration.
Notice you can see the output of the commands run in the Information pane once the command is
complete.
Now that you are done migrating the system configuration, you can delete that task from the VDM GUI
tool by selecting the task and clicking Delete under the Options. You will get a pop-up asking you to
confirm the deletion of the pre-plan. When you confirm, it will be removed from the tasks list.
Next you are ready to start a VDM migration. Start this process the same way you initiated the System
Configuration Migration Wizard.
The VDM Migration Wizard overview window pops up. EMC recommends that you Refresh Systems from
the task wizard pop up to ensure that you have the most up to date information in the GUI before
continuing. Click Next to display the Select Source, Destination, and interconnects screen and enter an
alphanumeric name for the plan, then select Source System, Destination System, and Interconnect for
the migration.
On the next screen, select the Source VDM that you want to migrate. You will need to select either a VDM
that is already created on the destination, or a Storage Pool on that destination in which to create the VDM.
This means you do not need to have a VDM on the destination: the system will migrate existing
VDMs and file systems to the destination side if you select the pool to use.
Note: If you do not yet have any storage for File on the new system, you can log
into Unisphere, click the Storage tab, and then click Disk Provisioning Wizard for
File. This launches an easy-to-use wizard to help you set up storage on the File side
of the VNX.
Note: EMC recommends letting the migration operation create the VDM for you on the
destination system. If there is a specific reason you need to manually create the
destination VDM, you must make sure that VDM is in the mounted state.
On the next screen you will choose the storage pool mapping for the VDM file system.
You will be able to choose different pools for each file system on the next screen if applicable. The
following screenshot provides you with the ability to change both the Storage pool and the Savvol
storage pool.
On the next screen you will have the option to migrate the existing IPs on the source system to the
destination system (Take_Over_IPs). After selecting the checkbox to Take_Over_IPs, select the source
device and destination device that you would like to map to.
On the next screen you will be able to select the interface destination device if you would like to change
which interfaces go to which devices.
The last screen you will see is the pre-plan summary screen where you can scroll down and review all
information that you have selected to this point and validate that it is what you would like. You can go
back and change anything that needs to be changed at this point.
Click Finish to add this pre-plan into the tasks list. From the tasks list you can select the task and then
see the Options available for this task.
After the Pre-Plan has been created, you will be able to select the task and click Create Plan which will
send the CLI command to the system to create the VDM Migration Plan. You will receive a pop up to
confirm the plan creation. During this time you will notice the Progress and the State change in the tasks
list.
Once the VDM Migration plan is created, it is listed under Tasks in a Defined state, as Type VDM Plan.
Note that the Pre-Plan terminology is not used (see the following screenshot). Also note that the available
Options have changed.
Note: If the VDM Migration Plan was created via the CLI, you will not be able to edit
the Plan via the VDM Migration GUI Tool.
Note: Creation and synchronization time of the Begin Migration step will depend on
the number of file systems, the data that populates them, and the rate of change of
that data.
When the migration is completed, you will notice the State has changed to READY TO COMPLETE and you
have the options to Complete / Cutover the VDM migration. After verifying the availability of your data,
and if you intend to switch access to the new destination system, you perform the switch by clicking on
the Complete / Cutover button.
The Complete / Cutover will perform the following:
o Down the interfaces on the source VDM
o Switchover the replication sessions to the destination side
o Up the interfaces on the destination VDM
When you click on Begin Migration, you will notice a popup that will indicate this process can take some
time to complete.
Notice the Options have changed, allowing you to Stop a migration task. This allows you to pause a
migration that is in the Ready to Complete state. If you click an operation such as Stop and the task
allows an abort attempt, you will see the option change accordingly. Because an abort is only an
attempt and never a 100% guarantee, you may receive a message that the abort
failed.
The abort via the GUI cannot be initiated on a task that was started via the CLI.
To abort a task via the CLI see the Troubleshooting section that follows.
When clicking the Complete / Cutover, you will notice a pop-up confirming that a service disruption could
occur, and that production (i.e., Read/Write access) will switch to the Destination system, while the
Source system will become Read/Only. Additionally, a checkbox is provided to Ignore Checkpoint
Mismatch. The checkbox is for any checkpoints that have been deleted, refreshed, or mounted to source
file systems while the migration planning and execution was in progress. This option must be checked in
that situation in order to complete the migration.
Note: When doing the complete/cutover, keep in mind that if you do not have
dynamic DNS, you will want to do a manual DNS update shortly after the cutover to
the new system, especially if IPs are changing.
You will notice that the State changes from Completing to Completed once the operations are done and
the Event Log shows that the Task for completing the migration has succeeded. You will also notice that
the only remaining Option now available is Delete, which removes the completed task from the GUI
interface. Congratulations! You have just successfully completed a VDM Migration.
Once the migration is in a COMPLETED state, and you have verified access to all data on the destination
system, you can delete and clean up the migration session by clicking Delete in the Options pane.
Note: If the data is not accessible, or there is an issue that requires you to roll-back
the VDM Migration to the Source system, please call support immediately. Do not
click on the Delete step, as Roll-back can only be performed if the migration session
is in a COMPLETED state. Reference the document Using VNX File Migration
Technical Notes for the roll-back procedure.
Once you delete the completed migration session you will still see a task in the Tasks pane. This is
because the original migration plan is still defined, and only the migration session itself was deleted. You
will notice that you have to click Delete in the Options pane again to delete the Plan from the system. You
will have a checkbox listed, as shown below, where you can delete the VDM GUI Pre-Plan from the GUI as
well.
Note: Terminology can be confusing. The VDM GUI creates a Pre-Plan for all its
migration tasks, while the VDM Migration itself also requires the creation of a Plan,
whether performed via GUI or CLI.
After clicking Next, enter an alphanumeric Name for the FS migration session. You also select the
Source and Destination systems, as well as the Destination Interconnect to use.
Note: The migration Name cannot contain spaces; use underscores if necessary for
your naming convention.
Next you will see the Source file system and SAV Pool (SavVol) listed. Select a destination File
System or Storage Pool to migrate to. You can optionally select a destination SAV Pool, or VDM, to
migrate to. Finally, you have the option to exclude the source checkpoints from being migrated to the
Destination system, using the Checkpoints Excluded checkbox.
The SavVol is the pool that will house the file system checkpoint data; it is not uncommon to keep this
in a separate pool from the file system itself.
By default, any checkpoints that meet the criteria for migrating checkpoints will be migrated to the
destination side. If you do not want to migrate these checkpoints, select the Checkpoints Excluded
checkbox.
The next screen displays the Pre-plan Summary, where you can verify everything is as you would like, and
then click Finish to produce the Pre-plan.
On the VDM Migration Tool screen, you will notice the task appears in the Tasks pane.
Now that the Pre-Plan has been created, highlight it under Tasks; when you click the task, the Options
pane is populated with actionable items. This is where you click Begin Migrate to start the file system
migration.
Note: Unlike the VDM Migration, where a Pre-Plan and a Plan are created, the File
System Migration creates only a Pre-Plan, which is executed to perform the migration.
Note: Creation and synchronization time of the Begin Migration step will depend on
the number of file systems, the data that populates them, and the rate of change of
that data.
If you have an FSID conflict, a pop-up is displayed, which you will need to confirm to continue with
the migration. The pop-up explains what action may be required in the case of a File System ID conflict
between the Source and Destination systems.
As with the VDM Migration, a pop-up is displayed indicating that the migration could take some time.
After you click Begin Migrate, the Type changes to FS Migration, while the State and Progress sections
change as the migration goes through various stages, such as CREATING, INITIAL COPYING, and finally the
READY_TO_COMPLETE state.
When the data migration has completed, the READY_TO_COMPLETE state is shown, and new Options are
available. When ready, and preferably at a time when file system activity is at a minimum, select the
Complete/Cutover button to cut the migration over to the destination side. A pop-up informs you that
there may be a disruption of service, with an Ignore Checkpoint Mismatch checkbox to ignore any
checkpoint mismatches resulting from checkpoint activity on the Source system since the migration
process began.
Once you click OK, you will see the Progress and State change during the complete process.
Note: When doing the complete/cutover, keep in mind that if you do not have
dynamic DNS, you will want to do a manual DNS update shortly after the cutover to
the new system, especially if IPs are changing.
After the cutover is complete, the file system migration state reflects COMPLETED, and the Progress and
Information sections indicate that the process is finished. At this point you should verify that the
migrated file system data on the Destination system is accessible to the appropriate hosts. Do not click
Delete until you have verified that all data is accessible to all clients as intended; roll-back is ONLY
supported while the file system migration session is in a COMPLETED state, not after it is deleted.
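A minimal client-side sketch of such an access check follows. The export path and mount point are hypothetical placeholders; on a real client the commented mount command would be run first:

```shell
# Sketch: a readability check for a migrated NFS export, run on a client host.
MNT=/tmp/fs01          # stand-in path; on a real client use e.g. /mnt/fs01
mkdir -p "$MNT"
# On a real client, mount the migrated export first (names are hypothetical):
# mount -t nfs nas01.example.com:/export/fs01 "$MNT"
if ls -A "$MNT" >/dev/null 2>&1; then
    echo "OK: $MNT is readable"
else
    echo "FAIL: $MNT is not readable" >&2
    exit 1
fi
```

Repeating a check like this from each client subnet, for both NFS and CIFS paths, gives a quick confidence pass before you consider the Delete step.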
Note: If the data is not accessible, or an issue requires you to roll back the
migration to the Source system, call support immediately. Do not click Delete,
as roll-back can only be performed while the migration session is in a
COMPLETED state.
If you have an existing Disaster Recovery (DR) solution on your source system, you will want to run
through an IP Replication Incremental Attach process to move your DR to the new system. By updating
checkpoints and starting IP Replication from your new source system to your existing DR system, this
avoids having to do a full sync of your data from the source to the DR site. The Incremental Attach
procedure can be found in the document Using VNX Replicator, under the section entitled
Cascading replication with common base checkpoints.
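For orientation only, the Control Station commands involved have the general shape sketched below. Session, file system, and interconnect names are hypothetical, and the exact flags and sequencing for an incremental attach should be taken from Using VNX Replicator, not from this sketch:

```shell
# Sketch only: a command plan of the general shape used when re-establishing
# replication from the new source to an existing DR file system. Consult
# "Using VNX Replicator" for the authoritative incremental-attach procedure.
cat <<'EOF' > dr-reattach-plan.txt
# 1. Confirm interconnects and any existing replication sessions:
nas_cel -interconnect -list
nas_replicate -list
# 2. Re-create the session from the new source file system to the existing
#    DR file system, so common-base checkpoints avoid a full copy
#    (names and the sync window are placeholders):
nas_replicate -create fs01_dr -source -fs fs01 \
  -destination -fs fs01_dr -interconnect NewSource_DR \
  -max_time_out_of_sync 10
EOF
cat dr-reattach-plan.txt
```

Keeping the plan in a reviewed file like this makes it easy to walk through with support before touching a production DR relationship.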
Once you are satisfied that the migration was successful and, if applicable, the DR solution has been
moved, you can select the Delete button to clean up all system tasks and sessions related to this
migration operation. A pop-up confirms the operation and also allows you to delete the Pre-Plan from
the VDM GUI as well. When you click OK, the State changes to Deleting, and the FS Migration task and
Pre-Plan tasks are deleted.
Note: It is also good to know that the Delete operation cleans up and removes all
Source and Destination IP Replication sessions associated with the migration
activity.
This concludes the walkthrough of the VDM Migration GUI Tool. You should now have the background
necessary to successfully perform Data Mover System Configuration Migrations, VDM Migrations, and File
System Migrations.
Conclusion
EMC recommends VNX Replicator as the preferred tool for migrating data from an
EMC file-storage array (such as a Celerra NS or VNX) to one of EMC's new VNX
arrays. This white paper provides step-by-step instructions on how to use Replicator
as well as the new VNX File Migration tool to perform the migration. If you wish to
migrate data from an EMC block-storage array (such as a CLARiiON AX or CX array),
please refer to the white paper Migrating Data from an EMC CLARiiON Array to a VNX
Platform Using SAN Copy.
References
The following are available on support.emc.com:
Migrating Data from an EMC CLARiiON Array to a VNX Platform Using SAN Copy