Dell EMC Unity Implementation and Administration Lab Guide - 5.2
IMPLEMENTATION AND
ADMINISTRATION
LAB GUIDE - 5.2
PARTICIPANT GUIDE
Dell Confidential and Proprietary
Copyright © 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be
trademarks of their respective owners.
Lab Topology
Review the dedicated lab environment.
• WIN1
• WIN2
• NFS Client: Linux1
• UnityVSA-Source
• UnityVSA-Destination
• Domain Controller/DNS/NTP: ADDC
https://vcsa.hmarine.test:9443
The preceding list details each system within the lab setup, providing names, IP
addressing, and access credentials.
Student Desktop
When accessing the lab setup, the student is redirected to the Student Desktop
Windows 10 system. The Task bar is preconfigured with application icons that are
needed for the lab exercises.
Throughout the lab exercises, students move between the Student Desktop,
Windows, and Linux Hosts. Students log in to the Windows and Linux hosts using
different credentials depending on the Lab being performed.
Scenario:
In this lab, you take a tour of the Unisphere Dashboard, System View, and Settings
pages to become familiar with the storage system and the Unisphere GUI.
1. Launch the Google Chrome browser from the task bar on your Student
Desktop.
Enter the URL https://192.168.1.113, and log in to Unisphere with the admin user
account and the password Password123!
3. To customize the Dashboard, click the Main link text or use the drop-down
arrow in the upper left of the window and select Customize.
4. In the Add View Blocks window, select System Alerts, click Add View
Block, and then click Close to return to the Dashboard.
Is your new widget displayed? __________________
6. Enter My New Dashboard in the Dashboard Name field and click OK.
Select System Capacity, and click the Add View Block button.
Add another widget by selecting System Alerts, and click the Add View Block
button.
8. Let’s look at Unisphere preferences. Click the person icon in the upper right
corner and select Preferences. You can also change the password and log
out here.
9. In the window that opens, hover over the Optimize for remote access link.
What is the purpose of this option?
_____________________________________
Click the Clear User Cache button next. Click Yes to confirm.
What is the result of the clear user cache function?
________________________
10. Let’s look at the help options. Click the Help icon in the upper right of the
window to view the sub menu. Click the About link in the sub menu.
Which version of Unisphere is running?
_________________________________
Close the window.
The other two sub menu items include a link to the online help main menu
page and a direct link to the dashboard help page.
11. Click the Online Help link to view the main menu.
You can go to the online help using the content tree on the left or using the
categories on the right. Try the For more information link near the bottom of
the main menu.
12. Close the browser tabs to Online Help and Product Documentation and return
to the open Unisphere Dashboard tab.
Click the Alerts icon to view any recent alerts. If you select the View All
Alerts link, it brings you to the Alerts page. You can also reach the Alerts
page using the link in the navigation pane on the left.
Are there any alerts displayed?
_______________________________________
13. Next to the Alerts icon, click the Jobs icon to view any active jobs. In this case,
there are no active jobs shown.
a. Click the Jobs link in the navigation pane on the left to view the Jobs
page.
14. To the left of the Jobs icon, click the View system status icon. Select the
View system details link for more details.
Does the software version match what was reported in the About Unisphere
page? ________
Select the Enclosures tab. Click each port and view the details on the bottom
page.
16. Select the Virtual tab. There should be nine Virtual Disks assigned to your
UnityVSA storage system.
17. Now let’s look at some system settings. Click the Settings icon in the upper
right corner of Unisphere.
18. Click the Software Upgrades link to check the software version.
What is the release date of the UnityOS?
_______________________________
20. Now let’s look at the system limits. Click the System Limits link in the
Settings window. Use the slide bar to view all the limits.
21. Use the menu on the left to explore the rest of the Settings.
This completes part 1 of the lab exercise.
From the Manage Users & Groups page, launch the Create User or Group
wizard by clicking on the Add link (+).
2. From the Create User or Group wizard, notice that Local User is the only
option available because LDAP service was not configured for the Unity
system.
If LDAP was configured in the Directory Services tab of the System Settings
window, both existing LDAP User and Group could be configured to access
Unisphere.
Username: student
Password: P@ssw0rd
4. On the Select a Role section, select the Operator role for the new user.
A green check mark specifies that the operation was completed successfully.
Wait until the job has completed, and then click Close.
7. The new User is displayed in the Manage Users & Groups page.
Notice that among the information that is displayed in this page are the user
name, role, and the type of user (Local or LDAP).
8. Test the new user and the permissions that are associated with the role:
First log off from the Unisphere session. (Remember that we are logged in as
admin.)
Confirm the Log Out operation. The browser redirects the page to the Login
screen.
User: student
Password: P@ssw0rd
10. Click the Settings icon in the upper right corner of Unisphere.
11. The default License Management page is displayed under the Software and
Licenses > License Information section.
13. This concludes the demonstration of Role-based user management. Close the
Settings window, and log out from the Unisphere session.
Scenario:
In this lab, you assign the storage tier levels to virtual disks presented to the Dell
UnityVSA system. From that storage space, you create several heterogeneous and
homogeneous pools. From the pools, you create LUNs, Consistency Groups, NAS
servers, and file systems.
1. From your Student Desktop system taskbar, launch Chrome and establish a
Unisphere session to the UnityVSA-Source system at IP address
192.168.1.113. The login credentials are admin/Password123!
2. Navigate to System > System View > Virtual. You will see nine Virtual Disks
that are displayed with three different capacities.
Select the first 9.9 GB Virtual Disk so it is highlighted, and then move the
cursor to the Details icon in the tool bar and click it. As a note, double-clicking
the Virtual Disk also works.
The Properties window is displayed. From the Storage Tier dropdown list,
select the Extreme Performance Tier. Click Apply, and then click Close.
Repeat this step for the other two 9.9 GB Virtual Disks.
3. Navigate to Storage > Pools. Click the + icon to create a new pool. The
Create Pool Wizard opens. In the Name field input: FAST VP-1. Click Next to
continue the wizard.
In the Assign Tier to the Virtual Drive window, click Next since you already
performed this step.
In the Select Storage Tiers window, check the box for each of the three
storage tiers, Extreme Performance Tier, Performance Tier, and Capacity
Tier as this pool will be a multi-tiered FAST VP Pool. Click Next to continue
the wizard.
In the Select Virtual Drives window, remove the check marks from all virtual
disks except the first one listed in each of the three tiers. Click Next to
continue the wizard.
Leave the Create VMware Capability Profile for the Pool check box
cleared. Click Next to continue the wizard.
The Results window displays the creation status. Close the window when the
operation completes.
4. Click the + icon to create another pool. The Create Pool Wizard opens. In the
Name field input: FAST VP-2. Click Next to continue the wizard.
In the Select Storage Tiers window, check the box for each of the three
storage tiers, Extreme Performance Tier, Performance Tier, and Capacity
Tier as this pool will be a multi-tiered FAST VP Pool. Click Next to continue
the wizard.
In the Select Virtual Drives window, remove the check marks from all virtual
disks except the first one listed in each of the three tiers. Click Next to
continue the wizard.
Leave the Create VMware Capability Profile for the Pool check box
cleared. Click Next to continue the wizard.
The Results window displays the creation status. Close the window when the
operation completes.
5. Click the + icon to create another pool. The Create Pool Wizard opens. In the
Name field input: Extreme Performance Pool. Click Next to continue the
wizard.
In the Select Storage Tiers window, check the box for Extreme Performance
Tier only, as this pool will be a single tier pool. Click Next to continue the
wizard.
In the Select Virtual Drives window, only the one virtual disk will be displayed
and checked. Click Next to continue the wizard.
Leave the Create VMware Capability Profile for the Pool check box
cleared. Click Next to continue the wizard.
In the Review Your Selections window, verify that you have a single 9.9 GB
drive in the Extreme Performance Pool. Click Finish.
6. Click the + icon to create another pool. The Create Pool Wizard opens. In the
Name field input: Performance Pool. Click Next to continue the wizard.
In the Select Storage Tiers window, check the box for Performance Tier.
Leave the Create VMware Capability Profile for the Pool check box
cleared. Click Next to continue the wizard.
In the Review Your Selections window, verify that you have a single 19.9 GB
drive in the Performance Pool. Click Finish.
7. Click the + icon to create another pool called Capacity Pool. Repeat the
process to add the last 49.9 GB drive (Virtual Disk 9) to the Capacity Pool.
In the Review Your Selections window, verify that you have a single 49.9 GB
drive in the Capacity Pool. Click Finish.
The Results window displays the creation status. Close the window when the
operation completes.
1. In Unisphere, navigate to Storage > Block > LUNs. Click the + icon to create
a new LUN.
Number of LUNs: 1
Size: 5 GB
Thin: Checked
In the Configure Access section you will not configure host access yet. Click
Next to continue the wizard.
Also click Next in the Snapshot section and Next in the Replication section,
you will not be configuring those features yet.
The Summary section will display the configuration of the LUN to be created
as shown.
The Results window will display the status of the operation. When it completes,
Close the window. The newly created WIN1 LUN0 will be displayed.
Number of LUNs: 1
Size: 5 GB
Thin: Checked
In the Configure Access section you will not configure host access yet. Click
Next to continue the wizard.
Also click Next in the Snapshot section and Next in the Replication section,
you will not be configuring those features yet.
The Summary section will display the configuration of the LUN to be created
as shown.
The Results window will display the status of the operation. When it completes,
Close the window. The newly created Linux1 LUN0 will be displayed.
3. Select the Consistency Groups tab and click the + icon to create a
Consistency Group. The Create a Consistency Group wizard opens.
In the Name field input: FASTVP_CG. Click Next to continue the wizard.
In the Populate Consistency Group section click the + icon and select
Create new LUNs from the dropdown list.
Number of LUNs: 3
Name: CG_LUN
Size: 5 GB
Thin: Checked
Click OK to continue.
Also click Next in the Snapshot section and Next in the Replication section,
you will not be configuring those features yet.
The Summary section will display the configuration of the Consistency Group
to be created as shown.
4. The Results window displays the status of the operation. Please be patient,
the operation will take a moment to complete. When it completes, Close the
window.
In Unisphere, navigate to Storage > File > NAS Servers. Click the + icon to
create a new NAS server. A wizard to create the NAS server opens to the
Configure NAS Server General Settings section. Input the following
configuration:
IP Address: 192.168.1.115
Gateway: 192.168.1.1
Password: emc2Admin!
4. In the Configure NAS Server DNS section the Domain and Servers fields
are populated:
Domain: hmarine.test
Servers: 192.168.1.50
5. You will not be configuring Replication so click Next to continue the wizard.
6. You will now create a second NAS server, which will be used to access file
storage over NFS. Click the + icon to create a new NAS server. The wizard to create the NAS
server opens to the Configure NAS Server General Settings section. Input
the following configuration:
Storage Processor: SP A
IP Address: 192.168.1.116
Gateway: 192.168.1.1
9. In the Configure Unix Directory Service section, check the Enable a Unix
Directory Service using NIS or LDAP.
IP Address: 192.168.1.51
10. In the Configure NAS Server DNS section, check the Enable DNS option.
This exposes further configuration information. Input the following
configuration:
Domain: hmarine.test
11. You will not be configuring Replication so click Next to continue the wizard.
In Unisphere, navigate to Storage > File > File Systems. Click the + icon to
create a new file system.
A wizard opens for creating the file system. In the Configure the Protocols
the File System Supports section, select the Windows Shares (SMB) radio
button.
From the NAS Server dropdown list, NAS_SMB should be selected, as it is the
only available NAS server configured for SMB.
Name: SMB_fs
3. In the Configure the File-level Retention section, keep the Off radio button
selected.
4. In the Configure the File System Storage Characteristics section, input the
following configuration:
Size: 5 GB
5. You will not configure a share for the file system at this time so in the
Configure the Initial Share section click Next to continue the wizard.
You will not configure Snapshots for the file system at this time so in the
Configure Snapshot Schedule section click Next to continue the wizard.
You will not configure Replication for the file system at this time so in the
Provide a Replication Mode and RPO section click Next to continue the
wizard.
6. The Summary section displays the details of the file system creation as
shown:
The Results section will display the status of the file system creation.
7. You will now configure a file system for NFS file storage.
From the File Systems page, click the + icon to create a new file system.
The wizard to create a file system opens. In the Configure the Protocols the
File System Supports section, select the default of Linux/Unix Shares (NFS)
radio button.
From the NAS Server dropdown list, NAS_NFS should be selected, as it is the only available NAS server configured for NFS. Input the following configuration:
• Name: NFS_fs
9. In the Configure the File-level Retention section, keep the Off radio button
selected.
10. In the Configure the File System Storage Characteristics section, input the
following configuration:
• Size: 5 GB
11. You will not configure a share for the file system at this time so in the
Configure the Initial Share section click Next to continue the wizard.
You will not configure Snapshots for the file system at this time so in the
Configure Snapshot Schedule section click Next to continue the wizard.
You will not configure Replication for the file system at this time so in the
Provide a Replication Mode and RPO section click Next to continue the
wizard.
12. The Summary section displays the details of the file system creation as
shown.
The Results section will display the status of the file system creation.
Scenario:
In this lab, you configure a Windows iSCSI host to access a LUN on the Dell
UnityVSA.
• IP Address: 192.168.3.100
1. From the Student Desktop taskbar, click the Remote Desktop Connection
(RDC) icon to launch the application.
Computer(s): WIN1
Username: \administrator
Password: emc2Local!
Click OK.
2. From the WIN1 system taskbar, click the iSCSI Initiator icon to launch the
application.
Select the Discovery tab. Click the Discover Portal button. In the Discover
Target Portal window, in the IP address or DNS name field input: 192.168.3.100
Select the Targets tab. In the Discovered targets section, the UnityVSA-Source
IQN is displayed with an Inactive status. Click the Connect button to connect the
initiator to the target.
In the Connect to Target window, check the Enable multi-path checkbox. Click
the Advanced button.
From the Advanced Settings window, in the Local adaptor dropdown list, select
Microsoft iSCSI Initiator. In the Initiator IP dropdown list, select 192.168.3.106.
From the Target portal IP dropdown list, select 192.168.3.100 /3260. Click OK to
configure the settings.
1. From your Student Desktop system, open the Unisphere session to UnityVSA-
Source and go to Access > Initiators.
2. Verify the WIN1 Initiator is displayed. You may need to refresh the Unisphere
page.
Note: The initiator displays a green circle with a white check icon
along with an orange dot, indicating that the initiator is not yet
associated with a host.
3. Navigate to Access > Hosts. Click the + icon and from the dropdown list
select Host. The Add a Host wizard opens. Input the following configuration:
Name: WIN1
5. The Summary > Review the host configuration window displays the
configuration of the host to be registered as shown.
The WIN1 host is now listed and indicates it is a registered host by the white check
in the green circle.
6. Navigate to Storage > Block > LUNs and double-click on WIN1 LUN0 to open
its properties page.
1. From the WIN1 RDC session, launch Computer Management from the
desktop or taskbar.
2. Navigate to Storage.
Locate the 5 GB disk. This disk is associated with the LUN you created. Right-
click the disk and bring it Online.
Once it displays Not Initialized, right-click and select Initialize Disk. Use the
default setting, and click OK.
Right-click the unallocated space and select New Simple Volume. The New
Simple Volume Wizard opens. Click Next to continue the wizard.
In the Specify Volume Size window, accept the default size, and click Next to
continue the wizard.
In the Assign Drive Letter or Path window, accept the default settings and
click Next to continue the wizard.
In the Format Partition window, accept the default settings for File system
and Allocation unit size. In the Volume label field input WIN1 LUN0 and
check the Perform a quick format option.
Right click in the window and select New > Text Document.
You have verified host access and successfully written data to the LUN.
If you see the following window after closing Computer Management, select
Cancel here:
Scenario:
This lab details the steps to configure an iSCSI interface on the Dell UnityVSA data
network port. After configuration, discovery of the target array is performed using
the Linux native iSCSI initiator software (open-iscsi). Once the target is discovered,
you will:
• Attach the host to the LUN.
• Partition the drive.
• Format the drive.
• Create a file system.
• Write data to the LUN.
The Linux host uses the open-iscsi initiator software. The driver contains
parameters which may need to be changed for Dell Unity. The settings can be
found in the Unity Series Configuring Host to Access Fibre Channel (FC) or
iSCSI Storage guide available on dell.com/support.
1. From your Student Desktop system desktop or taskbar, click the PuTTY icon
to launch the application.
In the PuTTY screen Host Name (or IP address) field input: Linux1 and click
the Open button.
In the PuTTY Security Alert window, if presented, click the Yes button to
connect to the system.
You are now logged in as root to the Linux1 host as indicated by the # cursor
symbol.
2. In this step, you identify the initiator IQN for the Linux system.
cat /etc/iscsi/initiatorname.iscsi
The command displays the initiator IQN for the host. Next, verify the node-related
settings in the iSCSI configuration file (typically /etc/iscsi/iscsid.conf). The
following configuration lines should be present in the file:
node.startup = automatic
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
node.session.iscsi.InitialR2T = Yes
node.session.iscsi.ImmediateData = No
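The presence of these settings can be sanity-checked with grep. The sketch below runs against a sample fragment written to /tmp rather than the live /etc/iscsi/iscsid.conf, so the file path and its contents here are illustrative only:

```shell
# Write a sample config fragment standing in for /etc/iscsi/iscsid.conf.
cat > /tmp/iscsid_sample.conf <<'EOF'
node.startup = automatic
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
node.session.iscsi.InitialR2T = Yes
node.session.iscsi.ImmediateData = No
EOF

# Report each required setting as OK or MISSING. grep -F matches the
# line literally (the [0] would otherwise be a regex character class);
# -x requires a whole-line match.
for setting in 'node.startup = automatic' \
               'node.session.timeo.replacement_timeout = 120' \
               'node.session.iscsi.InitialR2T = Yes'; do
  if grep -Fxq "$setting" /tmp/iscsid_sample.conf; then
    echo "OK: $setting"
  else
    echo "MISSING: $setting"
  fi
done
```

On the lab host, point grep at /etc/iscsi/iscsid.conf itself instead of the sample file.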
The output displays the Unity target port IP address, port number, and IQN.
To enable the iSCSI initiator software to start on system boot and shut down
when the host is brought down, run the commands:
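The commands themselves are not reproduced in this copy of the guide. On a systemd-based distribution they would typically be the following; the exact unit names (iscsid, iscsi) vary by distribution, so treat this as a hedged sketch rather than the guide's exact commands:

```shell
# Enable the iSCSI daemon and service units at boot
# (unit names are distribution-dependent).
systemctl enable iscsid
systemctl enable iscsi
```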
6. To verify that the iSCSI initiator software status is active, run the command:
The output displays the iscsi service is loaded, active, and logged into the
Unity target IQN.
To verify the host iSCSI initiator session to the Unity target, run the command:
iscsiadm -m session
The output displays the Unity target portal IP address and port number and its
IQN as seen below:
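For scripting, the portal and IQN can be pulled out of that one-line output with awk. The sample line below is illustrative only (the IQN shown is hypothetical, not your lab array's):

```shell
# Sample `iscsiadm -m session` output line (hypothetical IQN).
sample='tcp: [1] 192.168.3.100:3260,1 iqn.1992-04.com.emc:cx.virt0000.a0 (non-flash)'

# Field 3 is portal:port,tpgt; strip the target portal group tag after
# the comma. Field 4 is the target IQN.
portal=$(echo "$sample" | awk '{print $3}' | cut -d, -f1)
iqn=$(echo "$sample" | awk '{print $4}')
echo "portal=$portal"   # -> portal=192.168.3.100:3260
echo "iqn=$iqn"
```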
Note: The initiator displays a green circle with a white check icon
along with a blue square indicating that the initiator is not yet
associated with a host.
2. Navigate to Access > Hosts. Click the + icon and from the dropdown list
select Host. The Add a Host wizard opens. Input the following configuration:
Name: Linux1
The Linux1 host is now listed as a registered host as indicated by the white
check in the green circle:
5. Navigate to Storage > Block > LUNs and double-click on Linux1 LUN0 to
open its properties page.
1. From your Student Desktop, open the existing SSH session to the Linux1
system.
powermt config
The output of the command displays the Pseudo name PowerPath has
assigned to the new LUN. Verify the Pseudo name in the output as shown:
The example above shows the pseudo name of emcpowera. The name may
be different for you if the host had access to another LUN previously. For
example, the pseudo name returned could be emcpowerb for the LUN. Record
the pseudo name returned for you. You will use it in a following step.
2. LUN discovery.
To verify that the new LUN is discovered as a disk device, run the command:
fdisk -l
The command displays the disk devices that the system can see. Locate the
disk device having the PowerPath pseudo name recorded in the earlier step.
The example below shows the disk device having the pseudo name
emcpowera.
Note: To see the new LUN on the Linux host, the host SCSI bus must be
rescanned. In some conditions, a reboot of the host may be needed.
To create a new primary partition on the pseudo-named device, run the
command:
fdisk -c -u /dev/emcpowera
The command launches a wizard to partition the disk. You create a primary
partition on the disk using the default values for the disk device. Enter the
bolded responses at each of the wizard prompts shown:
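The bolded responses are not reproduced in this copy of the guide. A typical sequence for creating a single primary partition that spans the disk, assuming fdisk's standard interactive prompts, is:

```shell
# Illustrative fdisk prompt responses (prompts vary by fdisk version):
#   n        new partition
#   p        primary partition
#   1        partition number 1
#   <Enter>  accept the default first sector
#   <Enter>  accept the default last sector (use the whole disk)
#   w        write the partition table and exit
```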
fdisk -l
Verify the new device name. In the example below, the new partition is
/dev/emcpowera1:
To format the new device and create a file system on it, run the command:
mkfs.ext4 /dev/emcpowera1
mkdir /emcpowera_mp
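The mount command itself does not appear between the mkdir and df steps in this copy of the guide. Assuming the device and mount point names from this example, a typical invocation would be:

```shell
# Mount the new ext4 partition on the mount point created above
# (device and mount-point names follow the emcpowera example).
mount /dev/emcpowera1 /emcpowera_mp
```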
df -h
Note: The disk device mount does not persist if the host is
rebooted. As an option, to have the disk device mount persist on
boot, edit the /etc/fstab file to include the line as shown below:
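The fstab line referenced in the note is not reproduced here. Assuming the emcpowera example, it would look something like the following; the _netdev option defers the mount until networking (and therefore iSCSI) is up:

```
# Hypothetical /etc/fstab entry for the example device and mount point.
/dev/emcpowera1  /emcpowera_mp  ext4  _netdev  0 0
```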
To verify that the data was written to the LUN, run the command:
cat /emcpowera_mp/newfile
exit
This concludes the lab. You have configured a data network port on the Dell
UnityVSA, discovered the target using the Linux native iSCSI initiator,
attached the Linux host to the Dell UnityVSA LUN, formatted the device, and
written data to it!
Scenario:
In this lab, you create SMB shares to a Unity file system and access its file storage
from a Windows SMB client.
2. In Unisphere, go to Storage > File. Select the SMB_fs file system, place a
check in its checkbox, and clear any other file system. From the More Actions
dropdown list, select Create an SMB share (CIFS).
3. A wizard to create a share opens to the Select a source for the new share
section. In the File System field, the SMB_fs will be listed. The radio button
option is selected for the File System “SMB_fs". The option to create the
share on a Snapshot of the file system is grayed-out since there are no
snapshots of the file system.
4. In the Provide SMB share name and path section input the following
configuration:
Local Path: /
5. The Provide SMB share details section of the wizard lets you configure
various advanced SMB properties and offline availability options. In this lab,
you keep the default values.
6. The Summary section of the wizard displays the configuration summary for
the share as shown:
Click the SMB Shares tab. Verify the Top$ share is listed.
Username: hmarine\administrator
Password: emc2Admin!
Click OK.
9. Now open the file system share by accessing its share name from the NAS
server that is associated with the file system.
10. The share to the file system opens. In the Explorer window, select the View
tab. Ensure that the Hidden items is checked as shown below:
You will see the .etc and lost+found folders that are present at the top-level
of all Unity file systems. These folders are used internally by the file system
and should not be disturbed or modified. As an administrator, you can control
the on-disk permissions to this top-level shared area of the file system.
The Top$ (\\NAS_SMB) properties window opens. Select the Security tab.
11. A permissions window to the share opens. The Everyone group is listed and
has Full control permissions by default. As the domain administrator you can
add discrete administrative permissions.
12. A window for adding users and groups opens. In the Enter the object names
to select field type: Domain Admins and click the Check Names button. The
group appears underlined when it is located within the domain.
13. The Domain Admins group is now listed on the permissions. Select the
Domain Admins group. In the Allow column of checkboxes, check the Full
control box. Checking Full control automatically selects the remaining permission checkboxes.
Click the Apply button to add the new permission setting for the Domain
Admins group.
14. To limit access to the top level of the file system, modify the permissions for
the Everyone group. In the permissions window, select the Everyone group.
In the Allow column of checkboxes, clear all permissions except for List
folder contents. Leave that permission checked.
Leave the share open. You will be accessing it in the next part of this lab
exercise.
2. Adjust permissions on the folder for specific domain user and group access.
Right-click the Sales_Data folder, and select Properties. Select the Security
tab.
You will see the permissions that were set on the Top$ share are inherited by
the folder. Click the Edit button to add permissions to the folder.
A permissions window for the folder opens. Click the Add button.
3. A window to add users and groups to the folder opens. In the Enter the object
names to select field, type: Westcoast Sales; Eastcoast Sales and click the
Check Names button. The groups appear underlined when they are located
within the domain.
4. The groups are now listed in the permissions window for the folder.
Select the Eastcoast Sales group. In the Allow column of checkboxes, check
the Full control permission, and click the Apply button to assign the
permission for the group.
Repeat the same permissions assignment for the Westcoast Sales group.
Disconnect the RDC session to the WIN1 system by clicking the Logoff
Session icon on the WIN1 taskbar.
5. From the Student Desktop system, return to your Unisphere session. Login
again if your session has timed out. Navigate to Storage > File > SMB
Shares. Click the + icon to create an SMB share.
6. The wizard to create an SMB Share opens. Click the grid in the File System
field. A Select a File System for share creation window opens. Select the
SMB_fs for the file system and click Select.
8. Keep the default options for the Advanced wizard section and click Next to
continue.
11. From the Student Desktop system taskbar, launch the RDC application.
You will log in to the WIN1 system as a user who is a member of the
Westcoast Sales group.
Username: hmarine\swall
Password: emc2Admin!
Click OK.
In the Open field, input \\NAS_SMB\Hmarine_Sales and click OK. The share
opens.
Right click in the share window and select New > Text Document. Name the
file Swall. Open the file, and write a line of text indicating the current time as
seen on the WIN1 system clock.
Did the operation to create, write to and save this file complete?
_______________
To log off from the WIN1 RDC session, double-click the Logoff session icon on
the WIN1 Desktop.
14. From the Student Desktop system, launch the RDC application again.
15. You will log in to the WIN1 system as a user who is a member of the
Propulsion Engineers group.
Username: hmarine\epratt
Password: emc2Admin!
Click OK.
In the Open field, input \\NAS_SMB\Hmarine_Sales and click OK. The share
opens.
The user who is a member of the Propulsion Engineers group does not have
permissions on the shared directory to perform the operations.
Username: hmarine\administrator
Password: emc2Admin!
Click OK.
2. From the WIN1 system taskbar, click the Computer Management icon to
launch the application.
3. From the left-side tree, right-click the Computer Management (Local) text
and select the Connect to another computer option.
4. With the radio button option selected for Another computer, in the field input:
NAS_SMB and click OK.
5. Expand the System Tools tree object. Be patient, this can take a moment to
complete.
6. Right-click the Shares object, and select the New Share option. This opens a
wizard to create a new shared folder. Click Next to continue the wizard.
7. The wizard configures a path for the shared folder. Click the Browse button.
In the Browse for Folder window, the structure for the NAS server is
displayed. Single click the SMB_fs file system and it populates the Folder
field.
Click the Make New Folder button. Name the folder Engineering_Data and
click the OK button.
8. The next wizard window names the share. Accept the default values here.
9. The next wizard window sets permissions on the folder. Select the radio button
option for Customize permissions and click the Custom button.
The window displays tabs for Share Permissions and Security. The Share
Permissions relates to permissions for seeing the shared item over the
network. The Security tab relates to on-disk permissions for the folder. You are
modifying both sets of permissions. Select the Share Permissions tab. For
the Everyone group, in the Allow column of checkboxes, check the Full
Control permission. This setting enables the share to be seen by all users.
Next you customize the on-disk permissions for a specific group. Select the
Security tab.
10. This opens a window for setting permissions on the Engineering_Data folder.
A window to add users and groups to the folder opens. In the Enter the object
names to select field input: Propulsion Engineers and click the Check
Names button. The group appears underlined when it is located within the
domain.
Click OK to continue.
12. From your Student Desktop access, the open session to Unisphere. If the
session has timed out, log in again. Navigate to Storage > File > SMB Shares
and refresh the page.
From the Student Desktop system, open an RDC session to WIN1 as the
epratt domain user.
Username: hmarine\epratt
Password: emc2Admin!
Click OK.
14. Double-click the Run icon on the WIN1 desktop and in the Open field input:
\\NAS_SMB\Engineering_Data and click OK. A window to the share opens.
15. Right-click in the share window, and select New > Text Document. Name the
file epratt. Open the file and add text to the file indicating the user who wrote
the file and the current time that is displayed on the WIN1 system. Save and
close the file.
16. Right-click on the file, and select Properties. Select the Security tab, and
click the Advanced button.
Who is the file Owner? _______________________________
Close the share window and logoff the RDC session to WIN1.
Scenario:
In this lab, you create NFS shares to a Unity file system and access its file storage
from a Linux NFS client.
2. Navigate to Storage > File > NFS Shares. Select the + icon to create an NFS
share.
3. The wizard to create a new NFS share opens to the Select a source for the
new share section. Click the grid in the File System field and a window opens
to select a Unity file system for creating the share. Double-click the NFS_fs to
select it for the share. The radio button option is selected for the File System
NFS_fs. The option to create the share on a Snapshot of the file system is
grayed-out since there are no snapshots of the file system.
4. In the Provide NFS share name and path section input the following
configuration:
Local Path: /
Leave all other share options and values at their default settings.
5. The wizard now presents the access configuration of the share. You configure
specific access for a single client having root privileges on the file system.
In the Access Type field, select the Read/Write, allow Root setting.
In the section listing hosts, select the Linux1 host by checking its checkbox
and click OK.
7. The wizard presents a Results window for the creation operation. Verify that
the operation completed successfully and click Close to exit the wizard.
8. From the Student Desktop system taskbar, click the PuTTY icon to launch the
application.
In the PuTTY screen Host Name (or IP address) field input: Linux1 and click
the Open button.
In the PuTTY security message, if presented, click the Yes button to connect
to the system.
You are now logged in as root to the Linux1 client as indicated by the # cursor
symbol.
10. Create an empty directory at the root of the client to use as a mount point for
mounting to the NFS share.
mkdir /nfs
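Mounting the export requires the NAS server address and the export path shown in Unisphere for the share. A dry-run sketch, assuming the NAS_NFS server IP from the lab topology (192.168.1.116) and a hypothetical export name (/NFS_share) -- substitute the actual Export Path from the share's properties:

```shell
#!/bin/sh
# Dry-run sketch of mounting the Unity NFS export on the Linux1 client.
# SERVER is the NAS server IP from the lab topology; EXPORT is a
# hypothetical export name -- use the Export Path shown in Unisphere.
SERVER=192.168.1.116
EXPORT=/NFS_share
MOUNTPOINT=/nfs

CMD="mount -t nfs ${SERVER}:${EXPORT} ${MOUNTPOINT}"
echo "$CMD"   # in the lab, run this command as root instead of echoing it
```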
Verify that the client is mounted to the remote file system, run the command:
df -h
cd /nfs
ls -la
You will see the .etc and lost+found folders that are present at the root of
the Unity file system. These folders are used internally by the file system and
should not be modified.
13. As the administrator, you have root user access to the file system through this
share.
You can now create subfolders on the file system to use for more shares for
the user community. Create a subfolder on the file system, run the command:
mkdir engineering_data
ls -la
Review the output, and record the following information for the folder:
ls -l
Stop accessing the share by changing directory out of the mount point, run the
command:
cd /
umount /nfs
df -h
exit
4. Return to your Unisphere session. Refresh your login if the security session
has expired. Navigate to Storage > File > NFS Shares.
5. The wizard to create a new NFS share is displayed. Click the grid in the File
System field to select a file system. From the list, highlight the NFS_fs and
click the Select button.
Leave all other share options and values at their default settings.
7. The wizard now presents the access configuration of the share. Configure
Read/Write access for a subnet.
From the Configure Access window, in the Default Access field keep the default setting.
In the Access Type field, select Read/Write from the dropdown list.
In the section listing hosts, click More Actions and from the dropdown list,
select Add Subnet.
The newly added Engineering network is now listed in the hosts section.
Select it by checking its checkbox, and click the OK button.
8. The wizard presents a summary screen for the NFS share configuration it will
create as shown here:
9. The wizard presents a results window for the creation operation. Verify that the
operation completed successfully and click Close to exit the wizard.
10. From the Student Desktop system taskbar, click the PuTTY icon to launch the
application.
In the PuTTY screen Host Name (or IP address) field input: Linux1 and click
the Open button.
You are now logged in as root to the Linux1 client as indicated by the # cursor
symbol.
Mount the share to the client mount point, run the command:
Verify that the client is mounted to the remote file system, run the command:
df -h
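Beyond reading the df output by eye, the client's mount table can be checked programmatically. A small sketch, Linux-specific, that looks up a directory's file system type in /proc/mounts (the root directory is used here so the check works on any Linux system; in the lab you would check /nfs and expect an nfs type):

```shell
#!/bin/sh
# Look up the file system type backing a directory via /proc/mounts.
# DIR=/ is used for illustration; in the lab, DIR=/nfs after mounting.
DIR=/
FS_TYPE=$(awk -v d="$DIR" '$2 == d { print $3; exit }' /proc/mounts)
echo "$DIR is mounted as: $FS_TYPE"
```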
13. Now test NFS share access using the epallis user who is a member of the
engprop NFS users group.
su epallis
cd /nfs
Verify that the text is present in the file, run the command:
more epallis
Look at the file permissions bits, owner, and group, run the command:
ls -l
14. Now test access to the share as another user who is not a member of the
engprop NFS user group.
su swoo
more epallis
Create a file with some text added to it, run the command:
Why could the user read from the share but not write to it?
______________________________________________________________
_________________________________________
exit
exit
cd /
umount /nfs
exit
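The read-but-not-write behavior above follows from standard Unix mode bits: the directory grants write access to its owner and group (engprop), while all other users get only read and execute. A local sketch of the same arrangement (the temporary path and the 775 mode are illustrative, not taken from the lab system):

```shell
#!/bin/sh
# Reproduce the permission pattern locally: owner and group may write,
# everyone else may only read and traverse the directory.
DEMO=$(mktemp -d)
chmod 775 "$DEMO"            # rwxrwxr-x: no write bit for "others"
MODE=$(stat -c %a "$DEMO")   # GNU stat; prints the octal mode
echo "mode of $DEMO is $MODE"
rmdir "$DEMO"
```

A user outside the owning group, like swoo above, can list and read files in such a directory but cannot create files in it.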
Scenario:
In this lab, you create VMware datastores on a Unity system, and make them
available to a VMware ESXi host through VMware aware integration.
1. Before creating VMware storage resources, you must add VMware hosts to
the storage environment so you can configure host access later.
Navigate to Access > VMware > vCenters. Click + to open the Add vCenter
wizard.
2. To discover your ESXi host, in the Discover ESXi hosts in the vCenter
section, enter the following settings:
Password: Password123!
Click Find.
3. A list opens that includes any ESXi hosts managed by this vCenter server.
Locate and check ESXi host esxi-1.hmarine.test.
5. The Summary displays the ESXi host to be added to VMware hosts as shown
here:
Note: The job will continue to run in the background if you close the
page before it completes.
Click the Virtual Machines tab. There will not be any VMs listed currently.
Click the Virtual Drives tab. There will not be any Virtual Drives listed
currently.
You will see the initiator for the ESXi host esxi-1.hmarine.test.
Note: The initiator has an orange dot on the health status icon,
indicating there are no logged-in initiator paths.
3. In the Enter a name for the datastore section input the following:
Name: VMFS_Datastore
4. In the Configure the storage for this datastore section, input the following
configuration:
• Size: 10 GB
• Thin: Checked
5. In the Configure Access section, click the + icon to define host access to the
datastore.
• Click OK.
In the Provide a Replication Mode and RPO section, you will not be
configuring replication for the datastore. Click Next to continue.
A Results screen shows the progress and status of the created datastore.
Password: Password123!
Click LOGIN.
8. The VMware vSphere UI opens. At the top, click the Menu drop-down list and
select Home, and then click the Hosts and Clusters object as shown:
If not already expanded, expand the tree vcsa.hmarine.test > Datacenter and
select the esxi-1.hmarine.test host. Select the Datastores tab from the page
as shown:
You will see the VMFS_Datastore listed with several details including the
format type. The storage resource is now ready for use within VMware.
As you can see, VMware integration with Unity provides ease of storage
provisioning and access. The ESXi host initiator is automatically registered in
Unity. Once created, the storage object is mounted and formatted as a virtual
machine file system, and is ready for use within VMware.
Leave the vSphere UI browser tab open; you will use it again in the next part
of the lab exercise.
Navigate to Storage > File > NAS Servers. You should find the two NAS
Servers that were created during the Storage Provisioning Lab.
Double-click NAS_NFS and verify the following settings under the General
tab:
Name: NAS_NFS
Current SP: SP A
IP Address: 192.168.1.116
Click Close.
3. In the Enter a name for the datastore section input the following:
Name: NFS_Datastore
4. In the Configure the storage for this datastore section, input the following
configuration:
Size: 10 GB
Thin: Checked
Note: Host IO Size allows users to specify the block size used to
communicate to hosts or choose predefined application profiles.
Click the + icon to customize access to the datastore for a specific host. A list
box is exposed to configure an access type and select or add hosts:
Click OK.
In the Provide a Replication Mode and RPO section, you will not be
configuring replication for the datastore. Click Next to continue.
A Results screen shows the progress and status of the created datastore.
7. Return to the open vSphere UI browser tab. The navigation in the left pane
should be set to Hosts and Clusters with the esxi-1.hmarine.test host
selected. The Datastores tab is selected as shown.
Click the Refresh icon at the top of the page if the new NFS_Datastore is not
shown.
Confirm the NFS_Datastore that you created is now available to the host for
VMware use.
From the information listed about the new datastore, record the value that is
shown for its Type: __________________
This further demonstrates the integration Dell Unity provides for VMware. The
NFS datastore has automatically mounted the Unity storage resource, and it is
ready for use within VMware.
Scenario:
This lab demonstrates how to set Host I/O Limits (Quality of Service) to provisioned
block storage resources using the Unisphere user interface. You can create Host
I/O Limits policies for block storage resources with host access that limit maximum
throughput in IO/s and/or maximum bandwidth in KBPS or MBPS.
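An absolute Host I/O Limit simply caps the delivered rate at whichever threshold is reached first; I/O demanded above the cap is throttled. A toy sketch of the capping arithmetic (the numbers are invented for illustration and this is not a Unity command):

```shell
#!/bin/sh
# Illustrative only: with an absolute limit, the delivered rate is the
# smaller of what the host demands and the policy's cap.
demand_iops=2500   # hypothetical host demand (IO/s)
cap_iops=1000      # hypothetical policy maximum throughput (IO/s)

if [ "$demand_iops" -lt "$cap_iops" ]; then
    effective_iops=$demand_iops
else
    effective_iops=$cap_iops
fi
echo "delivered throughput: ${effective_iops} IO/s"
```

The same logic applies independently to the bandwidth threshold (KBPS or MBPS) when both are set on a policy.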
From the Unisphere UI, open the system Settings window by clicking
the Settings gear icon.
3. From the License Management page of the Software and Licenses >
License Information section, look for the Quality of Service (QoS) license.
Verify that the feature has a green check mark in front of it meaning that the
license is installed and the feature is operational.
There is also a control button which switches between Pause and Resume.
This control manages the enforcement of all Host I/O Limits applied to the
system. Clicking Pause disables the enforcement of Host I/O Limits on the
system.
From the page, host I/O limit policies can be created and any existing policies
are displayed. It also has a link to the Host I/O settings page which was
viewed in the previous step.
Note that there is no Host I/O Limit that is associated with the LUN.
Change the frequency range to Real Time. Wait at least 1 minute for the
graphic to refresh and populate with information.
Both I/O measures should default to zero since there is no write activity from
the Host to this LUN at the moment.
8. From the Student Desktop system taskbar, open an RDC session to WIN1.
Username: \administrator
Password: emc2Local!
Click OK.
Expand the Local Disk (C: ) tree and open the Big_Files folder.
There are some large files that are located in this folder which will be used to
generate some write I/O to the WIN1 LUN0.
Paste the files into the E: drive which is from the WIN1 LUN0 LUN. Wait for
the copy operation to complete.
Observe in the graphic the Bandwidth KB/s and Throughput IO/s used by the
host to perform the copy operation.
Delete all of the 1 GB files that were copied to the E: drive folder.
From the open Unisphere session, navigate to System > Performance >
Host I/O Limits.
2. On the Host I/O Limits page, you can define Host I/O Limit policies. Limiting
I/O bandwidth and throughput provides more predictable performance in
system workloads between hosts, their applications, and storage resources.
3. You can limit I/O traffic to block storage resources (LUNs, Consistency
Groups) by setting a threshold here (throughput and/or bandwidth).
In the Provide a Name and Limits section enter the configuration information
for a new Host I/O Limit policy:
Name: Silver_Policy
Shared: Unchecked
4. In the Select Storage Resources to Associate with the I/O Limit section,
select WIN1 LUN0.
A green check mark specifies that the operation was completed successfully.
9. Follow previous steps to create additional policies with the information below.
Click the + icon to create a second policy with the settings below:
Name: Gold_Policy
Shared: unchecked
Name: Platinum_Policy
Shared: Checked
Note that there is a Host I/O Limit Silver Policy that is associated with the LUN.
Observe that both bandwidth and throughput graphics show a line for the
limits.
Change the frequency range to Real Time. Wait at least 1 minute for the
graphic to refresh and populate with information.
Both I/O measures should default to zero since there is no write activity from
the Host to this LUN at the moment. The limit lines are still displayed in the
graphic.
12. This completes the lab exercise. You have successfully created a Host I/O
Limits absolute policy and associated it with a block storage resource.
Scenario:
The UFS64 architecture enables users to extend thick and thin file systems.
Performing UFS64 file system extend operations is transparent to the client,
meaning the array can still service I/O to a client during extend operations.
This lab demonstrates how to manually extend and shrink the capacity of a UFS64
file system using the Unisphere UI.
Verify that the new advertised size and space is allocated from the pool.
The Capacity option is selected, and the Current Pool Capacity is displayed.
The NAS servers and file systems that were created in the storage pool are
displayed.
Select the SMB_fs and note the value that is listed for it in the Total Pool
Space Used (GB) column. ____________
7. From the SMB_fs Properties window, change the size field to 10 GB.
For thick file systems, the extend operation increases the actual space that is
allocated to the resource from the pool.
9. From the FASTVP-2 Properties page Usage tab, select Storage Resources.
Click the Refresh icon and note the value for the SMB_fs Total Pool Space
Used (GB).
Was there a change in size after the extension of the file system?
_____________________
10. From the Usage tab, select the Capacity option. Record the following:
Have you noticed any changes besides the advertised file system size?
_______
The default Capacity option is selected, and the Current Pool Capacity is
displayed.
The NAS servers and file systems in the storage pool are displayed, along with
the space they are using from the pool and space that is consumed by their
snapshots.
Review the values that are displayed in the Total Pool Space Used (GB)
column. Add all these values together and verify that the sum correlates to the
Used Space you recorded previously.
Select the SMB_fs file system and record its Total Pool Space Used (GB)
value: _______
Click the Edit icon (pencil) or double-click the selection to open its properties.
5. From the SMB_fs Properties window, change the Size field to 5 GB.
6. A warning message displays alerting the user that the file system will be
reduced in size. An estimate of how much space will be returned to the pool is
displayed.
Did the Total Pool Space Used value change from the value that was recorded
in step 4? __________
8. From the FAST VP-2 Properties page, select the Usage tab and then select
Storage Resources.
What is the Total Pool Space that is used by the SMB_fs file system?
________
Scenario:
This lab demonstrates how to configure and use FAST VP support on Dell
Unity XT systems using the Unisphere UI.
This lab also demonstrates how FAST VP data relocation can be scheduled or
manually started, and how it behaves with the selection of tiering policies for the
storage resources.
1. From your Student Desktop system, open a Unisphere session to the lab
UnityVSA-Source Unity system (IP address 192.168.1.113).
2. To check if FAST VP can be set in the Unity system, you must verify if the
feature is licensed and enabled.
3. From the License Management window, look for the FAST VP license.
4. FAST VP settings are managed at the System Settings level and at the
storage resource levels.
The data relocation can be paused and resumed by clicking the Pause button.
5. From the FAST VP Settings window, it is also possible to modify the schedule
for the Data Relocation between the storage tiers on the system.
The Schedule data relocations can be enabled or disabled using the check
box.
Prior to the change, if data was scheduled for relocation, then the FAST VP
Settings page calculates the amount of data to relocate and updates the
information on the window.
The FAST VP tab shows the following information for the heterogeneous pool:
Start and end time for the most recent data relocation
Amount of data in the pool scheduled to move to higher and lower tiers
The data relocation can be manually started from this page, and the
Manage FAST VP system settings window can also be launched from here
by clicking the blue text.
1. This part of the lab demonstrates how to check the data relocation operation
from a LUN and from a storage pool. In this section, you perform a simple
write operation to WIN1 LUN0 and observe how FAST VP relocates data
according to the storage tier policy assigned to the storage resource.
Username: \administrator
Password: emc2Local!
Click OK.
3. From the WIN1 Desktop or taskbar, click the File Explorer icon.
Expand the Local Disk (C: ) tree and open the Big_Files folder.
There are some sizeable files that are located in this folder which will be used
to generate some write I/O to the WIN1 LUN0 disk.
4. Select the four 1 GB files, and then right-click and select Copy.
5. Select the WIN1 LUN0 (E: ) disk from the File Explorer navigation pane.
Paste the files into the drive. Wait for the copy operation to complete.
You may notice that the LUN usage values do not increase to match what was
written to the LUN. This is caused by the sparse files that are used in the copy
operation and client-side caching.
___________________________
You may notice that the data on the LUN is distributed across multiple tiers
and does not reside in only the highest tier. The distribution of data depends
on the tier having space available from the pool. If there is no space on the
highest tier, space is allocated from the next highest tier having space
available for the data. The available space for the tiers within the pool is not
predictable for this lab exercise; it varies based on past activities you may
have performed. You are recording the distribution above so that when you
change the tiering policy, you can see its effect on the data distribution.
Change the Tiering Policy to Lowest Available Tier and click Apply.
10. In the FAST VP-1 Properties window, select the FAST VP tab.
11. In the Start Data Relocation window, enter the following settings if not
already selected:
The Move Down (GB) column is immediately populated with the total amount
of data that is to be allocated to the lowest available tier possible because of
the tiering policy chosen for WIN1 LUN0.
13. In the WIN1 LUN0 Properties window, select the FAST VP tab.
Note: You may need to wait several minutes for the data distribution values to
change. Closing and opening the LUN Properties page refreshes the data.
Change the Tiering Policy to Start High then Auto Tier. Click Apply.
14. Navigate to Storage > Pools. Double-click the FAST VP-1 storage pool.
15. In the FAST VP-1 Properties window, select the FAST VP tab.
Click Stop Relocation to interrupt the current data relocation. Then click Start
Relocation.
16. In the Start Data Relocation window, enter the following settings if not
already selected:
The Move Up (GB) column is immediately populated with the total amount of
data being allocated to a higher tier because of the tiering policy chosen for
WIN1 LUN0.
17. From your Student Desktop, open the existing RDC session to WIN1. In the
open File Explorer window, delete all the files that were copied to the WIN1
LUN0 and leave the existing New Text Document file.
You have successfully observed the data relocation based on choice of Tiering
Policy that is associated with a storage resource.
Scenario:
This lab demonstrates how to configure limits for specific users and directories on a
file system.
User Quotas and Tree Quotas are supported on Dell Unity XT systems both
independently and in combination. It is also possible to limit the amount of storage
that a user consumes by storing data on a quota tree.
1. From your Student Desktop system, open a Unisphere session to the lab
UnityVSA-Source Unity system (IP address 192.168.1.113).
The tab is divided into two sections: File System and Quota Tree. The default
section is File System, showing the User Quota Report page that displays the
user quotas defined on the file system.
5. From the Create User Quota wizard window, click the + icon to open the
Configure User window.
Domain: hmarine.test
Name: Swest
User: hmarine.test\Swest
8. On the Limits page, specify the Soft Limit and a Hard Limit for the User
Quota.
A soft limit, when surpassed, begins the grace period countdown. The user
can continue to write to the file system until the grace period expires. The
grace period is modifiable and defaults to 7 days.
The hard limit stops any write activity when it is reached, regardless of any
grace period.
Soft Limit: 1 GB
Hard Limit: 3 GB
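The soft limit, grace period, and hard limit described above form a simple three-state check. A sketch of that logic as a shell function (the function itself is illustrative, not a Unity command; the GB values mirror the lab settings):

```shell
#!/bin/sh
# Illustrates the quota decision described above (all values in GB).
# Below soft: normal writes. At or above soft but below hard: writes are
# allowed while the grace period (default 7 days) counts down. At or
# above hard: writes are blocked regardless of the grace period.
quota_state() {
    usage=$1; soft=$2; hard=$3
    if [ "$usage" -ge "$hard" ]; then
        echo "blocked"
    elif [ "$usage" -ge "$soft" ]; then
        echo "grace"
    else
        echo "ok"
    fi
}

quota_state 0 1 3   # under the 1 GB soft limit
quota_state 2 1 3   # soft limit exceeded, grace period running
quota_state 3 1 3   # 3 GB hard limit reached
```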
9. The quota settings are displayed on the Review Your Selections section as
shown here:
A green check mark specifies that the operation was completed successfully.
11. The new User Quota is displayed in the User Quota Report page.
Username: hmarine\swest
Password: emc2Admin!
Click OK.
Expand the Local Disk (C: ) tree and open the Big_Files folder.
5. Paste the file into the share. Wait for the copy operation to complete.
Click the Refresh icon to update the information displayed on the User Quota
Report.
Is there any change to storage consumption for the Swest user? ______
Note: You should not see any update to the user quotas when you click
refresh.
7. From the User Quota Report tab of the SMB_fs Properties window, click the
Manage Quota Settings link.
8. In the Manage Quota Settings dialog box, you realize that User Quotas were
never enforced.
Check the checkbox for Enforce User Quotas and click OK.
9. From the Quotas tab of the SMB_fs Properties window, click the Refresh
icon to poll updated information about your User Quotas.
10. Verify that the User Quota for Scott West has changed its state, and its Soft
Limit Usage has been fully consumed:
You can verify that there are entries in User Quota Report for the other users
(administrator, Swall, and Epratt) who have accessed the file system. Notice
that there are no limits that are defined for them.
Also notice that users are listed by User ID and by Windows or Unix names.
The Edit Selected User Quota Limits window opens. It displays the User ID
and the Windows Names for the user.
11. Next, you are going to test the Hard Limit defined in the user quota policy for
Scott West.
12. From the Local Disk (C: )\ Big_Files folder, copy 1GBFile-2 and 1GBFile-3
to the clipboard.
14. The copy process proceeds, but eventually is interrupted by an error message.
Review the error message.
The error message is due to the User Quota’s Hard Limit restricting the
consumable space for the user swest. Click Skip to end the copy operation.
Verify that only one of the files was copied to the directory.
Click the Refresh icon to update the information displayed on the User Quota
Report.
Notice that the Usage (GB) of the Swest account has increased above 2.0
GB. One of the files could not be copied because it would have exceeded the
3 GB hard limit.
Observe that the Soft Limit has been exceeded and the Grace Period (7
days) has been activated.
17. In the User Quota Report page, observe that the Hard Limit has been
increased to 5.0 GB.
19. From the Local Disk (C: )\ Big_Files folder, copy 750MBFile-1 to the
clipboard.
The copy process should succeed. Verify that the copied files are present on
the share.
Close the File Explorer windows and logoff the RDC session.
2. From the Quota tab of the SMB_fs Properties window, select Quota Tree on
the left of the window.
4. In the Create Quota Tree wizard, enter the configuration information for the
new quota tree.
Note: The path is relative to the root of the file system SMB_fs and
must start with a forward slash.
Soft Limit: 1 GB
Hard Limit: 3 GB
6. The quota settings are displayed on the Review Your Selections section as
shown here:
A green check mark indicates that the operation was completed successfully.
Wait until the job has completed, and then click Close.
Username: hmarine\eplace
Password: emc2Admin!
Click OK.
Expand the Local Disk (C: ) tree and open the Big_Files folder.
6. Paste the files into the share. Wait for the copy operation to complete.
From the Quota tab of the SMB_fs Properties window, click the Refresh icon
to update the information displayed.
Are there any changes to storage usage for the /Engineering_Data path?
________
The usage for the /Engineering_Data path should have increased to ~2.0 GB.
9. From the Local Disk (C: )\Big_Files folder, copy the 500MBFile-1, and the
750MBFile-1 files.
One of the files gets copied to the share, but the copy of the second file is
interrupted with the message displayed here:
The only way to enable this copy operation to complete is to increase the Hard
Limit of the Quota Tree for the /Engineering_Data directory.
From the Quota tab of the SMB_fs Properties window, Select Quota Tree.
Double-click the quota for the Engineering_Data path.
12. From the Quota Tree Properties window, notice that the Soft Limit has been
exceeded and the Grace Period (7 days) is activated.
Modify the Hard Limit to 5 GB and click Apply. Then click Close.
14. In the error message left open in a previous step, click Try Again to continue
with the copy operation.
The operation should be successful this time. You should now see two 1 GB
files, one 750 MB file, and a 500 MB file in the share.
Close the File Explorer windows and logoff the RDC session.
2. From the SMB_fs Properties page, select Quota > Quota Tree. Double-click
the quota tree /Engineering_Data.
3. From the Quota Tree Properties window, select the User Quotas tab.
In the Create User Quota wizard, click the + icon to add a user.
Domain: hmarine.test
Name: epratt
Soft Limit: 1 GB
Hard Limit: 3 GB
8. The quota settings are displayed on the Review Your Selections section as
shown here.
A green check mark specifies that the operation was completed successfully.
10. The new configuration is displayed in the Quota Tree Properties page.
Double-click the quota to verify that the Windows Name, Tree Path, Soft
Limit, and Hard limit values are correct. Click Close.
At the top of the Quota Tree Properties page, check the Enforce User
Quotas checkbox.
Username: hmarine\epratt
Password: emc2Admin!
Click OK.
Expand the Local Disk (C: ) tree and open the Big_Files folder.
6. Paste the files into the share. Wait for the copy operation to complete.
7. Now try copying the 500MBFile-1 and 750MBFile-1 files from Big_Files to
Engineering_Data.
One of the files is copied to the share but the copy of the second file is
interrupted with the message displayed here:
From the User Quotas tab in the Quota Tree Properties page, click the
Refresh icon to update the information displayed.
Observe that the usage of the Epratt account has increased due to the copy
operations. The hard limit was exceeded when trying to copy one of the last
two files. Notice that one file did get copied to the share.
Notice that the Soft Limit has been exceeded and the Grace Period (7 days)
has been activated.
11. In the error message left open in a previous step, click Try Again to continue
with the copy operation.
The operation should be successful this time. You should now see two 1 GB
files, one 750 MB file, and a 500 MB file in the share.
12. You have successfully configured and tested a User Quota on a Quota Tree.
Close the File Explorer windows and logoff the RDC session.
Scenario:
In this lab, you move a LUN from one storage pool to another.
Move a LUN
1. Establish a Unisphere session to the UnityVSA-Source system IP address:
192.168.1.113. The login credentials are: admin/Password123!
2. Navigate to Storage > Block > LUNs. Select the Linux1 LUN0.
This LUN is provided from the FAST VP-1 storage pool. You will move the
LUN to a different storage pool on the system. Before moving the LUN, you
will access its data from the Linux1 host to see how the move operation is
transparent to the host.
3. Launch a PuTTY session from the taskbar, and establish an SSH session to
the Linux1 system.
In the PuTTY screen Host Name (or IP address) field input: Linux1 and click
the Open button.
You are now logged in as root to the Linux1 client as indicated by the # cursor
symbol.
4. Verify that the host has the LUN mounted by running the df -h command.
Verify in the output a file named newfile and a folder named lost+found.
5. Next, create and run a script that displays the current date and lists the content
of the LUN every two seconds.
This script will run during the LUN move operations to demonstrate how the
move is transparent to the host data access on the LUN.
The script will continue displaying a varying time value on each loop to indicate
it is still running.
The Move LUN configuration window opens. Input the following configuration:
Thin: Checked
The right side of the page displays the move session progress. It should show
a Running state.
7. Return to your PuTTY session and verify that the script is still running. The
second value in the date output will increment with each loop.
This verifies that the move operation does not affect host data access.
Refresh the page. The move session should now show Completed.
9. Initiate another Move operation to return the LUN to the FAST VP-1 pool.
From More Actions, select Move.
Thin: Checked
10. Return to your PuTTY session and verify that the script is still running with the
second value changing with each loop.
Stop the script by pressing the <Ctrl> <c> keyboard keys simultaneously.
The script will stop and return the prompt.
11. In the Unisphere session, refresh the screen and verify that the move
operation is complete and the Linux1 LUN0 is back on the FAST VP-1 pool.
Scenario:
Username: \administrator
Password: emc2Local!
Click OK.
Select the WIN1 LUN0 drive, and open the text document that you created
in the earlier exercise.
Add a line of text to the document: This line was written to the LUN from
WIN1 prior to the initial snapshot.
Navigate to Storage > Block > LUNs, and double-click WIN1 LUN0 to open
its properties page.
4. Select the Snapshots tab and click the + icon to create a snapshot of the
LUN.
The Create Snapshot window opens. In the Name field, replace the default
system provided name with: Initial_snap
In the Description field, type the current time as displayed from the Student
Desktop taskbar.
Leave all other options at their default settings. The Local Retention Policy
option has the Pool Automatic Deletion Policy selected to enable the
snapshot to be retained until the pool the LUN was created from reaches a
predefined capacity threshold, and then deletes the snapshot to return space
back to the pool. The Retain Until option can be configured to retain the
snapshot to a specified calendar date and time for up to a year. The No
Automatic Deletion option prevents the snapshot from being deleted.
5. With the LUN Snapshots tab still open, select the Snapshot Schedule option.
The list displays three system defined schedules. Select the Default
Protection schedule, and review its creation and retention times. Select and
review the other two system defined schedules one at a time.
Click the New Schedule button and the Create Schedule window opens. The
window provides a snapshot frequency granularity that is hourly/daily/weekly
based and also provides a retention policy configuration. You are going to
configure a snapshot schedule that creates a snapshot every Monday morning
at 7:00 AM and is retained for 7 days.
The Daily/Weekly option should be checked along with each day of the week.
Uncheck Tue, Wed, Thu, Fri, Sat, Sun, and leave Mon checked.
The Retention Policy section should have the Retain for radio button
selected and set to 7 Days.
7. The newly created schedule is now listed in the Snapshot schedule field.
A Modify Schedule window is displayed. Review the message and click Yes
to continue.
Snapshots of the LUN will now be created and retained automatically by the
system.
The system Settings page opens to the Configure Schedule Time Zone
portion of Management.
Expand the Schedule Time Zone list and select (UTC-05:00) Eastern Time
(US & Canada).
Click Apply.
A Changing time zone message is displayed. Read the message. Click Yes
to accept the change.
The Unisphere session should display the Snapshot Schedule page with the
Monday AM Snap schedule selected.
On the right-side slide-out panel, notice the Description for the schedule and
the change in creation time caused by the schedule time zone change. Changing
the schedule time zone affects the timing of existing schedules.
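The displayed-time shift works like any time zone conversion; a sketch with
GNU date (the stored timestamp below is hypothetical, using a winter date so
no daylight saving offset applies):

```shell
# A schedule time stored as 12:00 UTC displays as 07:00 once the
# (UTC-05:00) Eastern Time zone is applied.
TZ=America/New_York date -d '2024-01-01 12:00 UTC' '+%H:%M'
```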
Double-click the Monday AM Snap schedule to open its properties page.
Click Apply.
One by one, select each of the system-defined schedules to see the impact of
the schedule time zone change. Also note that, unlike the user-defined
Monday AM Snap, the system-defined schedules cannot be modified.
From the open File Explorer window, open the text document on the WIN1
LUN0 drive.
Add the following line of text: This line was written to the LUN from WIN1
From your Student Desktop system taskbar, launch the RDC application and
establish an RDC session to the WIN2 host as the local administrator.
Username: \administrator
Password: emc2Local!
Click OK.
2. The WIN2 host must be registered with the UnityVSA-Source system. The first
step is to connect its iSCSI initiator to the array.
From the WIN2 system taskbar, click the iSCSI Initiator icon to launch the
application.
Select the Discovery tab. Click the Discover Portal button. In the Discover
Target Portal window, in its IP address or DNS name field input:
192.168.3.100
Select the Targets tab. In the Discovered targets section, the UnityVSA-
Source IQN is displayed with an Inactive status. Select it, and click the
Connect button to connect the initiator to the target.
From the Advanced Settings window, in the Local adapter dropdown list,
select Microsoft iSCSI Initiator.
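The same discovery and connection can also be scripted with Microsoft's iSCSI
PowerShell cmdlets. This is a sketch only, not part of the lab steps, and
assumes the default Microsoft iSCSI Initiator local adapter:

```powershell
# Register the Unity target portal, list what was discovered, and connect.
New-IscsiTargetPortal -TargetPortalAddress 192.168.3.100
Get-IscsiTarget                                  # shows the UnityVSA-Source IQN
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```

The -IsPersistent flag makes the connection survive reboots, comparable to the
GUI's favorite-targets behavior.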
3. From the Student Desktop system Unisphere session, go to Access > Hosts
and click the + icon and select Host from the list to add a new Host. The
wizard to add a new host opens. Input the following configuration:
Name: WIN2
Check the initiator checkbox and click Next to continue the wizard.
The Results window displays the status of the operation to add the host.
When it has completed successfully, close the window.
The WIN2 host should now be listed in the Hosts window with 1 displayed in
the Initiators column.
In Unisphere, go to Storage > Block > LUNs and select the WIN1 LUN0 and
click the Edit icon to open its properties page.
Select the Snapshots tab, and check the checkbox for the Initial_snap if not
already selected.
From the More Actions dropdown list, select the Attach to host action.
Click the + icon to configure access. In the Access Type dropdown list, select
Read/Write.
5. Open the existing RDC session to the WIN2 host. From the Desktop or
taskbar, click the Server Manager icon to launch the application.
From the upper right corner Tasks dropdown, select Rescan Storage.
A 5.00 GB disk is listed. Right-click it and select Bring Online. A Bring Disk
Online window is displayed. Click Yes to bring the disk online.
7. From the WIN2 taskbar, launch File Explorer and select WIN1 LUN0 drive.
Open the text document. Is the line of text present in the file that you entered
from the WIN1 host before the initial snapshot?
_________________________________
Is the line of text present in the file that you entered from the WIN1 host after
the initial snapshot was created? __________________________
8. Add the following line of text to the file: This line was written to the LUN
snapshot from WIN2.
Open the New Text Document again. Was the line written to the file?
________________________________
In Server Manager under Disks, right-click the 5.00 GB disk and select Take
Offline.
In the Take Disk Offline window, click Yes to offline the disk.
Close Server Manager. In File Explorer the WIN1 LUN0 drive is no longer
present.
Close File Explorer and log off from the WIN2 RDC session.
From the WIN1 LUN0 Properties page Snapshots tab, the Initial_snap
should still be checked. If not, check it.
From the More Actions dropdown, select Detach from host from the list.
On WIN1, in the open File Explorer window with the WIN1 LUN0 drive selected,
open the text document. Is the line of text that was written to the snapshot
from the WIN2 host present in the file?
______________________________________________________
Why? _________________________________________________________
Add the following line of text to the file: This line was written to the LUN
from WIN1 after the initial snapshot was taken and before the Restore
operation.
Right-click the 5.00 GB disk and select Take Offline from the list. In the Take
Disk Offline window, click Yes to offline the disk.
In File Explorer, verify that the WIN1 LUN0 drive is no longer present.
3. From your Student Desktop system, open the existing Unisphere session.
From the WIN1 LUN0 Properties page Snapshots tab, the Initial_snap
should still be checked. If not, check it.
From the More Actions dropdown, select Restore from the list.
The LUN has now been restored to the data state that the Initial_snap
snapshot captured.
Launch the Server Manager application and navigate to File and Storage
Services > Volumes > Disks. Right-click the 5.00 GB disk and select Bring
Online from the list.
In the Bring Disk Online window, click Yes to bring the disk online.
In the open File Explorer window with the WIN1 LUN0 drive selected, open
the text document.
Is the line of text written to the LUN by WIN1 prior to the initial snapshot
present in the file?
_____________________________________________________
Is the line of text written to the LUN by WIN1 after the initial snapshot and prior
to the restore present in the file? ___________________________________
Is the line of text written to the snapshot from WIN2 present in the file?
________________________________
This completes the LUN Snapshot lab exercise. You have created a LUN
snapshot and a snapshot schedule, and set a schedule time zone. You
accessed a LUN snapshot to see its captured data state and wrote to the
snapshot. You also performed a Snapshot restore operation to restore a LUN
to the data state captured in the snapshot.
Scenario:
2. In Unisphere, go to Storage > File > File Systems. Click the + icon to create
a file system.
The Create a File System wizard opens. In the Protocol section, select the
Windows Shares (SMB) radio button. In the NAS Server list, NAS_SMB
(Replication: No, Multiprotocol: No) is listed as the NAS Server.
In the Configure the File-level Retention section, leave the Off radio button
selected and click Next.
In the Pool field, select FAST VP-2. Configure the Size to be 5 GB. Leave all
other settings at default. Click Next to continue the wizard.
In the Shares > Configure the Initial Share section, check the SMB Share
(Windows) checkbox. Name the share: DP_FS_share. Click Next to continue
the wizard.
In the SMB Share’s Other Settings > Configure the SMB Share’s Other
Settings section, keep the defaults. Click Next to continue the wizard.
In the Snapshot > Configure Snapshot Schedule section, check the Enable
Automatic Snapshot Creation checkbox.
From the Snapshot schedule dropdown, select the Protection with longer
retention schedule from the list. Click Next to continue the wizard.
You will not configure Replication. Click Next to continue the wizard.
The Summary section displays the details of the file system creation. Click
Finish to perform the creation operation.
The Results section displays the status of the file system creation. Close the
window when the operation completes successfully.
3. Check the DP_fs file system checkbox and click the Edit icon to open its
properties page.
Navigate to the Snapshots page. There are no snapshots of the file system
yet, because the scheduled time for the automatic creation has not yet
occurred. The system creates snapshots of the file system that are read-only
and named based on the date and time the snapshot was created.
4. Access the DP_fs file system share, and create data on it before creating any
snapshots.
Launch the RDC application, and establish an RDC session to WIN1 as the
Domain administrator.
Username: hmarine\administrator
Password: emc2Admin!
Click OK.
5. From the WIN1 system taskbar, click the Run icon. In the Open field input:
\\NAS_SMB\DP_FS_share and click the OK button. The share window opens.
Right-click in the share window, and select New > Text Document. Open the
file and input the following text: This line was written to the file system prior
to any snapshot creation.
In the Description field input: This is the first snapshot of the file system. It
is a read-only snap.
2. Open the existing RDC session to WIN1. In the share window, open the New
text document and add the following line to the file: This line was added
after the first read-only snapshot was created.
3. Return to your open Unisphere session. In the Snapshots tab, click the + icon
to create a snapshot.
In the Name field input: Second_snap_rw. In the Description field input: This
is the second snapshot of the file system. It is a read/write snap.
Select the radio button to set the Access Type to Read/Write (shares).
The page now lists the two manually created snapshots: one read-only snap
and one read/write snap.
4. Open the existing RDC session to WIN1. In the share window, open the New
text document and add the following line to the file: This line was added
after the second read/write snapshot was created.
5. Place some data on the NFS_fs file system before creating a snapshot of it.
From your Student Desktop system taskbar, click the PuTTY icon to launch
the application and establish an SSH session to the Linux1 system.
In the PuTTY screen Host Name (or IP address) field input: Linux1 and click
the Open button.
Add a line of text to the epallis file by issuing the command: echo "this is
text written to the file prior to the read-only snapshot of
the file system." >> epallis
The line of text will be appended to the end of the epallis file. Verify that the
text is present by issuing the command: more epallis
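The append-and-verify pattern used here is plain shell redirection; a local
sketch (the file path below is hypothetical, not the lab's epallis file):

```shell
# '>>' appends to a file without overwriting it; 'cat' (or 'more') verifies.
f=/tmp/epallis_demo
echo "first line"  > "$f"     # '>' creates or overwrites the file
echo "second line" >> "$f"    # '>>' appends to the end
cat "$f"                      # prints both lines in order
```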
Return to your open Unisphere session. Navigate to Storage > File > File
Systems and select the NFS_fs file system by placing a check in its
checkbox and clearing any other checkboxes.
Click the Edit icon to open the properties page and select the Snapshots tab.
Click the + icon to create a new snapshot.
In the Name field input: NFS_cvfs. In the Description field input: This is a
read-only snapshot to be accessed via NFS using the .ckpt data path.
Add a line of text to the epallis file by issuing the command: echo "this is
text written to the file after the creation of the read-only
snapshot of the file system." >> epallis
Verify that the text was appended to the end of the epallis file by issuing the
command: more epallis
From the Select a source for the new share window, click the grid in the File
System field, and select the Snapshot for File System "DP_fs" radio button.
Leave the Local Path setting as is to create a top-level share. Click Next to
continue the wizard.
In the Advanced > Provide SMB share details section, leave the default
settings in place, and click Next to continue the wizard.
The Results screen displays the share creation process. Close the window
when the share is successfully created.
2. Open the existing RDC session to WIN1. It should have an open window to the
DP_FS_share.
From the WIN1 Desktop or system taskbar, click the Run icon to open
another window to the read/write snapshot share.
There should now be two open share windows: one to the file system share
DP_FS_share and one to the RW snapshot share DP_fs_RW_snap_share.
From each window, open the existing New Text Document and compare
them.
What are the differences between the content of the two files?
___________________________________________________
Why? _______________________________________________
Add the following line of text to the New Text Document opened from the
DP_fs_RW_snap_share: Text added to the RW snapshot.
Open the file from the DP_fs_share. Is the newly added line to the
DP_fs_RW_snap_share present in this file? __________________________
Why? ________________________________________________________
4. You will now access the read-only snapshot of the DP_fs using the snapshot
CVFS mechanism.
From your open DP_FS_share window, right-click in the white space and
select Properties.
The Folder versions section lists read-only snapshots of the file system with a
date and time timestamp when the snapshot was taken. You should see the
read-only snapshot that you created manually. If time had elapsed for the
snapshot schedule to have automatically created snapshots, they would be
listed here.
Select the snapshot you manually created, and click the Open button.
A new window opens with the snapshot path listed in its navigation field.
Right-click the New Text Document file to display the list of operations that
can be done to the file. Is the Delete option present?
_______________________________________
Why? _________________________________________________________
5. From the Network > NAS_SMB > DP_FS_share window, right-click the New
Text Document to display the list of operations. Is the Delete option present?
_____________________________
Why? ________________________________________________________
6. You can now recover the original file from the read-only snapshot. In the open
DP_FS_share (NAS_SMB) Properties window, highlight the DP_FS_share
folder and select Restore.
Close the open windows to the file system share and the read-only snapshot.
7. Access the read-only snapshot of the file system from the Linux1 NFS client.
Access the existing SSH session to list the content of the NFS share by
issuing the command: ls -la /nfs
Issue the following command to see the file content: more /nfs/epallis
This is the file that resides on the file system. To view and access the
read-only snapshot of the file system, you must explicitly use the hidden
.ckpt data path. You will then see a snapshot folder with a date/time name
format for when the snapshot was created. To see the read-only snapshot, issue
the following command: ls -la /nfs/.ckpt
To access the snapshot, change directory into the snapshot folder name using
the following command: cd /nfs/.ckpt/<snapshot folder name>
List the content of the snapshot by issuing the following command: ls -la
Look at the file content using the following command: more epallis
Is the content of the epallis file on the snapshot different than the content of
the file on the file system?
___________________________________________
Why? ________________________________________________________
Log off the epallis user by issuing the following command: exit
Unmount the file system by issuing the following command: umount /nfs
This completes the lab exercise. You have applied a snapshot schedule to a
file system during its creation. You have created manual read/write and
read-only snapshots of file systems and have accessed them.
Scenario:
This lab demonstrates how to configure a Thin Clone resource on a Dell Unity XT
storage system.
Note: The lab requires you to move back and forth between Unisphere and
the RDC session to WIN1. Also pay attention to the LUNs as they are
presented to the Windows host. Clones inherit the name of the Base LUN,
and the disk numbers increase as the LUNs are made available to the
host. In the lab you will see three 15 GB disks created: one for the
original Base LUN, and one for each of the two Thin Clones created.
Number of LUNs: 1
Pool: Capacity
Size: 15 GB
Thin: Checked
Click Next.
3. From the Configure Access window, click the + icon and check the WIN1
host from the list of available hosts.
Click OK.
Click Next.
Click Finish and verify that the LUN was created successfully.
Username: \administrator
Password: emc2Local!
Click OK.
7. From the WIN1 system taskbar, click the Computer Management icon.
8. Expand the Local Disk (C:) tree and open the Big_Files folder.
Select the first four 1 GB files, and copy the files to the Base LUN.
Note: Windows may open a window asking if you want to format the disk; select
Cancel.
Check the Base LUN box, and click the Edit icon.
10. From the Base LUN Properties window, select the Snapshots tab.
Click OK.
The snapshot is created. Note the time at which the snapshot was taken. Also
note the Auto-Delete and Attached columns. To create a Thin Clone, these
columns should display a status of No.
11. With the Base LUN checked, from the More Actions dropdown, select
Clone.
The Populate Thin Clone window is displayed. You have the option of using
an existing snapshot of the Base LUN or creating a new snapshot of the Base
LUN.
Take the default to create a Clone using the existing snapshot of the Base
LUN (Snap1_Base_LUN4GB).
12. From the Configure Thin Clone window, input the following:
Name: TC1_Snap1_Base_LUN4GB
Click Next.
Click the + icon, add host WIN1, and close the window.
From the Results page, verify that the Thin Clone was created.
Select TC1_Snap1_Base_LUN4GB.
Perform a Rescan Disks of the storage from Disk Management. Note the
disk numbers for the disks.
Copy the last two remaining files (500 MB and 750 MB) from the Big_Files
folder to the open LUN.
Perform Rescan Disks, then bring the first 15 GB LUN Online and open the
LUN.
Did the two new files get written to the Base LUN? _________
Name: Snap2_TC1_Base_LUN6GB
Verify that the snapshot appears under Snapshots and close the window.
From the Populate Thin Clone page, verify the radio button for Clone using
an existing snapshot of LUN TC1_Snap1_Base_LUN4GB is selected.
Name: TC2_Snap2_Base_LUN6GB
Click Next.
Verify that the Thin Clone was created and close the window.
Highlight TC2_Snap2_Base_LUN6GB.
18. From the WIN1, open Computer Management if not already open.
Perform a Rescan Disks of the storage from Disk Management. Note the
disk numbers for the disks. A fourth disk is displayed.
Record here the Disk number for the newest 15 GB disk: _________
Bring the most recent 15 GB disk Online and open the LUN.
19. Offline the current LUN (the most recent 15 GB disk).
Perform Rescan Disks and Online the original 15 GB disk (Base LUN) and
open it.
This verifies that changes that are made to the Clone do not affect the Base
LUN.
20. You have taken two snapshots and created two Thin Clones from those
snapshots.
Check the box for the Base LUN (clear any other LUNs), and select Refresh
from the More Actions dropdown.
The window displays all the available snapshots that you can select to refresh
the Base LUN. Read the line at the bottom about eligible snapshots.
Offline the original Base LUN, and then Online the Base LUN.
Open the LUN. It should display the six files since you refreshed the LUN with
the contents of TC2_Snap2_Base_LUN6GB.
4. In Unisphere: From the LUNs page, select the Base LUN, and select Refresh
from the More Actions dropdown.
Offline the original Base LUN, then Online the Base LUN.
Open the LUN. The LUN should display the original 4 GB files since you
refreshed the Base LUN with the contents of Snap1_Base_LUN4GB.
You have refreshed the Base LUN from two different images.
6. The next few steps have you clean up the resources that were created for the
exercise.
To remove Host Access to the LUNs, go to the Storage > Block > LUNs
page.
Check the box for WIN1, and click the Trash can icon (Remove Access).
• TC1_Snap1_Base_LUN4GB
• TC2_Snap2_Base_LUN6GB
9. You have successfully created Thin Clones within a single Base LUN
family and restored the Base LUN from a snapshot of a Thin Clone.
Scenario:
This lab demonstrates how to create asynchronous remote replication sessions
for LUNs, NAS servers, and their associated file systems. LUN and file system
snapshots are also replicated. You perform replication Failover, Failback,
Failover with sync, and Resume operations.
More References:
• Dell Unity: Replication Technologies white paper
1. From your Student Desktop system taskbar, launch a browser and establish a
Unisphere session to the UnityVSA-Source system at IP address
192.168.1.113.
Open another browser tab from the + icon, and establish a Unisphere session
to the UnityVSA-Destination system at IP address 192.168.2.113.
2. The first replication communication channel to configure is the Interfaces
used for replication on both systems. The Interfaces establish an IP
connection between the systems to carry the replicated data.
The User Name and Password fields are populated with credentials which
are: admin/Password123!
In the Maximum Bandwidth field, enter 51200. This limits the bandwidth to 50
megabits per second.
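Assuming the Maximum Bandwidth field is specified in kilobits per second (the
51200 → 50 Mb/s mapping above suggests this), the conversion is simple
arithmetic:

```shell
# 51200 Kb/s divided by 1024 Kb per Mb gives the limit in Mb/s.
echo $(( 51200 / 1024 ))
```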
In the Days of the Week section, check the boxes for Mon, Tue, Wed, Thu,
Fri.
From the Hours of the day section, select 7:00 AM from the Start Hour list.
Select 7:00 PM from the End Hour list. Click OK to create the schedule.
Click Next to continue. Review the Summary page. Click Finish to create the
Replication Connection. The operation advances to the Results page and
takes a moment to complete several tasks to create the connection. Close the
Results page when complete. The new Connection between the two systems
is listed when the operation completes.
Select the Replication tab and click the Configure Replication button.
3. The next wizard section configures the destination storage resources for
creating the replicated LUN. The Name field defines the name the LUN has on
the destination system. Keep the populated name of: WIN1 LUN0.
The Pool field defines the destination system storage pool the LUN is created
from. Keep the populated pool of FAST VP-1.
Thin: Checked
The destination Tiering Policy can also be configured. Keep the default policy
of Start High Then Auto-Tier.
4. The wizard displays a Summary for the configuration of the replication session
as shown here:
The Results section displays the status of the creation operation. Verify the
Overall status displays 100% Completed.
The properties include: Session Name, Mode, Local Role, Time of Last Sync,
and Replicate Scheduled Snapshots.
The replicated object on the two systems identifies which system allows IO to
the LUN, and what Replication operations can be done to the session.
Check the box for the WIN1 LUN0 Resource. Click the More Actions
dropdown list. Compare its active and grayed out replication operations to the
ones recorded in the previous step. Are they the same or different?
_________________________
Click the View/Edit icon for the Session Name from the right of the session
name. If necessary, expand the window to view the Edit icon.
The replication session is displayed. Use the window slider to see the session
Name field on the far right. Use the refresh icon until the new session name is
displayed; this takes a minute to update.
Click the More Actions dropdown list. Compare its active and grayed out
replication operations to the ones recorded previously from the UnityVSA-
Source system.
Why? __________
9. Navigate to Storage > Block > LUNs. The WIN1 LUN0 is displayed. It was
created on the destination system by the replication process.
Select the Host Access tab and notice that no host has been granted access
to the LUN. Host access must be configured on the LUN so that its replicated
data is accessible to a host should the source site become unavailable and the
replication session be failed over.
11. Establish an RDC session to the WIN2 host as the local administrator.
12. From the WIN2 system taskbar, click the iSCSI Initiator icon to launch the
application.
Select the Discovery tab. Click the Discover Portal button. In the Discover
Target Portal window, in its IP address or DNS name field input:
192.168.3.101
Select the Targets tab. In the Discovered targets section, the UnityVSA-
Destination IQN is displayed with an Inactive status. Select it and click the
Connect button to connect the initiator to the target.
In the Connect to Target window, check the Enable multi-path option. Click
the Advanced button.
From the Advanced Settings window, in the Local adapter dropdown list,
select Microsoft iSCSI Initiator. In the Initiator IP dropdown list, select
192.168.3.107. From the Target portal IP dropdown list, select 192.168.3.101
/ 3260. Click OK to configure the settings.
From the LUN properties page Host Access tab, click the + icon to add host
access to the LUN.
The Select Host Access window opens. From the More Actions dropdown
list, select Add Host.
14. The Add a Host wizard opens. Input the following host configuration:
Name: WIN2
A Summary window displays the information for the host being added. Click
Finish to initiate the Add Host operation.
15. In the Select Host Access window, check the WIN2 checkbox to select the
host and click OK.
The WIN2 host is now configured to access the replicated WIN1 LUN0.
Select: NAS_SMB.
Click the Edit icon to open the NAS Server properties page.
Select the Replication tab and click the Configure Replication button.
3. The next wizard section configures the destination storage resources for
creating the replicated NAS Server and any of its associated file systems. The
Name field defines the name the NAS Server has on the destination system.
Keep the populated name of: NAS_SMB.
The Pool field defines the destination system storage pool the storage
resource is created from. Keep the populated pool of FAST VP-2.
The Results window displays the status of the replication creation operation. It
takes some time to complete so be patient. Close the window when the
operation completes.
The NAS Server replication session details are displayed in the Replication
tab of the NAS_SMB Properties page.
4. From the Storage > File > File Systems tab, select the SMB_fs file system
and click the Edit icon to open its properties page.
Select the Replication tab. The session details are shown for the SMB_fs file
system. The system automatically replicates the existing file systems that are
associated with a NAS Server when the NAS Server is replicated.
In the Session Name field, replace the existing name with: SMB_fs_rep and
click OK.
6. Locate the replication session for the DP_fs file system and select it. Then
click the Edit icon to open its details page.
7. Locate the session for the NAS_SMB NAS Server and select it. Click the Edit
icon to open its session details page.
Click the Edit icon for the Session Name. Replace its session name with:
NAS_SMB_rep and click OK.
In the Network Interfaces section, the interface for the NAS_SMB server is
shown. This interface configuration was replicated along with the NAS Server
from the UnityVSA-Source. However, the UnityVSA-Destination system is
connected to a different network. So the interface configuration of the
replicated NAS Server must be changed.
You can now enter a new interface configuration for the NAS Server.
In the Edit NAS Server Interface confirmation window, click Yes to apply the
change.
The interface is now modified for the network that is connected to the
UnityVSA-Destination system. If the NAS Server replication session is failed
over, it will be available on the correct network and be able to provide data
services to users.
2. The WIN1 system has access to the WIN1 LUN0 from the previous lab
exercises.
From the WIN1 system taskbar, open File Explorer and select the WIN1
LUN0 (E:).
Open the New Text Document. Add the following line of text to the file: This
line was added from WIN1 before the Failover with sync was performed.
3. Before performing the replication operation Failover with sync, quiesce host
IO to WIN1 LUN0 on the UnityVSA-Source system. This is done by using
Server Manager to offline the disk.
Right-click the 5.00 GB disk and select Take Offline. Click the Yes button to
offline the disk.
Select the WIN1_LUN0_rep session. From the More Actions dropdown list,
select the Failover with sync operation.
The WIN1_LUN0_rep session State will now be Failed Over with Sync.
5. Access the existing RDC session to WIN1. In the open Server Manager
window, right-click the 5.00 GB disk and select Bring Online. Click the Yes
button to bring the disk online.
Why? _______________
In Server Manager, go to File and Storage Services > Disks. In the upper
right corner from the Tasks dropdown, select Rescan Storage and click Yes
in the Rescan Storage confirmation window.
7. From WIN2, launch File Explorer, access the WIN1 LUN0 (E:) drive and open
the New Text document.
Is the line of text added before the Failover with sync present in the file? ____
Add the following line of text to the file: This line was added from WIN2
when failed over to UnityVSA-Destination.
8. Next, perform the Resume operation on the replication session. The resume
operation keeps the read/write access to the LUN from the UnityVSA-
Destination system and restarts the replication from the UnityVSA-Destination
system to the UnityVSA-Source system.
Select the WIN1_LUN0_rep session, and from the More Actions dropdown
list, select Resume. The Resume Session window provides information about
the operation. Review the information and click the Yes button to resume the
replication session.
When completed, the session is displayed with a normal state (Auto Sync
Configured).
9. Now perform a Sync operation. This operation manually synchronizes the data
state of the LUN on the UnityVSA-Source to the data state that is currently on
the UnityVSA-Destination system.
With the WIN1_LUN0_rep session selected, from the More Actions dropdown
list, select the Sync operation.
10. At this point, the data states of the LUN on the UnityVSA-Destination and the
UnityVSA-Source are the same. The LUN on the UnityVSA-Destination system
is in the read/write state, while the LUN on the UnityVSA-Source is still
unavailable for access.
To test this, open the existing RDC session to WIN1. Open Server Manager
and try to bring the LUN online again.
11. In the next few steps, you return the replication to its state before the
Failover with sync operation. But before doing that, the WIN2 host IO should
be quiesced. You do this by using Server Manager to offline the disk.
Open the existing RDC session to WIN2. From its open Server Manager
window, right-click the 5.00 GB disk and select Take Offline. Click Yes to
offline the disk.
From the More Actions dropdown list, select the Failover operation.
Review the information about the operation in the Failover Session window.
Click the Yes button to perform the failover operation.
13. Open the existing RDC session to the WIN1 system. In the open Server
Manager window, if the disk is not already online, right-click the 5.00 GB disk
and select Bring Online. Click the Yes button to online the disk.
Note: The disk may need to be placed offline and then online to be visible.
From the WIN1 Desktop or system taskbar, open File Explorer and select the
WIN1 LUN0 (E:) drive. Open the New Text Document.
Is the line that was written by the WIN2 host during failover present? ___
14. After the Failover operation, the Replication session is paused and must be
restarted by performing a Resume operation.
The session is now shown in the normal state (Auto Sync Configured).
• Username: hmarine\administrator
• Password: emc2Admin!
Click OK.
2. Before performing replication operations, add some data to the Top$ share on
the SMB_fs file system served by the NAS_SMB NAS Server.
From the WIN1 taskbar, click the Run icon and in the Open field input:
\\NAS_SMB\Top$
A window to the share opens. Right-click in the share, and select New > Text
Document. Open the file, and add the following line to the file: This line was
added prior to the manual sync operation.
From the More Actions dropdown list, select Sync to perform a manual Sync
of the replication session. Click Yes to perform the sync operation.
Double-click the SMB_fs_rep session to open its Details window again.
4. Return to the RDC session for WIN1. From the WIN1 Desktop or system
taskbar, click the Run icon and in the Open field input: \\NAS_SMB\Top$
A window to the share opens. Open the New Text Document, and add the
following line to the file: This line was added prior to the failover operation.
5. Perform a failover operation for the NAS Server and its associated file
systems. The NAS Server is failed over first, and then the file systems.
Select the NAS_SMB_rep session. From the More Actions dropdown list,
select the Failover operation.
Review the information that is displayed in the Failover NAS Server and File
System Sessions window.
Verify that SMB_fs and DP_fs sessions are listed for Failover.
The sessions should be listed in the Failed Over state with a yellow triangle.
From the WIN1 Desktop or system taskbar, click the Command Prompt icon.
The command flushes the client DNS Resolver cache. This is used because
the WIN1 client had previously accessed the NAS_SMB NAS Server on the
UnityVSA-Source system where it has an IP address of 192.168.1.115.
Therefore, that name to IP address resolution was held in the client’s DNS
cache. The NAS_SMB NAS Server on the UnityVSA-Destination system is
configured for the IP address of 192.168.2.115. The client DNS cache must be
flushed so that the client can access the NAS_SMB NAS Server at its new IP
address.
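The cache flush and the fresh name lookup can also be done from a PowerShell prompt on WIN1. This is a sketch using the standard Windows DnsClient cmdlets, not part of the lab steps.

```powershell
# Flush the client DNS resolver cache (equivalent to ipconfig /flushdns)
Clear-DnsClientCache

# Resolve the NAS Server name again; after failover it should
# return the destination-side address, 192.168.2.115
Resolve-DnsName NAS_SMB
```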
Is the line of text added before the manual sync operation present?
_________
Is the line of text added before the failover operation present? ________
Why? _____________________________________________________
8. Add the following line to the file: This line was added during failover to the
UnityVSA-Destination system.
Select the SMB_fs_rep session and from the More Actions dropdown list
select the Failback operation. Click the Failback button to failback the
session.
Why? ___________________________________________________
10. Select the NAS_SMB_rep session and from the More Actions dropdown list,
select the Failback operation.
From the Failback NAS Server and File System Sessions window, verify
that the associated file systems are listed for Failback.
Click Failback.
11. Return to the RDC session to the WIN1 system. In the Command window,
flush the DNS client cache by running the following command: ipconfig
/flushdns
From the taskbar, click the Run icon and in the Open field input:
\\NAS_SMB\Top$ to access the Top$ share.
Is the line of text added during failover present in the file? _____________
Why? ____________________________________________________
Navigate to Storage > File > NAS Servers. Select the NAS_SMB NAS Server
and click the Edit icon to open its properties page.
Select the Network tab. Review the information in the Network Interfaces
section.
13. Access the open Unisphere session to the UnityVSA-Destination system and
go to Storage > File > NAS Servers. Select the NAS_SMB NAS Server and
click the Edit icon to open its properties page.
Select the Network tab and review the information in the Network Interfaces
section.
14. Ping the two different IP addresses of the NAS_SMB NAS Server.
Access the RDC session to the WIN1 system. In the command window, issue the
following command: ping 192.168.1.115
The NAS Server network interface that displays Preferred = Yes is the NAS
Server that is the source of the replication session and therefore has its
interface up and available for access. The NAS Server network interface that
displays Preferred = No is the NAS Server that is the destination of the
replication session and should not be reachable for access. Therefore, its
interface is down and not accessible.
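The same reachability check can be scripted from PowerShell on WIN1; Test-Connection is the cmdlet equivalent of ping. This is a sketch, not part of the lab steps.

```powershell
# The source-side interface (Preferred = Yes) should respond
Test-Connection 192.168.1.115 -Count 2

# The destination-side interface (Preferred = No) should time out,
# because the destination NAS Server keeps its interface down
Test-Connection 192.168.2.115 -Count 2
```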
Username: \administrator
Password: emc2Local!
Click OK.
From the WIN1 Desktop or system taskbar, open File Explorer and select the
WIN1 LUN0 (E:) drive.
This is the original data on the LUN of which we will take a snapshot.
After restoring the LUN from a snapshot, this is what you should see.
Click OK.
4. Quiesce IO before performing the Failover with sync operation by using
Server Manager to offline the disk.
Select the WIN1_LUN0_rep session. From the More Actions dropdown list,
select the Failover with sync operation.
Review the message in the Failover Session After Sync confirmation window
and click Yes to confirm the operation.
6. Open the RDC session to the WIN2 system. Log in as local administrator.
Username: \administrator
Password: emc2Local!
Click OK.
In Server Manager, go to File and Storage Services > Disks. In the upper
right corner from the Tasks dropdown, select Rescan Storage. Click Yes to
rescan.
Right-click the 5.00 GB disk and select Bring Online. Click Yes to bring the
disk online.
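These Server Manager actions also have PowerShell equivalents, shown here as a sketch; the disk number 1 is an assumption, so verify the number of the 5.00 GB disk with Get-Disk before running Set-Disk.

```powershell
# Rescan the storage buses, equivalent to Tasks > Rescan Storage
Update-HostStorageCache

# Confirm the number of the 5.00 GB disk
Get-Disk

# Bring the disk online (disk number 1 is assumed here)
Set-Disk -Number 1 -IsOffline $false
```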
7. From the WIN2 Desktop or taskbar, open File Explorer and select the WIN1
LUN0 disk. Open the New Text Document.
Are the lines of text indicating the original data on the LUN added before the
Failover with sync present in the file? __________
Enter some text: This line was written during a Failover operation to the
UnityVSA-Destination.
From the Restore window, read the message and click OK. This creates a
backup restore point and restores the LUN to the data state captured by the
snapshot.
9. Next, perform the Resume operation on the replication session. The Resume
operation keeps read/write access to the LUN on the UnityVSA-Destination
system and restarts the replication with its direction reversed, from the
UnityVSA-Destination system to the UnityVSA-Source system.
Select the WIN1_LUN0_rep session and from the More Actions dropdown list
select Resume. The Resume Session window provides information about the
operation. Review the information and click the Yes button to resume the
replication session.
The session is now displayed with a normal state (Auto Sync Configured).
10. Perform a Sync operation to manually synchronize the data state of the LUN
on the UnityVSA-Source to the data state that is on the UnityVSA-Destination
system.
With WIN1_LUN0_rep selected, from the More Actions dropdown list, select
the Sync operation.
11. In the next few steps, you return the replication to its state before the
Failover with sync operation. But before doing that, the WIN2 host IO should
be quiesced. Do this by using Server Manager to offline the disk.
From your Student Desktop system, open the existing RDC session to WIN2.
From its open Server Manager window, right-click the 5.00 GB disk and select
Take Offline.
12. Next, perform a Replication Failover operation to return the LUN to read/write
status on the UnityVSA-Source system.
Select the WIN1_LUN0_rep session. If the session state still shows Failed
over with Sync, refresh the page.
From the More Actions dropdown list, select the Failover operation.
Review the information about the operation in the Failover Session window.
Click the Yes button to perform the failover operation.
The replication session should now show in the Failed Over state.
13. Open the existing RDC session to the WIN1 system. In the open Server
Manager window, right-click the 5.00 GB disk and select Bring Online. Click
the Yes button to online the disk.
From the WIN1 Desktop or system taskbar, open File Explorer and select the
WIN1 LUN0 disk. Open the New Text Document.
Has the LUN been restored with the original data? ________
14. After the Failover operation, the replication session is paused and must be
restarted by performing a Resume operation.
The session is now shown in the normal state. The source of the replication is
again on the UnityVSA-Source system, and the replication destination is to the
UnityVSA-Destination system.
The snapshot was used to restore the original state of the source LUN.
1. Launch the RDC session to WIN1 and log in as the Domain administrator.
• Username: hmarine\administrator
• Password: emc2Admin!
Click OK.
2. Before performing replication operations, add some data to the DP_fs file
system served by the NAS_SMB NAS Server.
From the WIN1 taskbar, click the Run icon and in the Open field enter:
\\NAS_SMB\DP_fs_share.
A window to the share opens. Open the New Text Document file and remove all
of the existing text. Add the following lines of text to the file:
This text should be seen after restoring the file system from a replicated
snapshot.
Name: Snap1_DP_fs
Click OK.
5. Return to the RDC session to the WIN1 system. From the WIN1 Desktop or
system taskbar, click the Run icon.
Delete the two lines you created earlier, and add the following:
Double-click the DP_fs_rep session to view its details. Record the time
displayed for Time of Last Sync: _________
Select the DP_fs_rep session. Under More Actions, select Sync. Click Yes
to perform the sync operation. The operation takes a moment to complete.
Close the DP_fs_rep Details window. Close the DP_fs Properties window.
7. Now perform a failover operation for the NAS Server and its associated file
systems.
Select the NAS_SMB_rep session. From the More Actions dropdown list,
select the Failover operation.
Review the information that is displayed in the Failover NAS Server and File
System Sessions window. Verify that SMB_fs and DP_fs sessions are listed
for Failover.
Return to the RDC session to the WIN1 system. From the WIN1 Desktop or
system taskbar, click the Command Prompt icon.
The command flushes the client DNS Resolver cache. This is used because
the WIN1 client had previously accessed the NAS_SMB NAS Server on the
UnityVSA-Source system where it has an IP address of 192.168.1.115.
Therefore, the name to IP address resolution was held in the client’s DNS
cache. The NAS_SMB NAS Server on the UnityVSA-Destination system is
configured for the IP address of 192.168.2.115. The client DNS cache must be
flushed so it can access the NAS_SMB NAS Server at its new IP address.
Is the line of text you added after the manual sync operation present?
__________
Are the two lines of text added before the sync operation present?
__________
Why? _____________________________________________________
9. Add the following line to the file: This line was added during failover to the
UnityVSA-Destination system.
10. Restore the DP_fs file system from the Snap1_DP_fs snapshot.
Name: Snap1_DP_fs_BU
Navigate to Protection & Mobility > Replication > Sessions and refresh the
page.
Select the NAS_SMB_rep session and from the More Actions dropdown list,
select the Failback operation.
From the Failback NAS Server and File System Sessions window, verify
that the associated file systems are listed for failback.
Select Failback.
Once completed, the replication sessions should display a state of Auto Sync
Configured (each session's replicated resource is a source on the UnityVSA-
Source system and a destination on the UnityVSA-Destination system).
Return to the RDC session to the WIN1 system. In the Command window,
flush the DNS client cache by running the following command: ipconfig
/flushdns
From the Desktop or taskbar, click the Run icon and in the Open field enter:
\\NAS_SMB\DP_fs_share to access the share.
Are the two lines of text added before the sync and failover operations present
in the file? ___________
Why? ____________________________________________________
Click Next.
IP Address: 192.168.2.118
Gateway: 192.168.2.1
Click Next.
Password: emc2Admin!
Click Next.
Click Next.
Verify that DNS is enabled for Domain hmarine.test and is using the
192.168.1.50 server.
Click Next.
Click Next.
Click Finish.
Enable SSH access by selecting Enable SSH and clicking the blue Execute
button.
Username: service
Password: Password123!
12. Launch the RDC session to WIN1 and log in as the Domain administrator.
• Username: hmarine\administrator
• Password: emc2Admin!
Click OK.
13. Simulate user caused data loss in a file on the production DP_fs file system
served by the NAS_SMB NAS Server.
From the WIN1 taskbar, click the Run icon and in the Open field enter:
\\NAS_SMB\DP_fs_share. Click OK.
A window to the share opens. Open the New Text Document file and verify the
following lines of text are present:
This text should be seen after restoring the file system from a replicated
snapshot.
From the WIN1 taskbar, click the Run icon and in the Open field enter:
\\NAS_SMB_Proxy.
Double-click the NAS_SMB share folder to open it. Various files and folders
are shown for the file systems, snapshots, and internals associated with the
replicated NAS_SMB NAS Server.
This text should be seen after restoring the file system from a replicated
snapshot.
This is the data needed to be recovered to the production file system. Close
the file.
15. Copy the file from the Snap1_DP_fs snapshot to the DP_fs production file
system.
Copy the New Text Document from the share window accessing the
Snap1_DP_fs snapshot and paste it into the share window open to the
DP_fs_share.
Open the New Text Document just pasted into the DP_fs_share and verify it
contains the following lines of text:
This text should be seen after restoring the file system from a replicated
snapshot.
16. You have restored a file from a snapshot replica using a NAS proxy server to
the production file system.