EMC VMAX - Fully Pre-Allocate Tdev
Range of TDEVs:
symconfigure -sid xxx -cmd start allocate on tdev 0c6e:1116 end_cyl=last_cyl
allocate_type=persistent; commit
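The allocation can afterwards be checked from the CLI; something along these lines should work (the -devs range option is assumed here):
symcfg -sid xxx list -tdev -devs 0c6e:1116 -gb -detail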
Example using Unisphere:
From the Unisphere GUI navigate to Storage > Volumes, right-click the device you wish to modify and select Start Allocate.
which takes some time; as a result of this latency the application may encounter poor performance. For these reasons EMC strongly recommends creating and mapping dedicated devices as Gatekeepers.
VMAX3: Creating the RDM Volumes and Associated Masking View
This is an example Masking View for a two node ESXi cluster on which the VMAX
management virtual machine shall reside:
1. Create a Port Group with the VMAX FA ports that the ESXi hosts have been zoned to:
symaccess -sid 123 -name MGMT_VM_PG -type port create
symaccess -sid 123 -name MGMT_VM_PG -type port -dirport 1d:24,2d:31,3d:28,4d:27 add
2. Create the Initiator Group containing the ESXi hosts' WWNs:
symaccess -sid 123 -name MGMT_VM_IG -type initiator create -consistent_lun
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF8 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF9 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4C add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4D add
3. Create the Storage Group for the Gatekeeper RDM Volumes:
symsg -sid 123 create MGMT_VM_SG -slo optimized -srp SRP_1
Listing the SRP:
symcfg list -srp
4. Create the Gatekeeper volumes (10 Gatekeeper volumes in this example) and add to the
MGMT_VM_SG:
symconfigure -sid 123 -cmd create dev count=10, emulation=FBA, sg=MGMT_VM_SG,
size=3 CYL, config=tdev; preview -nop
symconfigure -sid 123 -cmd create dev count=10, emulation=FBA, sg=MGMT_VM_SG,
size=3 CYL, config=tdev; prepare -nop
symconfigure -sid 123 -cmd create dev count=10, emulation=FBA, sg=MGMT_VM_SG,
size=3 CYL, config=tdev; commit -nop
5. Create the Masking View:
symaccess -sid 123 create view -name MGMT_VM_MV -sg MGMT_VM_SG -pg
MGMT_VM_PG -ig MGMT_VM_IG
View Configuration Details
Confirm that the hosts are logged in to the correct VMAX ports:
symaccess -sid 123 list logins -wwn 21000024ff5CXXF8
symaccess -sid 123 list logins -wwn 21000024ff5CXXF9
symaccess -sid 123 list logins -wwn 21000024ff55XX4C
symaccess -sid 123 list logins -wwn 21000024ff55XX4D
Verify that the HBA is a member of the correct Initiator Group:
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF8
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF9
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4C
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4D
Storage Group details:
symaccess -sid 123 list -type storage -name MGMT_VM_SG -v
symaccess -sid 123 show MGMT_VM_SG -type storage
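To confirm the completed Masking View created above (using the example names from this section):
symaccess -sid 123 show view MGMT_VM_MV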
##########################################
$VMhostname = "*"
ForEach ($VMhostname in (Get-VMHost -Name $VMhostname) | Sort-Object)
{
Write-Host $VMhostname
}
Write-Host "Please enter the ESXi Hostname where your target VM resides: " -ForegroundColor Yellow -NoNewline
$VMhostname = Read-Host
######################################
$Datastore = "*"
ForEach ($Datastore in (Get-Datastore -Name $Datastore) | Sort-Object)
{
Write-Host $Datastore
}
Write-Host "From the list provided please enter the VMFS datastore where the RDM pointer files will reside: " -ForegroundColor Yellow -NoNewline
$Datastore = Read-Host
######################################
$VM = "*"
ForEach ($VM in (Get-VM -Name $VM) | Sort-Object)
{
Write-Host $VM
}
Write-Host "From the list provided please enter the VM Name on which the RDM volumes shall be created: " -ForegroundColor Yellow -NoNewline
$VM = Read-Host
##############
Write-Host "ESXi Hostname you have chosen: " -ForegroundColor Yellow
Write-Host $VMhostname -ForegroundColor Green
Write-Host "VMFS you have chosen: " -ForegroundColor Yellow
Write-Host $Datastore -ForegroundColor Green
Write-Host "Virtual Machine you have chosen: " -ForegroundColor Yellow
Write-Host $VM -ForegroundColor Green
################
## ACLX T0:L0 ##
################
$LUN_0 = Get-ScsiLun -VMHost $VMhostname -LunType Disk | Where-Object {$_.RuntimeName -like "vmhba0:C0:T0:L0"} | Select-Object ConsoleDeviceName,RuntimeName
$LUN_0 = $LUN_0 | Select-Object ConsoleDeviceName
$LUN_0 = $LUN_0 -replace "@{ConsoleDeviceName=",""
$LUN_0 = $LUN_0 -replace "}",""
$LUN_0
# Fragment from the original listing; presumably the tail of a Get-HardDisk command, e.g.:
# Get-VM $VM | Get-HardDisk | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName,CapacityGB | fl
# Brief #
# Get-ScsiLun -VMHost $VMhostname -LunType disk
# NAA #
# Get-ScsiLun -VMHost $VMhostname -LunType disk | select CanonicalName
### Get IP Address for ViClient to check GUI ###
# Get-VMHost -Name $VMhostname | Get-VMHostNetworkAdapter
2. Create and add your devices. Here I am creating 5 x 2048 GB devices and adding them to my storage group. Note I can simply create 2048 GB devices; no meta devices are created. At present we can create devices up to 16 TB, soon to be increased further.
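As a sketch, the device creation and storage group addition for this step could look like the following, assuming the storage group name myapp_sg used in the masking view in step 3 (SRP and SLO defaults apply):
symconfigure -sid 007 -cmd create dev count=5, size=2048 GB, emulation=FBA, sg=myapp_sg, config=tdev; commit -nop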
3. Present to the host via a masking view, no change from VMAX here.
symaccess -sid 007 create view -name myapp_mv -sg myapp_sg -pg myapp_pg -ig myapp_ig
Here I will highlight a few of the key commands to gather information about the configuration
and interaction with the SRP and SLO.
NOTE:- Monitoring and Alerting of FAST SLO is built into Unisphere for VMAX. SLO
compliance is reported at every level when looking at storage group components in Unisphere.
Viewing SRP Configured On The Array
Most VMAX3 arrays will only have a single SRP; however, it is possible to have multiple. If you are using FAST.X or ProtectPoint you may have an additional SRP in the config. The following command shows what is available:
symcfg list -srp
Note the default SRP is set to be usable by RDFA DSE; this is normal. There is no need to configure a separate pool for DSE in VMAX3; we can reserve and cap some space from the default SRP for this purpose.
To get a more detailed look at the SLOs and the workloads that can be associated with storage
groups you can run the following command. The output shows the approximate response time for
each.
symcfg list -slo -detail -by_resptime -all
This will show you how your SRP is being consumed by each of the SLOs; it will also list how much is consumed by DSE and snapshots. Remember, this capacity all comes from your SRP.
This shows each storage group and whether or not it is associated with an SLO; we also get some detail about the number of devices, but we don't see much regarding the capacity.
Additionally, you can see consumption at an individual device level for the application storage group.
You can see the full breakdown of your SRP, including drive pools and which SLOs you have available, as well as TDAT information. The output below shows all the thin devices (TDEVs) bound to the SRP and how much space they are each consuming.
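The outputs described above can be produced with commands of the following form (SRP name SRP_1 assumed; exact options may vary with the Solutions Enabler version):
symcfg list -srp -demand -type slo
symcfg show -srp SRP_1 -detail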
Solutions Enabler 8.x also allows for moving devices between groups non-disruptively:
Moving devices between child storage groups of a parent storage group when the masking view
uses the parent group.
Moving devices between storage groups when a view is on each storage group and both the
initiator group (IG) and the port group (PG) elements are common to the views (initiators and
ports from the source group must be present in the target).
Moving devices from a storage group with no masking view or service level to one in a masking view. This is useful as you can now create devices and automatically add them to a storage group from the CLI, so a staging group may exist. The command is:
symsg -sid 123 -sg staging_sg move dev 345 gold_sg
You could run an establish command like the one below in a cron job or batch file every hour, and SnapVX will create a new generation each time (the most recent snapshot is always gen 0).
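As a sketch, a scheduled establish of that kind might look like this (storage group and snapshot names assumed):
symsnapvx -sid 123 -sg myapp_sg -name hourly_snap establish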
Listing SnapVX Snapshots And Capacity Consumed
In order to see which storage groups are consuming the most space we can run the following
cmd:
symcfg list -srp -demand -type sg
The output lists the storage groups showing their subscribed capacity (how much potential space they can consume) as well as their actual allocated capacity. A particularly useful output here is the Snapshot Allocated (GB) column; if you are in a bind for space you can quickly identify which storage group has consumed the most snapshot space and terminate some snapshots to return space to the SRP.
Note: your storage group will only show up in this command output if it is FAST managed. Although everything in VMAX3 is under FAST control, it is possible to create storage groups that are not FAST managed for various use cases. A storage group is FAST managed if you explicitly specify the SRP and/or assign an SLO. Shown below, SourceSG1 has a large capacity of snapshot-allocated storage.
To find out more about your snaps you can run the following command:
symsnapvx -sid xxx -sg groupname list -detail
If I want to link off and access a snap I can use a storage group which I have pre-created with the same number of devices as the source; the target devices can be the same size or larger.
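For example, a link of that form could look like the following (snapshot and target storage group names assumed):
symsnapvx -sid 123 -sg SourceSG1 -lnsg TargetSG1 -snapshot_name hourly_snap link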
For a deeper dive and more on the internals please see the TechNote on EMC.com:
https://www.emc.com/collateral/technical-documentation/h13697-emc-vmax3-localreplication.pdf
Useful Commands For Everyday Use:
This information is at your fingertips with symcli -v.
SYMCLI BASE Commands:
symapierr - Used to translate SYMAPI error code numbers into SYMAPI error messages.
symaudit - List records from a Symmetrix audit log file.
symbcv - Perform BCV support operations on Symmetrix BCV devices.
symcfg - Discover or display Symmetrix configuration information. Refresh the host's Symmetrix database file or remove Symmetrix info from the file. Can also be used to view or release a hanging Symmetrix exclusive lock.
symchg - Monitor changes to Symmetrix devices or to logical objects stored on Symmetrix devices.
symcli - Provides the version number and a brief description of the commands included in the Symmetrix Command Line Interface (SYMCLI).
symdev - Perform operations on a device given the device's Symmetrix name. Can also be used to view Symmetrix device locks.
symdg - Perform operations on a device group (dg).
symdisk - Display information about the disks within a Symmetrix.
symdrv - List DRV devices on a Symmetrix.
symevent - Monitor or inspect the history of events within a Symmetrix.
symhost - Display host configuration information and performance statistics.
syminq - Issues a SCSI Inquiry command on one or all devices.
symipsec - Administers IPSec encryption on Gigabit Ethernet connections.
From the Windows services.msc console check that both the ECOM and storsrvd services are set to automatic and in a running state:
Check that the EMC storsrvd daemon is installed and running from a Windows command prompt using stordaemon.exe:
stordaemon install storsrvd -autostart
stordaemon start storsrvd
stordaemon.exe list
Or using the SC (service control) command you can query/start/config the ECOM and storsrvd
services:
sc query ECOM.exe
sc query storsrvd
sc start ECOM.exe
sc start storsrvd
sc config ECOM.exe start=auto
sc config storsrvd start=auto
Run netstat -a and check the host is listening on ports 5988 and 5989:
Or use the Windows CLI to add the SYMCLI and ECOM directories to the PATH environment variable:
setx /M PATH "%PATH%;C:\Program Files\EMC\SYMCLI\bin;C:\Program
Files\EMC\ECIM\ECOM\bin"
Select the option to add a new user and create the Vision user with administrator role and scope
local:
Windows Firewall
If the Windows firewall is enabled then rules will need to be created to allow the ECOM ports TCP 5988 & 5989 and the SLP port UDP 427. For example, using the Windows command line tool netsh to create rules for SLP and ECOM:
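As a sketch, rules along these lines should cover ECOM and SLP (the rule names are arbitrary):
netsh advfirewall firewall add rule name="ECOM TCP 5988-5989" dir=in action=allow protocol=TCP localport=5988-5989
netsh advfirewall firewall add rule name="SLP UDP 427" dir=in action=allow protocol=UDP localport=427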
At the prompt type dv to confirm connectivity between the VMAX and SMI-S:
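The prompt referred to here is the TestSmiProvider utility in the ECOM bin directory (added to the PATH above); launch it, accept the connection defaults, then enter dv to display the provider version and the attached arrays:
TestSmiProvider.exe
dv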
The 200K has a possible 128 FC front-end ports, while the flagship 400K can have up to 256 FC front-end connections.
Note: for demonstration purposes I am using Xs and a ? to explain the unique identifiers of a VMAX system. Please refer to the .pdf listing to help understand the concept.
X:XX:XX:X = System-wide unique ID. As you will see from the provided WWPN listing, this value is the unique identifier per VMAX system (a follow-on post focusing on decoding VMAX WWNs shall explain this further). On a per-system basis the X:XX:XX:X value will remain the same for all FC WWPNs associated with that VMAX system.
There is a notable change from the previous VMAX usage of WWPNs; there is now a unique identifier, labelled here as ?, which uniquely identifies a pair of engines:
? = Unique ID for Engines 1&2 | 3&4 | 5&6 | 7&8
On previous VMAX generations all the Xs and the ? were consistent across all FC port WWPNs, with only the last 2 hex values of a WWPN acting as the unique port identifier; with the VMAX3 the unique port identifier is now the last three hex values. The key point to note is that the ? value remains the same throughout directors 1-4 and then increments by one hex value for the next four directors. For example, if C:04 is the unique ID for Director1 Port4, then for Director5 Port4 the C changes to D and remains at this value for directors 5-8, and so on. Given this information and referring to the list provided:
Director1 Port4 has a value of 50:00:09:75:58:01:EC:04
Director5 Port4 has a value of 50:00:09:75:58:01:ED:04
Director9 Port4 has a value of 50:00:09:75:58:01:EE:04
Director13 Port4 has a value of 50:00:09:75:58:01:EF:04
When using this approach in a single engine system the I/O ports from each director evenly span
both SAN fabrics.
Host or Cluster FA Port Usage: in order to ensure a balanced approach is maintained, connect a host or cluster to 2 x directors in a single-engine system, or 4 x directors in a VMAX with more than one engine.
Single Engine example: zoning a host evenly across 2 directors and across both fabrics using ports 1D:4, 1D:31, 2D:28 & 2D:7.
Two Engine example: zoning a host or cluster evenly across 4 directors and across both fabrics using ports 1D:4, 2D:31, 3D:28 & 4D:7; this will spread load for performance and ensure fabric redundancy.
These examples are a guideline for evenly balancing port utilization across all available director
ports. See below for additional reading.
VMAX ACLX GK: The first physical FA port on the array will have the show ACLX flag set;
thus any host attached to that port will be shown the ACLX device as LUN 000.
Hopefully these considerations and lists may assist you with planning (or automating) your
zoning scripts for VMAX systems.
SYMCLI List all FA WWNs: symcfg -sid xxx list -fa all -port -detail
Useful References:
VMAX3 Family New Features A Detailed Review of Open Systems White Paper
http://www.emc.com/collateral/technical-documentation/h13578-vmax3-family-new-featureswp.pdf
VMAX3 Reliability, Availability, and Serviceability Tech Notes
http://www.emc.com/collateral/technical-documentation/h13807-emc-vmax3-reliabilityavailability-and-serviceability-tech-note.pdf
Reference the latest EMC publications for guidelines around quantity and size of the control volumes. The following example configuration applies to VNX File OE 7.1.
Note: Please reference EMC documentation for precise instructions, as this is an example-only configuration for deploying a VNX VG with a VMAX.
The following is a list of the Celerra control volumes and sizes required for the NAS installation:
2 x 12394 cylinders (11.62 GB)
3 x 2216 cylinders (2.03 GB)
1 x 69912 cylinders (64 GB)
1 x 2 cylinder volume for the gatekeeper device
VG Control Volumes and their respective HLU IDs:
The two 11.62 GB control LUNs map to HLU 0 and 1.
The three 2.03 GB control LUNs map to HLU 2, 3, and 4.
The 64 GB control LUN maps to HLU 5.
1 x 2 cyl gatekeeper LUN maps to 0F.
Listing the Control Volumes in order to gather their HEX values:
symdev -sid XXX list -emulation celerra
Add the XBlade initiator WWNs to the VG initiator group VG_IG (WWNs truncated in this example):
symaccess -sid XXX -name VG_IG -type initiator -wwn 50060160 add
symaccess -sid XXX -name VG_IG -type initiator -wwn 50060160 add
Create the port group using the VMAX FA Ports 7f:1,8f:1,9f:1,10f:1:
symaccess -sid XXX -name VG_PG -type port create
symaccess -sid XXX -name VG_PG -type port -dirport 7f:1,8f:1,9f:1,10f:1 add
Note: Ensure the ACLX volume is mapped to these FA ports 7f:1,8f:1,9f:1,10f:1 as 0E.
symdev -sid XXX list -aclx -v provides detailed information for the ACLX volume.
See here for further ACLX details: EMC VMAX Access Control Logix (ACLX) Gatekeeper
Mapping
Create the Storage Group:
Add the Control Devices as listed above (Do not add the gatekeeper volume at this stage to the
SG).
symaccess -sid XXX -name VG_SG -type storage create
symaccess -sid XXX -name VG_SG -type storage add devs 0055-005A
Create Masking View:
symaccess -sid XXX create view -name VG_MV -sg VG_SG -pg VG_PG -ig VG_IG -celerra
symaccess -sid XXX show view VG_MV
Now add 1 x 2 cyl Gatekeeper with a HLU value of 0F:
symaccess -sid XXX -name VG_SG -type storage add devs 005B -lun 0f -celerra
Verify the configuration:
symaccess -sid XXX show view VG_MV
symaccess -sid XXX list logins
## XBLADE WWNs: ##
show flogi database interface fc 1/17
XBlade 2: 50:06:01:60:xx:xx:xx:xx
show flogi database interface fc 4/29
XBlade 3: 50:06:01:68:xx:xx:xx:xx
## Configure: ##
conf t
interface fc2/15, fc3/19, fc1/17, fc4/29
no shut
vsan database
vsan 20 name NAS_WORKLOAD_VSAN_A
vsan 20 interface fc2/15, fc3/19, fc1/17, fc4/29
fcdomain domain 1 static vsan 20
fcdomain priority 2 vsan 20
fcdomain restart vsan 20
fcalias name XBlade2-00-00 vsan 20
member pwwn 50:06:01:60:xx:xx:xx:xx
fcalias name XBlade3-00-00 vsan 20
member pwwn 50:06:01:68:xx:xx:xx:xx
fcalias name VMAX40K_7f1 vsan 20
member pwwn 50:00:09:75:00:xx:xx:59
fcalias name VMAX40K_9f1 vsan 20
member pwwn 50:00:09:75:00:xx:xx:61
zone name XBlade2-00-00_to_VMAX-7f-1 vsan 20
member fcalias VMAX40K_7f1
member fcalias XBlade2-00-00
zone name XBlade3-00-00_to_VMAX-9f-1 vsan 20
member fcalias XBlade3-00-00
member fcalias VMAX40K_9f1
zoneset name zs_vsan20 vsan 20
zone name XBlade2-00-00_to_VMAX-7f-1
zone name XBlade3-00-00_to_VMAX-9f-1
zoneset activate name zs_vsan20 vsan 20
zone commit vsan 20
copy run start
show zoneset active vsan 20
Fabric B Zoning
show interface description | grep VMAX40K
fc2/15 VMAX40K_10f1
fc3/19 VMAX40K_8f1
show interface description | grep XBlade
fc1/17 XBlade 2-00/00
fc4/29 XBlade 3-00/00
## VMAX WWNs: ##
show flogi database interface fc 2/15
10f1: 50:00:09:75:00:xx:xx:65
show flogi database interface fc 3/19
8f1: 50:00:09:75:00:xx:xx:5d
## XBLADE WWNs: ##
show flogi database interface fc 1/17
XBlade 2: 50:06:01:61:xx:xx:xx:xx
show flogi database interface fc 4/29
XBlade 3: 50:06:01:69:xx:xx:xx:xx
## Configure: ##
conf t
interface fc2/15, fc3/19, fc1/17, fc4/29
no shut
conf t
vsan database
vsan 21 name NAS_WORKLOAD_VSAN_B
vsan 21 interface fc2/15, fc3/19, fc1/17, fc4/29
fcdomain domain 2 static vsan 21
fcdomain priority 2 vsan 21
fcdomain restart vsan 21
fcalias name XBlade2-00-01 vsan 21
member pwwn 50:06:01:61:xx:xx:xx:xx
fcalias name XBlade3-00-01 vsan 21
member pwwn 50:06:01:69:xx:xx:xx:xx
fcalias name VMAX40K_10f1 vsan 21
member pwwn 50:00:09:75:00:xx:xx:65
fcalias name VMAX40K_8f1 vsan 21
member pwwn 50:00:09:75:00:xx:xx:5d
zone name XBlade2-00-01_to_VMAX-10f-1 vsan 21
member fcalias XBlade2-00-01
member fcalias VMAX40K_10f1
zone name XBlade3-00-01_to_VMAX-8f-1 vsan 21
member fcalias XBlade3-00-01
member fcalias VMAX40K_8f1
zoneset name zs_vsan21 vsan 21
zone name XBlade2-00-01_to_VMAX-10f-1
zone name XBlade3-00-01_to_VMAX-8f-1
zoneset activate name zs_vsan21 vsan 21
zone commit vsan 21
copy run start
show zoneset active vsan 21
NEXT: INSTALL NAS ON CONTROL STATION 0
==================================== SUMMARY ====================================
Congratulations!! Install for VNX software to release 7.1.76-4 succeeded.
Status: Success
Actual Time Spent: 40 minutes
Total Number of attempts: 1
Log File: /nas/log/install.7.1.76-4.Dec-02-11:54.log
====================================== END ======================================
3. Perform Checks
Verify NAS Services are running:
Login to the Control Station as nasadmin and issue the cmd /nas/sbin/getreason from the CS
console. The reason code output should be as follows (see detailed list of Reason Codes below):
10 - slot_0 primary control station
11 - slot_1 secondary control station
5 - slot_2 contacted
5 - slot_3 contacted
Check the status of the DATA Movers and view which slot is active:
nas_server -info -all
Confirm the VMAX is connected to the VG:
nas_storage -check -all
nas_storage -list
List detailed information of the config:
/nas/bin/nas_storage -info -all
Code Levels:
List the datamovers: nas_server -list
Check the DART code installed on the Data Movers: server_version ALL
Check the NAS code installed on the Control Station: nas_version
Network Configuration:
Control Station: /sbin/ifconfig (eth3 is the mgmt interface)
Data Movers: server_ifconfig server_2 -all
Date & Time:
Control Station: date
Data Movers: server_date ALL
List the disk table to ensure all of the Control Volumes have been presented to both Data
Movers:
nas_disk -list
Check the File Systems:
df -h
Confirm the EMC NAS version installed and the model name:
/nasmcd/bin/nas_version
/nas/sbin/model
Check IP & DNS info on the CS:
nas_cs -info
Log Files:
Log file location: /var/log/messages
Example of NAS services starting successfully:
grep -A10 "Starting NAS services" /var/log/messages*
Output:
Dec 8 19:07:27 emcnas_i0 S95nas: Starting NAS services
Dec 8 19:07:46 emcnas_i0 EMCServer: nas_mcd: MCD will monitor CS IPMI connection.
Dec 8 19:08:46 emcnas_i0 EMCServer: nas_mcd: slot 0 missed 10 heartbeats from slot 1.
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Install Manager is running on slot 0, skipping
slot 1 reboot
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Slot 0 becomes primary due to timeout
Dec 8 19:08:52 emcnas_i0 mcd_helper: All NBS devices are up
Dec 8 19:09:08 emcnas_i0 kernel: kjournald starting. Commit interval 5 seconds
Check the Data Mover Logs:
server_log server_2
Complete a Health Check:
/nas/bin/nas_checkup
Failing over a Control Station:
Failover:
/nas/sbin/./cs_standby -failover
Takeover:
/nasmcd/sbin/./cs_standby -takeover
Or reboot:
nas_cs reboot
Determine the failover status of the Blades (Datamovers):
/nas/bin/nas_server -info -all
Initiate a manual failover of server_2 to the standby Datamover:
server_standby server_2 -activate mover
List the status of the Datamovers:
nas_server -list
Review the information for server_2:
nas_server -info server_2
Shutdown Datamover (blade):
/nas/bin/server_cpu server_2 -halt now
Power on the Datamover (blade):
/nasmcd/sbin/t2reset pwron -s 2
Restore the original primary Datamover:
server_standby server_2 -restore mover
VG Shutdown:
Shutdown Control Stations and DATA Movers:
/nasmcd/sbin/nas_halt -f now
List of Reason Codes:
0 Reset (or unknown state)
1 DOS boot phase, BIOS check, boot sequence
Having completed a VMAX Health Check through Unisphere it has been highlighted that a
drive has failed:
Running the command symdisk list -failed will display the details of the failed disk (-v for more
detail):
You can also check if the failed disk has been spared out by issuing the command symdisk list
-isspare:
Ident/Symb = 9B identifies the Director and the MOD that the drive is connected to at the back end. Thus we can gather at this stage that the drive is connected to Director 9 (Engine 5).
On both directors of Engine 5 (9 & 10) there are two back-end I/O modules (MOD0 & MOD1) per director; MOD0 has connections A0, A1, B0, B1 and MOD1 has connections C0, C1, D0, D1. MOD0 on both the even and odd directors connects to DAEs 9, 13, 10, 14, with MOD1 on both directors connecting to DAEs 11, 15, 12, 16. The 8 redundant loops on Engine 5 connect up as follows:
DAE9=LOOP0 (A0)
DAE10=LOOP2 (B0)
DAE11=LOOP4 (C0)
DAE12=LOOP6 (D0)
DAE13=LOOP1 (A1)
DAE14=LOOP3 (B1)
DAE15=LOOP5 (C1)
DAE16=LOOP7 (D1)
Int = C stands for interface; this is the port used on the MOD.
C = Port 0
D = Port 1
Thus far we can determine that the Drive is located on LOOP2 (9B 0).
TID = 1 refers to the target ID, or the disk location on the Loop.
From all this information we can determine that the location of the Failed drive (9B 0 1) is
Drive Bay-1A, DAE-10, Disk-01:
If you have access to SymmWin then you can Toggle the disk LED:
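The front-end listing discussed below can be produced with the FA director port listing command (the same command shown earlier under "SYMCLI List all FA WWNs"):
symcfg -sid XXX list -fa all -port -detail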
This will give you the list of all front-end adapters on the VMAX, displaying both online and connection status details. From the output below you can see that FA-5E P0 and P1 are both online and P0 is connected (in our case it is connected to a Cisco MDS 9513 Multilayer Director). You can also see that while both FA-7H ports are online, neither is connected to a port on the MDS. On FA-7G both ports are online and both are connected to ports on the MDS.
In order to view the online status of all the back-end director ports:
symcfg -sid XXX list -da all
From the output of this command you can also view the number of hyper volumes per port and how they are distributed across the back end.
If you wish to display the online status of both front-end and back-end ports through a single command:
symcfg -sid XXX list -dir all
RDF ports:
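Listing the RDF (RA) director ports follows the same pattern; something like the following should work:
symcfg -sid XXX list -ra all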
If you wish to confirm that a device has not already been assigned to a host:
symaccess -sid xx list assignment -dev xxx
Or if you need to check a series of devices:
symaccess -sid xxx list assignment -dev xxx:xxx
The symaccess command performs all Auto-provisioning functions. Using the symaccess command we will create a port group, an initiator group and a storage group for each VMware ESX host, and combine these newly created groups into a Masking View.
Port Group Configuration
1. Create the Port Group that will be used for the two hosts:
symaccess -sid xxx -name ESX-Cluster-PG -type port create
2. Add FA ports to the port group; in this example we will add ports 8e:0 and 9e:0 from Directors 8 & 9 (Engines 4 & 5):
symaccess -sid xxx -name ESX-Cluster-PG -type port -dirport 8e:0,9e:0 add
Note on Port Groups: where possible, to achieve best performance and availability, hosts should be mapped to two or more front-end ports across directors. If you have multiple engines then spread across engines and directors per Rule 17 (20/40K). Please see post: EMC VMAX 10K Zoning with Cisco MDS Switches.
Check that the Host HBAs are logging in:
symaccess -sid xxx list logins -dirport 8e:0
symaccess -sid xxx list logins -dirport 9e:0
Host ESX01 Masking View Configuration
1. Create the Initiator Group for ESX01:
symaccess -sid xxx -name ESX01_ig -type initiator create -consistent_lun
2. Add the ESX Initiator HBA WWNs to the Initiator Group:
symaccess -sid xxx -name ESX01_ig -type initiator -wwn wwn_A add
symaccess -sid xxx -name ESX01_ig -type initiator -wwn wwn_B add
3. Create the Storage Group for the first ESX host Boot volume:
symaccess -sid xxx -name ESX01_sg -type storage create
4. Add the Symmetrix boot volume device to the Storage Group:
symaccess -sid xxx -name ESX01_sg -type storage add devs ####
5. Create the Masking View:
symaccess -sid xxx create view -name ESX01_mv -sg ESX01_sg -pg ESX-Cluster-PG -ig
ESX01_ig
Host ESX02 Masking View Configuration
1. symaccess -sid xxx -name ESX02_ig -type initiator create -consistent_lun
2. symaccess -sid xxx -name ESX02_ig -type initiator -wwn wwn_A add
symaccess -sid xxx -name ESX02_ig -type initiator -wwn wwn_B add
3. symaccess -sid xxx -name ESX02_sg -type storage create
4. symaccess -sid xxx -name ESX02_sg -type storage add devs ####
5. symaccess -sid xxx create view -name ESX02_mv -sg ESX02_sg -pg ESX-Cluster-PG -ig
ESX02_ig
Configuration of Cluster1 (ESX01,ESX02) with shared VMFS Datastore
1. We begin by cascading the cluster hosts into a single Initiator Group:
symaccess -sid xxx -name Cluster1_IG -type initiator create -consistent_lun
symaccess -sid xxx -name Cluster1_IG -type initiator -ig ESX01_ig add
symaccess -sid xxx -name Cluster1_IG -type initiator -ig ESX02_ig add
2. Create the Storage Group containing the shared Datastore(s):
symaccess -sid xxx -name Cluster1_SG -type storage create
3. Add the Symmetrix shared Datastore(s) device(s):
symaccess -sid xxx -name Cluster1_SG -type storage add devs ####(:####)
4. The Port Group contains the director front-end ports zoned to the ESX hosts (as per the PG created above):
symaccess -sid xxx -name ESX-Cluster-PG -type port create
symaccess -sid xxx -name ESX-Cluster-PG -type port -dirport 8e:0,9e:0 add
5. The Masking View for the entire ESX cluster:
symaccess -sid xxx create view -name Cluster1_MV -sg Cluster1_SG -pg ESX-Cluster-PG -ig
Cluster1_IG
View Configuration Details
To view the configuration of the groups PG,IG,SG and MV (use -v for more detail):
symaccess -sid xxx list -type storage|port|initiator -v
symaccess -sid xxx list -type storage|port|initiator -name group_name
symaccess -sid xxx show group_name -type storage|port|initiator
symaccess -sid xxx list view -v
symaccess -sid xxx list view -name view_name
symaccess -sid xxx list view -name view_name -detail
symaccess -sid xxx list assignment -dev DevID
Examples:
symaccess -sid xxx list -type port (Lists all existing port group names)
symaccess -sid xxx show ESX-Cluster-PG -type port
symaccess -sid xxx list -type port -dirport 8e:0 (Lists all port groups that a particular director port
belongs to)
symaccess -sid xxx show -type initiator Cluster1_IG -detail
symaccess -sid xxx list logins -wwn xxxx (Verify that wwn xxx is logged in to the FAs)
symaccess -sid xxx list -type initiator -wwn xxxx (Verify that the HBA is a member of the correct Initiator Group)
symaccess -sid xxx show Cluster1_SG -type storage
symaccess -sid xxx show view Cluster1_MV
symaccess -sid xxx list assignment -dev XXXX (Shows the masking details of devices)
Verify BOOT|DATA LUN Assignment to FA Port(s) (LUN To PORT GROUP Assignment):
symaccess -sid xxx list assignment -devs ####
symaccess -sid xxx list assignment -devs ####:####
Backup Masking View to File
The masking information can then be backed up to a file using the following command:
symaccess -sid xxx backup -file backupFileName
The backup file can then be used to retrieve and restore group and masking information.
The SYMAPI database file can be found in the Solutions Enabler directory, for example D:\Program Files\EMC\SYMAPI\db\symapi_db.bin. If you wish to confirm the SE install location quickly then issue the following registry query command:
reg.exe query "HKEY_LOCAL_MACHINE\SOFTWARE\EMC\EMC Solutions Enabler" /v InstallPath
Note: On the VMAX Service Processor the masking information is automatically backed up
every 24 hours by the Scheduler. The file (accessDB.bin) is saved to
O:\EMC\S/N\public\user\backup.
Restore Masking View from File
To restore the masking information to Symmetrix enter the following command:
symaccess -sid xxx restore -file backupFileName