Managing Volumes and File Systems For VNX Manually
Release 7.0
EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com
Copyright © 1998 - 2012 EMC Corporation. All rights reserved.
Published January 2012
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.
For the most up-to-date regulatory document for your product line, go to the Technical
Documentation and Advisories section on EMC Powerlink.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on
EMC.com.
All other trademarks used herein are the property of their respective owners.
Preface.....................................................................................................7
Chapter 1: Introduction.........................................................................11
Overview................................................................................................................12
System requirements.............................................................................................12
Restrictions.............................................................................................................12
Cautions..................................................................................................................13
User interface choices...........................................................................................14
Related information..............................................................................................14
Chapter 2: Concepts.............................................................................17
Overview................................................................................................................18
File systems............................................................................................................18
Inode..............................................................................................................19
Monitoring and repairing file systems.....................................................19
Volumes..................................................................................................................20
Disk volumes................................................................................................21
Slice volumes................................................................................................21
Stripe volumes.............................................................................................22
Metavolumes................................................................................................22
BCV................................................................................................................25
Planning considerations.......................................................................................27
Supported file system access protocols....................................................27
File system size guidelines.........................................................................28
NMFS............................................................................................................28
Volume configuration guidelines..............................................................29
Stripe volume configuration considerations...........................................30
Integration considerations.........................................................................30
Chapter 8: Troubleshooting..................................................................87
EMC E-Lab Interoperability Navigator..............................................................88
VNX user customized documentation...............................................................88
Known problems and limitations.......................................................................88
Error messages.......................................................................................................89
EMC Training and Professional Services...........................................................89
Glossary..................................................................................................93
Index.......................................................................................................97
As part of an effort to improve and enhance the performance and capabilities of its product lines,
EMC periodically releases revisions of its hardware and software. Therefore, some functions described
in this document may not be supported by all versions of the software or hardware currently in use.
For the most up-to-date information on product features, refer to your product release notes.
If a product does not function properly or does not function as described in this document, please
contact your EMC representative.
Note: Emphasizes content that is of exceptional importance or interest but does not relate to personal
injury or business/data loss.
CAUTION Indicates a hazardous situation which, if not avoided, could result in minor or
moderate injury.
WARNING Indicates a hazardous situation which, if not avoided, could result in death or serious injury.
DANGER Indicates a hazardous situation which, if not avoided, will result in death or serious
injury.
Note: Do not request a specific support representative unless one has already been assigned to
your particular system problem.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall
quality of the user publications.
Please send your opinion of this document to:
techpubcomments@EMC.com
Introduction
Overview
The EMC VNX system allows you to create and manage VNX volumes and file systems
manually or automatically.
This document explains the manual process for creating, configuring, and managing volumes
and file systems.
System requirements
Table 1 on page 12 describes the EMC® VNX™ series software, hardware, network, and
storage configurations.
Restrictions
◆ When creating volumes on a VNX system attached to an EMC Symmetrix® storage
system, use regular Symmetrix volumes (also called hypervolumes), not Symmetrix
metavolumes.
◆ You must use LUNs that have the same data service policies. If you mix LUNs, you will
receive a warning during diskmark.
Cautions
If any of this information is unclear, contact your EMC Customer Support Representative
for assistance:
◆ EMC does not recommend spanning a file system (including checkpoint file systems)
across multiple storage systems. All parts of a file system must use the same type of disk
storage and be stored on a single storage system. Spanning more than one storage system
increases the chance of data loss or data unavailability or both. This is primarily due to
the high-availability concern because one storage system could fail while the other
continues, making failover difficult. In this case, the targets might not be consistent. In
addition, a spanned file system is subject to any performance and feature set differences
between storage systems.
◆ Too many files in the root (/) of any file system might impact system performance. For
optimal performance, the number of objects (such as files and subdirectories) should not
exceed 500 names.
◆ Review Integration considerations on page 30 if you intend to use file systems with VNX
features such as international character sets, EMC SnapSure™, EMC TimeFinder®/FS,
quotas, or an antivirus agent (VEE CAVA).
◆ If you plan to set quotas on a file system to control the amount of space that users and
groups can consume, turn on quotas immediately after creating the file system. Turning
on quotas later, when the file system is in use, can cause temporary file system disruption,
including slow file system access, for systems that use version 6.0.40 and earlier, and can
impact system performance for systems that use version 6.0.41 and later. Using Quotas
on VNX contains instructions on turning on quotas and general quotas information.
◆ If your user environment requires international character support (that is, support of
non-English character sets or Unicode characters), configure the system to support this
feature before creating file systems. Using International Character Sets on VNX for File
contains instructions to support and configure international character support.
◆ If you plan to create TimeFinder/FS (local, NearCopy, or FarCopy) snapshots, do not use
slice volumes (nas_slice) when creating the production file system (PFS). Instead, use
the full portion of the disk presented to the system. Using slice volumes for a PFS slated
as the source for snapshots wastes storage space and can result in loss of PFS data.
◆ Do not attempt to use Symmetrix TimeFinder tools and utilities with file system copies
created by VNX TimeFinder/FS. It might result in loss of data.
◆ Do not manually edit the nas_db database without consulting EMC Customer Support.
Any changes to this database might cause problems when installing the system.
◆ Permanently unmounting all file systems from a Data Mover must be done with caution
because this operation deletes the contents of the mount table. To reestablish client access
to the file systems after this operation, rebuild the mount table by remounting each file
system on the Data Mover.
◆ If you use the nas_disk -delete -perm command to permanently remove a disk volume
from your VNX for File or legacy Celerra, do not use the nas_diskmark command at a
later time to discover and mark a LUN with the same host LUN identifier (HLU) as the
LUN underlying the removed disk volume. This causes data loss or data unavailability
on your VNX for File or legacy Celerra. To re-use underlying HLUs for VNX for File or
legacy Celerra disk volumes, do not use the -perm option. If you use the -perm option
to remove a disk volume, reboot the Data Mover prior to re-marking the LUN with the
duplicate HLU.
◆ The file system is unavailable to users during a file system check (fsck). NFS clients
receive an "NFS server not responding" message. CIFS clients lose the server connection
and must remap shares.
◆ Depending on the file system size, the fsck utility might use a significant amount of the
system's resources (memory and CPU) and might affect overall system performance.
◆ Only two fsck processes can run on a single Data Mover simultaneously.
◆ A file system check of a permanently unmounted file system can be executed on a standby
Data Mover.
◆ If a Data Mover restarts or experiences failover or failback while running the fsck utility
on an unmounted file system, restart the fsck utility on the unmounted file system.
Related information
Specific information related to the features and functionality described in this document is
included in:
VNX wizards
Unisphere software provides wizards for performing setup and configuration tasks. The
Unisphere online help provides more details on the wizards.
Concepts
Overview
The system offers flexible volume and file system management.
Manual volume management allows you to create and aggregate different volume types
into usable file system storage that meets your configuration needs. When you create and
manage volumes manually, you have greater control over the location of storage allocated
to a file system. There are a variety of volume types and configurations from which you can
choose to optimize your file system's storage potential. You can divide, combine, and group
volumes to meet your specific configuration needs.
You can also manage VNX volumes and file systems without having to create and manage
underlying volumes. AVM is a feature that automatically creates and manages usable file
system storage. Although AVM is a simple way to create volumes and file systems,
automation can limit your control over the location of the storage allocated to a file system.
Managing Volumes and File Systems with VNX AVM provides additional information on AVM
capabilities of the system.
File systems
A file system is a method of naming and logically organizing files and directories on a storage
system. A VNX file system must be created and stored on a metavolume. The metavolume
provides:
◆ Expandable storage capacity that might be needed to dynamically expand a file system
◆ The means to form a logical volume that is larger than a single disk
A metavolume can include disk volumes, slice volumes, stripe volumes, or other
metavolumes.
The VNX system creates different file systems based on how they are used. Table 2 on page
18 lists the types of file system.
Inode
An inode is a data structure that stores information on files and the location of file blocks
in the NFS file system. The VNX system uses this information to identify if the file is a regular
file, a directory, or a symbolic link.
Each file requires at least one inode. Without inodes, you cannot create any new files, even
if there is space on the hard drive. The number of bytes per inode (nbpi) specifies the density
of inodes in the file system. EMC recommends an nbpi value of one inode for every 8,192
bytes, which is the default setting.
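For example, to check how many inodes a file system is consuming against its allocation, you can
query the Data Mover that mounts it (a usage sketch based on the server_df command described
later in this document; server_2 and ufs1 are placeholder names):
$ server_df server_2 -inode ufs1
The output reports the total, used, and available inodes for the file system, along with the
percentage of inodes in use.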
Occasionally, a file system might get corrupted if the system is shut down improperly or
the disk suffers a minor failure. In these situations, it might be necessary to try to repair the
file system by using the fsck utility. Cautions on page 13 provides information on file system
behavior during the fsck process.
The fsck utility checks file system consistency on a file system by detecting and correcting
file system storage errors.
When a file system corruption is detected during runtime, the Data Mover panics and the
restart or failover process starts.
During the restart process, file systems found to be corrupted are not mounted. Run the
nas_fsck command manually on these file systems during a suitable time window. You can
also use the nas_fsck command to check the status through the Control Station.
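For example, a manual check of a corrupted, unmounted file system can be started and then
monitored from the Control Station with commands similar to the following (a sketch only; ufs1
is a placeholder name, and the exact nas_fsck options should be verified in the VNX Command
Line Interface Reference for File):
$ nas_fsck -start ufs1
$ nas_fsck -list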
When the ufs.skipFsck parameter is set to True (default), the restart process does not run
fsck and the corrupted file systems are not mounted. To override this behavior, set this
parameter to False. The Parameters Guide for VNX for File provides detailed information on
parameters.
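For example, to allow the restart process to run fsck on corrupted file systems, the parameter
can be changed with the server_param command (a sketch only; the facility name ufs and the
value 0 for False are assumptions based on the parameter name ufs.skipFsck, and server_2 is a
placeholder):
$ server_param server_2 -facility ufs -modify skipFsck -value 0
Depending on the parameter, a Data Mover restart might be required for the change to take effect.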
When the system begins fsck on a mounted file system, the fsck utility automatically
unmounts the file system, runs fsck, and then remounts the file system. The file system is
unavailable for the duration of the fsck. NFS clients see an "NFS server not responding"
message. CIFS clients lose connectivity to the server and must remap shares.
The fsck utility should not be run on a server under heavy load to prevent the server from
running out of resources. In most cases, the user is notified when sufficient memory is
unavailable to run fsck. In these cases, users can choose one of these options:
◆ Start fsck during off-peak hours.
◆ Restart the server and start fsck immediately.
◆ Run fsck on a different server if the file system is unmounted.
The fsck utility cannot run on a read-only file system. You do not need to run fsck for normal
restart or shutdown operations. File system consistency is maintained through a logging
mechanism and restart and shutdown operations cause no corruption.
The first step in the fsck process is to ensure that the corruption can be safely corrected
without bringing down the server. The fsck process also corrects any inconsistencies in the
Access Control List (ACL) database. The corrupted file system is unavailable to users during
the fsck process. After the fsck utility finds and corrects the corruption, users regain access
to the file system. While fsck is running, other file systems mounted on the same server are
not affected and are available to users.
Volumes
A volume is a virtual disk into which a file system places data. Volumes underlie the file
system. Create volumes and metavolumes to assign storage space for a file system.
The VNX system supports the volume types listed in Table 3 on page 20. Each volume type
provides different benefits to satisfy storage requirements. With the exception of system
volumes, all volumes are initially available for data storage.
Disk volumes
A disk volume is a physical storage unit that is exported from the storage system to the
VNX for File. Disk volumes are the underlying storage of all other volume types.
A disk volume equates to a LUN as presented to the VNX for File by the storage system.
Each LUN is a usable storage-system volume that appears as a disk volume to the VNX
system. Disk volumes are typically created by EMC support personnel during the initial
installation and setup of the VNX system. After the initial installation and setup, configure
disk volumes only when you add LUNs to the storage system.
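For example, after new LUNs are added and marked, you can confirm that the corresponding disk
volumes are visible and not yet in use before building other volume types on them (a usage sketch
based on the nas_disk command shown in the Configuring Volumes chapter):
$ nas_disk -list
Disk volumes that show n in the inuse column are available for new volume configurations.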
Slice volumes
A slice volume is a logical, nonoverlapping section cut from another volume component.
When you create a slice volume, you can specify an offset, which is the distance in megabytes
between the start of the source volume and the start of the slice. If an offset is not specified, the
system uses a first-fit algorithm (the default) and places the slice in the next available volume
space. An offset is rarely specified.
You must first identify the volume from which the slice volume will be created. The root
slice volumes that are created during installation appear when you list the volume
configurations. However, to protect the system, you do not have access privileges to them,
and therefore, cannot execute any commands against them.
Slice volumes can be configured to any size, but are typically used to create smaller, more
manageable units of storage. The definition of a "more manageable" logical volume size
depends on the system configuration and the type of data you are storing. Slicing is more
common with EMC VNX for Block storage systems because of the larger LUNs presented
to the VNX for File.
Figure 1 on page 21 shows an 18 GB volume on which a 2 GB slice is defined.
Figure 1. Slice volume: a 2 GB slice defined on an 18 GB basic volume
A disk volume, stripe volume, or metavolume used as part of a business continuance volume
(BCV) should not be sliced. BCV on page 25 provides information on using BCVs.
You must configure slice volumes as part of a metavolume to store file system data on them.
Metavolumes on page 22 provides additional information.
Stripe volumes
A stripe volume is a logical arrangement of participating disk volumes, slice volumes, or
metavolumes organized, as equally as possible, into a set of interlaced stripes. Stripe volumes
achieve greater performance and higher aggregate throughput because all participating
volumes can be active concurrently.
Figure 2 on page 22 shows an example of a stripe volume. The stripe is created across four
participating volumes of equal size.
Figure 2. An 8 GB stripe volume created across four participating volumes of equal size
Stripe volumes improve performance because, unlike disk volumes, slice volumes, and
metavolumes, addressing within a stripe volume is conducted in an interlaced fashion across
volumes, rather than sequentially.
In a stripe volume, a read request is spread across all component volumes concurrently.
Figure 3 on page 22 shows addressing within a stripe volume.
Figure 3. Addressing within a stripe volume
Data is interlaced within the stripe volume starting with stripe unit 0 on the first participating
volume, continuing to stripe unit 1 on the next participating volume, and so on. As necessary,
data wraps back to the first participating volume. This is controlled by stripe depth, which
is the amount of data written to a participating stripe volume member before moving on to
the next participating member. Stripe volume configuration considerations on page 30
provides guidelines to configure a stripe volume.
Metavolumes
File systems can only be created and stored on metavolumes. A metavolume is an end-to-end
concatenation of one or more disk volumes, slice volumes, stripe volumes, or metavolumes.
Configuring a metavolume
Metavolumes can be created from a disk volume, stripe volume, slice volume, or another
metavolume. A file system is created on the metavolume.
You can expand a metavolume by adding additional disk volumes, stripe volumes, slice
volumes, or metavolumes to it.
When you extend a file system that is on a metavolume, the metavolume is automatically
extended. Figure 4 on page 23 shows a metavolume configuration that uses three disk
volumes.
Figure 4. Metavolume configuration that uses three disk volumes
Figure 5. Metavolume addressing: information is read into the metavolume in ascending logical
block address order, from logical block address 0 to logical block address N
Figure 6. Metavolume created from four 9 GB volumes
Create striped volumes over a specified number of disk volumes by defining a 32 KB stripe
depth. Put these striped volumes together to create a striped metavolume. You can then
create a file system on the metavolume as shown in Figure 7 on page 25.
Figure 7. Striped metavolume with a 32,768-byte (32 KB) stripe depth across four 9 GB volumes
Note: The total capacity of a metavolume equals the sum of all volumes that compose the metavolume.
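For example, the configuration shown in Figure 7 can be built by creating the stripe volume,
concatenating it into a metavolume, and then creating a file system on the metavolume (a sketch
that uses the command syntax shown in the Configuring Volumes chapter; the volume and file
system names are placeholders):
$ nas_volume -name stv1 -create -Stripe 32768 d10,d12,d13,d15
$ nas_volume -name mtv1 -create -Meta stv1
$ nas_fs -name ufs1 -create mtv1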
BCV
BCVs are dedicated volumes that can be attached to a standard volume on which a file
system resides. The TimeFinder/FS feature of the VNX system uses BCVs to create file system
copies and mirror file systems dynamically. The EMC Customer Support Representative
creates BCVs on the storage system before installing the VNX software.
When planning for BCVs, ensure that you have as many BCVs as standard disk volumes to
be used by the largest file system. Figure 8 on page 25 shows the relationship between
standard volumes and BCVs.
Figure 8. BCV: an 18 GB standard volume paired with an 18 GB business continuance volume
BCVs are based on the LUNs (entire disk volumes) that are presented to the VNX system.
BCVs should not use slice volumes because TimeFinder/FS operations are run against the
entire disk volume. Disk volumes, stripe volumes, and metavolumes used in BCVs should
not be sliced.
The TimeFinder/FS feature uses BCVs to create file system copies and mirror file systems
dynamically:
◆ With the file system copy function, you can create an exact copy of a file system to use
as input to a backup or restore operation, for application development, or for testing.
◆ With the mirror function, you can create a file system copy in which all changes to the
original file system are reflected in the mirrored file system.
After a BCV is created, use the fs_timefinder command to create a file system copy.
CAUTION Do not attempt to use Symmetrix TimeFinder tools and utilities with file system copies
created by VNX TimeFinder/FS. It might result in loss of data.
Planning considerations
This section provides information that is helpful to plan file systems for the VNX system:
◆ Supported file system access protocols on page 27
◆ File system size guidelines on page 28
◆ NMFS on page 28
◆ Volume configuration guidelines on page 29
◆ Stripe volume configuration considerations on page 30
◆ Integration considerations on page 30
conflicts between clients and VNX systems. The clients use the file layout information
to read and write file data directly from and to the storage system.
NMFS
An NMFS allows you to manage a collection of component file systems as a single file system.
CIFS and NFS clients see component file systems as a single share or single export.
File system capacity is managed independently for each component file system. This means
that you can increase the total capacity of the NMFS by extending an existing component
file system or by adding new component file systems.
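For example, the capacity of an NMFS can be increased by extending one of its component file
systems (a sketch that uses the nas_fs -xtend syntax shown later in this document; fs5 is a
placeholder component file system and mtv2 a placeholder unused metavolume):
$ nas_fs -xtend fs5 mtv2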
The number of NMFS or component file systems is limited only to the number of file systems
allowed on a Data Mover. Hard links (NFS), renames, and simple moves are not possible
from one component file system to another.
The VNX features that support NMFS are:
◆ SnapSure
◆ VNX Replicator
◆ Quotas
◆ Security policies
◆ Backup
◆ EMC Symmetrix Remote Data Facility (SRDF®)
◆ TimeFinder/FS
◆ MPFS access to MPFS-enabled file systems
◆ If eight or more volumes are available, building stripe volumes on multiples of eight
volumes should give reasonable performance in most environments. If eight volumes
do not provide sufficient file system capacity, combine as many sets of eight volumes as
necessary into a single metavolume.
Integration considerations
This section identifies considerations for successful file system operations and integration
when using:
◆ Quotas on page 31
◆ TimeFinder/FS on page 31
◆ File-level retention on page 31
◆ SRDF on page 32
◆ MPFS on page 32
◆ VNX Replicator/SnapSure on page 32
◆ MirrorView/Synchronous on page 32
Quotas
To ensure that file systems do not become full, you can impose quota limits on users and
groups that create files on directory trees. You can set a hard quota limit on user, group, or
directory tree quotas to prevent allocation of all the space in the file system. When the hard
quota limit is reached, the system denies user requests to save additional files and notifies
the administrator that the hard quota limit has been reached. In this case, existing files can
be read but action must be taken either by the user or administrator to delete files from the
file system or increase the hard quota limit to allow saving of additional files.
To avoid degradation of file system performance, set the hard quota limit between 80 and
85 percent of the total file system space. In addition to setting the hard quota limit, set a
lower soft quota limit so that the administrator is notified when the hard quota limit is being
approached.
For example, to prevent a file system that contains 100 GB of storage from filling up, you
can set a soft quota limit of 80 GB and a hard quota limit of 85 GB by using user, group, or
directory tree quotas. When used space in the file system reaches 80 GB, the administrator
is notified that the soft limit is reached. When used space totals 85 GB, the system denies
user requests to save additional files, and the administrator is notified that the hard quota
limit is reached.
Using Quotas on VNX provides detailed information on quotas and how to set up user, group,
or directory tree quotas.
TimeFinder/FS
If you plan to create multiple copies of your PFS, plan for that number of BCVs. For example,
from one PFS, you can create 10 copies. Therefore, plan for 10 BCVs, not one.
TimeFinder/FS uses the physical disk, not the logical volume, when it creates BCV copies.
The copy is done track-by-track, so unused capacity is carried over to the BCVs.
Volumes used for BCVs should be of the same size as the standard volume.
Using TimeFinder/FS, NearCopy, and FarCopy on VNX provides additional information on
TimeFinder/FS.
File-level retention
File systems can only be enabled with file-level retention (FLR) capability at creation time.
When the file system is created and enabled for FLR, it is persistently marked as an FLR file
system and the FLR setting cannot be changed. After a file system is created and FLR is
enabled, an administrator can apply FLR protection to individual files. Files in the FLR
(locked) state can be stored with retention dates, which prohibit the deletion of the file until
expiration.
Using VNX File-Level Retention provides more information about FLR storage and FLR file
system behavior.
SRDF
All file systems on the Data Mover must be built on SRDF volumes. Using SRDF/S with VNX
for Disaster Recovery describes SRDF/S and Using SRDF/A with VNX describes SRDF/A.
If you use the AVM feature to create the file systems, specify the symm_std_rdf_src storage
pool. This storage pool directs AVM to allocate space from volumes configured during
installation for remote mirroring by using SRDF.
Automatic file system extension cannot be used for any file system that is part of a Remote
Data Facility (RDF) configuration.
Note: Do not use the nas_fs command with the -auto_extend option for file systems associated with
RDF configurations. Doing so generates the error message: Error 4121: operation not supported for
file systems of type SRDF.
MPFS
The read-only status of a file system mounted read-only is not acknowledged by clients that use
MPFS, so those clients can still write to the file system.
You cannot enable MPFS access to file systems with a stripe depth of less than 32 KB. Using
VNX Multi-Path File System provides additional information on MPFS.
VNX Replicator/SnapSure
By using VNX Replicator, you can enable automatic file system extension on the source file
system. When the source file system hits its high water mark (HWM), the destination file
system automatically extends first. Then the source file system automatically extends. If the
extension of the destination file system succeeds but the source file system extension fails,
the file systems differ in size, which causes replication failure. Use the nas_fs -xtend
<fs_name> -option src_only command to manually adjust the size of the source file system.
Using VNX Replicator contains instructions to recover from this situation. Managing Volumes
and File Systems with VNX AVM provides information on automatic file system extension.
There must be sufficient file system space and disk storage available to support VNX
Replicator and SnapSure operations. To review the entire file system size, use the nas_fs
-list command. To calculate the SavVol file size, use the nas_disk -size command. The VNX
Command Line Interface Reference for File provides a detailed synopsis of the commands
associated with SnapSure and VNX Replicator.
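For example, the file systems on the system, the size of a particular file system, and the size of a
disk volume being considered for SavVol space can be reviewed from the Control Station (a usage
sketch; ufs1 and d7 are placeholder names):
$ nas_fs -list
$ nas_fs -size ufs1
$ nas_disk -size d7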
MirrorView/Synchronous
All file systems on the Data Mover must be built on MirrorView/Synchronous LUNs. Using
MirrorView/Synchronous with VNX for Disaster Recovery provides detailed information on
MirrorView/Synchronous.
If you use the AVM feature to create the file systems, you must use the appropriate
MirrorView AVM storage pools for your RAID configuration. Managing Volumes and File
Systems with VNX AVM provides a list of the MirrorView AVM storage pools.
Configuring Volumes
Action
To view a list of unused disks and their sizes, type:
$ nas_disk -list
Output
Note
Column definitions:
id — ID of the disk (assigned automatically)
inuse — Indicates whether the disk is in use by a file system
sizeMB — Size of disk in megabytes
storageID-devID — ID of the storage system and device associated with the disk
type — Type of disk
name — Name of the disk
servers — Data Movers with access to the disk
Create volumes
You can create three types of volumes:
◆ Slice volume
◆ Stripe volume
◆ Metavolume
Each volume type provides different benefits to satisfy storage requirements. Volumes on
page 20 provides detailed information on volume types. Volume configuration guidelines
on page 29 lists common volume configurations and considerations associated with each
volume type.
List volumes
Action
To list all volumes on a system, type:
$ nas_volume -list
Output
This is a partial listing of the volume table that is displayed as the volume list:
Note
You can also use the nas_slice -list command to list only slice volumes.
Column definitions:
id — ID of the volume (assigned automatically)
inuse — Indicates whether the volume is in use by a file system; y indicates yes, n indicates no
type — Type of volume
acl — Access control value assigned to the volume
name — Name of the volume
Action
To create a slice volume, use this command syntax:
$ nas_slice -name <name> -create <volume_name> <size>
where:
<name> = name of the slice volume
<volume_name> = name of the volume from which the slice is created
<size> = size of the slice volume in megabytes
Example:
To create a slice volume named slv1, type:
$ nas_slice -name slv1 -create d8 1024
Output
id = 76
name = slv1
acl = 0
in_use = False
slice_of = d8
offset(MB) = 0
size (MB) = 1024
volume_name = slv1
When creating a stripe volume, if you do not specify a name for the stripe volume, a default
name is assigned. Stripe volume configuration considerations on page 30 provides more
information.
Action
To create a stripe volume, use this command syntax:
$ nas_volume -name <name> -create -Stripe <stripe_size> [<volume_name>,...]
where:
<name> = name of the stripe volume
<stripe_size> = stripe depth in bytes (for example, 32768 for a 32 KB stripe depth)
<volume_name> = names of the participating volumes, separated by commas
Example:
To create a stripe volume called stv1, type:
$ nas_volume -name stv1 -create -Stripe 32768 d10,d12,d13,d15
Output
id = 125
name = stv1
acl = 0
in_use = False
type = stripe
volume_set = d10,d12,d13,d15
disks = d10,d12,d13,d15
Create a metavolume
When creating a metavolume, if you do not specify a name for the metavolume, a default
name is assigned.
To combine volumes into a metavolume, use the <volume_name> option consecutively in
the command syntax.
Action
To create a metavolume, use this command syntax:
$ nas_volume -name <name> -create -Meta [<volume_name>,...]
where:
<name> = name of the metavolume
<volume_name> = names of the volumes to combine, separated by commas
Example:
To create a metavolume named mtv1 from the slice volumes slv1, slv2, and slv3, type:
$ nas_volume -name mtv1 -create -Meta slv1,slv2,slv3
Output
id = 268
name = mtv1
acl = 0
in_use = False
type = meta
volume_set = slv1, slv2, slv3
disks = d8, d19, d9
Procedure
1. Create RAID groups and user LUNs as needed for VNX for File volumes. Ensure that
you add the LUNs to the VNX for File gateway system's storage group and that you set
the HLU to 16 or higher:
•
Always create the user LUNs in balanced pairs, one owned by SP A and one owned
by SP B. The paired LUNs must be the same size.
•
FC or SAS disks must be configured as RAID 1/0, RAID 5, or RAID 6. The paired
LUNs do not need to be in the same RAID group but should be of the same RAID
type. RAID groups and storage characteristics on page 40 lists the valid RAID group
and storage system combinations. Gateway models use the same combinations as the
NS-80 (for CX3™ series storage systems) or the NS-960 (for CX4™ series storage
systems).
•
SATA disks must be configured as RAID 1/0, RAID 5, or RAID 6. All LUNs in a RAID
group must belong to the same SP. Create pairs by using LUNs from two RAID groups.
RAID groups and storage characteristics on page 40 lists the valid RAID group and
storage system combinations. Gateway models use the same combinations as the
NS-80 (for CX3 series storage systems) or the NS-960 (for CX4 series storage systems).
•
The host LUN identifier (HLU) must be greater than or equal to 16 for user LUNs.
Note: If you use the nas_disk -delete -perm command to permanently remove a disk volume
from your VNX for File or legacy Celerra, do not use the nas_diskmark command at a later
time to discover and mark a LUN with the same HLU as the LUN underlying the removed
disk volume. This causes data loss or data unavailability on your VNX for File or legacy Celerra.
To re-use underlying HLUs for VNX for File or legacy Celerra disk volumes, do not use the
-perm option. If you use the -perm option to remove a disk volume, reboot the Data Mover
prior to re-marking the LUN with the duplicate HLU.
Note: If you create 4+1 RAID 3 LUNs, the Number of LUNs to Bind value is 1.
2. Perform one of these steps to make the new user LUNs available to the VNX for File:
•
Using the Unisphere software:
a. Select Storage ➤ Storage Configuration ➤ File Systems.
b. From the task list, select File Storage ➤ Rescan Storage Systems.
•
Using the VNX for File CLI, type the following command:
nas_diskmark -mark -all
Note: Do not change the HLU of the VNX for File LUNs after rescanning. This might cause data
loss or data unavailability.
Note: For VNX systems, Advanced Data Service Policy features such as FAST and compression are
supported on pool-based LUNs only. They are not supported on RAID-based LUNs.
To open the Disk Provisioning Wizard for File in the Unisphere software:
1. Select Storage ➤ Storage Configuration ➤ Storage Pools.
2. From the task list, select Wizards ➤ Disk Provisioning Wizard for File.
Note: To use the Disk Provisioning Wizard for File, you must log in to the Unisphere software by using
the global sysadmin user account or by using a user account which has privileges to manage storage.
An alternative to the Disk Provisioning Wizard for File is available by using the VNX for
File CLI at /nas/sbin/setup_clariion. This alternative is not available for unified VNX systems.
The script performs the following actions:
◆ Provisions the disks on integrated (non-Performance) VNX for Block storage systems
when there are unbound disks to configure. This script binds the data LUNs on the xPEs
and DAEs, and makes them accessible to the Data Movers.
◆ Ensures that your RAID groups and LUN settings are appropriate for your VNX for File
server configuration.
The Unisphere for File software supports only the array templates for legacy EMC CLARiiON
CX™ and CX3 storage systems. CX4 and VNX systems must use the User_Defined mode
with the /nas/sbin/setup_clariion CLI script.
The setup_clariion script allows you to configure VNX for Block storage systems on a
shelf-by-shelf basis by using predefined configuration templates. For each enclosure (xPE
or DAE), the script examines your specific hardware configuration and gives you a choice
of appropriate templates. You can mix combinations of RAID configurations on the same
storage system. The script then combines the shelf templates into a custom, User_Defined
array template for each VNX for Block system, and then configures your array.
Action
To create a file system, use this command syntax:
$ nas_fs -name <fs_name> -create <volume_name>
where:
<fs_name> = name of the file system
Example:
To create a file system called ufs1 by using existing volumes, type:
$ nas_fs -name ufs1 -create mtv1
Output
id = 18
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = mtv1
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
002806000209-006,002806000209-007,002806000209-008,002806000209-009
disks = d3,d4,d5,d6
Action
To create a mount point on a Data Mover, use this command syntax:
$ server_mountpoint <movername> -create <pathname>
where:
<movername> = name of the Data Mover
Example:
To create a mount point named /ufs1 on server_3, type:
$ server_mountpoint server_3 -create /ufs1
Output
server_3: done
Note: The server_mount command creates a mount point if one does not exist.
Procedure
If you create a mount point on a Data Mover, mount the file system on that mount point.
The -option argument is used to specify a number of mount options. The VNX Command
Line Interface Reference for File provides a complete list of mount options available.
Action
To mount a file system on a mount point that is on a Data Mover, use this command syntax:
$ server_mount <movername> -option <options> <fs_name> <mount_point>
where:
<movername> = name of the Data Mover
<options> = list of mount options separated by commas
<fs_name> = name of the file system
<mount_point> = path to mount point for the Data Mover; a <mount_point> must begin with a forward slash (/)
Example:
To mount ufs1 on mount point /ufs1 with access checking policy set to NATIVE and nooplock turned on, type:
$ server_mount server_2 -option accesspolicy=NATIVE,nooplock ufs1 /ufs1
Output
server_2: done
1. Create an NMFS by using this command syntax:
$ nas_fs -name <name> -type nmfs -create
where:
<name> = name of the NMFS
Example:
To create an NMFS named nmfs1, type:
$ nas_fs -name nmfs1 -type nmfs -create
Output:
id = 26
name = nmfs1
acl = 0
in_use = False
type = nmfs
worm = off
volume = 0
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs =
disks =
2. Create a mount point in the root of the designated Data Mover for the new file system.
3. Mount the NMFS as read-only on the Data Mover by using this command syntax:
$ server_mount <movername> -option <options> <fs_name> <mount_point>
where:
<movername> = name of the Data Mover
<options> = list of mount options separated by comma
<fs_name> = name of the NMFS
<mount_point> = path to mount point for the Data Mover; a <mount_point> must begin
with a forward slash (/)
Example:
To mount an NMFS named nmfs1 as read-only on server_3, type:
$ server_mount server_3 -option ro nmfs1 /nmfs1
Output:
server_3: done
4. Export the new file system for NFS access.
or
Share the file system for CIFS access.
The steps to create a new component file system and to mount it on an NMFS are similar to
steps followed for mounting any file system:
1. Create a volume for the component file system.
2. Create the component file system on the new volume by using this command syntax:
$ nas_fs -name <name> -create <volume_name>
where:
<name> = name assigned to the file system
<volume_name> = name of the volume
Example:
To create a component file system called ufs1 on volume mtv1, type:
$ nas_fs -name ufs1 -create mtv1
3. Mount the component file system to the NMFS by using this command syntax:
$ server_mount <movername> -option <options> <fs_name> <mount_point>
where:
Output:
server_2: done
Managing Volumes
Example:
To check the volume capacity of mtv1, type:
$ nas_volume -size mtv1
Output
Rename a volume
Action
To rename a volume, use this command syntax:
$ nas_volume -rename <old_name> <new_name>
where:
<old_name> = current name of the volume
Example:
To rename the mtv metavolume to mtv1, type:
$ nas_volume -rename mtv mtv1
Output
id = 247
name = mtv1
acl = 0
in_use = False
type = meta
volume_set = stv1
disks = d3,d4,d5,d6
Clone a volume
You can make an exact copy of a stripe volume, slice volume, or metavolume by cloning it.
Cloning duplicates only the volume structure. It does not copy the file system or the data
in the file system at the time of cloning.
If -option disktype and source_volume:destination_volume are used together, the behavior
differs depending on which option is specified first.
Action
To clone a volume, use this command syntax:
$ nas_volume -Clone <volume_name> -option disktype=<disktype>
<source_volume>:<destination_volume>,...
where:
<volume_name> = name of the volume to be cloned
<source_volume> = sets a specific disk volume set for the source volume
<destination_volume> = sets a specific disk volume set for the destination volume
Example:
To clone the metavolume mtv1, type:
$ nas_volume -Clone mtv1
Output
id = 127
name = mtv1
acl = 0
in_use = False
type = meta
volume_set = d7
disks = d7
id = 128
name = v128
acl = 0
in_use = False
type = meta
volume_set = d8
disks = d8
Note
The example clones the metavolume mtv1. The default name of the cloned metavolume is v128.
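A disk type can also be requested for the clone by combining the -option disktype argument with a
source:destination volume mapping, following the syntax shown above (a sketch; the disk type value
CLSTD and the volume names d7 and d9 are assumptions, and valid disk types should be confirmed
in the VNX Command Line Interface Reference for File):
$ nas_volume -Clone mtv1 -option disktype=CLSTD d7:d9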
To ensure that the metavolume or stripe volume that you want to delete is not in use, list
the volume information and check the in_use parameter.
Action
To list the volume information, use this command syntax:
$ nas_volume -info <volume_name>
where:
<volume_name> = name of the metavolume or stripe volume
Example:
To list the volume information for mtv1, type:
$ nas_volume -info mtv1
Output
id = 247
name = mtv1
acl = 0
in_use = False
type = meta
volume_set = stv1
disks = d3,d4,d5,d6
Note
The in_use parameter for the mtv1 metavolume is False, indicating that the metavolume is not in use by a file system.
Remove all file systems from a volume that you want to delete. If the volume is part of a
larger metavolume configuration, remove file systems from the larger metavolume and
delete the larger metavolume before deleting the volume.
Action
To delete a metavolume, use this command syntax:
$ nas_volume -delete <volume_name>
where:
<volume_name> = name of the metavolume to delete
Example:
To delete a metavolume named mtv1, type:
$ nas_volume -delete mtv1
Output
id = 146
name = mtv1
acl = 1432, owner=nasadmin, ID=201
in_use = False
type = meta
volume_set = d7,mtv1
disks = d7,d8
Note
The in_use parameter for the mtv1 metavolume is False, indicating that the metavolume is not in use by a file system.
To ensure that the slice volume you want to delete is not in use, list the volume information
and check the in_use parameter.
Action
To list the volume information, use this command syntax:
$ nas_slice -info <slice_name>
where:
<slice_name> = name of the slice volume
Example:
To list the slice volume information for slv1, type:
$ nas_slice -info slv1
Output
id = 67
name = slv1
acl = 0
in_use = False
type = slice
slice_of = d7
offset(MB)= 0
size (MB) = 2048
volume_name = slv1
Note
The in_use parameter for the slv1 slice volume is False, indicating that the slice volume is not in use by a file system.
If the slice volume is part of a metavolume configuration, remove file systems from the
metavolume and delete the metavolume before deleting the slice volume.
Action
To delete a slice volume, use this command syntax:
$ nas_slice -delete <slice_name>
where:
<slice_name> = name of the slice volume to delete
Example:
To delete slice volume information for slv1, type:
$ nas_slice -delete slv1
Output
id = 67
name = slv1
acl = 0
in_use = False
slice_of = d7
offset(MB)= 0
size (MB) = 2048
Note
The in_use parameter for the slv1 slice volume is False, indicating that the slice volume is not in use by a file system.
Action
To export a file system for NFS access, use this command syntax:
$ server_export <movername> -Protocol nfs -option <options> /<pathname>
where:
<movername> = name of the Data Mover
<options> = export options, separated by commas
<pathname> = mount point path of the file system to export
Example:
To export the file system ufs2 for NFS access, with root privileges granted to the client 10.1.1.1, type:
$ server_export server_3 -Protocol nfs -option root=10.1.1.1 /ufs2
Output
server_3: done
Action
To export a file system for CIFS access, use this command syntax:
$ server_export <movername> -Protocol cifs -name <sharename> /<pathname>
where:
<movername> = name of the Data Mover
<sharename> = name of the CIFS share
<pathname> = mount point path of the file system to share
Example:
To share the file system ufs2 for CIFS access, type:
$ server_export server_3 -Protocol cifs -name ufs2 /ufs2
Output
server_3: done
Export an NMFS
When you export an NMFS, you export and mount the NMFS root which provides access
to all component file systems. Any options set on the NMFS root propagate to the component
file systems. However, you can export the component file system with different export
options.
When you export a component file system in an NMFS hierarchy, you can export only the
mount point path of the component file system. Subdirectories of the component file system
cannot be exported.
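For example, a component file system mounted at /nmfs1/fs5 can be exported for NFS access by its
mount point path, with options that differ from those on the NMFS root (a sketch that uses the
server_export syntax shown above; the names and client address are placeholders):
$ server_export server_3 -Protocol nfs -option root=10.1.1.1 /nmfs1/fs5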
Output
Note
Column definitions:
id — ID of the file system (assigned automatically)
inuse — Indicates whether the file system is registered in the mount table of a Data Mover; y indicates yes, n indicates no
type — Type of file system
acl — Access control value for the file system
volume — Volume on which the file system resides
name — Name assigned to the file system
server — ID of the Data Mover that is accessing the file system
Action
To view configuration information of a specific file system, use this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To view configuration information on ufs1, type:
$ nas_fs -info ufs1
Output
id = 18
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = mtv1
pool =
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
symm_devs =
002806000209-006,002806000209-007,002806000209-008,002806000209-009
disks = d3,d4,d5,d6
Example:
To list mount points on server_3, type:
$ server_mountpoint server_3 -list
Output
server_3:
/.etc_common
/ufs1
/ufs1_snap1
Action
To display a list of all file systems mounted on a Data Mover, use this command syntax:
$ server_mount <movername>
where:
<movername> = name of the Data Mover
Example:
To list all file systems mounted on server_3, type:
$ server_mount server_3
Output
server_3:
fs2 on /fs2 uxfs,perm,rw
fs1 on /fs1 uxfs,perm,rw
root_fs_3 on / uxfs,perm,rw
Action
To display disk space of a file system, use this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To view the total space available on ufs1, type:
$ nas_fs -size ufs1
Output
Example:
To view the total disk space of all file systems on server_2, type:
$ server_df server_2
Output
server_2:
Filesystem kbytes used avail capacity Mounted on
root_fs_common 15360 1392 13968 9% /.etc_common
ufs1 34814592 54240 34760352 0% /ufs1
ufs2 104438672 64 104438608 0% /ufs2
ufs1_snap1 34814592 64 34814528 0% /ufs1_snap1
root_fs_2 15360 224 15136 1% /
Action
To view the inode capacity on a single Data Mover, use this command syntax:
$ server_df <movername> -inode <fs_name>
where:
<movername> = name of the Data Mover
Example:
To view the inode allocation and availability on ufs2, type:
$ server_df server_2 -inode ufs2
Output
server_2:
Filesystem inodes used avail capacity Mounted on
ufs2 12744766 8 12744758 0% /ufs2
Note
Column definitions:
Filesystem — Name of the file system
inodes — Total number of inodes allocated to the file system
used — Number of inodes in use by the file system
avail — Number of free inodes available for use by the file system
capacity — Percentage of total inodes in use
Mounted on — Name of the file system mount point on the Data Mover
Action
To view the inode capacity of all file systems on a Data Mover, use this command syntax:
$ server_df <movername> -inode
where:
<movername> = name of the Data Mover
Example:
To view the inode capacity of all file systems on server_2, type:
$ server_df server_2 -inode
Output
server_2:
Filesystem inodes used avail capacity Mounted on
root_fs_common 7870 14 7856 0% /.etc_common
ufs1 4250878 1368 4249510 0% /ufs1
ufs2 12744766 8 12744758 0% /ufs2
ufs1_snap1 4250878 8 4250870 0% /ufs1_snap1
root_fs_2 7870 32 7838 0% /
Procedure
1. Check the size of the file system before extending it by using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of the file system ufs1, type:
$ nas_fs -size ufs1
Output:
2. Extend the file system by adding volume space, by using this command syntax:
$ nas_fs -xtend <fs_name> <volume_name>
where:
<fs_name> = name of the file system
<volume_name> = name of the volume
Example:
To extend the file system ufs1, type:
$ nas_fs -xtend ufs1 emtv2b
Output:
id = 18
name = ufs1
acl = 0
in_use = True
type = uxfs
volume = mtv1, emtv2b
profile =
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
symm_devs = 002804000190-0034,002804000190-0035,002804000190-0036,
002804000190-0037,002804000190-0040,002804000190-0041,002804000190-0042,
002804000190-0043
disks = d3,d4,d5,d6,d15,d16,d17,d18
disk=d3 symm_dev=002804000190-0034 addr=c0t3l8-15-0 server=server_2
disk=d4 symm_dev=002804000190-0035 addr=c0t3l9-15-0 server=server_2
disk=d5 symm_dev=002804000190-0036 addr=c0t3l10-15-0 server=server_2
disk=d6 symm_dev=002804000190-0037 addr=c0t3l11-15-0 server=server_2
disk=d15 symm_dev=002804000190-0040 addr=c0t4l4-15-0 server=server_2
disk=d16 symm_dev=002804000190-0041 addr=c0t4l5-15-0 server=server_2
disk=d17 symm_dev=002804000190-0042 addr=c0t4l6-15-0 server=server_2
disk=d18 symm_dev=002804000190-0043 addr=c0t4l7-15-0 server=server_2
3. Check the size of the file system after extending it by using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of the file system ufs1 after extending it, type:
$ nas_fs -size ufs1
Output:
◆ You can extend the size of the source (production) file system without impacting the
destination file system by using the -xtend src_only option. The VNX Command Line
Interface Reference for File provides a detailed synopsis of the commands associated with
the VNX Replicator.
◆ Verify whether there is enough volume space to extend the source and destination file
systems.
1. On the primary site, verify the current sizes of the source and destination file systems
by using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To verify the current size of the source file system src_ufs1, type:
$ nas_fs -size src_ufs1
Output:
Output:
where:
<fs_name> = name of the source file system
<volume_name> = name of the volume
Example:
To extend the source file system (on the primary site), type:
4. Check the size of the file system after extending it by using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the source file system
Example:
To check the size of the source file system src_ufs1 after extending it, type:
$ nas_fs -size src_ufs1
Output:
Adjust the file system size threshold for all file systems
1. Change the size threshold for all file systems by using this command syntax:
$ server_param ALL -facility <facility_name> -modify <param_name>
-value <new_value>
where:
<facility_name> = name of the facility to which the parameter belongs
<param_name> = name of the parameter
<new_value> = new value for the parameter
Example:
To change the size threshold for all file systems to 85 percent, type:
$ server_param ALL -facility file -modify fsSizeThreshold -value 85
1. Change the file system size threshold on a single Data Mover by using this command
syntax:
$ server_param <movername> -facility <facility_name> -modify <param_name>
-value <new_value>
where:
<movername> = name of the Data Mover
<facility_name> = name of the facility to which the parameter belongs
<param_name> = name of the parameter
<new_value> = new value for the parameter
Example:
To change the size threshold for all file systems on server_2, type:
$ server_param server_2 -facility file -modify fsSizeThreshold -value 85
2. Restart the Data Mover for the change to take effect by using this command syntax:
$ server_cpu <movername> -reboot now
where:
<movername> = name of the Data Mover
Example:
To restart server_2 for the change to take effect, type:
$ server_cpu server_2 -reboot now
1. Permanently unmount the file system from its Data Mover by using this command syntax:
$ server_umount <movername> -perm <fs_name>
where:
<movername> = name of the Data Mover
<fs_name> = name of the file system to unmount
Note: To permanently unmount a file system from a Data Mover by specifying the mount point
path, use the -perm <mount_point> option instead of the -perm <fs_name> option.
Example:
To permanently unmount a file system named fs1, type:
$ server_umount server_2 -perm /fs1
Output:
server_2: done
2. Create a new mount point for the file system in the NMFS.
3. Mount the file system in the NMFS by using this command syntax:
$ server_mount <movername> -option <options> <fs_name> <mount_point>
where:
<movername> = name of the Data Mover
<options> = list of mount options separated by comma
<fs_name> = name of the file system to mount into the NMFS
<mount_point> = pathname within the NMFS, in the format /<NMFS mount point>/<component file
system name>
Example:
To mount the file system fs5 in the NMFS nmfs4 on server_3 with the nolock option, type:
$ server_mount server_3 -option nolock fs5 /nmfs4/fs5
Output:
server_3: done
Move an NMFS
You can move an NMFS from one Data Mover to another:
1. Permanently unmount each of the component file systems.
2. Permanently unmount the NMFS.
3. Mount the NMFS on the new Data Mover.
4. Mount each component file system on the NMFS on the new Data Mover.
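The following sketch strings these steps together for an NMFS named nmfs1 with a single component
file system fs5, moved from server_2 to server_3 (the names and mount options are placeholders; the
commands use the server_umount and server_mount syntax shown elsewhere in this document):
$ server_umount server_2 -perm fs5
$ server_umount server_2 -perm nmfs1
$ server_mount server_3 -option ro nmfs1 /nmfs1
$ server_mount server_3 -option nolock fs5 /nmfs1/fs5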
Example:
To rename a file system ufs as ufs1, type:
$ nas_fs -rename ufs ufs1
Output
id = 18
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = mtv1
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
002806000209-006,002806000209-007,002806000209-008,002806000209-009
disks = d3,d4,d5,d6
Action
To turn off the read prefetch mechanism, use this command syntax:
$ server_mount <movername> -option <options>,noprefetch <fs_name> <mount_point>
where:
<movername> = name of the Data Mover
<mount_point> = path to mount point for the Data Mover; a <mount_point> must begin with a forward slash (/)
Example:
To turn off the read prefetch mechanism for ufs1, type:
$ server_mount server_3 -option rw,noprefetch ufs1 /ufs1
Output
server_3: done
Turn off read prefetch for all file systems on a Data Mover
1. Turn off the read prefetch mechanism for all file systems on a Data Mover by using this
command syntax:
$ server_param <movername> -facility <facility_name> -modify
prefetch -value 0
where:
<movername> = name of the Data Mover
<facility_name> = facility for the parameters
Example:
To turn off the prefetch mechanism for all file systems on server_2, type:
$ server_param server_2 -facility file -modify prefetch -value 0
where:
<movername> = name of the Data Mover
Example:
To restart server_2 immediately, type:
$ server_cpu server_2 -reboot now
The uncached write mechanism is designed to improve performance for applications, such as
databases, that make many connections to a large file. This mechanism can enhance database
access through the NFS protocol by 30 percent or more. The mechanism is turned off by
default, but it can be turned on for a file system.
Action
To turn on the uncached write mechanism for a file system, use this command syntax:
$ server_mount <movername> -option <options>,uncached <fs_name> <mount_point>
where:
<movername> = name of the Data Mover
<options> = list of mount options separated by comma
<fs_name> = name of the file system
<mount_point> = path to mount point for the Data Mover; a <mount_point> must begin with a forward slash (/)
Example:
To turn on the uncached write mechanism for the file system ufs1, type:
$ server_mount server_3 -option rw,uncached ufs1 /ufs1
Output
server_3: done
The -temp option of the server_umount command is the default and does not need to be
specified as part of the command.
Action
To temporarily unmount all file systems on a Data Mover, use this command syntax:
$ server_umount <movername> -temp -all
where:
<movername> = name of the Data Mover
Example:
To temporarily unmount all file systems on server_2, type:
$ server_umount server_2 -temp -all
Output
server_2: done
Permanently unmounting all file systems from a Data Mover deletes the contents of the
mount table. To reestablish client access to the file systems, you must rebuild the mount
table by remounting each file system on the Data Mover.
Action
To permanently unmount all file systems on a Data Mover, use this command syntax:
$ server_umount <movername> -perm -all
where:
<movername> = name of the Data Mover
Example:
To permanently unmount all file systems on server_2, type:
$ server_umount server_2 -perm -all
Output
server_2: done
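Because a permanent unmount of all file systems clears the mount table, each file system must be remounted afterward to restore client access. A minimal sketch, assuming two hypothetical file systems ufs1 and ufs2 that were previously mounted on server_2 at /ufs1 and /ufs2:
$ server_mount server_2 ufs1 /ufs1
$ server_mount server_2 ufs2 /ufs2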
2. Determine whether the file system has an associated storage pool by using this command
syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
If the pool output line displays a value, the file system has an associated storage pool.
If the file system does not have an associated storage pool, proceed to step 3. If the file
system has an associated storage pool, proceed to step 4.
3. Determine and note the metavolume name on which the file system is built. You need
to provide the metavolume name in step 10:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Note: The Volume field contains the metavolume name. The Disks field lists the disks providing
storage to the file system.
4. If the file system has associated checkpoints, permanently unmount and then delete the
checkpoints and their associated volumes.
5. If the file system has associated BCVs, break the connection between (unmirror) the file
system and its BCVs.
6. If the file system is an NFS-exported file system, permanently disable client access to the
file system by using this command syntax:
$ server_export <movername> -Protocol nfs -unexport -perm <pathname>
where:
<movername> = name of the Data Mover
<pathname> = NFS entry
7. If the file system is a CIFS-exported file system, permanently disable client access to the
file system by using this command syntax:
$ server_export <movername> -Protocol cifs -unexport <sharename>
where:
<movername> = name of the Data Mover
<sharename> = name of the shared component file system
8. Permanently unmount the file system from its associated Data Movers by using this
command syntax:
$ server_umount <movername> -perm <fs_name>
where:
<movername> = name of the Data Mover
<fs_name> = name of the file system
Note: To delete an NMFS, permanently unmount all component file systems in the NMFS.
9. Delete the file system or NMFS from the VNX system by using this command syntax:
$ nas_fs -delete <fs_name>
where:
<fs_name> = name of the file system
If the file system has an associated storage pool, as part of the file system delete operation,
AVM deletes all underlying volumes and frees the space for use by other file systems.
If the file system has no associated storage pool, proceed to step 10. The volumes
underlying the file system were created manually and must be manually deleted.
10. Delete the metavolume on which the file system was created by using this command
syntax:
$ nas_volume -delete <volume_name>
where:
<volume_name> = name of the volume
11. If the metavolume included stripe volumes, delete all stripe volumes associated with the
metavolume by using this command syntax, until the disk space is free:
$ nas_volume -delete <volume_name>
where:
<volume_name> = name of the volume
12. If the metavolume included slice volumes, delete all slice volumes associated with the
metavolume by using this command syntax, until the disk space is free:
$ nas_volume -delete <volume_name>
where:
<volume_name> = name of the volume
13. After freeing disk space, check for slice volumes, stripe volumes, and metavolumes not
in use (identified by an “n” in the inuse column in the command output) by using these
commands:
$ nas_volume -list
$ nas_slice -list
Delete unused volumes until you free all the disk space you want.
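For reference, a minimal sketch of steps 8 through 13 for a manually built file system, using hypothetical names: a file system ufs1 mounted on server_2 and built on metavolume mtv1, which was created from stripe volume stv1:
$ server_umount server_2 -perm ufs1
$ nas_fs -delete ufs1
$ nas_volume -delete mtv1
$ nas_volume -delete stv1
$ nas_volume -list
$ nas_slice -list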
The tasks to check file system consistency and to repair a damaged file
system are:
◆ Run a file system check on page 84
◆ Start an ACL check on the file system on page 84
◆ List file system checks on page 85
◆ Display the file system check information on a file system on page 85
◆ Display information on all current file system checks on page 86
Run a file system check
Action
To run a file system check and monitor its progress, use this command syntax:
$ nas_fsck -start <fs_name> -monitor
where:
<fs_name> = name of the file system
Example:
To start fsck on ufs1 and monitor the progress, type:
$ nas_fsck -start ufs1 -monitor
Output
id = 27
name = ufs1
volume = mtv1
fsck_server = server_2
inode_check_percent = 10..20..30..40..60..70..80..100
directory_check_percent = 0..0..100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress..Done
Action
To start an ACL check on a specified file system, use this command syntax:
$ nas_fsck -start <fs_name> -aclchkonly
where:
<fs_name> = name of the file system
Example:
To start an ACL check on ufs1 and monitor the progress, type:
$ nas_fsck -start ufs1 -aclchkonly
Output
ACLCHK: in progress for file system ufs1
Example:
To display information about file system check for ufs2, type:
$ nas_fsck -info ufs2
Output
name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 100
directory_check_percent = 100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress
Output
name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 30
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started
name = ufs1
id = 27
volume = mtv1
fsck_server = server_2
inode_check_percent = 100
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started
Troubleshooting
Symptom: The Unisphere software does not provide an interface for renaming a file system.
Resolution: To rename a file system, type the appropriate CLI command in the CLI command entry page available in the Unisphere software, or directly in the CLI.

Symptom: You are unable to mount a file system.
Cause: There are many probable causes for this scenario. An error message is displayed in most instances, though occasionally there might not be one. In this case, the mount table entry already exists.
Resolution: Perform server_mount -all to activate all entries in the mount table. Obtain a list of mounted file systems, and then observe the entries. If the file system in question is already mounted (temporarily or permanently), perform the necessary steps to unmount it, and then retry.

Symptom: An unmounted file system reappears in the mount table after the system restarts.
Cause: The file system might have been temporarily unmounted before the system restarted.
Resolution: Perform a permanent unmount to remove the entry from the mount table.

Symptom: When you create a new file in the NMFS root directory, a file exists error appears.
Cause: The NMFS root directory is read-only.
Resolution: Do not try to create files or folders in the NMFS root directory.

Symptom: You are unable to slice a disk volume.
Cause: You receive an error message and the slice is not created.
Resolution: To verify that the disk volume that you want to slice has enough unused space, use this command syntax:
$ nas_volume -size <volume_name>
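For example, assuming a hypothetical disk volume named d7, you would check its size with:
$ nas_volume -size d7
The reported size shows whether enough unused space remains on the disk volume for the slice you want to create.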
Error messages
All event, alert, and status messages provide detailed information and recommended actions
to help you troubleshoot the situation.
To view message details, use any of these methods:
◆ Unisphere software:
• Right-click an event, alert, or status message and select to view Event Details, Alert
Details, or Status Details.
◆ CLI:
• Use this guide to locate information about messages that are in the earlier-release
message format.
• Use the text from the error message's brief description or the message's ID to search
the Knowledgebase on the EMC Online Support website. After logging in to EMC
Online Support, locate the applicable Support by Product page, and search for the
error message.
EMC customer training courses are developed and delivered by EMC experts. Go to the EMC
Online Support website at http://Support.EMC.com for course and registration information.
EMC Professional Services can help you implement your system efficiently. Consultants
evaluate your business, IT processes, and technology, and recommend ways that you can
leverage your information for the most benefit. From business plan to implementation, you
get the experience and expertise that you need without straining your IT staff or hiring and
training new personnel. Contact your EMC Customer Support Representative for more
information.
GID Support
Overview
The system software supports 32-bit GID (group IDs) on NFS and CIFS file systems. This
support enables a maximum GID value of 2,147,483,647 (approximately 2 billion).
◆ Some backup applications have restrictions. Ensure that the application can handle 32-bit
UIDs/GIDs.
There is no command to verify whether a file system supports 16-bit or 32-bit GIDs.
append-only state
State of a file when the data in it cannot be modified, but the file can have new data appended
to the end of it. In addition, the file itself cannot be deleted. Once a file in the append-only state
has been written to, changing it to the locked state by making it read-only locks it into that state
until its retention date has passed.
disk volume
On a VNX for File, a physical storage unit as exported from the storage system. All other volume
types are created from disk volumes.
See also metavolume, slice volume, stripe volume, and volume.
expired state
State of a file when its retention date has passed. A file in the expired state can be reverted back
to the locked state or deleted from the FLR-enabled file system, but cannot be altered.
file system
Method of cataloging and managing the files and directories on a system.
inode
“On-disk” data structure that holds information about files in a file system. This information
identifies the type of each file as a regular file (which includes VNX FileMover stub files), a
directory, or a symbolic link.
locked state
State of a file when its read/write permission is changed to read-only in a file system enabled
for file-level retention. Files committed to the locked (WORM) state cannot be altered or deleted
until their retention date has passed.
metavolume
On a VNX for File, a concatenation of volumes, which can consist of disk, slice, or stripe volumes.
Also called a hypervolume or hyper. Every file system must be created on top of a unique
metavolume.
See also disk volume, slice volume, stripe volume, and volume.
retention date
Date until which a locked file in an FLR-enabled file system will be protected. Users and
applications manage a file's retention date by using NFS or CIFS to set the file's last access time
to a future date and time. The retention timestamp is compared to the file system's FLR clock
to determine whether a file's retention date has passed.
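For illustration only, from an NFS client a file's retention date might be set by adjusting its last access time with the standard touch utility (the path and date here are hypothetical; the exact method depends on the client and application):
$ touch -a -t 203012312359 /mnt/flr_fs/report.txt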
slice volume
On VNX for File, a logical piece or specified area of a volume used to create smaller, more
manageable units of storage.
See also disk volume, metavolume, stripe volume, and volume.
storage pool
Groups of available disk volumes organized by AVM that are used to allocate available storage
to file systems. They can be created automatically by AVM or manually by the user.
See also Automatic Volume Management.
stripe volume
Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across
the volume and are addressed in an interlaced manner. Stripe volumes make load balancing
possible.
See also disk volume, metavolume, and slice volume.
volume
On VNX for File, a virtual disk into which a file system, database management system, or other
application places data. A volume can be a single disk partition or multiple partitions on one
or more physical drives.
See also disk volume, metavolume, slice volume, and stripe volume.
A
adjusting file system size threshold 72, 73
automatic file system extension 32

B
business continuance volume (BCV)
  configuration 25

C
capacity
  volume 52
cautions 13, 14
  file systems 13
  fsck 14
  fsck processes 14
  nas_db database 13
  slice volumes 13
  spanning storage systems 13
CAVA
  integration considerations 13
cloning a volume 53
component file system
  creating 48
  extending 68
  unmounting 78
creating
  stripe volume 36, 37
  volumes 34

D
deleting
  file system 80
  metavolume 54
  stripe volume 54
disk volume
  explanation 21
  freeing file system space 80
  renaming 52
  unused 34

E
EMC E-Lab Navigator 88
error messages 89
export NMFS 61

F
file system
  concepts 18
  deleting 80
  displaying mounted 64
  freeing disk space allocated 80
  mirroring 26
  permanent
    mount 46
  quotas 13, 31
  size guidelines 28
  unmount all
    permanent 79
    temporary 79
file system size threshold 72, 73
  change
    for Data Mover 72, 73
  fsSizeThreshold parameter 72, 73
file-level retention
  integration considerations 31
fsck

G
GID support
  restrictions 92

I
integration considerations
  file-level retention 31
  quotas 31
  Replicator 32
  SnapSure 32
  SRDF 32
  TimeFinder/FS 31
International character support
  Unicode characters 13

L
listing
  mount points 63
  mounted file systems 64

M
messages, error 89
metavolume 18, 22, 23, 24, 37, 52, 54
  addressing 24
  concepts 18
  configuration guidelines 23
  creating 37
  deleting 54
  renaming 52
mirrored file system 26
mount point
  creating 45
  listing 63
MPFS. See multi-path file system (MPFS) 28
multi-path file system (MPFS) 28, 32
  integration considerations 32

N
nas_db database
  cautions and restrictions 13
nas_fsck 84, 85, 86
nested mount file system (NMFS)
  about 28

Q
quotas
  integration considerations 13, 31
quotas for file system 13

R
RAID group combinations 40
renaming
  disk volume 52
  metavolume 52
  slice volume 52
  stripe volume 52
Replicator
  integration considerations 32
restrictions
  GID support 92
  nas_db database 13
  TimeFinder/FS 13

S
server_mount command 45, 76, 78
slice volume
  how it works 21
  renaming 52
SnapSure
  integration considerations 13, 32
SRDF
  integration considerations 32
stripe volume
  creating 36, 37
  deleting 54
  explanation 22
  improving performance with 22
  renaming 52

T
TFTP. See Trivial File Transfer Protocol (TFTP) 27
TimeFinder/FS
  integration considerations 13, 31