Veritas File System 5.0
Administrator's Guide
Solaris
N18516F
Veritas File System Administrator's Guide
The software described in this book is furnished under a license agreement and may be used
only in accordance with the terms of the agreement.
PN: N18516F
Legal Notice
Copyright © 2006 Symantec Corporation.
Symantec, the Symantec Logo, and Storage Foundation are trademarks or registered
trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other
names may be trademarks of their respective owners.
The product described in this document is distributed under licenses restricting its use,
copying, distribution, and decompilation/reverse engineering. No part of this document
may be reproduced in any form by any means without prior written authorization of
Symantec Corporation and its licensors, if any.
THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS,
REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT,
ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO
BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL
OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE,
OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS
DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be "commercial computer software"
and "commercial computer software documentation" as defined in FAR Sections 12.212 and
DFARS Section 227.7202.
http://www.symantec.com
10 9 8 7 6 5 4 3 2 1
Contents
Chapter 7 Quotas
About quota limits ...................................................................... 101
About quota files on Veritas File System ....................................... 102
About quota commands ............................................................... 103
About quota checking with Veritas File System ................................. 104
Using quotas ............................................................................. 104
Turning on quotas ................................................................ 104
Turning on quotas at mount time ............................................ 105
Editing user and group quotas .......................................... 106
Modifying time limits ............................................................ 107
Viewing disk quotas and usage ................................................ 107
Displaying blocks owned by users or groups .............................. 108
Turning off quotas ................................................................ 108
Glossary
Index
Chapter 1
Introducing Veritas File
System
This chapter includes the following topics:
Logging
A key aspect of any file system is how to recover if a system crash occurs. Earlier
methods required a time-consuming scan of the entire file system. A better solution
is to log (or journal) the metadata of files.
VxFS logs new attribute information into a reserved area of the file system
whenever file system changes occur. The file system writes the actual data to disk
only after the write of the metadata to the log is complete. If and when a system
crash occurs, the system recovery code analyzes the metadata log and tries to clean
up only those files that were being modified at the time of the crash. Without
logging, a file system check (fsck) must look at all of the metadata.
Intent logging minimizes system downtime after abnormal shutdowns by logging
file system transactions. When the system is halted unexpectedly, this log can be
replayed and outstanding transactions completed. The check and repair time for
file systems can be reduced to a few seconds, regardless of the file system size.
By default, VxFS file systems log file transactions before they are committed to
disk, reducing time spent checking and repairing file systems after the system is
halted unexpectedly.
Extents
An extent is a contiguous area of storage in a computer file system, reserved for
a file. When starting to write to a file, a whole extent is allocated. When writing
to the file again, the data continues where the previous write left off. This reduces
or eliminates file fragmentation.
Since VxFS is an extent-based file system, addressing is done through extents
(which can consist of multiple blocks) rather than in single-block segments.
Extents can therefore enhance file system throughput.
■ Extent-based allocation
Extents allow disk I/O to take place in units of multiple blocks if storage is
allocated in consecutive blocks.
■ Extent attributes
Extent attributes are the extent allocation policies associated with a file.
■ Fast file system recovery
VxFS provides fast recovery of a file system from system failure.
■ Extended mount options
The VxFS file system supports extended mount options to specify enhanced
data integrity modes, enhanced performance modes, temporary file system
modes, improved synchronous writes, and large file sizes.
■ Enhanced data integrity modes
VxFS avoids the problem of uninitialized data appearing in a file by waiting
until the data has been flushed to disk before updating the new file size to disk.
■ Enhanced performance mode
VxFS provides mount options to improve performance.
■ Modes of temporary file systems
VxFS supplies an option to allow users to achieve higher performance on
temporary file systems by delaying the logging for most operations.
■ Improved synchronous writes
VxFS provides superior performance for synchronous write applications.
■ Large files and file systems support
VxFS supports files larger than two terabytes and large file systems up to 256
terabytes.
■ Access control lists (ACLs)
An Access Control List (ACL) stores a series of entries that identify specific
users or groups and their access privileges for a directory or file.
■ Storage Checkpoints
Backup and restore applications can leverage Storage Checkpoint, a disk- and
I/O-efficient copying technology for creating periodic frozen images of a file
system.
■ Online backup
VxFS provides online data backup using the snapshot feature.
■ Quotas
VxFS supports quotas, which allocate per-user and per-group quotas and limit
the use of two principal resources: files and data blocks.
Note: VxFS supports all UFS file system features and facilities except for linking,
removing, or renaming “.” and “..” directory entries. These operations may
disrupt file system operations.
Extent-based allocation
Disk space is allocated in 512-byte sectors to form logical blocks. VxFS supports
logical block sizes of 1024, 2048, 4096, and 8192 bytes. The default block size
is 1K for file systems up to 4 TB, 2K for file systems up to 8 TB, 4K for file
systems up to 16 TB, and 8K for file systems beyond this size.
An extent is defined as one or more adjacent blocks of data within the file system.
An extent is presented as an address-length pair, which identifies the starting
block address and the length of the extent (in file system or logical blocks). VxFS
allocates storage in groups of extents rather than a block at a time.
Extents allow disk I/O to take place in units of multiple blocks if storage is allocated
in consecutive blocks. For sequential I/O, multiple block operations are
considerably faster than block-at-a-time operations; almost all disk drives accept
I/O operations of multiple blocks.
Extent allocation only slightly alters the interpretation of addressed blocks from
the inode structure compared to block-based inodes. A VxFS inode references 10
direct extents, each of which is a pair of a starting block address and a length in
blocks.
The VxFS inode supports different types of extents, namely ext4 and typed. Inodes
with ext4 extents also point to two indirect address extents, which contain the
addresses of first and second extents:
first Used for single indirection. Each entry in the extent indicates the
starting block number of an indirect data extent
second Used for double indirection. Each entry in the extent indicates the
starting block number of a single indirect address extent.
Each indirect address extent is 8K long and contains 2048 entries. All indirect
data extents for a file must be the same size; this size is set when the first indirect
data extent is allocated and stored in the inode. Directory inodes always use an
8K indirect data extent size. By default, regular file inodes also use an 8K indirect
data extent size that can be altered with vxtunefs; these inodes allocate the
indirect data extents in clusters to simulate larger extents.
Typed extents
VxFS has an inode block map organization for indirect extents known as typed
extents. Each entry in the block map has a typed descriptor record containing a
type, offset, starting block, and number of blocks.
Indirect and data extents use this format to identify logical file offsets and physical
disk locations of any given extent.
The extent descriptor fields are defined as follows:
type Identifies uniquely an extent descriptor record and defines the record's
length and format.
offset Represents the logical file offset in blocks for a given descriptor. Used
to optimize lookups and eliminate hole descriptor entries.
■ Indirect address blocks are fully typed and may have variable lengths up to a
maximum and optimum size of 8K. On a fragmented file system, indirect
extents may be smaller than 8K depending on space availability. VxFS always
tries to obtain 8K indirect extents but resorts to smaller indirects if necessary.
■ Indirect data extents are variable in size to allow files to allocate large,
contiguous extents and take full advantage of optimized I/O in VxFS.
■ Holes in sparse files require no storage and are eliminated by typed records.
A hole is determined by adding the offset and length of a descriptor and
comparing the result with the offset of the next record.
■ While there are no limits on the levels of indirection, lower levels are expected
in this format since data extents have variable lengths.
■ This format uses a type indicator that determines its record format and content
and accommodates new requirements and functionality for future types.
The current typed format is used on regular files and directories only when
indirection is needed. Typed records are longer than the previous format and
require fewer direct entries in the inode. Newly created files start out using the old
format, which allows for ten direct extents in the inode. The inode's block map is
converted to the typed format when indirection is needed to offer the advantages
of both formats.
Extent attributes
VxFS allocates disk space to files in groups of one or more extents. VxFS also
allows applications to control some aspects of the extent allocation. Extent
attributes are the extent allocation policies associated with a file.
The setext and getext commands allow the administrator to set or view extent
attributes associated with a file, as well as to preallocate space for a file.
See the setext(1) and getext(1) manual pages.
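For example, the following commands (a sketch; myfile is a placeholder and the sizes are in file system blocks) preallocate 2048 blocks for a file with a fixed extent size of 256 blocks, and then display the resulting attributes:
# setext -r 2048 -e 256 myfile
# getext myfile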
The vxtunefs command allows the administrator to set or view the default indirect
data extent size.
See the vxtunefs(1M) manual page.
Note: Inappropriate sizing of the intent log can have a negative impact on system
performance.
However, recent changes made to a system can be lost if a system failure occurs.
Specifically, attribute changes to files and recently created files may disappear.
The mount -o log intent logging option guarantees that all structural changes to
the file system are logged to disk before the system call returns to the application.
With this option, the rename(2) system call flushes the source file to disk to
guarantee the persistence of the file data before renaming it. The rename() call is
also guaranteed to be persistent when the system call returns. The changes to file
system data and metadata caused by the fsync(2) and fdatasync(2) system calls
are guaranteed to be persistent once the calls return.
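For example, the following command (the device and mount point are placeholders) mounts a file system with full intent logging enabled:
# mount -F vxfs -o log /dev/vx/dsk/diskgroup/volume /mount_point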
Warning: Some applications and utilities may not work on large files.
Storage Checkpoints
To increase availability, recoverability, and performance, Veritas File System
offers on-disk and online backup and restore capabilities that facilitate frequent
and efficient backup strategies. Backup and restore applications can leverage a
Storage Checkpoint, a disk- and I/O-efficient copying technology for creating
periodic frozen images of a file system. Storage Checkpoints present a view of a
file system at a point in time, and subsequently identifies and maintains copies
of the original file system blocks. Instead of using a disk-based mirroring method,
Storage Checkpoints save disk space and significantly reduce I/O overhead by
using the free space pool available to a file system.
Storage Checkpoint functionality is separately licensed.
Online backup
VxFS provides online data backup using the snapshot feature. An image of a
mounted file system instantly becomes an exact read-only copy of the file system
at a specific point in time. The original file system is called the snapped file system,
the copy is called the snapshot.
When changes are made to the snapped file system, the old data is copied to the
snapshot. When the snapshot is read, data that has not changed is read from the
snapped file system, changed data is read from the snapshot.
Backups require one of the following methods:
■ Copying selected files from the snapshot file system (using find and cpio)
Quotas
VxFS supports quotas, which allocate per-user and per-group quotas and limit
the use of two principal resources: files and data blocks. You can assign quotas
for each of these resources. Each quota consists of two limits for each resource:
hard limit and soft limit.
The hard limit represents an absolute limit on data blocks or files. A user can
never exceed the hard limit under any circumstances.
The soft limit is lower than the hard limit and can be exceeded for a limited amount
of time. This allows users to exceed limits temporarily as long as they fall under
those limits before the allotted time expires.
See “About quota limits” on page 101.
■ Since I/O to these devices bypasses the system buffer cache, VxFS saves on
the cost of copying data between user space and kernel space when data is
read from or written to a regular file. This process significantly reduces CPU
time per I/O transaction compared to that of buffered I/O.
scan an entire file system searching for modifications since a previous scan. FCL
functionality is a separately licensed feature.
See “About the File Change Log file” on page 109.
Multi-volume support
The multi-volume support (MVS) feature allows several volumes to be represented
by a single logical object. All I/O to and from an underlying logical volume is
directed by way of volume sets. This feature can be used only in conjunction with
VxVM. MVS functionality is a separately licensed feature.
See “About multi-volume support” on page 118.
Note: VxFS reduces the file lookup time in directories with an extremely large
number of files.
VxVM integration
VxFS interfaces with VxVM to determine the I/O characteristics of the underlying
volume and perform I/O accordingly. VxFS also uses this information when using
mkfs to perform proper allocation unit alignments for efficient I/O operations
from the kernel.
As part of VxFS/VxVM integration, VxVM exports a set of I/O parameters to
achieve better I/O performance. This interface can enhance performance for
different volume configurations such as RAID-5, striped, and mirrored volumes.
Full stripe writes are important in a RAID-5 volume for strong I/O performance.
VxFS uses these parameters to issue appropriate I/O requests to VxVM.
Application-specific parameters
You can also set application specific parameters on a per-file system basis to
improve I/O performance.
About defragmentation
Free resources are initially aligned and allocated to files in an order that provides
optimal performance. On an active file system, the original order of free resources
is lost over time as files are created, removed, and resized. The file system is
spread farther along the disk, leaving unused gaps or fragments between areas
that are in use. This process is known as fragmentation and leads to degraded
performance because the file system has fewer options when assigning a free
extent to a file (a group of contiguous data blocks).
VxFS provides the online administration utility fsadm to resolve the problem of
fragmentation.
The fsadm utility defragments a mounted file system by performing the following
actions:
■ Removing unused space from directories
■ Making all small files contiguous
■ Consolidating free blocks for file system use
This utility can run on demand and should be scheduled regularly as a cron job.
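For example, the following commands (the mount point is a placeholder) reorganize extents and directories on a mounted file system, print fragmentation reports, and summarize the work done:
# fsadm -F vxfs -e -E -s /mount_point
# fsadm -F vxfs -d -D -s /mount_point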
Block size
The unit of allocation in VxFS is a block. Unlike some other UNIX file systems,
VxFS does not make use of block fragments for allocation because storage is
allocated in extents that consist of one or more blocks.
You specify the block size when creating a file system by using the mkfs –o bsize
option. The block size cannot be altered after the file system is created. The
smallest available block size for VxFS is 1K, which is also the default block size.
Choose a block size based on the type of application being run. For example, if
there are many small files, a 1K block size may save space. For large file systems,
with relatively few files, a larger block size is more appropriate. Larger block sizes
use less disk space in file system overhead, but consume more space for files that
are not a multiple of the block size. The easiest way to judge which block sizes
provide the greatest system efficiency is to try representative system loads against
various sizes and pick the fastest. For most applications, it is best to use the default
values.
For 64-bit kernels, which support 32 terabyte file systems, the block size
determines the maximum size of the file system you can create. File systems up
to 4 TB require a 1K block size. For four to eight terabyte file systems, the block
size is 2K. For file systems between 8 TB and 16 TB, the block size is 4K, and for
file systems greater than 16 TB, the block size is 8K. If you specify the file system size when creating
a file system, the block size defaults to these values.
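For example, the following command (the device is a placeholder; the size can be omitted to use the entire device) creates a VxFS file system with a 4K block size:
# mkfs -F vxfs -o bsize=4096 /dev/vx/rdsk/diskgroup/volume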
■ tmplog
■ logsize
■ nodatainlog
■ blkclear
■ mincache
■ convosync
■ ioerror
■ largefiles|nolargefiles
■ cio
Caching behavior can be altered with the mincache option, and the behavior of
O_SYNC and O_DSYNC writes can be altered with the convosync option.
See the fcntl(2) manual page.
The delaylog and tmplog modes can significantly improve performance. The
improvement over log mode is typically about 15 to 20 percent with delaylog; with
tmplog, the improvement is even higher. Performance improvement varies,
depending on the operations being performed and the workload. Read/write
intensive loads should show less improvement, while file system structure
intensive loads (such as mkdir, create, and rename) may show over 100 percent
improvement. The best way to select a mode is to test representative system loads
against the logging modes and compare the performance results.
Most of the modes can be used in combination. For example, a desktop machine
might use both the blkclear and mincache=closesync modes.
See the mount_vxfs(1M) manual page.
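For example, the following command (device and mount point are placeholders) mounts a file system with that combination of modes:
# mount -F vxfs -o blkclear,mincache=closesync /dev/vx/dsk/diskgroup/volume /mount_point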
Note: The term “effects of system calls” refers to changes to file system data and
metadata caused by the system call, excluding changes to st_atime. See the stat(2)
manual page.
Persistence guarantees
In all logging modes, VxFS is fully POSIX compliant. The effects of the fsync(2)
and fdatasync(2) system calls are guaranteed to be persistent after the calls return.
The persistence guarantees for data or metadata modified by write(2), writev(2),
or pwrite(2) are not affected by the logging mount options. The effects of these
system calls are guaranteed to be persistent only if the O_SYNC, O_DSYNC,
VX_DSYNC, or VX_DIRECT flag, as modified by the convosync= mount option, has
been specified for the file descriptor.
The behavior of NFS servers on a VxFS file system is unaffected by the log and
tmplog mount options, but not delaylog. In all cases except for tmplog, VxFS
complies with the persistency requirements of the NFS v2 and NFS v3 standard.
Unless a UNIX application has been developed specifically for the VxFS file system
in log mode, it expects the persistence guarantees offered by most other file
systems and experiences improved robustness when used with a VxFS file system
mounted in delaylog mode. Applications that expect better persistence guarantees
than those offered by most other file systems can benefit from the log and
mincache=closesync mount options. However, most commercially available applications
work well with the default VxFS mount options, including the delaylog mode.
If performance is more important than data integrity, you can use the
mincache=tmpcache mode. The mincache=tmpcache mode disables special delayed
extending write handling, trading off less integrity for better performance. Unlike
the other mincache modes, tmpcache does not flush the file to disk when the file is
closed. When the mincache=tmpcache option is used, bad data can appear in a
file that was being extended when a crash occurred.
■ convosync=delay
■ convosync=direct
■ convosync=dsync
■ convosync=unbuffered
The convosync=delay mode causes synchronous and data synchronous writes to
be delayed rather than to take effect immediately. No special action is performed
when closing a file. This option effectively cancels any data integrity guarantees
normally provided by opening a file with O_SYNC.
See the open(2), fcntl(2), and vxfsio(7) manual pages.
As with closesync, the direct, unbuffered, and dsync modes flush changes to the
file to disk when it is closed. These modes can be used to speed up applications
that use synchronous I/O. Many applications that are concerned with data integrity
specify the O_SYNC fcntl in order to write the file data synchronously. However,
this has the undesirable side effect of updating inode times and therefore slowing
down performance. The convosync=dsync, convosync=unbuffered, and
convosync=direct modes alleviate this problem by allowing applications to take
advantage of synchronous writes without modifying inode times as well.
Before using convosync=dsync, convosync=unbuffered, or convosync=direct,
make sure that all applications that use the file system do not require synchronous
inode time updates for O_SYNC writes.
Note: Applications and utilities such as backup may experience problems if they
are not aware of large files. In such a case, create your file system without large
file capability.
Specifying largefiles sets the largefiles flag, which lets the file system hold files
that are two terabytes or larger. This is the default option.
To clear the flag and prevent large files from being created, type the following
command:
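# fsadm -F vxfs -o nolargefiles /mount_point
A file system can also be mounted with the mincache=closesync mode, for example (device and mount point are placeholders):
# mount -F vxfs -o mincache=closesync /dev/vx/dsk/diskgroup/volume /mount_point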
This guarantees that when a file is closed, its data is synchronized to disk and
cannot be lost. Thus, after an application has exited and its files are closed, no
data is lost even if the system is immediately turned off.
To mount a temporary file system or to restore from backup, a command such as the following can be used (the device and mount point shown are placeholders):
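# mount -F vxfs -o tmplog,mincache=tmpcache,convosync=delay \
/dev/vx/dsk/diskgroup/volume /mount_point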
This combination might be used for a temporary file system where performance
is more important than absolute data integrity. Any O_SYNC writes are performed
as delayed writes and delayed extending writes are not handled. This could result
in a file that contains corrupted data if the system crashes. Any file written 30
seconds or so before a crash may contain corrupted data or be missing if this
mount combination is in effect. However, such a file system performs significantly
fewer disk writes than a log file system, and should have significantly better
performance, depending on the application.
To mount a file system for synchronous writes, a command such as the following can be used (the device and mount point shown are placeholders):
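# mount -F vxfs -o log,convosync=dsync \
/dev/vx/dsk/diskgroup/volume /mount_point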
It may be necessary to tune the dnlc (directory name lookup cache) size to keep
the value within an acceptable range relative to vxfs_ninode. It must be within
80% of vxfs_ninode to avoid spurious ENFILE errors or excessive CPU
consumption, but must be more than 50% of vxfs_ninode to maintain good
performance. The variable ncsize determines the size of dnlc. The default value
vx_maxlink
The VxFS vx_maxlink tunable determines the number of sub-directories that can
be created under a directory.
A VxFS file system obtains the value of vx_maxlink from the system configuration
file /etc/system. By default, vx_maxlink is 32K. To change the computed value of
vx_maxlink, you can add an entry to the system configuration file. For example:
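set vxfs:vx_maxlink=65534
The value shown is illustrative; as with other /etc/system changes, a reboot is required for the new value to take effect.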
vol_maxio
The vol_maxio parameter controls the maximum size of logical I/O operations
that can be performed without breaking up a request. Logical I/O requests larger
than this value are broken up and performed synchronously. Physical I/Os are
broken up based on the capabilities of the disk device and are unaffected by
changes to the vol_maxio logical request limit.
Raising the vol_maxio limit can cause problems if the size of an I/O requires more
memory or kernel mapping space than exists. The recommended maximum for
vol_maxio is 20% of the smaller of physical memory or kernel virtual memory. It
is not advisable to go over this limit. Within this limit, you can generally obtain
the best results by setting vol_maxio to the size of your largest stripe. This applies
to both RAID-0 striping and RAID-5 striping.
To increase the value of vol_maxio, add an entry to /etc/system (after the entry
forceload:drv/vxio) and reboot for the change to take effect. For example, the
following line sets the maximum I/O size to 16 MB:
set vxio:vol_maxio=32768
Monitoring fragmentation
Fragmentation reduces performance and availability. Regular use of fsadm's
fragmentation reporting and reorganization facilities is therefore advisable.
The easiest way to ensure that fragmentation does not become a problem is to
schedule regular defragmentation runs using the cron command.
Defragmentation scheduling should range from weekly (for frequently used file
systems) to monthly (for infrequently used file systems). Extent fragmentation
should be monitored with the fsadm command.
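For example, the following command (the mount point is a placeholder) reports the current extent fragmentation of a mounted file system:
# fsadm -F vxfs -E /mount_point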
To determine the degree of fragmentation, use the following factors:
■ Percentage of free space in extents of less than 8 blocks in length
■ Percentage of free space in extents of less than 64 blocks in length
■ Percentage of free space in extents of length 64 blocks or greater
An unfragmented file system has the following characteristics:
■ Less than 1 percent of free space in extents of less than 8 blocks in length
■ Less than 5 percent of free space in extents of less than 64 blocks in length
■ More than 5 percent of the total file system size available as free extents in
lengths of 64 or more blocks
A badly fragmented file system has one or more of the following characteristics:
■ Greater than 5 percent of free space in extents of less than 8 blocks in length
■ More than 50 percent of free space in extents of less than 64 blocks in length
■ Less than 5 percent of the total file system size available as free extents in
lengths of 64 or more blocks
The optimal period for scheduling of extent reorganization runs can be determined
by choosing a reasonable interval, scheduling fsadm runs at the initial interval,
and running the extent fragmentation report feature of fsadm before and after
the reorganization.
The “before” result is the degree of fragmentation prior to the reorganization. If
the degree of fragmentation is approaching the figures for bad fragmentation,
reduce the interval between fsadm runs. If the degree of fragmentation is low,
increase the interval between fsadm runs.
The “after” result is an indication of how well the reorganizer has performed. The
degree of fragmentation should be close to the characteristics of an unfragmented
file system. If not, it may be a good idea to resize the file system; full file systems
tend to fragment and are difficult to defragment. It is also possible that the
reorganization is not being performed at a time during which the file system in
question is relatively idle.
Directory reorganization is not nearly as critical as extent reorganization, but
regular directory reorganization improves performance. It is advisable to schedule
directory reorganization for file systems when the extent reorganization is
scheduled. The following sample script, which could be run periodically at 3:00 A.M.
from cron, reorganizes a number of file systems (the mount points shown are examples):
outfile=/usr/spool/fsadm/out.`/bin/date +'%m%d'`
for i in /home /home2 /project /db
do
    /bin/echo "Reorganizing $i"
    /bin/timex fsadm -F vxfs -e -E -s $i
    /bin/timex fsadm -F vxfs -s -d -D $i
done > $outfile 2>&1
Tuning I/O
The performance of a file system can be enhanced by a suitable choice of I/O sizes
and proper alignment of the I/O requests based on the requirements of the
underlying special device. VxFS provides tools to tune the file systems.
Note: The following tunables and the techniques work on a per file system basis.
Use them judiciously based on the underlying device properties and characteristics
of the applications that use the file system.
VxVM queries
VxVM receives the following queries during configuration:
■ The file system queries VxVM to determine the geometry of the underlying
volume and automatically sets the I/O parameters.
Note: When using file systems in multiple volume sets, VxFS sets the VxFS
tunables based on the geometry of the first component volume (volume 0) in
the volume set.
■ The mkfs command queries VxVM when the file system is created to
automatically align the file system to the volume geometry. If the default
alignment from mkfs is not acceptable, the -o align=n option can be used to
override alignment information obtained from VxVM.
■ The mount command queries VxVM when the file system is mounted and
downloads the I/O parameters.
If the default parameters are not acceptable or the file system is being used without
VxVM, then the /etc/vx/tunefstab file can be used to set values for I/O parameters.
The mount command reads the /etc/vx/tunefstab file and downloads any
parameters specified for a file system. The tunefstab file overrides any values
obtained from VxVM. While the file system is mounted, any I/O parameters can
be changed using the vxtunefs command which can have tunables specified on
the command line or can read them from the /etc/vx/tunefstab file. For more
details,
See the vxtunefs(1M) and tunefstab(4) manual pages.
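For example, the following command (illustrative values; the mount point is a placeholder) changes the preferred read request size and read stream count on a mounted file system, and the /etc/vx/tunefstab entry below it makes equivalent settings persistent across mounts:
# vxtunefs -o read_pref_io=131072,read_nstream=4 /mount_point
/dev/vx/dsk/userdg/vol1 read_pref_io=131072,read_nstream=4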
The vxtunefs command can be used to print the current values of the I/O
parameters.
# vxtunefs -p mount_point
/dev/vx/dsk/userdg/netbackup
read_pref_io=128k,write_pref_io=128k,read_nstream=4,write_nstream=4
/dev/vx/dsk/userdg/metasave
read_pref_io=128k,write_pref_io=128k,read_nstream=4,write_nstream=4
/dev/vx/dsk/userdg/solbuild
read_pref_io=64k,write_pref_io=64k,read_nstream=4,write_nstream=4
/dev/vx/dsk/userdg/solrelease
read_pref_io=64k,write_pref_io=64k,read_nstream=4,write_nstream=4
/dev/vx/dsk/userdg/solpatch
read_pref_io=128k,write_pref_io=128k,read_nstream=4,write_nstream=4
Parameter Description
read_pref_io The preferred read request size. The file system uses this
in conjunction with the read_nstream value to determine
how much data to read ahead. The default value is 64K.
write_pref_io The preferred write request size. The file system uses this
in conjunction with the write_nstream value to determine
how to do flush behind on writes. The default value is 64K.
default_indir_size On VxFS, files can have up to ten direct extents of variable
size stored in the inode. After these extents are used up,
the file must use indirect extents which are a fixed size
that is set when the file first uses indirect extents. These
indirect extents are 8K by default. The file system does
not use larger indirect extents because it must fail a write
and return ENOSPC if there are no extents available that
are the indirect extent size. For file systems containing
many large files, the 8K indirect extent size is too small.
The files that get into indirect extents use many smaller
extents instead of a few larger ones. By using this
parameter, the default indirect extent size can be
increased so that files using indirect extents allocate fewer, larger
extents. The tunable default_indir_size should be used
carefully. If it is set too large, then writes fail when they
are unable to allocate extents of the indirect extent size
to a file. In general, the fewer and the larger the files on
a file system, the larger the default_indir_size can be set.
This parameter should generally be set to some multiple
of the read_pref_io parameter. default_indir_size is not
applicable on Version 4 disk layouts.
fcl_winterval Specifies the time, in seconds, that must elapse before the
VxFS File Change Log (FCL) records a data overwrite, data
extending write, or data truncate for a file. The ability to
limit the number of repetitive FCL records for continuous
writes to the same file is important for file system
performance and for applications processing the FCL.
fcl_winterval is best set to an interval less than the
shortest interval between reads of the FCL by any
application. This way all applications using the FCL can
be assured of finding at least one FCL record for any file
experiencing continuous data changes.
max_direct_iosz The maximum size of a direct I/O request that is issued
by the file system. If a larger I/O request comes in, then
it is broken up into max_direct_iosz chunks. This
parameter defines how much memory an I/O request can
lock at once, so it should not be set to more than 20
percent of memory.
Note: VxFS does not query VxVM with multiple volume sets. To improve I/O
performance when using multiple volume sets, use the vxtunefs command.
If the file system is being used with a hardware disk array or volume manager
other than VxVM, try to align the parameters to match the geometry of the logical
disk. With striping or RAID-5, it is common to set read_pref_io to the stripe unit
size and read_nstream to the number of columns in the stripe. For striped arrays,
use the same values for write_pref_io and write_nstream, but for RAID-5 arrays,
set write_pref_io to the full stripe size and write_nstream to 1.
For an application to do efficient disk I/O, it should use the following formula to
issue read requests:
■ read requests = read_nstream multiplied by read_pref_io
Generally, any multiple or factor of read_nstream multiplied by read_pref_io
should be a good size for performance. For writing, the same rule of thumb applies
to the write_pref_io and write_nstream parameters. When tuning a file system,
the best thing to do is try out the tuning parameters under a real life workload.
If an application is doing sequential I/O to large files, it should try to issue requests
larger than the discovered_direct_iosz. This causes the I/O requests to be
performed as discovered direct I/O requests, which are unbuffered like direct I/O
but do not require synchronous inode updates when extending the file. If the file
is larger than can fit in the cache, using unbuffered I/O avoids removing useful
data out of the cache and lessens CPU overhead.
Chapter 3
Extent attributes
This chapter includes the following topics:
because the unused space fragments free space by breaking large extents into
smaller pieces. By erring on the side of minimizing fragmentation for the file
system, files may become so non-contiguous that their I/O characteristics would
degrade.
Fixed extent sizes are particularly appropriate in the following situations:
■ If a file is large and contiguous, a large fixed extent size can minimize the
number of extents in the file.
Custom applications may also use fixed extent sizes for specific reasons, such as
the need to align extents to cylinder or striping boundaries on disk.
Other controls
The auxiliary controls on extent attributes determine the following conditions:
■ Whether allocations are aligned
■ Whether allocations are contiguous
■ Whether the file can be written beyond its reservation
■ Whether an unused reservation is released when the file is closed
■ Whether the reservation is a persistent attribute of the file
■ When the space reserved for a file will actually become part of the file
Alignment
Specific alignment restrictions coordinate a file's allocations with a particular
I/O pattern or disk alignment. Alignment can only be specified if a fixed extent
size has also been set. Setting alignment restrictions on allocations is best left to
well-designed applications.
See the mkfs_vxfs(1M) manual page.
See “About VxFS I/O” on page 61.
Contiguity
A reservation request can specify that its allocation remain contiguous (all one
extent). Maximum contiguity of a file optimizes its I/O characteristics.
Note: Fixed extent sizes or alignment cause a file system to return an error message
reporting insufficient space if no suitably sized (or aligned) extent is available.
This can happen even if the file system has sufficient free space and the fixed
extent size is large.
Reservation trimming
A reservation request can specify that any unused reservation be released when
the file is closed. The file is not completely closed until all processes open against
the file have closed it.
Reservation persistence
A reservation request can ensure that the reservation does not become a persistent
attribute of the file. The unused reservation is discarded when the file is closed.
source file system, or lacks free extents appropriate to satisfy the extent attribute
requirements.
The -e option takes any of the following keywords as an argument:
■ Cache advisories
Direct I/O
Direct I/O is an unbuffered form of I/O. If the VX_DIRECT advisory is set, the user
is requesting direct data transfer between the disk and the user-supplied buffer
for reads and writes. This bypasses the kernel buffering of data, and reduces the
CPU overhead associated with I/O by eliminating the data copy between the kernel
buffer and the user's buffer. This also avoids taking up space in the buffer cache
that might be better used for something else. The direct I/O feature can provide
significant performance gains for some applications.
The direct I/O and VX_DIRECT advisories are maintained on a per-file-descriptor
basis.
Unbuffered I/O
If the VX_UNBUFFERED advisory is set, I/O behavior is the same as direct I/O
with the VX_DIRECT advisory set, so the alignment constraints that apply to
direct I/O also apply to unbuffered I/O. For unbuffered I/O, however, if the file is
being extended, or storage is being allocated to the file, inode changes are not
updated synchronously before the write returns to the user. The VX_UNBUFFERED
advisory is maintained on a per-file-descriptor basis.
to the user. If the file is extended by the operation, the inode is written before the
write returns.
The direct I/O and VX_DSYNC advisories are maintained on a per-file-descriptor
basis.
Cache advisories
VxFS allows an application to set cache advisories for use when accessing files.
VxFS cache advisories enable applications to help monitor the buffer cache and
provide information on how better to tune the buffer cache to improve performance
gain.
The basic function of the cache advisory is to let you know whether you could
have avoided a later re-read of block X if the buffer cache had been a little larger.
Conversely, the cache advisory can also let you know that you could safely reduce
the buffer cache size without putting block X into jeopardy.
These advisories are in memory only and do not persist across reboots. Some
advisories are currently maintained on a per-file, not a per-file-descriptor, basis.
Only one set of advisories can be in effect for all accesses to the file. If two
conflicting applications set different advisories, both must use the advisories that
were last set.
All advisories are set using the VX_SETCACHE ioctl command. The current set of
advisories can be obtained with the VX_GETCACHE ioctl command.
See the vxfsio(7) manual page.
decisions about the I/O sizes issued to VxFS for a file or file device. For more
details on this ioctl, refer to the vxfsio(7) manual page.
For a discussion on various I/O parameters, refer to “VxFS performance: creating,
mounting, and tuning File Systems” on page 31 and the vxtunefs(1M) manual
page.
A Storage Checkpoint of the primary fileset initially contains a pointer to the file
system block map rather than to any actual data. The block map points to the data
on the primary fileset.
Figure 5-1 shows the file system /database and its Storage Checkpoint.
The Storage Checkpoint is logically identical to the primary fileset when the
Storage Checkpoint is created, but it does not contain any actual data blocks.
In Figure 5-2, a square represents each block of the file system. This figure shows
a Storage Checkpoint containing pointers to the primary fileset at the time the
Storage Checkpoint is taken, as in Figure 5-1.
The Storage Checkpoint presents the exact image of the file system by finding
the data from the primary fileset. As the primary fileset is updated, the original
data is copied to the Storage Checkpoint before the new data is written. When a
write operation changes a specific data block in the primary fileset, the old data
is first read and copied to the Storage Checkpoint before the primary fileset is
updated. Subsequent writes to the specified data block on the primary fileset do
not result in additional updates to the Storage Checkpoint because the old data
needs to be saved only once. As blocks in the primary fileset continue to change,
the Storage Checkpoint accumulates the original data blocks.
Copy-on-write
In Figure 5-3, the third block originally containing C is updated.
Before the block is updated with new data, the original data is copied to the Storage
Checkpoint. This is called the copy-on-write technique, which allows the Storage
Checkpoint to preserve the image of the primary fileset when the Storage
Checkpoint is taken.
Every update or write operation does not necessarily result in the process of
copying data to the Storage Checkpoint. In this example, subsequent updates to
this block, now containing C', are not copied to the Storage Checkpoint because
the original image of the block containing C is already saved.
limit the life of data Storage Checkpoints to minimize the impact on system
resources.
See “Showing the difference between a data and a nodata Storage Checkpoint”
on page 79.
Warning: If you create a Storage Checkpoint for backup purposes, do not mount
it as a writable Storage Checkpoint. You will lose the point-in-time image if
you accidentally write to the Storage Checkpoint.
/dev/vx/dsk/fsvol/vol1:may_23
Note: The vol1 file system must already be mounted before the Storage
Checkpoint can be mounted.
■ To mount this Storage Checkpoint automatically when the system starts up,
put the following entries in the /etc/vfstab file:
#device                        device                   mount          FS    fsck  mount    mount
#to mount                      to fsck                  point          type  pass  at boot  options
/dev/vx/dsk/fsvol/vol1         /dev/vx/rdsk/fsvol/vol1  /fsvol         vxfs  1     yes      -
/dev/vx/dsk/fsvol/vol1:may_23  -                        /fsvol_may_23  vxfs  0     yes      ckpt=may_23
To unmount the Storage Checkpoint, unmount either its mount point or its pseudo device:
# umount /fsvol_may_23
# umount /dev/vx/dsk/fsvol/vol1:may_23
Note: You do not need to run the fsck utility on Storage Checkpoint pseudo devices
because pseudo devices are part of the actual file system.
Note: A nodata Storage Checkpoint does not contain actual file data.
version 7 layout
134217728 sectors, 67108864 blocks of size 1024, log \
4 Examine the content of the original file and the Storage Checkpoint file:
# cat /mnt0/file
hello, world
# cat /mnt0@5_30pm/file
hello, world
6 Examine the content of the original file and the Storage Checkpoint file. The
original file contains the latest data while the Storage Checkpoint file still
contains the data at the time of the Storage Checkpoint creation:
# cat /mnt0/file
goodbye
# cat /mnt0@5_30pm/file
hello, world
# umount /mnt0@5_30pm
# fsckptadm -s set nodata ckpt@5_30pm /mnt0
# mount -F vxfs -o ckpt=ckpt@5_30pm \
/dev/vx/dsk/dg1/test0:ckpt@5_30pm /mnt0@5_30pm
8 Examine the content of both files. The original file must contain the latest
data:
# cat /mnt0/file
goodbye
You can traverse and read the directories of the nodata Storage Checkpoint;
however, the files contain no data, only markers to indicate which block of
the file has been changed since the Storage Checkpoint was created:
# ls -l /mnt0@5_30pm/file
-rw-r--r-- 1 root other 13 Jul 13 17:13 \
mnt0@5_30pm/file
# cat /mnt0@5_30pm/file
cat: /mnt0@5_30pm/file: I/O error
2 Create four data Storage Checkpoints on this file system, note the order of
creation, and list them:
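For example (the Storage Checkpoint names match those referenced in the following steps, and /mnt0 is an illustrative mount point):
# fsckptadm create oldest /mnt0
# fsckptadm create older /mnt0
# fsckptadm create old /mnt0
# fsckptadm create latest /mnt0
# fsckptadm list /mnt0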
4 You can instead convert the “latest” Storage Checkpoint to a nodata Storage
Checkpoint in a delayed or asynchronous manner.
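For example, omitting the -s (synchronous) option requests a delayed conversion:
# fsckptadm set nodata latest /mnt0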
5 List the Storage Checkpoints, as in the following example. You will see that
the “latest” Storage Checkpoint is marked for conversion in the future.
Note: After you remove the “older” and “old” Storage Checkpoints, the “latest”
Storage Checkpoint is automatically converted to a nodata Storage Checkpoint
because the only remaining older Storage Checkpoint (“oldest”) is already a
nodata Storage Checkpoint:
Files can be restored by copying the entire file from a mounted Storage Checkpoint
back to the primary fileset. To restore an entire file system, you can designate a
mountable data Storage Checkpoint as the primary fileset using the
fsckpt_restore command.
$ cd /home/users/me
$ rm MyFile.txt
$ cd /home/checkpoints/mar_4/users/me
$ ls -l
-rw-r--r-- 1 me staff 14910 Mar 4 17:09 MyFile.txt
$ cp MyFile.txt /home/users/me
$ cd /home/users/me
$ ls -l
-rw-r--r-- 1 me staff 14910 Mar 4 18:21 MyFile.txt
# fsckpt_restore -l /dev/vx/dsk/dg1/vol2
/dev/vx/dsk/dg1/vol2:
UNNAMED:
ctime = Thu 08 May 2004 06:28:26 PM PST
mtime = Thu 08 May 2004 06:28:26 PM PST
flags = largefiles, file system root
CKPT6:
ctime = Thu 08 May 2004 06:28:35 PM PST
mtime = Thu 08 May 2004 06:28:35 PM PST
flags = largefiles
CKPT5:
ctime = Thu 08 May 2004 06:28:34 PM PST
mtime = Thu 08 May 2004 06:28:34 PM PST
flags = largefiles, nomount
CKPT4:
ctime = Thu 08 May 2004 06:28:33 PM PST
mtime = Thu 08 May 2004 06:28:33 PM PST
flags = largefiles
CKPT3:
ctime = Thu 08 May 2004 06:28:36 PM PST
mtime = Thu 08 May 2004 06:28:36 PM PST
flags = largefiles
CKPT2:
ctime = Thu 08 May 2004 06:28:30 PM PST
mtime = Thu 08 May 2004 06:28:30 PM PST
flags = largefiles
CKPT1:
ctime = Thu 08 May 2004 06:28:29 PM PST
mtime = Thu 08 May 2004 06:28:29 PM PST
flags = nodata, largefiles
2 In this example, select the Storage Checkpoint “CKPT3” as the new root fileset:
If the filesets are listed at this point, it shows that the former UNNAMED root
fileset and CKPT6, CKPT5, and CKPT4 were removed, and that CKPT3 is now
the primary fileset. CKPT3 is now the fileset that will be mounted by default.
# fsckpt_restore -l /dev/vx/dsk/dg1/vol2
/dev/vx/dsk/dg1/vol2:
CKPT3:
ctime = Thu 08 May 2004 06:28:31 PM PST
mtime = Thu 08 May 2004 06:28:36 PM PST
flags = largefiles, file system root
CKPT2:
ctime = Thu 08 May 2004 06:28:30 PM PST
mtime = Thu 08 May 2004 06:28:30 PM PST
flags = largefiles
CKPT1:
ctime = Thu 08 May 2004 06:28:29 PM PST
mtime = Thu 08 May 2004 06:28:29 PM PST
flags = nodata, largefiles
Select Storage Checkpoint for restore operation
or <Control/D> (EOF) to exit
or <Return> to list Storage Checkpoints:
hard limit An absolute limit that cannot be exceeded. If a hard limit is exceeded,
all further allocations on any of the Storage Checkpoints fail, but
existing Storage Checkpoints are preserved.
soft limit Must be lower than the hard limit. If a soft limit is exceeded, no new
Storage Checkpoints can be created. The number of blocks used must
return below the soft limit before more Storage Checkpoints can be
created. An alert and console message are generated.
■ Backup examples
You use the mount command to create a snapshot file system; the mkfs command
is not required. A snapshot file system is always read-only. A snapshot file system
exists only as long as it and the snapped file system are mounted and ceases to
exist when unmounted. A snapped file system cannot be unmounted until all of
its snapshots are unmounted.
Note: A snapshot file system ceases to exist when unmounted. If mounted again,
it is actually a fresh snapshot of the snapped file system. A snapshot file system
must be unmounted before its dependent snapped file system can be unmounted.
Neither the fuser command nor the mount command will indicate that a snapped
file system cannot be unmounted because a snapshot of it exists.
On cluster file systems, snapshots can be created on any node in the cluster, and
backup operations can be performed from that node. The snapshot of a cluster
file system is accessible only on the node where it is created, that is, the snapshot
file system itself cannot be cluster mounted.
See the Veritas Storage Foundation Cluster File System Administrator's Guide.
Warning: Any existing data on the device used for the snapshot is overwritten.
Backup examples
In the following examples, the vxdump utility ascertains whether
/dev/vx/dsk/fsvol/vol1 is a snapshot mounted as /backup/home and does the
appropriate work to get the snapshot data through the mount point.
These are typical examples of making a backup of a 300,000 block file system
named /home using a snapshot file system on /dev/vx/dsk/fsvol/vol1 with a
snapshot mount point of /backup/home.
To create a backup using a snapshot file system
1 To back up files changed within the last week using cpio, create and mount
the snapshot, copy the changed files to tape, and then unmount the snapshot
(the snapshot size and tape device shown are illustrative):
# mount -F vxfs -o snapof=/home,snapsize=100000 \
/dev/vx/dsk/fsvol/vol1 /backup/home
# cd /backup
# find home -ctime -7 -depth -print | cpio -oc > /dev/rmt/c0s0
# umount /backup/home
Reads from the snapshot file system are impacted if the snapped file system is
busy because the snapshot reads are slowed by the disk I/O associated with the
snapped file system.
The overall impact of the snapshot is dependent on the read to write ratio of an
application and the mixing of the I/O operations. For example, a database
application running an online transaction processing (OLTP) workload on a
snapped file system was measured at about 15 to 20 percent slower than a file
system that was not snapped.
Snapshots                                      Storage Checkpoints
Require a separate device for storage          Reside on the same device as the original file system
Cease to exist after being unmounted           Can exist and be mounted on their own
Track changed blocks on the file system level  Track changed blocks on each file in the file system
Storage Checkpoints also serve as the enabling technology for two other Veritas
features: Block-Level Incremental Backups and Storage Rollback, which are used
extensively for backing up databases.
See “About Storage Checkpoints” on page 67.
■ A blockmap
■ Data blocks copied from the snapped file system
The following figure shows the disk structure of a snapshot file system, which
consists of the following, in this order:
■ super-block
■ bitmap
■ blockmap
■ data blocks
The super-block is similar to the super-block of a standard VxFS file system, but
the magic number is different and many of the fields are not applicable.
The bitmap contains one bit for every block on the snapped file system. Initially,
all bitmap entries are zero. A set bit indicates that the appropriate block was
copied from the snapped file system to the snapshot. In this case, the appropriate
position in the blockmap references the copied block.
The blockmap contains one entry for each block on the snapped file system.
Initially, all entries are zero. When a block is copied from the snapped file system
to the snapshot, the appropriate entry in the blockmap is changed to contain the
block number on the snapshot file system that holds the data from the snapped
file system.
The data blocks are filled by data copied from the snapped file system, starting
from the beginning of the data block area.
Initially, the snapshot file system satisfies read requests by finding the data on
the snapped file system and returning it to the requesting process. When an inode
update or a write changes the data in block n of the snapped file system, the old
data is first read and copied to the snapshot before the snapped file system is
updated. The bitmap entry for block n is changed from 0 to 1, indicating that the
data for block n can be found on the snapshot file system. The blockmap entry
for block n is changed from 0 to the block number on the snapshot file system
containing the old data.
A subsequent read request for block n on the snapshot file system will be satisfied
by checking the bitmap entry for block n and reading the data from the indicated
block on the snapshot file system, instead of from block n on the snapped file
system. This technique is called copy-on-write. Subsequent writes to block n on
the snapped file system do not result in additional copies to the snapshot file
system, since the old data only needs to be saved once.
All updates to the snapped file system for inodes, directories, data in files, extent
maps, and so forth, are handled in this fashion so that the snapshot can present
a consistent view of all file system structures on the snapped file system for the
time when the snapshot was created. As data blocks are changed on the snapped
file system, the snapshot gradually fills with data copied from the snapped file
system.
The amount of disk space required for the snapshot depends on the rate of change
of the snapped file system and the amount of time the snapshot is maintained. In
the worst case, the snapped file system is completely full and every file is removed
and rewritten. The snapshot file system would need enough blocks to hold a copy
of every block on the snapped file system, plus additional blocks for the data
structures that make up the snapshot file system. This is approximately 101
percent of the size of the snapped file system. Normally, most file systems do not
undergo changes at this extreme rate. During periods of low activity, the snapshot
should only require two to six percent of the blocks of the snapped file system.
During periods of high activity, the snapshot might require 15 percent of the
blocks of the snapped file system. These percentages tend to be lower for larger
file systems and higher for smaller ones.
Warning: If a snapshot file system runs out of space for changed data blocks, it is disabled and all further attempts to access it fail. This does not affect the snapped file system.
Chapter 7
Quotas
This chapter includes the following topics:
■ Using quotas
hard limit An absolute limit that cannot be exceeded under any circumstances.
soft limit Must be lower than the hard limit, and can be exceeded, but only for
a limited time. The time limit can be configured on a per-file system
basis only. The VxFS default limit is seven days.
Soft limits are typically used when a user must run an application that could generate large temporary files. In this case, you can allow the user to exceed the quota limit for a limited time. No allocations are allowed after the expiration of the time limit.
About quota files on Veritas File System
A user quotas file named quotas must exist in the root directory of a file system before quotas can be turned on. VxFS also requires a separate group quotas file. The VxFS group quota file is named quotas.grp. The VxFS user quotas file is named quotas. This name was used to distinguish it from the quotas.user file used by other file systems under Solaris.
Note: Most of the quota commands in VxFS are similar to BSD quota commands.
However, the quotacheck command is an exception; VxFS does not support an
equivalent command.
vxedquota Edits quota limits for users and groups. The limit changes made by
vxedquota are reflected both in the internal quotas file and the external
quotas file.
Besides these commands, the VxFS mount command supports a special mount option (-o quota), which can be used to turn on quotas at mount time.
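For example, a VxFS file system might be mounted with quotas turned on as follows (the device and mount point names are placeholders):
# mount -F vxfs -o quota /dev/vx/dsk/dg1/vol1 /mnt1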
For additional information on the quota commands, see the corresponding manual
pages.
About quota checking with Veritas File System
Note: When VxFS file systems are exported via NFS, the VxFS quota commands
on the NFS client cannot query or edit quotas. You can use the VxFS quota
commands on the server to query or edit quotas.
Using quotas
The VxFS quota commands are used to manipulate quotas.
Turning on quotas
To use the quota functionality on a file system, quotas must be turned on. You
can turn quotas on at mount time or after a file system is mounted.
Note: Before turning on quotas, the root directory of the file system must contain a file for user quotas named quotas, and a file for group quotas named quotas.grp, owned by root.
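The quota files can be created as empty files owned by root; for example (assuming a file system mounted at /mnt1):
# touch /mnt1/quotas /mnt1/quotas.grp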
To turn on quotas
1 To turn on user and group quotas for a VxFS file system, enter:
# vxquotaon /mount_point
2 To turn on only user quotas for a VxFS file system, enter:
# vxquotaon -u /mount_point
3 To turn on only group quotas for a VxFS file system, enter:
# vxquotaon -g /mount_point
Editing user and group quotas
You can set up user and group quotas using the vxedquota command. You must
have superuser privileges to edit quotas.
vxedquota creates a temporary file for the given user; this file contains on-disk
quotas for each mounted file system that has a quotas file. It is not necessary that
quotas be turned on for vxedquota to work. However, the quota limits are
applicable only after quotas are turned on for a given file system.
To edit quotas
1 Specify the –u option to edit the quotas of one or more users specified by
username:
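For example (this -u form mirrors the -g form shown in step 2):
# vxedquota -u username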
Editing the quotas of one or more users is the default behavior if the –u option
is not specified.
2 Specify the –g option to edit the quotas of one or more groups specified by
groupname:
# vxedquota -g groupname
Modifying time limits
To modify time limits
1 Specify the -t option to modify time limits for any user:
# vxedquota [-u] -t
2 Specify the -g and -t options to modify time limits for any group:
# vxedquota -g -t
Viewing disk quotas and usage
To display disk quotas and usage
1 To display a user's quotas and disk usage on all mounted VxFS file systems where the quotas file exists, enter:
# vxquota -v username
2 To display a group's quotas and disk usage on all mounted VxFS file systems where the quotas.grp file exists, enter:
# vxquota -v -g groupname
Displaying blocks owned by users or groups
To display the number of blocks owned by users or groups
1 To display the number of files and the space owned by each user, enter:
# vxquot -f filesystem
2 To display the number of files and the space owned by each group, enter:
# vxquot -g -f filesystem
Turning off quotas
To turn off quotas
1 To turn off user and group quotas for a VxFS file system, enter:
# vxquotaoff /mount_point
2 To turn off only user quotas for a VxFS file system, enter:
# vxquotaoff -u /mount_point
3 To turn off only group quotas for a VxFS file system, enter:
# vxquotaoff -g /mount_point
Chapter 8
File Change Log
This chapter includes the following topics:
■ About the File Change Log file
■ File Change Log administrative interface
■ File Change Log programmatic interface
■ Reverse path name lookup
FCL stores changes in a sparse file in the file system namespace. The FCL log file
is located in mount_point/lost+found/changelog. The FCL file behaves like a
regular file, but some operations are prohibited. The standard system calls open(2),
lseek(2), read(2) and close(2) can access the data in the FCL, while the write(2),
mmap(2) and rename(2) calls are not allowed.
Warning: In future VxFS releases, the FCL file might be pulled out of the namespace
such that these standard system calls will no longer work. Therefore, it is
recommended that all new applications be developed using the programmatic
interface.
File Change Log administrative interface
The FCL log file contains both the information about the FCL, which is stored in the FCL superblock, and the changes to files and directories in the file system, which are stored as FCL records.
See “File Change Log programmatic interface” on page 112.
In 4.1, the structure of the File Change Log file was exposed through the
/opt/VRTS/include/sys/fs/fcl.h header file. In this release, the internal
structure of the FCL file is opaque. The recommended mechanism to access the
FCL is through the API described by the /opt/VRTSfssdk/5.0/include/vxfsutil.h
header file.
The /opt/VRTS/include/sys/fs/fcl.h header file is included in this release to
ensure that applications accessing the FCL with the 4.1 header file do not break.
New applications should use the new FCL API described in
/opt/VRTSfssdk/5.0/include/vxfsutil.h. Existing applications should also be
modified to use the new FCL API.
With the addition of new record types, the FCL version in this release has been
updated to 4. To provide backward compatibility for the existing applications,
this release supports multiple FCL versions. Users now have the flexibility of specifying the FCL version for new FCLs. The default FCL version is 4.
See the fcladm(1M) man page.
set Enables the recording of the audit, open, close, and stats events
in the File Change Log file. Setting the audit option enables all
events to be recorded in the FCL file when the command is issued.
Setting the audit option also lists the struct fcl_accessinfo
identifier, which shows the user ID. The open option enables all
files opened when the command is issued along with the command
names to be recorded in the FCL file. The close option allows the
recording of all files that are closed in the FCL file. The stats option
enables the statistics of all events to be recorded to the FCL.
clear Disables the recording of the audit, open, close, and stats events
after it has been set.
fcl_keeptime Specifies the duration in seconds that FCL records stay in the FCL
file before they can be purged. The first records to be purged are
the oldest ones, which are located at the beginning of the file.
Additionally, records at the beginning of the file can be purged if
allocation to the FCL file exceeds fcl_maxalloc bytes. The
default value is 0. Note that fcl_keeptime takes precedence
over fcl_maxalloc. No hole is punched if the FCL file exceeds
fcl_maxalloc bytes but the life of the oldest record has not
reached fcl_keeptime seconds.
fcl_winterval Specifies the time in seconds that must elapse before the FCL
records an overwrite, extending write, or a truncate. This helps
to reduce the number of repetitive records in the FCL. The
fcl_winterval timeout is per inode. If an inode happens to go
out of cache and returns, its write interval is reset. As a result,
there could be more than one write record for that file in the same
write interval. The default value is 3600 seconds.
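These parameters are tunables that can be set with the vxtunefs command; for example (the name=value form shown here is an assumption, so see the vxtunefs(1M) manual page):
# vxtunefs -o fcl_maxalloc=10485760 /mount_point
# vxtunefs -o fcl_keeptime=86400 /mount_point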
Either or both fcl_maxalloc and fcl_keeptime must be set to activate the FCL
feature. The following are examples of using the FCL administration command.
To activate FCL for a mounted file system, enter:
# fcladm on mount_point
To remove the FCL file for a mounted file system, on which FCL must be turned
off, enter:
# fcladm rm mount_point
To obtain the current FCL state for a mounted file system, enter:
Print the on-disk FCL super-block in text format to obtain information about the
FCL file by using offset 0. Because the FCL on-disk super-block occupies the first
block of the FCL file, the first and last valid offsets into the FCL file can be
determined by reading the FCL super-block and checking the fc_foff field. Enter:
To print the contents of the FCL in text format (the offset used must be 32-byte aligned), enter:
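The following examples illustrate these operations; the exact fcladm keywords and operand order shown here are assumptions, so see the fcladm(1M) manual page:
# fcladm state mount_point
# fcladm print 0 mount_point
# fcladm print offset mount_point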
File Change Log programmatic interface
Simplified reading   The API simplifies user tasks by reducing the additional code needed to parse FCL file entries. In 4.1, to obtain event information such as a remove or link, the user was required to write additional code to get the name of the removed or linked file. In this release, the API allows the user to directly read an assembled record. The API also allows the user to specify a filter to indicate a subset of the event records of interest.
Backward compatibility   Providing API access for the FCL feature allows backward compatibility for applications. The API allows applications to parse the FCL file independent of the FCL layout changes. Even if the hidden disk layout of the FCL changes, the API automatically translates the returned data to match the expected output record. As a result, the user does not need to modify or re-compile the application due to changes in the on-disk FCL layout.
The following sample code fragment reads the FCL superblock, checks that the
state of the FCL is VX_FCLS_ON, issues a call to vxfs_fcl_sync to obtain a finishing
offset to read to, determines the first valid offset in the FCL file, then reads the
entries in 8K chunks from this offset. The section process fcl entries is what an
application developer must supply to process the entries in the FCL.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/fcntl.h>
#include <errno.h>
#include <fcl.h>
#include <vxfsutil.h>
#define FCL_READSZ 8192
char* fclname = "/mnt/lost+found/changelog";
int read_fcl(char *fclname)
{
struct fcl_sb fclsb;
uint64_t off, lastoff;
ssize_t size;
char buf[FCL_READSZ], *bufp = buf;
int fd;
int err = 0;
if ((fd = open(fclname, O_RDONLY)) < 0) {
return ENOENT;
}
if ((off = lseek(fd, 0, SEEK_SET)) != 0) {
close(fd);
return EIO;
}
size = read(fd, &fclsb, sizeof (struct fcl_sb));
if (size < 0) {
close(fd);
return EIO;
}
if (fclsb.fc_state == VX_FCLS_OFF) {
close(fd);
return 0;
}
if ((err = vxfs_fcl_sync(fclname, &lastoff)) != 0) {
close(fd);
return err;
}
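/*
 * Seek to the first valid FCL record. The first valid offset is recorded
 * in the fc_foff field of the FCL super-block read above.
 */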
if ((off = lseek(fd, (off_t)fclsb.fc_foff, SEEK_SET)) != fclsb.fc_foff) {
close(fd);
return EIO;
}
while (off < lastoff) {
if ((size = read(fd, bufp, FCL_READSZ)) <= 0) {
close(fd);
return errno;
}
/* process fcl entries */
off += size;
}
close(fd);
return 0;
}
Reverse path name lookup
The reverse path name lookup feature can be useful for a variety of applications,
such as for clients of the VxFS File Change Log feature, in backup and restore
utilities, and for replication products. Typically, these applications store
information by inode numbers because a path name for a file or directory can be
very long, thus the need for an easy method of obtaining a path name.
An inode is a unique identification number for each file in a file system. An inode
contains the data and metadata associated with that file, but does not include the
file name to which the inode corresponds. It is therefore relatively difficult to
determine the name of a file from an inode number. The ncheck command provides
a mechanism for obtaining a file name from an inode identifier by scanning each
directory in the file system, but this process can take a long period of time. The
VxFS reverse path name lookup feature obtains path names relatively quickly.
Note: Because symbolic links do not constitute a path to the file, the reverse path
name lookup feature cannot track symbolic links to files.
Chapter 9
Multi-volume file systems
This chapter includes the following topics:
■ Allocating data
■ Volume encapsulation
■ Load balancing
Note: Multi-volume support (MVS) is available only on file systems using disk layout Version 6 or later.
See “About disk layouts” on page 277.
Volume availability
MVS guarantees the availability of some volumes even when others are unavailable.
This allows you to mount a multi-volume file system even if one or more
component dataonly volumes are missing.
The volumes are separated by whether metadata is allowed on the volume. An
I/O error on a dataonly volume does not affect access to any other volumes. All
VxFS operations that do not access the missing dataonly volume function normally,
including:
■ Mounting the multi-volume file system, regardless of whether the file system is read-only or read/write.
■ Kernel operations.
■ Performing a fsck replay. Logged writes are converted to normal writes if the
corresponding volume is dataonly.
■ Performing a full fsck.
■ Using all other commands that do not access data on a missing volume.
Some operations that could fail if a dataonly volume is missing include:
■ Reading or writing file data if the file's data extents were allocated from the
missing dataonly volume.
■ Using the vxdump command.
Volume availability is supported only on a file system with disk layout Version 7
or later.
2 Create two new volumes and add them to the volume set:
4 Use the ls command to see that when a volume set is created, the volumes
contained by the volume set are removed from the namespace and are instead
accessed through the volume set name:
# ls -l /dev/vx/rdsk/rootdg/myvset
1 root root 108,70009 May 21 15:37 /dev/vx/rdsk/rootdg/myvset
5 Create a volume, add it to the volume set, and use the ls command to see that
when a volume is added to the volume set, it is no longer visible in the
namespace:
After the file system is created, VxFS allocates space from the different
volumes within the volume set.
2 List the component volumes of the volume set using the fsvoladm command:
3 Add a new volume by adding the volume to the volume set, then adding the
volume to the file system:
5 Increase the metadata space in the file system using the fsvoladm command:
# vxupgrade /mnt1
Version 5
# vxupgrade -n 7 /mnt1
# umount /mnt1
7 Edit the /etc/vfstab file to replace the volume device name, vol1, with the
volume set name, vset1.
8 Mount the file system:
Removing a volume from a multi-volume file system
If you must forcibly remove a volume from a file system, use the fsck -o zapvol=volname command. The zapvol option performs
a full file system check and zaps all inodes that refer to the specified volume. The
fsck command prints the inode numbers of all files that the command destroys;
the file names are not printed. The zapvol option only affects regular files if used
on a dataonly volume. However, it could destroy structural files if used on a
metadataok volume, which can make the file system unrecoverable. Therefore,
the zapvol option should be used with caution on metadataok volumes.
2 Create a file system on the myvset volume set and mount it:
4 Assign the policies at the file system level. The data policy must be specified
before the metadata policy:
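A command of the following general form could be used; the fsapadm assignfs keyword and operand order are assumptions, so see the fsapadm(1M) manual page:
# fsapadm assignfs /mnt1 datapolicy metadatapolicy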
The assignment of the policies on a file system-wide basis ensures that any
metadata allocated is stored on the device with the policy metadatapolicy
(vol2) and all user data is stored on vol1 with the associated datapolicy
policy.
The following example shows how to assign pattern tables to a file system in a
volume set that contains two volumes from different classes of storage. The
pattern table is contained within the pattern file mypatternfile.
To assign pattern tables to directories
1 Define two allocation policies called mydata and mymeta to refer to the vol1
and vol2 volumes:
Allocating data
The following script creates a large number of files to demonstrate the benefit of
allocating data:
i=1
while [ $i -lt 1000 ]
do
dd if=/dev/zero of=/mnt1/$i bs=65536 count=1
i=`expr $i + 1`
done
Before the script completes, vol1 runs out of space even though space is still
available on the vol2 volume:
The solution is to assign an allocation policy that allocates user data from the
vol1 volume to vol2 if space runs out.
You must have system administrator privileges to create, remove, change policies,
or set file system or Storage Checkpoint level policies. Users can assign a
pre-existing policy to their files if the policy allows that.
Policies can be inherited for new files. A file will inherit the allocation policy of
the directory in which it resides if you run the fsapadm assignfile -f inherit
command on the directory.
Volume encapsulation
Multi-volume support enables the ability to encapsulate an existing raw volume
and make the volume contents appear as a file in the file system.
Encapsulating a volume involves the following actions:
■ Adding the volume to an existing volume set.
■ Adding the volume to the file system using fsvoladm
Encapsulating a volume
The following example illustrates how to encapsulate a volume.
To encapsulate a volume
1 List the volumes:
The third volume will be used to demonstrate how the volume can be accessed
as a file, as shown later.
6 Encapsulate dbvol:
# head -2 /mnt1/dbfile
root:x:0:1:Super-User:/:/sbin/sh
daemon:x:1:1::/:
The passwd file that was written to the raw volume is now visible in the new
file.
Note: If the encapsulated file is changed in any way, such as if the file is
extended, truncated, or moved with an allocation policy or resized volume,
or the volume is encapsulated with a bias, the file cannot be de-encapsulated.
Deencapsulating a volume
The following example illustrates how to deencapsulate a volume.
To deencapsulate a volume
1 List the volumes:
Reporting file extents
The fsmap command reports the volume name, extent type, and file name for each file in a multi-volume file system, for example:
# find . | fsmap -
Volume Extent Type File
vol2 Data ./file1
vol1 Data ./file2
2 Report the extents of files that have either data or metadata on a single volume
in all Storage Checkpoints, and indicate if the volume has file system metadata:
Load balancing
An allocation policy with the balance allocation order can be defined and assigned
to files that must have their allocations distributed at random between a set of
specified volumes. Each extent associated with these files is limited to a maximum
size that is defined as the required chunk size in the allocation policy. The
distribution of the extents is mostly equal if none of the volumes are full or
disabled.
Load balancing allocation policies can be assigned to individual files or for all files
in the file system. Although intended for balancing data extents across volumes,
a load balancing policy can be assigned as a metadata policy if desired, without
any restrictions.
Note: If a file has both a fixed extent size set and an allocation policy for load
balancing, certain behavior can be expected. If the chunk size in the allocation
policy is greater than the fixed extent size, all extents for the file are limited by
the chunk size. For example, if the chunk size is 16 MB and the fixed extent size
is 3 MB, then the largest extent that satisfies both the conditions is 15 MB. If the
fixed extent size is larger than the chunk size, all extents are limited to the fixed
extent size. For example, if the chunk size is 2 MB and the fixed extent size is 3
MB, then all extents for the file are limited to 3 MB.
Rebalancing extents
Extents can be rebalanced by strictly enforcing the allocation policy. Rebalancing
is generally required when volumes are added or removed from the policy or when
the chunk size is modified. When volumes are removed from the volume set, any
extents on the volumes being removed are automatically relocated to other volumes
within the policy.
The following example redefines a policy that has four volumes by adding two
new volumes, removing an existing volume, and enforcing the policy for
rebalancing.
To rebalance extents
1 Define the policy by specifying the -o balance and -c options:
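The command might take the following general form, where the -o balance and -c options are as described in this step and the operand order is an assumption (see the fsapadm(1M) manual page):
# fsapadm define -o balance -c chunk_size mount_point policy_name volume1 volume2 ...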
Converting a multi-volume file system to a single volume file system
Note: Steps 5, 6, and 8 are optional, and can be performed if you prefer to remove
the wrapper of the volume set object.
# df /mnt1
/mnt1 (/dev/vx/dsk/dg1/vol1):16777216 blocks 3443528 files
2 If the first volume does not have sufficient capacity, grow the volume to a
sufficient size:
4 Remove all volumes except the first volume in the volume set:
Before removing a volume, the file system attempts to relocate the files on
that volume. Successful relocation requires space on another volume, and no
allocation policies can be enforced that pin files to that volume. The time for
the command to complete is proportional to the amount of data that must be
relocated.
5 Unmount the file system:
# umount /mnt1
# vxvset -g dg1
7 Edit the /etc/vfstab file to replace the volume set name, vset1, with the
volume device name, vol1.
8 Mount the file system:
Chapter 10
Dynamic Storage Tiering
This chapter includes the following topics:
■ Placement classes
Note: Some of the commands have been changed or removed between the 4.1 release and the 5.0 release to make placement policy management more user-friendly.
The following are the commands that have been removed: fsrpadm, fsmove, and
fssweep. The output of the queryfile, queryfs, and list options of the fsapadm
command now print the allocation order by name instead of number.
Note: Dynamic Storage Tiering is a licensed feature. You must purchase a separate
license key for DST to operate. See the Veritas Storage Foundation Release Notes.
The Using Dynamic Storage Tiering Symantec Yellow Book provides additional
information regarding the Dynamic Storage Tiering feature, including the value
of DST and best practices for using DST. You can download Using Dynamic Storage
Tiering from the following webpage:
http://www.symantec.com/enterprise/yellowbooks/index.jsp
Placement classes
A placement class is a Dynamic Storage Tiering attribute of a given volume in a
volume set of a multi-volume file system. This attribute is a character string, and
is known as a volume tag. A volume may have different tags, one of which could
be the placement class. The placement class tag makes a volume distinguishable
by DST.
Volume tags are organized as hierarchical name spaces in which the levels of the
hierarchy are separated by periods. By convention, the uppermost level in the
volume tag hierarchy denotes the Storage Foundation component or application
that uses a tag, and the second level denotes the tag’s purpose. DST recognizes
volume tags of the form vxfs.placement_class.class_name. The prefix vxfs
identifies a tag as being associated with VxFS. placement_class identifies the
tag as a file placement class used by DST. class_name represents the name of the
file placement class to which the tagged volume belongs. For example, a volume
with the tag vxfs.placement_class.tier1 belongs to placement class tier1.
Administrators use the vxvoladm command to associate tags with volumes.
See the vxvoladm(1M) manual page.
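For example, a volume might be tagged as follows; the settag keyword and operand order shown here are assumptions, so consult the manual page:
# vxvoladm -g dg1 settag vol1 vxfs.placement_class.tier1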
VxFS policy rules specify file placement in terms of placement classes rather than
in terms of individual volumes. All volumes that belong to a particular placement
class are interchangeable with respect to file creation and relocation operations.
Specifying file placement in terms of placement classes rather than in terms of
specific volumes simplifies the administration of multi-tier storage in the following
ways:
■ Adding or removing volumes does not require a file placement policy change.
If a volume with a tag value of vxfs.placement_class.tier2 is added to a file
system’s volume set, all policies that refer to tier2 immediately apply to the
newly added volume with no administrative action. Similarly, volumes can be
evacuated, that is, have data removed from them, and be removed from a file
system without a policy change. The active policy continues to apply to the
file system’s remaining volumes.
■ File placement policies are not specific to individual file systems. A file
placement policy can be assigned to any file system whose volume set includes
volumes tagged with the tag values (placement classes) named in the policy.
This property makes it possible for data centers with large numbers of servers
to define standard placement policies and apply them uniformly to all servers
with a single administrative action.
Administering placement policies
A Storage Foundation Management Server (SFMS) maintains a record of the file systems for which each document is the current active policy. When a policy document is
updated, SFMS can assign the updated document to all file systems whose current
active policies are based on that document. By default, SFMS does not update file
system active policies that have been created or modified locally, that is by the
hosts that control the placement policies' file systems. If a SFMS administrator
forces assignment of a placement policy to a file system, the file system's active
placement policy is overwritten and any local changes that had been made to the
placement policy are lost.
Tier Name Size (KB) Free Before (KB) Free After (KB)
tier4 524288 524256 524256
tier3 524288 522968 522968
tier2 524288 524256 524256
tier1 524288 502188 501227
The rules that govern allocating and relocating files are expressed in the file system's file placement
policy.
A VxFS file placement policy defines the desired placement of sets of files on the
volumes of a VxFS multi-volume file system. A file placement policy specifies the
placement classes of volumes on which files should be created, and where and
under what conditions the files should be relocated to volumes in alternate
placement classes or deleted. You can create file placement policy documents,
which are XML text files, using either an XML or text editor, or a VxFS graphical
interface wizard.
The following excerpt from the placement policy DTD shows the overall structure of a placement policy:
<?xml version="1.0"?>
<!DOCTYPE PLACEMENT_POLICY [
<!-- The placement policy document definition file -->
<!-- XML requires all attributes must be enclosed in double quotes -->
<!ATTLIST PATTERN
    Flags (recursive | nonrecursive) "nonrecursive"
>
<!-- The possible and accepted values for Units are:
    1. bytes
    2. KB
    3. MB
    4. GB
-->
<!-- XML requires all attributes must be enclosed in double quotes -->
<!ATTLIST BALANCE_SIZE
    Units (bytes|KB|MB|GB) #REQUIRED
>
<!-- A WHEN element may contain:
    1. 0 or 1 ACCAGE element
    2. 0 or 1 SIZE element
    3. 0 or 1 MODAGE element
    4. 0 or 1 IOTEMP element
    5. 0 or 1 ACCESSTEMP element
-->
<!-- An ACCAGE element may contain:
    1. 0 or 1 MIN element
    2. 0 or 1 MAX element
-->
<!ELEMENT ACCAGE (MIN?, MAX?)>
<!-- The attributes of ACCAGE element -->
<!-- The possible and accepted values for Prefer are:
    (THIS IS NOT IMPLEMENTED)
    1. low
    2. high
-->
<!-- The possible and accepted values for Type are:
    1. nrbytes
    2. nwbytes
    3. nrwbytes
-->
<!-- XML requires all attributes must be enclosed in double quotes -->
<!ATTLIST IOTEMP
    Prefer (low|high) #IMPLIED
    Type (nrbytes|nwbytes|nrwbytes) #REQUIRED
>
<!ELEMENT ACCESSTEMP (MIN?, MAX?, PERIOD)>
<!-- The attributes of ACCESSTEMP element -->
<!-- The possible and accepted values for Prefer are:
    (THIS IS NOT IMPLEMENTED)
    1. low
    2. high
-->
<!-- The possible and accepted values for Type are:
    1. nreads
    2. nwrites
    3. nrws
-->
<!-- XML requires all attributes must be enclosed in double quotes -->
<!ATTLIST ACCESSTEMP
    Prefer (low|high) #IMPLIED
    Type (nreads|nwrites|nrws) #REQUIRED
>
<!ATTLIST MAX
    Flags (lt|lteq) #REQUIRED
>
SELECT statement
The VxFS placement policy rule SELECT statement designates the collection of
files to which a rule applies.
The following XML snippet illustrates the general form of the SELECT statement:
<SELECT>
<DIRECTORY Flags="directory_flag_value">...value...
</DIRECTORY>
<PATTERN Flags="pattern_flag_value">...value...</PATTERN>
<USER>...value...</USER>
<GROUP>...value...</GROUP>
</SELECT>
A SELECT statement may designate files by using the following selection criteria:
<DIRECTORY> A full path name relative to the file system mount point. The
Flags=”directory_flag_value” XML attribute must have a value
of nonrecursive, denoting that only files in the specified directory
are designated, or a value of recursive, denoting that files in all
subdirectories of the specified directory are designated. The Flags
attribute is mandatory.
<PATTERN> Either an exact file name or a pattern using a single wildcard character
(*). For example, the pattern “abc*” denotes all files whose names begin
with “abc”. The pattern “abc.*” denotes all files whose names are
exactly "abc" followed by a period and any extension. The pattern
“*abc” denotes all files whose names end in “abc”, even if the name is
all or part of an extension. The pattern “*.abc” denotes files of any
name whose name extension (following the period) is “abc”. The
pattern “ab*c” denotes all files whose names start with “ab” and end
with “c”. The first "*" character is treated as a wildcard, while any
subsequent "*" characters are treated as literal text. The pattern cannot
contain "/".
The wildcard character matches any character, including ".", "?", and
"[", unlike using the wildcard in a shell.
<USER> User name of the file's owner. The user number cannot be specified
in place of the name.
<GROUP> Group name of the file's owner. The group number cannot be specified
in place of the group name.
One or more instances of any or all of the file selection criteria may be specified
within a single SELECT statement. If two or more selection criteria of different
types are specified in a single statement, a file must satisfy one criterion of each
type to be selected.
In the following example, only files that reside in either the ora/db or the
crash/dump directory, and whose owner is either user1 or user2 are selected for
possible action:
<SELECT>
<DIRECTORY Flags="nonrecursive">ora/db</DIRECTORY>
<DIRECTORY Flags="nonrecursive">crash/dump</DIRECTORY>
<USER>user1</USER>
<USER>user2</USER>
</SELECT>
A rule may include multiple SELECT statements. If a file satisfies the selection
criteria of one of the SELECT statements, it is eligible for action.
In the following example, any files owned by either user1 or user2, no matter in
which directories they reside, as well as all files in the ora/db or crash/dump
directories, no matter which users own them, are eligible for action:
<SELECT>
<DIRECTORY Flags="nonrecursive">ora/db</DIRECTORY>
<DIRECTORY Flags="nonrecursive">crash/dump</DIRECTORY>
</SELECT>
<SELECT>
<USER>user1</USER>
<USER>user2</USER>
</SELECT>
When VxFS creates new files, VxFS applies active placement policy rules in the
order of appearance in the active placement policy's XML source file. The first
rule in which a SELECT statement designates the file to be created determines
the file's placement; no later rules apply. Similarly, VxFS scans the active policy
rules on behalf of each file when relocating files, stopping the rules scan when it
reaches the first rule containing a SELECT statement that designates the file. This
behavior holds true even if the applicable rule results in no action. Take for
example a policy rule that indicates that .dat files inactive for 30 days should be
relocated, and a later rule that indicates that .dat files larger than 10 megabytes should
be relocated. A 20 megabyte .dat file that has been inactive for 10 days will not
be relocated because the earlier rule applied. The later rule is never scanned.
A placement policy rule's action statements apply to all files designated by any
of the rule's SELECT statements. If an existing file is not designated by a SELECT
statement in any rule of a file system's active placement policy, then DST does
not relocate or delete the file. If an application creates a file that is not designated
by a SELECT statement in a rule of the file system's active policy, then VxFS places
the file according to its own internal algorithms. If this behavior is inappropriate,
the last rule in the policy document on which the file system's active placement
policy is based should specify <PATTERN>*</PATTERN> as the only selection
criterion in its SELECT statement, and a CREATE statement naming the desired
placement class for files not selected by other rules.
CREATE statement
A CREATE statement in a file placement policy rule specifies one or more
placement classes of volumes on which VxFS should allocate space for new files
to which the rule applies at the time the files are created. You can specify only
placement classes, not individual volume names, in a CREATE statement.
A file placement policy rule may contain at most one CREATE statement. If a rule
does not contain a CREATE statement, VxFS places files designated by the rule's
SELECT statements according to its internal algorithms. However, rules without
CREATE statements can be used to relocate or delete existing files that the rules'
SELECT statements designate.
The following XML snippet illustrates the general form of the CREATE statement:
<CREATE>
<ON Flags="...flag_value...">
<DESTINATION>
<CLASS>...placement_class_name...</CLASS>
<BALANCE_SIZE Units="units_specifier">...chunk_size...
</BALANCE_SIZE>
</DESTINATION>
<DESTINATION>...additional placement class specifications...
</DESTINATION>
</ON>
</CREATE>
If the Flags="any" attribute is specified in the <ON> clause, VxFS first attempts to allocate space on a volume in one of the specified placement classes. Failing that, VxFS resorts to its internal space allocation algorithms, so file allocation does not fail unless there is no available space anywhere in the file system's volume set.
The Flags=”any” attribute differs from the catchall rule in that this attribute
applies only to files designated by the SELECT statement in the rule, which may
be less inclusive than the <PATTERN>*</PATTERN> file selection specification
of the catchall rule.
In addition to the placement class name specified in the <CLASS> sub-element,
a <DESTINATION> XML element may contain a <BALANCE_SIZE> sub-element.
Presence of a <BALANCE_SIZE> element indicates that space allocation should
be distributed across the volumes of the placement class in chunks of the indicated
size. For example, if a balance size of one megabyte is specified for a placement
class containing three volumes, VxFS allocates the first megabyte of space for a
new or extending file on the first (lowest indexed) volume in the class, the second
megabyte on the second volume, the third megabyte on the third volume, the
fourth megabyte on the first volume, and so forth. Using the Units attribute in
the <BALANCE_SIZE> XML tag, the balance size value may be specified in the
following units:
bytes Bytes
KB Kilobytes
MB Megabytes
GB Gigabytes
<CREATE>
<ON>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
<DESTINATION>
<CLASS>tier2</CLASS>
<BALANCE_SIZE Units="MB">1</BALANCE_SIZE>
</DESTINATION>
</ON>
</CREATE>
RELOCATE statement
The RELOCATE action statement of file placement policy rules specifies an action
that VxFS takes on designated files during periodic scans of the file system, and
the circumstances under which the actions should be taken. The fsppadm enforce
command is used to scan all or part of a file system for files that should be relocated
based on rules in the active placement policy at the time of the scan.
See the fsppadm(1M) manual page.
The fsppadm enforce command scans file systems in path name order. For each file, VxFS
identifies the first applicable rule in the active placement policy, as determined
by the rules' SELECT statements. If the file resides on a volume specified in the
<FROM> clause of one of the rule's RELOCATE statements, and if the file meets
the criteria for relocation specified in the statement's <WHEN> clause, the file is
scheduled for relocation to a volume in the first placement class listed in the <TO>
clause that has space available for the file. The scan that results from issuing the
fsppadm enforce command runs to completion before any files are relocated.
The following XML snippet illustrates the general form of the RELOCATE
statement:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>...placement_class_name...</CLASS>
</SOURCE>
<SOURCE>...additional placement class specifications...
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>...placement_class_name...</CLASS>
<BALANCE_SIZE Units="units_specifier">
...chunk_size...
</BALANCE_SIZE>
</DESTINATION>
<DESTINATION>
...additional placement class specifications...
</DESTINATION>
</TO>
<WHEN>...relocation conditions...</WHEN>
</RELOCATE>
<FROM> An optional clause that contains a list of placement classes from whose
volumes designated files should be relocated if the files meet the
conditions specified in the <WHEN> clause. No priority is associated
with the ordering of placement classes listed in a <FROM> clause. If
a file to which the rule applies is located on a volume in any specified
placement class, the file is considered for relocation.
<TO> Indicates the placement classes to whose volumes designated files are to be relocated. Unlike the source placement classes in a <FROM> clause, placement classes in a <TO> clause are specified in priority order: files are relocated to volumes in the first specified placement class if possible, to the second if not, and so forth.
<WHEN> An optional clause that indicates the conditions under which files to
which the rule applies should be relocated. Files that have been
unaccessed or unmodified for a specified period, reached a certain
size, or reached a specific I/O temperature or access temperature level
may be relocated. If a RELOCATE statement does not contain a
<WHEN> clause, files to which the rule applies are relocated
unconditionally.
The following are the criteria that can be specified for the <WHEN> clause:
<ACCAGE> This criterion is met when files are inactive for a designated period
or during a designated period relative to the time at which the
fsppadm enforce command was issued.
<MODAGE> This criterion is met when files are unmodified for a designated
period or during a designated period relative to the time at which
the fsppadm enforce command was issued.
<SIZE> This criterion is met when files exceed or drop below a designated
size or fall within a designated size range.
<IOTEMP> This criterion is met when files exceed or drop below a designated
I/O temperature, or fall within a designated I/O temperature range.
A file's I/O temperature is a measure of the I/O activity against it
during the period designated by the <PERIOD> element prior to
the time at which the fsppadm enforce command was issued.
<ACCESSTEMP> This criterion is met when files exceed or drop below a specified
average access temperature, or fall within a specified access
temperature range. A file's access temperature is similar to its I/O
temperature, except that access temperature is computed using
the number of I/O requests to the file, rather than the number of
bytes transferred.
The following XML snippet illustrates the general form of the <WHEN> clause in
a RELOCATE statement:
<WHEN>
<ACCAGE Units="...units_value...">
<MIN Flags="...comparison_operator...">
...min_access_age...</MIN>
<MAX Flags="...comparison_operator...">
...max_access_age...</MAX>
</ACCAGE>
<MODAGE Units="...units_value...">
<MIN Flags="...comparison_operator...">
...min_modification_age...</MIN>
<MAX Flags="...comparison_operator...">
...max_modification_age...</MAX>
</MODAGE>
<SIZE Units="...units_value...">
<MIN Flags="...comparison_operator...">
...min_size...</MIN>
<MAX Flags="...comparison_operator...">
...max_size...</MAX>
</SIZE>
<IOTEMP Type="...read_write_preference...">
<MIN Flags="...comparison_operator...">
...min_I/O_temperature...</MIN>
<MAX Flags="...comparison_operator...">
...max_I/O_temperature...</MAX>
<PERIOD>...days_of_interest...</PERIOD>
</IOTEMP>
<ACCESSTEMP Type="...read_write_preference...">
<MIN Flags="...comparison_operator...">
...min_access_temperature...</MIN>
<MAX Flags="...comparison_operator...">
...max_access_temperature...</MAX>
<PERIOD>...days_of_interest...</PERIOD>
</ACCESSTEMP>
</WHEN>
The access age (<ACCAGE>) element refers to the amount of time since a file was
last accessed. VxFS computes access age by subtracting a file's time of last access,
atime, from the time when the fsppadm enforce command was issued. The <MIN>
and <MAX> XML elements in an <ACCAGE> clause, denote the minimum and
maximum access age thresholds for relocation, respectively. These elements are
optional, but at least one must be included. Using the Units XML attribute, the
<MIN> and <MAX> elements may be specified in the following units:
hours Hours
days Days. A day is considered to be 24 hours prior to the time that the
fsppadm enforce command was issued.
Both the <MIN> and <MAX> elements require Flags attributes to direct their
operation.
For <MIN>, the following Flags attributes values may be specified:
gt The time of last access must be greater than the specified interval.
gteq The time of last access must be greater than or equal to the specified
interval.
lt The time of last access must be less than the specified interval.
lteq The time of last access must be less than or equal to the specified
interval.
File size thresholds specified in a <SIZE> element may be expressed in the following units:
bytes Bytes
KB Kilobytes
MB Megabytes
GB Gigabytes
As with the other file relocation criteria, <IOTEMP> may be specified with a lower
threshold by using the <MIN> element, an upper threshold by using the <MAX>
element, or as a range by using both. However, I/O temperature is dimensionless
and therefore has no specification for units.
VxFS computes files' I/O temperatures over the period between the time when
the fsppadm enforce command was issued and the number of days in the past
specified in the <PERIOD> element, where a day is a 24 hour period. For example,
if the fsppadm enforce command was issued at 2 PM on Wednesday, and a
<PERIOD> value of 2 was specified, VxFS looks at file I/O activity for the period
between 2 PM on Monday and 2 PM on Wednesday. The number of days specified
in the <PERIOD> element should not exceed one or two weeks due to the disk
space used by the File Change Log (FCL) file.
See “About the File Change Log file” on page 109.
I/O temperature is a softer measure of I/O activity than access age. With access
age, a single access to a file resets the file's atime to the current time. In contrast,
a file's I/O temperature decreases gradually as time passes without the file being
accessed, and increases gradually as the file is accessed periodically. For example,
if a new 10 megabyte file is read completely five times on Monday and fsppadm
enforce runs at midnight, the file's two-day I/O temperature will be five and its
access age in days will be zero. If the file is read once on Tuesday, the file's access
age in days at midnight will be zero, and its two-day I/O temperature will have
dropped to three. If the file is read once on Wednesday, the file's access age at
midnight will still be zero, but its two-day I/O temperature will have dropped to
one, as the influence of Monday's I/O will have disappeared.
If the intention of a file placement policy is to keep files in place, such as on top-tier
storage devices, as long as the files are being accessed at all, then access age is
the more appropriate relocation criterion. However, if the intention is to relocate
files as the I/O load on them decreases, then I/O temperature is more appropriate.
The case for upward relocation is similar. If files that have been relocated to
lower-tier storage devices due to infrequent access experience renewed application
activity, then it may be appropriate to relocate those files to top-tier devices. A
policy rule that uses access age with a low <MAX> value, that is, the interval
between fsppadm enforce runs, as a relocation criterion will cause files to be
relocated that have been accessed even once during the interval. Conversely, a
policy that uses I/O temperature with a <MIN> value will only relocate files that
have experienced a sustained level of activity over the period of interest.
The following example illustrates an unconditional relocation of files from tier1 volumes to tier2 volumes:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier1</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
</RELOCATE>
The files designated by the rule's SELECT statement that reside on volumes in
placement class tier1 at the time the fsppadm enforce command executes would
be unconditionally relocated to volumes in placement class tier2 as long as space
permitted. This type of rule might be used, for example, with applications that
create and access new files but seldom access existing files once they have been
processed. A CREATE statement would specify creation on tier1 volumes, which
are presumably high performance or high availability, or both. Each instantiation
of fsppadm enforce would relocate files created since the last run to tier2 volumes.
The following example illustrates a more comprehensive form of the RELOCATE
statement that uses access age as the criterion for relocating files from tier1
volumes to tier2 volumes. This rule is designed to maintain free space on tier1
volumes by relocating inactive files to tier2 volumes:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier1</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<SIZE Units="MB">
<MIN Flags="gt">1</MIN>
<MAX Flags="lt">1000</MAX>
</SIZE>
<ACCAGE Units="days">
<MIN Flags="gt">30</MIN>
</ACCAGE>
</WHEN>
</RELOCATE>
Files designated by the rule's SELECT statement are relocated from tier1 volumes
to tier2 volumes if they are between 1 MB and 1000 MB in size and have not been
accessed for 30 days. VxFS relocates qualifying files in the order in which it
encounters them as it scans the file system's directory tree. VxFS stops scheduling
qualifying files for relocation when it calculates that already-scheduled
relocations would result in tier2 volumes being fully occupied.
The following example illustrates a possible companion rule that relocates files
from tier2 volumes to tier1 ones based on their I/O temperatures. This rule might
be used to return files that had been relocated to tier2 volumes due to inactivity
to tier1 volumes when application activity against them increases. Using I/O
temperature rather than access age as the relocation criterion reduces the chance
of relocating files that are not actually being used frequently by applications. This
rule does not cause files to be relocated unless there is sustained activity against
them over the most recent two-day period.
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier2</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes">
<MIN Flags="gt">5</MIN>
<PERIOD>2</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
This rule relocates files that reside on tier2 volumes to tier1 volumes if their I/O
temperatures are above 5 for the two day period immediately preceding the issuing
of the fsppadm enforce command. VxFS relocates qualifying files in the order in
which it encounters them during its file system directory tree scan. When tier1
volumes are fully occupied, VxFS stops scheduling qualifying files for relocation.
VxFS file placement policies are able to control file placement across any number
of placement classes. The following example illustrates a rule for relocating files
with low I/O temperatures from tier1 volumes to tier2 volumes, and to tier3
volumes when tier2 volumes are fully occupied:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier1</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
<DESTINATION>
<CLASS>tier3</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes">
<MAX Flags="lt">4</MAX>
<PERIOD>3</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
This rule relocates files whose 3-day I/O temperatures are less than 4 and which
reside on tier1 volumes. When VxFS calculates that already-relocated files would
result in tier2 volumes being fully occupied, VxFS relocates qualifying files to
tier3 volumes instead. VxFS relocates qualifying files as it encounters them in its
scan of the file system directory tree.
The <FROM> clause in the RELOCATE statement is optional. If the clause is not
present, VxFS evaluates files designated by the rule's SELECT statement for
relocation no matter which volumes they reside on when the fsppadm enforce
command is issued. The following example illustrates a fragment of a policy rule
that relocates files according to their sizes, no matter where they reside when the
fsppadm enforce command is issued:
<RELOCATE>
<TO>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</TO>
<WHEN>
<SIZE Units="MB">
<MAX Flags="lt">10</MAX>
</SIZE>
</WHEN>
</RELOCATE>
<RELOCATE>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<SIZE Units="MB">
<MIN Flags="gteq">10</MIN>
<MAX Flags="lt">100</MAX>
</SIZE>
</WHEN>
</RELOCATE>
<RELOCATE>
<TO>
<DESTINATION>
<CLASS>tier3</CLASS>
</DESTINATION>
</TO>
<WHEN>
<SIZE Units="MB">
<MIN Flags="gteq">100</MIN>
</SIZE>
</WHEN>
</RELOCATE>
This rule relocates files smaller than 10 megabytes to tier1 volumes, files between
10 and 100 megabytes to tier2 volumes, and files larger than 100 megabytes to
tier3 volumes. VxFS relocates all qualifying files that do not already reside on
volumes in their DESTINATION placement classes when the fsppadm enforce
command is issued.
DELETE statement
The DELETE file placement policy rule statement is very similar to the RELOCATE
statement in both form and function, lacking only the <TO> clause. File placement
policy-based deletion may be thought of as relocation with a fixed destination.
The following XML snippet illustrates the general form of the DELETE statement:
<DELETE>
<FROM>
<SOURCE>
<CLASS>...placement_class_name...</CLASS>
</SOURCE>
<SOURCE>
...additional placement class specifications...
</SOURCE>
</FROM>
<WHEN>...relocation conditions...</WHEN>
</DELETE>
<FROM> An optional clause that contains a list of placement classes from whose
volumes designated files should be deleted if the files meet the
conditions specified in the <WHEN> clause. No priority is associated
with the ordering of placement classes in a <FROM> clause. If a file
to which the rule applies is located on a volume in any specified
placement class, the file is deleted. If a DELETE statement does not
contain a <FROM> clause, VxFS deletes qualifying files no matter on
which of a file system's volumes the files reside.
<WHEN> An optional clause specifying the conditions under which files to which
the rule applies should be deleted. The form of the <WHEN> clause
in a DELETE statement is identical to that of the <WHEN> clause in
a RELOCATE statement. If a DELETE statement does not contain a
<WHEN> clause, files designated by the rule's SELECT statement, and
the <FROM> clause if it is present, are deleted unconditionally.
The following example illustrates the use of DELETE statements in a rule:
<DELETE>
<FROM>
<SOURCE>
<CLASS>tier3</CLASS>
</SOURCE>
</FROM>
</DELETE>
<DELETE>
<FROM>
<SOURCE>
<CLASS>tier2</CLASS>
</SOURCE>
</FROM>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">120</MIN>
</ACCAGE>
</WHEN>
</DELETE>
The first DELETE statement unconditionally deletes files designated by the rule's
SELECT statement that reside on tier3 volumes when the fsppadm enforce
command is issued. The absence of a <WHEN> clause in the DELETE statement
indicates that deletion of designated files is unconditional.
The second DELETE statement deletes files to which the rule applies that reside
on tier2 volumes when the fsppadm enforce command is issued and that have not
been accessed for the past 120 days.
Calculating I/O temperature and access temperature
■ Access age is a binary measure of activity: a single access resets a file's atime to the current time. A policy rule that uses access age as its only relocation criterion therefore discriminates badly against files that happen to be accessed, however casually, within the interval defined by the value of the <ACCAGE> parameter.
■ Access age is a poor indicator of resumption of significant activity. Using
ACCAGE, the time since last access, as a criterion for relocating inactive files
to lower tier volumes may fail to schedule some relocations that should be
performed, but at least this method results in less relocation activity than
necessary. Using ACCAGE as a criterion for relocating previously inactive files
that have become active is worse, because this method is likely to schedule
relocation activity that is not warranted. If a policy rule's intent is to cause
files that have experienced I/O activity in the recent past to be relocated to
higher performing, perhaps more failure tolerant storage, ACCAGE is too
coarse a filter. For example, in a rule specifying that files on tier2 volumes
that have been accessed within the last three days should be relocated to tier1
volumes, no distinction is made between a file that was browsed by a single
user and a file that actually was used intensively by applications.
DST implements the concept of I/O temperature and access temperature to
overcome these deficiencies. A file's I/O temperature is equal to the number of
bytes transferred to or from it over a specified period of time divided by the size
of the file. For example, if a file occupies one megabyte of storage at the time of
an fsppadm enforce operation and the data in the file has been completely read
or written 15 times within the last three days, VxFS calculates its 3-day average
I/O temperature to be 5 (15 MB of I/O ÷ 1 MB file size ÷ 3 days).
Similarly, a file's average access temperature is the number of read or write
requests made to it over a specified number of 24-hour periods divided by the
number of periods. Unlike I/O temperature, access temperature is unrelated to
file size. A large file to which 20 I/O requests are made over a 2-day period has
the same average access temperature as a small file accessed 20 times over a 2-day
period.
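The following small C program, a hypothetical sketch rather than part of VxFS, reproduces the arithmetic described above for a one megabyte file read completely 15 times over three days and for a file that receives 20 I/O requests over a 2-day period:

#include <stdio.h>

/* I/O temperature: bytes transferred divided by file size, divided by
 * the number of days in the period of interest. */
static double io_temperature(double bytes_transferred, double file_size,
                             double period_days)
{
    return bytes_transferred / file_size / period_days;
}

/* Access temperature: number of read/write requests divided by the
 * number of 24-hour periods. */
static double access_temperature(double requests, double period_days)
{
    return requests / period_days;
}

int main(void)
{
    /* 1 MB file completely read or written 15 times in 3 days. */
    printf("I/O temperature: %.1f\n",
        io_temperature(15.0 * 1048576, 1048576, 3.0));   /* prints 5.0 */
    /* 20 I/O requests over a 2-day period. */
    printf("Access temperature: %.1f\n",
        access_temperature(20.0, 2.0));                  /* prints 10.0 */
    return 0;
}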
If a file system's active placement policy includes any <IOTEMP> or
<ACCESSTEMP> clauses, VxFS begins policy enforcement by using information
in the file system's FCL file to calculate average I/O activity against all files in the
file system during the longest <PERIOD> specified in the policy. Shorter specified
periods are ignored. VxFS uses these calculations to qualify files for I/O
temperature-based relocation and deletion.
See “About the File Change Log file” on page 109.
Note: If FCL is turned off, I/O temperature-based relocation will not be accurate.
When you invoke the fsppadm enforce command, the command displays a
warning if the FCL is turned off.
As its name implies, the File Change Log records information about changes made
to files in a VxFS file system. In addition to recording creations, deletions,
extensions, the FCL periodically captures the cumulative amount of I/O activity
(number of bytes read and written) on a file-by-file basis. File I/O activity is
recorded in the FCL each time a file is opened or closed, as well as at timed intervals
to capture information about files that remain open for long periods.
If a file system's active file placement policy contains <IOTEMP> clauses, execution
of the fsppadm enforce command begins with a scan of the FCL to extract I/O
activity information over the period of interest for the policy. The period of interest
is the interval between the time at which the fsppadm enforce command was
issued and that time minus the largest interval value specified in any <PERIOD>
element in the active policy.
For files with I/O activity during the largest interval, VxFS computes an
approximation of the amount of read, write, and total data transfer (the sum of
the two) activity by subtracting the I/O levels in the oldest FCL record that pertains
to the file from those in the newest. It then computes each file's I/O temperature
by dividing its I/O activity by its size at the time of the scan. Dividing by file size is an implicit
acknowledgement that relocating larger files consumes more I/O resources than
relocating smaller ones. Using this algorithm requires that larger files must have
more activity against them in order to reach a given I/O temperature, and thereby
justify the resource cost of relocation.
While this computation is an approximation in several ways, it represents an easy
to compute, and more importantly, unbiased estimate of relative recent I/O activity
upon which reasonable relocation decisions can be based.
File relocation and deletion decisions can be based on read, write, or total I/O
activity.
The following XML snippet illustrates the use of IOTEMP in a policy rule to specify
relocation of low activity files from tier1 volumes to tier2 volumes:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier1</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes">
<MAX Flags="lt">3</MAX>
<PERIOD Units="days">4</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
This snippet specifies that files to which the rule applies should be relocated from
tier1 volumes to tier2 volumes if their I/O temperatures fall below 3 over a period
of 4 days. The Type="nrbytes" XML attribute specifies that total data transfer
activity, which is the sum of bytes read and bytes written, should be used in
the computation. For example, a 50 megabyte file that experienced less than 150
megabytes of data transfer over the 4-day period immediately preceding the
fsppadm enforce scan would be a candidate for relocation. VxFS considers files
that experience no activity over the period of interest to have an I/O temperature
of zero. VxFS relocates qualifying files in the order in which it encounters the
files in its scan of the file system directory tree.
Using I/O temperature or access temperature rather than a binary indication of
activity, such as the POSIX atime or mtime, minimizes the chance of not relocating
files that were only accessed occasionally during the period of interest. A large
file that has had only a few bytes transferred to or from it would have a low I/O
temperature, and would therefore be a candidate for relocation to tier2 volumes,
even if the activity was very recent.
But, the greater value of I/O temperature or access temperature as a file relocation
criterion lies in upward relocation: detecting increasing levels of I/O activity
against files that had previously been relocated to lower tiers in a storage hierarchy
due to inactivity or low temperatures, and relocating them to higher tiers in the
storage hierarchy.
The following XML snippet illustrates relocating files from tier2 volumes to tier1
when the activity level against them increases.
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier2</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes">
<MAX Flags="gt">5</MAX>
<PERIOD Units="days">2</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
The <RELOCATE> statement specifies that files on tier2 volumes whose I/O
temperature, as calculated using the number of bytes read, is above 5 over a 2-day
period are to be relocated to tier1 volumes. Bytes written to the file during the
period of interest are not part of this calculation.
Using I/O temperature rather than a binary indicator of activity as a criterion for
file relocation gives administrators a granular level of control over automated
file relocation that can be used to attune policies to application requirements. For
example, specifying a large value in the <PERIOD> element of an upward relocation
statement prevents files from being relocated unless I/O activity against them is
sustained. Alternatively, specifying a high temperature and a short period tends
to relocate files based on short-term intensity of I/O activity against them.
I/O temperature and access temperature utilize the sqlite3 database for building
a temporary table indexed on an inode. This temporary table is used to filter files
based on I/O temperature and access temperature. The temporary table is stored
in the database file .__fsppadm_fcliotemp.db, which resides in the lost+found
directory of the mount point.
<SELECT>
<DIRECTORY Flags="nonrecursive">db/datafiles</DIRECTORY>
<DIRECTORY Flags="nonrecursive">db/indexes</DIRECTORY>
<DIRECTORY Flags="nonrecursive">db/logs</DIRECTORY>
</SELECT>
<SELECT>
<DIRECTORY Flags="nonrecursive">db/datafiles</DIRECTORY>
<DIRECTORY Flags="nonrecursive">db/indexes</DIRECTORY>
<DIRECTORY Flags="nonrecursive">db/logs</DIRECTORY>
<USER>DBA_Manager</USER>
<USER>MFG_DBA</USER>
<USER>HR_DBA</USER>
</SELECT>
If a rule includes multiple SELECT statements, a file need only satisfy one of them
to be selected for action. This property can be used to specify alternative conditions
for file selection.
In the following example, a file need only reside in one of db/datafiles,
db/indexes, or db/logs or be owned by one of DBA_Manager, MFG_DBA, or
HR_DBA to be designated for possible action:
<SELECT>
<DIRECTORY Flags="nonrecursive">db/datafiles</DIRECTORY>
<DIRECTORY Flags="nonrecursive">db/indexes</DIRECTORY>
<DIRECTORY Flags="nonrecursive">db/logs</DIRECTORY>
</SELECT>
<SELECT>
<USER>DBA_Manager</USER>
<USER>MFG_DBA</USER>
<USER>HR_DBA</USER>
</SELECT>
<CREATE>
<ON>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
<DESTINATION>
<CLASS>tier3</CLASS>
</DESTINATION>
</ON>
</CREATE>
In this statement, VxFS would allocate space for newly created files designated
by the rule's SELECT statement on tier1 volumes if space was available. If no tier1
volume had sufficient free space, VxFS would attempt to allocate space on a tier2
volume. If no tier2 volume had sufficient free space, VxFS would attempt allocation
on a tier3 volume. If sufficient space could not be allocated on a volume in any of
the three specified placement classes, allocation would fail with an ENOSPC error,
even if the file system's volume set included volumes in other placement classes
that did have sufficient space.
The <TO> clause in the RELOCATE statement behaves similarly. VxFS relocates
qualifying files to volumes in the first placement class specified if possible, to
volumes in the second specified class if not, and so forth. If none of the destination
criteria can be met, such as if all specified classes are fully occupied, qualifying
files are not relocated, but no error is signaled in this case.
In the following <WHEN> clause, a file qualifies only if it satisfies both criteria,
that is, if it has not been accessed for more than 30 days and is larger than 100 MB:
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">30</MIN>
</ACCAGE>
<SIZE Units="MB">
<MIN Flags="gt">100</MIN>
</SIZE>
</WHEN>
You cannot write rules to relocate or delete a single designated set of files if the
files meet one of two or more relocation or deletion criteria.
The rules that comprise a placement policy may occur in any order, but during
both file allocation and fsppadm enforce relocation scans, the first rule in which
a file is designated by a SELECT statement is the only rule against which that file
is evaluated. Thus, rules whose purpose is to supersede a generally applicable
behavior for a special class of files should precede the general rules in a file
placement policy document.
The following XML snippet illustrates faulty rule placement with potentially
unintended consequences:
<?xml version="1.0"?>
<!DOCTYPE FILE_PLACEMENT_POLICY SYSTEM "placement.dtd">
<FILE_PLACEMENT_POLICY Version="5.0">
<RULE Name="GeneralRule">
<SELECT>
<PATTERN>*</PATTERN>
</SELECT>
<CREATE>
<ON>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</ON>
</CREATE>
...other statements...
</RULE>
<RULE Name="DatabaseRule">
<SELECT>
<PATTERN>*.db</PATTERN>
</SELECT>
<CREATE>
<ON>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</ON>
</CREATE>
...other statements...
</RULE>
</FILE_PLACEMENT_POLICY>
The GeneralRule rule specifies that all files created in the file system, designated
by <PATTERN>*</PATTERN>, should be created on tier2 volumes. The
DatabaseRule rule specifies that files whose names include an extension of .db
should be created on tier1 volumes. The GeneralRule rule applies to any file created
in the file system, including those with a naming pattern of *.db, so the
DatabaseRule rule will never apply to any file. This fault can be remedied by
exchanging the order of the two rules. If the DatabaseRule rule occurs first in the
policy document, VxFS encounters it first when determining where to place new
files whose names follow the pattern *.db, and correctly allocates space for them
on tier1 volumes. For files to which the DatabaseRule rule does not apply, VxFS
continues scanning the policy and allocates space according to the specification
in the CREATE statement of the GeneralRule rule.
A similar consideration applies to statements within a placement policy rule. VxFS
processes these statements in order, and stops processing on behalf of a file when
it encounters a statement that pertains to the file. This can result in unintended
behavior.
The following XML snippet illustrates a RELOCATE statement and a DELETE
statement in a rule that is intended to relocate files if they have not been accessed
in 30 days, and to delete them if they have not been accessed in 90 days:
<RELOCATE>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">30</MIN>
</ACCAGE>
</WHEN>
</RELOCATE>
<DELETE>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">90</MIN>
</ACCAGE>
</WHEN>
</DELETE>
As written with the RELOCATE statement preceding the DELETE statement, files
will never be deleted, because the <WHEN> clause in the RELOCATE statement
applies to all selected files that have not been accessed for at least 30 days. This
includes those that have not been accessed for 90 days. VxFS ceases to process a
file against a placement policy when it identifies a statement that applies to that
file, so the DELETE statement is never reached. This example illustrates the
general point that RELOCATE and DELETE statements that specify less inclusive
criteria should precede statements that specify more inclusive criteria in a file
placement policy document. The GUI automatically produces the correct statement
order for the policies it creates.
Quick I/O is part of the VRTSvxfs package, but is available for use only with other
Symantec products.
See the Veritas Storage Foundation Release Notes.
A regular VxFS file named xxx is accessed through the Quick I/O interface by
appending the extension ::cdev:vxfs: to its name, that is, as xxx::cdev:vxfs:.
Note: When Quick I/O is enabled, you cannot create a regular VxFS file with a
name that uses the ::cdev:vxfs: extension. If an application tries to create a
regular file named xxx::cdev:vxfs:, the create fails. If Quick I/O is not available,
it is possible to create a regular file with the ::cdev:vxfs: extension, but this could
cause problems if Quick I/O is later enabled. Symantec advises you to reserve the
extension only for Quick I/O files.
■ If the file xxx is being used for memory mapped I/O, it cannot be accessed as
a Quick I/O file.
■ An I/O fails if the file xxx has a logical hole and the I/O is done to that hole on
xxx::cdev:vxfs:.
■ The size of the file cannot be extended by writes through the Quick I/O
interface.
-a Creates a symbolic link with an absolute path name for a specified file. The
default is to create a symbolic link with a relative path name.
-e (For Oracle database files to allow tablespace resizing.) Extends the file size
by the specified amount.
-h (For Oracle database files.) Creates a file with additional space allocated for
the Oracle header.
-r (For Oracle database files to allow tablespace resizing.) Increases the file to
the specified size.
You can specify file size in terms of bytes (the default), or in kilobytes, megabytes,
gigabytes, or sectors (512 bytes) by adding a k, K, m, M, g, G, s, or S suffix. If the
size of the file including the header is not a multiple of the file system block size,
it is rounded to a multiple of the file system block size before preallocation.
The qiomkfile command creates two files: a regular file with preallocated,
contiguous space; and a symbolic link pointing to the Quick I/O name extension.
The first file created is a regular file named /database/.dbfile, which has
the real space allocated. The second file is a symbolic link named
/database/dbfile. This is a relative link to /database/.dbfile via the Quick
I/O interface. That is, to .dbfile::cdev:vxfs:. This allows .dbfile to be
accessed by any database or application as a raw character device.
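For illustration, a layout like the one shown in the listings below could have been
created with a qiomkfile command similar to the following (the 100 MB size and
the /database/dbfile name are assumptions taken from this example):
$ qiomkfile -s 100m /database/dbfile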
■ If you do not specify the -a option with qiomkfile, a relative path name is
used by default, such as the following:
$ ls -al
-rw-r--r-- 1 oracle dba 104857600 Oct 22 15:03 .dbfile
lrwxrwxrwx 1 oracle dba 19 Oct 22 15:03 dbfile -> .dbfile::cdev:v
or:
$ ls -lL
crw-r----- 1 oracle dba 43,0 Oct 22 15:04 dbfile
-rw-r--r-- 1 oracle dba 104857600 Oct 22 15:04 .dbfile
■ If you specified the -a option with qiomkfile, the results are as follows:
$ ls -al
-rw-r--r-- 1 oracle dba 104857600 Oct 22 15:05 .dbfile
Accessing regular VxFS files through symbolic links
$ cd /database
$ ln -s .dbfile::cdev:vxfs: /database/dbfile
You can keep the symbolic links in a directory separate from the data directories.
For example, you can create a directory named /database and put in all the
symbolic links, with the symbolic links pointing to absolute path names.
# cd /database
# touch .dbfile
# ln -s .dbfile::cdev:vxfs: dbfile
The following example creates a 100 megabyte master device masterdev on the
file system /sybmaster.
To create a new Sybase database device
1 Go to the /sybmaster file system:
$ cd /sybmaster
2 Create the masterdev file and preallocate 100 MB for the file:
You can use this master device while running the sybsetup program or sybinit
script.
3 To create the master device directly, enter:
4 Add a new 500 megabyte database device datadev to the file system /sybdata
on your dataserver:
$ cd /sybdata
$ qiomkfile -s 500m datadev
...
6> go
Note: Quick I/O must be enabled on the file system for Cached Quick I/O to operate.
To enable caching
1 Set the qio_cache_enable parameter of vxtunefs to enable caching on a file
system.
2 Enable the Cached Quick I/O feature for specific files using the qioadmin
command.
Note: The vxtunefs command enables caching for all the Quick I/O files on the
file system.
The following example enables Cached Quick I/O for the file system /database01.
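1 Set the qio_cache_enable parameter for the mounted file system. The
invocation below is a sketch only; verify the exact syntax against the
vxtunefs(1M) manual page:
# vxtunefs -o qio_cache_enable=1 /database01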
2 If desired, make this setting persistent across mounts by adding a file system
entry in the file /etc/vx/tunefstab:
/dev/vx/dsk/datadg/database01 qio_cache_enable=1
/dev/vx/dsk/datadg/database02 qio_cache_enable=1
Note: The cache advisories operate only if Cached Quick I/O is enabled for the file
system. If the qio_cache_enable flag is zero, Cached Quick I/O is OFF for all the
files in that file system even if the individual file cache advisory for a file is ON.
device=/dev/vx/dsk/datadg/database01
dates.dbf,off
names.dbf,off
sell.dbf,on
filename,OFF
$ vxtunefs -p /database01
qio_cache_enable = 1
Check the setting of the flag qio_cache_enable using the vxtunefs command,
and the individual cache advisories for each file, to verify caching.
2 You can add the following line to the file /etc/system to load Quick I/O
whenever the system reboots.
forceload: drv/fdd
3 Create a regular VxFS file and preallocate it to the required size, or use the
qiomkfile command. The size of this preallocation depends on the size
requirement of the database server.
4 Create and access the database using the file name xxx::cdev:vxfs:.
Appendix A
Quick Reference
This appendix includes the following topics:
■ Command summary
■ Using quotas
Command summary
Symbolic links to all VxFS command executables are installed in the /opt/VRTS/bin
directory. Add this directory to the end of your PATH environment variable to
access the commands.
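For example, in the Bourne shell:
$ PATH=$PATH:/opt/VRTS/bin; export PATH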
Table A-1 describes the VxFS-specific commands.
Command Description
cfscluster CFS cluster configuration command. This functionality is available only with the Veritas Cluster
File System product.
cfsdgadm Adds or deletes shared disk groups to or from a cluster configuration. This functionality is
available only with the Veritas Cluster File System product.
cfsmntadm Adds, deletes, modifies, and sets policy on cluster mounted file systems. This functionality is
available only with the Veritas Cluster File System product.
cfsmount, cfsumount Mounts or unmounts a cluster file system. This functionality is available only
with the Veritas Cluster File System product.
df Reports the number of free disk blocks and inodes for a VxFS file system.
ff Lists file names and inode information for a VxFS file system.
fsclustadm Manages cluster-mounted VxFS file systems. This functionality is available only with the Veritas
Cluster File System product.
ncheck Generates path names from inode numbers for a VxFS file system.
qiomkfile Creates a VxFS Quick I/O device file. This functionality is available only with the Veritas Quick
I/O for Databases feature.
qiostat Displays statistics for VxFS Quick I/O for Databases. This functionality is available only with
the Veritas Quick I/O for Databases feature.
qlogadm Administers low level IOCTL for the QuickLog driver. This functionality is available only with
the Veritas QuickLog feature.
qlogattach Attaches a previously formatted QuickLog volume to a QuickLog device. This functionality is
available only with the Veritas QuickLog feature.
qlogck Recovers QuickLog devices during the boot process. This functionality is available only with
the Veritas QuickLog feature.
qlogclustadm Administers Cluster QuickLog devices. This functionality is available only with the Veritas
QuickLog feature.
qlogdb Debugs QuickLog. This functionality is available only with the Veritas QuickLog feature.
qlogdetach Detaches a QuickLog volume from a QuickLog device. This functionality is available only with
the Veritas QuickLog feature.
qlogdisable Remounts a VxFS file system with QuickLog logging disabled. This functionality is available
only with the Veritas QuickLog feature.
qlogenable Remounts a VxFS file system with QuickLog logging enabled. This functionality is available
only with the Veritas QuickLog feature.
qlogmk Creates and attaches a QuickLog volume to a QuickLog device. This functionality is available
only with the Veritas QuickLog feature.
qlogprint Displays records from the QuickLog configuration. This functionality is available only with the
Veritas QuickLog feature.
qlogrec Recovers the QuickLog configuration file during a system failover. This functionality is available
only with the Veritas QuickLog feature.
qlogrm Removes a QuickLog volume from the configuration file. This functionality is available only
with the Veritas QuickLog feature.
qlogstat Prints statistics for running QuickLog devices, QuickLog volumes, and VxFS file systems. This
functionality is available only with the Veritas QuickLog feature.
qlogtrace Prints QuickLog tracing. This functionality is available only with the Veritas QuickLog feature.
vxfsconvert Converts an unmounted file system to VxFS or upgrades a VxFS disk layout version.
vxquot Displays file system ownership summaries for a VxFS file system.
vxquota Displays user disk quotas and usage on a VxFS file system.
The following tables describe the VxFS-specific manual pages, by section.
Section 1 Description
qioadmin Administers VxFS Quick I/O for Databases cache. This functionality is available only with the
Veritas Quick I/O for Databases feature.
qiomkfile Creates a VxFS Quick I/O device file. This functionality is available only with the Veritas Quick
I/O for Databases feature.
qiostat Displays statistics for VxFS Quick I/O for Databases. This functionality is available only with
the Veritas Quick I/O for Databases feature.
Section 1M Description
cfscluster Configures CFS clusters. This functionality is available only with the Veritas Cluster File System
product.
cfsdgadm Adds or deletes shared disk groups to/from a cluster configuration. This functionality is available
only with the Veritas Cluster File System product.
cfsmntadm Adds, deletes, modifies, and sets policy on cluster mounted file systems. This functionality is
available only with the Veritas Cluster File System product.
cfsmount, cfsumount Mounts or unmounts a cluster file system. This functionality is available only
with the Veritas Cluster File System product.
df_vxfs Reports the number of free disk blocks and inodes for a VxFS file system.
ff_vxfs Lists file names and inode information for a VxFS file system.
fsclustadm Manages cluster-mounted VxFS file systems. This functionality is available only with the Veritas
Cluster File System product.
glmconfig Configures Group Lock Managers (GLM). This functionality is available only with the Veritas
Cluster File System product.
ncheck_vxfs Generates path names from inode numbers for a VxFS file system.
qlogadm Administers low level IOCTL for the QuickLog driver. This functionality is available only with
the Veritas QuickLog feature.
qlogck Recovers QuickLog devices during the boot process. This functionality is available only with
the Veritas QuickLog feature.
qlogdetach Detaches a QuickLog volume from a QuickLog device. This functionality is available only with
the Veritas QuickLog feature.
qlogmk Creates and attaches a QuickLog volume to a QuickLog device. This functionality is available
only with the Veritas QuickLog feature.
qlogstat Prints statistics for running QuickLog devices, QuickLog volumes, and VxFS file systems. This
functionality is available only with the Veritas QuickLog feature.
vxfsconvert Converts an unmounted file system to VxFS or upgrades a VxFS disk layout version.
vxquot Displays file system ownership summaries for a VxFS file system.
vxquota Displays user disk quotas and usage on a VxFS file system.
Section 3 Description
vxfs_ap_assign_fs Assigns an allocation policy for all file data and metadata within a
specified file system.
vxfs_ap_enforce_file Ensures that all blocks in a specified file match the file allocation policy.
vxfs_fiostats_set Turns on and off sub-file I/O statistics and resets statistics counters.
vxfs_nattr_fcheck
vxfs_nattr_utimes Sets access and modification times for named data streams.
Section 4 Description
qlog_config Describes the QuickLog configuration file. This functionality is available only with the Veritas
QuickLog feature.
Section 7 Description
qlog Describes the Veritas QuickLog device driver. This functionality is available only with the
Veritas QuickLog feature.
-o N Displays the geometry of the file system and does not write
to the device.
-o largefiles Allows users to create files larger than two gigabytes. The
default option is largefiles.
-s size Directs vxfsconvert to use free disk space past the current end of the
file system to store VxFS metadata.
special Specifies the name of the character (raw) device that contains the file
system to convert.
-o cluster Mounts a file system in shared mode. Available only with the VxFS
cluster file system feature.
Mount options
The mount command has numerous options to tailor a file system for various
functions and environments.
The following table lists some of the specific_options:
Security feature If security is important, use blkclear to ensure that deleted files
are completely erased before the space is reused.
Support for large files If you specify the largefiles option, you can create files larger than
two gigabytes on the file system. The default option is largefiles.
Support for cluster file systems If you specify the cluster option, the file system is mounted in
shared mode. Cluster file systems depend on several other Veritas
products that must be correctly configured before a complete
clustering environment is enabled.
Using databases If you are using databases with VxFS and if you have installed a
license key for the Veritas Quick I/O for Databases feature, the
mount command enables Quick I/O by default (the same as
specifying the qio option). The noqio option disables Quick I/O. If
you do not have Quick I/O, mount ignores the qio option.
Alternatively, you can increase database performance using the
mount option convosync=direct, which utilizes direct I/O.
News file systems If you are using cnews, use delaylog (or
tmplog),mincache=closesync because cnews does an fsync() on
each news file before marking it received. The fsync() is
performed synchronously as required, but other options are
delayed.
Temporary file systems For a temporary file system such as /tmp, where performance is
more important than data integrity, use tmplog,mincache=tmpcache.
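For example, a news file system might be mounted with the options listed above
as follows (the disk group, volume, and mount point names are placeholders):
# mount -F vxfs -o delaylog,mincache=closesync /dev/vx/dsk/newsdg/vol1 /news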
#device           device    mount     FS    fsck   mount    mount
#to mount         to fsck   point     type  pass   at boot  options
fd                -         /dev/fd   fd    -      no       -
/dev/dsk/c0t3d0s1 -         -         swap  -      no       -
Unmounting a file system
# umount /dev/vx/dsk/fsvol/vol1
# umount -a
This unmounts all file systems except /, /usr, /usr/kvm, /var, /proc, /dev/fd,
and /tmp.
To view the status of mounted file systems
◆ Use the mount command to view the status of mounted file systems:
mount -v
This shows the file system type and mount options for all mounted file systems.
The -v option specifies verbose mode.
# mount
/ on /dev/root read/write/setuid on Thu May 26 16:58:24 2004
/proc on /proc read/write on Thu May 26 16:58:25 2004
/dev/fd on /dev/fd read/write on Thu May 26 16:58:26 2004
/tmp on /tmp read/write on Thu May 26 16:59:33 2004
/var/tmp on /var/tmp read/write on Thu May 26 16:59:34 2004
fstyp -v special
Identifying file system types
# fstyp -v /dev/vx/dsk/fsvol/vol1
The output indicates that the file system type is vxfs, and displays file system
information similar to the following:
vxfs
magic a501fcf5 version 7 ctime Tue Jun 25 18:29:39 2003
logstart 17 logend 1040
bsize 1024 size 1048576 dsize 1047255 ninode 0 nau 8
defiextsize 64 ilbsize 0 immedlen 96 ndaddr 10
aufirst 1049 emap 2 imap 0 iextop 0 istart 0
bstart 34 femap 1051 fimap 0 fiextop 0 fistart 0 fbstart
1083
nindir 2048 aulen 131106 auimlen 0 auemlen 32
auilen 0 aupad 0 aublocks 131072 maxtier 17
inopb 4 inopau 0 ndiripau 0 iaddrlen 8 bshift 10
inoshift 2 bmask fffffc00 boffmask 3ff checksum d7938aa1
oltext1 9 oltext2 1041 oltsize 8 checksum2 52a
free 382614 ifree 0
efree 676 413 426 466 612 462 226 112 85 35 14 3 6 5 4 4 0 0
fstyp -v special
# fstyp -v /dev/vx/dsk/fsvol/vol1
The output indicates that the file system type is vxfs, and displays file system
information similar to the following:
vxfs
magic a501fcf5 version 6 ctime Tue Jun 25 18:29:39 2003
logstart 17 logend 1040
bsize 1024 size 1048576 dsize 1047255 ninode 0 nau 8
defiextsize 64 ilbsize 0 immedlen 96 ndaddr 10
aufirst 1049 emap 2 imap 0 iextop 0 istart 0
bstart 34 femap 1051 fimap 0 fiextop 0 fistart 0 fbstart
1083
nindir 2048 aulen 131106 auimlen 0 auemlen 32
auilen 0 aupad 0 aublocks 131072 maxtier 17
inopb 4 inopau 0 ndiripau 0 iaddrlen 8 bshift 10
inoshift 2 bmask fffffc00 boffmask 3ff checksum d7938aa1
oltext1 9 oltext2 1041 oltsize 8 checksum2 52a
free 382614 ifree 0
efree 676 413 426 466 612 462 226 112 85 35 14 3 6 5 4 4 0 0
Resizing a file system
Note: If a file system is full, busy, or too fragmented, the resize operation may
fail.
The device must have enough space to contain the larger file system.
See the format(1M) manual page.
See the Veritas Volume Manager Administrator's Guide.
To extend a VxFS file system
◆ Use the fsadm command to extend a VxFS file system:
newsize The size (in sectors) to which the file system will increase.
-r rawdev Specifies the path name of the raw device if there is no entry in
/etc/vfstab and fsadm cannot determine the raw device.
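For example, the following sketch extends a file system to 262144 sectors (the
device and mount point names are placeholders):
# fsadm -b 262144 -r /dev/vx/rdsk/fsvol/vol1 /mnt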
Note: In cases where data is allocated toward the end of the file system, shrinking
may not be possible. If a file system is full, busy, or too fragmented, the resize
operation may fail.
newsize The size (in sectors) to which the file system will shrink.
-r rawdev Specifies the path name of the raw device if there is no entry in
/etc/vfstab and fsadm cannot determine the raw device.
Warning: After this operation, there is unused space at the end of the device.
You can then resize the device, but be careful not to make the device smaller
than the new size of the file system.
Note: If a file system is full or busy, the reorg operation may fail.
-r rawdev Specifies the path name of the raw device if there is no entry in
/etc/vfstab and fsadm cannot determine the raw device.
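For example, a directory and extent reorganization might be run as follows; the
-d and -e options shown here are assumptions, so verify them against the
fsadm_vxfs(1M) manual page before use:
# fsadm -d -e /mnt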
source The special device name or mount point of the file system to
copy.
destination The name of the special device on which to create the snapshot.
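For example, a snapshot is typically created by mounting the destination with
the snapof and snapsize options; the device names, snapshot size, and mount
point below are placeholders:
# mount -F vxfs -o snapof=/dev/vx/dsk/fsvol/vol1,snapsize=32768 \
/dev/vx/dsk/fsvol/snapvol /snapmount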
# cd /restore
# vxrestore -v -x -f /dev/st1
Using quotas
You can set per-user quotas on VxFS file systems.
See “Using quotas” on page 104.
See the vxquota(1M), vxquotaon(1M), vxquotaoff(1M), and vxedquota(1M) manual
pages.
Turning on quotas
You can enable quotas at mount time or after a file system is mounted. The root
directory of the file system must contain a file named quotas that is owned by
root.
To turn on quotas
1 Turn on quotas for a mounted file system:
vxquotaon mount_point
If the root directory does not contain a quotas file, the mount command
succeeds, but quotas are not turned on.
# touch /mnt/quotas
# vxquotaon /mnt
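To turn on quotas at mount time instead, specify the quota mount option, for
example (the device and mount point names are placeholders):
# mount -F vxfs -o quota /dev/vx/dsk/fsvol/vol1 /mnt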
vxedquota creates a temporary file for a specified user. This file contains one line
of on-disk quota information for each mounted VxFS file system that has a quotas
file.
Quotas do not need to be turned on for vxedquota to work. However, the quota
limits apply only after quotas are turned on for a given file system.
vxedquota has an option to modify time limits. Modified time limits apply to the
entire file system; you cannot set time limits for an individual user.
To set up user quotas
1 Invoke the quota editor:
vxedquota username
2 To modify time limits, specify the -t option:
vxedquota -t
Viewing quotas
The superuser or individual user can view disk quotas and usage on VxFS file
systems using the vxquota command. This command displays the user's quotas
and disk usage on all mounted VxFS file systems where the quotas file exists. You
will see all established quotas regardless of whether or not the quotas are actually
turned on.
To view quotas for a specific user
◆ Use the vxquota command to view quotas for a specific user:
vxquota -v username
Turning off quotas
To turn off quotas for a mounted file system, use the vxquotaoff command:
vxquotaoff mount_point
# vxquotaoff /mnt
Appendix B
Diagnostic messages
This appendix includes the following topics:
■ Kernel messages
Marking an inode bad Inodes can be marked bad if an inode update or a directory-block
update fails. In these types of failures, the file system does not
know what information is on the disk, and considers all the
information that it finds to be invalid. After an inode is marked
bad, the kernel still permits access to the file name, but any
attempt to access the data in the file or change the inode fails.
Disabling transactions If the file system detects an error while writing the intent log, it
disables transactions. After transactions are disabled, the files in
the file system can still be read or written, but no block or inode
frees or allocations, structural changes, directory entry changes,
or other changes to metadata are allowed.
Disabling a file system If an error occurs that compromises the integrity of the file system,
VxFS disables itself. If the intent log fails or an inode-list error
occurs, the super-block is ordinarily updated (setting the
VX_FULLFSCK flag) so that the next fsck does a full structural
check. If this super-block update fails, any further changes to the
file system can cause inconsistencies that are undetectable by the
intent log replay. To avoid this situation, the file system disables
itself.
Warning: Be careful when running this command. By specifying the -y option, all
fsck user prompts are answered with a "yes", which can make irreversible changes
if it performs a full file system check.
Although a log replay may produce a clean file system, do a full structural check
to be safe.
The file system usually becomes disabled because of disk errors. Disk failures that
disable a file system should be fixed as quickly as possible.
See the fsck_vxfs(1M) manual page.
To execute a full structural check
◆ Use the fsck command to execute a full structural check:
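For example (the raw device name is a placeholder):
# fsck -F vxfs -o full -y /dev/vx/rdsk/fsvol/vol1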
Each kernel message includes a unique message ID and a message count (msgcnt)
that increases with each instance of the message, so that the sequence of events
is known when analyzing file system problems.
Each message is also written to an internal kernel buffer that you can view in the
file /var/adm/messages.
In some cases, additional data is written to the kernel buffer. For example, if an
inode is marked bad, the contents of the bad inode are written. When an error
message is displayed on the console, you can use the unique message ID to find
the message in /var/adm/messages and obtain the additional information.
Kernel messages
Some commonly encountered kernel messages are described in the following
table:
■ Description
The file system is out of space.
Often, there is plenty of space and one runaway process used up
all the remaining free space. In other cases, the available free space
becomes fragmented and unusable for some files.
■ Action
Monitor the free space in the file system and prevent it from
becoming full. If a runaway process has used up all the space, stop
that process, find the files created by the process, and remove
them. If the file system is out of space, remove files, defragment,
or expand the file system.
To remove files, use the find command to locate the files that are
to be removed. To get the most space with the least amount of
work, remove large files or file trees that are no longer needed. To
defragment or expand the file system, use the fsadm command.
See the fsadm_vxfs(1M) manual page.
■ Description
The kernel tried to write to a read-only file system. This is an
unlikely problem, but if it occurs, the file system is disabled.
■ Action
The file system was not written, so no action is required. Report
this as a bug to your customer support organization.
003, 004, 005 WARNING: msgcnt x: mesg 003: V-2-3: vx_mapbad - mount_point file
system free extent bitmap in au aun marked bad
■ Description
If there is an I/O failure while writing a bitmap, the map is marked
bad. The kernel considers the maps to be invalid, so does not do
any more resource allocation from maps. This situation can cause
the file system to report out of space or out of inode error messages
even though df may report an adequate amount of free space.
This error may also occur due to bitmap inconsistencies. If a bitmap
fails a consistency check, or blocks are freed that are already free
in the bitmap, the file system has been corrupted. This may have
occurred because a user or process wrote directly to the device or
used fsdb to change the file system.
The VX_FULLFSCK flag is set. If the map that failed was a free
extent bitmap, and the VX_FULLFSCK flag can't be set, then the
file system is disabled.
■ Action
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk. If the problem is not related to an I/O failure, find
out how the disk became corrupted. If no user or process was
writing to the device, report the problem to your customer support
organization. Unmount the file system and use fsck to run a full
structural check.
006, 007 WARNING: msgcnt x: mesg 006: V-2-6: vx_sumupd - mount_point file
system summary update in au aun failed
■ Description
An I/O error occurred while writing the allocation unit or inode
allocation unit bitmap summary to disk. This sets the
VX_FULLFSCK flag on the file system. If the VX_FULLFSCK flag
can't be set, the file system is disabled.
■ Action
Check the console log for I/O errors. If the problem was caused by
a disk failure, replace the disk before the file system is mounted
for write access, and use fsck to run a full structural check.
■ Description
A directory operation failed in an unexpected manner. The mount
point, inode, and block number identify the failing directory. If the
inode is an immediate directory, the directory entries are stored
in the inode, so no block number is reported. If the error is ENOENT
or ENOTDIR, an inconsistency was detected in the directory block.
This inconsistency could be a bad free count, a corrupted hash
chain, or any similar directory structure error. If the error is EIO
or ENXIO, an I/O failure occurred while reading or writing the disk
block.
The VX_FULLFSCK flag is set in the super-block so that fsck will
do a full structural check the next time it is run.
■ Action
Check the console log for I/O errors. If the problem was caused by
a disk failure, replace the disk before the file system is mounted
for write access. Unmount the file system and use fsck to run a full
structural check.
■ Description
When the kernel allocates an inode from the free inode bitmap, it
checks the mode and link count of the inode. If either is non-zero,
the free inode bitmap or the inode list is corrupted.
The VX_FULLFSCK flag is set in the super-block so that fsck will
do a full structural check the next time it is run.
■ Action
Unmount the file system and use fsck to run a full structural check.
■ Description
The file system is out of inodes.
■ Action
Monitor the free inodes in the file system. If the file system is
getting full, create more inodes either by removing files or by
expanding the file system.
See the fsadm_vxfs(1M) online manual page.
■ Description
When the kernel tries to read an inode, it checks the inode number
against the valid range. If the inode number is out of range, the
data structure that referenced the inode number is incorrect and
must be fixed.
The VX_FULLFSCK flag is set in the super-block so that fsck will
do a full structural check the next time it is run.
■ Action
Unmount the file system and use fsck to run a full structural check.
■ Description
For a Version 2 and above disk layout, the inode list is dynamically
allocated. When the kernel tries to read an inode, it must look up
the location of the inode in the inode list file. If the kernel finds a
bad extent, the inode can't be accessed. All of the inode list extents
are validated when the file system is mounted, so if the kernel finds
a bad extent, the integrity of the inode list is questionable. This is
a very serious error.
The VX_FULLFSCK flag is set in the super-block and the file system
is disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
014 WARNING: msgcnt x: mesg 014: V-2-14: vx_iget - inode table overflow
■ Description
All the system in-memory inodes are busy and an attempt was
made to use a new inode.
■ Action
■ Description
An attempt to mark an inode bad on disk, and the super-block
update to set the VX_FULLFSCK flag, failed. This indicates that a
catastrophic disk error may have occurred since both an inode list
block and the super-block had I/O failures. The file system is
disabled to preserve file system integrity.
■ Action
Unmount the file system and use fsck to run a full structural check.
Check the console log for I/O errors. If the disk failed, replace it
before remounting the file system.
■ Description
An I/O error occurred while reading the inode list. The
VX_FULLFSCK flag is set.
■ Action
Check the console log for I/O errors. If the problem was caused by
a disk failure, replace the disk before the file system is mounted
for write access. Unmount the file system and use fsck to run a full
structural check.
017 WARNING: msgcnt x: mesg 017: V-2-17: vx_ilisterr - mount_point file
system inode inumber marked bad in core
■ Description
Log ID overflow. When the log ID reaches VX_MAXLOGID
(approximately one billion by default), a flag is set so the file system
resets the log ID at the next opportunity. If the log ID has not been
reset, when the log ID reaches VX_DISLOGID (approximately
VX_MAXLOGID plus 500 million by default), the file system is
disabled. Since a log reset will occur at the next 60 second sync
interval, this should never happen.
■ Action
Unmount the file system and use fsck to run a full structural check.
■ Description
Intent log failed. The kernel will try to set the VX_FULLFSCK and
VX_LOGBAD flags in the super-block to prevent running a log
replay. If the super-block can't be updated, the file system is
disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
Check the console log for I/O errors. If the disk failed, replace it
before remounting the file system.
■ Description
When a VxFS file system is mounted, the structure is read from
disk. If the file system is marked clean, the structure is correct and
the first block of the intent log is cleared.
If there is any I/O problem or the structure is inconsistent, the
kernel sets the VX_FULLFSCK flag and the mount fails.
If the error isn't related to an I/O failure, this may have occurred
because a user or process has written directly to the device or used
fsdb to change the file system.
■ Action
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk. If the problem is not related to an I/O failure, find
out how the disk became corrupted. If no user or process is writing
to the device, report the problem to your customer support
organization. In either case, unmount the file system and use fsck
to run a full structural check.
■ Description
The remount of the root file system failed. The system will not be
usable if the root file system can't be remounted for read/write
access.
When a root Veritas File System is first mounted, it is mounted
for read-only access. After fsck is run, the file system is remounted
for read/write access. The remount fails if fsck completed a resize
operation or modified a file that was opened before the fsck was
run. It also fails if an I/O error occurred during the remount.
Usually, the system halts or reboots automatically.
■ Action
Reboot the system. The system either remounts the root cleanly
or runs a full structural fsck and remounts cleanly. If the remount
succeeds, no further action is necessary.
Check the console log for I/O errors. If the disk has failed, replace
it before the file system is mounted for write access.
If the system won't come up and a full structural fsck hasn't been
run, reboot the system on a backup root and manually run a full
structural fsck. If the problem persists after the full structural fsck
and there are no I/O errors, contact your customer support
organization.
■ Description
There were active files in the file system and they caused the
unmount to fail.
When the system is halted, the root file system is unmounted. This
happens occasionally when a process is hung and it can't be killed
before unmounting the root.
■ Action
fsck will run when the system is rebooted. It should clean up the
file system. No other action is necessary.
If the problem occurs every time the system is halted, determine
the cause and contact your customer support organization.
■ Description
Update to the current usage table (CUT) failed.
For a Version 2 disk layout, the CUT contains a fileset version
number and total number of blocks used by each fileset.
The VX_FULLFSCK flag is set in the super-block. If the super-block
can't be written, the file system is disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
■ Description
An I/O error occurred while writing the super-block during a resize
operation. The file system is disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk before the file system is mounted for write access.
■ Description
Snapshot file system error.
When the primary file system is written, copies of the original data
must be written to the snapshot file system. If a read error occurs
on a primary file system during the copy, any snapshot file system
that doesn't already have a copy of the data is out of date and must
be disabled.
■ Action
An error message for the primary file system prints. Resolve the
error on the primary file system and rerun any backups or other
applications that were using the snapshot that failed when the
error occurred.
■ Description
A write to the snapshot file system failed.
As the primary file system is updated, copies of the original data
are read from the primary file system and written to the snapshot
file system. If one of these writes fails, the snapshot file system is
disabled.
■ Action
Check the console log for I/O errors. If the disk has failed, replace
it. Resolve the error on the disk and rerun any backups or other
applications that were using the snapshot that failed when the
error occurred.
■ Description
The snapshot file system ran out of space to store changes.
During a snapshot backup, as the primary file system is modified,
the original data is copied to the snapshot file system. This error
can occur if the snapshot file system is left mounted by mistake,
if the snapshot file system was given too little disk space, or the
primary file system had an unexpected burst of activity. The
snapshot file system is disabled.
■ Action
Make sure the snapshot file system was given the correct amount
of space. If it was, determine the activity level on the primary file
system. If the primary file system was unusually busy, rerun the
backup. If the primary file system is no busier than normal, move
the backup to a time when the primary file system is relatively idle
or increase the amount of disk space allocated to the snapshot file
system.
Rerun any backups that failed when the error occurred.
■ Description
During a snapshot backup, each snapshot file system maintains a
block map on disk. The block map tells the snapshot file system
where data from the primary file system is stored in the snapshot
file system. If an I/O operation to the block map fails, the snapshot
file system is disabled.
■ Action
Check the console log for I/O errors. If the disk has failed, replace
it. Resolve the error on the disk and rerun any backups that failed
when the error occurred.
■ Description
File system disabled, preceded by a message that specifies the
reason. This usually indicates a serious disk problem.
■ Action
Unmount the file system and use fsck to run a full structural check.
If the problem is a disk failure, replace the disk before the file
system is mounted for write access.
■ Description
Snapshot file system disabled, preceded by a message that specifies
the reason.
■ Action
Unmount the snapshot file system, correct the problem specified
by the message, and rerun any backups that failed due to the error.
■ Description
When the disk driver encounters an I/O error, it sets a flag in the
super-block structure. If the flag is set, the kernel will set the
VX_FULLFSCK flag as a precautionary measure. Since no other
error has set the VX_FULLFSCK flag, the failure probably occurred
on a data block.
■ Action
Unmount the file system and use fsck to run a full structural check.
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk before the file system is mounted for write access.
■ Description
The kernel encountered an error while resetting the log ID on the
file system. This happens only if the super-block update or log
write encountered a device failure. The file system is disabled to
preserve its integrity.
■ Action
Unmount the file system and use fsck to run a full structural check.
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk before the file system is mounted for write access.
■ Description
VOP_INACTIVE was called for an inode while the inode was being
used. This should never happen, but if it does, the file system is
disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
Report as a bug to your customer support organization.
■ Description
Update to the link count table (LCT) failed.
For a Version 2 and above disk layout, the LCT contains the link
count for all the structural inodes. The VX_FULLFSCK flag is set
in the super-block. If the super-block can't be written, the file
system is disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
■ Description
A read or a write error occurred while accessing file system
metadata. The full fsck flag on the file system was set. The message
specifies whether the disk I/O that failed was a read or a write.
File system metadata includes inodes, directory blocks, and the
file system log. If the error was a write error, it is likely that some
data was lost. This message should be accompanied by another file
system message describing the particular file system metadata
affected, as well as a message from the disk driver containing
information about the disk I/O error.
■ Action
Resolve the condition causing the disk error. If the error was the
result of a temporary condition (such as accidentally turning off
a disk or a loose cable), correct the condition. Check for loose cables,
etc. Unmount the file system and use fsck to run a full structural
check (possibly with loss of data).
In case of an actual disk error, if it was a read error and the disk
driver remaps bad sectors on write, it may be fixed when fsck is
run since fsck is likely to rewrite the sector with the read error. In
other cases, you replace or reformat the disk drive and restore the
file system from backups. Consult the documentation specific to
your system for information on how to recover from disk errors.
The disk driver should have printed a message that may provide
more information.
■ Description
A read or a write error occurred while accessing file data. The
message specifies whether the disk I/O that failed was a read or a
write. File data includes data currently in files and free blocks. If
the message is printed because of a read or write error to a file,
another message that includes the inode number of the file will
print. The message may be printed as the result of a read or write
error to a free block, since some operations allocate an extent and
immediately perform I/O to it. If the I/O fails, the extent is freed
and the operation fails. The message is accompanied by a message
from the disk driver regarding the disk I/O error.
■ Action
Resolve the condition causing the disk error. If the error was the
result of a temporary condition (such as accidentally turning off
a disk or a loose cable), correct the condition. Check for loose cables,
etc. If any file data was lost, restore the files from backups.
Determine the file names from the inode number.
See the ncheck(1M) manual page.
If an actual disk error occurred, make a backup of the file system,
replace or reformat the disk drive, and restore the file system from
the backup. Consult the documentation specific to your system for
information on how to recover from disk errors. The disk driver
should have printed a message that may provide more information.
■ Description
An attempt to write the file system super block failed due to a disk
I/O error. If the file system was being mounted at the time, the
mount will fail. If the file system was mounted at the time and the
full fsck flag was being set, the file system will probably be disabled
and Message 031 will also be printed. If the super-block was being
written as a result of a sync operation, no other action is taken.
■ Action
Resolve the condition causing the disk error. If the error was the
result of a temporary condition (such as accidentally turning off
a disk or a loose cable), correct the condition. Check for loose cables,
etc. Unmount the file system and use fsck to run a full structural
check.
If an actual disk error occurred, make a backup of the file system,
replace or reformat the disk drive, and restore the file system from
backups. Consult the documentation specific to your system for
information on how to recover from disk errors. The disk driver
should have printed a message that may provide more information.
■ Description
An update to the user quotas file failed for the user ID.
The quotas file keeps track of the total number of blocks and inodes
used by each user, and also contains soft and hard limits for each
user ID. The VX_FULLFSCK flag is set in the super-block. If the
super-block cannot be written, the file system is disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
Check the console log for I/O errors. If the disk has a hardware
failure, it should be repaired before the file system is mounted for
write access.
■ Description
A read of the user quotas file failed for the uid.
The quotas file keeps track of the total number of blocks and inodes
used by each user, and contains soft and hard limits for each user
ID. The VX_FULLFSCK flag is set in the super-block. If the
super-block cannot be written, the file system is disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
Check the console log for I/O errors. If the disk has a hardware
failure, it should be repaired before the file system is mounted for
write access.
■ Description
The hard limit on blocks was reached. Further attempts to allocate
blocks for files owned by the user will fail.
■ Action
Remove some files to free up space.
■ Description
The soft limit on blocks was exceeded continuously for longer than
the soft quota time limit. Further attempts to allocate blocks for
files will fail.
■ Action
Remove some files to free up space.
■ Description
The soft limit on blocks is exceeded. Users can exceed the soft limit
for a limited amount of time before allocations begin to fail. After
the soft quota time limit has expired, subsequent attempts to
allocate blocks for files fail.
■ Action
Remove some files to free up space.
■ Description
The hard limit on inodes was exceeded. Further attempts to create
files owned by the user will fail.
■ Action
Remove some files to free inodes.
■ Description
The soft limit on inodes has been exceeded continuously for longer
than the soft quota time limit. Further attempts to create files
owned by the user will fail.
■ Action
Remove some files to free inodes.
■ Description
The soft limit on inodes was exceeded. The soft limit can be
exceeded for a certain amount of time before attempts to create
new files begin to fail. Once the time limit has expired, further
attempts to create files owned by the user will fail.
■ Action
Remove some files to free inodes.
■ Description
To maintain reliable usage counts, VxFS maintains the user quotas
file as a structural file in the structural fileset.
These files are updated as part of the transactions that allocate
and free blocks and inodes. For compatibility with the quota
administration utilities, VxFS also supports the standard user
visible quota files.
When quotas are turned off, synced, or new limits are added, VxFS
tries to update the external quota files. When quotas are enabled,
VxFS tries to read the quota limits from the external quotas file.
If these reads or writes fail, the external quotas file is out of date.
■ Action
Determine the reason for the failure on the external quotas file
and correct it. Recreate the quotas file.
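For example, assuming the file system is mounted at /mnt1 and the VxFS-specific quota utilities (vxquotaoff, vxedquota, vxquotaon) described in the quotas chapter are installed, a minimal recovery sketch might be:
   # vxquotaoff /mnt1
   # rm /mnt1/quotas
   # touch /mnt1/quotas
   # vxedquota username
   # vxquotaon /mnt1
The vxedquota step must be repeated for each user whose limits need to be re-entered.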
■ Description
If there is an I/O failure while writing a bitmap, the map is marked
bad. The kernel considers the map to be invalid and does not
allocate any more resources from it. This situation can cause
the file system to report “out of space” or “out of inodes” error
messages even though df may report an adequate amount of free
space.
This error may also occur due to bitmap inconsistencies. If a bitmap
fails a consistency check, or blocks are freed that are already free
in the bitmap, the file system has been corrupted. This may have
occurred because a user or process wrote directly to the device or
used fsdb to change the file system.
The VX_FULLFSCK flag is set. If the VX_FULLFSCK flag can't be
set, the file system is disabled.
■ Action
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk. If the problem is not related to an I/O failure, find
out how the disk became corrupted. If no user or process was
writing to the device, report the problem to your customer support
organization. Unmount the file system and use fsck to run a full
structural check.
■ Description
An I/O error occurred reading or writing an extent allocation unit
summary.
The VX_FULLFSCK flag is set. If the VX_FULLFSCK flag can't be
set, the file system is disabled.
■ Action
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk. If the problem is not related to an I/O failure, find
out how the disk became corrupted. If no user or process was
writing to the device, report the problem to your customer support
organization. Unmount the file system and use fsck to run a full
structural check.
■ Description
An I/O error occurred reading or writing an inode allocation unit
summary.
The VX_FULLFSCK flag is set. If the VX_FULLFSCK flag cannot be
set, the file system is disabled.
■ Action
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk. If the problem is not related to an I/O failure, find
out how the disk became corrupted. If no user or process was
writing to the device, report the problem to your customer support
organization. Unmount the file system and use fsck to run a full
structural check.
■ Description
An I/O error occurred while writing to the snapshot file system
bitmap. There is no problem with the snapped file system, but the
snapshot file system is disabled.
■ Action
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk. If the problem is not related to an I/O failure, find
out how the disk became corrupted. If no user or process was
writing to the device, report the problem to your customer support
organization. Restart the snapshot on an error free disk partition.
Rerun any backups that failed when the error occurred.
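For example, a mount of the following form might be used to restart the snapshot on a different, error-free volume; the device names, snapshot size (in sectors), and mount point are illustrative:
   # mount -F vxfs -o snapof=/dev/vx/dsk/fsdg/vol1,snapsize=262144 \
     /dev/vx/dsk/fsdg/snapvol1 /snapmnt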
■ Description
An I/O error occurred while reading the snapshot file system
bitmap. There is no problem with snapped file system, but the
snapshot file system is disabled.
■ Action
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk. If the problem is not related to an I/O failure, find
out how the disk became corrupted. If no user or process was
writing to the device, report the problem to your customer support
organization. Restart the snapshot on an error free disk partition.
Rerun any backups that failed when the error occurred.
■ Description
During a file system resize, the remount to the new size failed. The
VX_FULLFSCK flag is set and the file system is disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
After the check, the file system shows the new size.
■ Description
A registered extended attribute intervention routine returned an
invalid return code to the VxFS driver during extended attribute
inheritance.
■ Action
Determine which vendor supplied the registered extended attribute
intervention routine and contact their customer support
organization.
■ Description
An error occurred while reading or writing a fileset structure.
VX_FULLFSCK flag is set. If the VX_FULLFSCK flag can't be set,
the file system is disabled.
■ Action
Unmount the file system and use fsck to run a full structural check.
■ Description
During inode validation, a discrepancy was found between the
inode version number and the fileset version number. The inode
may be marked bad, or the fileset version number may be changed,
depending on the ratio of the mismatched version numbers.
VX_FULLFSCK flag is set. If the VX_FULLFSCK flag can't be set,
the file system is disabled.
■ Action
Check the console log for I/O errors. If the problem is a disk failure,
replace the disk. If the problem is not related to an I/O failure, find
out how the disk became corrupted. If no user or process is writing
to the device, report the problem to your customer support
organization. In either case, unmount the file system and use fsck
to run a full structural check.
066 NOTICE: msgcnt x: mesg 066: V-2-66: DMAPI mount event - buffer
■ Description
An HSM (Hierarchical Storage Management) agent responded to
a DMAPI mount event and returned a message in buffer.
■ Action
Consult the HSM product documentation for the appropriate
response to the message.
■ Description
The file system mount failed because the file system was marked
as being under the management of an HSM agent, and no HSM
agent was found during the mount.
■ Action
Restart the HSM agent and try to mount the file system again.
■ Description
The value auto-tuned for the vxfs_ninode parameter is less than
125% of the ncsize parameter.
■ Action
To prevent this message from occurring, set vxfs_ninode to at least
125% of the value of ncsize. The best way to do this is to adjust
ncsize down, rather than adjusting vxfs_ninode up.
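For example, assuming a target ncsize of 8000, the following /etc/system entries keep vxfs_ninode at 125% of ncsize; the values are illustrative and a reboot is required for them to take effect:
   set ncsize=8000
   set vxfs:vxfs_ninode=10000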
■ Description
The value of the system tunable parameters—vxfs_ninode and
vx_bc_bufhwm—add up to a value that is more than 66% of the
kernel virtual address space or more than 50% of the physical
system memory. VxFS inodes require approximately one kilobyte
each, so both values can be treated as if they are in units of one
kilobyte.
■ Action
To avoid a system hang, reduce the value of one or both parameters
to less than 50% of physical memory or to 66% of kernel virtual
memory.
■ Description
The file system ran out of space while updating a Storage
Checkpoint. The Storage Checkpoint was removed to allow the
operation to complete.
■ Action
Increase the size of the file system. If the file system size cannot
be increased, remove files to create sufficient space for new Storage
Checkpoints. Monitor capacity of the file system closely to ensure
it does not run out of space.
See the fsadm_vxfs(1M) manual page.
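For example, assuming the underlying volume has already been grown and the file system is mounted at /mnt1, the file system could be extended to a new size of 41943040 sectors (20 GB); the size and mount point are illustrative:
   # fsadm -F vxfs -b 41943040 /mnt1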
071 NOTICE: msgcnt x: mesg 071: V-2-71: cleared data I/O error flag in
mount_point file system
■ Description
The user data I/O error flag was reset when the file system was
mounted. This message indicates that a read or write error occurred
while the file system was previously mounted.
See Message Number 038.
■ Action
Informational only, no action required.
072 WARNING: msgcnt x: vxfs: mesg 072: could not failover for
volume_name file system
■ Description
This message is specific to the cluster file system. The message
indicates a problem in a scenario where a node failure has occurred
in the cluster and the newly selected primary node encounters a
failure.
■ Action
Save the system logs and core dump of the node along with the
disk image (metasave) and contact your customer support
organization. The node can be rebooted to join the cluster.
■ Description
This message is specific to the cluster file system. The message
indicates a problem in a scenario where a node failure has occurred
in the cluster and the newly selected primary node encounters a
failure.
■ Action
Save the core dump of the node and contact your customer support
organization. The node can be rebooted to join the cluster.
075 WARNING: msgcnt x: mesg 075: V-2-75: replay fsck failed for
mount_point file system
■ Description
The log replay failed during a failover or while migrating the CFS
primary-ship to one of the secondary cluster nodes. The file system
was disabled.
■ Action
Unmount the file system from the cluster. Use fsck to run a full
structural check and mount the file system again.
■ Description
■ Action
The operation may take a considerable length of time. You can do
a forced unmount, or simply wait for the operation to complete so
that the file system can be unmounted cleanly.
See the umount_vxfs(1M) manual page.
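For example, a forced unmount of a hypothetical mount point /mnt1 might look like:
   # umount -f /mnt1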
■ Description
Disk corruption was detected while changing fileset headers. This
can occur when writing a new inode allocation unit, preventing
the allocation of new inodes in the fileset.
■ Action
Unmount the file system and use fsck to run a full structural check.
■ Description
The inode list for the fileset was corrupted and the corruption was
detected while allocating new inodes. The failed system call returns
an ENOSPC error. Any subsequent inode allocations will fail unless
a sufficient number of files are removed.
■ Action
Unmount the file system and use fsck to run a full structural check.
079 WARNING: msgcnt x: mesg 017: V-2-79: vx_ilisterr - mount_point file
system inode inumber marked bad on disk
080 WARNING: msgcnt x: mesg 080: V-2-80: Disk layout versions older
than Version 4 will not be supported in the next release. It is advisable
to upgrade to the latest disk layout version now.
■ Action
Use the vxupgrade command to begin upgrading file systems
using older disk layouts to Version 5. Consider the following when
planning disk layout upgrades:
■ Version 2 disk layout file systems can support more than 8 million
inodes, while Version 1 disk layout file systems have an 8 million
inode limit.
The Version 1 disk layout provides finer control of disk geometry
than subsequent disk layouts. This finer control is not relevant on
disks employing newer technologies, but can still be applicable on
older hardware. If you are using Version 1 disk layout file systems
on older hardware that needs fine control of disk geometry, a disk
layout upgrade may be problematic.
Images of Version 1 or Version 2 disk layout file systems created
by copy utilities, such as dd or volcopy, will become unusable
after a disk layout upgrade. Offline conversion tools will be
provided in the next VxFS feature release to aid in migrating
volume-image backup copies of Version 1 and Version 2 disk layout
file systems to a Version 4 disk layout.
■ Description
This message displays when CFS detects a possible network
partition and disables the file system locally, that is, on the node
where the message appears.
■ Action
There are one or more private network links for communication
between the nodes in a cluster. At least one link must be active to
maintain the integrity of the cluster. If all the links go down, the
node can no longer communicate with the other nodes in the cluster.
Check the network connections. After verifying that the network
connections are operating correctly, unmount the disabled file
system and mount it again.
■ Description
If a cluster node is in a partitioned state, and if the file system is
on a shared VxVM volume, this volume may become corrupted by
accidental access from another node in the cluster.
■ Action
These shared disks can also be seen by nodes in a different
partition, so they can inadvertently be corrupted. The second
message, 082, indicates that the device mentioned is on a shared
volume and that damage can occur only if a real network partition
has taken place. Do not use the volume on any other node until the
file system is unmounted from the nodes that currently have it
mounted.
083 WARNING: msgcnt x: mesg 083: V-2-83: mount_point file system log
is not compatible with the specified intent log I/O size
■ Description
Either the specified mount logiosize size is not compatible with
the file system layout, or the file system is corrupted.
■ Action
Mount the file system again without specifying the logiosize option,
or use a logiosize value compatible with the intent log specified
when the file system was created. If the error persists, unmount
the file system and use fsck to run a full structural check.
■ Description
In a cluster file system, when the primary of the file system fails,
a secondary node is chosen to assume the role of the primary. The
assuming node normally continues to enforce quotas after becoming
the primary. If the new primary is unable to enforce quotas, this
message is displayed.
■ Action
Issue the quotaon command from any of the nodes that have the
file system mounted.
■ Description
The system administrator sets the quotas for Storage Checkpoints
in the form of a soft limit and hard limit. This message displays
when the hard limit is exceeded.
■ Action
Delete Storage Checkpoints or increase the hard limit.
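For example, to list the Storage Checkpoints of a file system mounted at /mnt1 and remove one of them (the Storage Checkpoint name and mount point are illustrative):
   # fsckptadm list /mnt1
   # fsckptadm remove old_ckpt /mnt1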
■ Description
The system administrator sets the quotas for Storage Checkpoints
in the form of a soft limit and hard limit. This message displays
when the soft limit is exceeded.
■ Action
Delete Storage Checkpoints or increase the soft limit. This is not
a mandatory action, but is recommended.
■ Description
When performing an operation that changes an inode entry, if the
inode is found to be incorrect, this message is displayed.
■ Action
Run a full file system check using fsck to correct the errors.
■ Description
The external quota file, quotas, contains the quota values, which
must range from 0 up to 2147483647. When quotas are turned on by
the quotaon command, this message displays if a quota value
outside this range is encountered.
■ Action
Correct the quota values in the quotas file.
■ Description
The supported quota limit is up to 2147483647 sectors. When
quotas are turned on by the quotaon command, this message
displays when a user exceeds the supported quota limit.
■ Action
Ask the user to delete files to lower the quota below the limit.
■ Description
One or more users or groups has a soft limit set greater than the
hard limit, preventing the BSD quota from being turned on.
■ Action
Check the soft limit and hard limit for every user and group and
confirm that the soft limit is not set greater than the hard limit.
■ Description
The vxfs kernel has experienced an error while trying to manage
the space consumed by the File Change Log file. Because the space
cannot be actively managed at this time, the FCL has been
deactivated and has been truncated to 1 file system block, which
contains the FCL superblock.
■ Action
Re-activate the FCL.
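For example, assuming the file system is mounted at /mnt1, the FCL might be reactivated with:
   # fcladm on /mnt1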
■ Description
The vxfs kernel was unable to map actual storage to the next offset
in the File Change Log file. This is most likely caused by a problem
with allocating to the FCL file. Because no new FCL records can be
written to the FCL file, the FCL has been deactivated.
■ Action
Re-activate the FCL.
093 WARNING: msgcnt x: mesg 093: V-2-93: Disk layout versions older
than Version 6 are not supported for shared mounts. Upgrade to the
latest layout version.
■ Action
Upgrade your disk layout to Version 7 for shared mounts. Use the
vxupgrade command to begin upgrading file systems using older
disk layouts to Version 7.
096 WARNING: msgcnt x: mesg 096: V-2-96: file_system file system fullfsck
flag set - function_name.
■ Description
The next time the file system is mounted, a full fsck must be
performed.
■ Action
No immediate action required. When the file system is unmounted,
run a full file system check using fsck before mounting it again.
097 WARNING: msgcnt x: mesg 097: V-2-97: VxFS failed to create new
thread (error_number, function_address:argument_address)
■ Description
VxFS failed to create a kernel thread due to resource constraints,
which is often a memory shortage.
■ Action
VxFS will retry the thread creation until it succeeds; no immediate
action is required. Kernel resources, such as kernel memory, might
be overcommitted. If so, reconfigure the system accordingly.
098 WARNING: msgcnt x: mesg 098: V-2-98: VxFS failed to initialize File
Change Log for fileset fileset (index number) of mount_point file
system
■ Description
VxFS mount failed to initialize FCL structures for the current fileset
mount. As a result, FCL could not be turned on. The FCL file will
have no logging records.
■ Action
Reactivate the FCL.
099 WARNING: msgcnt x: mesg 099: V-2-99: The specified value for
vx_ninode is less than the recommended minimum value of min_value
■ Description
Auto-tuning or the value specified by the system administrator
resulted in a value lower than the recommended minimum for the
total number of inodes that can be present in the inode cache. VxFS
will ignore the newly tuned value and will keep the value specified
in the message (VX_MINNINODE).
■ Action
Informational only; no action required.
■ Description
The size of the FCL file is approaching the maximum supported file
size. This size is platform specific. When the FCL file reaches the
maximum file size, the FCL will be deactivated and reactivated. All
logging information gathered so far will be lost.
■ Action
Take any corrective action possible to limit the loss of logging
information caused by the FCL being deactivated and reactivated.
■ Description
The size of the FCL file reached the maximum supported file size and the
FCL has been reactivated. All records stored in the FCL file, starting
from the current fc_loff up to the maximum file size, have been purged.
New records will be recorded in the FCL file starting from offset
fs_bsize. The activation time in the FCL is reset to the time of
reactivation. The impact is equivalent to File Change Log being
deactivated and activated.
■ Action
Informational only; no action required.
■ Description
The command attempted to call stat() on a device path to ensure
that the path refers to a character device before opening the device,
but the stat() call failed. The error message will include the
platform-specific message for the particular error that was
encountered, such as "Access denied" or "No such file or directory".
■ Action
The corrective action depends on the particular error.
■ Description
The command attempted to open a disk device, but the open() call
failed. The error message includes the platform-specific message
for the particular error that was encountered, such as "Access
denied" or "No such file or directory".
■ Action
The corrective action depends on the particular error.
■ Description
The command attempted to read the superblock from a device, but
the read() call failed. The error message will include the
platform-specific message for the particular error that was
encountered, such as "Access denied" or "No such file or directory".
■ Action
The corrective action depends on the particular error.
■ Description
The command was invoked on a device that did not contain a valid
VxFS file system.
■ Action
Check that the path specified is what was intended.
■ Description
The command called stat() on a file, which is usually a file system
mount point, but the call failed.
■ Action
Check that the path specified is what was intended and that the
user has permission to access that path.
■ Description
The attempt to mount the file system failed because either the
request was to mount a particular Storage Checkpoint that does
not exist, or the file system is managed by an HSM and the HSM
is not running.
■ Action
In the first case, use the fsckptadm list command to see which
Storage Checkpoints exist and mount the appropriate Storage
Checkpoint. In the second case, make sure the HSM is running. If
the HSM is not running, start and mount the file system again.
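For example, to list the available Storage Checkpoints and then mount one of them (the device, Storage Checkpoint name, and mount point are illustrative):
   # fsckptadm list /mnt1
   # mount -F vxfs -o ckpt=thu_8pm /dev/vx/dsk/fsdg/vol1:thu_8pm /mnt1_thu_8pm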
■ Description
The attempt to mount a VxFS file system has failed because either
the volume being mounted or the directory which is to be the mount
point is busy.
The reason that a VxVM volume could be busy is if the volume is
in a shared disk group and the volume is currently being accessed
by a VxFS command, such as fsck, on a node in the cluster.
One reason that the mount point could be busy is if a process has
the directory open or has the directory as its current directory.
Another reason that the mount point could be busy is if the
directory is NFS-exported.
■ Action
For a busy mount point, if a process has the directory open or has
the directory as its current directory, use the fuser command to
locate the processes and either get them to release their references
to the directory or kill the processes. Afterward, attempt to mount
the file system again.
If the directory is NFS-exported, unexport the directory, such as
by using unshare mntpt on the Solaris operating system.
Afterward, attempt to mount the file system again.
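For example, assuming a busy mount point /mnt1 on a hypothetical volume, the following sequence lists the processes holding the directory, unexports it, and retries the mount:
   # fuser -c /mnt1
   # unshare /mnt1
   # mount -F vxfs /dev/vx/dsk/fsdg/vol1 /mnt1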
■ Description
This message is printed by two different commands:
fsckpt_restore and mount. In both cases, the kernel's attempt
to mount the file system failed because of I/O errors or corruption
of the VxFS metadata.
■ Action
Check the console log for I/O errors and fix any problems reported
there. Run a full fsck.
■ Description
The mount options specified contain mutually-exclusive options,
or in the case of a remount, the new mount options differed from
the existing mount options in a way that is not allowed to change
in a remount.
■ Action
Change the requested mount options so that they are all mutually
compatible and retry the mount.
■ Description
Cluster mounts require the vxfsckd daemon to be running, which
is controlled by VCS.
■ Action
Check the VCS status to see why this service is not running. After
starting the daemon via VCS, try the mount again.
■ Description
In some releases of VxFS, before the VxFS mount command
attempts to mount a file system, mount tries to read the VxFS
superblock to determine the disk layout version of the file system
being mounted so that mount can check if that disk layout version
is supported by the installed release of VxFS. If the attempt to read
the superblock fails for any reason, this message is displayed. This
message will usually be preceded by another error message that
gives more information as to why the superblock could not be read.
■ Action
The corrective action depends on the preceding error, if any.
Appendix C
Disk layout
This appendix includes the following topics:
Version 1 (Not Supported)   Version 1 disk layout is the original VxFS disk layout provided with pre-2.0 versions of VxFS.
Version 2 (Not Supported)   Version 2 disk layout supports features such as filesets, dynamic inode allocation, and enhanced security. The Version 2 layout is available with and without quotas support.
Version 3 (Not Supported)   Version 3 disk layout encompasses all file system structural information in files, rather than at fixed locations on disk, allowing for greater scalability. Version 3 supports files and file systems up to one terabyte in size.
Version 7 (Supported)       Version 7 disk layout enables support for variable and large size history log records.
Some of the disk layout versions were not supported on all UNIX operating systems.
Currently, only the Version 4, 5, 6, and 7 disk layouts can be created and mounted.
Version 1 and 2 file systems cannot be created or mounted. Version 7 is the
default disk layout version.
The vxupgrade command is provided to upgrade an existing VxFS file system to
the Version 5, 6, or 7 layout while the file system remains online. You must upgrade
in steps from older to newer layouts.
See the vxupgrade(1M) manual page.
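For example, a Version 4 file system mounted at /mnt1 might be upgraded one layout version at a time; the mount point is illustrative, and running vxupgrade with no options reports the current disk layout version:
   # vxupgrade /mnt1
   # vxupgrade -n 5 /mnt1
   # vxupgrade -n 6 /mnt1
   # vxupgrade -n 7 /mnt1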
The vxfsconvert command is provided to upgrade Version 1 and 2 disk layouts
to the Version 7 disk layout while the file system is not mounted.
See the vxfsconvert(1M) manual page.
VxFS allocates disk space to files in extents. An extent is a
set of contiguous blocks.
The Version 4 disk layout allows the file system to scale easily to accommodate
large files and large file systems.
The original disk layouts divided the file system space into allocation units (AUs).
The first AU started part way into the file system, which caused potential alignment
problems depending on where it started. Each allocation unit also had
its own summary, bitmaps, and data blocks. Because this AU structural information
was stored at the start of each AU, it also limited the maximum size of an extent
that could be allocated. Replacing the allocation unit model of previous versions
removed the need for AU alignment and the restriction on extent sizes.
The VxFS Version 4 disk layout divides the entire file system space into fixed size
allocation units. The first allocation unit starts at block zero and all allocation
units are a fixed length of 32K blocks. An exception may be the last AU, which
occupies whatever space remains at the end of the file system. Because the first
AU starts at block zero instead of part way through the file system as in previous
versions, there is no longer a need for explicit AU alignment or padding to be
added when creating a file system.
The Version 4 file system also moves away from the model of storing AU structural
data at the start of an AU and puts all structural information in files. So expanding
the file system structures simply requires extending the appropriate structural
files. This removes the extent size restriction imposed by the previous layouts.
All Version 4 structural files reside in the structural fileset.
The structural files in the Version 4 disk layout are:
object location table file Contains the object location table (OLT). The OLT, which is referenced from the super-block, is used to locate the other structural files.
label file Encapsulates the super-block and super-block replicas. Although the
location of the primary super-block is known, the label file can be used
to locate super-block copies if there is structural damage to the file
system.
device file Records device information such as volume length and volume label,
and contains pointers to other structural files.
fileset header file Holds information on a per-fileset basis. This may include the inode
of the fileset's inode list file, the maximum number of inodes allowed,
an indication of whether the file system supports large files, and the
inode number of the quotas file if the fileset supports quotas. When
a file system is created, there are two filesets—the structural fileset
defines the file system structure, the primary fileset contains user
data.
inode list file Both the primary fileset and the structural fileset have their own set
of inodes stored in an inode list file. Only the inodes in the primary
fileset are visible to users. When the number of inodes is increased,
the kernel increases the size of the inode list file.
inode allocation unit file Holds the free inode map, extended operations map, and a summary of inode resources.
log file Maps the block used by the file system intent log.
extent allocation unit state file Indicates the allocation state of each AU by defining whether each AU is free, allocated as a whole (no bitmaps allocated), or expanded, in which case the bitmaps associated with each AU determine which extents are allocated.
extent allocation unit summary file Contains the AU summary for each allocation unit, which contains the number of free extents of each size. The summary for an extent is created only when an allocation unit is expanded for use.
free extent map file Contains the free extent maps for each of the allocation units.
quotas files Contains quota information in records. Each record contains resources
allocated either per user or per group.
The Version 4 disk layout supports Access Control Lists and Block-Level
Incremental (BLI) Backup. BLI Backup is a backup method that stores and retrieves
only the data blocks changed since the previous backup, not entire files. This
saves time, storage space, and the computing resources required to back up
large databases.
Figure C-1 shows how the kernel and utilities build information about the structure
of the file system.
The super-block location is in a known location from which the OLT can be located.
From the OLT, the initial extents of the structural inode list can be located along
with the inode number of the fileset header file. The initial inode list extents
contain the inode for the fileset header file from which the extents associated
with the fileset header file are obtained.
As an example, when mounting the file system, the kernel needs to access the
primary fileset in order to access its inode list, inode allocation unit, quotas file
and so on. The required information is obtained by accessing the fileset header
file from which the kernel can locate the appropriate entry in the file and access
the required information.
The Version 5 disk layout supports larger file systems, but a file system larger
than one terabyte can be created only on a 64-bit kernel operating system. The
maximum file system size on a 32-bit kernel is still one terabyte. Files cannot
exceed two terabytes in size. For 64-bit kernels, the maximum size of the file
system you can create depends on the block size.
If you specify the file system size when creating a file system, the block size
defaults to a value appropriate for that size.
See the mkfs(1M) manual page.
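For example, to create a file system with an explicit 8192-byte block size on a hypothetical volume (append the file system size in sectors if your release requires it):
   # mkfs -F vxfs -o bsize=8192 /dev/vx/rdsk/fsdg/vol1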
The Version 5 disk layout also supports group quotas. Quota limits cannot exceed
one terabyte.
See “About quota files on Veritas File System” on page 102.
Some UNIX commands may not work correctly on file systems larger than one
terabyte.
See “Using UNIX Commands on File Systems Larger than One TB” on page 283.
access control list (ACL) The information that identifies specific users or groups and their access privileges
for a particular file or directory.
agent A process that manages predefined Veritas Cluster Server (VCS) resource types.
Agents bring resources online, take resources offline, and monitor resources to
report any state changes to VCS. When an agent is started, it obtains configuration
information from VCS and periodically monitors the resources and updates VCS
with the resource status.
allocation unit A group of consecutive blocks on a file system that contain resource summaries,
free resource maps, and data blocks. Allocation units also contain copies of the
super-block.
API Application Programming Interface.
asynchronous writes A delayed write in which the data is written to a page in the system’s page cache,
but is not written to disk before the write returns to the caller. This improves
performance, but carries the risk of data loss if the system crashes before the data
is flushed to disk.
atomic operation An operation that either succeeds completely or fails and leaves everything as it
was before the operation was started. If the operation succeeds, all aspects of the
operation take effect at once and the intermediate states of change are invisible.
If any aspect of the operation fails, then the operation aborts without leaving
partial changes.
Block-Level Incremental A Symantec backup capability that does not store and retrieve entire files. Instead,
Backup (BLI Backup) only the data blocks that have changed since the previous backup are backed up.
buffered I/O During a read or write operation, data usually goes through an intermediate kernel
buffer before being copied between the user buffer and disk. If the same data is
repeatedly read or written, this kernel buffer acts as a cache, which can improve
performance. See unbuffered I/O and direct I/O.
contiguous file A file in which data blocks are physically adjacent on the underlying media.
data block A block that contains the actual data belonging to files and directories.
data synchronous A form of synchronous I/O that writes the file data to disk before the write returns,
writes but only marks the inode for later update. If the file size changes, the inode will
be written before the write returns. In this mode, the file data is guaranteed to be
on the disk before the write returns, but the inode modification times may be lost
if the system crashes.
defragmentation The process of reorganizing data on disk by making file data blocks physically
adjacent to reduce access times.
direct extent An extent that is referenced directly by an inode.
direct I/O An unbuffered form of I/O that bypasses the kernel’s buffering of data. With direct
I/O, the file system transfers data directly between the disk and the user-supplied
buffer. See buffered I/O and unbuffered I/O.
discovered direct I/O Discovered Direct I/O behavior is similar to direct I/O and has the same alignment
constraints, except writes that allocate storage or extend the file size do not require
writing the inode changes before returning to the application.
encapsulation A process that converts existing partitions on a specified disk to volumes. If any
partitions contain file systems, /etc/vfstab entries are modified so that the
file systems are mounted on volumes instead. Encapsulation is not applicable on
some systems.
extent A group of contiguous file system data blocks treated as a single unit. An extent
is defined by the address of the starting block and a length.
extent attribute A policy that determines how a file allocates extents.
external quotas file A quotas file (named quotas) must exist in the root directory of a file system for
quota-related commands to work. See quotas file and internal quotas file.
file system block The fundamental minimum size of allocation in a file system. This is equivalent
to the fragment size on some UNIX file systems.
fileset A collection of files within a file system.
fixed extent size An extent attribute used to override the default allocation policy of the file system
and set all allocations for a file to a specific fixed size.
fragmentation The on-going process on an active file system in which the file system is spread
further and further along the disk, leaving unused gaps or fragments between
areas that are in use. This leads to degraded performance because the file system
has fewer options when assigning a file to an extent.
GB Gigabyte (2^30 bytes or 1024 megabytes).
hard limit The hard limit is an absolute limit on system resources for individual users for
file and data block usage on a file system. See quota.
indirect address extent An extent that contains references to other extents, as opposed to file data itself.
A single indirect address extent references indirect data extents. A double indirect
address extent references single indirect address extents.
indirect data extent An extent that contains file data and is referenced via an indirect address extent.
inode A unique identifier for each file within a file system that contains the data and
metadata associated with that file.
inode allocation unit A group of consecutive blocks containing inode allocation information for a given
fileset. This information is in the form of a resource summary and a free inode
map.
intent logging A method of recording pending changes to the file system structure. These changes
are recorded in a circular intent log file.
internal quotas file VxFS maintains an internal quotas file for its internal usage. The internal quotas
file maintains counts of blocks and inodes used by each user. See quotas and
external quotas file.
K Kilobyte (2^10 bytes or 1024 bytes).
large file A file larger than two gigabytes. VxFS supports files up to 8 exabytes in size.
large file system A file system larger than one terabyte. VxFS supports file systems up to 8 exabytes
in size.
latency For file systems, this typically refers to the amount of time it takes a given file
system operation to return to the user.
metadata Structural data describing the attributes of files on a disk.
MB Megabyte (2^20 bytes or 1024 kilobytes).
mirror A duplicate copy of a volume and the data therein (in the form of an ordered
collection of subdisks). Each mirror is one copy of the volume with which the
mirror is associated.
multi-volume file A single file system that has been created over multiple volumes, with each volume
system having its own properties.
MVS Multi-volume support.
object location table The information needed to locate important file system structural elements. The
(OLT) OLT is written to a fixed location on the underlying media (or disk).
object location table A copy of the OLT in case of data corruption. The OLT replica is written to a fixed
replica location on the underlying media (or disk).
page file A fixed-size block of virtual address space that can be mapped onto any of the
physical addresses available on a system.
preallocation A method of allowing an application to guarantee that a specified amount of space
is available for a file, even if the file system is otherwise out of space.
primary fileset The files that are visible and accessible to the user.
quotas Quota limits on system resources for individual users for file and data block usage
on a file system. See hard limit and soft limit.
quotas file The quotas commands read and write the external quotas file to get or change
usage limits. When quotas are turned on, the quota limits are copied from the
external quotas file to the internal quotas file. See quotas, internal quotas file,
and external quotas file.
reservation An extent attribute used to preallocate space for a file.
root disk group A special private disk group that always exists on the system. The root disk group
is named rootdg.
shared disk group A disk group in which the disks are shared by multiple hosts (also referred to as
a cluster-shareable disk group).
shared volume A volume that belongs to a shared disk group and is open on more than one node
at the same time.
snapshot file system An exact copy of a mounted file system at a specific point in time. Used to do
online backups.
snapped file system A file system whose exact image has been used to create a snapshot file system.
soft limit The soft limit is lower than a hard limit. The soft limit can be exceeded for a limited
time. There are separate time limits for files and blocks. See hard limit and quotas.
Storage Checkpoint A facility that provides a consistent and stable view of a file system or database
image and keeps track of modified data blocks since the last Storage Checkpoint.
structural fileset The files that define the structure of the file system. These files are not visible or
accessible to the user.
super-block A block containing critical information about the file system such as the file
system type, layout, and size. The VxFS super-block is always located 8192 bytes
from the beginning of the file system and is 8192 bytes long.
synchronous writes A form of synchronous I/O that writes the file data to disk, updates the inode
times, and writes the updated inode to disk. When the write returns to the caller,
both the data and the inode have been written to disk.
TB Terabyte (2^40 bytes or 1024 gigabytes).
transaction Updates to the file system structure that are grouped together to ensure they are
all completed.
throughput For file systems, this typically refers to the number of I/O operations in a given
unit of time.
unbuffered I/O I/O that bypasses the kernel cache to increase I/O performance. This is similar to
direct I/O, except when a file is extended; for direct I/O, the inode is written to
disk synchronously, for unbuffered I/O, the inode update is delayed. See buffered
I/O and direct I/O.
volume A virtual disk which represents an addressable range of disk blocks used by
applications such as file systems or databases.
volume set A container for multiple different volumes. Each volume can have its own
geometry.
vxfs The Veritas File System type. Used as a parameter in some commands.
VxFS Veritas File System.
VxVM Veritas Volume Manager.
Index
A
access control lists 20
alias for Quick I/O files 187
allocation policies 56
   default 56
   extent 15
   extent based 14
   multi-volume support 125
B
bad block revectoring 33
blkclear 18
blkclear mount option 33
block based architecture 23
block size 14, 279
blockmap for a snapshot file system 99
buffered file systems 18
buffered I/O 63
C
cache advisories 64
Cached Quick I/O 197
Cached Quick I/O read-ahead 197
cio
   Concurrent I/O 39
closesync 18
cluster mount 22
commands
   cron 26
   fsadm 26
   getext 58
   mkfs 279
   qiostat 199
   setext 58
contiguous reservation 57
converting a data Storage Checkpoint to a nodata Storage Checkpoint 78
convosync mount option 31, 35
copy-on-write technique 67, 71
cp_vxfs 58
cpio_vxfs 58
creating a multi-volume support file system 122
creating file systems with large files 38
creating files with mkfs 211–212
creating Quick I/O files 188
cron 26, 42
cron sample script 43
D
data copy 62
data integrity 18
data Storage Checkpoints definition 72
data synchronous I/O 34, 63
data transfer 62
default
   allocation policy 56
   block sizes 14, 279
default_indir_size tunable parameter 46
defragmentation 26
   extent 42
   scheduling with cron 42
delaylog mount option 31–32
device file 280
direct data transfer 62
direct I/O 62
directory reorganization 43
disabled file system
   snapshot 100
   transactions 229
discovered direct I/O 63
discovered_direct_iosize tunable parameter 46
disk layout
   Version 1 277
   Version 2 277
   Version 3 278
   Version 4 278–279
   Version 5 278, 281
   Version 6 278
   Version 7 278
disk space allocation 14, 279
displaying mounted file systems 216
O
O_SYNC 31
object location table file 279
P
parameters
   default 44
   tunable 45
   tuning 44
performance
   overall 30
   snapshot file systems 97
preallocating space for Quick I/O files 191
primary fileset relation to Storage Checkpoints 69
pseudo device 77
Q
qio module
   loading on system reboot 200
qio_cache_enable tunable parameter 50, 197
qiomkfile 188
qiostat 199
Quick I/O 185
   access Quick I/O files as raw devices 187
   access regular UNIX files 190
   creating Quick I/O files 188
   direct I/O 186
   double buffering 187
   extension 187
   read/write locks 186
R
read-ahead functionality in Cached Quick I/O 197
read-only Storage Checkpoints 77
read_ahead 51
read_nstream tunable parameter 45
read_pref_io tunable parameter 45
relative and absolute path names used with symbolic links 190
removable Storage Checkpoints definition 74
reorganization
   directory 43
   extent 43
report extent fragmentation 42
reservation space 55
restrictions on Quick I/O 188
Reverse Path Name Lookup 114
S
sectors
   forming logical blocks 279
sequential I/O 63
setext 58
setfacl 20
snapped file systems 20, 93
   performance 97
   unmounting 94
snapread 94
snapshot 223
   how to create a backup file system 223
W
writable Storage Checkpoints 77
write size 57
write_nstream tunable parameter 46
write_pref_io tunable parameter 45
write_throttle tunable parameter 52