Compellent Storage Center Linux Best Practices
Document revision
Date        Revision  Comments
10/28/2009  A         Initial draft
10/28/2011  B         Fixed typos; added iSCSI info, multipath
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL
ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR
IMPLIED WARRANTIES OF ANY KIND.
© 2012 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without
the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Dell, the DELL logo, the DELL badge, and Compellent are trademarks of Dell Inc. Other trademarks and
trade names may be used in this document to refer to either the entities claiming the marks and names
or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than
its own.
Page 2
Dell Compellent Storage Center Linux Best Practices – Best Practice Guide
Contents
Document revision ...................................................................................................... 2
General syntax .......................................................................................................... 6
Conventions .............................................................................................................. 6
Preface ................................................................................................................... 7
Audience ................................................................................................................. 7
Purpose ................................................................................................................... 7
Customer support ....................................................................................................... 7
Introduction ............................................................................................................. 8
Managing Volumes ...................................................................................................... 8
Scanning for New Volumes ............................................................................................ 8
Kernel Version 2.6-2.6.9 (RHEL 4, SLES 9)....................................................................... 8
Kernel Versions 2.6.11+ (RHEL5, SLES10) ........................................................................ 9
Partitions and Filesystems ............................................................................................ 9
Partitions .............................................................................................................. 9
LVM ..................................................................................................................... 9
Disk Labels and UUIDs for Persistence ........................................................................... 9
New Filesystem Volume Label Creation ....................................................................... 10
Existing Filesystem Volume Label Creation ................................................................... 10
Discover Existing Labels .......................................................................................... 11
Example from /etc/fstab ......................................................................................... 11
Swap Space ......................................................................................................... 11
UUIDs ................................................................................................................. 11
GRUB ................................................................................................................. 12
Unmapping Volumes .................................................................................................. 12
Useful Tools ............................................................................................................ 12
lsscsi.................................................................................................................. 13
scsi_id ................................................................................................................ 13
proc/scsi/scsi ....................................................................................................... 14
dmesg ................................................................................................................ 14
Software iSCSI ......................................................................................................... 15
Network Configuration ............................................................................................ 15
Red Hat Configuration ............................................................................................ 16
SUSE Configuration ................................................................................................ 17
Scanning for New Volumes ....................................................................................... 17
Tables
Table 1. Document syntax ........................................................................................... 6
Figures
Figure 1. Example Storage Center LUN S/N Correlation 13
Figure 2. Server Port Selection 25
Figure 3. Storage Center Front End Port Selection 25
Figure 4. Storage Center Front End Port Selection - LUN 26
Figure 5. Storage Center Volume Mapping Screen 26
Figure 6. Volume Details 37
Figure 7. CompCU Created Replay Details 38
Figure 8. CompCU Created Volume List 41
General syntax
Conventions
Timesavers are tips specifically designed to save time or reduce the number of steps.
Caution indicates the potential for risk including system or data damage.
Warning indicates that failure to follow directions could result in bodily harm.
Preface
Audience
The audience for this document is System Administrators who are responsible for the management of
Linux based systems that utilize the Dell Compellent SAN. Specifically, this document is intended to
cover the general practices pertaining to Red Hat Enterprise Linux (RHEL) v5.x and SUSE Linux
Enterprise Server (SLES) 10.x series. Documentation specific to the newer releases of RHEL 6 and
SLES 11 can be found on the Dell Compellent Knowledge Center portal: http://kc.compellent.com
Purpose
This document is intended to provide an overview of specific information required for administrating
storage on Linux servers connected to the Dell Compellent Storage Center. Due to the wide variety of
Linux distributions available and the variance between different versions, some information may vary
slightly. Users should be aware of these differences and consult the documentation for the specific
version of Linux being used. In general, this guide will address Red Hat Enterprise Linux (RHEL) and
SUSE Linux Enterprise Server (SLES).
This document is intended for administrators with at minimum a basic understanding of Linux systems,
specifically general tasks around managing disk partitions and filesystems.
It is important to note that as is common in Linux, there are many ways to do what is covered in this
white paper. This guide does not contain every possible way, and the way covered might not be the
best for all situations. This documentation is brief and intended as a starting point of reference for end
users. Users are encouraged to consult more detailed documentation available from their specific
distribution.
Also note that this guide will focus almost exclusively on the command line. Many of the distributions
have created graphical tools to achieve many of these tasks. This guide simply focuses on the command
line because it is the most universal.
Customer support
Dell Compellent provides live support 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week,
365 days a year. For additional support, email Dell Compellent at support@compellent.com. Dell
Compellent responds to emails during normal business hours.
Introduction
The goal of this paper is to provide guidance to Linux system administrators who will be utilizing
storage presented from the Dell Compellent Storage Center SAN. It is also intended to be useful for
storage administrators who manage the Dell Compellent SAN in an environment that has Linux-based
hosts connecting to the SAN.
Managing Volumes
Understanding how volumes are managed in Linux systems requires a basic understanding of the /sys
pseudo filesystem. The /sys filesystem is a structure of files that allows for interaction with
various elements of the kernel and modules. Many of the files can be read to discover current
values, while others can be written to trigger events. This is generally done using the
commands cat and echo with a redirect (versus opening them with a traditional text editor).
To interact with the HBAs (including virtual software iSCSI HBAs) values are written to files in
/sys/class/scsi_host/ folder. Each HBA (each port on a multiport card counting as a unique HBA)
has its own hostX folder containing files for issuing scans and reading HBA parameters. Unless
otherwise noted, the ones discussed below will exist on QLogic and Emulex cards, as well as
software iSCSI.
Between 2.6.9 and 2.6.11 a major overhaul of the SCSI stack was implemented. As a
result, instructions are different between pre-2.6.11 and 2.6.11 and later kernels.
There are no negative effects from rescanning an HBA; therefore it is not necessary to
know explicitly which host needs to be rescanned. It is just as easy to rescan all of them
when mapping a new volume.
Linux systems cannot discover LUN 0 on the fly. LUN 0 can only be discovered at boot
time and is thus reserved for the OS Volume in boot from SAN environments. All other
volumes should be mapped at LUN 1 or greater.
There will be no output from either of the commands. Any new LUNs will be logged in
dmesg and to the system messages.
Again there will be no output from the command. Any new LUNs will be logged in dmesg
and to the system messages.
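The rescan command itself did not survive in this copy; on 2.6.11 and later kernels it typically takes the following form, here sketched as a loop over all HBA instances so that no host needs to be singled out (the three dashes are wildcards for channel, target, and LUN):

```shell
# for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done
```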
Partitions
For volumes other than the primary boot drive, partition tables are unnecessary. As a
result, in many situations where only one partition would be required it is better not to
use one. Not using a partition table makes expanding volumes at a later time significantly
easier. In order to resize a volume with a partition table, the existing table must be
deleted and the new table must be carefully recreated using the same starting point.
This process can result in unreadable filesystems. By not using a partition table, volumes
can be expanded in fewer steps, and more recent systems can do the expansion online.
Consult the appendix on expanding a volume for instructions and limitations.
The following example shows creating an ext3 filesystem on a device without a partition
table. Note the prompt to proceed; this can be avoided by adding -F to the command.
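The example itself is missing from this copy; a minimal sketch, assuming a hypothetical device /dev/sdb, would be:

```shell
# mkfs.ext3 -F /dev/sdb
```

mkfs.ext3 prompts for confirmation when pointed at an entire device rather than a partition; -F suppresses that prompt.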
LVM
When deciding whether or not to use LVM, a few things should be considered. For most systems it
is not possible to mount a View Volume of an LVM volume back to the same server for recovery
while the original volume is still mounted, without complicated manual tasks. Many of the
benefits of LVM are already provided at the Compellent level. Due to the
complication with view volumes, LVM is generally not recommended. LVM should be
used in cases where a specific benefit is desired that is not provided by the SAN.
Currently, Red Hat Enterprise Linux 5.4 is the only release that can manage duplicate LVM
signatures with built-in tools.
Disk Labels and UUIDs for Persistence
Volumes are assigned device names such as /dev/sda, /dev/sdb, etc., depending upon how they are discovered by the Linux operating
system via the various interfaces connecting the server to the storage.
The /dev/sdx names are used to designate the volumes for a myriad of things, but most
importantly, mount commands including /etc/fstab. In a static disk environment, the
/dev/sdx name works well for entries in the /etc/fstab file.
However, in the dynamic environment of Fibre Channel or iSCSI connectivity, the Linux
operating system lacks the ability to track these disk designations persistently through
reboots and dynamic additions of new volumes via rescans of the storage subsystems.
There are multiple ways to ensure that disks are referenced by persistent names. This
guide will cover using Disk Labels and UUIDs. Disk Labels or UUIDs should be used with all
single pathed volumes.
Disk labels are also exceptionally useful when scripting Replay recovery. For example, when
a view of a production volume is mapped to a backup server, it is not necessary to know which
device name the view volume is assigned. Since the label is written to the filesystem, the label
goes with the view, and the volume can easily be mounted or manipulated.
Disk labels will not work in a multipathed environment and should not be used there;
multipath device names are persistent by default and will not change. Multipathing
does support aliasing the multipath device names for human-readable names. Consult
the Alias section under Multipath Configuration for more information.
The examples below create a new filesystem with the label FileShare for the various
major filesystem types.
The process below will format the volume, destroying all data on that volume.
It is also possible to set the filesystem label using the -L option of tune2fs.
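The label examples are missing from this copy; sketches for the common cases, assuming a hypothetical device /dev/sde, would be:

```shell
# mkfs.ext3 -L FileShare /dev/sde
# mkfs.xfs -L FileShare /dev/sde
# e2label /dev/sde FileShare
```

The first two commands create new filesystems (destroying existing data); e2label sets the label on an existing ext2/ext3 filesystem without reformatting, equivalent to tune2fs -L.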
# e2label /dev/sde
FileShare
The LABEL= syntax can be used in a variety of places including mount commands and
Grub configuration. Disk labels can also be referenced as a path for applications that do
not recognize the LABEL= syntax. For example, the volume designated by the label
FileShare can be accessed at the path ‘/dev/disk/by-label/FileShare’.
Swap Space
Swap space can also be labeled, though only at the time of creation. This is not a
problem, since no static data is stored in swap. To label an existing swap partition, follow
these steps.
# swapoff /dev/sda1
# mkswap -L swapLabel /dev/sda1
# swapon LABEL=swapLabel
The new swap label can be used in /etc/fstab just like any volume label.
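An /etc/fstab entry using the swap label might look like the following sketch:

```
LABEL=swapLabel  swap  swap  defaults  0 0
```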
UUIDs
An alternative to disk labels is UUIDs. They are static and safe for use anywhere;
however, their long length can make them awkward to work with. A UUID is assigned at
filesystem creation.
Another simple way to discover the UUID of a device or partition is to do a long list on
the /dev/disk/by-uuid directory.
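These discovery steps can be sketched as follows (device name hypothetical); blkid reports the UUID and any label of a filesystem, and udev maintains persistent symlinks named after each UUID:

```shell
# blkid /dev/sde
# ls -l /dev/disk/by-uuid
```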
Disk UUIDs can be used in /etc/fstab or any place where a persistent mapping is required.
Below is an example of its use in /etc/fstab.
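The example did not survive in this copy; a sketch, with a made-up UUID and mount point, would be:

```
UUID=3e6be9de-8139-4a4c-9106-a43f08d82344  /data  ext3  defaults  0 0
```

Substitute the actual value reported by blkid for the volume in question.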
As with disk labels, if an application requires an absolute path, the links created in
/dev/disk/by-uuid should work in almost all situations.
GRUB
In addition to /etc/fstab, GRUB's configuration file should also be reconfigured to reference
a LABEL or UUID. The example below shows using a label for the root volume; a UUID can be
used the same way. Labels or UUIDs can also be used for "resume" if needed.
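The example is missing from this copy; a kernel line from /boot/grub/menu.lst using a label might look like this sketch (kernel version and label are placeholders):

```
kernel /vmlinuz-2.6.18-8.el5 ro root=LABEL=rootVol
```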
Unmapping Volumes
Linux systems store information on each volume presented to it. Even if a volume is unmapped
on the Dell Compellent side, the Linux system will retain information about that volume until
the next reboot. If the Linux system is presented with a volume from the same target using the
same LUN number again, it will reuse the old data on the volume. This can result in
complications and misinformation.
Therefore, it is best practice to always delete the volume information on the Linux side after
the volume has been unmapped. This will not delete any data stored on the volume itself, just
the information about the volume stored by the OS (volume size, type, etc.).
Determine the drive letter of the volume that will be unmapped. For example, /dev/sdc.
Delete the volume information on the Linux OS with the following command replacing sdc with
the correct device name.
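The delete command referenced above typically takes the following form (shown for /dev/sdc per the example):

```shell
# echo 1 > /sys/block/sdc/device/delete
```

This removes only the OS-side record of the device; data on the volume itself is untouched.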
Finally, remove the volume mapping using the Dell Compellent Storage Center GUI.
Useful Tools
Determining which Dell Compellent volume correlates to a specific Linux device can be
tricky, but the following tools can be useful and many are included in the base install.
lsscsi
lsscsi is a tool that parses information from the /proc and /sys pseudo-filesystems into a
simple human-readable output. Although not currently included in the base installs for
either Red Hat 5 or SLES 10, it is in the base repository and can be easily installed.
This output shows two drives from the Dell Compellent; it also shows that three front-end
ports are visible but are not presenting a LUN 0. This is the expected behavior. There are
multiple modifiers for lsscsi that provide even more detailed information.
The first column above shows the [host:channel:target:lun] designation for the volume.
The first number corresponds to the local HBA hostX that the volume is mapped to.
Channel is the SCSI bus address which will always be zero. The third number correlates to
the Compellent front end ports (targets). The last number is the LUN that the volume is
mapped on.
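As an illustration of the notation (all values made up), an lsscsi line for a volume mapped at LUN 5 through host 1, target 2 would read something like:

```
[1:0:2:5]  disk  COMPELNT  Compellent Vol  0505  /dev/sdc
```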
scsi_id
scsi_id can be used to report the WWID of a volume and is available in all base
installations. This WWID can be matched to the volume serial number reported in the Dell
Compellent GUI for accurate correlation.
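The command itself is missing from this copy; on RHEL 5 and SLES 10 the invocation typically looks like the following sketch (device name hypothetical), where -g allows non-whitelisted devices, -u formats the output, and -s takes the sysfs block path:

```shell
# scsi_id -g -u -s /block/sdb
```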
The first part of the WWID is Dell Compellent’s unique ID, the middle part is made up of
the controller number in hex and the last part is the serial number of the volume. To
ensure correct correlation in environments with multiple Dell Compellent Storage
Centers, be sure to check the controller number as well.
The only situation where the two numbers would not correlate is if a Copy Migrate had
been performed. In this case, a new serial number is assigned on the Dell Compellent
side, but the old WWID is used to present to the server so that the server is not
disrupted.
/proc/scsi/scsi
Viewing the contents of this file can provide information about LUNs and targets on
systems that do not have lsscsi installed. However, it is not easy to correlate to a specific
device.
dmesg
The output from dmesg can be useful for discovering what device name was assigned to a
recently discovered volume.
The above output is taken just after a host rescan and shows that a 300 GB volume has
been discovered and assigned as /dev/sdf.
Software iSCSI
Most major Linux distributions have been including a software iSCSI initiator for at least a few releases.
Red Hat has included it in both 4 and 5 and SUSE has included it in 9, 10 and 11. The package can be
installed using the respective package management systems.
Both Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) utilize the open-iscsi
implementation of software iSCSI on the Linux platform. RHEL has included iSCSI support since v4.2,
dating back to October 2005, and SLES has included open-iscsi since v10.1, dating back to May 2006. As
these have had several releases and several years of refinement, it is considered to be a mature
technology.
While iSCSI is considered to be a mature technology that allows organizations to economically scale
into the world of enterprise storage, it has grown in complexity at both the hardware and software
layers. The scope of this document is limited to the default Linux iSCSI software initiator
(open-iscsi). For more advanced implementations (for example, leveraging iSCSI HBAs or drivers that
make use of iSCSI offload engines), please consult the associated vendor's documentation and support
services.
For instructions on setting up an iSCSI network topology, please consult the Storage Center
Connectivity Guide.
This guide first covers some common elements, and then will walk through configuring an iSCSI volume
first on Red Hat, then on SUSE, finishing with some more common elements. For all other
distributions, please consult the documentation from that distribution.
Network Configuration
The system being configured will require a network port that can communicate with the iSCSI
ports on the Dell Compellent Storage Center. This does not necessarily have to be a dedicated
port, but a dedicated port is highly recommended and is generally considered best practice.
The most important thing to consider when configuring an iSCSI volume is the network path. If it
is important that the iSCSI traffic go over a distinct port, or if multipathing is involved,
controlling which traffic is carried on which ports matters. This can be achieved at multiple
levels, which is mostly a matter of choice for the administrator and a function of
what network infrastructure is in play.
It is best practice to separate traffic by subnet. In general, most administrators will dedicate a
second port to iSCSI traffic. This port will be in a different subnet than the rest of the network
traffic. This way, the TCP/IP layer handles the proper routing out the dedicated port.
This is also the best practice for multipathing. In a fully redundant multipath environment, one
switch fabric and corresponding ports should be in one subnet and the other in a different
subnet. This forces the traffic through the proper ports on the server side.
If distinct subnets are not an option, two other options are available: traffic can be
routed at the network layer by defining static routes, or at the iSCSI level via configuration.
Which option to use is largely an administrator's choice.
The following directions assume that a network port has already been configured and can
communicate with the Dell Compellent iSCSI ports.
Red Hat Configuration
The iSCSI software initiator is broken up into two main parts: the daemon, which runs in the
background and handles connections and traffic, and the administration utility, which is used to
configure and modify connections. Before anything can be configured, the daemon needs to be
started. It should also be configured to start automatically in most cases.
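On Red Hat 5 these steps can be sketched as:

```shell
# service iscsi start
# chkconfig iscsi on
```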
The next step is to discover the IQNs for the Dell Compellent ports. For Dell Compellent Storage
Center 4.x, the discovery command needs to be run against each primary iSCSI port on the
system. Starting with Storage Center 5.0 running with virtual ports enabled, the discovery
command only needs to be run against the control port, which will report back all the IQNs on the
system.
In the example below, iSCSI ports on the Dell Compellent system have the IP addresses
10.10.3.1 and 10.10.3.2
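The discovery step, using the addresses above, can be sketched as (on SC 5.x with virtual ports, run it against the control port only):

```shell
# iscsiadm -m discovery -t sendtargets -p 10.10.3.1
# iscsiadm -m discovery -t sendtargets -p 10.10.3.2
```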
The iSCSI daemon saves the nodes in /var/lib/iscsi and will automatically log into them when
the daemon starts. To log in now, the command below tells the software to log into all known
nodes.
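The login command referenced above is typically:

```shell
# iscsiadm -m node --login
```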
The server object can now be created on the Dell Compellent Storage Center.
After creating the server object and mapping a volume to the initiator, the virtual HBA can be
rescanned to discover the new LUN.
As long as the iscsi daemon is set to start on boot, the system will automatically login to the
Dell Compellent targets and discover all volumes.
SUSE Configuration
For SUSE systems, the package that provides the iSCSI initiator is named “open-iscsi” (it is not
necessary to install the “iscsitarget” package).
The iSCSI software initiator is broken up into two main parts: the daemon, which runs in the
background and handles connections and traffic, and the administration utility, which is used to
configure and modify connections. Before anything can be configured, the daemon needs to be
started. It should also be configured to start automatically in most cases.
The system stores information on each target. After the targets have been discovered, they can
be logged into. This creates the virtual HBAs, as well as any disk devices for volumes mapped
at login time.
The last step is to configure the system to automatically login to the targets when the initiator
starts, which should be configured to start at boot time.
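The automatic-login step can be sketched as:

```shell
# iscsiadm -m node --op update -n node.startup -v automatic
```

This marks every known node for automatic login at initiator startup.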
The host number just needs to be replaced with the correct host for the target connection.
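The rescan referenced above typically takes this form (host5 is a placeholder for the iSCSI virtual HBA):

```shell
# echo "- - -" > /sys/class/scsi_host/host5/scan
```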
/etc/fstab Configuration
Since iSCSI is dependent on the network connection being up, any volumes that are added to
/etc/fstab need to be designated as network dependent. The example below will mount the
volume labeled iscsiVol. The important part is adding "_netdev" to the options.
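A sketch of such an entry (mount point hypothetical):

```
LABEL=iscsiVol  /mnt/iscsiVol  ext3  _netdev  0 0
```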
When using iSCSI in a Multipathed environment, however, you can configure the iSCSI daemon
to fail a path very quickly. It will then pass outstanding I/O back to the multipathing layer. If
dm-multipath still has an available route, the I/O will be resubmitted to the live route. If all
available routes are down, dm-multipath will queue I/O until a route becomes available. This
allows an environment to sustain failures at the network and storage levels.
For the iscsi daemon, the following configuration settings directly affect iSCSI connection
timeouts:
To control how often a NOP-Out request is sent to each target, the following value can be set:
node.conn[0].timeo.noop_out_interval = X
To control the timeout for the NOP-Out, the noop_out_timeout value can be used:
node.conn[0].timeo.noop_out_timeout = X
The session replacement timeout is controlled with:
node.session.timeo.replacement_timeout = X
In each case, X is in seconds.
replacement_timeout will control how long to wait for session re-establishment before
failing pending SCSI commands and commands that are being operated on by the SCSI layer's
error handler up to a higher level like multipath, or to an application if multipath is not being
used.
Remember from the NOP-Out section that if a network problem is detected, the running
commands are failed immediately. There is one exception: when the SCSI
layer's error handler is running. To check whether the SCSI error handler is running, iscsiadm can be
run as:
iscsiadm -m session -P 3
When the SCSI error handler is running, commands will not be failed until
node.session.timeo.replacement_timeout seconds have elapsed.
To modify the timer that starts the SCSI Error Handler, you can either write directly to the
device's sysfs file:
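The sysfs write referenced above typically takes this form (device name hypothetical, X in seconds):

```shell
# echo X > /sys/block/sdb/device/timeout
```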
Where X is in seconds, or depending on the Linux distribution, you can modify the udev rule.
To modify the udev rule, open /etc/udev/rules.d/60-raw.rules, and add the following
lines:
For multipath.conf:
The following line will tell dm-multipath to queue I/O in the event of no paths being found.
This line allows multipath to wait for Storage Center to recover in the event of a fail over
event:
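The line in question is the queue_if_no_path feature; a sketch of its placement in multipath.conf:

```
defaults {
    features "1 queue_if_no_path"
}
```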
Timeout and other connection settings are statically created during the discovery step and
written to config files in /var/lib/iscsi/*.
There is no specific timeout value that is appropriate for every environment. In multipathed
fibre channel environments, it is recommended to set timeout values on the HBA to five (5)
seconds. However, additional caution should be taken when determining the appropriate value
to use in an iSCSI configuration. Since iSCSI is often used on shared network switches, it is
extremely important that necessary consideration be made to avoid inadvertent non-iSCSI
network traffic from interfering with the iSCSI storage traffic.
It is important to take into consideration all the different variables that go into the
environment’s configuration and thoroughly test failover scenarios (port, switch, controller,
etc) before deploying into a production environment.
Server Configuration
Server Level Time Out Values
These settings need to be configured for Linux systems that are connected to Dell Compellent
systems without multipathing. Systems that do not have these settings could have volumes go
read-only during controller failover. Do not set these values on multipath systems; consult the
multipath configuration section for the correct settings.
This section covers configuration of QLogic 2xxx HBAs that utilize the qla2xxx module as well as
the Emulex LightPulse HBAs that utilize the lpfc module. This section will cover both the open
source default qla2xxx module and the proprietary QLogic release version.
HBA BIOS settings should be configured to the specifications documented by Dell Compellent
for the specific HBA and Storage Center version.
Module Settings
Depending on the version of Linux, the method for setting the module parameter will be
different. This guide explicitly covers Red Hat Enterprise Linux 5 and SUSE Linux
Enterprise Server 10. The setting should be the same on any Linux system using a 2.6.11
or later kernel, but it has only been tested on Red Hat and SUSE systems. For other
distributions, consult that distribution's specific documentation for how to configure
module parameters.
The important module parameter for the QLogic cards is qlport_down_retry, and for the
Emulex cards it is lpfc_devloss_tmo. These settings determine how long the system
waits to destroy a connection after losing connectivity with the port. During a controller
failover, the WWN for the active port will disappear from the fabric momentarily before
returning on the reserve port on the other controller. This process can take anywhere
from 5 to 60 seconds to fully propagate through a fabric. As a result, the default
timeout of 30 seconds is insufficient and the value is changed to 60.
or
Other module options, such as queue depth, can be left as is. The module will need to
be reloaded for the settings to take effect.
For local boot systems, unmount all SAN volumes and reload the module: for
QLogic, run the commands below; for Emulex, substitute lpfc.
# modprobe -r qla2xxx
# modprobe qla2xxx
For boot from SAN systems, the initial RAM disk needs to be rebuilt so that the setting
will take effect on boot. This will build the new initrd at the same path as the existing
one, so copying the existing one to a safe location first is recommended.
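On Red Hat 5 the rebuild can be sketched as follows (SLES uses mkinitrd with no arguments):

```shell
# cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
```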
Watch the output from the command and make sure that the “Adding module” line for
the applicable module has the options added.
The system will then need to be rebooted. Ensure that the Grub entry points to the
correct initrd.
For non-boot from SAN systems, create/edit the file /etc/modprobe.d/qla2xxx and add
the qlport_down_retry option to the options line for QLogic cards. For Emulex, edit
/etc/modprobe.d/lpfc and add the lpfc_devloss_tmo option. Below is an example for
the QLogic card.
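A sketch of the QLogic file:

```
options qla2xxx qlport_down_retry=60
```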
Or for Emulex:
options lpfc lpfc_devloss_tmo=60
For boot from SAN systems using QLogic, append the following to the kernel line in
/boot/grub/menu.lst for each desired kernel.
qla2xxx.qlport_down_retry=60
Or for Emulex:
lpfc.lpfc_devloss_tmo=60
# ./qlinstall -o qlport_down_retry=60
This will rebuild the initial RAM disk as well. Local boot systems can unload and reload
the qla2xxx module immediately. Boot from SAN systems will have to be rebooted for
the setting to take effect.
Verifying the Parameter
To verify that the parameter has taken effect, run the appropriate following command
and check that the output is 60.
QLogic
# cat /sys/module/qla2xxx/parameters/qlport_down_retry
60
Emulex
# cat /sys/class/scsi_host/host0/lpfc_devloss_tmo
60
If possible, failover should be tested while running I/O to ensure that the configuration
is correct and functional.
In addition, verify that the SCSI block device timeout has been set:
# cat /sys/block/sdc/device/timeout
60
Do this for the correct Dell Compellent block device. If the value returned is not 60,
consult the documentation for the specific distribution in use.
To change the value on the HBA, consult the HBA documentation. To configure the
modules, follow the instructions below.
For the QLogic cards being controlled by the qla2xxx module, the parameter that needs to be
set is ql2xmaxqdepth. By default it is set to 32. For Emulex cards there are two parameters,
lpfc_lun_queue_depth and lpfc_hba_queue_depth.
These values are set using the same procedure as the timeout configuration above. For
example, on a Red Hat system using a QLogic card, the file /etc/modprobe.conf would be
edited to contain a line like the following:
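The exact line depends on the desired depth; for example (64 is illustrative, not a recommendation):

```
options qla2xxx ql2xmaxqdepth=64
```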
Follow the specific instructions for setting the module parameters that correspond to the
system being configured.
Multipath configuration
Though the default multipath configuration will generally appear to work and provide path
failover, key values must be set in order for the system to survive a controller failover. It is
recommended to configure a volume as a multipath device whenever possible. Even in
situations where a single path is used initially, configuring the volume as a multipath device
brings several advantages: multipath device names are persistent across reboots, device
name aliases are supported to help correlate volumes/LUNs to business function, and the
configuration works with advanced Storage Center features such as Live Volume and
availability through controller failover/upgrades.
When a controller fails, the system loses connectivity with the storage for a period of time,
during which it will often fail all paths. The default configuration immediately fails the disk
once all paths are down, which results in the filesystem going read-only. By telling the
system to wait before failing the disk, it can resume traffic once one or more of the paths
have returned.
Starting with Red Hat Enterprise Linux 5.4, the Dell Compellent device definition is already in
the default table. Therefore, it is not necessary to add the below device definition to the
multipath configuration file.
Pre-configuration
Set all HBA settings per spec for the given card and Storage Center version. DO NOT follow the
generic Linux timeout value documentation.
devices {
device {
vendor COMPELNT
product "Compellent Vol"
path_checker tur
no_path_retry queue
}
}
In mixed-transport configurations, iSCSI should serve only as a backup path that is used
when the primary (fibre channel) path fails. If left at multibus then multipathd will attempt
to round-robin between the faster fibre channel and slower iSCSI paths.
In that case, the Compellent device section needs to be changed in /etc/multipath.conf so that
the “path_grouping_policy” is “failover”.
devices {
device {
vendor COMPELNT
product "Compellent Vol"
path_grouping_policy failover
path_checker tur
no_path_retry queue
}
}
Multipath will automatically select the SCSI host adapter with the lowest ID number as the
active path. By nature, the HBAs will always get lower ID numbers than the software initiators,
so there is no need to configure anything special to set the priorities.
PortDown Timeout
When running in a multipath configuration, it is desirable to have the system fail faulty
links quickly. By default, the system will fail the link after 30 seconds, meaning that if a
cable is unplugged, I/O will be halted for 30 seconds before the link is failed. Instead,
the port down timeout should be tuned down to 1-5 seconds.
For QLogic cards, this is achieved by setting the qlport_down_retry parameter for the
qla2xxx module. If using the supplied qla2xxx module, consult the documentation for
the specific distribution. If using the QLogic source version, use the included script to
configure the parameter.
For the Emulex LightPulse cards the setting is lpfc_devloss_tmo for the lpfc module.
For example, with Red Hat based systems using the QLogic card, adding the following
line to /etc/modprobe.conf would set the timeout to five seconds.
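Assuming the stock qla2xxx module, the line would be:

```
options qla2xxx qlport_down_retry=5
```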
Note that with boot from SAN systems, rebuilding the initial RAM disk and
rebooting may be required depending on the system.
The instructions for modifying the timeout for single path systems earlier in this
document can be referenced for more detailed instructions, substituting 5 for 60.
Multipathing a Volume
It is recommended that multipathing be used whenever possible. Even if a volume is to be
initially mapped with a single path, configuring the path through the multipath subsystem
is recommended as a best practice.
The first step in multipathing a volume is to create the necessary mappings. While a volume
can be configured as multipath with only one path, at least two paths are needed to realize
the benefits.
In this example, the server has two Fibre Channel ports and the Dell Compellent has two front
end ports on each controller. They are zoned in two separate “VSANs” to simulate a dual
switch fabric.
After selecting the server to map the volume to, the wizard will prompt for which ports on
the server to map to.
The next screen selects the front end ports on the Dell Compellent to use for the mapping.
Select both ports from one controller.
Lastly assign the LUN. In this case the Dell Compellent presents four pairs of mappings. In
reality there are only two valid paths, however the Dell Compellent system cannot determine
which are valid or not. It is best to create all the mappings unless the correct WWN pairings are
known.
There will now be two up and two down mappings to the server. The down ones can be deleted
for clarity.
Next, rescan the HBAs on the server to detect the new volume.
The new paths to the disk should have been discovered. The output from ‘lsscsi’ verifies this.
This output shows that /dev/sdc and /dev/sdd are both LUN 1, which was the LUN used for
mapping the volume. In order to add the blacklist exception, collect the WWID from the
volume. It will be the same on all paths, so it can be a good sanity check.
blacklist_exceptions {
wwid "36000d310000360000000000000005564"
wwid "36000d3100003600000000000000075c8"
}
To test that the configuration is correct, a dry run of the multipath command shows what
configuration changes would be made if the command was run.
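A sketch of the dry run (the -d switch prints the changes without applying them):

```shell
# multipath -v2 -d
```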
This shows that a multipath device, mpath3, would be created from sdc and sdd, which is what
is expected. Run the command again without “-d” to create the new device. Remember that a
name for the device can be supplied (instead of the automatically generated ‘mpath3’) using
Aliases, see the section below.
Multipath Aliases
The multipath utility will automatically generate a new name for the multipath device. Unlike
the sdX names assigned to drives, these names are persistent over reboots and/or
reconfiguration. This means that they are safe to use in fstab, mount commands, and scripts.
Additionally, an alias can be defined which renames the device to a user defined string. This is
very useful for naming volumes along business function/usage. It is recommended to use
multipath aliases whenever possible.
To assign an alias to a volume, first find the WWID of the volume by running the following
command against one of the devices representing that volume.
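On RHEL5 the command takes the sysfs block path; for example, against one of the paths discovered earlier (device name is illustrative):

```shell
# scsi_id -g -u -s /block/sdc
36000d310000360000000000000000837
```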
Next, add the following section to the /etc/multipath.conf file using the WWID from
the above command.
multipaths {
      multipath {
            wwid "36000d310000360000000000000000837"
            alias "volName"
      }
}
This defines the volume to be named “volName” instead of an assigned mpathX. The
new multipath definition will be created at /dev/mapper/volName.
To define multiple multipath aliases, place each one inside of its own multipath { …}
block inside the single multipaths { … } block.
If the generic multipath definition has already been created, unmount the volume.
Then reload the multipath service with the command:
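On RHEL5, a sketch of the reload:

```shell
# service multipathd reload
```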
This will recreate the definition with the newly assigned alias.
The path /dev/mapper/volName can be referenced anywhere a path to the device is
needed.
Expanding a file system that resides directly on a physical disk, however, can be done.
As always, when modifying partitions and/or file systems, some risk of data loss is present.
Dell Compellent recommends taking a Replay and ensuring a good backup exists of the
volume prior to executing any of the following steps.
These steps can be used to grow a volume that has no partition table on the disk. This does
require unmounting the volume, but does not require a server reboot.
Note that some versions do have the ability to do the resize after the volume has
been mounted. This can minimize the downtime, especially on larger volumes.
Consult the documentation for the specific release for risks and procedures.
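A sketch of the offline sequence for an unpartitioned ext3 volume (device names, alias, and mount point are illustrative; with multipath, each underlying path must be rescanned after the Storage Center volume is extended):

```shell
# umount /data
# echo 1 > /sys/block/sdc/device/rescan
# e2fsck -f /dev/mapper/volName
# resize2fs /dev/mapper/volName
# mount /dev/mapper/volName /data
```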
The easiest way around this limitation is to not use a partition table. For data disks, no partition table
is required; the entire disk can simply be formatted with the file system of choice and mount the drive.
This is accomplished by simply running mkfs on the device without a partition.
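For example, assuming the multipath alias volName and an ext3 file system:

```shell
# mkfs.ext3 /dev/mapper/volName
# mount /dev/mapper/volName /data
```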
The alternative is to use a GPT partition table as opposed to the traditional MBR system. GPT support is
native in RHEL5, SLES 10, and many other modern Linux distributions.
# parted /dev/sdb
Run the following two commands inside of parted replacing 5000G with the volume size needed
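A sketch of the parted session (the start/end values are illustrative):

```
(parted) mklabel gpt
(parted) mkpart primary 0 5000G
```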
In order to use the EXT4 file system a couple of add-on packages need to be installed first. These can
be found on the RHEL5 DVD or downloaded from the Red Hat Network support portal. These are the
e4fsprogs and e4fsprogs-libs packages.
Once installed, the EXT4 file system can be used to format volumes presented to the RHEL5 host from
Storage Center. The example below illustrates the process to create a volume, map it to the RHEL5
host and format it using the EXT4 file system. This example uses a 100GB volume, mapped to the
RHEL5 host in a multipathed environment.
First, create the volume using the Storage Center GUI, then map it to the RHEL5 server object. This
assumes the RHEL5 server object has been configured with both server HBA ports actively zoned to the
Front End HBA ports of the Storage Center SAN.
Once mapped, rescan the scsi_host ports to discover the new volume and its paths. Refer to the
previous sections for details on this step.
Determine the SCSI ID using the scsi_id command (assuming sdc is one of the device paths):
# scsi_id -u -g -s /block/sdc
Add the scsi_id to the blacklist exception section of the /etc/multipath.conf file and add a multipath
section to the multipaths section. Then, reload the multipathd configuration. Refer to the Multipath
Configuration section above for details. The end result should be a multipath device pointing to the
new volume created above:
Next, format the multipath device using the newly available EXT4 file system option:
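A sketch, assuming the multipath device was aliased testvol1 (depending on the e4fsprogs release, the tool may be named mkfs.ext4 or mke4fs; check the installed package):

```shell
# mkfs.ext4 /dev/mapper/testvol1
```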
Finally, mount the device and begin using the volume as needed.
The steps performed below can be dangerous, therefore Dell Compellent recommends taking
a replay and verifying that valid backups exist before performing any of the following steps.
This has not been tested on the root file system and is not recommended to be done on the
root/OS file system.
It is important to note that it is necessary to unmount the file system and perform an fsck as part of
the conversion process.
First, copy some data to the file system and perform a checksum to be compared after the conversion
to ext4 and fsck is complete.
{root@dean} {/testvol2} # df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/testvol2 9.9G 151M 9.2G 2% /testvol2
{root@dean} {/testvol2} # ll
Next, perform the conversion to ext4, unmount the file system and run fsck to complete the
conversion.
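A sketch of the conversion, assuming RHEL5's e4fsprogs tool naming (tune4fs/e4fsck); the feature flags shown are the ones that distinguish ext4 from ext3:

```shell
# umount /testvol2
# tune4fs -O extents,uninit_bg,dir_index /dev/mapper/testvol2
# e4fsck -f /dev/mapper/testvol2
```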
Finally, mount the device as an ext4 file system and confirm the integrity of the test file.
{root@dean} {/testvol2} # df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/testvol2 9.9G 3.8G 5.7G 40% /testvol2
Update the /etc/fstab file and test by rebooting the system with the ext4 file system type used in the
/etc/fstab file to verify the file system mounts without error.
# Testing
/dev/mapper/testvol2 /testvol2 ext4 defaults 0 0
{root@dean} {~} # df
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/mpath0p3 31G 5.9G 24G 21% /
/dev/mapper/mpath0p1 99M 30M 64M 32% /boot
tmpfs 7.9G 0 7.9G 0% /dev/shm
/dev/mapper/testvol2 9.9G 3.8G 5.7G 40% /testvol2
Scripting/Automation
The Dell Compellent Storage Center SAN supports a command line interface for accomplishing
common system/storage administration tasks. Scripting these tasks can be a tremendous time
saver for system administrators and can also help keep the creation, mapping and management
of volumes consistent. The CLI tool is called the Dell Compellent Command Utility (CompCU).
To use CompCU, the server must have the proper java release installed. Refer to the Command Utility
User Guide for more details. The CompCU.jar object can be downloaded from the Dell Compellent
support site. Once installed on the Linux server, this tool can be used to perform Storage Center tasks
from the Linux shell prompt, which can be incorporated into new or existing end-user management
scripts. Below are some common use cases for using CompCU.
The examples below by no means cover the full breadth of CompCU's usefulness; they are
designed to give an initial insight into the sorts of tasks that can be automated with
CompCU. The examples are run on a RHEL5 system connected to a Storage Center SAN in a
multipathed Fibre Channel configuration, with both ports of a QLE2562 HBA bound to the
server object.
Next, download the CompCU zip file from the Dell Compellent support site and install onto the RHEL5
system by extracting the contents. Depending on the details of your RHEL5 installation, you may need
to update the path to the java binary and/or the /etc/alternatives link. Update your system as
appropriate. Refer to the Red Hat documentation for further details.
You can use the “-h” switch to get a help listing of available options for CompCU. Again, refer to the
Dell Compellent Command Utility user guide for further details.
-xmloutputfile <arg> File name to save the CompCU return code in xml
format. Default is cu_output.xml.
<SNIP>
To facilitate the ease of access to using CompCU, you can run CompCU with the “-default” switch to
initially configure an encrypted password file. This file can then be referenced in other commands to
login to the Storage Center and perform the requested actions. Below is an example of this command
syntax to use:
{root@dean} {~/automation} # java -jar CompCU.jar -default -host sc9 -user Admin -password mmm
Compellent Command Utility (CompCU) 5.5.1.4
=================================================================================================
User Name: Admin
Host/IP Address: sc9
=================================================================================================
Connecting to Storage Center: sc9 with user: Admin
Saving CompCu Defaults to file [default.cli]...
This will create a default file called default.cli. You can simply rename this file to match the Storage
Center it refers to and reference it in future commands.
Below is an example command that will create a 100GB volume on the Storage Center SAN and map it
to the RHEL5 system.
=================================================================================================
User Name: Admin
Host/IP Address: sc9
Single Command: volume create -folder Linux -lun 100 -name dean-testvol1-cli -server dean-qle2562 -size 100g
=================================================================================================
Connecting to Storage Center: sc9 with user: Admin
Running Command: volume create -folder Linux -lun 100 -name dean-testvol1-cli -server dean-qle2562 -size 100g
Creating Volume using StorageType 1: storagetype='Assigned-Redundant-4096', redundancy=Redundant, pagesize=4096, diskfolder=Assigned.
Successfully mapped Volume 'dean-testvol1-cli' to Server 'dean-qle2562'
Successfully created Volume 'dean-testvol1-cli', mapped it to Server 'dean-qle2562' on Controller 'SN 101'
Then, rescan the scsi_host paths to discover the new volume. As this is a multipathed system, you can
then set up the multipath.conf file. Refer to the Multipath Configuration section of this white paper for
details on this process. After configuring the multipath device, a series of files are placed on the
volume and a replay is taken using CompCU.
The command below is an example of how to create a replay of the volume just created above:
=================================================================================================
User Name: Admin
Host/IP Address: sc9
Single Command: replay create -lun 200 -volume dean-testvol1-cli
=================================================================================================
Connecting to Storage Center: sc9 with user: Admin
Running Command: replay create -lun 200 -volume dean-testvol1-cli
Creating replay 'CUReplay_68274' on Volume 'dean-testvol1-cli' with no expiration
The above output shows that a replay was created with a default description. The replay size indicates
the amount of unique block data that is frozen for replay recovery if needed.
In the example below a file is mistakenly removed from the source volume. A view volume is then
created and mapped to the host to recover the file. NOTE: In production it is recommended that view
volumes be mapped to a system other than the source host. This will help avoid additional complexity
when more complex volume structures are utilized (e.g. LVM, database volumes, etc).
The command below is used to see a list of the available replays for the given volume:
=================================================================================================
User Name: Admin
Host/IP Address: sc9
Single Command: replay show -volume dean-testvol1-cli
=================================================================================================
Connecting to Storage Center: sc9 with user: Admin
Running Command: replay show -volume dean-testvol1-cli
Now, we can use CompCU to create a view volume of the replay we took earlier, map it to the server
and recover our file. For the purposes of this example, we are mapping the view volume back to the
source server, and only mounting one of the paths to recover the file:
=================================================================================================
User Name: Admin
Host/IP Address: sc9
Single Command: replay createview -index 101-72-1 -view dean-testvol1-cli-view -folder Linux -server dean-qle2562 -lun 205
=================================================================================================
Connecting to Storage Center: sc9 with user: Admin
Running Command: replay createview -index 101-72-1 -view dean-testvol1-cli-view -folder Linux -server dean-qle2562 -lun 205
Creating View Volume 'dean-testvol1-cli-view' on replay 'CUReplay_68274' created at '10/28/2011 06:57:53 pm'...
Successfully mapped Volume 'dean-testvol1-cli-view' to Server 'dean-qle2562'
We can use the scsi_id command to verify the correct device to map by comparing the WWID with the
Volume SN listed in Storage Center. Refer to the Managing Volumes section of this paper for further
details. Then we mount the view volume and recover the file.
{root@dean} {/view} # ll
total 14G
drwx------ 2 root root 16K Oct 28 18:53 lost+found/
---------- 1 root root 3.6G Oct 28 18:54 rhel-server-5.7-x86_64-dvd.iso
-rw-r--r-- 1 root root 1000M Oct 28 18:55 testfile.0
-rw-r--r-- 1 root root 1000M Oct 28 18:55 testfile.1
-rw-r--r-- 1 root root 1000M Oct 28 18:56 testfile.2
-rw-r--r-- 1 root root 1000M Oct 28 18:56 testfile.3
-rw-r--r-- 1 root root 1000M Oct 28 18:56 testfile.4
-rw-r--r-- 1 root root 1000M Oct 28 18:56 testfile.5
-rw-r--r-- 1 root root 1000M Oct 28 18:56 testfile.6
-rw-r--r-- 1 root root 1000M Oct 28 18:56 testfile.7
-rw-r--r-- 1 root root 1000M Oct 28 18:56 testfile.8
-rw-r--r-- 1 root root 1000M Oct 28 18:56 testfile.9
All of these examples, along with numerous other scenarios, can be integrated into new or existing
administration scripts to help system administrators automate common tasks. For example, below we
use a simple script to create ten (10) volumes rapidly and map them to a target server.
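A sketch of such a loop (the defaults file name follows the rename suggested above; the -c switch for passing a single command is an assumption to verify against the CompCU user guide):

```shell
#!/bin/bash
# Create dean-testvol0-cli .. dean-testvol9-cli and map each to the server
for i in $(seq 0 9); do
    java -jar CompCU.jar -defaultname sc9.cli -c "volume create -folder Linux \
        -lun 100 -name dean-testvol${i}-cli -server dean-qle2562 -size 100g"
done
```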
=================================================================================================
User Name: Admin
Host/IP Address: sc9
Single Command: volume create -folder Linux -lun 100 -name dean-testvol0-cli -server dean-qle2562 -size 100g
=================================================================================================
Connecting to Storage Center: sc9 with user: Admin
Running Command: volume create -folder Linux -lun 100 -name dean-testvol0-cli -server dean-qle2562 -size 100g
Creating Volume using StorageType 1: storagetype='Assigned-Redundant-4096', redundancy=Redundant, pagesize=4096, diskfolder=Assigned.
Successfully mapped Volume 'dean-testvol0-cli' to Server 'dean-qle2562'
Successfully created Volume 'dean-testvol0-cli', mapped it to Server 'dean-qle2562' on Controller 'SN 101'
<SNIP>
The above output shows ten (10) volumes being quickly created from the Linux shell prompt
using shell scripting and CompCU.
Prior to making any changes to the following parameters, a good understanding of the environment's
current workload should be established. There are numerous methods by which this can be
accomplished, in addition to the System and/or Storage Administrator's perception based on day-to-day
experience supporting the environment. One such tool is the Dell Performance Analysis Collection Kit
(DPACK), which is freely available for download from the URL:
http://search.dell.com/results.aspx?s=gen&c=us&l=en&cs=&k=dpack&cat=sup&x=0&y=0
There are some general “rules of thumb” to keep in mind when it comes to performance tuning with
Linux:
1. Performance tuning is as much an art as a science. As there are a number of
variables that impact performance (I/O in particular), there are no specific values
that can be recommended for every environment. It is best to begin with as few
variables as possible and then add more layers as one tunes the system. For
example, start with single path, tune and then add multipath.
2. Make one change at a time and test the effect on performance with a performance
monitoring tool before making subsequent changes.
3. It is considered Best Practice to make sure all original settings are recorded so that
changes can be reverted to a known “stable” state.
4. As with other system tuning (e.g. failover), changes should always be made in non-
production installations first, and validated with as many environmental conditions
as possible before inserting changes into production environments.
There is one more “rule of thumb” worth mentioning: if performance needs are being met, it
is generally best to leave the settings as they are to avoid introducing changes that can make
the system less stable.
In addition, an understanding of the differences between block and file level data should be
established in order to be able to effectively target the tunable(s) that can most effectively impact
performance in a positive manner. Although the Dell Compellent Storage Center array is a block-based
storage device, the support for the iSCSI transport mechanism introduces performance considerations
that are typically associated with network and file level tuning.
When validating whether a change is having an impact on performance, leverage the Charting feature
of Enterprise Manager to track the performance. In addition, be sure to make singular changes
between iterations in order to better track which variables have the most effect (good or bad) on I/O
performance.
This value can be changed on the HBA firmware as well as from within the Linux kernel module for the
HBA. Keep in mind that the lower value takes precedence. Therefore, one approach would be to set
the HBA setting to a high number and then tune the value downward from within the Linux kernel
module.
Refer to the Server Configuration section for details on modifying this value for the particular HBA
model being used.
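The runtime change to the I/O scheduler is made through sysfs; for example:

```shell
# echo deadline > /sys/block/sda/queue/scheduler
```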
The above command has changed the I/O scheduler for device “sda” to use the deadline
option instead of the cfq. This command will change for only the currently running instance. If
you want to make this change in subsequent system boots, a boot-time script could be used to
make this change on a per device basis, or for system wide I/O scheduler changes, pass in the
proper kernel option at boot time, such as:
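For example, appending the elevator option to the kernel line in /boot/grub/menu.lst (the kernel version and root device are illustrative):

```
kernel /vmlinuz-2.6.18-274.el5 ro root=/dev/VolGroup00/LogVol00 elevator=deadline
```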
In multipath and LVM configurations, this modification should be made to each device
used by the device-mapper subsystem.
read_ahead_kb
This parameter is used to tell the Linux kernel how much to read (in kilobytes) at a time when
the kernel detects it is sequentially reading from a block device. Modifying this value can have
a noticeable effect on performance in heavy sequential read workloads. By default, on RHEL
5.x, this value is set to 128. Increasing this to a larger size may result in higher read
throughput performance.
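For example, to raise it to 512 KB for one device (the value is illustrative and should be benchmarked):

```shell
# echo 512 > /sys/block/sda/queue/read_ahead_kb
```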
nr_requests
This value is used by the kernel to set the depth of the request queue and is often used in
conjunction with changes to the Queue Depth of the HBA. With the cfq I/O scheduler, this is
set to 128 by default. Increasing this value will set the I/O subsystem to a larger threshold to
which it will continue scheduling requests. This keeps the I/O subsystem moving in one
direction longer, which can be more efficient in handling disk I/O.
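For example (512 is illustrative):

```shell
# echo 512 > /sys/block/sda/queue/nr_requests
```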
Device-Mapper-Multipath rr_min_io
When taking advantage of multipath configuration in which multiple physical paths can be leveraged to
perform I/O operations to a multipathed device, the rr_min_io parameter can be modified to optimize
the I/O subsystem. The rr_min_io specifies the number of I/O requests to route to a path before
switching to the next path in the current path group.
By default rr_min_io is set to 1000, which is generally much too high. A general “rule of thumb” is to
set this to 2 times the Queue Depth value and test performance. The goal of modifying this setting is
to create an I/O flow that most efficiently fills the I/O “buckets” in equal proportions as it passes
through the Linux I/O subsystem.
This value is modified by making changes to the /etc/multipath.conf file in the “Defaults” section. For
example:
#defaults {
# udev_dir /dev
# polling_interval 10
# selector "round-robin 0"
# path_grouping_policy multibus
# getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
# prio_callout /bin/true
# path_checker readsector0
# rr_min_io 100
# rr_weight priorities
# failback immediate
# no_path_retry fail
# user_friendly_name yes
#}
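To apply the setting, an uncommented stanza is needed; for example, with an HBA queue depth of 32 the rule of thumb above gives:

```
defaults {
        rr_min_io 64
}
```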
iSCSI Considerations
Tuning performance for iSCSI is as much an effort in Ethernet network tuning as it is block-level tuning.
Many of the common Ethernet kernel tunable parameters should be experimented with in order to
determine what settings provide the highest performance gain with iSCSI. Often simply increasing the
frame size supported to jumbo frames can lead to iSCSI performance improvements over 1Gb/10Gb
Ethernet. As with Fibre Channel, changes should be made incrementally and evaluated against multiple
workload types expected in the environment in order to fully understand the affect on overall
performance.
In other words, tuning performance for iSCSI is often more time consuming as one must consider the
block-level subsystem tuning as well as network (Ethernet) tuning. A solid understanding of the various
Linux subsystem layers involved is necessary to effectively tune the system.
Kernel parameters that can be tuned for performance are found under /proc/sys/net/core and
/proc/sys/net/ipv4. Once optimal values are determined, they can be set permanently in the
/etc/sysctl.conf file.
Like most modern OSes, Linux does a good job of auto-tuning TCP buffers; however, by default some
of the settings are set conservatively low. Experimenting with the following kernel parameters can
lead to increased network performance, which in turn can affect iSCSI performance.
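The parameters most commonly adjusted are the TCP buffer limits; an illustrative /etc/sysctl.conf fragment (the values are starting points only and must be tuned and tested for the environment):

```
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```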
Summary
The above information is intended to give a System and/or Storage Administrator a starting point by
which to affect performance on a Linux system using Dell Compellent Storage Center volumes. There
are many variables that can be tuned on a Linux host and a different combination of modifications will
suit different installations better (or worse) than others. Incremental changes should be employed
following the pattern: Change, Test, and Repeat.