XenServer 5.5.0 Reference
5.5.0
Published June 2009
1.0 Edition
XenServer Administrator's Guide: Release 5.5.0
Published June 2009
Copyright © 2008 Citrix Systems, Inc.
Xen®, Citrix®, XenServer™, XenCenter™ and logos are either registered trademarks or trademarks of Citrix Systems, Inc. in the
United States and/or other countries. Other company or product names are for informational purposes only and may be trademarks
of their respective owners.
This product contains an embodiment of the following patent pending intellectual property of Citrix Systems, Inc.:
1. United States Non-Provisional Utility Patent Application Serial Number 11/487,945, filed on July 17, 2006, and entitled “Using
Writeable Page Tables for Memory Address Translation in a Hypervisor Environment”.
2. United States Non-Provisional Utility Patent Application Serial Number 11/879,338, filed on July 17, 2007, and entitled “Tracking
Current Time on Multiprocessor Hosts and Virtual Machines”.
1. Document Overview ..................................................................................................... 1
How this Guide relates to other documentation .......................................................... 1
2. XenServer hosts and resource pools ............................................................................ 2
Hosts and resource pools overview .......................................................................... 2
Requirements for creating resource pools ................................................................. 2
Creating a resource pool .......................................................................................... 3
Adding shared storage ............................................................................................. 4
Installing and managing VMs on shared storage ........................................................ 4
Removing a XenServer host from a resource pool ..................................................... 5
High Availability ....................................................................................................... 6
HA Overview ................................................................................................... 6
Configuration Requirements ............................................................................. 7
Restart priorities .............................................................................................. 8
Enabling HA on a XenServer pool ............................................................................ 9
Enabling HA using the CLI ............................................................................... 9
Removing HA protection from a VM using the CLI ........................................... 10
Recovering an unreachable host ..................................................................... 10
Shutting down a host when HA is enabled ...................................................... 10
Shutting down a VM when it is protected by HA ............................................... 10
Authenticating users using Active Directory (AD) ...................................................... 11
Configuring Active Directory authentication ...................................................... 11
User authentication ........................................................................................ 12
Removing access for a user ........................................................................... 13
Leaving an AD domain ................................................................................... 14
3. Storage ..................................................................................................................... 15
Storage Overview .................................................................................................. 15
Storage Repositories (SRs) ............................................................................ 15
Virtual Disk Images (VDIs) ............................................................................. 15
Physical Block Devices (PBDs) ....................................................................... 15
Virtual Block Devices (VBDs) .......................................................................... 16
Summary of Storage objects .......................................................................... 16
Virtual Disk Data Formats ............................................................................... 16
Storage configuration ............................................................................................. 18
Creating Storage Repositories ........................................................................ 18
Upgrading LVM storage from XenServer 5.0 or earlier ...................................... 19
LVM performance considerations .................................................................... 19
Converting between VDI formats .................................................................... 20
Probing an SR ............................................................................................... 20
Storage Multipathing ...................................................................................... 23
Storage Repository Types ...................................................................................... 24
Local LVM ..................................................................................................... 25
Local EXT3 VHD ........................................................................................... 26
udev .............................................................................................................. 26
ISO ............................................................................................................... 27
EqualLogic ..................................................................................................... 27
NetApp .......................................................................................................... 28
Software iSCSI Support ................................................................................. 33
Managing Hardware Host Bus Adapters (HBAs) .............................................. 34
LVM over iSCSI ............................................................................................. 35
NFS VHD ...................................................................................................... 38
LVM over hardware HBA ................................................................................ 39
Citrix StorageLink Gateway (CSLG) SRs ......................................................... 40
Managing Storage Repositories ............................................................................. 44
Destroying or forgetting a SR ......................................................................... 45
XenServer Administrator's Guide iv
Introducing an SR ......................................................................................... 45
Resizing an SR .............................................................................................. 46
Converting local Fibre Channel SRs to shared SRs .......................................... 46
Moving Virtual Disk Images (VDIs) between SRs ............................................. 46
Adjusting the disk IO scheduler ...................................................................... 47
Virtual disk QoS settings ........................................................................................ 48
4. Networking ................................................................................................................ 50
XenServer networking overview .............................................................................. 50
Network objects ............................................................................................. 50
Networks ....................................................................................................... 51
VLANs ........................................................................................................... 51
NIC bonds ..................................................................................................... 52
Initial networking configuration ....................................................................... 53
Managing networking configuration ......................................................................... 53
Creating networks in a standalone server ........................................................ 54
Creating networks in resource pools ............................................................... 54
Creating VLANs ............................................................................................. 54
Creating NIC bonds on a standalone host ....................................................... 55
Creating NIC bonds in resource pools ............................................................. 57
Configuring a dedicated storage NIC ............................................................... 60
Controlling Quality of Service (QoS) ................................................................ 61
Changing networking configuration options ...................................................... 61
NIC/PIF ordering in resource pools ................................................................. 64
Networking Troubleshooting .................................................................................... 65
Diagnosing network corruption ........................................................................ 66
Recovering from a bad network configuration .................................................. 66
5. Workload Balancing ................................................................................................... 67
Workload Balancing Overview ................................................................................ 67
Workload Balancing Basic Concepts ............................................................... 67
Designing Your Workload Balancing Deployment ..................................................... 69
Deploying One Server .................................................................................... 69
Planning for Future Growth ............................................................................ 70
Increasing Availability ..................................................................................... 70
Multiple Server Deployments .......................................................................... 70
Workload Balancing Security ......................................................................... 73
Workload Balancing Installation Overview ................................................................ 74
Workload Balancing System Requirements ...................................................... 75
Workload Balancing Data Store Requirements ................................................. 76
Operating System Language Support .............................................................. 78
Preinstallation Considerations ......................................................................... 78
Installing Workload Balancing ......................................................................... 78
Windows Installer Commands for Workload Balancing ............................................. 83
ADDLOCAL ................................................................................................... 84
CERT_CHOICE ............................................................................................. 85
CERTNAMEPICKED ...................................................................................... 85
DATABASESERVER ...................................................................................... 86
DBNAME ....................................................................................................... 86
DBUSERNAME .............................................................................................. 87
DBPASSWORD ............................................................................................. 87
EXPORTCERT ............................................................................................... 88
EXPORTCERT_FQFN .................................................................................... 88
HTTPS_PORT ............................................................................................... 89
INSTALLDIR .................................................................................................. 89
PREREQUISITES_PASSED .......................................................................... 89
RECOVERYMODEL ....................................................................................... 90
USERORGROUPACCOUNT ........................................................................... 90
WEBSERVICE_USER_CB ............................................................................ 91
WINDOWS_AUTH ......................................................................................... 91
Initializing and Configuring Workload Balancing ....................................................... 92
Initialization Overview ..................................................................................... 92
To initialize Workload Balancing ...................................................................... 93
To edit the Workload Balancing configuration for a pool .................................... 94
Authorization for Workload Balancing ............................................................. 95
Configuring Antivirus Software ........................................................................ 96
Changing the Placement Strategy ................................................................... 97
Changing the Performance Thresholds and Metric Weighting ............................ 97
Accepting Optimization Recommendations .............................................................. 98
To accept an optimization recommendation ..................................................... 99
Choosing an Optimal Server for VM Initial Placement, Migrate, and Resume .............. 99
To start a virtual machine on the optimal server ............................................... 99
Entering Maintenance Mode with Workload Balancing Enabled ............................... 100
To enter maintenance mode with Workload Balancing enabled ........................ 100
Working with Workload Balancing Reports ............................................................. 100
Introduction .................................................................................................. 101
Types of Workload Balancing Reports ........................................................... 101
Using Workload Balancing Reports for Tasks ................................................. 101
Creating Workload Balancing Reports ........................................................... 101
Generating Workload Balancing Reports ....................................................... 103
Workload Balancing Report Glossary ............................................................ 104
Administering Workload Balancing ........................................................................ 107
Disabling Workload Balancing on a Resource Pool ........................................ 107
Reconfiguring a Resource Pool to Use Another WLB Server ........................... 108
Uninstalling Workload Balancing ................................................................... 108
Troubleshooting Workload Balancing ..................................................................... 108
General Troubleshooting Tips ....................................................................... 108
Error Messages .......................................................................................... 109
Issues Installing Workload Balancing ............................................................ 109
Issues Initializing Workload Balancing ........................................................... 109
Issues Starting Workload Balancing .............................................................. 110
Workload Balancing Connection Errors .......................................................... 110
Issues Changing Workload Balancing Servers ............................................... 110
6. Backup and recovery ............................................................................................... 111
Backups .............................................................................................................. 111
Full metadata backup and disaster recovery (DR) .................................................. 112
DR and metadata backup overview ............................................................... 112
Backup and restore using xsconsole ............................................................. 113
Moving SRs between hosts and Pools .......................................................... 114
Using Portable SRs for Manual Multi-Site Disaster Recovery .......................... 115
VM Snapshots ..................................................................................................... 115
Regular Snapshots ....................................................................................... 115
Quiesced Snapshots .................................................................................... 115
Taking a VM snapshot .................................................................................. 117
VM Rollback ................................................................................................ 117
Coping with machine failures ................................................................................ 118
Member failures ........................................................................................... 118
Master failures ............................................................................................. 118
Pool failures ................................................................................................. 119
Coping with Failure due to Configuration Errors ............................................. 119
Chapter 1. Document Overview
How this Guide relates to other documentation
This section summarizes the rest of the XenServer documentation set so that you can find the information
you need. The following documents are covered:
• XenServer Installation Guide provides a high level overview of XenServer, along with step-by-step
instructions on installing XenServer hosts and the XenCenter management console.
• XenServer Virtual Machine Installation Guide describes how to install Linux and Windows VMs on top of
a XenServer deployment. As well as installing new VMs from install media (or using the VM templates
provided with the XenServer release), this guide also explains how to create VMs from existing physical
machines, using a process called P2V.
• XenServer Software Development Kit Guide presents an overview of the XenServer SDK -- a selection
of code samples that demonstrate how to write applications that interface with XenServer hosts.
• XenAPI Specification provides a programmer's reference guide to the XenServer API.
• XenServer User Security considers the issues involved in keeping your XenServer installation secure.
• Release Notes provides a list of known issues that affect this release.
Chapter 2. XenServer hosts and
resource pools
This chapter describes how resource pools can be created through a series of examples using the xe
command line interface (CLI). A simple NFS-based shared storage configuration is presented and a number
of simple VM management examples are discussed. Procedures for dealing with physical node failures are
also described.
A pool always has at least one physical node, known as the master. Only the master node exposes an
administration interface (used by XenCenter and the CLI); the master forwards commands to individual
members as necessary.
The software will enforce additional constraints when joining a server to a pool, in particular:
• the CPUs on the server joining the pool are the same (in terms of vendor, model, and features) as the
CPUs on servers already in the pool.
• the server joining the pool is running the same version of XenServer software, at the same patch level,
as servers already in the pool.
You must also check that the clock of the host joining the pool is synchronized to the same time as the
pool master (for example, by using NTP), that its management interface is not bonded (you can configure
this once the host has successfully joined the pool), and that its management IP address is static (either
configured on the host itself or by using an appropriate configuration on your DHCP server).
XenServer hosts in resource pools may contain different numbers of physical network interfaces and have
local storage repositories of varying size. In practice, it is often difficult to obtain multiple servers with the
exact same CPUs, and so minor variations are permitted. If you are sure that it is acceptable in your
environment for hosts with varying CPUs to be part of the same resource pool, then the pool joining operation
can be forced by passing a --force parameter.
Note
The requirement for a XenServer host to have a static IP address to be part of a resource pool also applies
to servers providing shared NFS or iSCSI storage for the pool.
Although not a strict technical requirement for creating a resource pool, the advantages of pools (for
example, the ability to dynamically choose on which XenServer host to run a VM and to dynamically move a
VM between XenServer hosts) are only available if the pool has one or more shared storage repositories.
If possible, postpone creating a pool of XenServer hosts until shared storage is available. Once shared
storage has been added, Citrix recommends that you move existing VMs whose disks are in local storage
into shared storage. This can be done using the xe vm-copy command or XenCenter.
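As a sketch, moving a locally-stored VM to shared storage with xe vm-copy might look like the following. The name and UUID placeholders are hypothetical; substitute values from xe vm-list and xe sr-list, and note that the VM must be shut down before it is copied:

```shell
# Copy the VM (and its disks) onto the shared SR
xe vm-copy vm=<vm_name> new-name-label=<vm_name> sr-uuid=<shared_sr_uuid>
# After verifying the copy, remove the original local copy with xe vm-uninstall
```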
When a server joins a pool, note the following:
• VM, local, and remote storage configuration is added to the pool-wide database. All of this configuration
will still be tied to the joining host in the pool unless you explicitly take action to make the resources
shared after the join has completed.
• The joining host inherits existing shared storage repositories in the pool and appropriate PBD records are
created so that the new host can access existing shared storage automatically.
• Networking information is partially inherited to the joining host: the structural details of NICs, VLANs and
bonded interfaces are all inherited, but policy information is not. This policy information, which must be
re-configured, includes:
• the IP addresses of management NICs, which are preserved from the original configuration
• the location of the management interface, which remains the same as the original configuration. For
example, if the other pool hosts have their management interface on a bonded interface, then the joining
host must be explicitly migrated to the bond once it has joined. See To add NIC bonds to the pool master
and other hosts for details on how to migrate the management interface to a bond.
• Dedicated storage NICs, which must be re-assigned to the joining host from XenCenter or the CLI, and
the PBDs re-plugged to route the traffic accordingly. This is because IP addresses are not assigned
as part of the pool join operation, and the storage NIC is not useful without this configured correctly.
See the section called “Configuring a dedicated storage NIC” for details on how to dedicate a storage
NIC from the CLI.
To join XenServer hosts host1 and host2 into a resource pool using the CLI
The master-address must be set to the fully-qualified domain name of XenServer host host1 and
the password must be the administrator password set when XenServer host host1 was installed.
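On host2, the join command might look like the following sketch, assuming root is the administrator account set at installation:

```shell
xe pool-join master-address=<host1_fqdn> master-username=root master-password=<password>
```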
• XenServer hosts belong to an unnamed pool by default. To create your first resource pool, rename the
existing nameless pool. You can use tab-complete to get the <pool_uuid>:
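A sketch of the rename, with a hypothetical pool name:

```shell
xe pool-param-set name-label="Example Pool" uuid=<pool_uuid>
```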
The device-config:server parameter refers to the hostname of the NFS server and
device-config:serverpath refers to the path on the NFS server. Since shared is set to true, the
shared storage will be automatically connected to every XenServer host in the pool, and any XenServer
hosts that subsequently join will also be connected to the storage. The UUID of the created storage
repository will be printed on the screen.
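A sketch of the sr-create invocation being described, with hypothetical host UUID, server name, and path:

```shell
xe sr-create host-uuid=<host_uuid> content-type=user type=nfs shared=true \
  name-label="Example NFS SR" \
  device-config:server=<nfs_server_hostname> device-config:serverpath=</path/on/server>
```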
3. Find the UUID of the pool by the command
xe pool-list
4. Set the shared storage as the pool-wide default with the command
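A sketch of that command, using the pool and SR UUIDs found in the previous steps:

```shell
xe pool-param-set uuid=<pool_uuid> default-SR=<sr_uuid>
```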
Since the shared storage has been set as the pool-wide default, all future VMs will have their disks
created on shared storage by default. See Chapter 3, Storage for information about creating other types
of shared storage.
xe sr-list
xe vm-start vm=<etch>
The master will choose a XenServer host from the pool to start the VM. If the on parameter is provided,
the VM will start on the specified XenServer host. If the requested XenServer host is unable to start
the VM, the command will fail. To request that a VM is always started on a particular XenServer host,
set the affinity parameter of the VM to the UUID of the desired XenServer host using the xe
vm-param-set command. Once set, the system will start the VM there if it can; if it cannot, it will default
to choosing from the set of possible XenServer hosts.
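The placement options above can be sketched as follows, with hypothetical UUIDs and names:

```shell
# Start the VM on a specific host
xe vm-start vm=<vm_name> on=<host_name>
# Or record a preferred host for all future starts of the VM
xe vm-param-set uuid=<vm_uuid> affinity=<host_uuid>
```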
5. You can use XenMotion to move the Debian VM to another XenServer host with the command
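A sketch of the live migration command for the Debian VM named etch used in the earlier steps, with a hypothetical destination host:

```shell
xe vm-migrate vm=etch host=<destination_host_name> --live
```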
Note
When a VM is migrated, the domain on the original hosting server is destroyed and the memory that
VM used is zeroed out before Xen makes it available to new VMs. This ensures that there is no information
leak from old VMs to new ones. As a consequence, if multiple near-simultaneous migration commands
are issued while a server is near its memory limit (for example, a set of VMs consuming 3GB migrated
to a server with 4GB of physical memory), the memory of an old domain might not be scrubbed before
the next migration is attempted, causing that migration to fail with a HOST_NOT_ENOUGH_FREE_MEMORY
error. Inserting a delay between migrations should allow Xen the
opportunity to successfully scrub the memory and return it to general use.
To remove a host from a pool, first find the UUID of the host using:
xe host-list
Then eject it from the pool:
xe pool-eject host-uuid=<uuid>
Warning
Do not eject a host from a resource pool if it contains important data stored on its local disks. All of the
data will be erased upon ejection from the pool. If you wish to preserve this data, copy the VM to shared
storage on the pool first using XenCenter, or the xe vm-copy CLI command.
When a XenServer host containing locally stored VMs is ejected from a pool, those VMs will still be present
in the pool database and visible to the other XenServer hosts. They will not start until the virtual disks
associated with them have been changed to point at shared storage which can be seen by other XenServer
hosts in the pool, or simply removed. It is for this reason that you are strongly advised to move any local
storage to shared storage upon joining a pool, so that individual XenServer hosts can be ejected (or
physically fail) without loss of data.
High Availability
This section explains the XenServer implementation of virtual machine high availability (HA), and how to
configure it using the xe CLI.
Note
XenServer HA is only available with a Citrix Essentials for XenServer license. To learn more about
Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.
HA Overview
When HA is enabled, XenServer continually monitors the health of the hosts in a pool. The HA mechanism
automatically moves protected VMs to a healthy host if the current VM host fails. Additionally, if the host
that fails is the master, HA selects another host to take over the master role automatically, meaning that
you can continue to manage the XenServer pool.
To determine conclusively whether a host has become unreachable, a resource pool configured for high
availability uses several heartbeat mechanisms to regularly check up on hosts. These heartbeats go through
both the storage interfaces (to the Heartbeat SR) and the networking interfaces (over the management
interfaces). Both of these heartbeat routes can be multi-homed for additional resilience to prevent false
positives.
XenServer dynamically maintains a failover plan for what to do if a set of hosts in a pool fail at any given
time. An important concept to understand is the host failures to tolerate value, which is defined as part of
HA configuration. This determines the number of host failures that are tolerated without any loss of service.
For example, if a resource pool consists of 16 hosts and the tolerated failures value is set to 3, the pool
calculates a failover plan that allows for any 3 hosts to fail and still be able to restart VMs on other hosts.
If a plan cannot be found, then the pool is considered to be overcommitted. The plan is dynamically
recalculated based on VM lifecycle operations and movement. Alerts are sent (either through XenCenter
or e-mail) if changes (for example the addition of new VMs to the pool) cause your pool to become
overcommitted.
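Enabling HA and setting the tolerated failures value from the CLI can be sketched as follows, using a hypothetical heartbeat SR UUID and the 16-host example above:

```shell
# Enable HA using a previously created iSCSI or Fibre Channel SR as the heartbeat SR
xe pool-ha-enable heartbeat-sr-uuids=<sr_uuid>
# Allow up to 3 host failures without loss of service
xe pool-param-set ha-host-failures-to-tolerate=3 uuid=<pool_uuid>
```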
Overcommitting
A pool is overcommitted if the VMs that are currently running could not be restarted elsewhere following a
user-defined number of host failures.
This would happen if there was not enough free memory across the pool to run those VMs following failure.
However, there are also more subtle changes which can make HA guarantees unsustainable: changes to
VBDs and networks can affect which VMs may be restarted on which hosts. Currently it is not possible for
XenServer to check all actions before they occur and determine whether they will violate HA guarantees.
However, an asynchronous notification is sent if HA becomes unsustainable.
Overcommitment Warning
If you attempt to start or resume a VM and that action causes the pool to be overcommitted, a warning alert
is raised. This warning is displayed in XenCenter and is also available as a message instance through the
Xen API. The message may also be sent to an email address if configured. You will then be allowed to
cancel the operation, or proceed anyway. Proceeding causes the pool to become overcommitted. The
amount of memory used by VMs of different priorities is displayed at the pool and host levels.
Host Fencing
If a server failure occurs, such as loss of network connectivity or a problem with the control stack, the
XenServer host self-fences to ensure that the VMs are not running on two servers simultaneously. When
a fence action is taken, the server immediately and abruptly restarts, causing all VMs running on it to be
stopped. The other servers will detect that the VMs are no longer running and the VMs will be restarted
according to the restart priorities assigned to them. The fenced server will enter a reboot sequence, and
when it has restarted it will try to re-join the resource pool.
Configuration Requirements
To use the HA feature, you need:
• Shared storage, including at least one iSCSI or Fibre Channel LUN of size 356MiB or greater -- the
heartbeat SR. The HA mechanism creates two volumes on the heartbeat SR:
4MiB heartbeat volume
Used for heartbeating.
256MiB metadata volume
Stores pool master metadata to be used in the case of master failover.
If you are using a NetApp or EqualLogic SR, manually provision an iSCSI LUN on the array to use as
the heartbeat SR.
• A XenServer pool (this feature provides high availability at the server level within a single resource pool).
• Enterprise licenses on all hosts.
• Static IP addresses for all hosts.
Warning
Should the IP address of a server change while HA is enabled, HA will assume that the host's network
has failed, and will probably fence the host and leave it in an unbootable state. To remedy this situation,
disable HA using the host-emergency-ha-disable command, reset the pool master using
pool-emergency-reset-master, and then re-enable HA.
For a VM to be protected by the HA feature, it must be agile. This means:
• it must have its virtual disks on shared storage (any type of shared storage may be used; the iSCSI or
Fibre Channel LUN is only required for the storage heartbeat and can be used for virtual disk storage if
you prefer, but this is not necessary)
• it must not have a connection to a local DVD drive configured
• it should have its virtual network interfaces on pool-wide networks.
Citrix strongly recommends the use of a bonded management interface on the servers in the pool if HA is
enabled, and multipathed storage for the heartbeat SR.
If you create VLANs and bonded interfaces from the CLI, they may not be plugged in and active despite
being created. In this situation, a VM may not be agile, and so cannot be protected by HA. If this
occurs, use the CLI pif-plug command to bring the VLAN and bond PIFs up so that the VM can become
agile. You can also determine precisely why a VM is not agile by using the xe diagnostic-vm-status CLI
command to analyze its placement constraints, and take remedial action if required.
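For example, the recovery steps might look like the following (the UUIDs are placeholders, to be taken from the output of xe pif-list and xe vm-list):

```
# Bring the bond and VLAN PIFs up, then re-check the VM's placement constraints
xe pif-plug uuid=<bond_pif_uuid>
xe pif-plug uuid=<vlan_pif_uuid>
xe diagnostic-vm-status uuid=<vm_uuid>
```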
Restart priorities
Virtual machines are assigned a restart priority and a flag that indicates whether they should be protected
by HA or not. When HA is enabled, every effort is made to keep protected virtual machines live. If a restart
priority is specified, any protected VM that is halted will be started automatically. If a server fails then the
VMs on it will be started on another server.
1|2|3
When a pool is overcommitted, the HA mechanism attempts to restart protected VMs with the lowest
restart priority first.
best-effort
VMs with this priority setting are restarted only after the system has attempted to restart all protected
VMs with priorities 1, 2 or 3.
ha-always-run=false
VMs with this parameter set will not be restarted.
The restart priorities determine the order in which VMs are restarted when a failure occurs. In a given
configuration where a number of server failures greater than zero can be tolerated (as indicated in the HA
panel in the GUI, or by the ha-plan-exists-for field on the pool object on the CLI), the VMs that have
restart priorities 1, 2 or 3 are guaranteed to be restarted given the stated number of server failures. VMs
with a best-effort priority setting are not part of the failover plan and are not guaranteed to be kept
running, since capacity is not reserved for them. If the pool experiences server failures and enters a state
where the number of tolerable failures drops to zero, the protected VMs will no longer be guaranteed to be
restarted. If this condition is reached, a system alert will be generated. In this case, should an additional
failure occur, all VMs that have a restart priority set will behave according to the best-effort behavior.
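For instance, a VM can be marked as protected from the CLI; the parameter names below are those used in this release, and the UUID is a placeholder:

```
# Protect the VM and give it the highest restart priority
xe vm-param-set uuid=<vm_uuid> ha-always-run=true ha-restart-priority=1
```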
If a protected VM cannot be restarted at the time of a server failure (for example, if the pool was overcom-
mitted when the failure occurred), further attempts to start this VM will be made as the state of the pool
XenServer Administrator's Guide XenServer hosts and resource pools 9
changes. This means that if extra capacity becomes available in a pool (if you shut down a non-essential
VM, or add an additional server, for example), a fresh attempt to restart the protected VMs will be made,
which may now succeed.
Note
No running VM will ever be stopped or migrated in order to free resources for a VM with
ha-always-run=true to be restarted.
Warning
When HA is enabled, some operations that would compromise the plan for restarting VMs may be dis-
abled, such as removing a server from a pool. To perform these operations, HA can be temporarily dis-
abled, or alternately, VMs protected by HA made unprotected.
xe pool-ha-enable heartbeat-sr-uuid=<sr_uuid>
xe pool-ha-compute-max-host-failures-to-tolerate
The number of failures to tolerate determines when an alert is sent: the system recomputes a failover
plan as the state of the pool changes, and this computation identifies the capacity of
the pool and how many more failures are possible without loss of the liveness guarantee for protected
VMs. A system alert is generated when this computed value falls below the specified value for ha-
host-failures-to-tolerate.
5. Specify the number of failures to tolerate. This must be less than or equal to the computed
value:
xe pool-param-set ha-host-failures-to-tolerate=<2>
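The alerting rule described above can be sketched in plain shell. The two xe invocations appear only in comments; the sample values stand in for their output so the comparison logic can be read on its own:

```shell
# Sample values standing in for live xe output:
computed=1    # computed=$(xe pool-ha-compute-max-host-failures-to-tolerate)
configured=2  # configured=$(xe pool-param-get uuid=<pool_uuid> param-name=ha-host-failures-to-tolerate)

# An alert is warranted when the computed plan covers fewer failures
# than the administrator asked to tolerate.
if [ "$computed" -lt "$configured" ]; then
  echo "ALERT: failover plan covers only $computed host failure(s), $configured configured"
fi
```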
xe host-emergency-ha-disable --force
If the host was the pool master, then it should start up as normal with HA disabled. Slaves should reconnect
and automatically disable HA. If the host was a pool slave and cannot contact the master, then it may be
necessary to force the host to reboot as a pool master (xe pool-emergency-transition-to-master) or to
tell it where the new master is (xe pool-emergency-reset-master):
xe pool-emergency-transition-to-master uuid=<host_uuid>
xe pool-emergency-reset-master master-address=<new_master_hostname>
xe pool-ha-enable heartbeat-sr-uuid=<sr_uuid>
xe host-disable host=<host_name>
xe host-evacuate uuid=<host_uuid>
xe host-shutdown host=<host_name>
Note
If you shut down a VM from within the guest, and the VM is protected, it is automatically restarted under
the HA failure conditions. This helps ensure that operator error (or an errant program that mistakenly
shuts down the VM) does not result in a protected VM being left shut down accidentally. If you want to
shut this VM down, disable its HA protection first.
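To shut such a VM down deliberately, remove its protection first; for example (the VM UUID is a placeholder):

```
# Unprotect the VM, then shut it down; HA will no longer restart it
xe vm-param-set uuid=<vm_uuid> ha-always-run=false
xe vm-shutdown uuid=<vm_uuid>
```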
Access is controlled by the use of subjects. A subject in XenServer maps to an entity on your directory
server (either a user or a group). When external authentication is enabled, the credentials used to create
a session are first checked against the local root credentials (in case your directory server is unavailable)
and then against the subject list. To permit access, you must create a subject entry for the person or group
you wish to grant access to. This can be done using XenCenter or the xe CLI.
For external authentication using Active Directory to be successful, it is important that the clocks on your
XenServer hosts are synchronized with those on your Active Directory server. When XenServer joins the
Active Directory domain, this will be checked and authentication will fail if there is too much skew between
the servers.
Note
The servers can be in different time-zones, and it is the UTC time that is compared. To ensure synchro-
nization is correct, you may choose to use the same NTP servers for your XenServer pool and the Active
Directory server.
When configuring Active Directory authentication for a XenServer host, the same DNS servers should be
used for both the Active Directory server (and have appropriate configuration to allow correct interoperability)
and XenServer host (note that in some configurations, the Active Directory server may provide the DNS
itself). This can be achieved either using DHCP to provide the IP address and a list of DNS servers to the
XenServer host, or by setting values in the PIF objects or using the installer if a manual static configuration
is used.
Citrix recommends enabling DHCP to broadcast host names. In particular, the host names localhost or
linux should not be assigned to hosts. Host names must consist solely of no more than 156 alphanumeric
characters, and may not be purely numeric.
xe pool-enable-external-auth auth-type=AD \
service-name=<full-qualified-domain> \
config:user=<username> \
config:pass=<password>
The user specified needs to have Add/remove computer objects or workstations privileges,
which is the default for domain administrators.
Note
If you are not using DHCP on the network that Active Directory and your XenServer hosts use you can
use these two approaches to setup your DNS:
2. Manually set the management interface to use a PIF that is on the same network as your DNS server:
xe host-management-reconfigure pif-uuid=<pif_in_the_dns_subnetwork>
Note
External authentication is a per-host property. However, Citrix advises that you enable and disable this on
a per-pool basis – in this case XenServer will deal with any failures that occur when enabling authentica-
tion on a particular host and perform any roll-back of changes that may be required, ensuring that a con-
sistent configuration is used across the pool. Use the host-param-list command to inspect properties of
a host and to determine the status of external authentication by checking the values of the relevant fields.
xe pool-disable-external-auth
User authentication
To allow a user access to your XenServer host, you must add a subject for that user or a group that they are
in. (Transitive group memberships are also checked in the normal way, for example: adding a subject for
group A, where group A contains group B and user 1 is a member of group B would permit access to user
1.) If you wish to manage user permissions in Active Directory, you could create a single group that you then
add and remove users to/from; alternatively, you can add and remove individual users from XenServer, or
a combination of users and groups, as appropriate for your authentication requirements. The
subject list can be managed from XenCenter or using the CLI as described below.
When authenticating a user, the credentials are first checked against the local root account, allowing you
to recover a system whose AD server has failed. If the credentials (username and password) do not
match, an authentication request is made to the AD server; if this is successful the user's
information will be retrieved and validated against the local subject list, otherwise access will be denied.
Validation against the subject list will succeed if the user or a group in the transitive group membership of
the user is in the subject list.
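The order of checks described above can be sketched as a plain shell function. The helper functions are invented stubs for illustration, not real XenServer or AD calls:

```shell
# Stand-in stubs: in reality these would contact the AD server and consult
# the XenServer subject list (including transitive group memberships).
ROOT_PASS=opensesame                        # stand-in local root credential
ad_check()       { [ "$2" = "adpass" ]; }   # stub for the AD credential check
subject_listed() { [ "$1" = "user1" ]; }    # stub for the subject-list lookup

authenticate() {
  # 1. Local root credentials are tried first, so the system stays
  #    recoverable even when the AD server is down.
  if [ "$1" = "root" ] && [ "$2" = "$ROOT_PASS" ]; then
    echo local; return 0
  fi
  # 2. Otherwise the AD server is asked; success only grants access if the
  #    user (or one of the user's groups) appears in the subject list.
  if ad_check "$1" "$2" && subject_listed "$1"; then
    echo ad; return 0
  fi
  echo denied; return 1
}
```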
The entity name should be the name of the user or group to which you want to grant access. You may
optionally include the domain of the entity (e.g. '<xendt\user1>' as opposed to '<user1>') although the
behavior will be the same unless disambiguation is required.
xe subject-list
You may wish to apply a filter to the list, for example to get the subject identifier for a user named user1
in the testad domain, you could use the following command:
xe subject-list other-config:subject-name='<domain\user>'
2. Remove the user using the subject-remove command, passing in the subject identifier you learned
in the previous step:
3. You may wish to terminate any current session this user has already authenticated. See Terminating all
authenticated sessions using xe and Terminating individual user sessions using xe for more information
about terminating sessions. If you do not terminate sessions, users whose permissions have been
revoked may continue to access the system until they log out.
xe subject-list
xe session-subject-identifier-logout-all
1. Determine the subject identifier whose session you wish to log out. Use either the session-sub-
ject-identifier-list or subject-list xe commands to find this (the first shows users who have sessions,
the second shows all users but can be filtered, for example, using a command like xe subject-list oth-
er-config:subject-name=xendt\\user1 – depending on your shell you may need a double-backslash
as shown).
2. Use the session-subject-logout command, passing the subject identifier you have determined in the
previous step as a parameter, for example:
xe session-subject-identifier-logout subject-identifier=<subject-id>
Leaving an AD domain
Use XenCenter to leave an AD domain. See the XenCenter help for more information. Alternately run the
pool-disable-external-auth command, specifying the pool uuid if required.
Note
Leaving the domain will not cause the host objects to be removed from the AD database. See this knowl-
edge base article for more information about this and how to remove the disabled host entries.
Chapter 3. Storage
This chapter discusses the framework for storage abstractions. It describes the way physical storage hard-
ware of various kinds is mapped to VMs, and the software objects used by the XenServer host API to per-
form storage-related tasks. Detailed sections on each of the supported storage types include procedures
for creating storage for VMs using the CLI, with type-specific device configuration options, generating snap-
shots for backup purposes and some best practices for managing storage in XenServer host environments.
Finally, the virtual disk QoS (quality of service) settings are described.
Storage Overview
This section explains what the XenServer storage objects are and how they are related to each other.
The interface to storage hardware allows VDIs to be supported on a large number of SR types. The XenServ-
er SR is very flexible, with built-in support for IDE, SATA, SCSI and SAS drives locally connected, and iSCSI,
NFS, SAS and Fibre Channel remotely connected. The SR and VDI abstractions allow advanced storage
features such as sparse provisioning, VDI snapshots, and fast cloning to be exposed on storage targets
that support them. For storage subsystems that do not inherently support advanced operations directly, a
software stack is provided based on Microsoft's Virtual Hard Disk (VHD) specification which implements
these features.
Each XenServer host can use multiple SRs and different SR types simultaneously. These SRs can be shared
between hosts or dedicated to particular hosts. Shared storage is pooled between multiple hosts within a
defined resource pool. A shared SR must be network accessible to each host. All hosts in a single resource
pool must have at least one shared SR in common.
SRs are storage targets containing virtual disk images (VDIs). SR commands provide operations for creat-
ing, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain.
A storage repository is a persistent, on-disk data structure. For SR types that use an underlying block device,
the process of creating a new SR involves erasing any existing data on the specified storage target. Other
storage types, such as NFS, NetApp, EqualLogic and StorageLink SRs, create a new container on the storage
array alongside existing SRs.
CLI operations to manage storage repositories are described in the section called “SR commands”.
PBDs are connector objects that allow a given SR to be mapped to a XenServer host. PBDs store the device
configuration fields that are used to connect to and interact with a given storage target. For example, NFS device
configuration includes the IP address of the NFS server and the associated path that the XenServer host
mounts. PBD objects manage the run-time attachment of a given SR to a given XenServer host. CLI oper-
ations relating to PBDs are described in the section called “PBD commands”.
• File-based VHD on a Filesystem; VM images are stored as thin-provisioned VHD format files on either a
local non-shared Filesystem (EXT type SR) or a shared NFS target (NFS type SR)
• Logical Volume-based VHD on a LUN; The default XenServer blockdevice-based storage inserts a Logical
Volume manager on a disk, either a locally attached device (LVM type SR) or a SAN attached LUN over
either Fibre Channel (LVMoHBA type SR), iSCSI (LVMoISCSI type SR) or SAS (LVMoHBA type SR).
VDIs are represented as volumes within the Volume manager and stored in VHD format to allow thin
provisioning of reference nodes on snapshot and clone.
• LUN per VDI; LUNs are directly mapped to VMs as VDIs by SR types that provide an array-specific plugin
(Netapp, Equallogic or StorageLink type SRs). The array storage abstraction therefore matches the VDI
storage abstraction for environments that manage storage provisioning at an array level.
VHD-based VDIs
VHD files may be chained, allowing two VDIs to share common data. In cases where a VHD-backed VM is
cloned, the resulting VMs share the common on-disk data at the time of cloning. Each proceeds to make its
own changes in an isolated copy-on-write (CoW) version of the VDI. This feature allows VHD-based VMs
to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.
The VHD format used by LVM-based and File-based SR types in XenServer uses sparse provisioning. The
image file is automatically extended in 2MB chunks as the VM writes data into the disk. For File-based VHD,
this has the considerable benefit that VM image files take up only as much space on the physical storage
as required. With LVM-based VHD the underlying logical volume container must be sized to the virtual size
of the VDI, however unused space on the underlying CoW instance disk is reclaimed when a snapshot or
clone occurs. The difference between the two behaviours can be characterised in the following way:
• For LVM-based VHDs, the difference disk nodes within the chain consume only as much data as has
been written to disk but the leaf nodes (VDI clones) remain fully inflated to the virtual size of the disk.
Snapshot leaf nodes (VDI snapshots) remain deflated when not in use and can be attached Read-only
to preserve the deflated allocation. Snapshot nodes that are attached Read-Write will be fully inflated on
attach, and deflated on detach.
• For file-based VHDs, all nodes consume only as much data as has been written, and the leaf node files
grow to accommodate data as it is actively written. If a 100GB VDI is allocated for a new VM and an OS
is installed, the VDI file will physically be only the size of the OS data that has been written to the disk,
plus some minor metadata overhead.
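The thin-provisioning behaviour described here can be seen with any sparse file on Linux. This generic illustration (not a XenServer command) creates a file whose apparent size far exceeds its allocation, just as a file-based VHD occupies only as much space as the data written into it:

```shell
# A sparse file reports a large apparent size while occupying almost no
# disk blocks, which is how file-based VHDs keep on-disk usage close to
# the data actually written.
f=$(mktemp)
truncate -s 100M "$f"              # apparent size: 100 MiB, nothing written
apparent=$(stat -c %s "$f")        # 104857600 bytes
allocated=$(du -k "$f" | cut -f1)  # allocated blocks: close to zero
echo "apparent=$apparent bytes, allocated=${allocated}KiB"
rm -f "$f"
```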
When cloning VMs based off a single VHD template, each child VM forms a chain where new changes
are written to the new VM, and old blocks are directly read from the parent template. If the new VM was
converted into a further template and more VMs cloned, then the resulting chain can cause degraded
performance. XenServer supports a maximum chain length of 30, but it is generally not recommended that
you approach this limit without good reason. If in doubt, you can always "copy" the VM using XenServer or
the vm-copy command, which resets the chain length back to 0.
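For example (the names are placeholders):

```
# Copy the VM to collapse its VHD chain into a single image
xe vm-copy vm=<vm_name> new-name-label=<copy_name> sr-uuid=<destination_sr_uuid>
```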
VHD images support chaining, which is the process whereby information shared between one or more VDIs
is not duplicated. This leads to a situation where trees of chained VDIs are created over time as VMs and
their associated VDIs get cloned. When one of the VDIs in a chain is deleted, XenServer rationalizes the
other VDIs in the chain to remove unnecessary VDIs.
This coalescing process runs asynchronously. The amount of disk space reclaimed and the time taken to
perform the process depends on the size of the VDI and the amount of shared data. Only one coalescing
process will ever be active for an SR. This process thread runs on the SR master host.
If you have critical VMs running on the master server of the pool and experience occasional slow IO due
to this process, you can take steps to mitigate it:
Space Utilisation
Space utilisation is always reported based on the current allocation of the SR, and may not reflect the
amount of virtual disk space allocated. The reporting of space for LVM-based SRs versus File-based SRs
will also differ given that File-based VHD supports full thin provisioning, while the underlying volume of an
LVM-based VHD will be fully inflated to support potential growth for writeable leaf nodes. Space utilisation
reported for the SR will depend on the number of snapshots, and the amount of difference data written to
a disk between each snapshot.
LVM-based space utilisation differs depending on whether an LVM SR is upgraded vs created as a new SR in
XenServer. Upgraded LVM SRs will retain a base node that is fully inflated to the size of the virtual disk, and
any subsequent snapshot or clone operations will provision at least one additional node that is fully inflated.
For new SRs, in contrast, the base node will be deflated to only the data allocated in the VHD overlay.
When VHD-based VDIs are deleted, the space is marked for deletion on disk. Actual removal of allocated
data may take some time to occur as it is handled by the coalesce process that runs asynchronously and
independently for each VHD-based SR.
LUN-based VDIs
Mapping a raw LUN as a Virtual Disk image is typically the highest-performance storage method. For
administrators that want to leverage existing storage SAN infrastructure such as Netapp, Equallogic or
StorageLink accessible arrays, the array snapshot, clone and thin provisioning capabilities can be exploited
directly using one of the array specific adapter SR types (Netapp, Equallogic or StorageLink). The virtual
machine storage operations are mapped directly onto the array APIs using a LUN per VDI representation.
This includes activating the data path on demand such as when a VM is started or migrated to another host.
Managed NetApp LUNs are accessible using the NetApp SR driver type, and are hosted on a Network
Appliance device running a version of Ontap 7.0 or greater. LUNs are allocated and mapped dynamically
to the host using the XenServer host management framework.
EqualLogic storage is accessible using the EqualLogic SR driver type, and is hosted on an EqualLogic
storage array running a firmware version of 4.0 or greater. LUNs are allocated and mapped dynamically to
the host using the XenServer host management framework.
For further information on StorageLink supported array systems and the various capabilities in each case,
please refer to the StorageLink documentation directly.
Storage configuration
This section covers creating storage repository types and making them available to a XenServer host. The
examples provided pertain to storage configuration using the CLI, which provides the greatest flexibility. See
the XenCenter Help for details on using the New Storage Repository wizard.
Note
Local SRs of type lvm and ext can only be created using the xe CLI. After creation all SR types can be
managed by either XenCenter or the xe CLI.
There are two basic steps involved in creating a new storage repository for use on a XenServer host using
the CLI:
2. Create the SR to initialize the SR object and associated PBD objects, plug the PBDs, and activate the SR.
These steps differ in detail depending on the type of SR being created. In all examples the sr-create com-
mand returns the UUID of the created SR if successful.
SRs can also be destroyed when no longer in use to free up the physical device, or forgotten in order to
detach the SR from one XenServer host and attach it to another. See the section called “Destroying or
forgetting a SR” for details.
Note
Upgrade is a one-way operation so Citrix recommends only performing the upgrade when you are certain
the storage will no longer need to be attached to a pool running an older software version.
Note
Non-transportable snapshots using the default Windows VSS provider will work on any type of VDI.
Warning
Do not try to snapshot a VM that has type=raw disks attached. This could result in a partial snapshot
being created. In this situation, you can identify the orphan snapshot VDIs by checking the snapshot-of
field and then deleting them.
VDI types
In general, VHD format VDIs will be created. You can opt to use raw at the time you create the VDI; this can
only be done using the xe CLI. After software upgrade from a previous XenServer version, existing data
will be preserved as backwards-compatible raw VDIs but these are special-cased so that snapshots can be
taken of them once you have allowed this by upgrading the SR. Once the SR has been upgraded and the
first snapshot has been taken, you will be accessing the data through a VHD format VDI.
To check if an SR has been upgraded, verify that its sm-config:use_vhd key is true. To check if a
VDI was created with type=raw, check its sm-config map. The sr-param-list and vdi-param-list xe
commands can be used respectively for this purpose.
2. Attach the new virtual disk to a VM and use your normal disk tools within the VM to partition and format,
or otherwise make use of the new disk. You can use the vbd-create command to create a new VBD to
map the virtual disk into your VM.
Probing an SR
The sr-probe command can be used in two ways:
In both cases sr-probe works by specifying an SR type and one or more device-config parameters for
that SR type. When an incomplete set of parameters is supplied the sr-probe command returns an error
message indicating parameters are missing and the possible options for the missing parameters. When a
complete set of parameters is supplied a list of existing SRs is returned. All sr-probe output is returned
as XML.
For example, a known iSCSI target can be probed by specifying its name or IP address, and the set of IQNs
available on the target will be returned:
Probing the same target again and specifying both the name/IP address and desired IQN returns the set
of SCSIids (LUNs) available on the target/IQN.
Probing the same target and supplying all three parameters will return a list of SRs that exist on the LUN,
if any.
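The three stages described above correspond to the following invocations for an LVMoISCSI SR (the target address and identifiers are placeholders):

```
# 1. Probe the target: returns the IQNs available on it
xe sr-probe type=lvmoiscsi device-config:target=<ip_or_name>
# 2. Probe a specific IQN: returns the SCSIids (LUNs) behind it
xe sr-probe type=lvmoiscsi device-config:target=<ip_or_name> \
  device-config:targetIQN=<iqn>
# 3. Probe a specific LUN: returns any existing SRs on it
xe sr-probe type=lvmoiscsi device-config:target=<ip_or_name> \
  device-config:targetIQN=<iqn> device-config:SCSIid=<scsiid>
```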
chapuser No No
chappassword No No
username No Yes
password No Yes
chapuser No No
chappassword No No
*
aggregate No Yes
FlexVols No No
allocation No No
asis No No
username No Yes
password No Yes
chapuser No No
chappassword No No
†
storagepool No Yes
provision-type Yes No
protocol Yes No
provision-options Yes No
raid-type Yes No
*
Aggregate probing is only possible at sr-create time. It needs to be done there so that the aggregate can be specified at the point that the SR is created.
†
Storage pool probing is only possible at sr-create time. It needs to be done there so that the storage pool can be specified at the point that the SR
is created.
‡
If the username, password, or port configuration of the StorageLink service are changed from the default value then the appropriate parameter
and value must be specified.
Storage Multipathing
Dynamic multipathing support is available for Fibre Channel and iSCSI storage backends. By default, it uses
round-robin mode load balancing, so both routes have active traffic on them during normal operation. You
can enable multipathing in XenCenter or on the xe CLI.
Caution
Before attempting to enable multipathing, verify that multiple targets are available on your storage server.
For example, an iSCSI storage backend queried for sendtargets on a given portal should return multiple
targets, as in the following example:
xe pbd-unplug uuid=<pbd_uuid>
4. If there are existing SRs on the host running in single path mode but that have multiple paths:
• Migrate or suspend any running guests with virtual disks in the affected SRs
• Unplug and re-plug the PBD of any affected SRs to reconnect them using multipathing:
xe pbd-plug uuid=<pbd_uuid>
To disable multipathing, first unplug your PBDs, set the host other-config:multipathing parameter
to false, and then replug your PBDs as described above. Do not modify the
other-config:multipathhandle parameter as this will be done automatically.
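Assembled as commands, enabling multipathing on a host looks like the following, based on the other-config key named above (the host and PBD UUIDs are placeholders):

```
xe pbd-unplug uuid=<pbd_uuid>
xe host-param-set other-config:multipathing=true uuid=<host_uuid>
xe pbd-plug uuid=<pbd_uuid>
```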
Multipath support in XenServer is based on the device-mapper multipathd components. Activation and
deactivation of multipath nodes is handled automatically by the Storage Manager API. Unlike the standard
dm-multipath tools in Linux, device mapper nodes are not automatically created for all LUNs on the
system, and it is only when LUNs are actively used by the storage management layer that new device
mapper nodes are provisioned. It is unnecessary therefore to use any of the dm-multipath CLI tools to
query or refresh DM table nodes in XenServer. Should it be necessary to query the status of device-mapper
tables manually, or list active device mapper multipath nodes on the system, use the mpathutil utility:
• mpathutil list
• mpathutil status
Note
Due to incompatibilities with the integrated multipath management architecture, the standard dm-mul-
tipath CLI utility should not be used with XenServer. Please use the mpathutil CLI tool for querying
the status of nodes on the host.
Note
Multipath support in Equallogic arrays does not encompass Storage IO multipathing in the traditional
sense of the term. Multipathing must be handled at the network/NIC bond level. Refer to the Equallogic
documentation for information about configuring network failover for Equallogic SRs/LVMoISCSI SRs.
Modification of these files is unsupported, but visibility of these files may be valuable to developers and
power users. New storage manager plugins placed in this directory are automatically detected by XenServ-
er. Use the sm-list command (see the section called “Storage Manager commands”) to list the available
SR types .
New storage repositories are created using the New Storage wizard in XenCenter. The wizard guides
you through the various probing and configuration steps. Alternatively, use the sr-create command. This
command creates a new SR on the storage substrate (potentially destroying any existing data), and creates
the SR API object and a corresponding PBD record, enabling VMs to use the storage. On successful creation
of the SR, the PBD is automatically plugged. If the SR shared=true flag is set, a PBD record is created
and plugged for every XenServer Host in the resource pool.
All XenServer SR types support VDI resize, fast cloning and snapshot. SRs based on the LVM SR type
(local, iSCSI, or HBA) provide thin provisioning for snapshot and hidden parent nodes. The other SR types
support full thin provisioning, including for virtual disks that are active.
Note
Automatic LVM metadata archiving is disabled by default. This does not prevent metadata recovery for
LVM groups.
Warning
When VHD VDIs are not attached, for example in the case of a VDI snapshot, they are stored as
thinly provisioned by default. Because of this, it is imperative to ensure that there is sufficient disk space
available for the VDI to become thickly provisioned when attempting to attach it. VDI clones, however,
are thickly provisioned.
SR type          Maximum VDI size
EXT3             2TB
LVM              2TB
NetApp           2TB
EqualLogic       15TB
ONTAP (NetApp)   12TB
Local LVM
The Local LVM type presents disks within a locally-attached Volume Group.
By default, XenServer uses the local disk on the physical host on which it is installed. The Linux Logical
Volume Manager (LVM) is used to manage VM storage. A VDI is implemented in VHD format in an LVM
logical volume of the specified size.
XenServer versions prior to 5.5.0 did not use the VHD format and will remain in legacy mode. See the
section called “Upgrading LVM storage from XenServer 5.0 or earlier” for information about upgrading a
storage repository to the new format.
Local disks can also be configured with a local EXT SR to serve VDIs stored in the VHD format. Local disk
EXT SRs must be configured using the XenServer CLI.
By definition, local disks are not shared across pools of XenServer hosts. As a consequence, VMs whose
VDIs are stored in SRs on local disks are not agile -- they cannot be migrated between XenServer hosts
in a resource pool.
udev
The udev type represents devices plugged in using the udev device manager as VDIs.
XenServer has two SRs of type udev that represent removable storage. One is for the CD or DVD disk in
the physical CD or DVD-ROM drive of the XenServer host. The other is for a USB device plugged into a
USB port of the XenServer host. VDIs that represent the media come and go as disks or USB sticks are
inserted and removed.
ISO
The ISO type handles CD images stored as files in ISO format. This SR type is useful for creating shared
ISO libraries.
EqualLogic
The EqualLogic SR type maps LUNs to VDIs on an EqualLogic array group, allowing for the use of fast snapshot and clone features on the array.
If you have access to an EqualLogic filer, you can configure a custom EqualLogic storage repository for VM storage on your XenServer deployment, which allows the use of the advanced features of this filer type. Virtual disks are stored on the filer using one LUN per virtual disk. Using this storage type enables the thin provisioning, snapshot, and fast clone features of the filer.
Consider your storage requirements when deciding whether to use the specialized SR plugin, or to use the generic LVM/iSCSI storage backend. By using the specialized plugin, XenServer will communicate with the filer to provision storage. Some arrays have a limitation of seven concurrent connections, which may limit the throughput of control operations. Using the plugin, however, allows you to make use of the advanced array features, making backup and snapshot operations easier.
Warning
There are two types of administration accounts that can successfully access the EqualLogic SM plugin:
• A group administration account which has access to and can manage the entire group and all storage
pools.
• A pool administrator account that can manage only the objects (SR and VDI snapshots) that are in the
pool or pools assigned to the account.
NetApp
The NetApp type maps LUNs to VDIs on a NetApp server, enabling the use of fast snapshot and clone
features on the filer.
Note
NetApp and EqualLogic SRs require a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website.
If you have access to Network Appliance™ (NetApp) storage with sufficient disk space, running a version
of Data ONTAP 7G (version 7.0 or greater), you can configure a custom NetApp storage repository for VM
storage on your XenServer deployment. The XenServer driver uses the ZAPI interface to the storage to
create a group of FlexVols that correspond to an SR. VDIs are created as virtual LUNs on the storage, and
attached to XenServer hosts using an iSCSI data path. There is a direct mapping between a VDI and a
raw LUN that does not require any additional volume metadata. The NetApp SR is a managed volume and
the VDIs are the LUNs within the volume. VM cloning uses the snapshotting and cloning capabilities of the
storage for data efficiency and performance and to ensure compatibility with existing ONTAP management
tools.
As with the iSCSI-based SR type, the NetApp driver also uses the built-in software initiator and its assigned
host IQN, which can be modified by changing the value shown on the General tab when the storage repos-
itory is selected in XenCenter.
The easiest way to create NetApp SRs is to use XenCenter. See the XenCenter help for details. See the
section called “Creating a shared NetApp SR over iSCSI” for an example of how to create them using the
xe CLI.
FlexVols
NetApp uses FlexVols as the basic unit of manageable data, and a number of limitations constrain the design of NetApp-based SRs.
Precise system limits vary per filer type; however, as a general guide, a FlexVol may contain up to 200 LUNs and provides up to 255 snapshots. Because there is a one-to-one mapping of LUNs to VDIs, and because a VM will often have more than one VDI, the resource limitations of a single FlexVol can easily be reached. Also, taking a snapshot involves snapshotting all the LUNs within a FlexVol, and the VM clone operation indirectly relies on snapshots in the background, as does the VDI snapshot operation used for backup purposes.
There are two constraints to consider when mapping the virtual storage objects of the XenServer host to
the physical storage. To maintain space efficiency it makes sense to limit the number of LUNs per FlexVol,
yet at the other extreme, to avoid resource limitations a single LUN per FlexVol provides the most flexibility.
However, because there is a vendor-imposed limit of 200 or 500 FlexVols per filer (depending on the NetApp model), this creates a limit of 200 or 500 VDIs per filer, and it is therefore important to select a suitable number of FlexVols taking these parameters into account.
Given these resource constraints, the mapping of virtual storage objects to the ONTAP storage system has been designed in the following manner: LUNs are distributed evenly across FlexVols, with the expectation of using VM UUIDs to opportunistically group LUNs attached to the same VM into the same FlexVol. This
is a reasonable usage model that allows a snapshot of all the VDIs in a VM at one time, maximizing the
efficiency of the snapshot operation.
An optional parameter you can set is the number of FlexVols assigned to the SR. You can use between 1
and 32 FlexVols; the default is 8. The trade-off in the number of FlexVols to the SR is that, for a greater
number of FlexVols, the snapshot and clone operations become more efficient, because there are fewer
VMs backed off the same FlexVol. The disadvantage is that more FlexVol resources are used for a single
SR, where there is a typical system-wide limitation of 200 for some smaller filers.
Aggregates
When creating a NetApp driver-based SR, you select an appropriate aggregate. The driver can probe for non-traditional type aggregates, that is, newer-style aggregates that support FlexVols, and lists all aggregates available and the unused disk space on each.
Note
Aggregate probing is only possible at sr-create time, so that the aggregate can be specified at the point the SR is created; aggregates are not probed by the sr-probe command.
Citrix strongly recommends that you configure an aggregate exclusively for use by XenServer storage,
because space guarantees and allocation cannot be correctly managed if other applications are sharing
the resource.
The alternative allocation strategy is thin provisioning, which allows the administrator to present more stor-
age space to the VMs connecting to the SR than is actually available on the SR. There are no space guar-
antees, and allocation of a LUN does not claim any data blocks in the FlexVol until the VM writes data. This
might be appropriate for development and test environments where you might find it convenient to over-
provision virtual disk space on the SR in the anticipation that VMs might be created and destroyed frequently
without ever utilizing the full virtual allocated disk.
Warning
If you are using thin provisioning in production environments, take appropriate measures to ensure that
you never run out of storage space. VMs attached to storage that is full will fail to write to disk, and in
some cases may fail to read from disk, possibly rendering the VM unusable.
FAS Deduplication
FAS Deduplication is a NetApp technology for reclaiming redundant disk space. Newly-stored data objects are divided into small blocks, and each block is assigned a digital signature, which is compared to all other signatures in the data volume. If an exact block match exists, the duplicate block is discarded and the disk space reclaimed. FAS Deduplication can be enabled on thin-provisioned NetApp-based SRs and operates
according to the default filer FAS Deduplication parameters, typically every 24 hours. It must be enabled
at the point the SR is created and any custom FAS Deduplication configuration must be managed directly
on the filer.
Access Control
Because FlexVol operations such as volume creation and volume snapshotting require administrator privileges on the filer itself, Citrix recommends that the XenServer host be provided with suitable administrator username and password credentials at configuration time. In situations where the XenServer host does not have full administrator rights to the filer, the filer administrator could perform an out-of-band preparation and provisioning of the filer and then introduce the SR to the XenServer host using XenCenter or the sr-introduce xe CLI command. Note, however, that operations such as VM cloning or snapshot generation will fail in this situation due to insufficient access privileges.
Licenses
You need to have an iSCSI license on the NetApp filer to use this storage repository type; for the generic
plugins you need either an iSCSI or NFS license depending on the SR type being used.
Further information
For more information about NetApp technology, refer to the NetApp documentation.
port          the port to use for connecting to the NetApp server that hosts the SR; default is port 80 (optional)
username      the login username used to manage the LUNs on the filer (required)
password      the login password used to manage the LUNs on the filer (required)
aggregate     the aggregate name on which the FlexVol is created (required for sr_create)
allocation    specifies whether to provision LUNs using thick or thin provisioning; default is thick (optional)
Setting the SR other-config:multiplier parameter to a valid value adjusts the default multiplier attribute. By default XenServer allocates 2.4 times the requested space to account for snapshot and metadata overhead associated with each LUN. To save disk space, you can set the multiplier to a value >= 1. Setting the multiplier should only be done with extreme care by system administrators who understand the space allocation constraints of the NetApp filer. If you try to set the amount to less than 1, for example, in an attempt to pre-allocate very little space for the LUN, the attempt will most likely fail.
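As a sketch of the adjustment described above (the UUID and multiplier value are placeholders), the parameter can be set with xe sr-param-set:

```shell
# Hypothetical example: reduce the space reserved per LUN from the
# default 2.4x to 1.2x on an existing NetApp SR.
xe sr-param-set other-config:multiplier=1.2 uuid=<netapp_sr_uuid>
```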
Note
This works on new VDI creation in the selected FlexVol, or on all FlexVols during an SR scan and overrides
any manual size adjustments made by the administrator to the SR FlexVols.
Cloning a VDI entails generating a snapshot of the FlexVol and then creating a LUN clone backed off the snapshot. When generating a VM snapshot you must snapshot each of the VM's disks in sequence. Because all the disks are expected to be located in the same FlexVol, and the FlexVol snapshot operates on all LUNs in the same FlexVol, it makes sense to re-use an existing snapshot for all subsequent LUN clones. By default, if no snapshot hint is passed into the backend driver, it will generate a random ID with which to name the FlexVol snapshot. There is a CLI override for this value, passed in as an epochhint. The first time the epochhint value is received, the backend generates a new snapshot based on the cookie name. Any subsequent snapshot requests with the same epochhint value will be backed off the existing snapshot.
During NetApp SR provisioning, additional disk space is reserved for snapshots. If you plan to not use the
snapshotting functionality, you might want to free up this reserved space. To do so, you can reduce the value
of the other-config:multiplier parameter. By default the value of the multiplier is 2.4, so the amount
of space reserved is 2.4 times the amount of space that would be needed for the FlexVols themselves.
Shared iSCSI support using the software iSCSI initiator is implemented based on the Linux Volume Manager
(LVM) and provides the same performance benefits provided by LVM VDIs in the local disk case. Shared
iSCSI SRs using the software-based host initiator are capable of supporting VM agility using XenMotion:
VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable
downtime.
iSCSI SRs use the entire LUN specified at creation time and may not span more than one LUN. CHAP
support is provided for client authentication, during both the data path initialization and the LUN discovery
phases.
All iSCSI initiators and targets must have a unique name to ensure they can be uniquely identified on the
network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively
these are called iSCSI Qualified Names, or IQNs.
XenServer hosts support a single iSCSI initiator which is automatically created and configured with a random
IQN during host installation. The single initiator can be used to connect to multiple iSCSI targets concurrently.
iSCSI targets commonly provide access control using iSCSI initiator IQN lists, so all iSCSI targets/LUNs to
be accessed by a XenServer host must be configured to allow access by the host's initiator IQN. Similarly,
targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the
resource pool.
Note
iSCSI targets that do not provide access control will typically default to restricting LUN access to a single
initiator to ensure data integrity. If an iSCSI LUN is intended for use as a shared SR across multiple
XenServer hosts in a resource pool, ensure that multi-initiator access is enabled for the specified LUN.
The XenServer host IQN value can be adjusted using XenCenter, or using the CLI with the following command when using the iSCSI software initiator:
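The command itself does not appear above; as a hedged sketch, the host IQN is typically changed through the host's other-config parameter (the UUID and IQN values are placeholders):

```shell
# Hypothetical example: assign a new IQN to the host's software
# iSCSI initiator. Do not do this while iSCSI SRs are attached.
xe host-param-set uuid=<host_uuid> other-config:iscsi_iqn=<new_initiator_iqn>
```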
Warning
It is imperative that every iSCSI target and initiator have a unique IQN. If a non-unique IQN identifier is
used, data corruption and/or denial of LUN access can occur.
Warning
Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures connecting
to new targets or existing SRs.
For full details on configuring QLogic Fibre Channel and iSCSI HBAs please refer to the QLogic website.
Once the HBA is physically installed into the XenServer host, use the following steps to configure the HBA:
1. Set the IP networking configuration for the HBA. This example assumes DHCP and HBA port 0. Specify
the appropriate values if using static IP addressing or a multi-port HBA.
/opt/QLogic_Corporation/SANsurferiCLI/iscli -ipdhcp 0
3. Use the xe sr-probe command to force a rescan of the HBA controller and display available LUNs. See
the section called “Probing an SR” and the section called “Creating a shared LVM over Fibre Channel /
iSCSI HBA or SAS SR (lvmohba)” for more details.
Note
This step is not required. Citrix recommends that only power users perform this process if it is necessary.
Each HBA-based LUN has a corresponding global device path entry under /dev/disk/by-scsibus in
the format <SCSIid>-<adapter>:<bus>:<target>:<lun> and a standard device path under /dev. To remove
the device entries for LUNs no longer in use as SRs use the following steps:
1. Use sr-forget or sr-destroy as appropriate to remove the SR from the XenServer host database. See
the section called “Destroying or forgetting a SR” for details.
2. Remove the zoning configuration within the SAN for the desired LUN to the desired host.
3. Use the sr-probe command to determine the ADAPTER, BUS, TARGET, and LUN values corresponding
to the LUN to be removed. See the section called “Probing an SR” for details.
4. Remove the device entries with the following command:
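The removal command is not reproduced above; one approach, shown here as a sketch, is to delete the SCSI device through sysfs using the ADAPTER, BUS, TARGET, and LUN values reported by sr-probe (all values below are placeholders):

```shell
# Hypothetical example: remove the stale device entry for a LUN that
# is no longer zoned to this host. Double-check the values first --
# deleting the wrong device can render the host unusable.
echo "1" > /sys/class/scsi_device/<adapter>:<bus>:<target>:<lun>/device/delete
```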
Warning
Make absolutely sure you are certain which LUN you are removing. Accidentally removing a LUN required
for host operation, such as the boot or root device, will render the host unusable.
Creating a shared LVM over iSCSI SR using the software iSCSI initiator (lvmoiscsi)
target        the IP address or hostname of the iSCSI filer that hosts the SR (required)
targetIQN     the IQN target address of the iSCSI filer that hosts the SR (required)
To create a shared lvmoiscsi SR on a specific LUN of an iSCSI target use the following command.
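The command does not appear above; a sketch of the likely invocation, with all device-config values as placeholders:

```shell
# Hypothetical example: create a shared lvmoiscsi SR on a specific
# LUN, identified by its SCSI ID, of the given iSCSI target.
xe sr-create host-uuid=<valid_uuid> content-type=user \
  name-label="Example shared LVM over iSCSI SR" shared=true \
  device-config:target=<target_ip> device-config:targetIQN=<target_iqn> \
  device-config:SCSIid=<lun_scsi_id> type=lvmoiscsi
```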
Creating a shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba)
SRs of type lvmohba can be created and managed using the xe CLI or XenCenter.
To create a shared lvmohba SR, perform the following steps on each host in the pool:
1. Zone in one or more LUNs to each XenServer host in the pool. This process is highly specific to the SAN
equipment in use. Please refer to your SAN documentation for details.
2. If necessary, use the HBA CLI included in the XenServer host to configure the HBA:
• Emulex: /usr/sbin/hbanyware
• QLogic FC: /opt/QLogic_Corporation/SANsurferCLI
• QLogic iSCSI: /opt/QLogic_Corporation/SANsurferiCLI
See the section called “Managing Hardware Host Bus Adapters (HBAs)” for an example of QLogic iSCSI
HBA configuration. For more information on Fibre Channel and iSCSI HBAs please refer to the Emulex
and QLogic websites.
3. Use the sr-probe command to determine the global device path of the HBA LUN. sr-probe forces a re-
scan of HBAs installed in the system to detect any new LUNs that have been zoned to the host and
returns a list of properties for each LUN found. Specify the host-uuid parameter to ensure the probe
occurs on the desired host.
The global device path returned as the <path> property will be common across all hosts in the pool and
therefore must be used as the value for the device-config:device parameter when creating the SR.
If multiple LUNs are present use the vendor, LUN size, LUN serial number, or the SCSI ID as included
in the <path> property to identify the desired LUN.
xe sr-probe type=lvmohba \
host-uuid=1212c7b3-f333-4a8d-a6fb-80c5b79b5b31
Error code: SR_BACKEND_FAILURE_90
Error parameters: , The request is missing the device parameter, \
<?xml version="1.0" ?>
<Devlist>
<BlockDevice>
<path>
/dev/disk/by-id/scsi-360a9800068666949673446387665336f
</path>
<vendor>
HITACHI
</vendor>
<serial>
730157980002
</serial>
<size>
80530636800
</size>
<adapter>
4
</adapter>
<channel>
0
</channel>
<id>
4
</id>
<lun>
2
</lun>
<hba>
qla2xxx
</hba>
</BlockDevice>
<Adapter>
<host>
Host4
</host>
<name>
qla2xxx
</name>
<manufacturer>
QLogic HBA Driver
</manufacturer>
<id>
4
</id>
</Adapter>
</Devlist>
4. On the master host of the pool create the SR, specifying the global device path returned in the <path>
property from sr-probe. PBDs will be created and plugged for each host in the pool automatically.
xe sr-create host-uuid=<valid_uuid> \
content-type=user \
name-label=<"Example shared LVM over HBA SR"> shared=true \
device-config:SCSIid=<device_scsi_id> type=lvmohba
Note
You can use the XenCenter Repair Storage Repository function to retry the PBD creation and
plugging portions of the sr-create operation. This can be valuable in cases where the LUN zoning was
incorrect for one or more hosts in a pool when the SR was created. Correct the zoning for the affected
hosts and use the Repair Storage Repository function instead of removing and re-creating the SR.
NFS VHD
The NFS VHD type stores disks as VHD files on a remote NFS filesystem.
NFS is a ubiquitous form of storage infrastructure that is available in many environments. XenServer allows
existing NFS servers that support NFS V3 over TCP/IP to be used immediately as a storage repository
for virtual disks (VDIs). VDIs are stored in the Microsoft VHD format only. Moreover, as NFS SRs can be
shared, VDIs stored in a shared SR allow VMs to be started on any XenServer hosts in a resource pool and
be migrated between them using XenMotion with no noticeable downtime.
Creating an NFS SR requires the hostname or IP address of the NFS server. The sr-probe command
provides a list of valid destination paths exported by the server on which the SR can be created. The NFS
server must be configured to export the specified path to all XenServer hosts in the pool, or the creation of
the SR and the plugging of the PBD record will fail.
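The steps above can be sketched with the xe CLI as follows (server address, export path, and UUIDs are placeholders):

```shell
# Hypothetical example: list the destination paths exported by the
# NFS server on which an SR can be created.
xe sr-probe type=nfs device-config:server=<nfs_server_ip>

# Create a shared NFS SR on one of the exported paths.
xe sr-create host-uuid=<host_uuid> content-type=user \
  name-label="Example shared NFS SR" shared=true \
  device-config:server=<nfs_server_ip> \
  device-config:serverpath=<export_path> type=nfs
```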
As mentioned at the beginning of this chapter, VDIs stored on NFS are sparse. The image file is allocated
as the VM writes data into the disk. This has the considerable benefit that VM image files take up only as
much space on the NFS storage as is required. If a 100GB VDI is allocated for a new VM and an OS is
installed, the VDI file will only reflect the size of the OS data that has been written to the disk rather than
the entire 100GB.
VHD files may also be chained, allowing two VDIs to share common data. In cases where a NFS-based VM
is cloned, the resulting VMs will share the common on-disk data at the time of cloning. Each will proceed to
make its own changes in an isolated copy-on-write version of the VDI. This feature allows NFS-based VMs
to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.
Note
As VHD-based images require extra metadata to support sparseness and chaining, the format is not as
high-performance as LVM-based storage. In cases where performance really matters, it is well worth forcibly
allocating the sparse regions of an image file. This will improve performance at the cost of consuming
additional disk space.
XenServer's NFS and VHD implementations assume that they have full control over the SR directory on the
NFS server. Administrators should not modify the contents of the SR directory, as this can risk corrupting
the contents of VDIs.
XenServer has been tuned for enterprise-class storage that uses non-volatile RAM to provide fast acknowledgments of write requests while maintaining a high degree of data protection from failure. XenServer has
been tested extensively against Network Appliance FAS270c and FAS3020c storage, using Data OnTap
7.2.2.
In situations where XenServer is used with lower-end storage, it will cautiously wait for all writes to be
acknowledged before passing acknowledgments on to guest VMs. This will incur a noticeable performance
cost, and might be remedied by setting the storage to present the SR mount point as an asynchronous
mode export. Asynchronous exports acknowledge writes that are not actually on disk, and so administrators
should consider the risks of failure carefully in these situations.
The XenServer NFS implementation uses TCP by default. If your situation allows, you can configure the
implementation to use UDP in situations where there may be a performance benefit. To do this, specify the
device-config parameter useUDP=true at SR creation time.
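As a sketch of the useUDP option described above (server, path, and UUID values are placeholders):

```shell
# Hypothetical example: create an NFS SR that mounts the export
# over UDP instead of the default TCP.
xe sr-create host-uuid=<host_uuid> content-type=user \
  name-label="Example NFS SR over UDP" shared=true \
  device-config:server=<nfs_server_ip> \
  device-config:serverpath=<export_path> \
  device-config:useUDP=true type=nfs
```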
Warning
Since VDIs on NFS SRs are created as sparse, administrators must ensure that there is enough disk
space on the NFS SRs for all required VDIs. XenServer hosts do not enforce that the space required for
VDIs on NFS SRs is actually present.
XenServer hosts support Fibre Channel (FC) storage area networks (SANs) through Emulex or QLogic host
bus adapters (HBAs). All FC configuration required to expose a FC LUN to the host must be completed
manually, including storage devices, network devices, and the HBA within the XenServer host. Once all FC
configuration is complete the HBA will expose a SCSI device backed by the FC LUN to the host. The SCSI
device can then be used to access the FC LUN as if it were a locally attached SCSI device.
Use the sr-probe command to list the LUN-backed SCSI devices present on the host. This command forces
a scan for new LUN-backed SCSI devices. The path value returned by sr-probe for a LUN-backed SCSI
device is consistent across all hosts with access to the LUN, and therefore must be used when creating
shared SRs accessible by all hosts in a resource pool.
See the section called “Creating Storage Repositories” for details on creating shared HBA-based FC and
iSCSI SRs.
Note
XenServer support for Fibre Channel does not support direct mapping of a LUN to a VM. HBA-based
LUNs must be mapped to the host and specified for use in an SR. VDIs within the SR are exposed to
VMs as standard block devices.
Note
Running the StorageLink service in a VM within a resource pool to which the StorageLink service is
providing storage is not supported in combination with the XenServer High Availability (HA) features. To
use CSLG SRs in combination with HA ensure the StorageLink service is running outside the HA-enabled
pool.
CSLG SRs can be created using the xe CLI only. After creation CSLG SRs can be viewed and managed
using both the xe CLI and XenCenter.
Because the CSLG SR can be used to access different storage arrays, the exact features available for a
given CSLG SR depend on the capabilities of the array. All CSLG SRs use a LUN-per-VDI model where a
new LUN is provisioned for each virtual disk (VDI).
CSLG SRs can co-exist with other SR types on the same storage array hardware, and multiple CSLG SRs
can be defined within the same resource pool.
The StorageLink service can be configured using the StorageLink Manager or from within the XenServer control domain using the StorageLink Command Line Interface (CLI). To run the StorageLink CLI, use the following command, where <hostname> is the name or IP address of the machine running the StorageLink service:
/opt/Citrix/StorageLink/bin/csl \
server=<hostname>[:<port>][,<username>,<password>]
For more information about the StorageLink CLI, see the StorageLink documentation or use the /opt/Citrix/StorageLink/bin/csl help command.
SRs of type cslg support two additional parameters that can be used with storage arrays that support LUN grouping features, such as NetApp FlexVols.
Note
When a new NetApp SR is created using StorageLink, by default a single FlexVol is created for the SR
that contains all LUNs created for the SR. To change this behaviour and specify the number of FlexVols
to create and the size of each FlexVol, use the sm-config:pool-size and sm-config:physical-
size parameters. sm-config:pool-size specifies the number of FlexVols. sm-config:physical-
size specifies the total size of all FlexVols to be created, so that each FlexVol will be of size sm-
config:physical-size divided by sm-config:pool-size.
To create a CSLG SR
<csl__storageSystemInfoList>
<csl__storageSystemInfo>
<friendlyName>5001-4380-013C-0240</friendlyName>
<displayName>HP EVA (5001-4380-013C-0240)</displayName>
<vendor>HP</vendor>
<model>EVA</model>
<serialNum>50014380013C0240</serialNum>
<storageSystemId>HP__EVA__50014380013C0240</storageSystemId>
<systemCapabilities>
<capabilities>PROVISIONING</capabilities>
<capabilities>MAPPING</capabilities>
<capabilities>MULTIPLE_STORAGE_POOLS</capabilities>
<capabilities>DIFF_SNAPSHOT</capabilities>
<capabilities>CLONE</capabilities>
</systemCapabilities>
<protocolSupport>
<capabilities>FC</capabilities>
</protocolSupport>
<csl__snapshotMethodInfoList>
<csl__snapshotMethodInfo>
<name>5001-4380-013C-0240</name>
<displayName></displayName>
<maxSnapshots>16</maxSnapshots>
<supportedNodeTypes>
<nodeType>STORAGE_VOLUME</nodeType>
</supportedNodeTypes>
<snapshotTypeList>
</snapshotTypeList>
<snapshotCapabilities>
</snapshotCapabilities>
</csl__snapshotMethodInfo>
<csl__snapshotMethodInfo>
<name>5001-4380-013C-0240</name>
<displayName></displayName>
<maxSnapshots>16</maxSnapshots>
<supportedNodeTypes>
<nodeType>STORAGE_VOLUME</nodeType>
</supportedNodeTypes>
<snapshotTypeList>
<snapshotType>DIFF_SNAPSHOT</snapshotType>
</snapshotTypeList>
<snapshotCapabilities>
</snapshotCapabilities>
</csl__snapshotMethodInfo>
<csl__snapshotMethodInfo>
<name>5001-4380-013C-0240</name>
<displayName></displayName>
<maxSnapshots>16</maxSnapshots>
<supportedNodeTypes>
<nodeType>STORAGE_VOLUME</nodeType>
</supportedNodeTypes>
<snapshotTypeList>
<snapshotType>CLONE</snapshotType>
</snapshotTypeList>
<snapshotCapabilities>
</snapshotCapabilities>
</csl__snapshotMethodInfo>
</csl__snapshotMethodInfoList>
</csl__storageSystemInfo>
</csl__storageSystemInfoList>
You can use grep to filter the sr-probe output to just the storage system IDs.
4. Add the desired storage system ID to the sr-probe command to identify the storage pools available
within the specified storage system
xe sr-probe type=cslg \
device-config:target=192.168.128.10 \
device-config:storageSystemId=HP__EVA__50014380013C0240
<?xml version="1.0" encoding="iso-8859-1"?>
<csl__storagePoolInfoList>
<csl__storagePoolInfo>
<displayName>Default Disk Group</displayName>
<friendlyName>Default Disk Group</friendlyName>
<storagePoolId>00010710B4080560B6AB08000080000000000400</storagePoolId>
<parentStoragePoolId></parentStoragePoolId>
<storageSystemId>HP__EVA__50014380013C0240</storageSystemId>
<sizeInMB>1957099</sizeInMB>
<freeSpaceInMB>1273067</freeSpaceInMB>
<isDefault>No</isDefault>
<status>0</status>
<provisioningOptions>
<supportedRaidTypes>
<raidType>RAID0</raidType>
<raidType>RAID1</raidType>
<raidType>RAID5</raidType>
</supportedRaidTypes>
<supportedNodeTypes>
<nodeType>STORAGE_VOLUME</nodeType>
</supportedNodeTypes>
<supportedProvisioningTypes>
</supportedProvisioningTypes>
</provisioningOptions>
</csl__storagePoolInfo>
</csl__storagePoolInfoList>
You can use grep to filter the sr-probe output to just the storage pool IDs:
xe sr-probe type=cslg \
device-config:target=192.168.128.10 \
device-config:storageSystemId=HP__EVA__50014380013C0240 \
| grep storagePoolId
<storagePoolId>00010710B4080560B6AB08000080000000000400</storagePoolId>
5. Create the SR specifying the desired storage system and storage pool IDs
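The creation command is not reproduced above; a hedged sketch, assuming the same device-config keys used in the probe steps (the storagePoolId value and any required credentials are placeholders, and the exact parameter set may vary by array):

```shell
# Hypothetical example: create a CSLG SR against the storage system
# and storage pool identified by the earlier sr-probe calls.
xe sr-create host-uuid=<host_uuid> content-type=user \
  name-label="Example CSLG SR" shared=true type=cslg \
  device-config:target=192.168.128.10 \
  device-config:storageSystemId=HP__EVA__50014380013C0240 \
  device-config:storagePoolId=<storage_pool_id>
```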
Destroying or forgetting a SR
You can destroy an SR, which actually deletes the contents of the SR from the physical media. Alternatively
you can forget an SR, which allows you to re-attach the SR, for example, to another XenServer host, without
removing any of the SR contents. In both cases, the PBD of the SR must first be unplugged. Forgetting an
SR is the equivalent of the SR Detach operation within XenCenter.
1. Unplug the PBD to detach the SR from the corresponding XenServer host:
xe pbd-unplug uuid=<pbd_uuid>
2. To destroy the SR, which deletes both the SR and corresponding PBD from the XenServer host database
and deletes the SR contents from the physical media:
xe sr-destroy uuid=<sr_uuid>
3. Or, to forget the SR, which removes the SR and corresponding PBD from the XenServer host database
but leaves the actual SR contents intact on the physical media:
xe sr-forget uuid=<sr_uuid>
Note
It might take some time for the software object corresponding to the SR to be garbage collected.
Introducing an SR
To re-attach an SR that has been forgotten, you must introduce the SR, create a PBD, and manually plug the PBD into the appropriate XenServer hosts to activate the SR.
2. Introduce the existing SR UUID returned from the sr-probe command. The UUID of the new SR is re-
turned:
3. Create a PBD to accompany the SR. The UUID of the new PBD is returned:
xe pbd-plug uuid=<pbd_uuid>
5. Verify the status of the PBD plug. If successful the currently-attached property will be true:
xe pbd-list sr-uuid=<sr_uuid>
Note
Steps 3 through 5 must be performed for each host in the resource pool, and can also be performed using
the Repair Storage Repository function in XenCenter.
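The individual commands for steps 2 and 3 do not appear above; the full sequence might look like this sketch, shown here for an lvmoiscsi SR (all UUIDs and device-config values are placeholders):

```shell
# Step 2 (hypothetical): introduce the forgotten SR by its UUID.
xe sr-introduce uuid=<sr_uuid> name-label="Example SR" \
  content-type=user shared=true type=lvmoiscsi

# Step 3 (hypothetical): create a PBD linking the SR to a host.
xe pbd-create sr-uuid=<sr_uuid> host-uuid=<host_uuid> \
  device-config:target=<target_ip> device-config:targetIQN=<target_iqn> \
  device-config:SCSIid=<lun_scsi_id>

# Step 4: plug the PBD to activate the SR on that host.
xe pbd-plug uuid=<pbd_uuid>

# Step 5: verify that currently-attached is true.
xe pbd-list sr-uuid=<sr_uuid>
```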
Resizing an SR
If you have resized the LUN on which an iSCSI or HBA SR is based, use the following procedures to reflect the size change in XenServer:
1. iSCSI SRs - unplug all PBDs on the host that reference LUNs on the same target. This is required to
reset the iSCSI connection to the target, which in turn will allow the change in LUN size to be recognized
when the PBDs are replugged.
2. HBA SRs - reboot the host.
Note
In previous versions of XenServer explicit commands were required to resize the physical volume group
of iSCSI and HBA SRs. These commands are now issued as part of the PBD plug operation and are
no longer required.
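For the iSCSI case, the unplug/replug cycle can be sketched as follows (UUIDs are placeholders, and every PBD referencing a LUN on the same target must be cycled):

```shell
# Hypothetical example: find the PBDs for the resized SR, unplug
# them to reset the iSCSI connection, then plug them back in so the
# new LUN size is recognized.
xe pbd-list sr-uuid=<sr_uuid>
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>
```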
4. Within XenCenter the SR is moved from the host level to the pool level, indicating that it is now shared.
The SR will be marked with a red exclamation mark to show that it is not currently plugged on all hosts
in the pool.
5. Select the SR and then select the Storage > Repair Storage Repository menu option.
6. Click Repair to create and plug a PBD for each host in the pool.
You can move VDIs to the same or a different SR, and a combination of XenCenter and the xe CLI can be used to copy
individual VDIs.
xe vbd-list vm-uuid=<valid_vm_uuid>
Note
The vbd-list command displays both the VBD and VDI UUIDs. Be sure to record the VDI UUIDs rather
than the VBD UUIDs.
3. In XenCenter select the VM's Storage tab. For each VDI to be moved, select the VDI and click the Detach
button. This step can also be done using the vbd-destroy command.
Note
If you use the vbd-destroy command to detach the VDIs, first check whether the VBD has the
parameter other-config:owner set to true. If so, set it to false. Issuing the vbd-destroy command
with other-config:owner=true will also destroy the associated VDI.
4. Use the vdi-copy command to copy each of the VM's VDIs to be moved to the desired SR.
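The detach-and-copy steps above can be sketched with the CLI as follows (UUIDs are placeholders):

```shell
# List the VM's VBDs and record the VDI UUIDs (not the VBD UUIDs)
xe vbd-list vm-uuid=<vm_uuid> params=uuid,vdi-uuid,userdevice

# Before destroying a VBD, make sure other-config:owner is not true,
# otherwise vbd-destroy will also destroy the associated VDI
xe vbd-param-get uuid=<vbd_uuid> param-name=other-config param-key=owner
xe vbd-param-set uuid=<vbd_uuid> other-config:owner=false
xe vbd-destroy uuid=<vbd_uuid>

# Copy the VDI to the destination SR; the UUID of the new VDI is returned
xe vdi-copy uuid=<vdi_uuid> sr-uuid=<destination_sr_uuid>
```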
If virtual disk QoS settings are in use (see the section called “Virtual disk QoS settings”), it is necessary to override the default setting and assign the
cfq disk scheduler to the SR. The corresponding PBD must be unplugged and re-plugged for the scheduler
parameter to take effect. The disk scheduler can be adjusted using the following command:
xe sr-param-set other-config:scheduler=noop|cfq|anticipatory|deadline \
uuid=<valid_sr_uuid>
Note
In the shared SR case, where multiple hosts are accessing the same LUN, the QoS setting is applied to
VBDs accessing the LUN from the same host. QoS is not applied across hosts in the pool.
Before configuring any QoS parameters for a VBD, ensure that the disk scheduler for the SR has been
set appropriately. See the section called “Adjusting the disk IO scheduler” for details on how to adjust the
scheduler. The scheduler parameter must be set to cfq on the SR for which the QoS is desired.
Note
Remember to set the scheduler to cfq on the SR, and to ensure that the PBD has been re-plugged in
order for the scheduler change to take effect.
The first parameter is qos_algorithm_type. This parameter needs to be set to the value ionice, which
is the only type of QoS algorithm supported for virtual disks in this release.
The QoS parameters themselves are set with key/value pairs assigned to the qos_algorithm_param
parameter. For virtual disks, qos_algorithm_param takes a sched key, and depending on the value, also
requires a class key.
• sched=rt or sched=real-time sets the QoS scheduling parameter to real time priority, which requires
a class parameter to set a value
• sched=idle sets the QoS scheduling parameter to idle priority, which requires no class parameter to
set any value
• sched=<anything> sets the QoS scheduling parameter to best effort priority, which requires a class
parameter to set a value
The class parameter is an integer between 0 and 7, where 7 is the highest priority and 0 is the lowest.
For example, I/O requests with a priority of 5 will be given priority over I/O requests with a priority of 2.
To enable the disk QoS settings, you also need to set the other-config:scheduler to cfq and replug
PBDs for the storage in question.
For example, the following CLI commands set the virtual disk's VBD to use real time priority 5:
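A sketch of such commands, using the qos_algorithm_type and qos_algorithm_params parameters described above (the VBD and SR UUIDs are placeholders):

```shell
# Set the QoS algorithm type to ionice (the only supported type)
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_type=ionice

# Request real time scheduling with priority class 5
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:sched=rt
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:class=5

# The SR must use the cfq scheduler, and its PBD must be re-plugged
# for the scheduler change to take effect
xe sr-param-set other-config:scheduler=cfq uuid=<sr_uuid>
```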
Note
XenServer provides automated configuration and management of NICs using the xe command line in-
terface (CLI). Unlike previous XenServer versions, the host networking configuration files should not be
edited directly in most cases; where a CLI command is available, do not edit the underlying files.
If you are already familiar with XenServer networking concepts, you may want to skip ahead to one of the
following sections:
• For procedures on how to create networks for standalone XenServer hosts, see the section called “Cre-
ating networks in a standalone server”.
• For procedures on how to create networks for XenServer hosts that are configured in a resource pool,
see the section called “Creating networks in resource pools”.
• For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource
pool, see the section called “Creating VLANs”.
• For procedures on how to create bonds for standalone XenServer hosts, see the section called “Creating
NIC bonds on a standalone host”.
• For procedures on how to create bonds for XenServer hosts that are configured in a resource pool, see
the section called “Creating NIC bonds in resource pools”.
Note
Some networking options have different behaviors when used with standalone XenServer hosts compared
to resource pools. This chapter contains sections on general information that applies to both standalone
hosts and pools, followed by specific information and procedures for each.
Network objects
There are three types of server-side software objects which represent networking entities. These objects are:
• A PIF, which represents a physical network interface on a XenServer host. PIF objects have a name and
description, a globally unique UUID, the parameters of the NIC that they represent, and the network and
server they are connected to.
• A VIF, which represents a virtual interface on a Virtual Machine. VIF objects have a name and description,
a globally unique UUID, and the network and VM they are connected to.
• A network, which is a virtual Ethernet switch on a XenServer host. Network objects have a name and
description, a globally unique UUID, and the collection of VIFs and PIFs connected to them.
Both XenCenter and the xe CLI allow configuration of networking options, control over which NIC is used for
management operations, and creation of advanced networking features such as virtual local area networks
(VLANs) and NIC bonds.
From XenCenter much of the complexity of XenServer networking is hidden. There is no mention of PIFs
for XenServer hosts nor VIFs for VMs.
Networks
Each XenServer host has one or more networks, which are virtual Ethernet switches. Networks without an
association to a PIF are considered internal, and can be used to provide connectivity only between VMs
on a given XenServer host, with no connection to the outside world. Networks with a PIF association are
considered external, and provide a bridge between VIFs and the PIF connected to the network, enabling
connectivity to resources available through the PIF's NIC.
VLANs
Virtual Local Area Networks (VLANs), as defined by the IEEE 802.1Q standard, allow a single physical
network to support multiple logical networks. XenServer hosts can work with VLANs in multiple ways.
Note
All supported VLAN configurations are equally applicable to pools and standalone hosts, and bonded and
non-bonded configurations.
XenServer management interfaces cannot be assigned to a XenServer VLAN via a trunk port.
XenServer VLANs are represented by additional PIF objects representing VLAN interfaces corresponding
to a specified VLAN tag. XenServer networks can then be connected to the PIF representing the physical
NIC to see all traffic on the NIC, or to a PIF representing a VLAN to see only the traffic with the specified
VLAN tag.
For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource pool,
see the section called “Creating VLANs”.
NIC bonds
NIC bonds can improve XenServer host resiliency by using two physical NICs as if they were one. If one
NIC within the bond fails the host's network traffic will automatically be routed over the second NIC. NIC
bonds work in an active/active mode, with traffic balanced between the bonded NICs.
XenServer NIC bonds completely subsume the underlying physical devices (PIFs). In order to activate a
bond the underlying PIFs must not be in use, either as the management interface for the host or by running
VMs with VIFs attached to the networks associated with the PIFs.
XenServer NIC bonds are represented by additional PIFs. The bond PIF can then be connected to a
XenServer network to allow VM traffic and host management functions to occur over the bonded NIC. The
exact steps to use to create a NIC bond depend on the number of NICs in your host, and whether the
management interface of the host is assigned to a PIF to be used in the bond.
XenServer supports Source Level Balancing (SLB) NIC bonding. SLB bonding:
• is an active/active mode, but only supports load-balancing of VM traffic across the physical NICs
• provides fail-over support for all other traffic types
• does not require switch support for Etherchannel or 802.3ad (LACP)
• load balances traffic between multiple interfaces at VM granularity by sending traffic through different
interfaces based on the source MAC address of the packet
• is derived from the open source ALB mode and reuses the ALB capability to dynamically re-balance load
across interfaces
Any given VIF will only use one of the links in the bond at a time. At startup no guarantees are made about
the affinity of a given VIF to a link in the bond. However, for VIFs with high throughput, periodic rebalancing
ensures that the load on the links is approximately equal.
API Management traffic can be assigned to a XenServer bond interface and will be automatically load-
balanced across the physical NICs.
XenServer bonded PIFs do not require IP configuration for the bond when used for guest traffic. This is
because the bond operates at Layer 2 of the OSI model, the data link layer, and no IP addressing is used at this
layer. When used for non-guest traffic (to connect to it with XenCenter for management, or to connect to
shared network storage), one IP configuration is required per bond. (Incidentally, this is true of unbonded
PIFs as well, and is unchanged from XenServer 4.1.0.)
Gratuitous ARP packets are sent when assignment of traffic changes from one interface to another as a
result of fail-over.
Re-balancing is provided by the existing ALB re-balance capabilities: the number of bytes going over each
slave (interface) is tracked over a given period. When a packet is to be sent that contains a new source
MAC address it is assigned to the slave interface with the lowest utilization. Traffic is re-balanced every
10 seconds.
Note
Bonding is set up with an Up Delay of 31000ms and a Down Delay of 200ms. The seemingly long Up
Delay is purposeful because of the time taken by some switches to actually start routing traffic. Without it,
when a link comes back after failing, the bond might rebalance traffic onto it before the switch is ready to
pass traffic. If you want to move both connections to a different switch, move one, then wait 31 seconds
for it to be used again before moving the other.
When a XenServer host has a single NIC, the following configuration is present after installation:
When a host has multiple NICs the configuration present after installation depends on which NIC is selected
for management operations during installation:
In both cases the resulting networking configuration allows connection to the XenServer host by XenCenter,
the xe CLI, and any other management software running on separate machines via the IP address of the
management interface. The configuration also provides external networking for VMs created on the host.
The PIF used for management operations is the only PIF ever configured with an IP address. External
networking for VMs is achieved by bridging PIFs to VIFs using the network object which acts as a virtual
Ethernet switch.
The steps required for networking features such as VLANs, NIC bonds, and dedicating a NIC to storage
traffic are covered in the following sections.
To add or remove networks using XenCenter, refer to the XenCenter online Help.
xe network-create name-label=<mynetwork>
At this point the network is not connected to a PIF and therefore is internal.
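For example, to create a private internal network and connect a VM to it (a sketch; the network name and device number are placeholder assumptions, and the VIF device number must not clash with an existing VIF on the VM):

```shell
# Create the internal network; its UUID is returned
xe network-create name-label=private-net

# Attach a VM to it with a new VIF, then plug the VIF
xe vif-create vm-uuid=<vm_uuid> network-uuid=<network_uuid> device=1
xe vif-plug uuid=<vif_uuid>
```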
Having the same physical networking configuration for XenServer hosts within a pool is important because
all hosts in a pool share a common set of XenServer networks. PIFs on the individual hosts are connected to
pool-wide networks based on device name. For example, all XenServer hosts in a pool with an eth0 NIC will
have a corresponding PIF plugged into the pool-wide Network 0 network. The same will be true for hosts
with eth1 NICs and Network 1, as well as other NICs present in at least one XenServer host in the pool.
If one XenServer host has a different number of NICs than other hosts in the pool, complications can arise
because not all pool networks will be valid for all pool hosts. For example, if hosts host1 and host2 are
in the same pool and host1 has four NICs while host2 only has two, only the networks connected to PIFs
corresponding to eth0 and eth1 will be valid on host2. VMs on host1 with VIFs connected to networks
corresponding to eth2 and eth3 will not be able to migrate to host2.
All NICs of all XenServer hosts within a resource pool must be configured with the same MTU size.
Creating VLANs
For servers in a resource pool, you can use the pool-vlan-create command. This command creates the
VLAN and automatically creates and plugs in the required PIFs on the hosts in the pool. See the section
called “pool-vlan-create” for more information.
2. Use the network-create command to create a network for use with the VLAN. The UUID of the new
network is returned:
xe network-create name-label=network5
3. Use the pif-list command to find the UUID of the PIF corresponding to the physical NIC supporting the
desired VLAN tag. The UUIDs and device names of all PIFs are returned, including any existing VLANs:
xe pif-list
4. Use the vlan-create command to create a VLAN object, specifying the desired physical PIF and the
VLAN tag for the VMs to be connected to the new VLAN. A new PIF will be created and plugged into the
specified network. The UUID of the new PIF object is returned.
5. Attach VM VIFs to the new network. See the section called “Creating networks in a standalone server”
for more details.
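Putting the steps together for a standalone host, a VLAN with tag 5 might be created as follows (a sketch; UUIDs and the VLAN tag are placeholders):

```shell
# Create a network for the VLAN traffic
xe network-create name-label=network5

# Find the PIF for the physical NIC that will carry the VLAN
xe pif-list

# Create the VLAN; a new PIF is created and plugged into network5
xe vlan-create network-uuid=<network_uuid> pif-uuid=<pif_uuid> vlan=5
```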
This section describes how to use the xe CLI to create bonded NIC interfaces on a standalone XenServer
host. See the section called “Creating NIC bonds in resource pools” for details on using the xe CLI to create
NIC bonds on XenServer hosts that comprise a resource pool.
1. Use XenCenter or the vm-shutdown command to shut down all VMs with VIFs on the networks to be
bonded, forcing the VIFs to be unplugged:
xe vm-shutdown uuid=<vm_uuid>
2. Use the network-create command to create a new network for use with the bonded NIC. The UUID
of the new network is returned:
xe network-create name-label=<bond0>
3. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:
xe pif-list
4. Use the bond-create command to create the bond by specifying the newly created network UUID and
the UUIDs of the PIFs to be bonded separated by commas. The UUID for the bond is returned:
Note
See the section called “Controlling the MAC address of the bond” for details on controlling the MAC
address used for the bond PIF.
5. Use the pif-list command to determine the UUID of the new bond PIF:
xe pif-list device=<bond0>
6. Use the pif-reconfigure-ip command to configure the desired management interface IP address set-
tings for the bond PIF. See Chapter 8, Command line interface for more detail on the options available
for the pif-reconfigure-ip command.
7. Use the host-management-reconfigure command to move the management interface from the exist-
ing physical PIF to the bond PIF. This step will activate the bond:
xe host-management-reconfigure pif-uuid=<bond_pif_uuid>
8. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF
previously used for the management interface. This step is not strictly necessary but might help reduce
confusion when reviewing the host networking configuration.
9. Move existing VMs to the bond network using the vif-destroy and vif-create commands. This step
can also be completed using XenCenter by editing the VM configuration and connecting the existing
VIFs of a VM to the bond network.
10. Restart the VMs shut down in step 1.
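End to end, the standalone bonding procedure above can be sketched as follows (UUIDs and IP settings are placeholders):

```shell
# Create a network for the bond
xe network-create name-label=bond0

# Find the PIFs and create the bond; the bond UUID is returned
xe pif-list
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

# Find the bond PIF and give it the management IP configuration
xe pif-list device=bond0
xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=static IP=<ip> \
  netmask=<netmask> gateway=<gateway> DNS=<dns>

# Move the management interface onto the bond (this activates the bond)
xe host-management-reconfigure pif-uuid=<bond_pif_uuid>
```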
The MAC address of the bond can be changed from that of the PIF/NIC currently in use for the management interface,
but doing so will cause existing network sessions to the host to be dropped when the bond is enabled and
the MAC/IP address in use changes.
The MAC address to be used for a bond can be controlled in two ways:
• an optional mac parameter can be specified in the bond-create command. Using this parameter, the
bond MAC address can be set to any arbitrary address.
• If the mac parameter is not specified, the MAC address of the first PIF listed in the pif-uuids parameter
is used for the bond.
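For example, to pin the bond's MAC address explicitly at creation time using the optional mac parameter (the address shown is a placeholder):

```shell
xe bond-create network-uuid=<network_uuid> \
  pif-uuids=<pif_uuid_1>,<pif_uuid_2> mac=<xx:xx:xx:xx:xx:xx>
```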
• As when creating a bond, all VMs with VIFs on the bond must be shut down prior to destroying the bond.
After reverting to a non-bonded configuration, reconnect the VIFs to an appropriate network.
• Move the management interface to another PIF using the pif-reconfigure-ip and host-management-re-
configure commands prior to issuing the bond-destroy command, otherwise connections to the host
(including XenCenter) will be dropped.
Citrix recommends using XenCenter to create NIC bonds. For details, refer to the XenCenter help.
This section describes using the xe CLI to create bonded NIC interfaces on XenServer hosts that comprise
a resource pool. See the section called “Creating a NIC bond on a dual-NIC host” for details on using the
xe CLI to create NIC bonds on a standalone XenServer host.
Warning
Do not attempt to create network bonds while HA is enabled. The process of bond creation will disturb the
in-progress HA heartbeating and cause hosts to self-fence (shut themselves down); subsequently they
will likely fail to reboot properly and will need the host-emergency-ha-disable command to recover.
a. Use the network-create command to create a new pool-wide network for use with the bonded
NICs. The UUID of the new network is returned.
xe network-create name-label=<network_name>
b. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:
xe pif-list
c. Use the bond-create command to create the bond, specifying the network UUID created in step a
and the UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned:
Note
See the section called “Controlling the MAC address of the bond” for details on controlling the MAC
address used for the bond PIF.
d. Use the pif-list command to determine the UUID of the new bond PIF:
xe pif-list network-uuid=<network_uuid>
e. Use the pif-reconfigure-ip command to configure the desired management interface IP address
settings for the bond PIF. See Chapter 8, Command line interface, for more detail on the options
available for the pif-reconfigure-ip command.
f. Use the host-management-reconfigure command to move the management interface from the
existing physical PIF to the bond PIF. This step will activate the bond:
xe host-management-reconfigure pif-uuid=<bond_pif_uuid>
g. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded
PIF previously used for the management interface. This step is not strictly necessary but might
help reduce confusion when reviewing the host networking configuration.
3. Open a console on a host that you want to join to the pool and run the command:
The network and bond information is automatically replicated to the new host. However, the manage-
ment interface is not automatically moved from the host NIC to the bonded NIC. Move the management
interface on the host to enable the bond as follows:
a. Use the host-list command to find the UUID of the host being configured:
xe host-list
b. Use the pif-list command to determine the UUID of bond PIF on the new host. Include the host-
uuid parameter to list only the PIFs on the host being configured:
c. Use the pif-reconfigure-ip command to configure the desired management interface IP address
settings for the bond PIF. See Chapter 8, Command line interface, for more detail on the options
available for the pif-reconfigure-ip command. This command must be run directly on the host:
d. Use the host-management-reconfigure command to move the management interface from the
existing physical PIF to the bond PIF. This step activates the bond. This command must be run
directly on the host:
xe host-management-reconfigure pif-uuid=<bond_pif_uuid>
e. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded
PIF previously used for the management interface. This step is not strictly necessary but may help
reduce confusion when reviewing the host networking configuration. This command must be run
directly on the host server:
4. For each additional host you want to join to the pool, repeat steps 3 and 4 to move the management
interface on the host and to enable the bond.
Warning
Do not attempt to create network bonds while HA is enabled. The process of bond creation disturbs the
in-progress HA heartbeating and causes hosts to self-fence (shut themselves down); subsequently they
will likely fail to reboot properly and you will need to run the host-emergency-ha-disable command to
recover them.
Note
If you are not using XenCenter for NIC bonding, the quickest way to create pool-wide NIC bonds is to
create the bond on the master, and then restart the other pool members. Alternatively, you can use the
service xapi restart command. This causes the bond and VLAN settings on the master to be inherited
by each host. The management interface of each host must, however, be manually reconfigured.
When adding a NIC bond to an existing pool, the bond must be manually created on each host in the pool.
The steps below can be used to add NIC bonds on both the pool master and other hosts with the following
requirements:
1. Use the network-create command to create a new pool-wide network for use with the bonded NICs.
The UUID of the new network is returned:
xe network-create name-label=<bond0>
2. Use XenCenter or the vm-shutdown command to shut down all VMs in the host pool to force all existing
VIFs to be unplugged from their current networks. The existing VIFs will be invalid after the bond is
enabled.
xe vm-shutdown uuid=<vm_uuid>
3. Use the host-list command to find the UUID of the host being configured:
xe host-list
4. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond. Include the host-
uuid parameter to list only the PIFs on the host being configured:
xe pif-list host-uuid=<host_uuid>
5. Use the bond-create command to create the bond, specifying the network UUID created in step 1 and
the UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned.
Note
See the section called “Controlling the MAC address of the bond” for details on controlling the MAC
address used for the bond PIF.
6. Use the pif-list command to determine the UUID of the new bond PIF. Include the host-uuid param-
eter to list only the PIFs on the host being configured:
7. Use the pif-reconfigure-ip command to configure the desired management interface IP address set-
tings for the bond PIF. See Chapter 8, Command line interface for more detail on the options available
for the pif-reconfigure-ip command. This command must be run directly on the host:
8. Use the host-management-reconfigure command to move the management interface from the exist-
ing physical PIF to the bond PIF. This step will activate the bond. This command must be run directly
on the host:
xe host-management-reconfigure pif-uuid=<bond_pif_uuid>
9. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF
previously used for the management interface. This step is not strictly necessary, but might help reduce
confusion when reviewing the host networking configuration. This command must be run directly on
the host:
10. Move existing VMs to the bond network using the vif-destroy and vif-create commands. This step can
also be completed using XenCenter by editing the VM configuration and connecting the existing VIFs
of the VM to the bond network.
11. Repeat steps 3 - 10 for other hosts.
12. Restart the VMs previously shut down.
Assigning a NIC to a specific function will prevent the use of the NIC for other functions such as host
management, but requires that the appropriate network configuration be in place to ensure the NIC
is used for the desired traffic. For example, to dedicate a NIC to storage traffic, the NIC, storage target,
switch, and/or VLAN must be configured such that the target is only accessible over the assigned NIC. This
allows use of standard IP routing to control how traffic is routed between multiple NICs within a XenServer host.
Note
Before dedicating a network interface as a storage interface for use with iSCSI or NFS SRs, ensure
that the dedicated interface uses a separate IP subnet which is not routable from the main management
interface. If this is not enforced, then storage traffic may be directed over the main management interface
after a host reboot, due to the order in which network interfaces are initialized.
1. Ensure that the PIF is on a separate subnet, or routing is configured to suit your network topology in
order to force the desired traffic over the selected PIF.
2. Set up an IP configuration for the PIF, adding appropriate values for the mode parameter and, if using
static IP addressing, the IP, netmask, gateway, and DNS parameters:
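For example, a sketch of step 2 giving the storage PIF a static address on a dedicated subnet (the addresses are placeholder assumptions; no gateway is set, so that the subnet remains unroutable from the management interface):

```shell
# Assign a static IP on a subnet not routable from the management interface
xe pif-reconfigure-ip uuid=<storage_pif_uuid> mode=static \
  IP=192.168.10.5 netmask=255.255.255.0
```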
If you want to use a storage interface that can be routed from the management interface also (bearing in
mind that this configuration is not recommended), then you have two options:
• After a host reboot, ensure that the storage interface is correctly configured, and use the xe pbd-unplug
and xe pbd-plug commands to reinitialize the storage connections on the host. This will restart the storage
connection and route it over the correct interface.
• Alternatively, you can use xe pif-forget to remove the interface from the XenServer database, and man-
ually configure it in the control domain. This is an advanced option and requires you to be familiar with
how to manually configure Linux networking.
For example, to limit a VIF to a maximum transfer rate of 100kb/s, use the vif-param-set command:
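A sketch of such a command pair, using the ratelimit QoS algorithm with a kbps key (the VIF UUID is a placeholder):

```shell
xe vif-param-set uuid=<vif_uuid> qos_algorithm_type=ratelimit
xe vif-param-set uuid=<vif_uuid> qos_algorithm_params:kbps=100
```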
Hostname
The system hostname is defined in the pool-wide database and modified using the xe host-set-host-
name-live CLI command as follows:
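For example (a sketch; the host UUID and hostname are placeholders):

```shell
xe host-set-hostname-live host-uuid=<host_uuid> host-name=<new_hostname>
```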
The underlying control domain hostname changes dynamically to reflect the new hostname.
DNS servers
To add or remove DNS servers in the IP addressing configuration of a XenServer host, use the pif-recon-
figure-ip command. For example, for a PIF with a static IP:
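A sketch of such a command for a statically-addressed PIF, with placeholder addresses (note that pif-reconfigure-ip sets the full IP configuration, so the existing mode, IP, netmask, and gateway values must be restated along with the new DNS value):

```shell
xe pif-reconfigure-ip uuid=<pif_uuid> mode=static IP=<ip> \
  netmask=<netmask> gateway=<gateway> DNS=<dns_ip_1>,<dns_ip_2>
```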
To modify the IP address configuration of a PIF, use the pif-reconfigure-ip CLI command. See the section
called “pif-reconfigure-ip” for details on the parameters of the pif-reconfigure-ip command.
Note
See the section called “Changing IP address configuration in resource pools” for details on changing host
IP addresses in resource pools.
Note
Use caution when changing the IP address of a server or other networking parameters.
Depending on the network topology and the change being made, connections to network storage may
be lost. If this happens, the storage must be replugged using the Repair Storage function in XenCenter,
or the pbd-plug command in the CLI. For this reason, it may be advisable to migrate VMs away from
the server before changing its IP configuration.
1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Chapter 8, Command
line interface for details on the parameters of the pif-reconfigure-ip command:
2. Use the host-list CLI command to confirm that the member host has successfully reconnected to the
master host by checking that all the other XenServer hosts in the pool are visible:
xe host-list
Changing the IP address of the master XenServer host requires additional steps because each of the mem-
ber hosts uses the advertised IP address of the pool master for communication and will not know how to
contact the master when its IP address changes.
Whenever possible, assign the pool master a dedicated IP address that is not likely to change for the
lifetime of the pool.
2. When the IP address of the pool master host is changed, all member hosts will enter emergency
mode when they fail to contact the master host.
3. On the master XenServer host, use the pool-recover-slaves command to force the master to contact
each of the member hosts and inform them of the new master IP address:
xe pool-recover-slaves
Refer to the section called “Master failures” for more information on emergency mode.
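The overall sequence for changing the master's IP address can be sketched as follows, run on the master (the PIF UUID and IP settings are placeholders):

```shell
# Reconfigure the management PIF on the master with the new address;
# member hosts enter emergency mode when they lose contact with the master
xe pif-reconfigure-ip uuid=<master_mgmt_pif_uuid> mode=static IP=<new_ip> \
  netmask=<netmask> gateway=<gateway> DNS=<dns>

# Tell the member hosts about the new master address
xe pool-recover-slaves
```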
Management interface
When XenServer is installed on a host with multiple NICs, one NIC is selected for use as the management
interface. The management interface is used for XenCenter connections to the host and for host-to-host
communication.
xe pif-list
2. Use the pif-param-list command to verify the IP addressing configuration for the PIF that will be used
for the management interface. If necessary, use the pif-reconfigure-ip command to configure IP ad-
dressing for the PIF to be used. See Chapter 8, Command line interface for more detail on the options
available for the pif-reconfigure-ip command.
xe pif-param-list uuid=<pif_uuid>
3. Use the host-management-reconfigure CLI command to change the PIF used for the management
interface. If this host is part of a resource pool, this command must be issued on the member host
console:
xe host-management-reconfigure pif-uuid=<pif_uuid>
Warning
Once the management interface is disabled, you will have to log in on the physical host console to perform
management tasks and external interfaces such as XenCenter will no longer work.
xe pif-list params=uuid,device,MAC,currently-attached,carrier,management, \
IP-configuration-mode
If the hosts have already been joined in a pool, add the host-uuid parameter to the pif-list command to
scope the results to the PIFs on a given host.
Re-ordering NICs
It is not possible to directly rename a PIF, although you can use the pif-forget and pif-introduce commands
to achieve the same effect with the following restrictions:
• The XenServer host must be standalone and not joined to a resource pool.
• Re-ordering a PIF configured as the management interface of the host requires additional steps which are
included in the example below. Because the management interface must first be disabled the commands
must be entered directly on the host console.
For the example configuration shown above use the following steps to change the NIC ordering so that
eth0 corresponds to the device with a MAC address of 00:19:bb:2d:7e:7a:
1. Use XenCenter or the vm-shutdown command to shut down all VMs in the pool to force existing VIFs
to be unplugged from their networks.
xe vm-shutdown uuid=<vm_uuid>
2. Use the host-management-disable command to disable the management interface. This command
must be entered directly on the host console:
xe host-management-disable
3. Use the pif-forget command to remove the two incorrect PIF records:
xe pif-forget uuid=1ef8209d-5db5-cf69-3fe6-0e8d24f8f518
xe pif-forget uuid=829fd476-2bbb-67bb-139f-d607c09e9110
4. Use the pif-introduce command to re-introduce the devices with the desired naming:
5. Use the pif-list command to verify the new configuration:
xe pif-list params=uuid,device,MAC
6. Use the pif-reconfigure-ip command to reset the management interface IP addressing configuration.
See Chapter 8, Command line interface for details on the parameters of the pif-reconfigure-ip com-
mand.
7. Use the host-management-reconfigure command to set the management interface to the desired
PIF and re-enable external management connectivity to the host:
xe host-management-reconfigure pif-uuid=728d9e7f-62ed-a477-2c71-3974d75972eb
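For the MAC address given above, step 4 of the re-ordering procedure might look like the following sketch (the host UUID and the second NIC's MAC address are placeholders):

```shell
# Re-introduce eth0 bound to the NIC with the desired MAC address
xe pif-introduce device=eth0 host-uuid=<host_uuid> mac=00:19:bb:2d:7e:7a

# Re-introduce eth1 bound to the other NIC's MAC address
xe pif-introduce device=eth1 host-uuid=<host_uuid> mac=<other_mac>
```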
Networking Troubleshooting
If you are having problems with configuring networking, first ensure that you have not directly modified any
of the control domain ifcfg-* files. These files are managed by the control domain host
agent, and any changes will be overwritten.
If the problem persists, you can use the CLI to disable receive/transmit offload optimizations on the
physical interface.
Warning
Disabling receive/transmit offload optimizations can result in a performance loss and/or increased CPU
usage.
First, determine the UUID of the physical interface. You can filter on the device field as follows:
xe pif-list device=eth0
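The offload settings themselves are toggled through the PIF's other-config keys and take effect when the PIF is next plugged. The commands below are a sketch; the key names (other-config:ethtool-rx and other-config:ethtool-tx) follow the standard xe ethtool convention, so verify them against your release before relying on them:

```shell
# Disable receive and transmit offload on the PIF identified above.
# <pif_uuid> is the UUID returned by the pif-list command.
xe pif-param-set uuid=<pif_uuid> other-config:ethtool-rx="off"
xe pif-param-set uuid=<pif_uuid> other-config:ethtool-tx="off"
```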
Finally, re-plug the PIF or reboot the host for the change to take effect.
If a loss of networking occurs, the following notes may be useful in regaining network connectivity:
• Citrix recommends that you ensure networking configuration is set up correctly before creating a resource
pool, as it is usually easier to recover from a bad configuration in a non-pooled state.
• The host-management-reconfigure and host-management-disable commands affect the XenServer
host on which they are run and so are not suitable for use on one host in a pool to change the configuration
of another. Run these commands directly on the console of the XenServer host to be affected, or use the
xe -s, -u, and -pw remote connection options.
• When the xapi service starts, it will apply configuration to the management interface first. The name of
the management interface is saved in the /etc/xensource-inventory file. In extreme cases, you can
stop the xapi service by running service xapi stop at the console, edit the inventory file to set the man-
agement interface to a safe default, and then ensure that the ifcfg files in /etc/sysconfig/net-
work-scripts have correct configurations for a minimal network configuration (including one interface
and one bridge; for example, eth0 on the xenbr0 bridge).
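As a sketch, the recovery sequence described above looks like the following on the host console. The MANAGEMENT_INTERFACE key name shown in the comment is an assumption; check your own /etc/xensource-inventory for the exact entry before editing it:

```shell
# Stop the host agent so it does not rewrite the configuration while you edit it
service xapi stop

# Edit /etc/xensource-inventory and point the management interface at a
# known-good bridge, for example (key name assumed):
#   MANAGEMENT_INTERFACE='xenbr0'

# Confirm that /etc/sysconfig/network-scripts/ifcfg-eth0 (and the matching
# bridge configuration) describe a minimal working setup, then restart the agent
service xapi start
```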
Chapter 5. Workload Balancing
Workload Balancing Overview
Workload Balancing is a XenServer feature that helps you balance virtual-machine workloads across hosts
and place VMs on the best possible servers for their workload in a resource pool. When Workload Balancing
places a virtual machine, it determines the best host on which to start it, or it rebalances the
workload across hosts in a pool. For example, Workload Balancing can help you determine where to start,
restart, or relocate a virtual machine.
When Workload Balancing is enabled, if you put a host into Maintenance Mode, Workload Balancing selects
the optimal server for each of the host's virtual machines. For virtual machines taken offline, Workload
Balancing provides recommendations to help you restart virtual machines on the optimal server in the pool.
Workload Balancing also lets you balance virtual-machine workloads across hosts in a XenServer resource
pool. When the workload on a host exceeds the level you set as acceptable (the threshold), Workload Bal-
ancing will make recommendations to move part of its workload (for example, one or two virtual machines)
to a less-taxed host in the same pool. It does this by evaluating the existing workloads on hosts against
resource performance on other hosts.
You can also use Workload Balancing to help determine if you can power off hosts at certain times of day.
Workload Balancing performs these tasks by analyzing XenServer resource-pool metrics and recommend-
ing optimizations. You decide if you want these recommendations geared towards resource performance
or hardware density. You can fine-tune the weighting of individual resource metrics (CPU, network, memo-
ry, and disk) so that the placement recommendations and critical thresholds align with your environment's
needs.
To help you perform capacity planning, Workload Balancing provides historical reports about host and pool
health, optimization and virtual-machine performance, and virtual-machine motion history.
Workload Balancing captures data for resource performance on virtual machines and physical hosts. It uses
this data, combined with the preferences you set, to provide optimization and placement recommendations.
Workload Balancing stores performance data in a SQL Server database: the longer Workload Balancing
runs, the more precise its recommendations become.
Workload Balancing recommends moving virtual-machine workloads across a pool to get the maximum ef-
ficiency, which means either performance or density depending on your goals. Within a Workload Balancing
context:
• Performance refers to the usage of physical resources on a host (for example, the CPU, memory, net-
work, and disk utilization on a host). When you set Workload Balancing to maximize performance, it rec-
ommends placing virtual machines to ensure the maximum amount of resources are available for each
virtual machine.
• Density refers to the number of virtual machines on a host. When you set Workload Balancing to maximize
density, it recommends placing virtual machines so that you can reduce the number of hosts powered on
in a pool, while still ensuring each virtual machine has adequate computing power.
Workload Balancing configuration preferences include settings for placement (performance or density), vir-
tual CPUs, and performance thresholds.
Workload Balancing does not conflict with settings you already specified for High Availability. Citrix designed
the features to work in conjunction with each other.
• Workload Balancing server. Collects data from the virtual machines and their hosts and writes the data
to the data store. This service is also referred to as the "data collector."
• Data Store. A Microsoft SQL Server or SQL Server Express database that stores performance and con-
figuration data.
For more information about Workload Balancing components for large deployments with multiple servers,
see Multiple Server Deployments.
Because one data collector can monitor multiple resource pools, you do not need multiple data collectors
to monitor multiple pools.
The following table shows the advantages and disadvantages of a single-server deployment:
Advantages Disadvantages
• Using SQL Server for the data store. In large environments, consider using SQL Server for the data
store instead of SQL Server Express. Because SQL Server Express has a 4GB disk-space limit, Workload
Balancing limits the data store to 3.5GB when installed on this database. SQL Server has no preset disk-
space limitation.
• Deploying the data store on a dedicated server. If you deploy SQL Server on a dedicated server
(instead of collocating it on the same computer as the other Workload Balancing services), you can let
it use more memory.
Increasing Availability
If Workload Balancing's recommendations or reports are critical in your environment, consider implementing
strategies to ensure high availability, such as one of the following:
• Running the Workload Balancing services on a Windows cluster. However, Workload Balancing services
are not "cluster aware," so if the primary server in the cluster fails, any pending requests are lost
when the secondary server in the cluster takes over.
• Making Workload Balancing part of a XenServer resource pool with High Availability enabled.
• Data Collection Manager service. Collects data from the virtual machines and their hosts and writes the
data to the data store. This service is also referred to as the "data collector."
• Web Service Host. Facilitates communications between the XenServer and the Analysis Engine. Requires
a security certificate, which you can create or provide during Setup.
• Analysis Engine service. Monitors resource pools and determines if a resource pool needs optimizations.
The size of your XenServer environment affects your Workload Balancing design. Because every environment
is different, the size definitions that follow are only examples of environments of each size:
Size Example
Small
In large environments, you may need to run Workload Balancing's services on multiple servers, for
example to reduce bottlenecks. If you decide to deploy Workload Balancing's services on multiple
computers, all servers must be members of mutually trusted Active Directory domains.
Workload Balancing supports multiple data collectors, which might be beneficial in environments with many
resource pools. When you deploy multiple data collectors, the data collectors work together to ensure all
XenServer pools are being monitored at all times.
All data collectors collect data from their own resource pools. One data collector, referred to as the master,
also does the following:
• Checks for configuration changes and determines the relationships between resource pools and data
collectors
• Checks for new XenServer resource pools to monitor and assigns these pools to a data collector
• Monitors the health of the other data collectors
If a data collector goes offline or you add a new resource pool, the master data collector rebalances the
workload across the data collectors. If the master data collector goes offline, another data collector assumes
the role of the master.
• When you install Workload Balancing on SQL Server Express, Workload Balancing limits the size of the
metrics data to 3.5GB. If the data grows beyond this size, Workload Balancing automatically starts
grooming the data, deleting the oldest data first.
• Citrix recommends putting the data store on one computer and the Workload Balancing services on
another computer.
• For Workload Balancing data-store operations, memory utilization is the largest consideration.
Important
Citrix does not recommend changing the privileges or accounts under which the Workload Balancing
services run.
Encryption Requirements
XenServer communicates with Workload Balancing using HTTPS. Consequently, you must create or install
an SSL/TLS certificate when you install Workload Balancing (or the Web Services Host, if it is on a separate
server). You can either use a certificate from a Trusted Authority or create a self-signed certificate using
Workload Balancing Setup.
The self-signed certificate Workload Balancing Setup creates is not from a Trusted Authority. If you do
not want to use this self-signed certificate, prepare a certificate before you begin Setup and specify that
certificate when prompted.
If desired, during Workload Balancing Setup, you can export the certificate so that you can import it into
XenServer after Setup.
Note
If you create a self-signed certificate during Workload Balancing Setup, Citrix recommends that you even-
tually replace this certificate with one from a Trusted Authority.
Domain Considerations
When deploying Workload Balancing, your environment determines your domain and security requirements.
• If your Workload Balancing services are on multiple computers, the computers must be part of a domain.
• If your Workload Balancing components are in separate domains, you must configure trust relationships
between those domains.
Typically, you install and configure Workload Balancing after you have created one or more XenServer
resource pools in your environment.
You install all Workload Balancing functions, such as the Workload Balancing data store, the Analysis En-
gine, and the Web Service Host, from Setup.
• Installation Wizard. Start the installation wizard from Setup.exe. Citrix suggests installing Workload
Balancing from the installation wizard because this method checks that your system meets the installation
requirements.
• Command Line. If you install Workload Balancing from the command line, the prerequisites are not
checked. For Msiexec properties, see the section called “Windows Installer Commands for Workload
Balancing”.
When you install the Workload Balancing data store, Setup creates the database. You do not need to run
Workload Balancing Setup locally on the database server: Setup supports installing the data store across
a network.
If you are installing Workload Balancing services as components on separate computers, you must install
the database component before the Workload Balancing services.
After installation, you must configure Workload Balancing before you can use it to optimize workloads. For
information, see the section called “Initializing and Configuring Workload Balancing”.
For information about System Requirements, see the section called “Workload Balancing System Require-
ments”. For installation instructions, see the section called “Installing Workload Balancing”.
For information about data store requirements, see the section called “Workload Balancing Data Store
Requirements”.
If you are installing with the User Account Control (UAC) enabled, see Microsoft's documentation.
Recommended Hardware
Unless otherwise noted, Workload Balancing components require the following hardware (32-bit and 64-bit):
When all Workload Balancing services are installed on the same server, Citrix recommends that the server
have a minimum of a dual-core processor.
Analysis Engine
Note
In this topic, the term SQL Server refers to both SQL Server and SQL Server Express unless the version
is mentioned explicitly.
Note
During installation, Setup must connect and authenticate to the database server to create the data store.
Configure the SQL Server database instance to use either Windows Authentication or Mixed Mode
authentication (Windows Authentication and SQL Server Authentication).
If you create an account on the database for use during Setup, the account must have sysadmin privileges
for the database instance where you want to create the Workload Balancing data store.
After installing SQL Server Express 2008 or SQL Server 2008, you must install the SQL Server 2005 Back-
ward Compatibility Components on all Workload Balancing computers before running Workload Balancing
Setup. The Backward Compatibility components let Workload Balancing Setup configure the database.
The Workload Balancing installation media includes the 32-bit editions of SQL Server Express 2008 and
the SQL Server 2005 Backward Compatibility Components.
• While some SQL Server editions may include the Backward Compatibility components with their installa-
tion programs, their Setup program might not install them by default.
• You can also obtain the Backward Compatibility components from the download page for the latest Mi-
crosoft SQL Server 2008 Feature Pack.
Workload Balancing supports the following operating system languages:
• US English
• Japanese (Native JP)
Note
In configurations where the database and Web server are installed on separate servers, the operating
system languages must match on both computers.
Preinstallation Considerations
You may need to configure software in your environment so that Workload Balancing can function correct-
ly. Review the following considerations and determine if they apply to your environment. Also, check the
XenServer readme for additional, late-breaking release-specific requirements.
• Account for Workload Balancing. Before Setup, you must create a user account for XenServer to use
to connect to Workload Balancing (specifically the Web Service Host service).
This user account can be either a domain account or an account local to the computer running Workload
Balancing (or the Web Service Host service).
Important
When you create this account in Windows, Citrix suggests enabling the Password never expires option.
During Setup, you must specify the authorization type (a single user or group) and the user or group
with permissions to make requests of the Web Service Host service. For additional information, see the
section called “Authorization for Workload Balancing ”.
• SSL/TLS Certificate. XenServer and Workload Balancing communicate over HTTPS. Consequently, dur-
ing Workload Balancing Setup, you must provide either an SSL/TLS certificate from a Trusted Authority
or create a self-signed certificate.
• Group Policy. If the server on which you are installing Workload Balancing is a member of a Group Policy
Organizational Unit, ensure that no current or scheduled policies prohibit Workload Balancing or its
services from running.
Note
In addition, review the applicable release notes for release-specific configuration information.
1. Install a SQL Server or SQL Server Express database as described in Workload Balancing Data Store
Requirements.
2. Have a login on the SQL Server database instance that has SQL Login creation privileges. For SQL
Server Authentication, the account needs sysadmin privileges.
3. Create an account for Workload Balancing, as described in Preinstallation Considerations and have its
name on hand.
4. Configure all Workload Balancing servers to meet the system requirements described in Workload Bal-
ancing System Requirements.
After Setup finishes installing Workload Balancing, you must configure Workload Balancing before it
begins gathering data and making recommendations.
1. Launch the Workload Balancing Setup wizard from Autorun.exe, and select the Workload Balancing
installation option.
2. After the initial Welcome page appears, click Next.
3. In the Setup Type page, select Workload Balancing Services and Data Store, and click Next. This
option lets you install Workload Balancing, including the Web Services Host, Analysis Engine, and Data
Collection Manager services. After you click Next, Workload Balancing Setup verifies that your system
has the correct prerequisites.
4. Accept the End-User License Agreement.
5. In the Component Selection page, select all of the following components:
• Database. Creates and configures a database for the Workload Balancing data store.
• Services.
• Data Collection Manager. Installs the Data Collection Manager service, which collects data from the
virtual machines and their hosts and writes this data to the data store.
• Analysis Engine. Installs the Analysis Engine service, which monitors resource pools and recom-
mends optimizations by evaluating the performance metrics the data collector gathered.
• Web Service Host. Installs the service for the Web Service Host, which facilitates communications
between XenServer and the Analysis Engine.
If you enable the Web Service Host component, Setup prompts you for a security certificate. You can
either use the self-signed certificate Workload Balancing Setup provides or specify a certificate from
a Trusted Authority.
6. In the Database Server page, in the SQL Server Selection section, select one of the following:
• Enter the name of a database server. Lets you type the name of the database server that will host
the data store. Use this option to specify an instance name.
Note
If you installed SQL Express and specified an instance name, append the server name with \yourin-
stancename. If you installed SQL Express without specifying an instance name, append the server name
with \sqlexpress.
• Choose an existing database server. Lets you select the database server from a list of servers
Workload Balancing Setup detected on your network. Use the first option (Enter the name of a database
server) if you specified an instance name.
7. In the Install Using section, select one of the following methods of authentication:
• Windows Authentication. This option uses your current credentials (that is, the Windows credentials
you used to log on to the computer on which you are installing Workload Balancing). To select this
option, your current Windows credentials must have been added as a login to the SQL Server database
server (instance).
• SQL Server Authentication. To select this option, you must have configured SQL Server to support
Mixed Mode authentication.
Note
Citrix recommends clicking Test Connect to ensure Setup can use the credentials you provided to contact
the database server.
8. In the Database Information page, select Install a new Workload Balancing data store and type the
name you want to assign to the Workload Balancing database in SQL Server. The default database name
is WorkloadBalancing.
9. In the Web Service Host Account Information page, select HTTPS end point (selected by default).
Edit the port number, if necessary; the port is set to 8012 by default.
Note
If you are using Workload Balancing with XenServer, you must select HTTPS end points. XenServer can
only communicate with the Workload Balancing feature over SSL/TLS. If you change the port here, you
must also change it on XenServer using either the Configure Workload Balancing wizard or the XE
commands.
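When the port changes, the xe side of the configuration can be updated by re-registering the Workload Balancing server with the pool. The command below is a hedged sketch: the parameter names and the example host name and port are illustrative, so confirm the pool-initialize-wlb syntax in the command line interface chapter for your release:

```shell
# Re-point XenServer at the Workload Balancing server on the new port
# (wlb-server.example.com and 8013 are illustrative values).
xe pool-initialize-wlb wlb_url=wlb-server.example.com:8013 \
    wlb_username=workloadbalancing_user wlb_password=<wlb_password> \
    xenserver_username=root xenserver_password=<xs_password>
```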
10. For the account (on the Workload Balancing server) that XenServer will use to connect to Workload
Balancing, select the authorization type, User or Group, and type one of the following:
• User name. Enter the name of the account you created for XenServer (for example,
workloadbalancing_user).
• Group name. Enter the group name for the account you created. Specifying a group name lets you
specify a group of users that have been granted permission to connect to the Web Service Host on the
Workload Balancing server. Specifying a group name lets more than one person in your organization
log on to Workload Balancing with their own credentials. (Otherwise, you will need to provide all users
with the same set of credentials to use for Workload Balancing.)
Specifying the authorization type lets Workload Balancing recognize the XenServer's connection. For
more information, see the section called “Authorization for Workload Balancing ”. You do not specify the
password until you configure Workload Balancing.
11. In the SSL/TLS Certificate page, select one of the following certificate options:
• Select existing certificate from a Trusted Authority. Specifies a certificate you generated from a
Trusted Authority before Setup. Click Browse to navigate to the certificate.
• Create a self-signed certificate with subject name. Setup creates a self-signed certificate for the
Workload Balancing server. Delete the certificate-chain text and enter a subject name.
• Export this certificate for import into the certificate store on XenServer. If you want to import
the certificate into the Trusted Root Certification Authorities store on the computer running XenServer,
select this check box. Enter the full path and file name where you want the certificate saved.
12. Click Install.
1. From any server with network access to the database, launch the Workload Balancing Setup wizard
from Autorun.exe, and select the WorkloadBalancing installation option.
2. After the initial Welcome page appears, click Next.
3. In the Setup Type page, select Workload Balancing Database Only, and click Next.
This option lets you install the Workload Balancing data store only.
After you click Next, Workload Balancing Setup verifies that your system has the correct prerequisites.
4. Accept the End-User License Agreement, and click Next.
5. In the Component Selection page, accept the default installation and click Next.
This option creates and configures a database for the Workload Balancing data store.
6. In the Database Server page, in the SQL Server Selection section, select one of the following:
• Enter the name of a database server. Lets you type the name of the database server that will host
the data store. Use this option to specify an instance name.
Note
If you installed SQL Express and specified an instance name, append the server name with \yourin-
stancename. If you installed SQL Express without specifying an instance name, append the server name
with \sqlexpress.
• Choose an existing database server. Lets you select the database server from a list of servers
Workload Balancing Setup detected on your network.
7. In the Install Using section, select one of the following methods of authentication:
• Windows Authentication. This option uses your current credentials (that is, the Windows credentials
you used to log on to the computer on which you are installing Workload Balancing). To select this
option, your current Windows credentials must have been added as a login to the SQL Server database
server (instance).
• SQL Server Authentication. To select this option, you must have configured SQL Server to support
Mixed Mode authentication.
Note
Citrix recommends clicking Test Connect to ensure Setup can use the credentials you provided to contact
the database server.
8. In the Database Information page, select Install a new Workload Balancing data store and type the
name you want to assign to the Workload Balancing database in SQL Server. The default database name
is WorkloadBalancing.
9. Click Install to install the data store.
1. Launch the Workload Balancing Setup wizard from Autorun.exe, and select the WorkloadBalancing
installation option.
2. After the initial Welcome page appears, click Next.
3. In the Setup Type page, select Workload Balancing Server Services and Database.
This option lets you install Workload Balancing, including the Web Services Host, Analysis Engine, and
Data Collection Manager services.
Workload Balancing Setup verifies that your system has the correct prerequisites.
4. Accept the End-User License Agreement, and click Next.
5. In the Component Selection page, select the services you want to install:
• Services .
• Data Collection Manager. Installs the Data Collection Manager service, which collects data from the
virtual machines and their hosts and writes this data to the data store.
• Analysis Engine. Installs the Analysis Engine service, which monitors resource pools and recom-
mends optimizations by evaluating the performance metrics the data collector gathered.
• Web Service Host. Installs the service for the Web Service Host, which facilitates communications
between XenServer and the Analysis Engine.
If you enable the Web Service Host component, Setup prompts you for a security certificate. You can
either use the self-signed certificate Workload Balancing Setup provides or specify a certificate from
a Trusted Authority.
6. In the Database Server page, in the SQL Server Selection section, select one of the following:
• Enter the name of a database server. Lets you type the name of the database server that is hosting
the data store.
Note
If you installed SQL Express and specified an instance name, append the server name with \yourin-
stancename. If you installed SQL Express without specifying an instance name, append the server name
with \sqlexpress.
• Choose an existing database server. Lets you select the database server from a list of servers
Workload Balancing Setup detected on your network.
Note
Citrix recommends clicking Test Connect to ensure Setup can use the credentials you provided to contact
the database server successfully.
7. In the Web Service Information page, select HTTPS end point (selected by default) and edit the port
number, if necessary. The port is set to 8012 by default.
Note
If you are using Workload Balancing with XenServer, you must select HTTPS end points. XenServer can
only communicate with the Workload Balancing feature over SSL/TLS. If you change the port here, you
must also change it on XenServer using either the Configure Workload Balancing wizard or the XE
commands.
8. For the account (on the Workload Balancing server) that XenServer will use to connect to Workload
Balancing, select the authorization type, User or Group, and type one of the following:
• User name. Enter the name of the account you created for XenServer (for example,
workloadbalancing_user).
• Group name. Enter the group name for the account you created. Specifying a group name lets more
than one person in your organization log on to Workload Balancing with their own credentials. (Other-
wise, you will need to provide all users with the same set of credentials to use for Workload Balancing.)
Specifying the authorization type lets Workload Balancing recognize the XenServer's connection. For
more information, see the section called “Authorization for Workload Balancing ”. You do not specify
the password until you configure Workload Balancing.
9. In the SSL/TLS Certificate page, select one of the following certificate options:
• Select existing certificate from a Trusted Authority. Specifies a certificate you generated from a
Trusted Authority before Setup. Click Browse to navigate to the certificate.
• Create a self-signed certificate with subject name. Setup creates a self-signed certificate for the
Workload Balancing server. To change the name of the certificate Setup creates, type a different name.
• Export this certificate for import into the certificate store on XenServer. If you want to import
the certificate into the Trusted Root Certification Authorities store on the computer running XenServer,
select this check box. Enter the full path and file name where you want the certificate saved.
10. Click Install.
Workload Balancing Setup does not install an icon in the Windows Start menu. Use this procedure to verify
that Workload Balancing installed correctly before trying to connect to Workload Balancing with the Workload
Balancing Configuration wizard.
1. Verify that Windows Add or Remove Programs (Windows XP) lists Citrix Workload Balancing in its
list of currently installed programs.
2. Check for the following services in the Windows Services panel:
• Citrix WLB Analysis Engine
• Citrix WLB Data Collection Manager
• Citrix WLB Web Service Host
All of these services must be started and running before you start configuring Workload Balancing.
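You can also check the services from a command prompt. This sketch assumes the display names listed above are also the registered Windows service names, which may not hold on your system; if a query fails, look up the actual service names in the Services panel:

```shell
# Query each Workload Balancing service; the STATE line should report RUNNING.
sc query "Citrix WLB Analysis Engine"
sc query "Citrix WLB Data Collection Manager"
sc query "Citrix WLB Web Service Host"
```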
3. If Workload Balancing appears to be missing, check the installation log to see if it installed successfully:
• If you used the Setup wizard, the log is at %Documents and Settings%\username\Local Settings\Temp
\msibootstrapper2CSM_MSI_Install.log (by default). On Windows Vista and Windows Server 2008,
this log is at %Users%\username\AppData\Local\Temp\msibootstrapper2CSM_MSI_Install.log. User
name is the name of the user logged on during installation.
• If you used the Setup properties (Msiexec), the log is at C:\log.txt (by default) or wherever you specified
for Setup to create it.
Set properties by adding Property="value" on the command line after other switches and parameters.
The following sample command line performs a full installation of the Workload Balancing Windows Installer
package and creates a log file to capture information about this operation.
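A representative command line of this form is sketched below. The property values shown (the log path from the defaults described later, and the example database server) are assumptions for illustration:

```shell
# Full installation of the 32-bit Workload Balancing package with verbose logging.
msiexec /i workloadbalancing.msi ADDLOCAL=Complete ^
    DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS" DBNAME=WorkloadBalancing ^
    /l*v C:\log.txt
```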
There are two Workload Balancing Windows Installer packages: workloadbalancing.msi and
workloadbalancingx64.msi. If you are installing Workload Balancing on a 64-bit operating system, specify
workloadbalancingx64.msi.
To see if Workload Balancing Setup succeeded, see the section called “To verify your Workload Balancing
installation”.
Important
If you install Workload Balancing using Windows Installer commands and the system is missing
prerequisites, Setup does not provide error messages. Instead, installation fails.
ADDLOCAL
Definition
Specifies one or more Workload Balancing features to install. The values of ADDLOCAL are Workload
Balancing components and services.
Possible values
• Database. Installs the Workload Balancing data store.
• Complete. Installs all Workload Balancing features and components.
• Services. Installs all Workload Balancing services, including the Data Collection Manager, the Analysis
Engine, and the Web Service Host service.
• DataCollection. Installs the Data Collection Manager service.
• Analysis_Engine. Installs the Analysis Engine service.
• DWM_Web_Service. Installs the Web Service Host service.
Default value
Blank
Remarks
• Separate entries by commas.
• All features specified with ADDLOCAL are installed locally.
• You must install the data store on a shared or dedicated server before installing other services.
• You can install services standalone, without installing the database simultaneously, only if you
already have a Workload Balancing data store installed and specify it in the installation script using
DBNAME and DATABASESERVER. See the section called “DBNAME” and the section called “DATABASESERVER”
for more information.
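For instance, a services-only installation against an existing data store might look like the following sketch (the server name, instance name, and log path are illustrative values, not defaults from this release):

```shell
# Install only the Workload Balancing services, pointing them at an
# existing data store on another server.
msiexec /i workloadbalancing.msi ADDLOCAL=Services ^
    DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS" DBNAME=WorkloadBalancing ^
    /l*v C:\log.txt
```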
CERT_CHOICE
Definition
Specifies whether Setup creates a new certificate or uses an existing one.
Possible values
• 0. Specifies for Setup to create a new certificate.
• 1. Specifies an existing certificate.
Default value
1
Remarks
• You must also specify CERTNAMEPICKED. See the section called “CERTNAMEPICKED” for more information.
CERTNAMEPICKED
Definition
Specifies the subject name when you use Setup to create a self-signed SSL/TLS certificate. Alternatively,
this specifies an existing certificate.
Possible values
cn. Use to specify the subject name of the certificate to use or create.
Example
cn=wlb-kirkwood, where wlb-kirkwood is the name you are specifying as the name of the certificate
to create or the certificate you want to select.
Default value
Blank.
Remarks
You must specify this parameter with the CERT_CHOICE parameter. See the section called “CERT_CHOICE”
for more information.
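For example, a hypothetical fragment of an installation command that directs Setup to create a self-signed certificate with a given subject name (the certificate name is a placeholder taken from the example above):

```
msiexec /i workloadbalancing.msi /qn
  CERT_CHOICE="0" CERTNAMEPICKED="cn=wlb-kirkwood"
```

CERT_CHOICE="0" tells Setup to create a new certificate, and CERTNAMEPICKED supplies its subject name.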
DATABASESERVER
Definition
Specifies the database server, and its instance name, where you want to install the data store. You can also use this property to specify an existing database that you want to use or upgrade.
Possible values
User defined.
Note
If you specified an instance name when you installed SQL Server or SQL Express, append the server
name with \yourinstancename. If you installed SQL Express without specifying an instance name,
append the server name with \sqlexpress.
Default value
Local
Example
DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS", where WLB-DB-SERVER is the name of your
database server and SQLEXPRESS is the name of the database instance.
Remarks
• Required property for all installations.
• Whether installing a database or connecting to an existing data store, you must specify this property with
DBNAME.
• Even if you are specifying a database on the same computer as you are performing Setup, you still must
define the name of the database.
• When you specify DATABASESERVER, in some circumstances you must also specify WINDOWS_AUTH and its accompanying properties. See the section called “WINDOWS_AUTH” for more information.
DBNAME
Definition
The name of the Workload Balancing database that Setup will create or upgrade during installation.
Possible values
User defined.
Default value
WorkloadBalancing
Remarks
• Required property for all installations. You must set a value for this property.
• Whether connecting to or installing a data store, you must specify this property with DATABASESERVER.
• Even if you are specifying a database on the same computer as you are performing Setup, you still must
define the name of the database.
• Localhost is not a valid value.
DBUSERNAME
Definition
Specifies the user name for the Windows or SQL Server account you are using for database authentication
during Setup.
Possible values
User defined.
Default value
Blank
Remarks
• This property is used with WINDOWS_AUTH (see the section called “WINDOWS_AUTH”) and DBPASSWORD (see the section called “DBPASSWORD”).
• Because you specify the server name and instance with DATABASESERVER (see the section called “DATABASESERVER”), do not qualify the user name.
DBPASSWORD
Definition
Specifies the password for the Windows or SQL Server account you are using for database authentication
during Setup.
Possible values
User defined.
Default value
Blank.
Remarks
• Use this property with the parameters documented in the section called “WINDOWS_AUTH” and the
section called “DBUSERNAME”.
EXPORTCERT
Definition
Set this value to export an SSL/TLS certificate from the server on which you are installing Workload Balancing. Exporting the certificate lets you import it into the certificate stores of computers running XenServer.
Possible values
• 0. Does not export the certificate.
• 1. Exports the certificate and saves it to the location of your choice with the file name you specify using
EXPORTCERT_FQFN.
Default value
0
Remarks
• Use with the section called “EXPORTCERT_FQFN”, which specifies the file name and path.
• Setup does not require this property to run successfully. (That is, you do not have to export the certificate.)
• This property lets you export self-signed certificates that you create during Setup as well as certificates
that you created using a Trusted Authority.
EXPORTCERT_FQFN
Definition
Set to specify the path (location) and the file name you want Setup to use when exporting the certificate.
Possible values
The fully qualified path and file name to which to export the certificate. For example, C:\Certificates\WLBCert.cer.
Default value
Blank.
Remarks
Use this property with the parameter documented in the section called “EXPORTCERT”.
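Combining the two certificate-export properties, a hypothetical command-line fragment might look like the following (the path and file name are placeholders matching the example above):

```
msiexec /i workloadbalancing.msi /qn
  EXPORTCERT="1" EXPORTCERT_FQFN="C:\Certificates\WLBCert.cer"
```

EXPORTCERT="1" exports the certificate, and EXPORTCERT_FQFN specifies where Setup saves it.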
HTTPS_PORT
Definition
Use this property to change the default port over which Workload Balancing (the Web Service Host service)
communicates with XenServer.
Specify this property when you are running Setup on the computer that will host the Web Service Host
service. This may be either the Workload Balancing computer, in a one-server deployment, or the computer
hosting the services.
Possible values
User defined.
Default value
8012
Remarks
• If you set a value other than the default for this property, you must also change the value of this port in
XenServer, which you can do with the Configure Workload Balancing wizard. The port number value
specified during Setup and in the Configure Workload Balancing wizard must match.
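For example, a hypothetical fragment that changes the port during a command-line installation (remember that the same port must then be specified in the Configure Workload Balancing wizard):

```
msiexec /i workloadbalancing.msi /qn HTTPS_PORT="8013"
```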
INSTALLDIR
Definition
Specifies the installation directory, which is the location where the Workload Balancing software is installed.
Possible values
User configurable
Default value
C:\Program Files\Citrix
PREREQUISITES_PASSED
Definition
You must set this property for Setup to continue. When enabled (PREREQUISITES_PASSED = 1), Setup
skips checking preinstallation requirements, such as memory or operating system configurations, and lets
you perform a command-line installation of the server.
Possible values
• 1. Specifies that Setup does not check for preinstallation requirements on the computer on which you are running Setup. You must set this property to 1 or Setup fails.
Default value
0
Remarks
• This is a required value.
RECOVERYMODEL
Definition
Specifies the SQL Server database recovery model.
Possible values
• SIMPLE. Specifies the SQL Server Simple Recovery model. Lets you recover the database from the end
of any backup. Requires the least administration and consumes the lowest amount of disk space.
• FULL. Specifies the Full Recovery model. Lets you recover the database from any point in time. However, this model consumes the largest amount of disk space for its logs.
• BULK_LOGGED. Specifies the Bulk-Logged Recovery model. Lets you recover the database from the end of any backup. This model consumes less logging space than the Full Recovery model but provides more protection for data than the Simple Recovery model.
Default value
SIMPLE
Remarks
For more information about SQL Server recovery models, see Microsoft's MSDN Web site and search for "Selecting a Recovery Model."
USERORGROUPACCOUNT
Definition
Specifies the account or group name that corresponds with the account XenServer will use when it connects
to Workload Balancing. Specifying the name lets Workload Balancing recognize the connection.
Possible values
• User name. Specify the name of the account you created for XenServer (for example,
workloadbalancing_user).
• Group name. Specify the group name for the account you created. Specifying a group name lets more
than one person in your organization log on to Workload Balancing with their own credentials. (Otherwise,
you will have to provide all users with the same set of credentials to use for Workload Balancing.)
Default value
Blank.
Remarks
• This is a required parameter. You must use this parameter with the section called
“WEBSERVICE_USER_CB ”.
• To specify this parameter, you must create an account on the Workload Balancing server before running
Setup. For more information, see the section called “Authorization for Workload Balancing ”.
• This property does not require specifying another property for the password. You do not specify the password until you configure Workload Balancing.
WEBSERVICE_USER_CB
Definition
Specifies the authorization type, user account or group name, for the account you created for XenServer
before Setup. For more information, see the section called “Authorization for Workload Balancing ”.
Possible values
• 0. Specifies that the value you supply with USERORGROUPACCOUNT corresponds with a group.
• 1. Specifies that the value you supply with USERORGROUPACCOUNT corresponds with a user account.
Default value
0
Remarks
• This is a required property. You must use this parameter with the section called “USERORGROUPACCOUNT”.
WINDOWS_AUTH
Definition
Lets you select the authentication mode, either Windows or SQL Server, when connecting to the database
server during Setup. For more information about database authentication during Setup, see SQL Server
Database Authentication Requirements.
Possible values
• 0. SQL Server authentication
• 1. Windows authentication
Default value
1
Remarks
• If you are logged into the server on which you are installing Workload Balancing with Windows credentials
that have an account on the database server, you do not need to set this property.
• If you specify WINDOWS_AUTH, you must also specify DBPASSWORD if you want to use an account other than the one with which you are logged on to the server on which you are running Setup.
• The account you specify must be a login on the SQL Server database with sysadmin privileges.
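Bringing these properties together, a hypothetical complete command-line installation using SQL Server authentication might look like the following. All server names, account names, passwords, and paths are placeholders, and the command is wrapped for readability only:

```
msiexec /i workloadbalancing.msi /qn /l*v C:\wlbinstall.log
  ADDLOCAL="Complete"
  DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS" DBNAME="WorkloadBalancing"
  WINDOWS_AUTH="0" DBUSERNAME="sa" DBPASSWORD="dbpassword"
  USERORGROUPACCOUNT="workloadbalancing_user" WEBSERVICE_USER_CB="1"
  CERT_CHOICE="0" CERTNAMEPICKED="cn=wlb-kirkwood"
  PREREQUISITES_PASSED="1"
```

WINDOWS_AUTH="0" selects SQL Server authentication, so DBUSERNAME and DBPASSWORD are supplied; WEBSERVICE_USER_CB="1" indicates that USERORGROUPACCOUNT names a user account rather than a group.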
Before initializing Workload Balancing, configure your antivirus software to exclude Workload Balancing
folders, as described in the section called “Configuring Antivirus Software”.
After the initial configuration, the Initialize button on the WLB tab changes to a Disable button. This is
because after initialization you cannot modify the Workload Balancing server a resource pool uses without
disabling Workload Balancing on that pool and then reconfiguring it. For information, see the section called
“Reconfiguring a Resource Pool to Use Another WLB Server”.
Important
Following initial configuration, Citrix strongly recommends you evaluate your performance thresholds as
described in the section called “Evaluating the Effectiveness of Your Optimization Thresholds”. It is critical
to set Workload Balancing to the correct thresholds for your environment or its recommendations might
not be appropriate.
You can use the Configure Workload Balancing wizard in XenCenter or the XE commands to initialize
Workload Balancing or modify the configuration settings.
Initialization Overview
Initial configuration requires that you:
1. Specify the Workload Balancing server you want the resource pool to use and its port number.
2. Specify the credentials for communications, including the credentials:
• XenServer will use to connect to the Workload Balancing server
• Workload Balancing will use to connect to XenServer
For more information, see the section called “Authorization for Workload Balancing ”
3. Change the optimization mode, if desired, from Maximum Performance, the default setting, to Maximize
Density. For information about the placement strategies, see the section called “Changing the Placement
Strategy”.
4. Modify performance thresholds, if desired. You can modify the default utilization values and the critical thresholds for resources. For information about the performance thresholds, see the section called “Changing the Performance Thresholds and Metric Weighting”.
5. Modify metric weighting, if desired. You can modify the importance Workload Balancing assigns to metrics when it evaluates resource usage. For information about metric weighting, see the section called “Metric Weighting Factors”.
Before the Workload Balancing feature can begin collecting performance data, the XenServers you want to
balance must be part of a resource pool. To complete this wizard, you need the:
• IP address (or NetBIOS name) and (optionally) port of the Workload Balancing server
• Credentials for the resource pool you want Workload Balancing to monitor
• Credentials for the account you created on the Workload Balancing server
By default, XenServer connects to Workload Balancing (specifically the Web Service Host service)
on port 8012.
Note
Do not edit this port number unless you have changed it during Workload Balancing Setup. The port
number value specified during Setup and in the Configure Workload Balancing wizard must match.
c. Enter the user name (for example, workloadbalancing_user) and password the computers running
XenServer will use to connect to the Workload Balancing server.
This must be the account or group that was configured during the installation of the Workload Balancing
Server. For information, see the section called “Authorization for Workload Balancing ”.
d. Enter the user name and password for the pool you are configuring (typically the password for the pool
master). Workload Balancing will use these credentials to connect to the computers running XenServer
in that pool.
To use the credentials with which you are currently logged into XenServer, select the Use the current
XenCenter credentials check box.
6. On the Basic Configuration page, do the following:
For information, see the section called “Changing the Placement Strategy”.
• If you want to allow placement recommendations that allow more virtual CPUs than a host's physical
CPUs, select the Overcommit CPU check box. For example, by default, if your resource pool has eight
physical CPUs and you have eight virtual machines, XenServer only lets you have one virtual CPU for
each physical CPU. Unless you select Overcommit CPU, XenServer will not let you add a ninth virtual
machine. In general, Citrix does not recommend enabling this option since it can degrade performance.
• If you want to change the number of weeks this historical data should be stored for this resource pool,
type a new value in the Weeks box. This option is not available if the data store is on SQL Server
Express.
7. Do one of the following:
• If you want to modify advanced settings for thresholds and change the priority given to specific resources, click Next and continue with this procedure.
8. On the Critical Thresholds page, accept or enter a new value in the Critical Thresholds boxes.
Workload Balancing uses these thresholds when making virtual-machine placement and pool-optimization recommendations. Workload Balancing strives to keep resource utilization on a host below the critical values set.
Moving the slider towards Less Important indicates that ensuring virtual machines always have the
highest amount of this resource available is not as vital on this resource pool.
For information about adjusting metric weighting, see the section called “Metric Weighting Factors”.
10. Click Finish.
For information, see the section called “Changing the Placement Strategy”.
• If you want to allow placement recommendations that allow more virtual CPUs than a host's physical
CPUs, select the Overcommit CPU check box. For example, by default, if your resource pool has eight
physical CPUs and you have eight virtual machines, XenServer only lets you have one virtual CPU for
each physical CPU. Unless you select Overcommit CPU, XenServer will not let you add a ninth virtual
machine. In general, Citrix does not recommend enabling this option since it can degrade performance.
• If you want to change the number of weeks this historical data should be stored for this resource pool,
type a new value in the Weeks box. This option is not available if the data store is on SQL Server
Express.
6. Do one of the following:
• If you want to modify advanced settings for thresholds and change the priority given to specific resources, click Next and continue with this procedure.
7. On the Critical Thresholds page, accept or enter a new value in the Critical Thresholds boxes.
Workload Balancing uses these thresholds when making virtual-machine placement and pool-optimization recommendations. Workload Balancing strives to keep resource utilization on a host below the critical values set.
For information about adjusting these thresholds, see the section called “Critical Thresholds”.
8. On the Metric Weighting page, adjust the sliders beside the individual resources, if desired.
Moving the slider towards Less Important indicates that ensuring virtual machines always have the
highest amount of this resource available is not as vital on this resource pool.
For information about adjusting metric weighting, see the section called “Metric Weighting Factors”.
9. Click Finish.
• User Account for Workload Balancing to connect to XenServer. Workload Balancing uses a XenServer user account to connect to XenServer. You provide Workload Balancing with this account's credentials when you run the Configure Workload Balancing wizard. Typically, you specify the credentials for the pool (that is, the pool master's credentials).
• User Account for XenServer to Connect to Workload Balancing. XenServer communicates with the
Web Service Host using the user account you created before Setup.
During Workload Balancing Setup, you specified the authorization type (a single user or group) and the
user or group with permissions to make requests from the Web Service Host service.
During configuration, you must provide XenServer with this account's credentials when you run the Configure Workload Balancing wizard.
• Exclude the following folder, which contains the Workload Balancing log:
On Windows XP and Windows Server 2003: %Documents and Settings%\All Users\Application Data\Citrix\Workload Balancing\Data\Logfile.log
These paths may vary according to your operating system and SQL Server version.
Note
These paths and file names are for 32-bit default installations. Use the values that apply to your installation. For example, paths for 64-bit edition files might be in the %Program Files (x86)% folder.
Maximize Performance
(Default.) Workload Balancing attempts to spread workload evenly across all physical hosts in a resource pool. The goal is to minimize CPU, memory, and network pressure for all hosts. When Maximize Performance is your placement strategy, Workload Balancing recommends optimization when a virtual machine reaches the High threshold.
Maximize Density
Workload Balancing attempts to fit as many virtual machines as possible onto a physical host. The goal is
to minimize the number of physical hosts that must be online.
When you select Maximize Density as your placement strategy, you can specify rules similar to the ones
in Maximize Performance. However, Workload Balancing uses these rules to determine how it can pack
virtual machines onto a host. When Maximize Density is your placement strategy, Workload Balancing
recommends optimization when a virtual machine reaches the Critical threshold.
Workload Balancing determines whether to recommend relocating a workload and whether a physical host
is suitable for a virtual-machine workload by evaluating:
Note
To prevent data from appearing artificially high, Workload Balancing evaluates the daily averages for a
resource and smooths utilization spikes.
Critical Thresholds
When evaluating utilization, Workload Balancing compares its daily average to four thresholds: low, medium, high, and critical. After you specify the critical threshold (or accept the default), Workload Balancing sets the other thresholds relative to the critical threshold for the pool.
The effect of the weighting varies according to the placement strategy you selected. For example, if you selected Maximum Performance and set Network Writes towards Less Important, then when the Network Writes on a server exceed the critical threshold you set, Workload Balancing still makes a recommendation to place a virtual machine's workload on that server, but it does so with the goal of ensuring performance for the other resources.
If you selected Maximum Density as your placement recommendation and you specify Network Writes
as Less Important, Workload Balancing will still recommend placing workloads on that host if the Network
Writes exceed the critical threshold you set. However, the workloads are placed in the densest possible way.
Citrix recommends using most of the defaults in the Configure Workload Balancing wizard initially. However, you might need to change the network and disk thresholds to align them with the hardware in your environment.
After Workload Balancing is enabled for a while, Citrix recommends evaluating your performance thresholds
and determining if you need to edit them. For example, consider if you are:
• Getting optimization recommendations when they are not yet required. If this is the case, try adjusting the thresholds until Workload Balancing begins providing suitable optimization recommendations.
• Not getting recommendations when you think your network has insufficient bandwidth. If this is the case, try lowering the network critical thresholds until Workload Balancing begins providing optimization recommendations.
Before you edit your thresholds, you might find it useful to generate a host health history report for each
physical host in the pool. See the section called “Host Health History” for more information.
• Placement strategy you select (that is, the placement optimization mode), as described in the section
called “Changing the Placement Strategy”
• Performance metrics for resources such as a physical host's CPU, memory, network, and disk utilization
The optimization recommendations display the name of the virtual machine that Workload Balancing recommends relocating, the host it currently resides on, and the host Workload Balancing recommends as the machine's new location. The optimization recommendations also display the reason Workload Balancing recommends moving the virtual machine (for example, "CPU" to improve CPU utilization).
After you accept an optimization recommendation, XenServer relocates all virtual machines listed as recommended for optimization.
Tip
You can find out the optimization mode for a resource pool by selecting the pool in XenCenter and checking
the Configuration section of the WLB tab.
After you click Apply Recommendations, XenCenter automatically displays the Logs tab so you can
see the progress of the virtual machine migration.
When you use these features with Workload Balancing enabled, host recommendations appear as star
ratings beside the name of the physical host. Five empty stars indicates the lowest-rated (least optimal)
server. When it is not possible to start or move a virtual machine to a host, an (X) appears beside the host
name with the reason.
2. From the VM menu, select Resume on Server and then select one of the following:
• Optimal Server. The optimal server is the physical host that is best suited to the resource demands
of the virtual machine you are starting. Workload Balancing determines the optimal server based on
its historical records of performance metrics and your placement strategy. The optimal server is the
server with the most stars.
• One of the servers with star ratings listed under the Optimal Server command. Five stars indicates the
most-recommended (optimal) server and five empty stars indicates the least-recommended server.
If an optimal server is not available, the words Click here to suspend the VM appear in the Enter Maintenance Mode dialog box. In this case, Workload Balancing does not recommend a placement because no host has sufficient resources to run this virtual machine. You can either suspend this virtual machine or exit Maintenance Mode and suspend a virtual machine on another host in the same pool. Then, if you reenter the Enter Maintenance Mode dialog box, Workload Balancing might be able to list a host that is a suitable candidate for migration.
Note
When you take a server offline for maintenance and Workload Balancing is enabled, the words "Workload
Balancing" appear in the upper-right corner of the Enter Maintenance Mode dialog box.
To take the server out of maintenance mode, right-click the server and select Exit Maintenance Mode.
When you remove a server from maintenance mode, XenServer automatically restores that server's original
virtual machines to that server.
To generate a Workload Balancing report, you must have installed the Workload Balancing component,
registered at least one resource pool with Workload Balancing, and configured Workload Balancing on at
least one resource pool.
Introduction
Workload Balancing provides reporting on three types of objects: physical hosts, resource pools, and virtual
machines. At a high level, Workload Balancing provides two types of reports:
Workload Balancing provides some reports for auditing purposes, so you can determine, for example, the
number of times a virtual machine moved.
• the section called “Host Health History”. Similar to Pool Health History but filtered by a specific host.
• the section called “Optimization Performance History”. Shows resource usage before and after executing optimization recommendations.
• the section called “Pool Health”. Shows aggregated resource usage for a pool. Helps you evaluate the effectiveness of your optimization thresholds.
• the section called “Pool Health History”. Displays resource usage for a pool over time. Helps you evaluate the effectiveness of your optimization thresholds.
• the section called “Virtual Machine Motion History”. Provides information about how many times virtual machines moved on a resource pool, including the name of the virtual machine that moved, the number of times it moved, and the physical hosts affected.
• the section called “Virtual Machine Performance History”. Displays key performance metrics for all virtual machines that operated on a host during the specified timeframe.
3. Select the Start Date and the End Date for the reporting period. Depending on the report you select, you
might need to specify a host in the Host list box.
4. Click Run Report. The report displays in the report window.
Document Map. Lets you display a document map that helps you navigate
through long reports.
Page Forward/Back. Lets you move one page ahead or back in the report.
Back to Parent Report. Lets you return to the parent report when working with
drill-through reports.
Print. Lets you print a report and specify general printing options, such as the
printer, the number of pages, and the number of copies.
Print Layout. Lets you display a preview of the report before you print it.
Page Setup. Lets you specify printing options such as the paper size, page orientation, and margins.
Export. Lets you export the report as an Acrobat (.PDF) file or as an Excel file
with a .XLS extension.
Find. Lets you search for a word in a report, such as the name of a virtual machine.
Page Setup.
Page Setup also lets you control the margins and paper size.
2. In the Page Setup dialog, select Landscape and click OK.
3. (Optional.) If you want to preview the print job, click Print Layout.
4. Click Print.
You can export a report in Microsoft Excel and Adobe Acrobat (PDF) formats.
• Excel
• Acrobat (PDF) file
• the section called “Host Health History”. Similar to Pool Health History but filtered by a specific host.
• the section called “Optimization Performance History”. Shows resource usage before and after executing optimization recommendations.
• the section called “Pool Health”. Shows aggregated resource usage for a pool. Helps you evaluate the effectiveness of your optimization thresholds.
• the section called “Pool Health History”. Displays resource usage for a pool over time. Helps you evaluate the effectiveness of your optimization thresholds.
• the section called “Virtual Machine Motion History”. Provides information about how many times virtual machines moved on a resource pool, including the name of the virtual machine that moved, the number of times it moved, and the physical hosts affected.
• the section called “Virtual Machine Performance History”. Displays key performance metrics for all virtual machines that operated on a host during the specified timeframe.
Toolbar Buttons
The following toolbar buttons in the Workload Reports window become available after you generate a report. To display the name of a toolbar button, hold your mouse over the toolbar icon.
Document Map. Lets you display a document map that helps you navigate
through long reports.
Page Forward/Back. Lets you move one page ahead or back in the report.
Back to Parent Report. Lets you return to the parent report when working with
drill-through reports.
Print. Lets you print a report and specify general printing options, such as the
printer, the number of pages, and the number of copies.
Print Layout. Lets you display a preview of the report before you print it.
Page Setup. Lets you specify printing options such as the paper size, page orientation, and margins.
Export. Lets you export the report as an Acrobat (.PDF) file or as an Excel file
with a .XLS extension.
Find. Lets you search for a word in a report, such as the name of a virtual machine.
The colored lines (red, green, yellow) represent your threshold values. You can use this report with the
Pool Health report for a host to determine how a particular host's performance might be affecting overall
pool health. When you are editing the performance thresholds, you can use this report for insight into host
performance.
You can display resource utilization as a daily or hourly average. The hourly average lets you see the busiest
hours of the day, averaged, for the time period.
To view report data grouped by hour, expand + Click to view report data grouped by hour for the time period under the Host Health History title bar.
Workload Balancing displays the average for each hour for the time period you set. The data point is based on a utilization average for that hour for all days in the time period. For example, in a report for May 1, 2009 to May 15, 2009, the Average CPU Usage data point represents the resource utilization of all fifteen days at 12:00 hours combined together as an average. That is, if CPU utilization was 82% at 12PM on May 1st, 88% at 12PM on May 2nd, and 75% on all other days, the average displayed for 12PM is 76.3%.
Note
Workload Balancing smooths spikes and peaks so data does not appear artificially high.
The dotted line represents the average usage across the pool over the period of days you select. A blue
bar indicates the day on which you optimized the pool.
This report can help you determine if Workload Balancing is working successfully in your environment. You
can use this report to see what led up to optimization events (that is, the resource usage before Workload
Balancing recommended optimizing).
This report displays average resource usage for the day; it does not display the peak utilization, such as
when the system is stressed. You can also use this report to see how a resource pool is performing if
Workload Balancing is not making optimization recommendations.
In general, resource usage should decline or be steady after an optimization event. If you do not see improved resource usage after optimization, consider readjusting threshold values. Also, consider whether or
not the resource pool has too many virtual machines and whether or not new virtual machines were added
or removed during the timeframe you specified.
Pool Health
The pool health report displays the percentage of time a resource pool and its hosts spent in four different
threshold ranges: Critical, High, Medium, and Low. You can use the Pool Health report to evaluate the
effectiveness of your performance thresholds.
• Resource utilization in the Average Medium Threshold (blue) is the optimum resource utilization regard-
less of the placement strategy you selected. Likewise, the blue section on the pie chart indicates the
amount of time that host used resources optimally.
• Resource utilization in the Average Low Threshold Percent (green) is not necessarily positive. Whether
Low resource utilization is positive depends on your placement strategy. For example, if your placement
strategy is Maximum Density and most of the time your resource usage was green, Workload Balancing
might not be fitting the maximum number of virtual machines possible on that host or pool. If this is the
case, you should adjust your performance threshold values until the majority of your resource utilization
falls into the Average Medium (blue) threshold range.
• Resource utilization in the Average Critical Threshold Percent (red) indicates the amount of time average
resource utilization met or exceeded the Critical threshold value.
If you double-click on a pie chart for a host's resource usage, XenCenter displays the Host Health History
report for that resource (for example, CPU) on that host. Clicking the Back to Parent Report toolbar button
returns you to the Pool Health report.
If you find the majority of your report results are not in the Average Medium Threshold range, you proba-
bly need to adjust the Critical threshold for this pool. While Workload Balancing provides default threshold
settings, these defaults are not effective in all environments. If you do not have the thresholds adjusted to
the correct level for your environment, Workload Balancing's optimization and placement recommendations
might not be appropriate. For more information, see the section called “Changing the Performance Thresh-
olds and Metric Weighting”.
Note
The High, Medium, and Low threshold ranges are based on the Critical threshold value you set when
you initialized Workload Balancing.
Pool Health History
Workload Balancing extrapolates the threshold ranges from the values you set for the Critical thresholds
when you initialized Workload Balancing. Although similar to the Pool Health report, the Pool Health History
report displays the average utilization for a resource on a specific date rather than the overall amount of
time the resource spent in a threshold.
With the exception of the Average Free Memory graph, the data points should never average above the
Critical threshold line (red). For the Average Free Memory graph, the data points should never average
below the Critical threshold line (which is at the bottom of the graph). Because this graph displays free
memory, the Critical threshold is a low value, unlike the other resources.
• When the Average Usage line in the chart approaches the Average Medium Threshold (blue) line, it
indicates the pool's resource utilization is optimum regardless of the placement strategy configured.
• Resource utilization approaching the Average Low Threshold (green) is not necessarily positive. Whether
Low resource utilization is positive depends on your placement strategy. For example, if your placement
strategy is Maximum Density and most days the Average Usage line is at or below the green line, Workload
Balancing might not be placing virtual machines as densely as possible on that pool. If this is the case,
you should adjust the pool's Critical threshold values until the majority of its resource utilization falls into
the Average Medium (blue) threshold range.
• When the Average Usage line intersects with the Average Critical Threshold Percent (red), this indicates
the days when the average resource utilization met or exceeded the Critical threshold value for that
resource.
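The inverted sense of the free-memory threshold can be summarized in a small sketch; this is illustrative only, not how Workload Balancing evaluates thresholds internally:

```shell
# Most metrics breach the Critical threshold by rising ABOVE it; free memory,
# where low values are bad, breaches by falling BELOW its (low) threshold.
breaches() {  # usage: breaches <metric> <value> <critical_threshold>
  if [ "$1" = "free_memory" ]; then
    [ "$2" -lt "$3" ] && echo "breach" || echo "ok"
  else
    [ "$2" -gt "$3" ] && echo "breach" || echo "ok"
  fi
}
breaches cpu_usage 95 90        # prints: breach
breaches free_memory 256 512    # prints: breach
breaches free_memory 2048 512   # prints: ok
```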
If you find the data points in the majority of your graphs are not in the Average Medium Threshold range,
but you are satisfied with the performance of this pool, you might need to adjust the Critical threshold for
this pool. For more information, see the section called “Changing the Performance Thresholds and Metric
Weighting”.
machine moved. This report also indicates the reason for the optimization. You can use this report to audit
the number of moves on a pool.
• The numbers on the left side of the chart correspond with the number of moves possible, which is based
on how many virtual machines are in a resource pool.
• You can look at details of the moves on a specific date by expanding the + sign in the Date section of
the report.
The initial view of the report displays an average value for resource utilization over the period you specified.
Expanding the + sign displays line graphs for individual resources. You can use these graphs to see trends
in resource utilization over time.
This report displays data for CPU Usage, Free Memory, Network Reads/Writes, and Disk Reads/Writes.
• Temporarily. Disabling Workload Balancing temporarily stops XenCenter from displaying recommenda-
tions for the specified resource pool. When you disable Workload Balancing temporarily, data collection
stops for that resource pool.
• Permanently. Disabling Workload Balancing permanently deletes information about the specified re-
source pool from the data store and stops data collection for that pool.
1. In the Resource pane of XenCenter, select the resource pool for which you want to disable Workload
Balancing.
2. In the WLB tab, click Disable WLB. A dialog box appears asking if you want to disable Workload Bal-
ancing for the pool.
3. Click Yes to disable Workload Balancing for the pool.
Important
If you want to disable Workload Balancing permanently for this resource pool, select the Remove all
resource pool information from the Workload Balancing Server check box.
XenServer disables Workload Balancing for the resource pool, either temporarily or permanently depending
on your selections.
• If you disabled Workload Balancing temporarily on a resource pool, to reenable Workload Balancing, click
Enable WLB in the WLB tab.
• If you disabled Workload Balancing permanently on a resource pool, to reenable it, you must reinitialize
it. For information, see To initialize Workload Balancing.
1. On the resource pool you want to point to a different Workload Balancing server, disable Workload
Balancing permanently. Doing so deletes the resource pool's information from the data store and stops
data collection. For instructions, see the section called “Disabling Workload Balancing on a
Resource Pool”.
2. In the Resource pane of XenCenter, select the resource pool for which you want to reenable Workload
Balancing.
3. In the WLB tab, click Initialize WLB. The Configure Workload Balancing wizard appears.
4. Reinitialize the resource pool and specify the new server's credentials in the Configure Workload Bal-
ancing wizard. You must provide the same information as you do when you initially configure a resource
pool for use with Workload Balancing. For information, see the section called “To initialize Workload Bal-
ancing”.
When you uninstall Workload Balancing, only the Workload Balancing software is removed from the Work-
load Balancing server. The data store remains on the system running SQL Server. To remove a Workload
Balancing data store, you must use the SQL Server Management Studio (SQL Server 2005 and SQL Server
2008).
If you want to uninstall both Workload Balancing and SQL Server from your computer, uninstall Workload
Balancing first and then delete the database using the SQL Server Management Studio.
The data directory, usually located at %Documents and Settings%\All Users\Application Data\Citrix\Work-
load Balancing\Data, is not removed when you uninstall Workload Balancing. You can remove the contents
of the data directory manually.
Here are a few tips for resolving general Workload Balancing issues:
• Review the Workload Balancing log, which is stored in these locations (by default):
• Windows Server 2003 and Windows XP: %Documents and Settings%\All Users\Application Data\Cit-
rix\Workload Balancing\Data\LogFile.log
• Windows Server 2008 and Windows Vista: %Users%\All Users\Citrix\Workload Balancing\Da-
ta\LogFile.log
• Check the logs in XenCenter's Logs tab for more information.
• If you receive an error message, review the XenCenter log, which is stored in these locations (by default):
• Windows Server 2003 and Windows XP: %Documents and Settings%\yourusername\Application Da-
ta\Citrix\XenCenter\logs\XenCenter.log
• Windows Server 2008 and Windows Vista: %Users%\<current_logged_on_user>\AppDa-
ta\Roaming\Citrix\XenCenter\logs\XenCenter.log
Error Messages
Workload Balancing displays error messages in the Log tab in XenCenter, in the Windows Event log, and,
in some cases, on screen as dialog boxes.
The location of the installation log varies depending on whether you installed Workload Balancing using the
command-line installation or the Setup wizard. If you used the Setup wizard, the log is at %Documents and
Settings%\username\Local Settings\Temp\msibootstrapper2CSM_MSI_Install.log (by default).
Tip
When troubleshooting installations using installation logs, note that the log file is overwritten each time you
install. You might want to manually copy the installation logs to a separate directory so that you can compare them.
For common installation and Msiexec errors, try searching the Citrix Knowledge Center and the Internet.
To verify that you installed Workload Balancing successfully, see the section called “To verify your Workload
Balancing installation”.
• Make sure that Workload Balancing installed correctly and all of its services are running. See the section
called “To verify your Workload Balancing installation”.
• Using the section called “Issues Starting Workload Balancing” as a guide, check to make sure you are
entering the correct credentials.
• You can enter a computer name in the WLB server name box, but it must be a fully qualified domain
name (FQDN). For example, yourcomputername.yourdomain.net. If you are having trouble entering
a computer name, try using the Workload Balancing server's IP address instead.
• Verify that the credentials you entered in the Configure Workload Balancing wizard match the credentials:
• You created on the Workload Balancing server
• On XenServer
• Verify that the IP address or NetBIOS name of the Workload Balancing server you entered in the Configure
Workload Balancing wizard is correct.
• Verify that the user or group name you entered during Setup matches the credentials you created on the
Workload Balancing server. To check what user or group name you entered, open the install log (search
for log.txt) and search for userorgroupaccount.
Click the Configure button on the WLB tab and reenter the server credentials.
Typical causes for this error include changing the server credentials or inadvertently deleting the Workload
Balancing user account.
To solve this problem, you can either uninstall the old Workload Balancing server or manually stop the
Workload Balancing services (analysis, data collector, and Web service) so that the server no longer
monitors the pool.
Citrix does not recommend using the pool-initialize-wlb XE command to deconfigure or change Workload
Balancing servers.
Chapter 6. Backup and recovery
This chapter presents the functionality designed to give you the best chance to recover your XenServer
from a catastrophic failure of hardware or software, from lightweight metadata backups to full VM backups
and portable SRs.
Backups
Citrix recommends that you frequently perform as many of the following backup procedures as possible to
recover from possible server and/or software failure.
xe pool-dump-database file-name=<backup>
To restore the database, use the xe pool-restore-database command. The restore checks that the target
machine has an appropriate number of appropriately named NICs, which is required for the restore to succeed.
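As an illustration only, such a dump could be scheduled from the control domain with cron; the path, schedule, and weekday-rotation scheme below are assumptions, not part of the product:

```shell
# Illustrative /etc/cron.d entry: dump the pool database nightly at 01:00,
# one file per weekday, so seven backups rotate automatically
# (note: % must be escaped as \% inside a crontab line)
0 1 * * *  root  xe pool-dump-database file-name=/var/backup/pool-db-$(date +\%u).dump
```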
To backup a VM
Note
This backup also backs up all of the VM's data. When importing a VM, you can specify the storage
mechanism to use for the backed up data.
Warning
Because this process backs up all of the VM data, it can take some time to complete.
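This backup relies on exporting the VM; a typical export/import pair looks like the following, where the VM name, file name, and SR UUID are placeholders:

```shell
xe vm-export vm=<vm_name> filename=<backup_file.xva>
# On import, sr-uuid selects the storage for the backed up disk data
xe vm-import filename=<backup_file.xva> sr-uuid=<sr_uuid>
```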
Using portable SRs has similar constraints to XenMotion as both cases result in VMs being moved between
hosts. To use portable SRs:
• The source and destination hosts must have the same CPU type and networking configuration. The
destination host must have a network of the same name as the one of the source host.
• The SR media itself, such as a LUN for iSCSI and FibreChannel SRs, must be able to be moved, re-
mapped, or replicated between the source and destination hosts
• If using tiered storage, where a VM has VDIs on multiple SRs, all required SRs must be moved to the
destination host or pool
• Any configuration data required to connect the SR on the destination host or pool, such as the target IP
address, target IQN, and LUN SCSI ID for iSCSI SRs, and the LUN SCSI ID for FibreChannel SRs, must
be maintained manually
• The backup metadata option must be configured for the desired SR
Note
When moving portable SRs between pools the source and destination pools are not required to have
the same number of hosts. Moving portable SRs between pools and standalone hosts is also supported
provided the above constraints are met.
Portable SRs work by creating a dedicated metadata VDI within the specified SR. The metadata VDI is used
to store copies of the pool or host database as well as the metadata describing the configuration of each
VM. As a result the SR becomes fully self-contained, or portable, allowing it to be detached from one host
and attached to another as a new SR. Once the SR is attached a restore process is used to recreate all of
the VMs on the SR from the metadata VDI. For disaster recovery the metadata backup can be scheduled
to run regularly to ensure the metadata SR is current.
The metadata backup and restore feature works at the command-line level and the same functionality is
also supported in xsconsole. It is not currently available through XenCenter.
In the menu-driven text console on the XenServer host, there are some menu items under the Backup,
Update and Restore menu which provide more user-friendly interfaces to these scripts. The operations
should only be performed on the pool master. You can use these menu items to perform 3 operations:
• Schedule a regular metadata backup to the default pool SR, either daily, weekly or monthly. This will
regularly rotate metadata backups and ensure that the latest metadata is present for that SR without any
user intervention being required.
• Trigger an immediate metadata backup to the SR of your choice. This will create a backup VDI if
necessary, attach it to the host, and back up all the metadata to that SR. Use this option if you have made
some changes which you want to see reflected in the backup immediately.
• Perform a metadata restoration operation. This will prompt you to choose an SR to restore from, and then
the option of restoring only VM records associated with that SR, or all the VM records found (potentially
from other SRs which were present at the time of the backup). There is also a dry run option to see which
VMs would be imported without actually performing the operation.
For automating these operations, there are some commands in the control domain which provide an interface
to metadata backup and restore at a lower level than the menu options:
• xe-backup-metadata provides an interface to create the backup VDIs (with the -c flag), and also to attach
the metadata backup and examine its contents.
• xe-restore-metadata can be used to probe for a backup VDI on a newly attached SR, and also selectively
reimport VM metadata to recreate the associations between VMs and their disks.
Full usage information for both scripts can be obtained by running them in the control domain using the -h
flag. One particularly useful invocation mode is xe-backup-metadata -d which mounts the backup VDI into
dom0, and drops into a sub-shell with the backup directory so it can be examined.
1. On the source host or pool, in xsconsole, select the Backup, Restore, and Update menu option, select
the Backup Virtual Machine Metadata option, and then select the desired SR.
2. In XenCenter, select the source host or pool and shutdown all running VMs with VDIs on the SR to
be moved.
3. In the tree view select the SR to be moved and select Storage > Detach Storage Repository. The
Detach Storage Repository menu option will not be displayed if there are running VMs with VDIs on
the selected SR. After being detached the SR will be displayed in a grayed-out state.
Warning
Do not complete this step unless you have created a backup VDI in step 1.
4. Select Storage > Forget Storage Repository to remove the SR record from the host or pool.
5. Select the destination host in the tree view and select Storage > New Storage Repository.
6. Create a new SR with the appropriate parameters required to reconnect the existing SR to the destination
host. In the case of moving an SR between pools or hosts within a site, the parameters may be
identical to the source pool.
7. Every time a new SR is created the storage is checked to see if it contains an existing SR. If so, an option
is presented allowing re-attachment of the existing SR. If this option is not displayed the parameters
specified during SR creation are not correct.
8. Select Reattach.
9. Select the new SR in the tree view and then select the Storage tab to view the existing VDIs present
on the SR.
10. In xsconsole on the destination host, select the Backup, Restore, and Update menu option, select the
Restore Virtual Machine Metadata option, and select the newly re-attached SR.
11. The VDIs on the selected SR are inspected to find the metadata VDI. Once found, select the metadata
backup you want to use.
12. Select the Only VMs on this SR option to restore the VMs.
Note
Use the All VM Metadata option when moving multiple SRs between hosts or pools, or when using tiered
storage where VMs to be restored have VDIs on multiple SRs. When using this option ensure all required
SRs have been reattached to the destination host prior to running the restore.
13. The VMs are restored in the destination pool in a shutdown state and are available for use.
Using portable SRs with storage layer replication between sites to enable the DR site in
case of disaster
1. Any storage layer configuration required to enable the mirror or replica LUN in the DR site is
performed.
2. An SR is created for each LUN in the DR site.
3. VMs are restored from metadata on one or more SRs.
4. Any adjustments to VM configuration required by differences in the DR site, such as IP addressing,
are performed.
5. VMs are started and verified.
6. Traffic is routed to the VMs in the DR site.
VM Snapshots
XenServer provides a convenient snapshotting mechanism that can take a snapshot of a VM's storage and
metadata at a given time. Where necessary, IO is temporarily halted while the snapshot is being taken to
ensure that a self-consistent disk image can be captured.
Snapshot operations result in a snapshot VM that is similar to a template. The VM snapshot contains all
the storage information and VM configuration, including attached VIFs, allowing them to be exported and
restored for backup purposes.
Regular Snapshots
Regular snapshots are crash consistent and can be performed on all VM types, including Linux VMs.
Quiesced Snapshots
Quiesced snapshots take advantage of the Windows Volume Shadow Copy Service (VSS) to generate ap-
plication consistent point-in-time snapshots. The VSS framework helps VSS-aware applications (for exam-
ple Microsoft Exchange or Microsoft SQL Server) flush data to disk and prepare for the snapshot before
it is taken.
Quiesced snapshots are therefore safer to restore, but can have a greater performance impact on a system
while they are being taken. They may also fail under load so more than one attempt to take the snapshot
may be required.
XenServer supports quiesced snapshots on Windows Server 2003 and Windows Server 2008 for both 32-
bit and 64-bit variants. Windows 2000, Windows XP and Windows Vista are not supported. Snapshot is
supported on all storage types, though for the LVM-based storage types the storage repository must have
been upgraded if it was created on a previous version of XenServer and the volume must be in the default
format (type=raw volumes cannot be snapshotted).
Note
Using EqualLogic or NetApp storage requires a Citrix Essentials for XenServer license. To learn more
about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website.
Note
Do not forget to install the Xen VSS provider in the Windows guest in order to support VSS. This is done
using the install-XenProvider.cmd script provided with the Windows PV drivers. More details can
be found in the Virtual Machine Installation Guide in the Windows section.
In general, a VM can only access VDI snapshots (not VDI clones) of itself using the VSS interface. There is
a flag that can be set by the XenServer administrator whereby adding an attribute of snapmanager=true
to the VM's other-config allows that VM to import snapshots of VDIs from other VMs.
Warning
This opens a security vulnerability and should be used with care. This feature allows an administrator
to attach VSS snapshots using an in-guest transportable snapshot ID as generated by the VSS layer to
another VM for the purposes of backup.
VSS quiesce timeout: the Microsoft VSS quiesce period is set to a non-configurable value of 10 seconds,
and it is quite probable that a snapshot may not be able to complete in time. If, for example the XAPI daemon
has queued additional blocking tasks such as an SR scan, the VSS snapshot may timeout and fail. The
operation should be retried if this happens.
Note
The more VBDs attached to a VM, the more likely it is that this timeout may be reached. Citrix recommends
attaching no more than 2 VBDs to a VM to avoid reaching the timeout. However, there is a workaround to
this problem: the probability of taking a successful VSS-based snapshot of a VM with more than 2 VBDs
can be increased manifold if all the VDIs for the VM are hosted on different SRs.
VSS snapshot all the disks attached to a VM: in order to store all data available at the time of a VSS
snapshot, the XAPI manager will snapshot all disks and the VM metadata associated with a VM that can
be snapshotted using the XenServer storage manager API. If the VSS layer requests a snapshot of only a
subset of the disks, a full VM snapshot will not be taken.
vm-snapshot-with-quiesce produces bootable snapshot VM images: To achieve this end, the XenServer
VSS hardware provider makes snapshot volumes writable, including the snapshot of the boot volume.
VSS snap of volumes hosted on dynamic disks in the Windows Guest: The vm-snapshot-with-quiesce CLI
and the XenServer VSS hardware provider do not support snapshots of volumes hosted on dynamic disks
on the Windows VM.
Taking a VM snapshot
Before taking a snapshot, see the operating system-specific sections of this guide for information about any
special configuration and considerations to take into account.
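From the CLI, a snapshot is taken with the vm-snapshot command, or vm-snapshot-with-quiesce for a quiesced snapshot; the names below are placeholders:

```shell
xe vm-snapshot vm=<vm_name> new-name-label=<snapshot_name>
xe vm-snapshot-with-quiesce vm=<vm_name> new-name-label=<snapshot_name>
```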
VM Rollback
Restoring a VM to snapshot state
xe vm-list                          # list VMs to find the UUID of the one to roll back
xe vm-shutdown uuid=<vm_uuid>       # shut down the running VM
xe vm-destroy uuid=<vm_uuid>        # destroy the VM (its snapshot is unaffected)
xe vm-start name-label=<vm_name>    # start the VM with the given name-label
Member failures
In the absence of HA, a master node detects the failure of a member by the absence of its regular heartbeat
messages. If no heartbeat has been received for 200 seconds, the master assumes the member is dead.
There are two ways to recover from this problem:
• Repair the dead host (e.g. by physically rebooting it). When the connection to the member is restored,
the master will mark the member as alive again.
• Shutdown the host and instruct the master to forget about the member node using the xe host-forget CLI
command. Once the member has been forgotten, all the VMs which were running there will be marked
as offline and can be restarted on other XenServer hosts. Note it is very important to ensure that the
XenServer host is actually offline, otherwise VM data corruption might occur. Be careful not to split your
pool into multiple pools of a single host by using xe host-forget, since this could result in them all mapping
the same shared storage and corrupting VM data.
Warning
• If you are going to use the forgotten host as a XenServer host again, perform a fresh installation of
the XenServer software.
• Do not use xe host-forget command if HA is enabled on the pool. Disable HA first, then forget the
host, and then reenable HA.
When a member XenServer host fails, there may be VMs still registered in the running state. If you are sure
that the member XenServer host is definitely down, and that the VMs have not been brought up on another
XenServer host in the pool, use the xe vm-reset-powerstate CLI command to set the power state of the
VMs to halted. See the section called “vm-reset-powerstate” for more details.
Warning
Incorrect use of this command can lead to data corruption. Only use this command if absolutely necessary.
Master failures
Every member of a resource pool contains all the information necessary to take over the role of master if
required. When a master node fails, the following sequence of events occurs:
1. The members realize that communication has been lost and each tries to reconnect for sixty seconds.
2. Each member then puts itself into emergency mode, whereby the member XenServer hosts will now
accept only the pool-emergency commands (xe pool-emergency-reset-master and xe pool-emergen-
cy-transition-to-master).
If the master comes back up at this point, it re-establishes communication with its members, the members
leave emergency mode, and operation returns to normal.
If the master is really dead, choose one of the members and run the command xe pool-emergency-tran-
sition-to-master on it. Once it has become the master, run the command xe pool-recover-slaves and the
members will now point to the new master.
If you repair or replace the server that was the original master, you can simply bring it up, install the
XenServer host software, and add it to the pool. Since the XenServer hosts in the pool are required to be
homogeneous, there is no real need to make the replaced server the master.
When a member XenServer host is transitioned to being a master, you should also check that the default
pool storage repository is set to an appropriate value. This can be done using the xe pool-param-list
command and verifying that the default-SR parameter is pointing to a valid storage repository.
Pool failures
In the unfortunate event that your entire resource pool fails, you will need to recreate the pool database from
scratch. Be sure to regularly back up your pool-metadata using the xe pool-dump-database CLI command
(see the section called “pool-dump-database”).
Warning
Any VMs which were running on a previous member (or the previous host) which has failed will still be
marked as Running in the database. This is for safety -- simultaneously starting a VM on two different
hosts would lead to severe disk corruption. If you are sure that the machines (and VMs) are offline, you
can reset the VM power state to Halted using the xe vm-reset-powerstate command.
xe pool-emergency-transition-to-master        # promote a surviving member to master
xe pool-recover-slaves                        # point the remaining members at the new master
xe pool-restore-database file-name=<backup>   # restore the pool database from your backup
Warning
This command will only succeed if the target machine has an appropriate number of appropriately named
NICs.
2. If the target machine has a different view of the storage (for example, a block-mirror with a different IP
address) than the original machine, modify the storage configuration using the pbd-destroy command
and then the pbd-create command to recreate storage configurations. See the section called “PBD
commands” for documentation of these commands.
3. If you have created a new storage configuration, use pbd-plug or the Storage > Repair Storage
Repository menu item in XenCenter to use the new configuration.
4. Restart all VMs.
This command will attempt to restore the VM metadata on a 'best effort' basis.
3. Restart all VMs.
Chapter 7. Monitoring and managing
XenServer
XenServer and XenCenter provide access to alerts that are generated when noteworthy things happen. Xen-
Center provides various mechanisms of grouping and maintaining metadata about managed VMs, hosts,
storage repositories, and so on.
Note
Full monitoring and alerting functionality is only available with a Citrix Essentials for XenServer license.
To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website.
Alerts
XenServer generates alerts for the following events.
Configurable Alerts:
Alert                   Description
XenCenter old           XenServer expects a newer version but can still connect to the current version
XenServer out of date   XenServer is an old version that the current XenCenter cannot connect to
Missing IQN alert       XenServer uses iSCSI storage but the host IQN is blank
Duplicate IQN alert     XenServer uses iSCSI storage, and there are duplicate host IQNs
• ha_host_failed
• ha_host_was_fenced
• ha_network_bonding_error
• ha_pool_drop_in_plan_exists_for
• ha_pool_overcommitted
• ha_protected_vm_restart_failed
• ha_statefile_lost
• host_clock_skew_detected
• host_sync_data_failed
• license_does_not_support_pooling
• pbd_plug_failed_on_server_start
• pool_master_transition
The following alerts appear on the performance graphs in XenCenter. See the XenCenter online help for
more information:
• vm_cloned
• vm_crashed
• vm_rebooted
• vm_resumed
• vm_shutdown
• vm_started
• vm_suspended
Customizing Alerts
Note
Most alerts are only available in a pool with a Citrix Essentials for XenServer license. To learn more
about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website.
The performance monitoring daemon, perfmon, runs once every 5 minutes and requests updates from
XenServer which are averages over 1 minute. These defaults can be changed in /etc/sysconfig/perfmon.
Every 5 minutes perfmon reads updates of performance variables exported by the XAPI instance run-
ning on the same host. These variables are separated into one group relating to the host itself, and a
group for each VM running on that host. For each VM and also for the host, perfmon reads in the oth-
er-config:perfmon parameter and uses this string to determine which variables it should monitor, and
under which circumstances to generate a message.
<config>
<variable>
<name value="cpu_usage"/>
<alarm_trigger_level value="LEVEL"/>
</variable>
<variable>
<name value="network_usage"/>
<alarm_trigger_level value="LEVEL"/>
</variable>
</config>
Valid VM Elements
name
what to call the variable (no default). If the name value is one of cpu_usage, network_usage, or
disk_usage the rrd_regex and alarm_trigger_sense parameters are not required as defaults
for these values will be used.
alarm_priority
the priority of the messages generated (default 5)
alarm_trigger_level
level of value that triggers an alarm (no default)
alarm_trigger_sense
high if alarm_trigger_level is a maximum value otherwise low if the alarm_trigger_level
is a minimum value. (default high)
alarm_trigger_period
number of seconds that values above or below the alarm threshold can be received before an alarm
is sent (default 60)
alarm_auto_inhibit_period
number of seconds this alarm is disabled for after an alarm is sent (default 3600)
consolidation_fn
how to combine variables from rrd_updates into one value (default is sum; the other choice is average)
rrd_regex
regular expression to match the names of variables returned by the xe vm-data-source-list
uuid=<vmuuid> command that should be used to compute the statistical value. This parameter has
defaults for the named variables cpu_usage, network_usage, and disk_usage. If specified, the
values of all items returned by xe vm-data-source-list whose names match the specified regular ex-
pression will be consolidated using the method specified as the consolidation_fn.
Note
Email alerts are only available in a pool with a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website.
Alerts generated from XenServer can also be automatically e-mailed to the resource pool administrator, in
addition to being visible from the XenCenter GUI. To configure this, specify the email address and SMTP
server:
pool:other-config:mail-destination=<joe.bloggs@domain.tld>
pool:other-config:ssmtp-mailhub=<smtp.domain.tld[:port]>
You can also specify the minimum value of the priority field in the message before the email will be sent:
pool:other-config:mail-min-priority=<level>
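These pool keys can be set with the xe pool-param-set command; the values below are illustrative placeholders, not part of the original text:

```shell
# Illustrative values; substitute your own pool UUID, destination
# address, SMTP host, and minimum priority level.
xe pool-param-set uuid=<pool_uuid> other-config:mail-destination=joe.bloggs@domain.tld
xe pool-param-set uuid=<pool_uuid> other-config:ssmtp-mailhub=smtp.domain.tld:25
xe pool-param-set uuid=<pool_uuid> other-config:mail-min-priority=2
```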
Note
Some SMTP servers only forward mails with addresses that use FQDNs. If you find that emails are not being forwarded, it may be for this reason; in that case, you can set the server hostname to the FQDN so that it is used when connecting to your mail server.
Custom Searches
XenCenter supports the creation of customized searches. Searches can be exported and imported, and
the results of a search can be displayed in the navigation pane. See the XenCenter online help for more
information.
For iSCSI and NFS storage, check your network statistics to determine if there is a throughput bottleneck
at the array, or whether the PBD is saturated.
Chapter 8. Command line interface
This chapter describes the XenServer command line interface (CLI). The xe CLI enables the writing of
scripts for automating system administration tasks and allows integration of XenServer into an existing IT
infrastructure.
The xe command line interface is installed by default on XenServer hosts and is included with XenCenter. A stand-alone remote CLI is also available for Linux.

On Windows, the xe.exe executable is installed along with XenCenter. To use it, open a Windows Command Prompt and change directories to the directory where the file resides (typically C:\Program Files\XenSource\XenCenter), or add its installation location to your system path.
On Linux, you can install the stand-alone xe CLI executable from the RPM named xe-cli-5.5.0-15119p.i386.rpm on the Linux Pack CD, as follows:

rpm -ivh xe-cli-5.5.0-15119p.i386.rpm

Basic help is available for CLI commands on-host by typing:

xe help command

To display a list of the most commonly-used xe commands, type:

xe help

or a list of all xe commands, type:

xe help --all
Basic xe syntax
The basic syntax of all XenServer xe CLI commands is:

xe <command-name> <argument=value>... [<flag>...]
Each specific command contains its own set of arguments that are of the form argument=value. Some
commands have required arguments, and most have some set of optional arguments. Typically a command
will assume default values for some of the optional arguments when invoked without them.
If the xe command is executed remotely, additional connection and authentication arguments are used.
These arguments also take the form argument=argument_value.
The server argument is used to specify the hostname or IP address. The username and password
arguments are used to specify credentials. A password-file argument can be specified instead of the
password directly. In this case an attempt is made to read the password from the specified file (stripping CRs
and LFs off the end of the file if necessary), and use that to connect. This is more secure than specifying
the password directly at the command line.
The optional port argument can be used to specify the agent port on the remote XenServer host (defaults
to 443).
Shorthand syntax is also available for the remote connection arguments:

-u     username
-pw    password
-pwf   password file
-p     port
-s     server

For example: xe vm-list -u <username> -pw <password> -s <remote_host>
Arguments are also taken from the environment variable XE_EXTRA_ARGS, in the form of comma-separated
key/value pairs. For example, in order to enter commands on one XenServer host that are run on a remote
XenServer host, you could do the following:
export XE_EXTRA_ARGS="server=jeffbeck,port=443,username=root,password=pass"
and thereafter you would not need to specify the remote XenServer host parameters in each xe command
you execute.
Using the XE_EXTRA_ARGS environment variable also enables tab completion of xe commands when issued
against a remote XenServer host, which is disabled by default.
Arguments can be specified on the command line as:

argument=value

without quotes, as long as value does not have any spaces in it. There should be no whitespace in between the argument name, the equals sign (=), and the value. Any argument not conforming to this format will be ignored.
If you use the CLI while logged into a XenServer host, commands have a tab completion feature similar to
that in the standard Linux bash shell. If you type, for example
xe vm-l
and then press the TAB key, the rest of the command will be displayed when it is unambiguous. If more
than one command begins with vm-l, hitting TAB a second time will list the possibilities. This is particularly
useful when specifying object UUIDs in commands.
Note
When executing commands on a remote XenServer host, tab completion does not normally work. How-
ever if you put the server, username, and password in an environment variable called XE_EXTRA_ARGS
on the machine from which you are entering the commands, tab completion is enabled. See the section
called “Basic xe syntax” for details.
Command types
Broadly speaking, the CLI commands can be split into two halves: low-level commands concerned with listing and parameter manipulation of API objects, and higher-level commands for interacting with VMs or hosts at a more abstract level. The low-level commands are:
• <class>-list
• <class>-param-get
• <class>-param-set
• <class>-param-list
• <class>-param-add
• <class>-param-remove
• <class>-param-clear

where <class> is one of:
• bond
• console
• host
• host-crashdump
• host-cpu
• network
• patch
• pbd
• pif
• pool
• sm
• sr
• task
• template
• vbd
• vdi
• vif
• vlan
• vm
Note that not every value of <class> has the full set of <class>-param- commands; some have just a subset.
Parameter types
The objects that are addressed with the xe commands have sets of parameters that identify them and define
their states.
Most parameters take a single value. For example, the name-label parameter of a VM contains a single
string value. In the output from parameter list commands such as xe vm-param-list, such parameters have
an indication in parentheses that defines whether they can be read and written to, or are read-only. For
example, the output of xe vm-param-list on a specified VM might have the lines
user-version ( RW): 1
is-control-domain ( RO): false
The first parameter, user-version, is writeable and has the value 1. The second, is-control-domain,
is read-only and has a value of false.
The two other types of parameters are multi-valued. A set parameter contains a list of values. A map pa-
rameter is a set of key/value pairs. As an example, look at the following excerpt of some sample output of
the xe vm-param-list on a specified VM:
platform (MRW): acpi: true; apic: true; pae: true; nx: false
allowed-operations (SRO): pause; clean_shutdown; clean_reboot; \
hard_shutdown; hard_reboot; suspend
The platform parameter has a list of items that represent key/value pairs. The key names are followed
by a colon character (:). Each key/value pair is separated from the next by a semicolon character (;). The M
preceding the RW indicates that this is a map parameter and is readable and writeable. The allowed-op-
erations parameter has a list that makes up a set of items. The S preceding the RO indicates that this
is a set parameter and is readable but not writeable.
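A single key of a map parameter can also be read directly with param-get and param-key; for example, using the platform parameter from the listing above (the VM UUID is a placeholder):

```shell
# Read just the acpi key of the map-valued platform parameter.
xe vm-param-get uuid=<vm_uuid> param-name=platform param-key=acpi
```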
In xe commands where you want to filter on a map parameter, or set a map parameter, use the separator : (colon) between the map parameter name and the key/value pair. For example, to set the value of the foo key of the other-config parameter of a VM to baa, the command would be:

xe vm-param-set uuid=<VM_uuid> other-config:foo=baa
Note
In previous releases the separator - (dash) was used in specifying map parameters. This syntax still works
but is deprecated.
<class>-param-list uuid=<uuid>
Lists all of the parameters and their associated values. Unlike the class-list command, this will list the
values of "expensive" fields.
To change the parameters that are printed, the argument params should be specified as a comma-separated
list of the required parameters, e.g.:
xe vm-list params=name-label,other-config
xe vm-list params=all
Note that some parameters that are expensive to calculate are not shown by the list command; these appear in the output as <expensive field>.
To filter the list, the CLI matches parameter values with those specified on the command line, printing only objects that match all of the specified constraints. For example:

xe vm-list power-state=halted HVM-boot-policy="BIOS order"

will list only those VMs for which both the field power-state has the value halted, and the field HVM-boot-policy has the value BIOS order.
It is also possible to filter the list based on the value of keys in maps, or on the existence of values in a set. The syntax for the first of these is map-name:key=value, and for the second is set-name:contains=value.
For scripting, a useful technique is passing --minimal on the command line, causing xe to print only the
first field in a comma-separated list. For example, the command xe vm-list --minimal on a XenServer host
with three VMs installed gives the three UUIDs of the VMs, for example:
a85d6717-7264-d00e-069b-3b1d19d56ad9,aaa3eec5-9499-bcf3-4c03-af10baea96b7, \
42c044de-df69-4b30-89d9-2c199564581d
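The scripting technique described above can be sketched as follows; the vm-shutdown command and the is-control-domain filter are assumptions added for illustration, not part of the original text:

```shell
# Hypothetical sketch: iterate over the comma-separated UUIDs returned
# by --minimal, shutting down every VM that is not the control domain.
for uuid in $(xe vm-list is-control-domain=false --minimal | tr ',' ' '); do
    xe vm-shutdown uuid="$uuid"
done
```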
xe command reference
This section provides a reference to the xe commands. They are grouped by objects that the commands
address, and listed alphabetically.
Bonding commands
Commands for working with network bonds, for resilience with physical interface failover. See the section
called “Creating NIC bonds on a standalone host” for details.
The bond object is a reference object which glues together master and member PIFs. The master PIF is
the bonding interface which must be used as the overall PIF to refer to the bond. The member PIFs are a
set of 2 or more physical interfaces which have been combined into the high-level bonded interface.
Bond parameters
Bonds have the following parameters:
members   set of UUIDs for the underlying bonded PIFs (read only set parameter)
bond-create
bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1,pif_uuid_2,...>
Create a bonded network interface on the network specified from a list of existing PIF objects. The command
will fail if PIFs are in another bond already, if any member has a VLAN tag set, if the referenced PIFs are
not on the same XenServer host, or if fewer than 2 PIFs are supplied.
bond-destroy
bond-destroy uuid=<bond_uuid>
Delete a bonded interface specified by its UUID from the XenServer host.
CD commands
Commands for working with physical CD/DVD drives on XenServer hosts.
CD parameters
allowed-operations   A list of the operations that can be performed on this CD (read only set parameter)
current-operations   A list of the operations that are currently in progress on this CD (read only set parameter)
vbd-uuids   A list of the unique identifiers for the VBDs on VMs that connect to this CD (read only set parameter)
crashdump-uuids   Not used on CDs, since crashdumps cannot be written to them (read only set parameter)
other-config   A list of key/value pairs that specify additional configuration parameters for the CD (read/write map parameter)
xenstore-data   Data to be inserted into the xenstore tree (read only map parameter)
sm-config   Names and descriptions of storage manager device config keys (read only map parameter)
cd-list
cd-list [params=<param1,param2,...>] [parameter=<parameter_value>...]
List the CDs and ISOs (CD image files) on the XenServer host or pool, filtering on the optional argument
params.
If the optional argument params is used, the value of params is a string containing a list of parameters of
this object that you want to display. Alternatively, you can use the keyword all to show all parameters. If
params is not used, the returned list shows a default subset of all available parameters.
Optional arguments can be any number of the CD parameters listed at the beginning of this section.
Console commands
Commands for working with consoles.
The console objects can be listed with the standard object listing command (xe console-list), and the
parameters manipulated with the standard parameter commands. See the section called “Low-level param
commands” for details.
Console parameters
Consoles have the following parameters:
other-config   A list of key/value pairs that specify additional configuration parameters for the console (read/write map parameter)
Event commands
Commands for working with events.
Event classes
Event classes are listed in the following table:
vm A Virtual Machine
pif A physical network interface (separate VLANs are represented as several PIFs)
sr A storage repository
pbd The physical block devices through which hosts access SRs
event-wait
event-wait class=<class_name> [<param-name>=<param_value>] [<param-name>=/=<param_value>]
Block other commands from executing until an object exists that satisfies the conditions given on the com-
mand line. x=y means "wait for field x to take value y", and x=/=y means "wait for field x to take any value
other than y".
For example, xe event-wait class=vm uuid=$VM start-time=/=<previous_start_time> blocks until a VM with UUID $VM reboots (that is, has a different start-time value).
The class name can be any of the Event classes listed at the beginning of this section, and the parameters
can be any of those listed in the CLI command class-param-list.
Host commands
Commands for interacting with XenServer hosts.
XenServer hosts are the physical servers running XenServer software. They have VMs running on them
under the control of a special privileged Virtual Machine, known as the control domain or domain 0.
The XenServer host objects can be listed with the standard object listing command (xe host-list, xe host-
cpu-list, and xe host-crashdump-list), and the parameters manipulated with the standard parameter com-
mands. See the section called “Low-level param commands” for details.
Host selectors
Several of the commands listed here have a common mechanism for selecting one or more
XenServer hosts on which to perform the operation. The simplest is by supplying the argument
host=<uuid_or_name_label>. XenServer hosts can also be specified by filtering the full list of hosts
on the values of fields. For example, specifying enabled=true will select all XenServer hosts whose en-
abled field is equal to true. Where multiple XenServer hosts match, and the operation can be performed on multiple XenServer hosts, the option --multiple must be specified to perform the operation.
The full list of parameters that can be matched is described at the beginning of this section, and can be
obtained by running the command xe host-list params=all. If no parameters to select XenServer hosts are
given, the operation will be performed on all XenServer hosts.
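Using the selector mechanism just described, a filter and --multiple can be combined on one command line; the specific command and filter below are illustrative assumptions:

```shell
# Select every host whose enabled field is true; --multiple permits
# the operation to run on more than one matching host at once.
xe host-disable enabled=true --multiple
```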
Host parameters
XenServer hosts have the following parameters:
capabilities   list of Xen versions that the XenServer host can run (read only set parameter)
XenServer hosts contain some other objects that also have parameter lists.
vendor   the vendor string for the CPU, for example, "GenuineIntel" (read only)
modelname   the vendor string for the CPU model, for example, "Intel(R) Xeon(TM) CPU 3.00GHz" (read only)
timestamp   timestamp of the date and time that the crashdump occurred, in the form yyyymmdd-hhmmss-ABC, where ABC is the timezone indicator, for example, GMT (read only)
host-backup
host-backup file-name=<backup_filename> host=<host_name>
Download a backup of the control domain of the specified XenServer host to the machine that the command
is invoked from, and save it there as a file with the name file-name.
Caution
While the xe host-backup command will work if executed on the local host (that is, without a specific
hostname specified), do not use it this way. Doing so would fill up the control domain partition with the
backup file. The command should only be used from a remote off-host machine where you have space
to hold the backup file.
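In line with the caution above, the backup would typically be taken from a remote machine, along these lines (hostname, address, and credentials are placeholders):

```shell
# Run from a remote machine so the backup file is written locally,
# not onto the control domain partition of the host being backed up.
xe host-backup host=<host_name> file-name=host.backup \
   -s <xenserver_address> -u root -pw <password>
```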
host-bugreport-upload
host-bugreport-upload [<host-selector>=<host_selector_value>...] [url=<destination_url>]
[http-proxy=<http_proxy_name>]
Generate a fresh bug report (using xen-bugtool, with all optional files included) and upload to the Citrix
Support ftp site or some other location.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
Optional parameters are http-proxy: use specified http proxy, and url: upload to this destination URL. If
optional parameters are not used, no proxy server is identified and the destination will be the default Citrix
Support ftp site.
host-crashdump-destroy
host-crashdump-destroy uuid=<crashdump_uuid>
Delete a host crashdump specified by its UUID from the XenServer host.
host-crashdump-upload
host-crashdump-upload uuid=<crashdump_uuid>
[url=<destination_url>]
[http-proxy=<http_proxy_name>]
Upload a crashdump to the Citrix Support ftp site or other location. If optional parameters are not used, no
proxy server is identified and the destination will be the default Citrix Support ftp site. Optional parameters
are http-proxy: use specified http proxy, and url: upload to this destination URL.
host-disable
host-disable [<host-selector>=<host_selector_value>...]
Disables the specified XenServer hosts, which prevents any new VMs from starting on them. This prepares
the XenServer hosts to be shut down or rebooted.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
host-dmesg
host-dmesg [<host-selector>=<host_selector_value>...]
Get a Xen dmesg (the output of the kernel ring buffer) from specified XenServer hosts.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
host-emergency-management-reconfigure
host-emergency-management-reconfigure interface=<uuid_of_management_interface_pif>
Reconfigure the management interface of this XenServer host. Use this command only if the XenServer
host is in emergency mode, meaning that it is a member in a resource pool whose master has disappeared
from the network and could not be contacted for some number of retries.
host-enable
host-enable [<host-selector>=<host_selector_value>...]
Enables the specified XenServer hosts, which allows new VMs to be started on them.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
host-evacuate
host-evacuate [<host-selector>=<host_selector_value>...]
Live migrates all running VMs to other suitable hosts on a pool. The host must first be disabled using the
host-disable command.
If the evacuated host is the pool master, then another host must be selected to be the pool master. To change
the pool master with HA disabled, you need to use the pool-designate-new-master command. See the
section called “pool-designate-new-master” for details. With HA enabled, your only option is to shut down
the server, which will cause HA to elect a new master at random. See the section called “host-shutdown”.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
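A typical maintenance sequence combining these commands might look as follows (the hostname is a placeholder):

```shell
# Disable the host so that no new VMs start there, then live migrate
# its running VMs to other suitable hosts in the pool.
xe host-disable host=<host_name>
xe host-evacuate host=<host_name>
# After maintenance is complete, re-enable the host:
xe host-enable host=<host_name>
```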
host-forget
host-forget uuid=<XenServer_host_UUID>
The xapi agent forgets about the specified XenServer host without contacting it explicitly.
Use the --force parameter to avoid being prompted to confirm that you really want to perform this oper-
ation.
Warning
Don't use this command if HA is enabled on the pool. Disable HA first, then enable it again after you've
forgotten the host.
Tip
This command is useful if the XenServer host to "forget" is dead; however, if the XenServer host is live
and part of the pool, you should use xe pool-eject instead.
host-get-system-status
host-get-system-status filename=<name_for_status_file>
[entries=<comma_separated_list>] [output=<tar.bz2 | zip>] [<host-selector>=<host_selector_value>...]
Download system status information into the specified file. The optional parameter entries is a com-
ma-separated list of system status entries, taken from the capabilities XML fragment returned by the host-
get-system-status-capabilities command. See the section called “host-get-system-status-capabilities” for
details. If not specified, all system status information is saved in the file. The parameter output may be
tar.bz2 (the default) or zip; if this parameter is not specified, the file is saved in tar.bz2 form.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above).
host-get-system-status-capabilities
host-get-system-status-capabilities [<host-selector>=<host_selector_value>...]
Get system status capabilities for the specified host(s). The capabilities are returned as an XML fragment that describes each available system status entry and its attributes.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above).
host-is-in-emergency-mode
host-is-in-emergency-mode
Returns true if the host the CLI is talking to is currently in emergency mode, false otherwise. This CLI
command works directly on slave hosts even with no master host present.
host-license-add
host-license-add license-file=<path/license_filename> [host-uuid=<XenServer_host_UUID>]
Parses a local license file and adds it to the specified XenServer host.
host-license-view
host-license-view [host-uuid=<XenServer_host_UUID>]
Displays the contents of the XenServer host license.
host-logs-download
host-logs-download [file-name=<logfile_name>] [<host-selector>=<host_selector_value>...]
Download a copy of the logs of the specified XenServer hosts. The copy is saved by default in a timestamped
file named hostname-yyyy-mm-dd T hh:mm:ssZ.tar.gz. You can specify a different filename using
the optional parameter file-name.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
Caution
While the xe host-logs-download command will work if executed on the local host (that is, without a
specific hostname specified), do not use it this way. Doing so will clutter the control domain partition with
the copy of the logs. The command should only be used from a remote off-host machine where you have
space to hold the copy of the logs.
host-management-disable
host-management-disable
Disables the host agent listening on an external management network interface and disconnects all connected API clients (such as XenCenter). Operates directly on the XenServer host the CLI is connected to, and is not forwarded to the pool master if applied to a member XenServer host.
Warning
Be extremely careful when using this CLI command off-host, since once it is run it will not be possible to
connect to the control domain remotely over the network to re-enable it.
host-management-reconfigure
host-management-reconfigure [interface=<device>] | [pif-uuid=<uuid>]
Reconfigures the XenServer host to use the specified network interface as its management interface, which is the interface used to connect to XenCenter. The command rewrites the MANAGEMENT_INTERFACE key in /etc/xensource-inventory.
If the device name of an interface (which must have an IP address) is specified, the XenServer host will
immediately rebind. This works both in normal and emergency mode.
If the UUID of a PIF object is specified, the XenServer host determines which IP address to rebind to itself.
It must not be in emergency mode when this command is executed.
Warning
Be careful when using this CLI command off-host and ensure you have network connectivity on the new
interface (by using xe pif-reconfigure to set one up first). Otherwise, subsequent CLI commands will not
be able to reach the XenServer host.
host-reboot
host-reboot [<host-selector>=<host_selector_value>...]
Reboot the specified XenServer hosts. The specified XenServer hosts must be disabled first using the xe
host-disable command, otherwise a HOST_IN_USE error message is displayed.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled
and the pool will recover when the XenServer hosts returns. If you shut down a pool member, other members
and the master will continue to function. If you shut down the master, the pool will be out of action until the
master is rebooted and back on line (at which point the members will reconnect and synchronize with the
master) or until you make one of the members into the master.
host-restore
host-restore [file-name=<backup_filename>] [<host-selector>=<host_selector_value>...]
Restore a backup named file-name of the XenServer host control software. Note that the use of the
word "restore" here does not mean a full restore in the usual sense, it merely means that the compressed
backup file has been uncompressed and unpacked onto the secondary partition. After you've done a xe
host-restore, you have to boot the Install CD and use its Restore from Backup option.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
host-set-hostname-live
host-set-hostname-live host-uuid=<uuid_of_host> hostname=<new_hostname>
Change the hostname of the XenServer host specified by host-uuid. This command persistently sets
both the hostname in the control domain database and the actual Linux hostname of the XenServer host.
Note that hostname is not the same as the value of the name_label field.
host-shutdown
host-shutdown [<host-selector>=<host_selector_value>...]
Shut down the specified XenServer hosts. The specified XenServer hosts must be disabled first using the
xe host-disable command, otherwise a HOST_IN_USE error message is displayed.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled
and the pool will recover when the XenServer hosts returns. If you shut down a pool member, other members
and the master will continue to function. If you shut down the master, the pool will be out of action until
the master is rebooted and back on line, at which point the members will reconnect and synchronize with
the master, or until one of the members is made into the master. If HA is enabled for the pool, one of the
members will be made into a master automatically. If HA is disabled, you must manually designate the
desired server as master with the pool-designate-new-master command. See the section called “pool-
designate-new-master”.
host-syslog-reconfigure
host-syslog-reconfigure [<host-selector>=<host_selector_value>...]
Reconfigure the syslog daemon on the specified XenServer hosts. This command applies the configuration
information defined in the host logging parameter.
The host(s) on which this operation should be performed are selected using the standard selection mech-
anism (see host selectors above). Optional arguments can be any number of the host selectors listed at
the beginning of this section.
Log commands
Commands for working with logs.
log-get-keys
log-get-keys
List the keys of all of the loggers currently in use.
log-reopen
log-reopen
Reopen all loggers. Use this command for rotating log files.
log-set-output
log-set-output output=nil | stderr | file:<filename> | syslog:<sysloglocation> [key=<key>] [level= debug
| info | warning | error]
Set the output of the specified logger. Log messages are filtered by the subsystem in which they originated and the log level of the message. For example, you can send debug logging messages from the storage manager to a file.
The optional parameter key specifies the particular logging subsystem. If this parameter is not set, it will
default to all logging subsystems.
The optional parameter level specifies the logging level. Valid values are:
• debug
• info
• warning
• error
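The storage manager example mentioned above might be invoked along these lines; the subsystem key name sm and the file path are assumptions, not confirmed by the original text:

```shell
# Assumed subsystem key "sm" for the storage manager; the log file
# path is illustrative.
xe log-set-output key=sm level=debug output=file:/tmp/sm-debug.log
```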
Message commands
Commands for working with messages. Messages are created to notify users of significant events, and are
displayed in XenCenter as system alerts.
Message parameters
timestamp   The time that the message was generated (read only)
message-create
message-create name=<message_name> body=<message_text> [[host-uuid=<uuid_of_host>] | [sr-uuid=<uuid_of_sr>] | [vm-uuid=<uuid_of_vm>] | [pool-uuid=<uuid_of_pool>]]
Creates a new message associated with the specified host, SR, VM, or pool.
message-list
message-list
Lists all messages, or messages that match the specified standard selectable parameters.
Network commands
Commands for working with networks.
The network objects can be listed with the standard object listing command (xe network-list), and the
parameters manipulated with the standard parameter commands. See the section called “Low-level param
commands” for details.
Network parameters
Networks have the following parameters:
VIF-uuids   A list of unique identifiers of the VIFs (virtual network interfaces) that are attached from VMs to this network (read only set parameter)
PIF-uuids   A list of unique identifiers of the PIFs (physical network interfaces) that are attached from XenServer hosts to this network (read only set parameter)
network-create
network-create name-label=<name_for_network> [name-description=<descriptive_text>]
Creates a new network.
network-destroy
network-destroy uuid=<network_uuid>
Destroys an existing network.
Patch commands
Commands for working with XenServer host patches (updates).
The patch objects can be listed with the standard object listing command (xe patch-list), and the parameters
manipulated with the standard parameter commands. See the section called “Low-level param commands”
for details.
Patch parameters
Patches have the following parameters:
applied   Whether or not the patch has been applied; true or false (read only)
size   The size of the patch (read only)
patch-apply
patch-apply uuid=<patch_file_uuid>
patch-clean
patch-clean uuid=<patch_file_uuid>
patch-pool-apply
patch-pool-apply uuid=<patch_uuid>
patch-precheck
patch-precheck uuid=<patch_uuid> host-uuid=<host_uuid>
Run the prechecks contained within the specified patch on the specified XenServer host.
patch-upload
patch-upload file-name=<patch_filename>
Upload a specified patch file to the XenServer host. This prepares a patch to be applied. On suc-
cess, the UUID of the uploaded patch is printed out. If the patch has previously been uploaded, a
PATCH_ALREADY_EXISTS error is returned instead and the patch is not uploaded again.
PBD commands
Commands for working with PBDs (Physical Block Devices). These are the software objects through which
the XenServer host accesses storage repositories (SRs).
The PBD objects can be listed with the standard object listing command (xe pbd-list), and the parameters
manipulated with the standard parameter commands. See the section called “Low-level param commands”
for details.
PBD parameters
PBDs have the following parameters:
pbd-create
pbd-create host-uuid=<uuid_of_host>
sr-uuid=<uuid_of_sr>
[device-config:key=<corresponding_value>...]
Create a new PBD on a XenServer host. The read-only device-config parameter can only be set on
creation.
To add a mapping of 'path' -> '/tmp', the command line should contain the argument de-
vice-config:path=/tmp
For a full list of supported device-config key/value pairs on each SR type see Chapter 3, Storage.
pbd-destroy
pbd-destroy uuid=<uuid_of_pbd>
pbd-plug
pbd-plug uuid=<uuid_of_pbd>
Attempts to plug in the PBD to the XenServer host. If this succeeds, the referenced SR (and the VDIs
contained within) should then become visible to the XenServer host.
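The create and plug steps are typically run together. In the sketch below the UUIDs are placeholders, and the device-config key shown is illustrative only; the valid keys depend on the SR type (see Chapter 3, Storage):

```shell
# Attach an existing SR to a host by creating and plugging a PBD.
# pbd-create prints the UUID of the new PBD on success.
PBD=$(xe pbd-create host-uuid=<host_uuid> sr-uuid=<sr_uuid> \
      device-config:location=/mnt/storage)
xe pbd-plug uuid=$PBD
```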
pbd-unplug
pbd-unplug uuid=<uuid_of_pbd>
PIF commands
Commands for working with PIFs (objects representing the physical network interfaces).
The PIF objects can be listed with the standard object listing command (xe pif-list), and the parameters
manipulated with the standard parameter commands. See the section called “Low-level param commands”
for details.
PIF parameters
PIFs have the following parameters:
VLAN VLAN tag for all traffic passing through this interface; -1 indicates no VLAN tag is assigned read only
bond-master-of the UUID of the bond this PIF is the master of (if any) read only
bond-slave-of the UUID of the bond this PIF is the slave of (if any) read only
io_read_kbs average read rate in kB/s for the device read only
io_write_kbs average write rate in kB/s for the device read only
Note
Changes made to the other-config fields of a PIF will only take effect after a reboot. Alternately, use
the xe pif-unplug and xe pif-plug commands to cause the PIF configuration to be rewritten.
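The replug sequence described in the note can be sketched as follows; the ethtool-autoneg key is one example of an other-config entry, and the UUID is a placeholder:

```shell
# Apply an other-config change without rebooting, by replugging the PIF.
xe pif-param-set uuid=<pif_uuid> other-config:ethtool-autoneg=off
xe pif-unplug uuid=<pif_uuid>
xe pif-plug uuid=<pif_uuid>
```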
pif-forget
pif-forget uuid=<uuid_of_pif>
pif-introduce
pif-introduce host-uuid=<UUID of XenServer host> mac=<mac_address_for_pif> device=<machine-readable name of the interface (for example, eth0)>
Create a new PIF object representing a physical interface on the specified XenServer host.
pif-plug
pif-plug uuid=<uuid_of_pif>
pif-reconfigure-ip
pif-reconfigure-ip uuid=<uuid_of_pif> mode=<dhcp | static> [IP=<ip_address>] [gateway=<gateway_address>] [netmask=<netmask>] [DNS=<dns_address>]
Modify the IP address of the PIF. For static IP configuration, set the mode parameter to static, with the gateway, IP, and netmask parameters set to the appropriate values. To use DHCP, set the mode parameter to dhcp and leave the static parameters undefined.
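For instance, a static configuration might look like this; the addresses are placeholder values:

```shell
# Give a PIF a static IP address (values are illustrative).
xe pif-reconfigure-ip uuid=<pif_uuid> mode=static \
    IP=192.168.0.10 netmask=255.255.255.0 gateway=192.168.0.1
```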
pif-scan
pif-scan host-uuid=<UUID of XenServer host>
Scan for new physical interfaces on a XenServer host.
pif-unplug
pif-unplug uuid=<uuid_of_pif>
Pool commands
Commands for working with pools. A pool is an aggregate of one or more XenServer hosts. A pool uses
one or more shared storage repositories so that the VMs running on one XenServer host in the pool can
be migrated in near-real time (while still running, without needing to be shut down and brought back up)
to another XenServer host in the pool. Each XenServer host is really a pool consisting of a single member by default. When a XenServer host is joined to a pool, it is designated as a member, and the master of the pool it has joined remains the master of the combined pool.
The singleton pool object can be listed with the standard object listing command (xe pool-list), and its
parameters manipulated with the standard parameter commands. See the section called “Low-level param
commands” for details.
Pool parameters
ha-allow-overcommit read/write
pool-designate-new-master
pool-designate-new-master host-uuid=<UUID of member XenServer host to become new master>
Instruct the specified member XenServer host to become the master of an existing pool. This performs an
orderly handover of the role of master host to another host in the resource pool. This command only works
when the current master is online, and is not a replacement for the emergency mode commands listed below.
pool-dump-database
pool-dump-database file-name=<filename_to_dump_database_into_(on_client)>
Download a copy of the entire pool database and dump it into a file on the client.
pool-eject
pool-eject host-uuid=<UUID of XenServer host to eject>
pool-emergency-reset-master
pool-emergency-reset-master master-address=<address of the pool's master XenServer host>
Instruct a slave member XenServer host to reset its master address to the new value and attempt to connect
to it. This command should not be run on master hosts.
pool-emergency-transition-to-master
pool-emergency-transition-to-master
Instruct a member XenServer host to become the pool master. This command is only accepted by the
XenServer host if it has transitioned to emergency mode, meaning it is a member of a pool whose master
has disappeared from the network and could not be contacted for some number of retries.
Note that this command may cause the password of the host to reset if it has been modified since joining
the pool (see the section called “User commands”).
pool-ha-enable
pool-ha-enable heartbeat-sr-uuids=<SR_UUID_of_the_Heartbeat_SR>
Enable High Availability on the resource pool, using the specified SR UUID as the central storage heartbeat
repository.
pool-ha-disable
pool-ha-disable
pool-join
pool-join master-address=<address> master-username=<username> master-password=<password>
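This command is run on the host that is joining the pool; the address and credentials below are placeholders:

```shell
# On the host that is joining the pool, supply the master's address
# and the master's root credentials.
xe pool-join master-address=10.0.0.1 master-username=root master-password=<password>
```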
pool-recover-slaves
pool-recover-slaves
Instruct the pool master to try to reset the master address of all members currently running in emergency mode. This is typically used after pool-emergency-transition-to-master has been used to set one of the members as the new master.
pool-restore-database
pool-restore-database file-name=<filename_to_restore_from_(on_client)> [dry-run=<true | false>]
Upload a database backup (created with pool-dump-database) to a pool. On receiving the upload, the
master will restart itself with the new database.
There is also a dry run option, which allows you to check that the pool database can be restored without actually performing the operation. By default, dry-run is set to false.
pool-sync-database
pool-sync-database
Force the pool database to be synchronized across all hosts in the resource pool. This is not necessary
in normal operation since the database is regularly automatically replicated, but can be useful for ensuring
changes are rapidly replicated after performing a significant set of CLI operations.
Storage Manager commands
Commands for working with SM (Storage Manager) plugins.
The storage manager objects can be listed with the standard object listing command (xe sm-list), and the parameters manipulated with the standard parameter commands. See the section called “Low-level param commands” for details.
SM parameters
SMs have the following parameters:
vendor name of the vendor who created this plugin read only
SR commands
Commands for controlling SRs (storage repositories).
The SR objects can be listed with the standard object listing command (xe sr-list), and the parameters
manipulated with the standard parameter commands. See the section called “Low-level param commands”
for details.
SR parameters
SRs have the following parameters:
allowed-operations list of the operations allowed on the SR in this state read only set parameter
current-operations list of the operations that are currently in progress on this SR read only set parameter
VDIs unique identifier/object reference for the virtual disks in this SR read only set parameter
PBDs unique identifier/object reference for the PBDs attached to this SR read only set parameter
content-type the type of the SR's content. Used to distinguish ISO libraries from other SRs. For storage repositories that store a library of ISOs, the content-type must be set to iso. In other cases, Citrix recommends that this be set either to empty, or the string user. read only
other-config list of key/value pairs that specify additional configuration parameters for the SR read/write map parameter
sr-create
sr-create name-label=<name> physical-size=<size> type=<type>
content-type=<content_type> device-config:<config_name>=<value>
[host-uuid=<XenServer host UUID>] [shared=<true | false>]
Creates an SR on the disk, introduces it into the database, and creates a PBD attaching the SR to a XenServ-
er host. If shared is set to true, a PBD is created for each XenServer host in the pool; if shared is not
specified or set to false, a PBD is created only for the XenServer host specified with host-uuid.
The exact device-config parameters differ depending on the device type. See Chapter 3, Storage for
details of these parameters across the different storage backends.
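As one illustration, a shared NFS SR might be created as follows; the server and serverpath keys belong to the NFS backend, and the address and export path shown are placeholders:

```shell
# Create a shared NFS SR; a PBD is created for every host in the pool.
xe sr-create name-label="NFS store" type=nfs content-type=user shared=true \
    device-config:server=10.0.0.20 device-config:serverpath=/exports/xen
```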
sr-destroy
sr-destroy uuid=<sr_uuid>
sr-forget
sr-forget uuid=<sr_uuid>
The xapi agent forgets about a specified SR on the XenServer host, meaning that the SR is detached and
you cannot access VDIs on it, but it remains intact on the source media (the data is not lost).
sr-introduce
sr-introduce name-label=<name>
physical-size=<physical_size>
type=<type>
content-type=<content_type>
uuid=<sr_uuid>
Simply places an SR record into the database. The device-config parameters are specified by device-config:<parameter_key>=<parameter_value>, for example:
xe sr-introduce device-config:device=/dev/sdb1
Note
This command is never used in normal operation. It is an advanced operation which might be useful if
an SR needs to be reconfigured as shared after it was created, or to help recover from various failure
scenarios.
sr-probe
sr-probe type=<type> [host-uuid=<uuid_of_host>] [device-config:<config_name>=<value>]
Performs a backend-specific scan, using the provided device-config keys. If the device-config is
complete for the SR backend, then this will return a list of the SRs present on the device, if any. If the
device-config parameters are only partial, then a backend-specific scan will be performed, returning
results that will guide you in improving the remaining device-config parameters. The scan results are
returned as backend-specific XML, printed out on the CLI.
The exact device-config parameters differ depending on the device type. See Chapter 3, Storage for
details of these parameters across the different storage backends.
sr-scan
sr-scan uuid=<sr_uuid>
Force an SR scan, syncing the xapi database with VDIs present in the underlying storage substrate.
Task commands
Commands for working with long-running asynchronous tasks. These are tasks such as starting, stopping,
and suspending a Virtual Machine, which are typically made up of a set of other atomic subtasks that together
accomplish the requested operation.
The task objects can be listed with the standard object listing command (xe task-list), and the parameters
manipulated with the standard parameter commands. See the section called “Low-level param commands”
for details.
Task parameters
Tasks have the following parameters:
progress if the Task is still pending, this field contains the estimated percentage complete, from 0 to 1. If the Task has completed, successfully or unsuccessfully, this should be 1 read only
error_info if the Task has failed, this parameter contains the set of associated error strings; otherwise, this parameter's value is undefined read only
subtask_of contains the UUID of the task this task is a sub-task of read only
task-cancel
task-cancel [uuid=<task_uuid>]
Template commands
Commands for working with VM templates.
Templates are essentially VMs with the is-a-template parameter set to true. A template is a "gold
image" that contains all the various configuration settings to instantiate a specific VM. XenServer ships with
a base set of templates, which range from generic "raw" VMs that can boot an OS vendor installation CD
(RHEL, CentOS, SLES, Windows) to complete pre-configured OS instances (Debian Etch). With XenServer
you can create VMs, configure them in standard forms for your particular needs, and save a copy of them
as templates for future use in VM deployment.
The template objects can be listed with the standard object listing command (xe template-list), and the
parameters manipulated with the standard parameter commands. See the section called “Low-level param
commands” for details.
Template parameters
Templates have the following parameters:
VCPUs-params configuration parameters for the selected vCPU policy read/write map parameter
For example:
xe vm-param-set uuid=<template_uuid> VCPUs-params:mask=1,2,3
xe vm-param-set uuid=<template_uuid> VCPUs-params:weight=512
xe vm-param-set uuid=<template_uuid> VCPUs-params:cap=100
current-operations list of the operations that are currently in progress on this template read only set parameter
VCPUs-utilisation list of virtual CPUs and their weight read only map parameter
Timestamps are reported in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT).
template-export
template-export template-uuid=<uuid_of_existing_template> filename=<filename_for_new_template>
Exports a copy of a specified template to a file with the specified new filename.
Update commands
Commands for working with updates to the OEM edition of XenServer. For commands relating to updating
the standard non-OEM editions of XenServer, see the section called “Patch (update) commands” for details.
update-upload
update-upload file-name=<name_of_upload_file>
Streams a new software image to an OEM edition XenServer host. You must then restart the host for the update to take effect.
User commands
user-password-change
user-password-change old=<old_password> new=<new_password>
Changes the password of the logged-in user. The old password field is not checked because you require supervisor privilege to use this command.
VBD commands
Commands for working with VBDs (Virtual Block Devices).
A VBD is a software object that connects a VM to the VDI, which represents the contents of the virtual disk.
The VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on),
while the VDI has the information on the physical attributes of the virtual disk (which type of SR, whether
the disk is shareable, whether the media is read/write or read only, and so on).
The VBD objects can be listed with the standard object listing command (xe vbd-list), and the parameters
manipulated with the standard parameter commands. See the section called “Low-level param commands”
for details.
VBD parameters
VBDs have the following parameters:
vbd-create
vbd-create vm-uuid=<uuid_of_the_vm> device=<device_value> vdi-uuid=<uuid_of_the_vdi> [bootable=true] [type=<Disk | CD>] [mode=<RW | RO>]
Create a new VBD on a VM.
Appropriate values for the device field are listed in the parameter allowed-VBD-devices on the specified VM. Before any VBDs exist there, the allowable values are integers from 0-15.
If the type is CD, vdi-uuid is optional; if no VDI is specified, an empty VBD will be created for the CD.
Mode must be RO for a CD.
vbd-destroy
vbd-destroy uuid=<uuid_of_vbd>
If the VBD has its other-config:owner parameter set to true, the associated VDI will also be destroyed.
vbd-eject
vbd-eject uuid=<uuid_of_vbd>
Remove the media from the drive represented by a VBD. This command only works if the media is of a
removable type (a physical CD or an ISO); otherwise an error message VBD_NOT_REMOVABLE_MEDIA is
returned.
vbd-insert
vbd-insert uuid=<uuid_of_vbd> vdi-uuid=<uuid_of_vdi_containing_media>
Insert new media into the drive represented by a VBD. This command only works if the media is of a
removable type (a physical CD or an ISO); otherwise an error message VBD_NOT_REMOVABLE_MEDIA is
returned.
vbd-plug
vbd-plug uuid=<uuid_of_vbd>
vbd-unplug
vbd-unplug uuid=<uuid_of_vbd>
Attempts to detach the VBD from the VM while it is in the running state.
VDI commands
Commands for working with VDIs (Virtual Disk Images).
A VDI is a software object that represents the contents of the virtual disk seen by a VM, as opposed to the
VBD, which is a connector object that ties a VM to the VDI. The VDI has the information on the physical
attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/
write or read only, and so on), while the VBD has the attributes which tie the VDI to the VM (is it bootable,
its read/write metrics, and so on).
The VDI objects can be listed with the standard object listing command (xe vdi-list), and the parameters
manipulated with the standard parameter commands. See the section called “Low-level param commands”
for details.
VDI parameters
VDIs have the following parameters:
allowed-operations a list of the operations allowed in this state read only set parameter
current-operations a list of the operations that are currently in progress on this VDI read only set parameter
vbd-uuids a list of VBDs that refer to this VDI read only set parameter
crashdump-uuids list of crash dumps that refer to this VDI read only set parameter
read-only true if this VDI can only be mounted read-only read only
xenstore-data data to be inserted into the xenstore tree (/local/domain/0/backend/vbd/<domid>/<device-id>/sm-data) after the VDI is attached. This is generally set by the SM backends on vdi_attach. read only map parameter
vdi-clone
vdi-clone uuid=<uuid_of_the_vdi> [driver-params:<key=value>]
Create a new, writable copy of the specified VDI that can be used directly. It is a variant of vdi-copy that is
capable of exposing high-speed image clone facilities where they exist.
The optional driver-params map parameter can be used for passing extra vendor-specific configuration
information to the back end storage driver that the VDI is based on. See the storage vendor driver docu-
mentation for details.
vdi-copy
vdi-copy uuid=<uuid_of_the_vdi> sr-uuid=<uuid_of_the_destination_sr>
vdi-create
vdi-create sr-uuid=<uuid_of_the_sr_where_you_want_to_create_the_vdi>
name-label=<name_for_the_vdi>
type=<system | user | suspend | crashdump>
virtual-size=<size_of_virtual_disk>
sm-config-*=<storage_specific_configuration_data>
Create a VDI.
The virtual-size parameter can be specified in bytes or using the IEC standard suffixes KiB (2^10 bytes), MiB (2^20 bytes), GiB (2^30 bytes), and TiB (2^40 bytes).
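For instance, a 10 GiB user disk could be created as follows; the SR UUID and name-label are placeholders:

```shell
# Create a 10 GiB user disk on the given SR.
# The UUID of the new VDI is printed on success.
xe vdi-create sr-uuid=<sr_uuid> name-label="data disk" type=user virtual-size=10GiB
```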
Note
SR types that support sparse allocation of disks (such as Local VHD and NFS) do not enforce virtual
allocation of disks. Users should therefore take great care when over-allocating virtual disk space on an
SR. If an over-allocated SR does become full, disk space must be made available either on the SR target
substrate or by deleting unused VDIs in the SR.
Note
Some SR types might round up the virtual-size value to make it divisible by a configured block size.
vdi-destroy
vdi-destroy uuid=<uuid_of_vdi>
Note
In the case of Local VHD and NFS SR types, disk space is not immediately released on vdi-destroy, but
periodically during a storage repository scan operation. Users that need to force deleted disk space to
be made available should call sr-scan manually.
vdi-forget
vdi-forget uuid=<uuid_of_vdi>
Unconditionally removes a VDI record from the database without touching the storage backend. In normal
operation, you should be using vdi-destroy instead.
vdi-import
vdi-import uuid=<uuid_of_vdi> filename=<filename_of_raw_vdi>
vdi-introduce
vdi-introduce uuid=<uuid_of_vdi>
sr-uuid=<uuid_of_sr_to_import_into>
name-label=<name_of_the_new_vdi>
type=<system | user | suspend | crashdump>
location=<device_location_(varies_by_storage_type)>
[name-description=<description_of_vdi>]
[sharable=<yes | no>]
[read-only=<yes | no>]
[other-config=<map_to_store_misc_user_specific_data>]
[xenstore-data=<map_of_additional_xenstore_keys>]
[sm-config=<storage_specific_configuration_data>]
Create a VDI object representing an existing storage device, without actually modifying or creating any
storage. This command is primarily used internally to automatically introduce hot-plugged storage devices.
vdi-resize
vdi-resize uuid=<vdi_uuid> disk-size=<new_size_for_disk>
vdi-snapshot
vdi-snapshot uuid=<uuid_of_the_vdi> [driver-params=<params>]
Produces a read-write version of a VDI that can be used as a reference for backup and/or templating pur-
poses. You can perform a backup from a snapshot rather than installing and running backup software inside
the VM. The VM can continue running while external backup software streams the contents of the snapshot
to the backup media. Similarly, a snapshot can be used as a "gold image" on which to base a template. A
template can be made using any VDIs.
The optional driver-params map parameter can be used for passing extra vendor-specific configuration
information to the back end storage driver that the VDI is based on. See the storage vendor driver docu-
mentation for details.
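The backup workflow described above can be sketched as follows; the UUIDs are placeholders, and the example relies on vdi-snapshot printing the UUID of the snapshot VDI:

```shell
# Snapshot a running VM's disk, copy the snapshot to a backup SR,
# then discard the snapshot once the copy completes.
SNAP=$(xe vdi-snapshot uuid=<vdi_uuid>)
xe vdi-copy uuid=$SNAP sr-uuid=<backup_sr_uuid>
xe vdi-destroy uuid=$SNAP
```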
vdi-unlock
vdi-unlock uuid=<uuid_of_vdi_to_unlock> [force=true]
Attempts to unlock the specified VDI. If force=true is passed to the command, it will force the unlocking operation.
VIF commands
Commands for working with VIFs (Virtual network interfaces).
The VIF objects can be listed with the standard object listing command (xe vif-list), and the parameters
manipulated with the standard parameter commands. See the section called “Low-level param commands”
for details.
VIF parameters
VIFs have the following parameters:
xe vif-param-set \
uuid=<vif_uuid> \
other-config:mtu=9000
vif-create
vif-create vm-uuid=<uuid_of_the_vm> device=<see below>
network-uuid=<uuid_of_the_network_the_vif_will_connect_to> [mac=<mac_address>]
Appropriate values for the device field are listed in the parameter allowed-VIF-devices on the spec-
ified VM. Before any VIFs exist there, the allowable values are integers from 0-15.
The mac parameter is the standard MAC address in the form aa:bb:cc:dd:ee:ff. If you leave it un-
specified, an appropriate random MAC address will be created. You can also explicitly set a random MAC
address by specifying mac=random.
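Putting these pieces together, a new virtual NIC might be created and attached as follows; the UUIDs are placeholders, and the example relies on vif-create printing the UUID of the new VIF:

```shell
# Attach a new virtual NIC to a VM with a random MAC address,
# then hot-plug it into the running VM.
VIF=$(xe vif-create vm-uuid=<vm_uuid> network-uuid=<network_uuid> device=0 mac=random)
xe vif-plug uuid=$VIF
```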
vif-destroy
vif-destroy uuid=<uuid_of_vif>
Destroy a VIF.
vif-plug
vif-plug uuid=<uuid_of_vif>
vif-unplug
vif-unplug uuid=<uuid_of_vif>
VLAN commands
Commands for working with VLANs (virtual networks). To list and edit virtual interfaces, refer to the PIF
commands, which have a VLAN parameter to signal that they have an associated virtual network (see the
section called “PIF commands”). For example, to list VLANs you need to use xe pif-list.
vlan-create
vlan-create pif-uuid=<uuid_of_pif> vlan=<vlan_number> network-uuid=<uuid_of_network>
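A typical sequence is to create a network first and then bind a VLAN tag on a physical interface to it; the VLAN number and name-label below are placeholders:

```shell
# Create a network, then bind VLAN 42 on a physical interface to it.
NET=$(xe network-create name-label=vlan42)
xe vlan-create pif-uuid=<pif_uuid> vlan=42 network-uuid=$NET
```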
pool-vlan-create
pool-vlan-create pif-uuid=<uuid_of_pif> vlan=<vlan_number> network-uuid=<uuid_of_network>
Create a new VLAN on all hosts in a pool, by determining which interface (for example, eth0) the specified network is on for each host, and creating and plugging a new PIF object on each host accordingly.
vlan-destroy
vlan-destroy uuid=<uuid_of_pif_mapped_to_vlan>
Destroy a VLAN. Requires the UUID of the PIF that represents the VLAN.
VM commands
Commands for controlling VMs and their attributes.
VM selectors
Several of the commands listed here have a common mechanism for selecting one or more VMs on which
to perform the operation. The simplest way is by supplying the argument vm=<name_or_uuid>. VMs
can also be specified by filtering the full list of VMs on the values of fields. For example, specifying pow-
er-state=halted will select all VMs whose power-state parameter is equal to halted. Where mul-
tiple VMs are matching, the option --multiple must be specified to perform the operation. The full list
of parameters that can be matched is described at the beginning of this section, and can be obtained by
the command xe vm-list params=all. If no parameters to select VMs are given, the operation will
be performed on all VMs.
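The selector mechanism can be sketched as follows; the VM names are placeholders:

```shell
# Act on one VM by name, or filter by a field value.
xe vm-clone vm=webserver new-name-label=webserver-copy
xe vm-list power-state=halted params=name-label,uuid
```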
The VM objects can be listed with the standard object listing command (xe vm-list), and the parameters
manipulated with the standard parameter commands. See the section called “Low-level param commands”
for details.
VM parameters
Note
All writeable VM parameter values can be changed while the VM is running, but the new parameters are not applied dynamically and do not take effect until the VM is rebooted.
VCPUs-params configuration parameters for the selected vCPU policy read/write map parameter
For example:
xe vm-param-set uuid=<vm_uuid> VCPUs-params:mask=1,2,3
xe vm-param-set uuid=<vm_uuid> VCPUs-params:weight=512
xe vm-param-set uuid=<vm_uuid> VCPUs-params:cap=100
Timestamps are reported in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT).
VCPUs-utilisation a list of virtual CPUs and their weight read only map parameter
vm-cd-add
vm-cd-add cd-name=<name_of_new_cd> device=<integer_value_of_an_available_vbd>
[<vm-selector>=<vm_selector_value>...]
Add a new virtual CD to the selected VM. The device parameter should be selected from the value of the
allowed-VBD-devices parameter of the VM.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-cd-eject
vm-cd-eject [<vm-selector>=<vm_selector_value>...]
Eject a CD from the virtual CD drive. This command will only work if there is one and only one CD attached
to the VM. When there are two or more CDs, please use the command xe vbd-eject and specify the UUID
of the VBD.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-cd-insert
vm-cd-insert cd-name=<name_of_cd> [<vm-selector>=<vm_selector_value>...]
Insert a CD into the virtual CD drive. This command will only work if there is one and only one empty CD
device attached to the VM. When there are two or more empty CD devices, please use the command xe
vbd-insert and specify the UUIDs of the VBD and of the VDI to insert.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-cd-list
vm-cd-list [vbd-params] [vdi-params] [<vm-selector>=<vm_selector_value>...]
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
You can also select which VBD and VDI parameters to list.
vm-cd-remove
vm-cd-remove cd-name=<name_of_cd> [<vm-selector>=<vm_selector_value>...]
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-clone
vm-clone new-name-label=<name_for_clone>
[new-name-description=<description_for_clone>] [<vm-selector>=<vm_selector_value>...]
Clone an existing VM, using storage-level fast disk clone operation where available. Specify the name
and the optional description for the resulting cloned VM using the new-name-label and new-name-
description arguments.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-compute-maximum-memory
vm-compute-maximum-memory total=<amount_of_available_physical_ram_in_bytes>
[approximate=<add overhead memory for additional vCPUs? true | false>]
[<vm_selector>=<vm_selector_value>...]
Calculate the maximum amount of static memory which can be allocated to an existing VM, using the total
amount of physical RAM as an upper bound. The optional parameter approximate reserves sufficient
extra memory in the calculation to account for adding extra vCPUs into the VM at a later date.
For example:
xe vm-compute-maximum-memory total=$(xe host-list params=memory-free --minimal) vm=testvm
uses the value of the memory-free parameter returned by the xe host-list command to set the maximum memory of the VM named testvm.
The VM or VMs on which this operation will be performed are selected using the standard selection mech-
anism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the be-
ginning of this section.
vm-copy
vm-copy new-name-label=<name_for_copy> [new-name-description=<description_for_copy>]
[sr-uuid=<uuid_of_sr>] [<vm-selector>=<vm_selector_value>...]
Copy an existing VM, but without using storage-level fast disk clone operation (even if this is available).
The disk images of the copied VM are guaranteed to be "full images" - that is, not part of a copy-on-write
(CoW) chain.
Specify the name and the optional description for the resulting copied VM using the new-name-label and
new-name-description arguments.
Specify the destination SR for the resulting copied VM using the sr-uuid. If this parameter is not specified,
the destination is the same SR that the original VM is in.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-crashdump-list
vm-crashdump-list [<vm-selector>=<vm selector value>...]
If the optional argument params is used, the value of params is a string containing a list of parameters of
this object that you want to display. Alternatively, you can use the keyword all to show all parameters. If
params is not used, the returned list shows a default subset of all available parameters.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-data-source-forget
vm-data-source-forget data-source=<name_description_of_data-source> [<vm-selector>=<vm se-
lector value>...]
Stop recording the specified data source for a VM, and forget all of the recorded data.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-data-source-list
vm-data-source-list [<vm-selector>=<vm selector value>...]
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-data-source-query
vm-data-source-query data-source=<name_description_of_data-source> [<vm-selector>=<vm selec-
tor value>...]
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-data-source-record
vm-data-source-record data-source=<name_description_of_data-source> [<vm-selector>=<vm selector value>...]
Record the specified data source for a VM.
This operation writes the information from the data source to the VM's persistent performance metrics
database. This database is distinct from the normal agent database for performance reasons.
Data sources have the true/false parameters standard and enabled, which can be seen in the output of
the vm-data-source-list command. If enabled=true, the data source's metrics are currently being recorded
to the performance database; if enabled=false, they are not. Data sources with standard=true have
enabled=true by default and have their metrics recorded to the performance database; data sources
with standard=false have enabled=false by default. The vm-data-source-record command sets
enabled=true.
Once enabled, you can stop recording the data source's metrics using the vm-data-source-forget com-
mand.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
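Taken together, the vm-data-source-* commands form a small workflow: list the available data sources, enable recording, query the recorded values, then stop and discard them. The sketch below shows the shape of such a session; the VM name and data source name are hypothetical, and the xe calls are echoed through a stub rather than executed, since they require a live XenServer host.

```shell
# Stub that prints each xe invocation instead of running it; on a real
# host, replace the body with "$@".
run() { echo "+ $*"; }

# See which data sources exist and whether each is enabled.
run xe vm-data-source-list vm=linux-vm-1

# Start recording a non-standard data source, query it, then stop
# recording and discard the recorded data.
run xe vm-data-source-record data-source=cpu0 vm=linux-vm-1
run xe vm-data-source-query data-source=cpu0 vm=linux-vm-1
run xe vm-data-source-forget data-source=cpu0 vm=linux-vm-1
```

Each line prints the command it would run, prefixed with "+".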
vm-destroy
vm-destroy uuid=<uuid_of_vm>
Destroy the specified VM. This leaves the storage associated with the VM intact. To delete storage as well,
use xe vm-uninstall.
vm-disk-add
vm-disk-add disk-size=<size_of_disk_to_add> device=<uuid_of_device>
[<vm-selector>=<vm_selector_value>...]
Add a new disk to the specified VMs. Select the device parameter from the value of the allowed-VBD-
devices parameter of the VMs.
The disk-size parameter can be specified in bytes or using the IEC standard suffixes KiB (2^10 bytes),
MiB (2^20 bytes), GiB (2^30 bytes), and TiB (2^40 bytes).
XenServer Administrator's Guide Command line interface 190
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
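The IEC suffixes are powers of two, not powers of ten, so 8 GiB is 8 × 2^30 bytes. A quick sketch of the equivalence; the vm-disk-add calls shown in comments use hypothetical VM and device names.

```shell
# IEC suffix values as powers of two.
kib=$((1 << 10)); mib=$((1 << 20)); gib=$((1 << 30)); tib=$((1 << 40))
echo "8 GiB = $((8 * gib)) bytes"
# Either form could then be passed to vm-disk-add (names hypothetical):
# xe vm-disk-add disk-size=8GiB device=1 vm=linux-vm-1
# xe vm-disk-add disk-size=$((8 * gib)) device=1 vm=linux-vm-1
```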
vm-disk-list
vm-disk-list [vbd-params] [vdi-params] [<vm-selector>=<vm_selector_value>...]
Lists disks attached to the specified VMs. The vbd-params and vdi-params parameters control the fields
of the respective objects to output and should be given as a comma-separated list, or the special key all
for the complete list.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-disk-remove
vm-disk-remove device=<integer_label_of_disk> [<vm-selector>=<vm_selector_value>...]
Remove a disk from the specified VMs and destroy it.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-export
vm-export filename=<export_filename>
[metadata=<true | false>]
[<vm-selector>=<vm_selector_value>...]
Export the specified VMs (including disk images) to a file on the local machine. Specify the filename to export
the VM into using the filename parameter. By convention, the filename should have a .xva extension.
If the metadata parameter is true, then the disks are not exported, and only the VM metadata is written
to the output file. This is intended to be used when the underlying storage is transferred through other
mechanisms, and permits the VM information to be recreated (see the section called “vm-import”).
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-import
vm-import filename=<export_filename>
[metadata=<true | false>]
[preserve=<true | false>]
[sr-uuid=<destination_sr_uuid>]
Import a VM from a previously-exported file. If preserve is set to true, the MAC address of the original
VM is preserved. The sr-uuid parameter determines the destination SR into which to import the VM; if this
parameter is not specified, the default SR is used.
The filename parameter can also point to an XVA-format VM, which is the legacy export format from
XenServer 3.2 and is used by some third-party vendors to provide virtual appliances. This format uses a
directory to store the VM data, so set filename to the root directory of the XVA export and not an actual file.
Subsequent exports of the imported legacy guest will automatically be upgraded to the new filename-based
format, which stores much more data about the configuration of the VM.
Note
The older directory-based XVA format does not fully preserve all the VM attributes. In particular, imported
VMs will not have any virtual network interfaces attached by default. If networking is required, create one
using vif-create and vif-plug.
If the metadata parameter is true, then a previously exported set of metadata can be imported without its
associated disk blocks. Metadata-only import fails if any VDIs (named by SR and VDI.location) cannot be
found, unless the --force option is specified, in which case the import proceeds regardless.
If disks can be mirrored or moved out-of-band, then metadata import/export is a fast way of moving
VMs between disjoint pools (for example, as part of a disaster recovery plan).
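Such a metadata-only move between pools might look like the following sketch. The VM name and filename are hypothetical, and the xe calls are echoed through a stub rather than executed, since they require live hosts in each pool.

```shell
# Stub that prints each xe invocation instead of running it; on a real
# host, replace the body with "$@".
run() { echo "+ $*"; }

# On the source pool: export only the VM metadata.
run xe vm-export vm=db-vm metadata=true filename=db-vm.xva

# After mirroring the disks out-of-band, on the destination pool:
# recreate the VM from its metadata, forcing past any unfound VDIs.
run xe vm-import filename=db-vm.xva metadata=true --force
```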
vm-install
vm-install new-name-label=<name>
[ template-uuid=<uuid_of_desired_template> | template=<uuid_or_name_of_desired_template> ]
[ sr-uuid=<sr_uuid> | sr-name-label=<name_of_sr> ]
Install a VM from a template. Specify the template name using either the template-uuid or template
argument. Specify an SR other than the default SR using either the sr-uuid or sr-name-label argument.
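For example, installing from a template by name into a named SR might look like the following sketch. The template, VM, and SR names are hypothetical, and the xe call is echoed through a stub rather than executed.

```shell
# Stub that prints each xe invocation instead of running it; on a real
# host, replace the body with "$@".
run() { echo "+ $*"; }

# Install a new VM from a named template into a named SR.
run xe vm-install new-name-label=web-vm \
    template="Debian Etch 4.0" sr-name-label="Local storage"
```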
vm-memory-shadow-multiplier-set
vm-memory-shadow-multiplier-set [<vm-selector>=<vm_selector_value>...]
[multiplier=<float_memory_multiplier>]
This is an advanced option which modifies the amount of shadow memory assigned to a hardware-assisted
VM. In some specialized application workloads, such as Citrix XenApp, extra shadow memory is required
to achieve full performance.
This memory is considered to be an overhead. It is separated from the normal memory calculations for
accounting memory to a VM. When this command is invoked, the amount of free XenServer host memory
will decrease according to the multiplier, and the HVM_shadow_multiplier field will be updated with the
actual value which Xen has assigned to the VM. If there is not enough XenServer host memory free, then
an error will be returned.
The VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors for more information).
vm-migrate
vm-migrate [ host-uuid=<destination XenServer host UUID> | host=<name or UUID of destination
XenServer host> ] [<vm-selector>=<vm_selector_value>...] [live=<true | false>]
Migrate the specified VMs between physical hosts. The host parameter can be either the name or the
UUID of the XenServer host.
By default, the VM will be suspended, migrated, and resumed on the other host. The live parameter
activates XenMotion and keeps the VM running while performing the migration, thus minimizing VM
downtime to less than a second. In some circumstances, such as extremely memory-heavy workloads in
the VM, XenMotion automatically falls back to the default mode and suspends the VM for a brief period
of time before completing the memory transfer.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
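A live migration might then be invoked as in the sketch below. The VM and host names are hypothetical, and the xe call is echoed through a stub rather than executed, since it requires a live pool.

```shell
# Stub that prints each xe invocation instead of running it; on a real
# host, replace the body with "$@".
run() { echo "+ $*"; }

# Live-migrate a VM with XenMotion, keeping it running during the move.
run xe vm-migrate vm=web-vm host=xenserver-host-2 live=true
```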
vm-reboot
vm-reboot [<vm-selector>=<vm_selector_value>...] [force=<true>]
Reboot the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
Use the force argument to cause an ungraceful shutdown, akin to pulling the plug on a physical server.
vm-reset-powerstate
vm-reset-powerstate [<vm-selector>=<vm_selector_value>...] {force=true}
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
This is an advanced command, only to be used when a member host in a pool goes down. You can use
this command to force the pool master to reset the power-state of the VMs to halted. Essentially, this
forces the lock on the VM and its disks so that the VM can subsequently be started on another pool host.
This call requires the force flag to be specified, and fails if it is not given on the command line.
vm-resume
vm-resume [<vm-selector>=<vm_selector_value>...] [force=<true | false>] [on=<XenServer host UUID>]
Resume the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
If the VM is on a shared SR in a pool of hosts, use the on argument to specify the pool member on which
to resume it. By default the system determines an appropriate host, which might be any of the members
of the pool.
vm-shutdown
vm-shutdown [<vm-selector>=<vm_selector_value>...] [force=<true | false>]
Shut down the specified VM.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
Use the force argument to cause an ungraceful shutdown, similar to pulling the plug on a physical server.
vm-start
vm-start [<vm-selector>=<vm_selector_value>...] [force=<true | false>] [on=<XenServer host UUID>] [--multiple]
Start the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
If the VMs are on a shared SR in a pool of hosts, use the on argument to specify the pool member on
which to start the VMs. By default the system determines an appropriate host, which might be any of the
members of the pool.
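Combining the selectors with the on and --multiple options gives sketches like the following. The VM name is hypothetical, the host UUID is reused from an example elsewhere in this chapter, and the xe calls are echoed through a stub rather than executed.

```shell
# Stub that prints each xe invocation instead of running it; on a real
# host, replace the body with "$@".
run() { echo "+ $*"; }

# Start one VM on a chosen pool member.
run xe vm-start vm=web-vm on=e26685cd-1789-4f90-8e47-a4fd0509b4a4

# Start every halted VM; --multiple is required when the selector
# matches more than one VM.
run xe vm-start power-state=halted --multiple
```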
vm-suspend
vm-suspend [<vm-selector>=<vm_selector_value>...]
Suspend the specified VM.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
vm-uninstall
vm-uninstall [<vm-selector>=<vm_selector_value>...] [force=<true | false>]
Uninstall a VM, destroying its disks (those VDIs that are marked RW and connected to this VM only) as well
as its metadata record. To simply destroy the VM metadata, use xe vm-destroy.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
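The two removal commands can be contrasted as in the sketch below. The UUID and VM name are hypothetical, and the xe calls are echoed through a stub rather than executed.

```shell
# Stub that prints each xe invocation instead of running it; on a real
# host, replace the body with "$@".
run() { echo "+ $*"; }

# Remove only the VM record, leaving its storage intact...
run xe vm-destroy uuid=51e411f1-62f4-e462-f1ed-97c626703cae

# ...or remove the VM and destroy its read-write disks as well.
run xe vm-uninstall vm=old-vm force=true
```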
vm-vcpu-hotplug
vm-vcpu-hotplug new-vcpus=<new_vcpu_count> [<vm-selector>=<vm_selector_value>...]
Dynamically adjust the number of VCPUs available to a running paravirtual Linux VM, within the bound set
by the parameter VCPUs-max. Windows VMs always run with the number of VCPUs set to VCPUs-max
and must be rebooted to change this value.
The paravirtualized Linux VM or VMs on which this operation should be performed are selected using the
standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM
parameters listed at the beginning of this section.
vm-vif-list
vm-vif-list [<vm-selector>=<vm_selector_value>...]
Lists the VIFs from the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection
mechanism (see VM selectors). Note that the selectors operate on the VM records when filtering, and not
on the VIF values. Optional arguments can be any number of the VM parameters listed at the beginning
of this section.
pool-initialize-wlb
pool-initialize-wlb wlb_url=<wlb_server_address> \
wlb_username=<wlb_server_username> \
wlb_password=<wlb_server_password> \
xenserver_username=<pool_master_username> \
xenserver_password=<pool_master_password>
Initializes workload balancing for the current pool with the target Workload Balancing server.
pool-param-set other-config
Use the pool-param-set other-config command to specify the timeout when communicating with the WLB
server. All requests are serialized, and the timeout covers the time from a request being queued to its
response being completed. In other words, slow calls cause subsequent ones to be slow. Defaults to 30
seconds if unspecified or unparseable.
xe pool-param-set other-config:wlb_timeout=<0.01> \
uuid=<315688af-5741-cc4d-9046-3b9cea716f69>
host-retrieve-wlb-evacuate-recommendations
host-retrieve-wlb-evacuate-recommendations uuid=<host_uuid>
Returns the evacuation recommendations for a host, and a reference to the UUID of the recommendations
object.
vm-retrieve-wlb-recommendations
Returns the workload balancing recommendations for the selected VM. The simplest way to select the
VM on which the operation is to be performed is by supplying the argument vm=<name_or_uuid>. VMs
can also be specified by filtering the full list of VMs on the values of fields. For example, specifying
power-state=halted selects all VMs whose power-state is halted. Where multiple VMs match, the option
--multiple must be specified to perform the operation. The full list of fields that can be matched can be
obtained by the command xe vm-list params=all. If no parameters to select VMs are given, the operation
is performed on all VMs.
pool-deconfigure-wlb
Permanently deletes all workload balancing configuration.
pool-retrieve-wlb-configuration
Prints all workload balancing configuration to standard out.
pool-retrieve-wlb-recommendations
Prints all workload balancing recommendations to standard out.
pool-retrieve-wlb-report
Gets a WLB report of the specified type and saves it to the specified file. The available reports are:
• pool_health
• host_health_history
• optimization_performance_history
• pool_health_history
• vm_movement_history
• vm_performance_history
Example usage for each report type is shown below. The utcoffset parameter specifies the number of
hours ahead of or behind UTC for your time zone. The start and end parameters specify the number of
hours to report about. For example, specifying start=-3 and end=0 causes WLB to report on the last
3 hours' activity.
xe pool-retrieve-wlb-report report=pool_health \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</pool_health.txt>
xe pool-retrieve-wlb-report report=host_health_history \
hostid=<e26685cd-1789-4f90-8e47-a4fd0509b4a4> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</host_health_history.txt>
xe pool-retrieve-wlb-report report=optimization_performance_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</optimization_performance_history.txt>
xe pool-retrieve-wlb-report report=pool_health_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</pool_health_history.txt>
xe pool-retrieve-wlb-report report=vm_movement_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-5> \
end=<0> \
filename=</vm_movement_history.txt>
xe pool-retrieve-wlb-report report=vm_performance_history \
hostid=<e26685cd-1789-4f90-8e47-a4fd0509b4a4> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</vm_performance_history.txt>
Chapter 9. Troubleshooting
If you experience odd behavior, application crashes, or other issues with a XenServer host, this chapter
is meant to help you solve the problem where possible. Failing that, it describes where the application
logs are located, along with other information that can help your Citrix Solution Provider and Citrix track
and resolve the issue.
Important
We recommend that you follow the troubleshooting information in this chapter solely under the guidance
of your Citrix Solution Provider or Citrix Support.
Citrix provides two forms of support: you can receive free self-help support via the Support site, or you may
purchase our Support Services and directly submit requests by filing an online Support Case. Our free web-
based resources include product documentation, a Knowledge Base, and discussion forums.
Additionally, the XenServer host has several CLI commands to make it simple to collate the output of logs
and various other bits of system information using the utility xen-bugtool. Use the xe command
host-bugreport-upload to collect the appropriate log files and system information and upload them to the Citrix
Support ftp site. Please refer to the section called “host-bugreport-upload” for a full description of this com-
mand and its optional parameters. If you are requested to send a crashdump to Citrix Support, use the xe
command host-crashdump-upload. Please refer to the section called “host-crashdump-upload” for a full
description of this command and its optional parameters.
Caution
It is possible that sensitive information might be written into the XenServer host logs.
By default, the server logs report only errors and warnings. If you need to see more detailed information,
you can enable more verbose logging. To do so, use the host-loglevel-set command:
host-loglevel-set log-level=<level>
where <level> can be 0, 1, 2, 3, or 4; 0 is the most verbose and 4 is the least verbose.
Log files greater than 5 MB are rotated, keeping 4 revisions. The logrotate command is run hourly.
XenServer Administrator's Guide Troubleshooting 198
To write the XenServer host log messages to a remote server:
1. Set the syslog_destination parameter to the hostname or IP address of the remote server where you
   want the logs to be written:
   xe host-param-set uuid=<xenserver_host_uuid> logging:syslog_destination=<hostname>
2. Issue the command
   xe host-syslog-reconfigure uuid=<xenserver_host_uuid>
   to enforce the change. (You can also execute this command remotely by specifying the host parameter.)
XenCenter logs
XenCenter also has a client-side log. This file includes a complete description of all operations and errors
that occur when using XenCenter. It also contains informational logging of events that provide you with an
audit trail of various actions that have occurred. The XenCenter log file is stored in your profile folder. If
XenCenter is installed on Windows XP, the path is
%userprofile%\Application Data\Citrix\XenCenter\logs\XenCenter.log
If XenCenter is installed on Windows Vista, the path is
%userprofile%\AppData\Roaming\Citrix\XenCenter\logs\XenCenter.log
To quickly locate the XenCenter log files, for example, when you want to open or email the log file, click on
View Application Log Files in the XenCenter Help menu.
If you have trouble connecting to the XenServer host with XenCenter, check the following:
• Is your XenCenter an older version than the XenServer host you are attempting to connect to?
The XenCenter application is backward-compatible and can communicate properly with older XenServer
hosts, but an older XenCenter cannot communicate properly with newer XenServer hosts.
To correct this issue, install a XenCenter version that is the same, or newer, than the XenServer host
version.
• Is your license current?
You can see the expiration date for your License Key in the XenServer host General tab under the Li-
censes section in XenCenter.
Also, if you upgraded your software from version 3.2.0 to the current version, you should have received
and applied a new License file.
For details on licensing a host, see the chapter "XenServer Licensing" in the XenServer Installation Guide.
• The XenServer host talks to XenCenter using HTTPS over port 443 (a two-way connection for commands
and responses using the XenAPI), and port 5900 for graphical VNC connections with paravirtual Linux VMs.
If you have a firewall enabled between the XenServer host and the machine running the client software,
make sure that it allows traffic on these ports.
Index
A
AMD-V (AMD hardware virtualization), 2
C
CD commands, xe CLI, 132
CLI (see command line interface)
Command line interface (CLI)
    basic xe syntax, 126
    Bonding commands, 131
    CD commands, 132
    command types, 128
    console commands, 133
    event commands, 134
    host (XenServer host) commands, 135
    log commands, 145
    low-level list commands, 130
    low-level parameter commands, 130
    message commands, 146
    network commands, 146
    overview,
    parameter types, 129
    patch commands, 148
    PBD commands, 149
    PIF commands, 150
    Resource pool commands, 154
    shorthand xe syntax, 127
    special characters and syntax, 127
    Storage Manager commands, 157
    Storage repository (SR) commands, 158
    task commands, 161
    Template commands, 162
    update commands, 169
    user commands, 169
    VBD commands, 169
    VDI commands, 172
    VIF commands, 176
    VLAN commands, 179
    VM commands, 179
    xe command reference,
Console commands, xe CLI, 133
Constraints on XenServer hosts joining resource pool, 2
Creating a resource pool, 3
E
Event commands, xe CLI, 134
F
Fibre Channel storage area network (SAN), 39
Filer, NetApp, 28
FlexVol, NetApp, 28
H
Hardware virtualization
    AMD-V, 2
    Intel VT, 2
HBA (see Host bus adapter)
Host (XenServer host) commands, xe CLI, 135
Host bus adapter, 39
I
Intel VT (Intel hardware virtualization), 2
iSCSI, 33
L
Log commands, xe CLI, 145
Logical Volume Management (LVM), 25, 26
Logs, XenServer host, 197
M
Machine failures in a resource pool, 118
Message commands, xe CLI, 146
N
NAS (see Network attached storage (NFS))
NetApp Filer, 28
Network attached storage (NFS), 38
Network bonding commands, xe CLI, 131
Network commands, xe CLI, 146
Networking VMs,
Networking XenServer hosts
    Initial configuration after installation, 53
P
Patch commands, xe CLI, 148
PBD commands, xe CLI, 149
PIF commands, xe CLI, 150
Pool commands, xe CLI, 154
Q
QoS settings
    virtual disk, 48
R
Removing XenServer host from a resource pool, 5
Requirements, for creating resource pools, 2
Resource pool,
    constraints on XenServer hosts joining, 2
    coping with machine failures, 118
    creating, 3
    master, 2, 118, 118
XenServer Administrator's Guide Index 201
X
xe command reference,
    basic xe syntax, 126
    Bonding commands, 131
    CD commands, 132