Virtual I/O Server: Power Systems
Note
Before using this information and the product it supports, read the information in Notices on
page 195.
This edition applies to IBM Virtual I/O Server version 2.1.2.0 and to all subsequent releases and modifications until
otherwise indicated in new editions.
Copyright IBM Corporation 2007, 2009.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
What's new in Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
Virtual I/O Server overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
Operating system support for VIOS client logical partitions. . . . . . . . . . . . . . . . . . .3
Components of the Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . .3
Virtual fibre channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
Virtual fibre channel for HMC-managed systems . . . . . . . . . . . . . . . . . . . . .7
Virtual fibre channel on IVM-managed systems . . . . . . . . . . . . . . . . . . . . . .9
Virtual SCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Virtual I/O Server storage subsystem overview . . . . . . . . . . . . . . . . . . . . . 12
Physical storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Physical volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Logical volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Virtual media repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Optical devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Tape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Virtual storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Optical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Tape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Device compatibility in a Virtual I/O Server environment . . . . . . . . . . . . . . . . . 20
Mapping devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Virtual networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Host Ethernet Adapter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Internet Protocol version 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Link Aggregation or EtherChannel devices . . . . . . . . . . . . . . . . . . . . . . . 24
Virtual Ethernet adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Virtual local area networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Shared Ethernet Adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Shared memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Paging VIOS partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Virtual I/O Server management . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Virtual I/O Server command-line interface . . . . . . . . . . . . . . . . . . . . . . . 35
IBM Tivoli software and the Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . 37
IBM Systems Director software . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Configuration scenarios for the Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . 40
Scenario: Configuring a Virtual I/O Server without VLAN tagging . . . . . . . . . . . . . . . . 40
Scenario: Configuring a Virtual I/O Server using VLAN tagging . . . . . . . . . . . . . . . . 43
Scenario: Configuring Shared Ethernet Adapter failover . . . . . . . . . . . . . . . . . . . 45
Scenario: Configuring Network Interface Backup in AIX client logical partitions without VLAN tagging . . . 48
Scenario: Configuring Multi-Path I/O for AIX client logical partitions . . . . . . . . . . . . . . . 50
Planning for the Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Planning for Virtual I/O Server and client logical partitions using system plans . . . . . . . . . . . 53
Installing operating environments from a system plan by using the HMC . . . . . . . . . . . . 54
Creating a system plan by using the HMC . . . . . . . . . . . . . . . . . . . . . . . 56
System plan validation for the HMC . . . . . . . . . . . . . . . . . . . . . . . . . 57
Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Limitations and restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Capacity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Planning for virtual SCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Virtual SCSI latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Virtual SCSI bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Virtual SCSI sizing considerations . . . . . . . . . . . . . . . . . . . . . . . . . 61
Planning for Shared Ethernet Adapters . . . . . . . . . . . . . . . . . . . . . . . . 63
Network requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Troubleshooting the Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Troubleshooting the Virtual I/O Server logical partition . . . . . . . . . . . . . . . . . . . 155
Troubleshooting virtual SCSI problems . . . . . . . . . . . . . . . . . . . . . . . . 155
Correcting a failed Shared Ethernet Adapter configuration . . . . . . . . . . . . . . . . . 156
Debugging problems with Ethernet connectivity. . . . . . . . . . . . . . . . . . . . . 157
Enabling noninteractive shells on Virtual I/O Server 1.3 or later . . . . . . . . . . . . . . . 158
Recovering when disks cannot be located . . . . . . . . . . . . . . . . . . . . . . . . 158
Troubleshooting AIX client logical partitions . . . . . . . . . . . . . . . . . . . . . . . 160
Performance data collection for analysis by the IBM Electronic Service Agent . . . . . . . . . . . . 161
Reference information for the Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . 162
Virtual I/O Server and Integrated Virtualization Manager command descriptions . . . . . . . . . . 162
Configuration attributes for IBM Tivoli agents and clients . . . . . . . . . . . . . . . . . . 162
GARP VLAN Registration Protocol statistics . . . . . . . . . . . . . . . . . . . . . . . 165
Network attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Shared Ethernet Adapter failover statistics. . . . . . . . . . . . . . . . . . . . . . . . 180
Shared Ethernet Adapter statistics . . . . . . . . . . . . . . . . . . . . . . . . . . 187
User types for the Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . 193
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Programming interface information . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Terms and conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
The PowerVM Editions feature includes the installation media for the Virtual I/O Server software. The
Virtual I/O Server facilitates the sharing of physical I/O resources between client logical partitions within
the server.
When you install the Virtual I/O Server in a logical partition on a system that is managed by the HMC,
you can use the HMC and the Virtual I/O Server command-line interface to manage the Virtual I/O
Server and client logical partitions.
When you install the Virtual I/O Server on a managed system and no HMC is attached to the managed
system, the Virtual I/O Server logical partition becomes the management partition. The management
partition provides the Integrated Virtualization Manager Web-based system management interface and a
command-line interface that you can use to manage the system.
Related information:
PowerVM Information Roadmap
Integrated Virtualization Manager
Virtual I/O Server and Integrated Virtualization Manager commands
October 2009
You can back up and restore configuration information about user-defined virtual devices on the Virtual
I/O Server by using the viosbr command. The following information is new or updated:
v Backing up the Virtual I/O Server on page 127
v Backing up user-defined virtual devices by using the viosbr command on page 133
v Scheduling backups of the Virtual I/O Server and user-defined virtual devices on page 134
v Scheduling backups of user-defined virtual devices by using the viosbr command on page 135
v Restoring the Virtual I/O Server on page 137
v Restoring user-defined virtual devices by using the viosbr command on page 142
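For example, a minimal sketch of the viosbr usage described above, run from the Virtual I/O Server
command line (the file name cfgbackup01 is a placeholder, and the .tar.gz extension that viosbr appends
is an assumption):
viosbr -backup -file cfgbackup01
viosbr -view -file cfgbackup01.tar.gz
viosbr -restore -file cfgbackup01.tar.gz
The first command saves the user-defined virtual device configuration, the second displays the contents
of the backup file, and the third restores the configuration from it.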
You can use a number of new and updated Virtual I/O Server commands to collect performance data for
use by the IBM Electronic Service Agent to diagnose and solve performance issues. For more information,
see the Performance data collection for analysis by the IBM Electronic Service Agent topic.
May 2009
November 2008
v With N_Port ID Virtualization (NPIV) and virtual fibre channel adapters, you can configure the
managed system so that multiple logical partitions can access independent physical storage through
the same physical fibre channel adapter. The following information is new for virtual fibre channel
adapters:
Virtual fibre channel
Configuring a virtual fibre channel adapter
Assigning the virtual fibre channel adapter to a physical fibre channel adapter on page 109
Managing virtual Fibre Channel on the Integrated Virtualization Manager
Redundancy configuration using virtual fibre channel adapters
The Virtual I/O Server is software that is located in a logical partition. This software facilitates the
sharing of physical I/O resources between client logical partitions within the server. The Virtual I/O
Server provides virtual SCSI target, virtual fibre channel, Shared Ethernet Adapter, and PowerVM Active
Memory Sharing capability to client logical partitions within the system. As a result, client logical
partitions can share SCSI devices, fibre channel adapters, Ethernet adapters, and expand the amount of
memory available to logical partitions using paging space devices. The Virtual I/O Server software
requires that the logical partition be dedicated solely for its use.
The Virtual I/O Server is part of the PowerVM Editions hardware feature.
Virtual SCSI
Physical adapters with attached disks or optical devices on the Virtual I/O Server logical partition can be
shared by one or more client logical partitions. The Virtual I/O Server offers a local storage subsystem
that provides standard SCSI-compliant logical unit numbers (LUNs). The Virtual I/O Server can export a
pool of heterogeneous physical storage as a homogeneous pool of block storage in the form of SCSI disks.
Unlike typical storage subsystems that are physically located in the storage area network (SAN), the SCSI
devices exported by the Virtual I/O Server are limited to the domain within the server. Although the
SCSI LUNs are SCSI compliant, they might not meet the needs of all applications, particularly those that
exist in a distributed environment.
The Integrated Virtualization Manager provides a browser-based interface and a command-line interface
that you can use to manage some servers that use the Virtual I/O Server. On the managed system, you
can create logical partitions, manage the virtual storage and virtual Ethernet, and view service
information related to the server. The Integrated Virtualization Manager is packaged with the Virtual I/O
Server, but it is activated and usable only on certain platforms and where no Hardware Management
Console (HMC) is present.
To access physical storage in a typical storage area network (SAN) that uses fibre channel, the physical
storage is mapped to logical units (LUNs) and the LUNs are mapped to the ports of physical fibre
channel adapters. Each physical port on each physical fibre channel adapter is identified using one
worldwide port name (WWPN).
NPIV is a standard technology for fibre channel networks that enables you to connect multiple logical
partitions to one physical port of a physical fibre channel adapter. Each logical partition is identified by a
unique WWPN, which means that you can connect each logical partition to independent physical storage
on a SAN.
To enable NPIV on the managed system, you must create a Virtual I/O Server logical partition (version
2.1, or later) that provides virtual resources to client logical partitions. You assign the physical fibre
channel adapters (that support NPIV) to the Virtual I/O Server logical partition. Then, you connect
virtual fibre channel adapters on the client logical partitions to virtual fibre channel adapters on the
Virtual I/O Server logical partition. A virtual fibre channel adapter is a virtual adapter that provides client
logical partitions with a fibre channel connection to a storage area network through the Virtual I/O
Server logical partition. The Virtual I/O Server logical partition provides the connection between the
virtual fibre channel adapters on the Virtual I/O Server logical partition and the physical fibre channel
adapters on the managed system.
Using their unique WWPNs and the virtual fibre channel connections to the physical fibre channel
adapter, the operating systems that run in the client logical partitions discover, instantiate, and manage
their physical storage located on the SAN. In the previous figure, Client logical partition 1 accesses
Physical storage 1, Client logical partition 2 accesses Physical storage 2, and Client logical partition 3
accesses Physical storage 3. For IBM i client partitions, the LUNs of the physical storage on the SAN must
be 520-byte LUNs. The LUNs cannot be 512-byte LUNs. The Virtual I/O Server cannot access and does
not emulate the physical storage to which the client logical partitions have access. The Virtual I/O Server
provides the client logical partitions with a connection to the physical fibre channel adapters on the
managed system.
There is always a one-to-one relationship between virtual fibre channel adapters on the client logical
partitions and the virtual fibre channel adapters on the Virtual I/O Server logical partition. That is, each
virtual fibre channel adapter on a client logical partition must connect to only one virtual fibre channel
adapter on the Virtual I/O Server logical partition, and each virtual fibre channel on the Virtual I/O
Server logical partition must connect to only one virtual fibre channel adapter on a client logical partition.
Using SAN tools, you can zone and mask LUNs that include WWPNs that are assigned to virtual fibre
channel adapters on client logical partitions. The SAN uses WWPNs that are assigned to virtual fibre
channel adapters on client logical partitions the same way it uses WWPNs that are assigned to physical
ports.
You can configure virtual fibre channel adapters on client logical partitions that run the following
operating systems:
v AIX version 6.1 Technology Level 2, or later
v AIX 5.3 Technology Level 9
v IBM i version 6.1.1, or later
v SUSE Linux Enterprise Server 11, or later
Related concepts:
Redundancy configuration using virtual fibre channel adapters on page 75
Redundancy configurations help protect your network from physical adapter failures as well as Virtual
I/O Server failures.
To enable N_Port ID Virtualization (NPIV) on the managed system, you create the required virtual fibre
channel adapters and connections as follows:
v You use the HMC to create virtual fibre channel adapters on the Virtual I/O Server logical partition
and associate them with virtual fibre channel adapters on the client logical partitions.
The HMC generates WWPNs based on the range of names available for use with the prefix in the vital
product data on the managed system. This 6-digit prefix comes with the purchase of the managed system
and includes 32 000 pairs of WWPNs. When you remove a virtual fibre channel adapter from a client
logical partition, the hypervisor deletes the WWPNs that are assigned to the virtual fibre channel adapter
on the client logical partition. The HMC does not reuse the deleted WWPNs when generating WWPNs
for virtual fibre channel adapters in the future. If you run out of WWPNs, you must obtain an activation
code that includes another prefix with another 32 000 pairs of WWPNs.
To avoid configuring the physical fibre channel adapter to be a single point of failure for the connection
between the client logical partition and its physical storage on the SAN, do not connect two virtual fibre
channel adapters from the same client logical partition to the same physical fibre channel adapter.
Instead, connect each virtual fibre channel adapter to a different physical fibre channel adapter.
You can dynamically add and remove virtual fibre channel adapters to and from the Virtual I/O Server
logical partition and to and from client logical partitions.
Table 3. Dynamic logical partitioning tasks and results for virtual fibre channel adapters

Add a virtual fibre channel adapter to a client logical partition:
The HMC generates a pair of unique WWPNs for the client virtual fibre channel adapter.

Add a virtual fibre channel adapter to a Virtual I/O Server logical partition:
You need to connect the virtual fibre channel adapter to a physical port on a physical fibre channel
adapter.

Remove a virtual fibre channel adapter from a client logical partition:
v The hypervisor deletes the WWPNs and does not reuse them.
v You must either remove the associated virtual fibre channel adapter from the Virtual I/O Server, or
associate it with another virtual fibre channel adapter on a client logical partition.

Remove a virtual fibre channel adapter from a Virtual I/O Server logical partition:
v The Virtual I/O Server removes the connection to the physical port on the physical fibre channel
adapter.
v You must either remove the associated virtual fibre channel adapter from the client logical partition,
or associate it with another virtual fibre channel adapter on the Virtual I/O Server logical partition.
You can also run the lshwres command on the HMC to display the remaining number of WWPNs and to
display the prefix that is currently used to generate the WWPNs.
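For example, one possible invocation from the HMC command line (the managed system name sys1 is a
placeholder, and the exact attribute names in the output can vary by HMC level):
lshwres -r virtualio --rsubtype fc --level sys -m sys1
The system-level output is expected to include the WWPN prefix and the number of WWPNs that remain
available for virtual fibre channel adapters.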
To enable N_Port ID Virtualization (NPIV) on the managed system, you create a pair of WWPNs for a
logical partition and assign the pair directly to the physical ports of the physical fibre channel adapters.
You can assign multiple logical partitions to one physical port by assigning a pair of WWPNs for each
logical partition to the same physical port. When you assign a WWPN pair to a logical partition, the IVM
automatically creates the following connections:
v The IVM creates a virtual fibre channel adapter on the management partition and associates it with the
virtual fibre channel adapter on the logical partition.
v The IVM generates a pair of unique WWPNs and creates a virtual fibre channel adapter on the client
logical partition. The IVM assigns the WWPNs to the virtual fibre channel adapter on the client logical
partition, and associates the virtual fibre channel adapter on the client logical partition with the virtual
fibre channel adapter on the management partition.
When you assign the WWPNs for a logical partition to a physical port, the IVM connects the virtual fibre
channel adapter on the management partition to the physical port on the physical fibre channel adapter.
The IVM generates WWPNs based on the range of names available for use with the prefix in the vital
product data on the managed system. This 6-digit prefix comes with the purchase of the managed system
and includes 32 768 pairs of WWPNs. When you remove the connection between a logical partition and a
physical port, the hypervisor deletes the WWPNs that are assigned to the virtual fibre channel adapter on
the logical partition. The IVM does not reuse the deleted WWPNs when generating WWPNs for virtual
fibre channel adapters in the future. If you run out of WWPNs, you must obtain an activation code that
includes another prefix with 32 768 pairs of WWPNs.
You can add WWPN pairs for a new logical partition without assigning them to a physical port. Being
able to generate WWPNs independently of a physical port assignment for a logical partition allows you
to communicate these names to the SAN administrator. This ensures that the SAN administrator can
configure the SAN connection appropriately such that the logical partition can connect successfully to the
SAN without regard for which physical port the partition uses for the connection.
You can dynamically add or remove a WWPN pair to and from a logical partition. You can also
dynamically change the physical port that is assigned to a WWPN pair.
Table 5. Dynamic logical partitioning tasks and results

Dynamically add a WWPN pair to a logical partition:
v The IVM creates a virtual fibre channel adapter on the management partition and associates it with
the virtual fibre channel adapter on the logical partition.
v The IVM generates a pair of unique WWPNs and creates a virtual fibre channel adapter on the logical
partition. The IVM assigns the WWPNs to the virtual fibre channel adapter on the logical partition,
and associates the virtual fibre channel adapter on the logical partition with the virtual fibre channel
adapter on the management partition.

Dynamically assign a WWPN pair to a physical port:
The IVM connects the virtual fibre channel adapter on the management partition to the physical port on
the physical fibre channel adapter.

Dynamically remove a WWPN pair from a logical partition:
v The IVM removes the connection between the virtual fibre channel adapter on the management
partition and the physical port on the physical fibre channel adapter.
v The IVM removes the virtual fibre channel adapter from the management partition.
v The IVM removes the virtual fibre channel adapter from the logical partition. The IVM deletes the
WWPNs and does not reuse them.

Dynamically change the physical port assignment of a WWPN pair:
The IVM changes the connection for the virtual fibre channel adapter on the management partition to the
newly assigned physical port.
Virtual SCSI
Virtual SCSI allows client logical partitions to share disk storage and tape or optical devices that are
assigned to the Virtual I/O Server logical partition.
Disk, tape, or optical devices attached to physical adapters in the Virtual I/O Server logical partition can
be shared by one or more client logical partitions. The Virtual I/O Server is a standard storage subsystem
that provides standard SCSI-compliant LUNs. The Virtual I/O Server is capable of exporting a pool of
heterogeneous physical storage as a homogeneous pool of block storage in the form of SCSI disks. The
Virtual I/O Server is a localized storage subsystem. Unlike typical storage subsystems that are physically
located in the SAN, the SCSI devices exported by the Virtual I/O Server are limited to the domain within
the server. Therefore, although the SCSI LUNs are SCSI compliant, they might not meet the needs of all
applications, particularly those that exist in a distributed environment.
Virtual SCSI is based on a client-server relationship. The Virtual I/O Server owns the physical resources
as well as the virtual SCSI server adapter, and acts as a server, or SCSI target device. The client logical
partitions have a SCSI initiator referred to as the virtual SCSI client adapter, and access the virtual SCSI
targets as standard SCSI LUNs. You configure the virtual adapters by using the HMC or Integrated
Virtualization Manager. The configuration and provisioning of virtual disk resources is performed by
using the Virtual I/O Server. Physical disks owned by the Virtual I/O Server can be either exported and
assigned to a client logical partition as a whole or can be partitioned into parts, such as logical volumes
or files. The logical volumes and files can then be assigned to different logical partitions. Therefore, using
virtual SCSI, you can share adapters as well as disk devices.
Note: In order for client logical partitions to be able to access virtual devices, the Virtual I/O Server must
be fully operational.
The Virtual I/O Server storage subsystem is a standard storage subsystem that provides standard
SCSI-compliant LUNs. The Virtual I/O Server is a localized storage subsystem. Unlike typical storage
subsystems that are physically located in the SAN, the SCSI devices exported by the Virtual I/O Server
are limited to the domain within the server.
Like typical disk storage subsystems, the Virtual I/O Server has a distinct front end and back end. The
front end is the interface to which client logical partitions attach to view standard SCSI-compliant LUNs.
Devices on the front end are called virtual SCSI devices. The back end is made up of physical storage
resources. These physical resources include physical disk storage, both SAN devices and internal storage
devices, optical devices, tape devices, logical volumes, and files.
To create a virtual device, some physical storage must be allocated and assigned to a virtual SCSI server
adapter. This process creates a virtual device instance (vtscsiX or vtoptX). The device instance can be
considered a mapping device. It is not a real device, but rather a mechanism for managing the mapping
of a portion of the physical storage to the virtual SCSI device.
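For example, a minimal sketch of creating such mapping devices from the Virtual I/O Server command
line (the device names hdisk4, cd0, and vhost0 are placeholders, not taken from this document):
mkvdev -vdev hdisk4 -vadapter vhost0 -dev vtscsi0
mkvdev -vdev cd0 -vadapter vhost0
The first command creates a vtscsiX device that maps the physical volume hdisk4 to the virtual SCSI
server adapter vhost0; the second creates a vtoptX device for the physical optical drive cd0.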
Physical storage
Learn more about physical storage, logical volumes, and the devices and configurations that are
supported by the Virtual I/O Server.
Physical volumes:
Physical volumes can be exported to client partitions as virtual SCSI disks. The Virtual I/O Server is
capable of taking a pool of heterogeneous physical disk storage attached to its back end and exporting
this as homogeneous storage in the form of SCSI disk LUNs.
The Virtual I/O Server must be able to accurately identify a physical volume each time it boots, even if
an event such as a storage area network (SAN) reconfiguration or adapter change has taken place.
Physical volume attributes, such as the name, address, and location, might change after the system
reboots due to SAN reconfiguration. However, the Virtual I/O Server must be able to recognize that this
is the same device and update the virtual device mappings. For this reason, in order to export a physical
volume as a virtual device, the physical volume must have either a unique identifier (UDID), a physical
identifier (PVID), or an IEEE volume attribute.
For instructions on how to determine whether your disks have one of these identifiers, see Identifying
exportable disks on page 104.
Logical volumes:
Understand how logical volumes can be exported to client partitions as virtual SCSI disks. A logical
volume is a portion of a physical volume.
A hierarchy of structures is used to manage disk storage. Each individual disk drive or LUN, called a
physical volume, has a name, such as /dev/hdisk0. Every physical volume in use either belongs to a
volume group or is used directly for virtual storage. All of the physical volumes in a volume group are
divided into physical partitions of the same size. The number of physical partitions in each region varies,
depending on the total capacity of the disk drive.
Within each volume group, one or more logical volumes are defined. Logical volumes are groups of
information located on physical volumes. Data on logical volumes appears to the user to be contiguous
but can be discontiguous on the physical volume. This allows logical volumes to be resized or relocated
and to have their contents replicated.
Each logical volume consists of one or more logical partitions. Each logical partition corresponds to at
least one physical partition. Although the logical partitions are numbered consecutively, the underlying
physical partitions are not necessarily consecutive or contiguous.
You can use the commands described in the following table to manage logical volumes.
Table 8. Logical volume commands and their descriptions
Logical volume command    Description
chlv Changes the characteristics of a logical volume.
cplv Copies the contents of a logical volume to a new logical volume.
extendlv Increases the size of a logical volume.
lslv Displays information about the logical volume.
mklv Creates a logical volume.
mklvcopy Creates a copy of a logical volume.
rmlv Removes logical volumes from a volume group.
rmlvcopy Removes a copy of a logical volume.
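For example, a minimal sequence that uses some of these commands (the names clients_vg and
client1_lv and the sizes are placeholders):
mklv -lv client1_lv clients_vg 20G
extendlv client1_lv 5G
lslv client1_lv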
Creating one or more distinct volume groups rather than using logical volumes that are created in the
rootvg volume group allows you to install any newer versions of the Virtual I/O Server while
maintaining client data by exporting and importing the volume groups created for virtual I/O.
Notes:
v Logical volumes used as virtual disks must be less than 1 TB (where TB equals 1 099 511 627 776 bytes)
in size.
v For best performance, avoid using logical volumes (on the Virtual I/O Server) as virtual disks that are
mirrored or striped across multiple physical volumes.
Volume groups:
A volume group is a type of storage pool that contains one or more physical volumes of varying sizes
and types. A physical volume can belong to only one volume group per system. There can be up to 4096
active volume groups on the Virtual I/O Server.
When a physical volume is assigned to a volume group, the physical blocks of storage media on it are
organized into physical partitions of a size determined by the system when you create the volume group.
For more information, see Physical partitions on page 15.
When you install the Virtual I/O Server, the root volume group, called rootvg, is automatically created. It
contains the base set of logical volumes required to start the system logical partition. The rootvg
includes paging space, the journal log, boot data, and dump storage, each in its own separate logical
volume. The rootvg has attributes that differ from user-defined volume groups. For example, the rootvg
cannot be imported or exported. When using a command or procedure on the rootvg, you must be
familiar with its unique characteristics.
Table 9. Frequently used volume group commands and their descriptions
Command Description
activatevg Activates a volume group
chvg Changes the attributes of a volume group
deactivatevg Deactivates a volume group
Small systems might require only one volume group to contain all of the physical volumes (beyond the
rootvg volume group). You can create separate volume groups to make maintenance easier because
groups other than the one being serviced can remain active. Because the rootvg must always be online, it
contains only the minimum number of physical volumes necessary for system operation. It is
recommended that the rootvg not be used for client data.
You can move data from one physical volume to other physical volumes in the same volume group by
using the migratepv command. This command allows you to free a physical volume so it can be removed
from the volume group. For example, you could move data from a physical volume that is to be replaced.
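As a sketch, assuming that hdisk2 is the physical volume to be replaced and hdisk3 belongs to the same
volume group (both names are placeholders):
migratepv hdisk2 hdisk3
After the data is moved, hdisk2 can be removed from the volume group.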
Physical partitions:
When you add a physical volume to a volume group, the physical volume is partitioned into contiguous,
equal-sized units of space called physical partitions. A physical partition is the smallest unit of storage
space allocation and is a contiguous space on a physical volume.
Logical partitions:
When you create a logical volume, you specify its size in megabytes or gigabytes. The system allocates
the number of logical partitions that are required to create a logical volume of at least the specified size.
A logical partition is one or two physical partitions, depending on whether the logical volume is defined
with mirroring enabled. If mirroring is disabled, there is only one copy of the logical volume (the
default). In this case, there is a direct mapping of one logical partition to one physical partition. Each
instance, including the first, is called a copy.
Quorums:
A quorum exists when a majority of Volume Group Descriptor Areas and Volume Group Status Areas
(VGDA/VGSA) and their disks are active. A quorum ensures data integrity of the VGDA/VGSA in the
event of a disk failure. Each physical disk in a volume group has at least one VGDA/VGSA. When a
volume group is created on a single disk, the volume group initially has two VGDA/VGSA on the disk.
If a volume group consists of two disks, one disk still has two VGDA/VGSA, but the other disk has one
VGDA/VGSA. When the volume group is made up of three or more disks, each disk is allocated just one
VGDA/VGSA.
When a quorum is lost, the volume group deactivates itself so that the disks are no longer accessible by
the logical volume manager. This prevents further disk I/O to that volume group so that data is not lost
or assumed to be written when physical problems occur. As a result of the deactivation, the user is
notified in the error log that a hardware error has occurred and service must be performed.
A volume group that has been deactivated because its quorum has been lost can be reactivated by using
the activatevg -f command.
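For example, assuming that the deactivated volume group is named clients_vg (a placeholder name):
activatevg -f clients_vg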
The virtual media repository provides a single container to store and manage file-backed virtual optical
media files. Media stored in the repository can be loaded into file-backed virtual optical devices for
exporting to client partitions.
The virtual media repository is available with Virtual I/O Server version 1.5 or later.
The virtual media repository is created and managed using the following commands.
Table 10. Virtual media repository commands and their descriptions
Command Description
chrep Changes the characteristics of the virtual media repository
chvopt Changes the characteristics of a virtual optical media
loadopt Loads file-backed virtual optical media into a file-backed virtual optical device
lsrep Displays information about the virtual media repository
lsvopt Displays information about file-backed virtual optical devices
mkrep Creates the virtual media repository
mkvdev Creates file-backed virtual optical devices
mkvopt Creates file-backed virtual optical media
rmrep Removes the virtual media repository
rmvopt Removes file-backed virtual optical media
unloadopt Unloads file-backed virtual optical media from a file-backed virtual optical device
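For example, a minimal sketch of creating the repository, adding media, and loading the media into a
file-backed virtual optical device (the storage pool rootvg, the image file name, and the adapter name
vhost0 are placeholders):
mkrep -sp rootvg -size 4G
mkvopt -name baseimage -file /home/padmin/base.iso -ro
mkvdev -fbo -vadapter vhost0
loadopt -disk baseimage -vtd vtopt0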
Storage pools:
Learn about logical volume storage pools and file storage pools.
In Virtual I/O Server version 1.5 and later, you can create the following types of storage pools:
v Logical volume storage pools (LVPOOL)
v File storage pools (FBPOOL)
Like volume groups, logical volume storage pools are collections of one or more physical volumes. The
physical volumes that comprise a logical volume storage pool can be of varying sizes and types. File
storage pools are created within a parent logical volume storage pool and contain a logical volume
containing a file system with files.
Using storage pools, you are not required to have extensive knowledge of how to manage volume groups
and logical volumes to create and assign logical storage to a client logical partition. Devices created using
a storage pool are not limited to the size of the individual physical volumes.
Storage pools are created and managed using the following commands.
Table 11. Storage pool commands and their descriptions
Command Description
chsp Changes the characteristics of a storage pool
chbdsp Changes the characteristics of a backing device within a storage pool
lssp Displays information about a storage pool
mkbdsp Assigns storage from a storage pool to be a backing device for a virtual SCSI adapter
mksp Creates a storage pool
rmbdsp Disassociates a backing device from its virtual SCSI adapter and removes it from the
system
rmsp Removes a file storage pool
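For example, a minimal sketch of creating a logical volume storage pool and assigning storage from it as
a backing device for a client (the names hdisk2, hdisk3, clientpool, client1_disk, and vhost0 are
placeholders):
mksp -f clientpool hdisk2 hdisk3
mkbdsp -sp clientpool 20G -bd client1_disk -vadapter vhost0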
Each Virtual I/O Server logical partition has a single default storage pool that can be modified only by
the prime administrator. If the default storage pool is not modified by the prime administrator, rootvg,
which is a logical volume pool, is used as the default storage pool.
Do not create client storage in rootvg. Creating one or more distinct logical volume storage pools rather
than using the rootvg volume group allows you to install any newer versions of the Virtual I/O Server
while maintaining client data by exporting and importing the volume groups created for virtual I/O.
Unless explicitly specified otherwise, the storage pool commands operate on the default storage pool.
This can be useful on systems that contain most or all of their backing devices in a single storage
pool.
Note: Storage pools cannot be used when assigning whole physical volumes as backing devices.
Optical devices:
Optical devices can be exported by the Virtual I/O Server. This topic gives information about what types
of optical devices are supported.
The Virtual I/O Server supports exporting optical SCSI devices. These are referred to as virtual SCSI
optical devices. Virtual optical devices can be backed by DVD drives or files. Depending on the backing
device, the Virtual I/O Server will export a virtual optical device with one of the following profiles:
v DVD-ROM
v DVD-RAM
Virtual optical devices that are backed by physical optical devices can be assigned to only one client
logical partition at a time. In order to use the device on a different client logical partition, it must first be
removed from its current logical partition and reassigned to the logical partition that will use the device.
Tape:
Tape devices can be exported by the Virtual I/O Server. This topic gives information about what types of
tape devices are supported.
Virtual SCSI tape devices are assigned to only one client logical partition at any given time. To use the
device on a different client logical partition, it must first be removed from its current logical partition and
reassigned to the logical partition that will use the device.
Restriction:
v The physical tape device must be a SAS attached tape device.
v The Virtual I/O Server does not support media movement functions, even if the backing device
supports them.
v It is recommended that you assign the tape device to its own Virtual I/O Server adapter because tape
devices often send large amounts of data, which might affect the performance of any other device on
the adapter.
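For example, a minimal sketch of exporting a physical SAS tape drive to a client logical partition (the
device names rmt0 and vhost2 are placeholders):
mkvdev -vdev rmt0 -vadapter vhost2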
Virtual storage
Disks, tapes, and optical devices are supported as virtual SCSI devices. This topic describes how those
devices function in a virtualized environment and provides information on what devices are supported.
The Virtual I/O Server can virtualize, or export, disks, tapes, and optical devices, such as CD-ROM
drives and DVD drives, as virtual devices. For a list of supported disks and optical devices, see the
datasheet available on the Virtual I/O Server Support for UNIX servers and Midrange servers Web site.
For information about configuring virtual SCSI devices, see Creating the virtual target device on the
Virtual I/O Server on page 95.
Disk:
Disk devices can be exported by the Virtual I/O Server. This topic gives information about what types of
disks and configurations are supported.
The Virtual I/O Server supports exporting disk SCSI devices. These are referred to as virtual SCSI disks.
All virtual SCSI disks must be backed by physical storage. The following types of physical storage can be
used to back virtual disks:
v Virtual SCSI disk backed by a physical disk
v Virtual SCSI disk backed by a logical volume
v Virtual SCSI disk backed by a file
Regardless of whether the virtual SCSI disk is backed by a physical disk, logical volume, or a file, all
standard SCSI rules apply to the device. The virtual SCSI device will behave as a standard
SCSI-compliant disk device, and it can serve as a boot device or a Network Installation Management
(NIM) target, for example.
The virtual SCSI (VSCSI) Client Adapter Path Timeout feature allows the client adapter to detect whether
a Virtual I/O Server is not responding to I/O requests. Use this feature only in configurations in which
devices are available to a client logical partition from multiple Virtual I/O Servers. These configurations
could be either configurations where Multipath I/O (MPIO) is being used or where a volume group is
being mirrored by devices on multiple Virtual I/O Servers.
If no I/O requests issued to the VSCSI server adapter have been serviced within the number of seconds
specified by the VSCSI path timeout value, one more attempt is made to contact the VSCSI server
adapter, waiting up to 60 seconds for a response.
A configurable VSCSI client adapter ODM attribute, vscsi_path_to, is provided. This attribute is used both
to indicate whether the feature is enabled and to store the value of the path timeout if the feature is enabled.
The system administrator sets the ODM attribute to 0 to disable the feature, or to the time, in seconds, to
wait before checking if the path to the server adapter has failed. If the feature is enabled, a minimum
setting of 30 seconds is required. If a setting between 0 and 30 seconds is entered, the value will be
changed to 30 seconds upon the next adapter reconfiguration or reboot.
This feature is disabled by default; thus, the default value of vscsi_path_to is 0. Exercise careful
consideration when setting this value, keeping in mind that when the VSCSI server adapter is servicing
the I/O request, the storage device that the request is being sent to may be either local to the Virtual I/O
Server or on a SAN.
The vscsi_path_to client adapter attribute can be set by using the SMIT utility or by using the chdev -P
command. The attribute setting can also be viewed by using SMIT or the lsattr command. The setting
will not take effect until the adapter is reconfigured or the machine is rebooted.
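For example, a minimal sketch on an AIX client logical partition that enables a 30-second path timeout
on the virtual SCSI client adapter vscsi0 (a placeholder name) and then displays the setting:
chdev -l vscsi0 -a vscsi_path_to=30 -P
lsattr -El vscsi0 -a vscsi_path_to
The -P flag defers the change until the next adapter reconfiguration or reboot, as described above.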
Optical:
Optical devices can be exported by the Virtual I/O Server. This topic gives information about what types
of optical devices are supported.
The Virtual I/O Server supports exporting physical optical devices to client logical partitions. These are
referred to as virtual SCSI optical devices. Virtual SCSI optical devices can be backed by DVD drives or
files. Depending on the backing device, the Virtual I/O Server will export a virtual optical device with
one of the following profiles:
v DVD-ROM
v DVD-RAM
For example, file-backed virtual SCSI optical devices are exported as DVD-RAM devices. File-backed
virtual SCSI optical devices can be backed by read-write or read-only files. Depending on the file
permissions, the device can appear to contain a DVD-ROM or DVD-RAM disk. Read-write media files
(DVD-RAM) cannot be loaded into more than one file-backed virtual SCSI optical device simultaneously.
Read-only media files (DVD-ROM) can be loaded into multiple file-backed virtual SCSI optical devices
simultaneously.
Virtual SCSI optical devices that are backed by physical optical devices can be assigned to only one client
logical partition at any given time. To use the device on a different client logical partition, it must first be
removed from its current logical partition and reassigned to the logical partition that will use the device.
Virtual SCSI optical devices will always appear as SCSI devices on the client logical partitions regardless
of whether the device type exported from the Virtual I/O Server is a SCSI, IDE, USB device, or a file.
Tape devices can be exported by the Virtual I/O Server. This topic gives information about what types of
tape devices are supported.
The Virtual I/O Server supports exporting physical tape devices to client logical partitions. These are
referred to as virtual SCSI tape devices. Virtual SCSI tape devices are backed by physical tape devices.
Virtual SCSI tape devices are assigned to only one client logical partition at any given time. To use the
device on a different client logical partition, it must first be removed from its current logical partition and
reassigned to the logical partition that will use the device.
Restriction:
v The physical tape device must be a SAS attached tape device.
v The Virtual I/O Server does not support media movement functions, even if the backing device
supports them.
v It is recommended that you assign the tape device to its own Virtual I/O Server adapter because tape
devices often send large amounts of data, which might affect the performance of any other device on
the adapter.
Learn more about virtual-to-physical device compatibility in a Virtual I/O Server environment.
The virtual-to-physical device (p2v) compatibility described in this topic refers only to the data on the
device, not necessarily to the capabilities of the device. A device is p2v compatible when the data
retrieved from that device is identical regardless of whether it is accessed directly through a physical
attachment or virtually (for example, through the Virtual I/O Server). That is, every logical block (for
example, LBA 0 through LBA n-1) returns identical data for both physical and virtual devices. Device
capacity must also be equal in order to claim p2v compliance. You can use the Virtual I/O Server chkdev
command to determine if a device is p2v compatible.
Virtual disk devices exported by the Virtual I/O Server are referred to as virtual SCSI disks. A virtual SCSI
disk device may be backed by an entire physical volume, a logical volume, a multi-path device, or a file.
Data replication (such as copy services) and device movement between physical and virtual environments
are common operations in today's datacenter. These operations, involving devices in a virtualized
environment, often have a dependency on p2v compliance.
Copy Services refer to various solutions that provide data replication function including data migration,
flashcopy, point-in-time copy, and remote mirror and copy solutions. These capabilities are commonly
used for disaster recovery, cloning, backup/restore, and more.
Device movement between physical and virtual environments refers to the ability to move a disk device
between physical (for example, a direct-attached SAN) and virtual I/O (for example, a Virtual I/O
Server-attached SAN) environments and use the disk without having to back up or restore the data. This
capability is very useful for server consolidation.
The operations above may work if the device is p2v compatible. However, not all device combinations
and data replication solutions have been tested by IBM. See the documentation from the Copy Services
vendor for support claims for devices managed by the Virtual I/O Server.
Devices managed by the following multipathing solutions within the Virtual I/O Server are expected to
be UDID devices.
v All multipath I/O (MPIO) versions, including Subsystem Device Driver Path Control Module
(SDDPCM), EMC PCM, and Hitachi Dynamic Link Manager (HDLM) PCM
v EMC PowerPath 4.4.2.2 or later
v IBM Subsystem Device Driver (SDD) 1.6.2.3 or later
v Hitachi HDLM 5.6.1 or later
Virtual SCSI devices created with earlier versions of PowerPath, HDLM, and SDD are not managed by
UDID format and are not expected to be p2v compliant. The operations mentioned above (for example,
data replication or movement between Virtual I/O Server and non-Virtual I/O Server environments) are
not likely to work in these cases.
Related information:
chkdev command
Determine whether a physical volume is or can be managed by a unit device identifier (UDID) or IEEE.
You can use the Virtual I/O Server chkdev command to display this data.
In order to determine whether a physical volume is or can be managed by the UDID format, the
following must be verified:
v If it is an existing Virtual I/O Server LUN, determine if its format is UDID.
v If it is a LUN to be moved to Virtual I/O Server, first verify that the Virtual I/O Server is prepared to
see that LUN as a UDID LUN by checking it at the source host.
Note: Moving a physical disk to a Virtual I/O Server that is not capable of managing the device using
UDID may result in data loss. In this case, back up the data prior to allocating the LUN to the Virtual
I/O Server.
To determine whether a device has a UDID or IEEE volume attribute identifier, complete the following
steps:
1. To determine whether a device has a UDID or an IEEE volume attribute identifier for the Virtual
I/O Server, type chkdev -verbose. Output similar to the following example is displayed:
NAME: hdisk1
IDENTIFIER: 210ChpO-c4HkKBc904N37006NETAPPfcp
PHYS2VIRT_CAPABLE: YES
VIRT2NPIV_CAPABLE: NA
VIRT2PHYS_CAPABLE: NA
PVID: 00c58e40599f2f900000000000000000
UDID: 2708ECVBZ1SC10IC35L146UCDY10-003IBXscsi
IEEE:
VTD:
NAME: hdisk2
IDENTIFIER: 600A0B800012DD0D00000AB441ED6AC
PHYS2VIRT_CAPABLE: YES
VIRT2NPIV_CAPABLE: NA
VIRT2PHYS_CAPABLE: NA
PVID: 00c58e40dcf83c850000000000000000
UDID:
IEEE: 600A0B800012DD0D00000AB441ED6AC
VTD:
2. To determine whether a device has a UDID for the AIX operating system, type odmget
-qattribute=unique_id CuAt. Disks with a UDID have a value in the unique_id field. Output similar
to the following example is displayed:
CuAt:
name = "hdisk2"
attribute = "unique_id"
value = "210800038FB50AST373453LC03IBXscsi"
type = "R"
generic = ""
rep = "nl"
nls_index = 79
3. To determine whether a device has an IEEE volume attribute identifier for the AIX operating system,
run the following command: lsattr -El hdiskX. Disks with an IEEE volume attribute identifier have a
value in the ieee_volname field. Output similar to the following example is displayed:
...
cache_method fast_write Write Caching method
ieee_volname 600A0B800012DD0D00000AB441ED6AC IEEE Unique volume name
lun_id 0x001a000000000000 Logical Unit Number
...
If the ieee_volname field does not appear, then the device does not have an IEEE volume attribute
identifier.
Note: DS4K and FAStT storage that are using the Redundant Disk Array Controller (RDAC) driver
for multipathing are managed using an IEEE ID.
Related information:
chkdev command
Mapping devices
Mapping devices are used to facilitate the mapping of physical resources to a virtual device.
Virtual networking
Learn about virtual Ethernet, Host Ethernet Adapter (or Integrated Virtual Ethernet), Internet Protocol
version 6 (IPv6), Link Aggregation (or EtherChannel), Shared Ethernet Adapter, Shared Ethernet Adapter
failover, and VLAN.
Virtual Ethernet technology facilitates IP-based communication between logical partitions on the same
system using virtual local area network (VLAN)-capable software switch systems. Using Shared Ethernet
Adapter technology, logical partitions can communicate with other systems outside the hardware unit
without assigning physical Ethernet slots to the logical partitions.
To connect a logical partition to an HEA, you must create a logical Host Ethernet Adapter (LHEA) for the
logical partition. A logical Host Ethernet Adapter (LHEA) is a representation of a physical HEA on a logical
partition. An LHEA appears to the operating system as if it were a physical Ethernet adapter, just as a
virtual Ethernet adapter appears as if it were a physical Ethernet adapter. When you create an LHEA for
a logical partition, you specify the resources that the logical partition can use on the actual physical HEA.
Each logical partition can have one LHEA for each physical HEA on the managed system. Each LHEA
can have one or more logical ports, and each logical port can connect to a physical port on the HEA.
You can create an LHEA for a logical partition using either of the following methods:
v You can add the LHEA to a partition profile, shut down the logical partition, and reactivate the logical
partition using the partition profile with the LHEA.
v You can add the LHEA to a running logical partition using dynamic logical partitioning. (This method
can be used for Linux logical partitions only if you install Red Hat Enterprise Linux version 5.1, Red
Hat Enterprise Linux version 4.6, or a later version of Red Hat Enterprise Linux on the logical
partition.)
When you activate a logical partition, the LHEAs in the partition profile are considered to be required
resources. If the physical HEA resources required by the LHEAs are not available, then the logical
partition cannot be activated. However, when the logical partition is active, you can remove any LHEAs
you want from the logical partition. For every active LHEA that you assign to an IBM i logical partition,
IBM i requires 40 MB of memory.
After you create an LHEA for a logical partition, a network device is created in the logical partition. This
network device is named entX on AIX logical partitions, CMNXX on IBM i logical partitions, and ethX on
Linux logical partitions, where X represents sequentially assigned numbers. The user can then set up
TCP/IP configuration similar to a physical Ethernet device to communicate with other logical partitions.
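For example, on an AIX logical partition, a minimal sketch of configuring TCP/IP on the interface that
corresponds to the LHEA device (the host name, addresses, and interface name en0 are placeholders):
mktcpip -h lpar1 -a 192.168.1.10 -m 255.255.255.0 -i en0 -g 192.168.1.1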
You can configure a logical partition so that it is the only logical partition that can access a physical port
of an HEA by specifying promiscuous mode for an LHEA that is assigned to the logical partition. When an
LHEA is in promiscuous mode, no other logical partitions can access the logical ports of the physical port
that is associated with the LHEA that is in promiscuous mode. You might want to configure a logical
partition to promiscuous mode in the following situations:
v If you want to connect more than 16 logical partitions to each other and to an external network
through a physical port on an HEA, you can create a logical port on a Virtual I/O Server logical
partition and configure an Ethernet bridge between the logical port and a virtual Ethernet adapter on a
virtual LAN. This allows all logical partitions with virtual Ethernet adapters on the virtual LAN to
communicate with the physical port through the Ethernet bridge. If you configure an Ethernet bridge
between a logical port and a virtual Ethernet adapter, the physical port that is connected to the logical
port must have the following properties:
The physical port must be configured so that the Virtual I/O Server logical partition is the
promiscuous mode logical partition for the physical port.
The physical port can have only one logical port.
v You want the logical partition to have dedicated access to a physical port.
v You want to use tools such as tcpdump or iptrace.
A logical port can communicate with all other logical ports that are connected to the same physical port
on the HEA. The physical port and its associated logical ports form a logical Ethernet network. Broadcast
and multicast packets are distributed on this logical network as though it were a physical Ethernet
network. You can connect up to 16 logical ports to a physical port using this logical network. By
extension, you can connect up to 16 logical partitions to each other and to an external network through
this logical network.
You can set each logical port to restrict or allow packets that are tagged for specific VLANs. You can set a
logical port to accept packets with any VLAN ID, or you can set a logical port to accept only the VLAN
IDs that you specify. You can specify up to 20 individual VLAN IDs for each logical port.
The physical ports on an HEA are always configured on the managed system level. If you use an HMC
to manage a system, you must use the HMC to configure the physical ports on any HEAs belonging to
the managed system. Also, the physical port configuration applies to all logical partitions that use the
physical port. (Some properties might require setup in the operating system as well. For example, the
maximum packet size for a physical port on the HEA must be set on the managed system level using the
HMC. However, you must also set the maximum packet size for each logical port within the operating
system.) By contrast, if a system is unpartitioned and is not managed by an HMC, you can configure the
physical ports on an HEA within the operating system just as if the physical ports were ports on a
regular physical Ethernet adapter.
You can change the properties of a logical port on an LHEA by using dynamic logical partitioning to
remove the logical port from the logical partition and add the logical port back to the logical partition
using the changed properties. If the operating system of the logical partition does not support dynamic
logical partitioning for LHEAs, and you want to change any logical port property other than the VLANs
on which the logical port participates, you must set a partition profile for the logical partition so that the
partition profile contains the desired logical port properties, shut down the logical partition, and activate
the logical partition using the new or changed partition profile. If the operating system of the logical
partition does not support dynamic logical partitioning for LHEAs, and you want to change the VLANs
on which the logical port participates, you must remove the logical port from a partition profile
belonging to the logical partition, shut down and activate the logical partition using the changed partition
profile, add the logical port back to the partition profile using the changed VLAN configuration, and shut
down and activate the logical partition again using the changed partition profile.
IPv6 provides several advantages over IPv4, including expanded routing and addressing, routing
simplification, header format simplification, improved traffic control, autoconfiguration, and security.
Link Aggregation can help provide more redundancy because individual links might fail, and the Link
Aggregation device will fail over to another adapter in the device to maintain connectivity. For
example, if ent0 fails, the packets are automatically sent on the next available adapter,
ent1, without disruption to existing user connections. ent0 automatically returns to service on the Link
Aggregation device when it recovers.
You can configure a Shared Ethernet Adapter to use a Link Aggregation, or EtherChannel, device as the
physical adapter.
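For example, a possible sequence is to create the Link Aggregation device and then use it as the real
adapter of the Shared Ethernet Adapter. The device names (ent0 and ent1 for the physical adapters, ent2
for the trunk virtual Ethernet adapter, and ent3 for the resulting Link Aggregation device) and the mode
attribute are examples only and depend on your configuration and switch setup:
mkvdev -lnagg ent0,ent1 -attr mode=8023ad
mkvdev -sea ent3 -vadapter ent2 -default ent2 -defaultid 1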
Virtual Ethernet adapters allow logical partitions within the same system to communicate without having
to use physical Ethernet adapters. Within the system, virtual Ethernet adapters are connected to an IEEE
802.1q virtual Ethernet switch. Using this switch function, logical partitions can communicate with each
other by using virtual Ethernet adapters and assigning VIDs. With VIDs, virtual Ethernet adapters can
share a common logical network. The system transmits packets by copying the packet directly from the
memory of the sender logical partition to the receive buffers of the receiver logical partition without any
intermediate buffering of the packet.
Virtual Ethernet adapters can be used without using the Virtual I/O Server, but the logical partitions will
not be able to communicate with external systems. However, in this situation, you can use another
device, called a Host Ethernet Adapter (or Integrated Virtual Ethernet), to facilitate communication
between logical partitions on the system and external networks.
You can create virtual Ethernet adapters using the Hardware Management Console (HMC) and configure
them using the Virtual I/O Server command-line interface. You can also use the Integrated Virtualization
Manager to create and manage virtual Ethernet adapters.
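For example, after the virtual Ethernet adapters are created on the HMC, you can verify that the Virtual
I/O Server recognizes them by listing the virtual devices. The output varies with your configuration:
lsdev -virtual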
Consider using virtual Ethernet on the Virtual I/O Server in the following situations:
v When the capacity or the bandwidth requirement of the individual logical partition is inconsistent
with, or is less than, the total bandwidth of a physical Ethernet adapter. Logical partitions that use the
full bandwidth or capacity of a physical Ethernet adapter should use dedicated Ethernet adapters.
v When you need an Ethernet connection, but there is no slot available in which to install a dedicated
adapter.
VLAN is a method to logically segment a physical network so that layer 2 connectivity is restricted to
members that belong to the same VLAN. This separation is achieved by tagging Ethernet packets with
their VLAN membership information and then restricting delivery to members of that VLAN. VLAN is
described by the IEEE 802.1Q standard.
The VLAN tag information is referred to as VLAN ID (VID). Ports on a switch are configured as being
members of a VLAN designated by the VID for that port. The default VID for a port is referred to as the
Port VID (PVID). The VID can be added to an Ethernet packet either by a VLAN-aware host, or by the
switch in the case of VLAN-unaware hosts. Ports on an Ethernet switch must therefore be configured
with information indicating whether the host connected is VLAN-aware.
A Shared Ethernet Adapter is a Virtual I/O Server component that bridges a physical Ethernet adapter
and one or more virtual Ethernet adapters:
v The real adapter can be a physical Ethernet adapter, a Link Aggregation or EtherChannel device, or a
Logical Host Ethernet Adapter. The real adapter cannot be another Shared Ethernet Adapter or a
VLAN pseudo-device.
v The virtual Ethernet adapter must be a virtual I/O Ethernet adapter. It cannot be any other type of
device or adapter.
Using a Shared Ethernet Adapter, logical partitions on the virtual network can share access to the
physical network and communicate with stand-alone servers and logical partitions on other systems. The
Shared Ethernet Adapter eliminates the need for each client logical partition to have a dedicated physical
adapter to connect to the external network.
A Shared Ethernet Adapter provides access by connecting the internal VLANs with the VLANs on the
external switches. Using this connection, logical partitions can share the IP subnet with stand-alone
systems and other external logical partitions. The Shared Ethernet Adapter forwards outbound packets
received from a virtual Ethernet adapter to the external network and forwards inbound packets to the
appropriate client logical partition over the virtual Ethernet link to that logical partition. The Shared
Ethernet Adapter processes packets at layer 2, so the original MAC address and VLAN tags of the packet
are visible to other systems on the physical network.
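For example, to review which virtual Ethernet adapters a Shared Ethernet Adapter is bridging, you can
list the network mappings on the Virtual I/O Server. The output varies with your configuration:
lsmap -all -net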
The Shared Ethernet Adapter has a bandwidth apportioning feature, also known as Virtual I/O Server
quality of service (QoS). QoS allows the Virtual I/O Server to give a higher priority to some types of
packets. In accordance with the IEEE 802.1Q specification, Virtual I/O Server administrators can instruct
the Shared Ethernet Adapter to inspect bridged VLAN-tagged traffic for the VLAN priority field in the
VLAN header. The 3-bit VLAN priority field allows each individual packet to be prioritized with a value
from 0 to 7 to distinguish more important traffic from less important traffic. More important traffic is sent
preferentially and uses more Virtual I/O Server bandwidth than less important traffic.
Note: To use this feature, when the Virtual I/O Server Trunk Virtual Ethernet Adapter is configured on
an HMC, the adapter must be configured with additional VLAN IDs because only the traffic on these
VLAN IDs is delivered to the Virtual I/O Server with a VLAN tag. Untagged traffic is always treated as
though it belonged to the default priority class, that is, as if it had a priority value of 0.
Packets are prioritized according to the VLAN priority values found in their VLAN headers.
The Virtual I/O Server administrator can use QoS by setting the Shared Ethernet Adapter qos_mode
attribute to either strict or loose mode. The default is disabled mode. The following definitions describe
these modes:
disabled mode
This is the default mode. VLAN traffic is not inspected for the priority field. An example follows:
chdev -dev <SEA device name> -attr qos_mode=disabled
strict mode
More important traffic is bridged over less important traffic. This mode provides better
performance and more bandwidth to more important traffic; however, it can result in substantial
delays for less important traffic. An example follows:
chdev -dev <SEA device name> -attr qos_mode=strict
loose mode
A cap is placed on each priority level so that after a number of bytes is sent for each priority
level, the following level is serviced. This method ensures that all packets are eventually sent.
More important traffic is given less bandwidth with this mode than with strict mode; however,
the caps in loose mode are such that more bytes are sent for the more important traffic, so it still
gets more bandwidth than less important traffic. An example follows:
chdev -dev <SEA device name> -attr qos_mode=loose
Note: In either strict or loose mode, because the Shared Ethernet Adapter uses several threads to bridge
traffic, it is still possible for less important traffic from one thread to be sent before more important traffic
of another thread.
In SEA, QoS is provided per SEA thread. By default, SEA runs in thread mode with seven threads. When
SEA receives traffic, it routes the traffic to a thread, based on source and destination information. If the
QoS mode is enabled, it further queues the traffic, based on its priority, to the appropriate priority queue
associated with the selected thread. Queued traffic for a particular thread is serviced in the order of
higher to lower priority. All threads handle all priorities.
Note: SEA QoS does not assure bandwidth for a particular priority.
The effect of SEA QoS can be seen when there is enough traffic to keep all SEA threads busy, such that
when a SEA thread is scheduled to run, it has enough higher priority traffic to service before it can get to
the lower priority traffic. SEA QoS is not effective when the traffic pattern is such that the higher and
lower priority traffic is spread across different threads.
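For example, to verify the current qos_mode setting of a Shared Ethernet Adapter, you can list the
adapter attributes:
lsdev -dev <SEA device name> -attr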
Shared Ethernet Adapters, in Virtual I/O Server version 1.4 or later, support GARP VLAN Registration
Protocol (GVRP), which is based on Generic Attribute Registration Protocol (GARP). GVRP allows for the
dynamic registration of VLANs over networks, which can reduce the number of errors in the
configuration of a large network.
When GVRP is enabled, communication travels one way, from the Shared Ethernet Adapter to the switch.
The Shared Ethernet Adapter notifies the switch which VLANs can communicate with the network. The
Shared Ethernet Adapter does not configure VLANs to communicate with the network based on
information received from the switch. Rather, the configuration of VLANs that communicate with the
network is statically determined by the virtual Ethernet adapter configuration settings.
With Virtual I/O Server version 1.4, you can assign a logical host Ethernet port, of a logical host Ethernet
adapter (LHEA), which is sometimes referred to as Integrated Virtual Ethernet, as the real adapter of a
Shared Ethernet Adapter. The logical host Ethernet port is associated with a physical port on the Host
Ethernet Adapter. The Shared Ethernet Adapter uses the standard device driver interfaces provided by
the Virtual I/O Server to communicate with the Host Ethernet Adapter.
To use a Shared Ethernet Adapter with a Host Ethernet Adapter, the following requirements must be met:
v The logical host Ethernet port must be the only port assigned to the physical port on the Host Ethernet
Adapter. No other ports of the LHEA can be assigned to the physical port on the Host Ethernet
Adapter.
v The LHEA on the Virtual I/O Server logical partition must be set to promiscuous mode. (In an
Integrated Virtualization Manager environment, the mode is set to promiscuous by default.) Promiscuous
mode allows the LHEA (on the Virtual I/O Server) to receive all unicast, multicast, and broadcast
network traffic from the physical network.
Recommendations
Consider using Shared Ethernet Adapters on the Virtual I/O Server in the following situations:
v When the capacity or the bandwidth requirement of the individual logical partition is inconsistent
with, or is less than, the total bandwidth of a physical Ethernet adapter. Logical partitions that use the full
bandwidth or capacity of a physical Ethernet adapter should use dedicated Ethernet adapters.
v If you plan to migrate a client logical partition from one system to another.
Consider assigning a Shared Ethernet Adapter to a Logical Host Ethernet port when the number of
Ethernet adapters that you need is more than the number of ports available on the LHEA, or you
anticipate that your needs will grow beyond that number. If the number of Ethernet adapters that you
need is fewer than or equal to the number of ports available on the LHEA, and you do not anticipate
needing more ports in the future, you can use the ports of the LHEA for network connectivity rather than
the Shared Ethernet Adapter.
Shared memory
Shared memory is physical memory that is assigned to the shared memory pool and shared among
multiple logical partitions. The shared memory pool is a defined collection of physical memory blocks that
are managed as a single memory pool by the hypervisor. Logical partitions that you configure to use
shared memory (hereafter referred to as shared memory partitions) share the memory in the pool with other
shared memory partitions.
For example, you create a shared memory pool with 16 GB of physical memory. You then create three
logical partitions, configure them to use shared memory, and activate the shared memory partitions. Each
shared memory partition can use the 16 GB that are in the shared memory pool.
The amount of memory that you assign to the shared memory partitions can be greater than the amount
of memory in the shared memory pool. For example, you can assign 12 GB to shared memory partition 1,
8 GB to shared memory partition 2, and 4 GB to shared memory partition 3. Together, the shared memory
partitions use 24 GB of memory, but the shared memory pool has only 16 GB of memory. In this
situation, the memory configuration is considered overcommitted.
Overcommitted memory configurations are possible because the hypervisor virtualizes and manages all
of the memory for the shared memory partitions in the shared memory pool as follows:
1. When shared memory partitions are not actively using their memory pages, the hypervisor allocates
those unused memory pages to shared memory partitions that currently need them. When the sum of
the physical memory currently used by the shared memory partitions is less than or equal to the
amount of memory in the shared memory pool, the memory configuration is logically overcommitted. In
a logically overcommitted memory configuration, the shared memory pool has enough physical
memory to contain the memory used by all shared memory partitions at one point in time. The
hypervisor does not need to store any data in auxiliary storage.
2. When a shared memory partition requires more memory than the hypervisor can provide to it by
allocating unused portions of the shared memory pool, the hypervisor stores some of the memory that
belongs to a shared memory partition in the shared memory pool and stores the remainder of the
memory that belongs to the shared memory partition in auxiliary storage. When the sum of the
physical memory currently used by the shared memory partitions is greater than the amount of
memory in the shared memory pool, the memory configuration is physically overcommitted. In a
physically overcommitted memory configuration, the shared memory pool does not have enough
physical memory to contain the memory used by all the shared memory partitions at one point in
time. The hypervisor stores the difference in auxiliary storage. When the operating system attempts to
access the data, the hypervisor might need to retrieve the data from auxiliary storage before the
operating system can access it.
Because the memory that you assign to a shared memory partition might not always reside in the shared
memory pool, the memory that you assign to a shared memory partition is logical memory. Logical
memory is the address space, assigned to a logical partition, that the operating system perceives as its
main storage. For a shared memory partition, a subset of the logical memory is backed up by physical
main storage (or physical memory from the shared memory pool) and the remaining logical memory is
kept in auxiliary storage.
A Virtual I/O Server logical partition provides access to the auxiliary storage, or paging space devices,
required for shared memory partitions in an overcommitted memory configuration. A paging space device
is a physical or logical device that is used by a Virtual I/O Server to provide the paging space for a
shared memory partition. The paging space is an area of nonvolatile storage used to hold portions of a
shared memory partition's logical memory that do not reside in the shared memory pool. When the
operating system that runs in a shared memory partition attempts to access data, and the data is located
in the paging space device that is assigned to the shared memory partition, the hypervisor sends a
request to a Virtual I/O Server to retrieve the data and write it to the shared memory pool so that the
operating system can access it.
On systems that are managed by a Hardware Management Console (HMC), you can assign up to two
Virtual I/O Server (VIOS) logical partitions to the shared memory pool at a time (hereafter referred to as
paging VIOS partitions). When you assign two paging VIOS partitions to the shared memory pool, you
can configure the paging space devices such that both paging VIOS partitions have access to the same
paging space devices.
You cannot configure paging VIOS partitions to use shared memory. Paging VIOS partitions do not use
the memory in the shared memory pool. You assign paging VIOS partitions to the shared memory pool
so that they can provide access to the paging space devices for the shared memory partitions that are
assigned to the shared memory pool.
Driven by workload demands from the shared memory partitions, the hypervisor manages
overcommitted memory configurations by continually performing the following tasks:
v Allocating portions of physical memory from the shared memory pool to the shared memory partitions
as needed
v Requesting a paging VIOS partition to read and write data between the shared memory pool and the
paging space devices as needed
The ability to share memory among multiple logical partitions is known as the PowerVM Active Memory
Sharing technology. The PowerVM Active Memory Sharing technology is available with the PowerVM
Enterprise Edition for which you must obtain and enter a PowerVM Editions activation code.
Related reference:
Configuration requirements for shared memory on page 69
Review the requirements for the system, Virtual I/O Server (VIOS), logical partitions, and paging space
devices so that you can successfully configure shared memory.
Related information:
Paging space device
When the operating system that runs in a shared memory partition attempts to access data, and the data
is located in the paging space device that is assigned to the shared memory partition, the hypervisor
sends a request to a paging VIOS partition to retrieve the data and write it to the shared memory pool so
that the operating system can access it.
A paging VIOS partition is not a shared memory partition and does not use the memory in the shared
memory pool. A paging VIOS partition provides access to the paging space devices for the shared
memory partitions.
On systems that are managed by the Integrated Virtualization Manager, the management partition is the
paging VIOS partition for the shared memory partitions that are assigned to the shared memory pool.
When you create the shared memory pool, you assign a paging storage pool to the shared memory pool.
The paging storage pool provides the paging space devices for the shared memory partitions that are
assigned to the shared memory pool.
HMC
On systems that are managed by a Hardware Management Console (HMC), you can assign one or two
paging VIOS partitions to the shared memory pool. When you assign a single paging VIOS partition to
the shared memory pool, the paging VIOS partition provides access to all of the paging space devices for
the shared memory partitions. The paging space devices can be located in physical storage in the server
or on a storage area network (SAN).
If you configure the shared memory pool with two paging VIOS partitions, you can configure a shared
memory partition to use either a single paging VIOS partition or redundant paging VIOS partitions.
When you configure a shared memory partition to use redundant paging VIOS partitions, you assign a
primary paging VIOS partition and a secondary paging VIOS partition to the shared memory partition.
The hypervisor uses the primary paging VIOS partition to access the shared memory partition's paging
space device. At this point, the primary paging VIOS partition is the current paging VIOS partition for
the shared memory partition. The current paging VIOS partition is the paging VIOS partition that the
hypervisor uses at any point in time to access data in the paging space device that is assigned to the
shared memory partition. If the primary VIOS partition becomes unavailable, the hypervisor uses the
secondary paging VIOS partition to access the shared memory partition's paging space device. At this
point, the secondary paging VIOS partition becomes the current paging VIOS partition for the shared
memory partition and continues as the current paging VIOS partition even after the primary paging VIOS
partition becomes available again.
You do not need to assign the same primary and secondary paging VIOS partitions to all of the shared
memory partitions. For example, you assign paging VIOS partition A and paging VIOS partition B to the
shared memory pool. For one shared memory partition, you can assign paging VIOS partition A as the
primary paging VIOS partition and paging VIOS partition B as the secondary paging VIOS partition. For
a different shared memory partition, you can assign paging VIOS partition B as the primary paging VIOS
partition and paging VIOS partition A as the secondary paging VIOS partition.
The following figure shows an example of a system with four shared memory partitions, two paging
VIOS partitions, and four paging space devices.
When a single paging VIOS partition is assigned to the shared memory pool, you must shut down the
shared memory partitions before you shut down the paging VIOS partition so that the shared memory
partitions are not suspended when they attempt to access their paging space devices. When two paging
VIOS partitions are assigned to the shared memory pool and the shared memory partitions are
configured to use redundant paging VIOS partitions, you do not need to shut down the shared memory
partitions to shut down a paging VIOS partition. When one paging VIOS partition is shut down, the
shared memory partitions use the other paging VIOS partition to access their paging space devices. For
example, you can shut down a paging VIOS partition and install VIOS updates without shutting down
the shared memory partitions.
You can configure multiple VIOS logical partitions to provide access to paging space devices. However,
you can only assign up to two of those VIOS partitions to the shared memory pool at any given time.
After you configure the shared memory partitions, you can later change the redundancy configuration of
the paging VIOS partitions for a shared memory partition by modifying the partition profile of the shared
memory partition and restarting the shared memory partition with the modified partition profile:
v You can change which paging VIOS partitions are assigned to a shared memory partition as the
primary and secondary paging VIOS partitions.
v You can change the number of paging VIOS partitions that are assigned to a shared memory partition.
For systems that are not managed by a Hardware Management Console (HMC), the Virtual I/O Server
becomes the management partition and provides a graphical user interface, called the Integrated
Virtualization Manager, to help you manage the system. For more information, see Integrated
Virtualization Manager.
In addition, in environments managed by the Integrated Virtualization Manager, you can use the Virtual
I/O Server command-line interface to manage logical partitions.
The first time you log in to the Virtual I/O Server, use the padmin user ID, which is the prime
administrator user ID. You will be prompted for a new password.
Restricted shell
Upon logging in, you will be placed into a restricted Korn shell. The restricted Korn shell works in the
same way as a standard Korn shell, except that you cannot do the following:
v Change the current working directory
v Set the value of the SHELL, ENV, or PATH variables
v Specify the path name of a command that contains a forward slash (/)
v Redirect output of a command using any of the following characters: >, >|, <>, >>
As a result of these restrictions, you cannot run commands that are not accessible through your PATH
variable. In addition, these restrictions prevent you from sending command output directly to a
file. Instead, command output can be piped to the tee command.
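For example, to save a list of adapters to a file in your home directory, you can pipe the command
output to the tee command. The file name is an example only:
lsdev -type adapter | tee adapter_list.out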
After you log in, you can type help to get information about the supported commands. For example, to
get help on the errlog command, type help errlog.
Execution mode
The Virtual I/O Server command-line interface functions similarly to a standard command-line interface.
Commands are issued with appropriate accompanying flags and parameters. For example, to list all
adapters, type the following:
lsdev -type adapter
In addition, scripts can be run within the Virtual I/O Server command-line interface environment.
In addition to the Virtual I/O Server command-line interface commands, the following standard shell
commands are provided.
Table 14. Standard shell commands and their functions
Command Function
awk Matches patterns and performs actions on them.
cat Concatenates or displays files.
chmod Changes file modes.
cp Copies files.
As each command is executed, the user log and the global command log are updated.
The user log contains a list of each Virtual I/O Server command, including arguments, that a user has
executed. One user log is created for each user on the system. This log is located in the user's home
directory and can be viewed by using either the cat or the vi command.
The global command log is made up of all the Virtual I/O Server command-line interface commands
executed by all users, including arguments, the date and time the command was executed, and from
which user ID it was executed. The global command log is viewable only by the padmin user ID, and it
can be viewed by using the lsgcl command. If the global command log exceeds 1 MB, the log will be
truncated to 250 KB to prevent the file system from reaching capacity.
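For example, to review only the entries for a particular command, you can filter the output of the lsgcl
command. The command name shown is an example only:
lsgcl | grep mkvdev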
Note: Integrated Virtualization Manager commands are audited in a separate place and are viewable
either in Application Logs, or by running the following command from the command line:
lssvcevents -t console --filter severities=audit
Related information:
Virtual I/O Server and Integrated Virtualization Manager commands
IBM Tivoli Application Dependency Discovery Manager (TADDM) discovers infrastructure elements
found in the typical data center, including application software, hosts and operating environments
(including the Virtual I/O Server), network components (such as routers, switches, load balancers,
firewalls, and storage), and network services (such as LDAP, NFS, and DNS). Based on the data it
collects, TADDM automatically creates and maintains application infrastructure maps that include
runtime dependencies, configuration values, and change history. With this information, you can
determine the interdependencies between business applications, software applications, and physical
components to help ensure and improve application availability.
TADDM includes an agent-free discovery engine, which means that the Virtual I/O Server does not
require that an agent or client be installed and configured in order to be discovered by TADDM. Instead,
TADDM uses discovery sensors that rely on open and secure protocols and access mechanisms to
discover the data center components.
With IBM Tivoli Identity Manager, you can manage identities and users across several platforms,
including AIX systems, Windows systems, Solaris systems, and so on. With Tivoli Identity Manager 4.7
and later, you can also include Virtual I/O Server users. Tivoli Identity Manager provides a Virtual I/O
Server adapter that acts as an interface between the Virtual I/O Server and the Tivoli Identity Manager
Server. The adapter might not be located on the Virtual I/O Server, and the Tivoli Identity Manager
Server manages access to the Virtual I/O Server by using your security system.
The adapter runs as a service, independent of whether a user is logged on to the Tivoli Identity Manager
Server. The adapter acts as a trusted virtual administrator on the Virtual I/O Server, performing tasks like
the following:
v Creating a user ID to authorize access to the Virtual I/O Server.
v Modifying an existing user ID to access the Virtual I/O Server.
v Removing access from a user ID. This deletes the user ID from the Virtual I/O Server.
v Suspending a user account by temporarily deactivating access to the Virtual I/O Server.
v Restoring a user account by reactivating access to the Virtual I/O Server.
v Changing a user account password on the Virtual I/O Server.
v Reconciling the user information of all current users on the Virtual I/O Server.
v Reconciling the user information of a particular user account on the Virtual I/O Server by performing
a lookup.
Virtual I/O Server V1.3.0.1 (fix pack 8.1) includes the IBM Tivoli Monitoring System Edition for System
p agent. With Tivoli Monitoring System Edition for System p, you can monitor the health and
availability of multiple IBM System p servers (including the Virtual I/O Server) from the Tivoli
Enterprise Portal. Tivoli Monitoring System Edition for System p gathers data from the Virtual I/O
Server, including data about physical volumes, logical volumes, storage pools, storage mappings, network
mappings, real memory, processor resources, mounted file system sizes, and so on. From the Tivoli
Enterprise Portal, you can view a graphical representation of the data, use predefined thresholds to alert
you on key metrics, and resolve issues based on recommendations provided by the Expert Advice feature
of Tivoli Monitoring.
Virtual I/O Server 1.4 includes the IBM Tivoli Storage Manager client. With Tivoli Storage Manager, you
can protect Virtual I/O Server data from failures and other errors by storing backup and disaster
recovery data in a hierarchy of auxiliary storage.
Virtual I/O Server 1.4 includes the IBM Tivoli Usage and Accounting Manager agent on the Virtual I/O
Server. Tivoli Usage and Accounting Manager helps you track, allocate, and invoice your IT costs by
collecting, analyzing, and reporting on the actual resources used by entities such as cost centers,
departments, and users. Tivoli Usage and Accounting Manager can gather data from multi-tiered
datacenters that include Windows, AIX, Virtual I/O Server, HP/UX, Sun Solaris, Linux, IBM i, and
VMware.
With Virtual I/O Server 1.5.2, you can configure the IBM TotalStorage Productivity Center agents on the
Virtual I/O Server. TotalStorage Productivity Center is an integrated, storage infrastructure management
suite that is designed to help simplify and automate the management of storage devices, storage
networks, and capacity utilization of file systems and databases. When you install and configure the
TotalStorage Productivity Center agents on the Virtual I/O Server, you can use the TotalStorage
Productivity Center user interface to collect and view information about the Virtual I/O Server. You can
then perform the following tasks using the TotalStorage Productivity Center user interface:
1. Run a discovery job for the agents on the Virtual I/O Server.
2. Run probes, scans, and ping jobs to collect storage information about the Virtual I/O Server.
3. Generate reports using the Fabric Manager and the Data Manager to view the storage information
gathered.
4. View the storage information gathered using the Topology Viewer.
Related tasks:
Configuring the IBM Tivoli agents and clients on the Virtual I/O Server on page 111
You can configure and start the IBM Tivoli Monitoring agent, IBM Tivoli Usage and Accounting Manager,
the IBM Tivoli Storage Manager client, and the IBM Tivoli TotalStorage Productivity Center agents.
Related information:
IBM Tivoli Application Dependency Discovery Manager Information Center
IBM Tivoli Identity Manager
IBM Tivoli Monitoring version 6.2.1 documentation
IBM Tivoli Monitoring Virtual I/O Server Premium Agent User's Guide
IBM Tivoli Storage Manager
IBM Tivoli Usage and Accounting Manager Information Center
IBM TotalStorage Productivity Center Information Center
IBM Systems Director is a platform-management foundation that streamlines the way you manage
physical and virtual systems across a heterogeneous environment. By leveraging industry standards, IBM
Systems Director supports multiple operating systems and virtualization technologies across IBM and
non-IBM platforms.
IBM Systems Director's Web and command-line interfaces provide a consistent interface focused on these
common tasks:
v Discovering, navigating, and visualizing systems on the network with detailed inventory and
relationships to other network resources
v Notifying users of problems that occur on systems and the ability to navigate to the source of the
problem
v Notifying users when systems need updates, and distributing and installing updates on a schedule
v Analyzing real-time data for systems, and setting critical thresholds that notify the administrator of
emerging problems
v Configuring settings of a single system, and creating a configuration plan that can apply those settings
to multiple systems
v Updating installed plug-ins to add new features and function to the base capabilities
v Managing the life cycle of virtual resources
Related tasks:
Configuring the IBM Director agent on page 116
You can configure and start the IBM Director agent on the Virtual I/O Server.
Related information:
IBM Systems Director technical overview
Situation
You are the system administrator responsible for planning and configuring the network in an
environment with the Virtual I/O Server running. You want to configure a single logical subnet on the
system that communicates with the switch.
Objective
The objective of this scenario is to configure the network where only Port Virtual LAN ID (PVID) is used,
the packets are not tagged, and a single internal network is connected to a switch. There are no virtual
local area networks (VLAN) tagged ports set up on the Ethernet switch, and all virtual Ethernet adapters
are defined using a single default PVID and no additional VLAN IDs (VIDs).
While this procedure describes configuration in an HMC environment, this configuration is also possible
in an Integrated Virtualization Manager environment.
Configuration steps
The following figure shows the configuration that will be completed during this scenario.
The Shared Ethernet Adapter on the Virtual I/O Server logical partition can also be configured with IP
addresses on the same subnet. This is required only for network connectivity to the Virtual I/O Server
logical partition.
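For example, you might assign an IP address to the interface of the Shared Ethernet Adapter by using
the mktcpip command. The host name, addresses, and interface name are examples only:
mktcpip -hostname vios1 -inetaddr 10.1.1.1 -interface en3 -netmask 255.255.255.0 -gateway 10.1.1.254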
Situation
You are the system administrator responsible for planning and configuring the network in an
environment with the Virtual I/O Server running. You would like to configure the network so that two
logical subnets exist, with some logical partitions on each subnet.
Objective
The objective of this scenario is to configure multiple networks to share a single physical Ethernet
adapter. Systems on the same subnet are required to be on the same VLAN and therefore have the same
VLAN ID, which allows communication without having to go through the router. The separation in the
subnets is achieved by ensuring that the systems on the two subnets have different VLAN IDs.
Configuration steps
The following figure shows the configuration that will be completed during this scenario.
You can configure the Shared Ethernet Adapter on the Virtual I/O Server logical partition with an IP
address. This is required only for network connectivity to the Virtual I/O Server.
As the tagged VLAN network is being used, you must define additional VLAN devices over the Shared
Ethernet Adapters before configuring IP addresses.
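For example, you might create a VLAN pseudo-device over the Shared Ethernet Adapter by using the
mkvdev command. The device name and VLAN ID are examples only:
mkvdev -vlan ent3 -tagid 10
You can then assign an IP address to the interface of the resulting VLAN device.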
Situation
You are the system administrator responsible for planning and configuring the network in an
environment with the Virtual I/O Server running. You want to provide higher network availability to the
client logical partition on the system. This can be accomplished by configuring a backup Shared Ethernet
Adapter in a different Virtual I/O Server logical partition.
Objective
The objective of this scenario is to configure primary and backup Shared Ethernet Adapters in the Virtual
I/O Server logical partitions so that network connectivity in the client logical partitions will not be lost in
the case of adapter failure.
You cannot use the Integrated Virtualization Manager with multiple Virtual I/O Server logical partitions
on the same server.
The following image depicts a configuration where the Shared Ethernet Adapter failover feature is set up.
The client logical partitions H1 and H2 are accessing the physical network using the Shared Ethernet
Adapters, which are the primary adapters. The virtual Ethernet adapters used in the shared Ethernet
setup are configured with the same VLAN membership information (PVID, VID), but have different
priorities. A dedicated virtual network forms the control channel and is required to facilitate
communication between the primary and backup shared Ethernet device.
For example, in this scenario, we ran the following command on both Virtual I/O Server logical
partitions:
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 60 -attr ha_mode=auto
ctl_chan=ent2
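To verify which Shared Ethernet Adapter is currently active after the failover configuration is in place,
you can review the adapter statistics. The device name is an example only, and the output format varies
by Virtual I/O Server level:
entstat -all ent3 | grep -i state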
Situation
In this scenario, you want to configure a highly available virtual environment for your bridged network
using the Network Interface Backup (NIB) approach to access external networks from your Virtual I/O
clients. You do not plan to use VLAN tagging in your network setup. This approach requires you to
configure a second Ethernet adapter on a different VLAN for each client and requires a Link Aggregation
adapter with NIB features. This configuration is available for AIX logical partitions.
Typically, a Shared Ethernet Adapter failover configuration is the recommended configuration for most
environments because it supports environments with or without VLAN tagging. Also, the NIB
configuration is more complex than a Shared Ethernet Adapter failover configuration because it must be
implemented on each of the clients. However, Shared Ethernet Adapter failover was not available prior to
version 1.2 of Virtual I/O Server, and NIB was the only approach to a highly available virtual
environment. Also, you might consider that in an NIB configuration you can distribute clients over both
Shared Ethernet Adapters in such a way that half of them will use the first Shared Ethernet Adapter and
the other half will use the second Shared Ethernet Adapter as primary adapter.
Objective
Create a virtual Ethernet environment using a Network Interface Backup configuration as depicted in the
following figure.
Before completing the configuration tasks, review the following prerequisites and assumptions.
v The Hardware Management Console (HMC) is already set up. To view the PDF file of Installing and
configuring the Hardware Management Console, approximately 3 MB in size, see
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphai/iphai.pdf .
v Two separate Virtual I/O Server logical partitions have been created and the Virtual I/O Server has
been installed in each logical partition. See the instructions in Installing the Virtual I/O Server and
client logical partitions on page 79.
v You have created the remaining logical partitions that you want added to the network configuration.
v Each Virtual I/O Server logical partition has an available physical Ethernet adapter assigned to it.
v You have IP addresses for all logical partitions and systems that will be added to the configuration.
Configuration tasks
Using the figure as a guide, complete the following tasks to configure the NIB virtual environment.
1. Create a LAN connection between the Virtual I/O Servers and the external network:
Note: Keep in mind, when you configure NIB with two virtual Ethernet adapters, the internal networks
used must stay separated in the hypervisor. You must use different PVIDs for the two adapters in the
client and cannot use additional VIDs on them.
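The following sketch shows how the NIB device might be created on an AIX client, assuming that ent0
and ent1 are the two virtual Ethernet adapters of the client and that the address given is a reachable
gateway that is used for the periodic ping test. Many administrators use the smitty etherchannel path
instead, and the attribute names should be verified against your AIX level:
mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 -a backup_adapter=ent1 -a netaddr=10.1.1.254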
In order to provide MPIO to AIX client logical partitions, you must have two Virtual I/O Server logical
partitions configured on your system. This procedure assumes that the disks are already allocated to both
the Virtual I/O Server logical partitions involved in this configuration.
To configure MPIO, follow these steps. In this scenario, hdisk5 in the first Virtual I/O Server logical
partition, and hdisk7 in the second Virtual I/O Server logical partition, are used in the configuration.
The following figure shows the configuration that will be completed during this scenario.
Select the disk that you want to use in the MPIO configuration. In this scenario, we selected
hdisk5.
4. Determine the ID of the disk that you have selected. For instructions, see Identifying exportable
disks on page 104. In this scenario, the disk does not have an IEEE volume attribute identifier or a
unique identifier (UDID), so we determine the physical identifier (PVID) by running the lspv hdisk5
command. Your results look similar to the following:
hdisk5 00c3e35ca560f919 None
The second value is the PVID. In this scenario, the PVID is 00c3e35ca560f919. Note this value.
5. List the attributes of the disk using the lsdev command. In this scenario, we typed lsdev -dev
hdisk5 -attr. Your results look similar to the following:
..
lun_id 0x5463000000000000 Logical Unit Number ID False
..
Note the values for lun_id and reserve_policy. If the reserve_policy attribute is set to anything other
than no_reserve, then you must change it. Set the reserve_policy to no_reserve by typing chdev -dev
hdiskx -attr reserve_policy=no_reserve.
6. On the second Virtual I/O Server logical partition, list the physical volumes by typing lspv. In the
output, locate the disk that has the same PVID as the disk identified previously. In this scenario, the
PVID for hdisk7 matched:
hdisk7 00c3e35ca560f919 None
Tip: Although the PVID values should be identical, the disk numbers on the two Virtual I/O Server
logical partitions might vary.
7. Determine if the reserve_policy attribute is set to no_reserve using the lsdev command. In this
scenario, we typed lsdev -dev hdisk7 -attr. You see results similar to the following:
..
lun_id 0x5463000000000000 Logical Unit Number ID False
..
pvid 00c3e35ca560f9190000000000000000 Physical volume identifier False
..
reserve_policy single_path Reserve Policy
If the reserve_policy attribute is set to anything other than no_reserve, you must change it. Set the
reserve_policy to no_reserve by typing chdev -dev hdiskx -attr reserve_policy=no_reserve.
8. On both Virtual I/O Server logical partitions, use the mkvdev command to create the virtual devices. In each
case, use the appropriate hdisk value. In this scenario, we type the following commands:
v On the first Virtual I/O Server logical partition, we typed mkvdev -vdev hdisk5 -vadapter vhost5
-dev vhdisk5
v On the second Virtual I/O Server logical partition, we typed mkvdev -vdev hdisk7 -vadapter
vhost7 -dev vhdisk7
The same LUN is now exported to the client logical partition from both Virtual I/O Server logical
partitions.
9. AIX can now be installed on the client logical partition. For instructions on installing AIX, see
Installing AIX in a Partitioned Environment in the IBM System p and AIX Information Center.
10. After you have installed AIX on the client logical partition, check for MPIO by running the following
command:
lspath
You see results similar to the following:
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
If one of the Virtual I/O Server logical partitions fails, the results of the lspath command look
similar to the following:
Failed hdisk0 vscsi0
Enabled hdisk0 vscsi1
Unless a health check is enabled, the state continues to show Failed even after the disk has
recovered. To have the state updated automatically, type chdev -l hdiskx -a hcheck_interval=60
-P. The client logical partition must be rebooted for this change to take effect.
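After completing these steps, you can confirm on each Virtual I/O Server logical partition that the virtual
target device was created and mapped as expected by displaying the mapping for the virtual server
adapter. The adapter name matches the one used on the first Virtual I/O Server in this scenario:
lsmap -vadapter vhost5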
Planning for Virtual I/O Server and client logical partitions using
system plans
You can use the System Planning Tool (SPT) to create a system plan that includes configuration
specifications for a Virtual I/O Server and client logical partitions. You can also use the Hardware
Management Console (HMC) to create a system plan based on an existing system configuration.
SPT is a PC-based browser application that can assist you in planning and designing a new system. SPT
validates your plan against system requirements and prevents you from creating a plan that exceeds
those requirements. It also incorporates the IBM Systems Workload Estimator (WLE) to help you plan for
workloads and performance. The output is a system-plan file that you can deploy to a managed system.
With SPT version 3.0 and later, you can include configuration specifications for the following components
of a Virtual I/O Server logical partition in your system plan.
Table 15. Networking and storage components included in system plans
Networking components Storage components
With SPT version 3.0 and later, you can include AIX and Linux installation information for client logical
partitions in the system plan. For more information, see Installing operating environments from a system
plan by using the HMC on page 54.
SPT currently does not help you plan for high availability on client logical partitions or Redundant Array
of Independent Disks (RAID) solutions for the Virtual I/O Server. For planning information about RAID
and high availability, see RAID on page 75 and High Availability Cluster Multi-Processing on page
72.
The HMC must be at version 7, or later, to deploy the Virtual I/O Server logical partition and operating
environment, and it must be at V7R3.3.0, or later, to deploy AIX and Linux operating environments on
client logical partitions. When you deploy the system plan, the HMC automatically performs the
following tasks based on the information provided in the system plan:
v Creates the Virtual I/O Server logical partition and logical partition profile.
v Installs the Virtual I/O Server operating environment and provisions virtual resources.
v Creates the client logical partitions and logical partition profiles.
v Installs the AIX and Linux operating environments on client logical partitions.
When you deploy the Virtual I/O Server logical partition to a new system, or to a system that does not
already have a Virtual I/O Server logical partition configured, you must deploy the Virtual I/O Server
logical partition in its entirety, including provisioning items, such as Shared Ethernet Adapters,
EtherChannel adapters (or Link Aggregation devices), storage pools, and backing devices. If the HMC is
at version V7R3.3.0, or later, you can deploy a system plan that includes additional provisioning items to
an existing logical partition on the managed server, as long as the items and the system plan itself meet
all the appropriate validation requirements. For more detailed information about the restrictions that
apply, see System plan validation for the HMC on page 57.
Related information:
Introduction to virtualization
For HMCs prior to HMC V7R3.3.0, you can deploy a system plan to install only the Virtual I/O Server
operating environment. Beginning with HMC V7R3.3.0, you also can deploy a system plan to install AIX
or Linux operating environments on logical partitions in a system plan.
Note: You can create a system plan with installation information for an AIX or Linux operating
environment only in the System Planning Tool (SPT). If a system plan has installation information for an
AIX or Linux operating environment, you still can deploy the system plan to systems that are managed
by an earlier version of the HMC. Earlier versions of the HMC Deploy System Plan Wizard can deploy
the logical partitions in the system plan and ignore any operating environment installation information in
the system plan. The earlier versions of the HMC can deploy all other aspects of the system plan
successfully, as long as the other items in the system plan are validated successfully.
System plans that contain AIX or Linux installation information can be deployed only to new logical
partitions or to logical partitions that do not already have an operating environment installed on them. If
the logical partition already has an operating environment installed, the HMC does not deploy the
operating environment that the system plan specifies for that logical partition.
If you plan to deploy a system plan that includes the installation of an operating environment for a
logical partition, ensure that the Power off the system after all the logical partitions are powered off
attribute for the managed system is not selected. If this attribute is selected, system plan deployment will
fail because the deployment process starts partitions and then powers off partitions as part of installing
operating environments. Consequently, the managed system will power off during deployment when the
deployment process powers off the partitions. To verify the setting for this system attribute, complete
these steps:
1. In the HMC navigation area, select Systems Management > Servers.
2. In the Tasks area, click Properties. The Properties window for the selected managed system opens.
The wizard provides support for installing the following operating environments:
v AIX: Version 5.3 or 6.1
v Red Hat Enterprise Linux: Support is provided for any of the following versions:
Red Hat Enterprise Linux EL-AS: Version 4, 4 QU1, 4 QU2, 4 QU3, 4 QU4, 4.5, or 4.6
Red Hat Enterprise Linux EL-Server: Version 5 or version 5.1
v SUSE Linux Enterprise Server: Version 10, 10 SP1, 9, 9 SP1, 9 SP2, 9 SP3, or 9 SP4
v Virtual I/O Server: Version 1.5 and 1.5.2
When you deploy a system plan that contains operating environment installation information, you can
use the Deploy System Plan Wizard to specify the resource location that the wizard needs for installing
the operating system environment. You also can specify or change operating environment installation
settings. However, you cannot use the wizard to create or edit any automatic installation files that might
be specified for an operating environment installation.
If you want to use a customized automatic installation file with an operating environment in the system
plan, you must create or obtain the necessary file and import the file into the System Planning Tool (SPT).
You can then use the SPT to edit the file, if necessary, and to associate the customized file with the
system plan for installation of the appropriate operating environment on a logical partition. These
automatic installation files, which allow you to provide specialized installation settings, include kickstart
files for Red Hat Enterprise Linux, AutoYaST files for SUSE Linux Enterprise Server, and bosinst.data
files for AIX. For example, you might want to create a customized automatic installation file with the
necessary settings so that the operating environment is installed on a specific virtualized resource that is
provided to a client partition by the Virtual I/O Server partition over a virtual SCSI connection.
During the Customize Operating Environment Install step of the Deploy System Plan Wizard, you
provide required resource information for the operating environment installation and make any needed
changes to installation settings. This step does not occur if the plan does not contain operating
environment installation information. This step includes the following configuration options for installing
an operating environment on the target logical partition:
v Operating Environment Install Image Resource. This configuration option allows you to specify an
existing location for the operating environment installation files. You also can choose to create a new
resource location for the installation files that you need to install an operating environment.
v Modify Install Settings. This configuration option allows you to provide or change late-binding
installation settings for the target logical partition of the operating environment installation. In almost
all cases, you need to update a number of late-binding installation settings before you can deploy an
operating environment on a logical partition. Late-binding installation settings are those settings that
are specific to an individual installation instance, for example, the IP address and subnet mask for the
target logical partition on which the operating environment is to be deployed. You also can view
settings for any custom automatic installation files that are included with the system plan. However,
you cannot change these settings. You can use custom installation files to customize the installation of
an operating environment during the system plan deployment process only if the system plan already
contains the necessary files. You can create these files and associate them with a system plan only in
the System Planning Tool (SPT).
When you create a system plan on the HMC, you can deploy the resulting system plan to create identical
logical partition configurations on managed systems with identical hardware. The system plan contains
specifications for the logical partitions and partition profiles of the managed system that you used as the
basis of creating the system plan.
If you use HMC V7R3.3.0 or later to create the system plan, the system plan can also include operating
environment information for a logical partition. You still can deploy the system plan to systems that are
managed by an earlier version of the HMC. Earlier versions of the HMC Deploy System Plan Wizard can
deploy the logical partitions in the system plan and ignore any operating environment installation
information in the system plan.
Note: Although the system plan that you create by using HMC V7R3.3.0 or later can contain some
information about AIX or Linux operating environments on logical partitions in the system plan, it does
not contain the information needed to install those operating environments as part of deploying the
system plan. If you want a system plan to have the necessary information for installing an AIX or Linux
operating environment, you need to use the System Planning Tool (SPT). You can use the SPT either to
create a system plan or to convert a system plan that you create with the HMC to the format that the SPT
uses and then change the system plan in the SPT. You also must use the SPT if you want to include
specific automatic installation files with the system plan, or to customize any automatic installation files
for the system plan.
The new system plan also can contain hardware information that the HMC is able to obtain from the
selected managed system. However, the amount of hardware information that the HMC can capture for
the new system plan varies based on the method that the HMC uses to gather the hardware information.
There are two methods that the HMC potentially can use: inventory gathering and hardware discovery.
For example, when using inventory gathering, the HMC can detect virtual device configuration
information for the Virtual I/O Server. Additionally, the HMC can use one or both of these methods to
detect disk and tape information for IBM i logical partitions.
Ensure that you meet the requirements for using either or both of the inventory gathering and hardware
discovery methods before you create your system plan. See System plan creation requirements for more
information.
To create a system plan by using the Hardware Management Console, complete the following steps:
1. In the navigation area, select System Plans. The System Plans page opens.
2. In the Tasks area, select Create System Plan. The Create System Plan window opens.
3. Select the managed system that you want to use as the basis for the new system plan.
4. Enter a name and description for the new system plan.
5. Optional: For Hardware Management Console V7R3.3.2, or later, select whether you want to retrieve
inactive and unallocated hardware resources. This option appears only if the managed system is
capable of hardware discovery, and the option is selected by default.
Note: If you do not select the Retrieve inactive and unallocated hardware resources option, the
HMC does not perform a new hardware discovery, but instead uses the data in the inventory cache on
the system.
Now that you have a new system plan, you can export the system plan, import it onto another managed
system, and deploy the system plan to that managed system.
Note: As an alternative to the HMC Web user interface, you can use the mksysplan command on the
HMC to create a system plan based on the configuration of an existing managed system.
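For example, the following command sketches how a system plan file might be created from the HMC
command line. The file name and managed system name are examples only:
mksysplan -f myplan.sysplan -m mysystem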
The Deploy System Plan wizard validates the system plan prior to deployment to ensure that it can be
deployed successfully. The wizard validates the system plan in two phases. The first phase of the
validation process is hardware validation. During this phase, the wizard validates that the processors,
memory, and I/O adapters that are available on the managed system match or exceed those that the
system plan specifies. The wizard also validates that the hardware placement on the managed system
matches the hardware placement that the system plan specifies.
The second phase of the validation process is partition validation. During this phase, the wizard validates
that the logical partitions on the managed system match those in the system plan. If the system plan
contains provisioning information, the wizard also validates the provisioning items in the system plan to
determine which items are deployable.
If any step in the partition validation process fails for the system plan, validation of the entire system
plan fails.
Specifications
This topic defines the range of configuration possibilities, including the minimum number of resources
needed and the maximum number of resources allowed.
To activate the Virtual I/O Server, the PowerVM Editions (or Advanced POWER Virtualization)
hardware feature is required. A logical partition with enough resources to share with other logical
partitions is required. The following is a list of minimum hardware requirements that must be available
to create the Virtual I/O Server.
Table 16. Resources that are required
Resource                            Requirement
Hardware Management Console or      The HMC or Integrated Virtualization Manager is required to create the
Integrated Virtualization Manager   logical partition and assign resources.
Storage adapter                     The server logical partition needs at least one storage adapter.
Physical disk                       The disk must be at least 30 GB. This disk can be shared.
Ethernet adapter                    If you want to route network traffic from virtual Ethernet adapters to a
                                    Shared Ethernet Adapter, you need an Ethernet adapter.
The Virtual I/O Server supports client logical partitions running the following operating systems on
POWER5 processor-based servers:
v AIX 5.3 (or later)
v SUSE Linux Enterprise Server 9 (or later)
v SUSE Linux Enterprise Server 10 (or later)
v Red Hat Enterprise Linux version 4 (or later)
v Red Hat Enterprise Linux version 5 (or later)
Capacity planning
This topic includes capacity-planning considerations for the Virtual I/O Server, including information
about hardware resources and limitations.
Different I/O subsystems have different performance qualities, as does virtual SCSI. This section
discusses the performance differences between physical and virtual I/O, covering I/O latency and I/O
bandwidth.
I/O latency is the amount of time that passes between the initiation and completion of a disk I/O
operation. For example, consider a program that performs 1000 random disk I/O operations, one at a
time. If the time to complete an average operation is 6 milliseconds, the program runs in no fewer than 6
seconds. However, if the average response time is reduced to 3 milliseconds, the run time might be
reduced by 3 seconds. Applications that are multithreaded or use asynchronous I/O might be less
sensitive to latency, but in most circumstances, lower latency can help improve performance.
Because virtual SCSI is implemented as a client and server model, there is some latency that does not
exist with directly attached storage. The latency might range from 0.03 to 0.06 milliseconds per I/O
operation depending primarily on the block size of the request. The average latency is comparable for
both physical disk and logical volume-backed virtual drives. The latency experienced when using a
Virtual I/O Server in a shared-processor logical partition can be higher and more variable than using a
Virtual I/O Server in a dedicated logical partition. For additional information about the performance
differences between dedicated logical partitions and shared-processor logical partitions, see Virtual SCSI
sizing considerations on page 61.
The following table identifies latency (in milliseconds) for different block-size transmissions on both
physical disk and logical-volume-backed virtual SCSI disks.
Table 19. Increase in disk I/O response time based on block size (in milliseconds)
Backing type     4 K      8 K      32 K     64 K     128 K
Physical disk    0.032    0.033    0.033    0.040    0.061
Logical volume   0.035    0.036    0.034    0.040    0.063
The average disk response time increases as the block size increases. The latency increase added by a
virtual SCSI operation is relatively greater for smaller block sizes because their base response time is shorter.
I/O bandwidth is the maximum amount of data that can be read or written to a storage device in a unit
of time. Bandwidth can be measured from a single thread or from a set of threads running concurrently.
Although many customer applications are more sensitive to latency than bandwidth, bandwidth is crucial
for many typical operations, such as backing up and restoring persistent data.
Virtual SCSI sizing considerations:
Understand the processor and memory-sizing considerations when implementing virtual SCSI.
When you are designing and implementing a virtual SCSI application environment, consider the
following sizing issues:
v The amount of memory allocated to the Virtual I/O Server
v The processor entitlement of the Virtual I/O Server
v Whether the Virtual I/O Server is run as a shared-processor logical partition or as a dedicated
processor logical partition
v The maximum transfer size limitation for physical devices and AIX clients
The processor impact of using virtual I/O on the client is insignificant. The processor cycles used on the
client to perform a virtual SCSI I/O operation are comparable to those of a locally attached I/O device.
Thus, there is no increase or decrease in sizing on the client logical partition for a known task. These
sizing techniques do not anticipate combining the function of shared Ethernet with the virtual SCSI
server. If the two are combined, consider adding resources to account for the shared Ethernet activity
with virtual SCSI.
The amount of processor entitlement required for a virtual SCSI server is based on the maximum I/O
rates required of it. Because virtual SCSI servers do not normally run at maximum I/O rates all of the
time, the use of surplus processor time is potentially wasted when using dedicated processor logical
partitions. In the first of the following sizing methodologies, you need a good understanding of the I/O
rates and I/O sizes required of the virtual SCSI server. In the second, the virtual SCSI server is sized
based on the I/O configuration.
The sizing methodology used is based on the observation that the processor time required to perform an
I/O operation on the virtual SCSI server is fairly constant for a given I/O size. It is a simplification to
make this statement, because different device drivers have subtly varying efficiencies. However, under
most circumstances, the I/O devices supported by the virtual SCSI server are sufficiently similar. The
following table shows the approximate number of cycles per operation for both physical disk and logical
volume operations on a 1.65 GHz processor. These numbers are measured at the physical processor; simultaneous
multithreading (SMT) operation is assumed. For other frequencies, scaling by the ratio of the frequencies
(for example, for a 1.5 GHz processor, multiply the cycles per operation by 1.65 GHz / 1.5 GHz) is
sufficiently accurate to produce a reasonable sizing.
Table 21. Approximate cycles per operation on a 1.65 GHz logical partition
Disk type        4 KB      8 KB      32 KB     64 KB     128 KB
Physical disk    45,000    47,000    58,000    81,000    120,000
Logical volume   49,000    51,000    59,000    74,000    105,000
Consider a Virtual I/O Server that uses three client logical partitions on physical disk-backed storage.
The first client logical partition requires a maximum of 7,000 8-KB operations per second. The second
client logical partition requires a maximum of 10,000 8-KB operations per second. The third client logical
partition requires a maximum of 5,000 128-KB operations per second. The number of 1.65 GHz processors
for this requirement is approximately ((7,000 × 47,000 + 10,000 × 47,000 + 5,000 × 120,000) / 1,650,000,000)
= 0.85 processors, which rounds up to a single processor when using a dedicated processor logical
partition.
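The preceding calculation can be generalized. The following Python sketch is illustrative only; it encodes the per-operation cycle costs from Table 21 for physical disk backing and the hypothetical client workloads from the example above:

  # Approximate cycles per operation at 1.65 GHz (physical disk backing, Table 21).
  CYCLES_PER_OP = {"4KB": 45_000, "8KB": 47_000, "32KB": 58_000, "64KB": 81_000, "128KB": 120_000}
  CYCLES_PER_PROCESSOR = 1_650_000_000  # one 1.65 GHz processor

  def processors_required(workloads):
      """workloads: list of (operations_per_second, block_size) tuples."""
      total_cycles = sum(ops * CYCLES_PER_OP[size] for ops, size in workloads)
      return total_cycles / CYCLES_PER_PROCESSOR

  # The three client logical partitions from the example above.
  print(processors_required([(7000, "8KB"), (10000, "8KB"), (5000, "128KB")]))  # ~0.85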
If the I/O rates of the client logical partitions are not known, you can size the Virtual I/O Server to the
maximum I/O rate of the storage subsystem attached. The sizing could be biased toward small I/O
operations or large I/O operations. Sizing to maximum capacity for large I/O operations will balance the
processor capacity of the Virtual I/O Server to the potential I/O bandwidth of the attached I/O. The
negative aspect of this sizing methodology is that, in nearly every case, more processor entitlement will
be assigned to the Virtual I/O Server than it will typically consume.
Consider a case in which a Virtual I/O Server manages 32 physical SCSI disks. An upper limit of
processors required can be established based on assumptions about the I/O rates that the disks can
achieve. If it is known that the workload is dominated by random 8-KB operations, then assume that
each disk is capable of approximately 200 disk I/O operations per second (15k rpm drives).
At peak, the Virtual I/O Server would need to serve approximately 32 disks × 200 I/O operations per
second × 47,000 cycles per operation, resulting in a requirement for approximately 0.19 processors.
Viewed another way, a Virtual I/O Server running on a single processor should be capable
of supporting more than 150 disks doing 8-KB random I/O operations.
Alternatively, if the Virtual I/O Server is sized for maximum bandwidth, the calculation results in a
higher processor requirement. The difference is that maximum bandwidth assumes sequential I/O.
Because disks are more efficient when they are performing large, sequential I/O operations than they are
when performing small, random I/O operations, a higher number of I/O operations per second can be
performed. Assume that the disks are capable of 50 MB per second when doing 128 KB I/O operations.
That situation implies each disk could average 390 disk I/O operations per second. Thus, the amount of
processing power necessary to support 32 disks, each doing 390 I/O operations per second with an
operation cost of 120,000 cycles (32 × 390 × 120,000 / 1,650,000,000) results in approximately 0.91
processors. Consequently, a Virtual I/O Server running on a single processor should be capable of
driving approximately 32 fast disks to maximum throughput.
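The two bounds above can be computed with the same small calculation. The following Python sketch is illustrative only; the per-disk operation rates and cycle costs are the assumptions used in the text (random 8-KB operations at roughly 200 operations per second, sequential 128-KB operations at roughly 390 operations per second):

  CYCLES_PER_PROCESSOR = 1_650_000_000  # one 1.65 GHz processor

  def processors_for_disks(num_disks, ops_per_disk_per_second, cycles_per_operation):
      return num_disks * ops_per_disk_per_second * cycles_per_operation / CYCLES_PER_PROCESSOR

  print(processors_for_disks(32, 200, 47_000))    # random 8-KB bound: ~0.19 processors
  print(processors_for_disks(32, 390, 120_000))   # sequential 128-KB bound: ~0.91 processors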
Defining virtual SCSI servers in shared processor logical partitions allows more specific processor
resource sizing and potential recovery of unused processor time by uncapped logical partitions. However,
using shared-processor logical partitions for virtual SCSI servers can frequently increase I/O response
time and make for somewhat more complex processor entitlement sizings.
The sizing methodology should be based on the same operation costs for dedicated logical partition I/O
servers, with added entitlement for running in shared-processor logical partitions. Configure the Virtual
I/O Server as uncapped, so that, if the Virtual I/O Server is undersized, there is opportunity to get more
processor time to serve I/O operations.
Because I/O latency with virtual SCSI can vary due to a number of conditions, give special consideration
to the sizing of the Virtual I/O Server if a logical partition has high I/O requirements.
Memory sizing in virtual SCSI is simplified because there is no caching of file data in the memory of the
virtual SCSI server. Because there is no data caching, the memory requirements for the virtual SCSI server
are fairly modest. With large I/O configurations and very high data rates, a 1 GB memory allocation for
the virtual SCSI server is likely to be sufficient. For low I/O rate situations with a small number of
attached disks, 512 MB will most likely suffice.
If you add another virtual target device to the virtual SCSI server adapter and the new virtual target
device has a smaller maximum transfer size than the other configured devices on that adapter, the Virtual
I/O Server does not show a new virtual device to the client. At the time the virtual target device is
created, the Virtual I/O Server displays a message stating that the new target device will not be visible to
the client until you reboot the client.
To display the maximum transfer size of a physical device, use the following command: lsdev -attr
max_transfer -dev hdiskN
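For example, running the command against a hypothetical device named hdisk0 produces output similar to the following; the exact value depends on the device and device driver (0x40000 corresponds to 256 KB):
lsdev -attr max_transfer -dev hdisk0
value
0x40000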
Network requirements:
This topic includes information you need in order to accurately size your Shared Ethernet Adapter
environment.
To plan for using Shared Ethernet Adapters, you must determine your network needs. This section gives
overview information of what should be considered when sizing the Shared Ethernet Adapter
environment. Sizing the Virtual I/O Server for the Shared Ethernet Adapter involves the following
factors:
v Defining the target bandwidth (MB per second), or transaction rate requirements (operations per
second). The target performance of the configuration must be determined from your workload
requirements.
v Defining the type of workload (streaming or transaction oriented).
v Identifying the maximum transmission unit (MTU) size that will be used (1500 or jumbo frames).
v Determining if the Shared Ethernet Adapter will run in a threaded or nonthreaded environment.
v Knowing the throughput rates that various Ethernet adapters can provide (see Adapter selection).
v Knowing the processor cycles required per byte of throughput or per transaction (see Processor
allocation).
Bandwidth requirement
The primary consideration is determining the target bandwidth on the physical Ethernet adapter of the
Virtual I/O Server. This will determine the rate that data can be transferred between the Virtual I/O
Server and the client logical partitions. After the target rate is known, the correct type and number of
network adapters can be selected. For example, Ethernet adapters of various speeds could be used. One
or more adapters could be used on individual networks, or they could be combined using Link
Aggregation (or EtherChannel).
The type of workload to be performed must be considered, whether it is streaming of data for workloads
such as file transfer, data backup, or small transaction workloads, such as remote procedure calls. The
streaming workload consists of large, full-sized network packets and associated small, TCP
acknowledgment packets. Transaction workloads typically involve smaller packets or might involve small
requests, such as a URL, and a larger response, such as a Web page. A Virtual I/O Server frequently needs
to support both streaming and small-packet I/O during various periods of time. In that case, approach
the sizing from both models.
MTU size
The MTU size of the network adapters must also be considered. The standard Ethernet MTU is 1500
bytes. Gigabit Ethernet and 10 gigabit Ethernet can support 9000-byte MTU jumbo frames. Jumbo frames
might reduce the processor cycles for the streaming types of workloads. However, for small workloads,
the larger MTU size might not help reduce processor cycles.
Use threaded mode when virtual SCSI will be run on the same Virtual I/O Server logical partition as
Shared Ethernet Adapter. Threaded mode helps ensure that virtual SCSI and the Shared Ethernet Adapter
can share the processor resource appropriately. However, threading increases instruction-path length,
which uses additional processor cycles. If the Virtual I/O Server logical partition will be dedicated to
running shared Ethernet devices (and associated virtual Ethernet devices) only, the adapters should be
configured with threading disabled. For more information, see Processor allocation on page 67.
Adapter throughput
Knowing the throughput capability of different Ethernet adapters can help you determine which adapters
to use as Shared Ethernet Adapters and how many adapters to use. For more information, see Adapter
selection.
Processor entitlement
You must determine how much processor power is required to move data through the adapters at the
desired rate. Networking device drivers are typically processor-intensive. Small packets can come in at a
faster rate and use more processor cycles than larger packet workloads. Larger packet workloads are
typically limited by network wire bandwidth and come in at a slower rate, thus requiring less processor
power than small packet workloads for the amount of data transferred.
Adapter selection:
Use this section to find the attributes and performance characteristics of various types of Ethernet
adapters to help you select which adapters to use in your environment.
This section provides approximate throughput rates for various Ethernet adapters set at various MTU
sizes. Use this information to determine which adapters will be needed to configure a Virtual I/O Server.
To make this determination, you must know the desired throughput rate of the client logical partitions.
Following are general guidelines for network throughput. These numbers are not specific, but they can
serve as a general guideline for sizing. In the following tables, the 100 Mb, 1 Gb, and 10 Gb speeds are
rounded down for estimating.
Table 23. Full duplex (two direction) streaming rates on full duplex network
Adapter speed                                                             Approximate throughput rate
10 Mb Ethernet                                                            2 MB/second
100 Mb Ethernet                                                           20 MB/second
1000 Mb Ethernet (Gb Ethernet)                                            150 MB/second
10000 Mb Ethernet (10 Gb Ethernet, Host Ethernet Adapter or               1500 MB/second
Integrated Virtual Ethernet)
The following tables list maximum network payload speeds, which are user payload data rates that can
be obtained by sockets-based programs for applications that are streaming data. The rates are a result of
the network bit rate, MTU size, physical level overhead (such as interframe gaps and preamble bits), data
link headers, and TCP/IP headers. A gigahertz-speed processor is assumed. These numbers are optimal
for a single LAN. If your network traffic is going through additional network devices, your results might
vary.
In the following tables, raw bit rate is the physical media bit rate and does not reflect interframe gaps,
preamble bits, data link headers, and trailers. Interframe gaps, preamble bits, data link headers, and
trailers can all reduce the effective usable bit rate of the wire.
Single direction (simplex) TCP streaming rates are rates that can be achieved by sending data from one
machine to another in a memory-to-memory test. Full-duplex media can usually perform slightly better
than half-duplex media because the TCP acknowledgment packets can flow without contending for the
same wire that the data packets are flowing on.
Table 24. Single direction (simplex) TCP streaming rates
Network type                               Raw bit rate (Mb)            Payload rate (Mb)   Payload rate (MB)
10 Mb Ethernet, Half Duplex                10                           6                   0.7
10 Mb Ethernet, Full Duplex                10 (20 Mb full duplex)       9.48                1.13
100 Mb Ethernet, Half Duplex               100                          62                  7.3
100 Mb Ethernet, Full Duplex               100 (200 Mb full duplex)     94.8                11.3
1000 Mb Ethernet, Full Duplex, MTU 1500    1000 (2000 Mb full duplex)   948                 113
1000 Mb Ethernet, Full Duplex, MTU 9000    1000 (2000 Mb full duplex)   989                 117.9
Full-duplex TCP streaming workloads have data streaming in both directions. Workloads that can send
and receive packets concurrently can take advantage of full duplex media. Some media, for example
Ethernet in half-duplex mode, cannot send and receive concurrently, thus they will not perform any
better, and can usually degrade performance, when running duplex workloads. Duplex workloads will
not increase at a full doubling of the rate of a simplex workload because the TCP acknowledgment
packets returning from the receiver must now compete with data packets flowing in the same direction.
Table 25. Two direction (duplex) TCP streaming rates
Network type                                               Raw bit rate (Mb)            Payload rate (Mb)     Payload rate (MB)
10 Mb Ethernet, Half Duplex                                10                           5.8                   0.7
10 Mb Ethernet, Full Duplex                                10 (20 Mb full duplex)       18                    2.2
100 Mb Ethernet, Half Duplex                               100                          58                    7
100 Mb Ethernet, Full Duplex                               100 (200 Mb full duplex)     177                   21.1
1000 Mb Ethernet, Full Duplex, MTU 1500                    1000 (2000 Mb full duplex)   1470 (1660 peak)      175 (198 peak)
1000 Mb Ethernet, Full Duplex, MTU 9000                    1000 (2000 Mb full duplex)   1680 (1938 peak)      200 (231 peak)
10000 Mb Ethernet, Host Ethernet Adapter (or Integrated    10000                        14680 (15099 peak)    1750 (1800 peak)
Virtual Ethernet), Full Duplex, MTU 1500
10000 Mb Ethernet, Host Ethernet Adapter (or Integrated    10000                        16777 (19293 peak)    2000 (2300 peak)
Virtual Ethernet), Full Duplex, MTU 9000
Note:
1. Peak numbers represent optimal throughput with multiple TCP sessions running in each direction.
Other rates are for a single TCP session.
2. 1000 Mb Ethernet (gigabit Ethernet) duplex rates are for the PCI-X adapter in PCI-X slots.
3. Data rates are for TCP/IP using the IPv4 protocol. Adapters with MTU set to 9000 have RFC 1323
enabled.
Processor allocation:
This section contains processor-allocation guidelines for both dedicated processor logical partitions and
shared processor logical partitions.
Because Ethernet running MTU size of 1500 bytes consumes more processor cycles than Ethernet running
Jumbo frames (MTU 9000), the guidelines are different for each situation. In general, the processor
utilization for large packet workloads on jumbo frames is approximately half that required for MTU 1500.
If MTU is set to 1500, provide one processor (1.65 GHz) per Gigabit Ethernet adapter to help reach
maximum bandwidth. This equals ten 100-Mb Ethernet adapters if you are using smaller networks. For
smaller transaction workloads, plan to use one full processor to drive the Gigabit Ethernet workload to
maximum throughput. For example, if two Gigabit Ethernet adapters will be used, allocate up to two
processors to the logical partition.
If MTU is set to 9000 (jumbo frames), provide 50% of one processor (1.65 GHz) per Gigabit Ethernet
adapter to reach maximum bandwidth. Small packet workloads should plan to use one full processor to
drive the Gigabit Ethernet workload. Jumbo frames have no effect on the small packet workload case.
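As a rough planning aid, these per-adapter rules of thumb for streaming workloads can be expressed as a small calculation. The following Python sketch is illustrative only and simply encodes the one-processor and half-processor guidelines stated above:

  def sea_processor_estimate(num_gigabit_adapters, mtu):
      """Estimate 1.65 GHz processors for streaming workloads on a Shared Ethernet Adapter."""
      per_adapter = 1.0 if mtu == 1500 else 0.5  # jumbo frames (MTU 9000) need about half
      return num_gigabit_adapters * per_adapter

  print(sea_processor_estimate(2, 1500))  # two Gigabit adapters at MTU 1500 -> 2.0 processors
  print(sea_processor_estimate(2, 9000))  # same adapters with jumbo frames  -> 1.0 processor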
The sizing provided is divided into two workload types: TCP streaming and TCP request and response.
Both MTU 1500 and MTU 9000 networks were used in the sizing, which is provided in terms of machine
cycles per byte of throughput for streaming or per transaction for request/response workloads.
The data in the following tables was derived using the following formula:
cycles per byte (or per transaction) = (number of processors × processor utilization × processor frequency in cycles per second) / throughput rate in bytes per second (or transactions per second)
For the purposes of this test, the numbers were measured on a logical partition with one 1.65 GHz
processor with simultaneous multi-threading (SMT) enabled.
For other processor frequencies, the numbers in these tables can be scaled by the ratio of the processor
frequencies for approximate values to be used for sizing. For example, for a 1.5 GHz processor speed, use
1.65/1.5 × the cycles per byte value from the table. This example would result in a value of 1.1 times the
value in the table, thus requiring 10% more cycles to adjust for the 10% slower clock rate of the 1.5 GHz
processor.
To use these values, multiply your required throughput rate (in bytes or transactions) by the cycles per
byte value in the following tables. This result will give you the required machine cycles for the workload
for a 1.65 GHz speed. Then adjust this value by the ratio of the actual machine speed to this 1.65 GHz
speed. To find the number of processors, divide the result by 1,650,000,000 cycles (or the cycles rate if you
adjusted to a different speed machine). You would need the resulting number of processors to drive the
workload.
For example, if the Virtual I/O Server must deliver 200 MB per second of streaming throughput, the
following formula would be used:
200 × 1024 × 1024 bytes × 11.2 cycles per byte = 2,348,810,240 cycles; 2,348,810,240 cycles / 1,650,000,000 cycles per processor = 1.42 processors.
In round numbers, it would require 1.5 processors in the Virtual I/O Server to handle this workload.
Such a workload could then be handled with either a 2-processor dedicated logical partition or a
1.5-processor shared-processor logical partition.
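The following Python sketch restates that calculation; it is illustrative only. The 11.2 cycles-per-byte figure is the value used in the example above, and the frequency-scaling rule is the one described earlier in this section:

  CYCLES_PER_PROCESSOR_165GHZ = 1_650_000_000

  def processors_for_throughput(mb_per_second, cycles_per_byte, clock_ghz=1.65):
      cycles_needed = mb_per_second * 1024 * 1024 * cycles_per_byte
      # Scale capacity to the actual processor frequency relative to the 1.65 GHz measurements.
      cycles_available = CYCLES_PER_PROCESSOR_165GHZ * (clock_ghz / 1.65)
      return cycles_needed / cycles_available

  print(processors_for_throughput(200, 11.2))  # ~1.42 processors at 1.65 GHz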
The following tables show the machine cycles per byte for a TCP-streaming workload.
The following tables show the machine cycles per transaction for a request and response workload. A
transaction is defined as a round-trip request and reply size.
Table 28. Shared Ethernet with threading option enabled
Size of transaction          Transactions per second and Virtual I/O Server utilization   MTU 1500 or 9000, cycles per transaction
Small packets (64 bytes)     59,722 TPS at 83.4% processor utilization                    23,022
Large packets (1024 bytes)   51,956 TPS at 80% processor utilization                      25,406
The preceding tables demonstrate that the threading option of the shared Ethernet adds approximately
16% to 20% more machine cycles per transaction for MTU 1500 streaming, and approximately 31% to 38%
more machine cycles per transaction for MTU 9000. The threading option adds more machine cycles per
transaction at lower workloads due to the threads being started for each packet. At higher workload
rates, like full duplex or the request and response workloads, the threads can run longer without waiting
and being redispatched. The thread option is a per-shared Ethernet option that can be configured by
Virtual I/O Server commands. Disable the thread option if the shared Ethernet is running in a Virtual
I/O Server logical partition by itself (without virtual SCSI in the same logical partition).
You can enable or disable threading using the -attr thread option of the mkvdev command. To enable
threading, use the -attr thread=1 option. To disable threading, use the -attr thread=0 option. For
example, the following command disables threading for Shared Ethernet Adapter ent1:
mkvdev -sea ent1 -vadapter ent5 -default ent5 -defaultid 1 -attr thread=0
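Similarly, with the same adapter names as the example above, threading can be enabled by setting the thread attribute to 1 when the Shared Ethernet Adapter is created:
mkvdev -sea ent1 -vadapter ent5 -default ent5 -defaultid 1 -attr thread=1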
Sizing a Virtual I/O Server for shared Ethernet on a shared processor logical partition
Creating a shared-processor logical partition for a Virtual I/O Server can be done if the Virtual I/O
Server is running slower-speed networks (for example 10/100 Mb) and a full processor logical partition is
not needed. It is recommended that this be done only if the Virtual I/O Server workload is less than half
of one processor.
If you are creating a Virtual I/O Server in a shared-processor logical partition, add additional processor
entitlement as a sizing contingency.
Memory allocation:
In general, 512 MB of memory per logical partition is sufficient for most configurations. Enough memory
must be allocated for the Virtual I/O Server data structures. Ethernet adapters and virtual devices use
dedicated receive buffers. These buffers are used to store the incoming packets, which are then sent over
the outgoing device.
A physical Ethernet adapter typically uses 4 MB for MTU 1500 or 16 MB for MTU 9000 for dedicated
receive buffers for gigabit Ethernet. Other Ethernet adapters are similar. Virtual Ethernet typically uses 6
MB for dedicated receive buffers. However, this number can vary based on workload. Each instance of a
physical or virtual Ethernet would need memory for this number of buffers. In addition, the system has
an mbuf buffer pool per processor that is used if additional buffers are needed. These mbufs typically
occupy 40 MB.
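The buffer figures above can be combined into a rough memory estimate. The following Python sketch is illustrative only; it uses the approximate per-adapter buffer sizes and mbuf pool figure given in this section together with the general 512 MB allocation mentioned earlier:

  def vios_memory_estimate_mb(physical_mtu1500=0, physical_mtu9000=0, virtual_ethernet=0):
      base = 512                      # general allocation for the Virtual I/O Server logical partition
      buffers = physical_mtu1500 * 4 + physical_mtu9000 * 16 + virtual_ethernet * 6
      mbuf_pool = 40                  # approximate mbuf buffer pool
      return base + buffers + mbuf_pool

  # One gigabit adapter running jumbo frames and two virtual Ethernet adapters.
  print(vios_memory_estimate_mb(physical_mtu9000=1, virtual_ethernet=2))  # ~580 MB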
System requirements
v The server must be a POWER6 processor-based server.
v The server firmware must be at release 3.4.2, or later.
v The Hardware Management Console (HMC) must be at version 7 release 3.4.2, or later.
v The Integrated Virtualization Manager must be at version 2.1.1, or later.
v The PowerVM Active Memory Sharing technology must be activated. The PowerVM Active Memory
Sharing technology is available with the PowerVM Enterprise Edition for which you must obtain and
enter a PowerVM Editions activation code.
Redundancy considerations
Redundancy options are available at several levels in the virtual I/O environment. Multipathing,
mirroring, and RAID redundancy options exist for the Virtual I/O Server and some client logical
partitions. Ethernet Link Aggregation (also called EtherChannel) is also an option for the client logical
partitions, and the Virtual I/O Server provides Shared Ethernet Adapter failover. There is also support for
node failover (HACMP) for nodes using virtual I/O resources.
This section contains information about redundancy for both the client logical partitions and the Virtual
I/O Server. While these configurations help protect from the failure of one of the physical components,
such as a disk or network adapter, they might cause the client logical partition to lose access to its
devices if the Virtual I/O Server fails. The Virtual I/O Server can be made redundant by running a
second instance of it in another logical partition. When running two instances of the Virtual I/O Server,
you can use LVM mirroring, multipath I/O, network interface backup, or multipath routing with dead
gateway detection in the client logical partition to provide highly available access to virtual resources
hosted in separate Virtual I/O Server logical partitions.
Multipath I/O:
Multiple virtual SCSI or virtual fibre channel adapters in a client logical partition can access the same
disk through multiple Virtual I/O Server logical partitions. This section describes a virtual SCSI
multipath device configuration. If correctly configured, the client recognizes the disk as a multipath
device. If you are using PowerVM Active Memory Sharing technology (or shared memory), you can also
use a multipath configuration to allow two paging VIOS logical partitions to access common paging
space devices.
MPIO is not available for IBM i client logical partitions. Instead, you must use mirroring to create
redundancy. For more information, see Mirroring for client logical partitions on page 72.
Not all virtual SCSI devices are capable of MPIO. To create an MPIO configuration, the exported device
at the Virtual I/O Server must conform to the following rules:
v The device must be backed by a physical volume. Logical volume-backed virtual SCSI devices are not
supported in an MPIO configuration.
v The device must be accessible from multiple Virtual I/O Server logical partitions.
v The device must be an MPIO-capable device.
Note: MPIO-capable devices are those that contain a unique identifier (UDID) or IEEE volume
identifier. For instructions about how to determine whether disks have a UDID or IEEE volume
identifier, see Identifying exportable disks on page 104.
When setting up an MPIO configuration for virtual SCSI devices on the client logical partition, you must
consider the reservation policy of the device on the Virtual I/O Server. To use an MPIO configuration at
the client, none of the virtual SCSI devices on the Virtual I/O Server can be reserving the virtual SCSI
device. Ensure the reserve_policy attribute of the device is set to no_reserve.
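For example, assuming a backing device named hdisk4, commands of the following form can typically be used on the Virtual I/O Server to check and change the attribute; the device name is a placeholder and the exact syntax should be verified for your Virtual I/O Server level:
lsdev -dev hdisk4 -attr reserve_policy
chdev -dev hdisk4 -attr reserve_policy=no_reserve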
Achieve mirroring for client logical partitions by using two virtual SCSI adapters.
The client partition can mirror its logical volumes using two virtual SCSI client adapters. Each of these
adapters should be assigned to separate Virtual I/O Server partitions. The two physical disks are each
attached to a separate Virtual I/O Server partition and made available to the client partition through a
virtual SCSI server adapter. This configuration protects virtual disks in a client partition against the
failure of any of the following:
v One physical disk
v One physical adapter
v One Virtual I/O Server
The performance of your system might be impacted when using a RAID 1 configuration.
Learn about High Availability Cluster Multi-Processing (HACMP) in the Virtual I/O Server.
HACMP supports certain configurations that utilize the Virtual I/O Server, virtual SCSI and virtual
networking capabilities. For the most recent support and configuration information, see the HACMP for
System p Web site. For HACMP documentation, see High Availability Cluster Multi-Processing for AIX.
For IBM i client partitions, you must use mirroring to create redundancy. For details, see Mirroring for
client logical partitions.
Be aware of the following considerations when implementing HACMP and virtual SCSI:
v The volume group must be defined as Enhanced Concurrent Mode. Enhanced Concurrent Mode is the
preferred mode for sharing volume groups in HACMP clusters because volumes are accessible by
multiple HACMP nodes. If file systems are used on the standby nodes, those file systems are not
mounted until the point of failover. If shared volumes are accessed directly (without file systems) in
Enhanced Concurrent Mode, these volumes are accessible from multiple nodes, and as a result, access
must be controlled at a higher layer.
v If any one cluster node accesses shared volumes through virtual SCSI, then all nodes must. This means
that disks cannot be shared between a logical partition using virtual SCSI and a node directly accessing
those disks.
Be aware of the following considerations when implementing HACMP and virtual Ethernet:
v IP Address Takeover (IPAT) by way of aliasing must be used. IPAT by way of Replacement and MAC
Address Takeover are not supported.
v Avoid using the HACMP PCI Hot Plug facility in a Virtual I/O Server environment. PCI Hot Plug
operations are available through the Virtual I/O Server. When an HACMP node is using virtual I/O,
the HACMP PCI Hot Plug facility is not meaningful because the I/O adapters are virtual rather than
physical.
v All virtual Ethernet interfaces defined to HACMP should be treated as single-adapter networks. In
particular, you must use the ping_client_list attribute to monitor and detect failure of the network
interfaces.
v If the Virtual I/O Server has multiple physical interfaces on the same network, or if there are two or
more HACMP nodes using the Virtual I/O Server in the same frame, HACMP is not informed of, and
does not react to, single physical interface failures. This does not limit the availability of the entire
cluster because the Virtual I/O Server routes traffic around the failure.
v If the Virtual I/O Server has only a single physical interface on a network, failure of that physical
interface is detected by HACMP. However, that failure isolates the node from the network.
For example, ent0 and ent1 can be aggregated to ent3. The system considers these aggregated adapters
as one adapter, and all adapters in the Link Aggregation device are given the same hardware address, so
they are treated by remote systems as if they are one adapter.
Link Aggregation can help provide more redundancy because individual links might fail, and the Link
Aggregation device will fail over to another adapter in the device to maintain connectivity. For example,
in the previous example, if ent0 fails, the packets are automatically sent on the next available adapter,
ent1, without disruption to existing user connections. ent0 automatically returns to service on the Link
Aggregation device when it recovers.
You can configure a Shared Ethernet Adapter to use a Link Aggregation, or EtherChannel, device as the
physical adapter.
Shared Ethernet Adapter failover provides redundancy by configuring a backup Shared Ethernet Adapter
on a different Virtual I/O Server logical partition that can be used if the primary Shared Ethernet
Adapter fails. The network connectivity in the client logical partitions continues without disruption.
A Shared Ethernet Adapter is comprised of a physical adapter (or several physical adapters grouped
under a Link Aggregation device) and one or more virtual Ethernet adapters. It can provide layer 2
connectivity to multiple client logical partitions through the virtual Ethernet adapters.
The Shared Ethernet Adapter failover configuration uses the priority value given to the virtual Ethernet
adapters during their creation to determine which Shared Ethernet Adapter will serve as the primary and
which will serve as the backup. The Shared Ethernet Adapter that has the virtual Ethernet adapter
configured with the lower priority value is used as the primary adapter.
A Shared Ethernet Adapter in failover mode might optionally have more than one trunk virtual Ethernet.
In this case, all the virtual Ethernet adapters in a Shared Ethernet Adapter must have the same priority
value. Also, the virtual Ethernet adapter used specifically for the control channel does not need to have
the trunk adapter setting enabled. The virtual Ethernet adapters used for the control channel on each
Shared Ethernet Adapter in failover mode must have an identical PVID value, and that PVID value must
be unique in the system, so that no other virtual Ethernet adapters on the same system are using that
PVID.
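For example, a Shared Ethernet Adapter with failover enabled is typically created with a command of the following form, where ent0 is the physical adapter, ent1 is the trunk virtual Ethernet adapter, and ent2 is the control channel virtual Ethernet adapter; the adapter names are placeholders:
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2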
To ensure prompt recovery times, when you enable the Spanning Tree Protocol on the switch ports
connected to the physical adapters of the Shared Ethernet Adapter, you can also enable the portfast
option on those ports. The portfast option allows the switch to immediately forward packets on the port
without first completing the Spanning Tree Protocol. (Spanning Tree Protocol blocks the port completely
until it is finished.)
The Shared Ethernet Adapter is designed to prevent network loops. However, as an additional
precaution, you can enable Bridge Protocol Data Unit (BPDU) Guard on the switch ports connected to the
physical adapters of the Shared Ethernet Adapter. BPDU Guard detects looped Spanning Tree Protocol
BPDU packets and shuts down the port. This helps prevent broadcast storms on the network. A broadcast
storm is a situation where one message that is broadcast across a network results in multiple responses.
Each response generates more responses, causing excessive transmission of broadcast messages. Severe
broadcast storms can block all other network traffic, but they can usually be prevented by carefully
configuring a network to block illegal broadcast messages.
Note: When the Shared Ethernet Adapter is using GARP VLAN Registration Protocol (GVRP), it
generates BPDU packets, which causes BPDU Guard to shut down the port unnecessarily. Therefore,
when the Shared Ethernet Adapter is using GVRP, do not enable BPDU Guard.
For information about how to enable the Spanning Tree Protocol, the portfast option, and BPDU Guard
on the ports, see the documentation provided with the switch.
Related tasks:
Scenario: Configuring Shared Ethernet Adapter failover on page 45
Use this article to help you become familiar with a typical Shared Ethernet Adapter failover scenario.
Multipathing:
Multipathing for the physical storage within the Virtual I/O Server provides failover physical path
redundancy and load-balancing. The multipathing solutions available in the Virtual I/O Server include
MPIO as well as solutions provided by the storage vendors.
For information about supported storage and multipathing software solutions, see the datasheet available
on the Virtual I/O Server Support for UNIX servers and Midrange servers Web site.
RAID:
Redundant Array of Independent Disks (RAID) solutions provide for device-level redundancy within the
Virtual I/O Server. Some RAID options, such as LVM mirroring and striping, are provided by the Virtual
I/O Server software, while other RAID options are made available by the physical storage subsystem.
See the Virtual I/O Server datasheet available on the Virtual I/O Server Support for UNIX servers and
Midrange servers Web site for supported hardware RAID solutions.
For example, ent0 and ent1 can be aggregated to ent3. The system considers these aggregated adapters
as one adapter, and all adapters in the Link Aggregation device are given the same hardware address, so
they are treated by remote systems as if they are one adapter.
Link Aggregation can help provide more redundancy because individual links might fail, and the Link
Aggregation device will fail over to another adapter in the device to maintain connectivity. For example,
in the previous example, if ent0 fails, the packets are automatically sent on the next available adapter,
ent1, without disruption to existing user connections. ent0 automatically returns to service on the Link
Aggregation device when it recovers.
You can configure a Shared Ethernet Adapter to use a Link Aggregation, or EtherChannel, device as the
physical adapter.
With N_Port ID Virtualization (NPIV), you can configure the managed system so that multiple logical
partitions can access independent physical storage through the same physical fibre channel adapter. Each
virtual fibre channel adapter is identified by a unique worldwide port name (WWPN), which means that
you can connect each virtual fibre channel adapter to independent physical storage on a SAN.
Similar to virtual SCSI redundancy, virtual fibre channel redundancy can be achieved using Multi-path
I/O (MPIO) and mirroring at the client partition. The difference between traditional redundancy with
SCSI adapters and the NPIV technology using virtual fibre channel adapters, is that the redundancy
occurs on the client, because only the client recognizes the disk. The Virtual I/O Server is essentially just
a pipe. The second example below uses multiple Virtual I/O Server logical partitions to add redundancy
at the Virtual I/O Server level as well.
This example uses Host bus adapter (HBA) failover to provide a basic level of redundancy for the client
logical partition. The figure shows the following connections:
v The storage area network (SAN) connects physical storage to two physical fibre channel adapters
located on the managed system.
v The physical fibre channel adapters are assigned to the Virtual I/O Server and support NPIV.
v The physical fibre channel ports are each connected to a virtual fibre channel adapter on the Virtual
I/O Server. The two virtual fibre channel adapters on the Virtual I/O Server are connected to ports on
two different physical fibre channel adapters in order to provide redundancy for the physical adapters.
There is always a one-to-one relationship between the virtual fibre channel adapters on the client logical
partitions and the virtual fibre channel adapters on the Virtual I/O Server logical partition. That is, each
virtual fibre channel adapter that is assigned to a client logical partition must connect to only one virtual
fibre channel adapter on the Virtual I/O Server, and each virtual fibre channel adapter on the Virtual I/O
Server must connect to only one virtual fibre channel adapter on a client logical partition.
The client can write to the physical storage through client virtual fibre channel adapter 1 or 2. If a
physical fibre channel adapter fails, the client uses the alternative path. This example does not show
redundancy in the physical storage, but rather assumes it would be built into the SAN.
Note: It is recommended that you configure virtual fibre channel adapters from multiple logical
partitions to the same HBA, or that you configure virtual fibre channel adapters from the same logical
partition to different HBAs.
This example uses HBA and Virtual I/O Server failover to provide a more advanced level of redundancy
for the client logical partition. The figure shows the following connections:
v The storage area network (SAN) connects physical storage to two physical fibre channel adapters
located on the managed system.
v There are two Virtual I/O Server logical partitions to provide redundancy at the Virtual I/O Server
level.
v The physical fibre channel adapters are assigned to their respective Virtual I/O Server and support
NPIV.
The client can write to the physical storage through virtual fibre channel adapter 1 or 2 on the client
logical partition through VIOS 2. The client can also write to physical storage through virtual fibre
channel adapter 3 or 4 on the client logical partition through VIOS 1. If a physical fibre channel adapter
fails on VIOS 1, the client uses the other physical adapter connected to VIOS 1 or uses the paths
connected through VIOS 2. If VIOS 1 fails, then the client uses the path through VIOS 2. This example
does not show redundancy in the physical storage, but rather assumes it would be built into the SAN.
Considerations
These examples can become more complex as you add physical storage redundancy and multiple clients,
but the concepts remain the same. Consider the following points:
v To avoid configuring the physical fibre channel adapter to be a single point of failure for the
connection between the client logical partition and its physical storage on the SAN, do not connect two
virtual fibre channel adapters from the same client logical partition to the same physical fibre channel
adapter. Instead, connect each virtual fibre channel adapter to a different physical fibre channel
adapter.
v Consider load balancing when mapping a virtual fibre channel adapter on the Virtual I/O Server to a
physical port on the physical fibre channel adapter.
Security considerations
Review the security considerations for virtual SCSI, virtual Ethernet, and Shared Ethernet Adapter and
the additional security options available.
IBM systems allow cross-partition device sharing and communication. Functions such as dynamic LPAR,
shared processors, virtual networking, virtual storage, and workload management all require facilities to
ensure that system-security requirements are met. Cross-partition and virtualization features are designed
to not introduce any security exposure beyond what is implied by the function. For example, a virtual
LAN connection would have the same security considerations as a physical network connection.
Carefully consider how to utilize cross-partition virtualization features in high-security environments.
Any visibility between logical partitions must be manually created through administrative
system-configuration choices.
Using virtual SCSI, the Virtual I/O Server provides storage to client logical partitions. However, instead
of SCSI or fiber cable, the connection for this functionality is done by the firmware. The virtual SCSI
device drivers of the Virtual I/O Server and the firmware ensure that only the system administrator of
the Virtual I/O Server has control over which logical partitions can access data on Virtual I/O Server
storage devices. For example, a client logical partition that has access to a logical volume lv001 exported
by the Virtual I/O Server logical partition cannot access lv002, even if it is in the same volume group.
Similar to virtual SCSI, the firmware also provides the connection between logical partitions when using
virtual Ethernet. The firmware provides the Ethernet switch functionality. The connection to the external
network is provided by the Shared Ethernet Adapter function on the Virtual I/O Server. This part of the
Virtual I/O Server acts as a layer-2 bridge to the physical adapters. A VLAN ID tag is inserted into every
Ethernet frame. The Ethernet switch restricts the frames to the ports that are authorized to receive frames
with that VLAN ID. Every port on an Ethernet switch can be configured to be a member of several
VLANs. Only the network adapters, both virtual and physical, that are connected to a port (virtual or
physical) that belongs to the same VLAN can receive the frames. The implementation of this VLAN
standard ensures that the logical partitions cannot access restricted data.
Support for virtual tape requires IVM version 2.1.0 or later and support for virtual fibre channel requires
IVM version 2.1.2 or later. IBM i must be at 6.1 or later. IBM i must be at 6.1.1 or later to use virtual fibre
channel.
The following limitations and restrictions apply to IBM i client logical partitions of the Virtual I/O Server
that are running on HMC-managed systems. IBM i client logical partitions that run on systems that are
managed by the Integrated Virtualization Manager have additional limitations and restrictions. For
details, see Limitations and restrictions for IBM i client partitions on systems managed by the Integrated
Virtualization Manager.
These instructions apply to installing the Virtual I/O Server and client logical partitions on a system that
is managed by a Hardware Management Console (HMC). If you plan to install the Virtual I/O Server on
a system that is managed by the Integrated Virtualization Manager, see the installation instructions for
that environment instead.
Before you start, ensure that you meet the following requirements:
v The system to which you plan to deploy the system plan is managed by a Hardware Management
Console (HMC).
v The HMC is at version 7 or later.
v If you plan to deploy different entities of the Virtual I/O Server configuration at different times, ensure
that the HMC is at version V7R3.3.0, or later. (Virtual I/O Server entities include Shared Ethernet
Adapters, EtherChannel adapters, or Link Aggregation devices, storage pools, and backing devices.) If
the HMC is not at V7R3.3.0, or later, system plans that include the Virtual I/O Server can be deployed
only to new systems, or to systems that do not already have a Virtual I/O Server logical partition
configured. (The Virtual I/O Server can be installed, but not configured.) More specifically, no Virtual
I/O Server entities can be configured on the managed system, including Shared Ethernet Adapters,
EtherChannel adapters, or Link Aggregation devices, storage pools, and backing devices.
v If you plan to deploy a system plan that includes AIX or Linux installation information for at least one
client logical partition, ensure that you meet the following requirements:
The HMC must be at V7R3.3.0, or later.
The client logical partition does not have an operating system already installed. The HMC installs
AIX and Linux on client logical partitions that do not already have an operating system installed. If
the client logical partition already has an operating system installed, the HMC does not deploy the
operating system specified in the system plan.
Entering the activation code for PowerVM Editions using the HMC version 7
Use these instructions to enter the PowerVM Editions (or Advanced POWER Virtualization) activation
code using the Hardware Management Console (HMC) version 7, or later.
If PowerVM Editions is not enabled on your system, you can use the HMC to enter the activation code
that you received when you ordered the feature.
You must use the HMC graphical user interface to complete this task; it is not available from the
command line.
Importing a system plan into an HMC
You can import a system-plan file into the HMC from any of the following locations:
v From the computer on which you remotely access the HMC.
v From various media that is mounted on the HMC, such as optical discs or USB drives.
v From a remote site by using FTP. To use this option, you must fulfill the following requirements:
The HMC must have a network connection to the remote site.
An FTP server must be active on the remote site.
Port 21 must be open on the remote site.
Note: You cannot import a system plan that has an identical name to any system plan that is available on
the HMC.
To import a system-plan file, you must be a super administrator. For more information about user roles,
see Managing HMC users and tasks.
To import a system-plan file into the HMC, complete the following steps:
1. In the navigation area of the HMC, select System Plans.
2. In the tasks area, select Import System Plan. The Import System Plan window opens.
3. Select the source of the system-plan file that you want to import. Use the following table to complete
the appropriate steps for importing the system plan from the selected source location of the file.
4. Click Import. If the HMC returns an error, return to the Import System Plan window and verify that
the information you entered is correct. If necessary, click Cancel, return to step 2, and redo the
procedure, ensuring that the information you specify at each step is correct.
Note: As an alternative to the HMC Web user interface, you can use the cpysysplan command from the
HMC command line interface to import a system plan.
When you complete the process of importing the system-plan file, you can deploy the system plan in the
system-plan file to a system that the HMC manages. If you imported the system-plan file from media,
you can unmount the media by using the umount command from the HMC command line interface.
Related tasks:
Deploying a system plan by using the HMC on page 83
You can use the Hardware Management Console (HMC) to deploy all or part of a system plan to a
managed system.
Related information:
When you deploy a system plan, the HMC creates logical partitions on the managed system according to
the specifications in the system plan. Depending on the contents of the system plan, you can also install
operating environments on the logical partitions in the plan, including the Virtual I/O Server (VIOS), AIX
or Linux.
Note: The HMC cannot install the IBM i operating environment on a logical partition.
If the plan contains VIOS provisioning information for a logical partition, such as storage assignments
and virtual networking for the client logical partitions of the VIOS, the HMC can make these resource
assignments for the client logical partitions.
You do not have to deploy a system plan in its entirety, but can instead partially deploy a system plan on
the target system by selecting which logical partitions in the plan to deploy. You can run the Deploy
System Plan Wizard again at another time to deploy the remainder of the logical partitions in the system
plan. However, if you select a VIOS partition to be deployed, the wizard deploys all the VIOS
provisioning items that are planned for that partition even if the client logical partition that uses the
provisioned item is not selected for deployment.
If the system plan contains installation information for the VIOS, you can use the Deploy System Plan
Wizard to install the VIOS and to set up virtual networking and storage resources for the client logical
partitions of the VIOS.
To use the HMC to deploy a system plan on a managed system, complete the following steps:
1. In the navigation area of the HMC, select System Plans.
2. In the contents area, select the system plan that you want to deploy.
3. Select Tasks > Deploy system plan. The Deploy System Plan Wizard starts.
4. On the Welcome page, complete the following steps:
a. Select the system-plan file that contains the system plan that you want to deploy.
b. Choose the managed system to which you want to deploy the system plan and click Next. If the
system plan does not match the managed system to which you want to deploy the plan, the
wizard displays a window that informs you of this. Click OK to continue or Cancel to select a
different system plan.
Note: If the system-plan file contains multiple system plans, the wizard provides a step so that
you can select a specific system plan from the file. This step does not occur unless there is more
than one system plan in the specified file.
This action creates a new system plan that you can view and compare to the old system plan to
help diagnose any problems.
6. Optional: On the Partition Deployment page, if you do not want to create all of the logical partitions,
partition profiles, virtual adapter types, or virtual adapters in the system plan, clear the boxes in the
Deploy column beside the logical partitions, partition profiles, virtual adapter types, or virtual
adapters that you do not want to create. Virtual serial adapters are required in virtual slots 0 and 1
for each logical partition. You cannot create the logical partition unless you create these virtual serial
adapters.
7. Optional: On the Operating Environment Install page, if there is operating environment installation
information specified in the system plan, complete the following steps:
a. Select the operating environments that you want to deploy to the managed system for each logical
partition. For HMC V7R3.2.0 or V7R3.1.0, you can deploy only the Virtual I/O Server operating
environment. For HMC V7R3.3.0 or later versions, you can also select to deploy the AIX or Linux
operating environments if the system plan contains installation information for them.
b. Enter the location of the Virtual I/O Server installation image.
c. Enter or change late-binding installation settings for the specified Virtual I/O Server, AIX, or Linux
operating environment. Late-binding installation settings are settings that are specific to the
installation instance and must be supplied during the installation step to ensure that the settings
are accurate for the installation instance. For example, you can enter the IP address of the target
logical partition on which you are installing the operating environment.
Note: If you need to use automatic installation files to deploy an operating environment, you
cannot add them during the HMC deployment process. You must use the System Planning Tool
(SPT) to create any necessary automatic installation files separately and attach them to the system
plan prior to deploying the system plan.
d. Save any changes that you make to late-binding installation settings. You can save them to the
current system-plan file or to a new system-plan file.
8. On the Summary page, view the system deployment step order and click Finish. The HMC uses the
system plan to create the specified logical partitions and to install any specified operating
environments. This process can take several minutes.
After you finish the deployment of the system plan, install operating environments and software on the
logical partitions, if they did not install as part of system plan deployment.
Related tasks:
Importing a system plan into an HMC on page 81
You can import a system-plan file into a Hardware Management Console (HMC) from various types of
media, a remote FTP site, or the computer from which you remotely access the HMC. You can then
deploy the imported system plan to a system that the HMC manages.
Installing the Virtual I/O Server manually using the HMC version 7
You can create the Virtual I/O Server logical partition and logical partition profile and install the Virtual
I/O Server using the Hardware Management Console (HMC) version 7 or later.
Before you start, ensure that the following statements are true:
v The system on which you plan to install the Virtual I/O Server is managed by a Hardware Management
Console (HMC).
v The HMC is at version 7 or later. If the HMC is at version 6 or earlier, then see Installing the Virtual
I/O Server manually using the HMC version 6.
If PowerVM Editions is not enabled on your system, you can use the HMC to enter the activation code
that you received when you ordered the feature.
Use the following procedure to enter the activation code for the PowerVM Standard Edition and the
PowerVM Enterprise Edition. For information about the PowerVM Editions, see PowerVM Editions
overview.
Creating the Virtual I/O Server logical partition and partition profile using HMC
version 7
You can use the Hardware Management Console (HMC) version 7 to create a logical partition and
partition profile for the Virtual I/O Server.
Before you start, ensure that the following statements are true:
v You are a super administrator or an operator.
v The PowerVM Editions (or Advanced POWER Virtualization) feature is activated. For instructions, see
Entering the activation code for PowerVM Editions using the HMC version 7 on page 80.
To create a logical partition and a partition profile on your server using the HMC, follow these steps:
1. In the Navigation area, expand Systems Management.
2. Select Servers.
3. In the contents area, select the server on which you want to create the partition profile.
4. Click Tasks and select Configuration > Create Logical Partition > VIO Server.
5. On the Create Partition page, enter a name and ID for the Virtual I/O Server partition.
6. On the Partition Profile page, complete the following steps:
a. Enter a profile name for the Virtual I/O Server partition.
b. Make sure that the Use all the resources in the system check box is cleared (not checked).
7. On the Processors page, decide if you want to use shared or dedicated processors (based on your
environment) by making the appropriate selection.
8. On the Processing Settings page, enter the appropriate amount of processing units and virtual
processors that you want to assign to the Virtual I/O Server partition.
9. On the Memory page, select the appropriate amount of memory that you want to assign to the
Virtual I/O Server partition. The required minimum is 512 MB.
10. On the I/O page, select the physical I/O resources that you want in the Virtual I/O Server partition.
11. On the Virtual Adapters page, create the appropriate adapters for your environment.
After you create the partition and partition profile, you are ready to install the Virtual I/O Server. For
instructions, see one of the following procedures:
v Installing the Virtual I/O Server from the HMC
v Installing the Virtual I/O Server from CD or DVD on page 88
After you install the Virtual I/O Server, finish the installation by checking for updates, setting up remote
connections, creating additional user IDs, and so on. For instructions, see Finishing the Virtual I/O
Server installation on page 85.
Before you start, ensure that the following statements are true:
v There is an HMC attached to the managed system.
v The Virtual I/O Server logical partition and logical partition profile are created. For instructions, see
Creating the Virtual I/O Server logical partition and partition profile using HMC version 7 on page
86.
v A CD or DVD optical device is assigned to the Virtual I/O Server logical partition.
To install the Virtual I/O Server from CD or DVD, follow these steps:
1. Activate the Virtual I/O Server logical partition using the HMC version 7 (or later) or HMC version 6
(or earlier):
v Activate the Virtual I/O Server using the HMC version 7 or later:
a. Insert the Virtual I/O Server CD or DVD into the Virtual I/O Server logical partition.
b. In the HMC navigation area, expand Systems Management > Servers.
c. Select the server on which the Virtual I/O Server logical partition is located.
d. In the contents area, select the Virtual I/O Server logical partition.
e. Click Tasks > Operations > Activate. The Activate Partition menu opens with a selection of
logical partition profiles. Ensure the correct profile is highlighted.
f. Select Open a terminal window or console session to open a virtual terminal (vterm) window.
g. Click (Advanced) to open the advanced options menu.
h. For the boot mode, select SMS.
i. Click OK to close the advanced options menu.
j. Click OK. A virtual terminal window opens for the logical partition.
v Activate the Virtual I/O Server using the HMC version 6 or earlier:
a. Insert the Virtual I/O Server CD or DVD into the Virtual I/O Server logical partition.
b. On the HMC, right-click the logical partition to open the menu.
c. Click Activate. The Activate Partition menu opens with a selection of logical partition profiles.
Ensure the correct profile is highlighted.
d. Select Open a terminal window or console session to open a virtual terminal (vterm) window.
e. Click (Advanced) to open the advanced options menu.
f. For the boot mode, select SMS.
g. Click OK to close the advanced options menu.
h. Click OK. A virtual terminal window opens for the logical partition.
2. Select the boot device:
a. Select Select Boot Options and press Enter.
b. Select Select Install/Boot Device and press Enter.
c. Select Select 1st Boot Device and press Enter.
d. Select CD/DVD and press Enter.
e. Select the media type that corresponds to the optical device and press Enter.
f. Select the device number that corresponds to the optical device and press Enter.
g. Set the boot sequence to configure the first boot device. The optical device is now the first device
in the Current Boot Sequence list.
h. Exit the SMS menu by pressing the x key, and confirm that you want to exit SMS.
3. Install the Virtual I/O Server:
After you install the Virtual I/O Server, finish the installation by checking for updates, setting up remote
connections, creating additional user IDs, and so on. For instructions, see Finishing the Virtual I/O Server
installation on page 85.
This procedure assumes that Virtual I/O Server is installed. For instructions, see Installing the Virtual
I/O Server and client logical partitions on page 79.
You must view and accept the license before using the Virtual I/O Server.
Before you start, ensure that the Virtual I/O Server logical partition profile is created and the Virtual I/O
Server is installed. For instructions, see Installing the Virtual I/O Server and client logical partitions on
page 79.
To view and accept the Virtual I/O Server license, complete the following steps:
1. Log in to the Virtual I/O Server using the padmin user ID.
2. Choose a new password. The software maintenance terms and conditions appear.
3. If Virtual I/O Server is at version 1.5 or later, view and accept the software maintenance terms and
conditions.
a. To view the software maintenance terms and conditions, type v on the command line and press
Enter.
b. To accept the software maintenance terms and conditions, type a on the command line and press
Enter.
4. View and accept the Virtual I/O Server product license.
Note: If you installed the Virtual I/O Server by deploying a system plan, then you have already
accepted the Virtual I/O Server product license and do not need to complete this step.
a. To view the Virtual I/O Server product license, type license -view on the command line. By
default, the license is displayed in English. To change the language in which the license is
displayed, follow these steps:
1) View the list of available locales to display the license by typing the following command:
license -ls
2) View the license in another language by typing the following command:
license -view -lang Name
For example, to view the license in Japanese, type the following command:
license -view -lang ja_JP
b. To accept the Virtual I/O Server product license, type license -accept on the command line.
5. In the installation program, English is the default language. If you need to change the language
setting for the system, follow these steps:
a. View the available languages by typing the following command:
chlang -ls
b. Change the language by typing the following command, replacing Name with the name of the
language you are switching to:
chlang -lang Name
Note: If the language fileset is not installed, use the -dev Media flag to install it.
For example, to install and change the language to Japanese, type the following command:
chlang -lang ja_JP -dev /dev/cd0
The paging VIOS partitions store information about the paging space devices that are assigned to a
shared memory pool. The Hardware Management Console (HMC) obtains information about the paging
space devices that are assigned to the shared memory pool from the paging VIOS partitions. When you
reinstall the VIOS, the information about the paging space devices is lost. For the paging VIOS partitions
to regain the information, you must assign the paging space devices again to the shared memory pool after
you reinstall the VIOS.
The following table shows the reconfiguration tasks that you must perform in the shared memory
environment when you reinstall the Virtual I/O Server of a paging VIOS partition.
Before you start, verify that the following statements are true:
v The system on which you plan to migrate the Virtual I/O Server is managed by a Hardware
Management Console (HMC) version 7, or later.
v The Virtual I/O Server is at version 1.3, or later.
v The rootvg volume group has been assigned to the Virtual I/O Server.
In most cases, user configuration files from the previous version of the Virtual I/O Server are saved when
the new version is installed. If you have two or more Virtual I/O Server logical partitions in your
environment for redundancy, you are able to shut down and migrate one Virtual I/O Server logical
partition without interrupting any clients. After the migration is complete and the Virtual I/O Server
logical partition is running again, the logical partition will be available to clients without additional
configuration.
Attention: Do not use the Virtual I/O Server updateios command to migrate the Virtual I/O Server.
Related information:
Migrating the Virtual I/O Server using NIM
Attention: Do not use the Virtual I/O Server updateios command to migrate the Virtual I/O Server.
4. Follow the installation instructions according to the system prompts.
After the migration is complete, the Virtual I/O Server logical partition restarts with the configuration
that it had before the migration installation. It is recommended that you perform the following tasks:
v Verify that the migration was successful by checking the results of the installp command and running
the ioslevel command. The ioslevel command should report that the level is now 2.1.0.0.
v Restart previously running daemons and agents:
1. Log on to the Virtual I/O Server as padmin user.
Remember: The Virtual I/O Server migration media is separate from the Virtual I/O Server
installation media. Do not use the installation media for updates after you perform a migration. It does
not contain updates and you will lose your current configuration. Only apply updates using the
instructions from the Virtual I/O Server support site.
Related tasks:
Backing up the Virtual I/O Server to a remote file system by creating a mksysb image on page 130
You can back up the Virtual I/O Server base code, applied fix packs, custom device drivers to support
disk subsystems, and some user-defined metadata to a remote file system by creating a mksysb file.
Related information:
Migrating the Virtual I/O Server from DVD when using the Integrated Virtualization Manager
Before you start, ensure that the following statements are true:
v An HMC is attached to the managed system.
v A DVD optical device is assigned to the Virtual I/O Server logical partition.
v The Virtual I/O Server migration installation media is required.
Note: The Virtual I/O Server migration installation media is separate from the Virtual I/O Server
installation media.
v The Virtual I/O Server is currently at version 1.3, or later.
v The rootvg volume group has been assigned to the Virtual I/O Server.
v Back up the mksysb image before migrating the Virtual I/O Server. Run the backupios command and
save the mksysb image to a safe location.
Note: If you are using an Integrated Virtualization Manager (IVM) environment, see Migrating the
Virtual I/O Server from DVD when using the Integrated Virtualization Manager.
To migrate the Virtual I/O Server from a DVD, follow these steps:
1. Activate the Virtual I/O Server logical partition using the HMC, version 7 (or later):
a. Insert the Virtual I/O Server migration DVD into the DVD drive assigned to the Virtual I/O
Server logical partition.
b. In the HMC navigation area, expand Systems Management > Servers.
c. Select the server on which the Virtual I/O Server logical partition is located.
d. In the contents area, select the Virtual I/O Server logical partition.
e. Click Tasks > Operations > Activate. The Activate Partition menu opens with a selection of logical
partition profiles. Ensure that the correct profile is highlighted.
f. Select Open a terminal window or console session to open a virtual terminal (vterm) window.
g. Click Advanced to open the advanced options menu.
h. For the boot mode, select SMS.
i. Click OK to close the advanced options menu.
Note: You should not have to change installation settings simply to select the migration
installation method. If a previous version of the operating system exists, the installation method
defaults to migration.
d. Select Continue with Install. The system will reboot after the installation is complete.
After the migration is complete, the Virtual I/O Server logical partition restarts with the configuration
that it had before the migration installation. It is recommended that you perform the following tasks:
v Verify that the migration was successful by checking the results of the installp command and running
the ioslevel command. The ioslevel command should report that the level is now 2.1.0.0.
v Restart previously running daemons and agents:
1. Log on to the Virtual I/O Server as padmin user.
2. Restore the previous banner message by running the following command: $ motd -overwrite "<enter previous banner message>"
3. Start any previously running daemons, such as FTP and Telnet.
4. Start any previously running agents, such as ituam.
v Check for updates to the Virtual I/O Server. For instructions, see the Virtual I/O Server support site.
Remember: The Virtual I/O Server migration media is separate from the Virtual I/O Server
installation media. Do not use the installation media for updates after you perform a migration. It does
not contain updates and you will lose your current configuration. Only apply updates using the
instructions from the Virtual I/O Server support site.
Related tasks:
Backing up the Virtual I/O Server to a remote file system by creating a mksysb image on page 130
You can back up the Virtual I/O Server base code, applied fix packs, custom device drivers to support
disk subsystems, and some user-defined metadata to a remote file system by creating a mksysb file.
Related information:
Migrating the Virtual I/O Server from DVD when using the Integrated Virtualization Manager
Provisioning virtual disk resources occurs on the Virtual I/O Server. Physical disks owned by the Virtual
I/O Server can either be exported and assigned to a client logical partition as a whole or can be
partitioned into parts, such as logical volumes or files. These logical volumes and files can be exported as
virtual disks to one or more client logical partitions. Therefore, by using virtual SCSI, you can share
adapters as well as disk devices.
To make a physical volume, logical volume, or file available to a client logical partition requires that it be
assigned to a virtual SCSI server adapter on the Virtual I/O Server. The SCSI client adapter is linked to a
particular virtual SCSI server adapter in the Virtual I/O Server logical partition. The client logical
partition accesses its assigned disks through the virtual SCSI client adapter and sees standard SCSI
devices and LUNs through this virtual adapter. Assigning disk resources to a
SCSI server adapter in the Virtual I/O Server effectively allocates resources to a SCSI client adapter in the
client logical partition.
For information about SCSI devices that you can use, see the Virtual I/O Server Support for UNIX
servers and Midrange servers Web site.
With the Virtual I/O Server version 2.1 and later, you can export the following types of physical devices:
v Virtual SCSI disk backed by a physical volume
v Virtual SCSI disk backed by a logical volume
v Virtual SCSI disk backed by a file
v Virtual SCSI optical backed by a physical optical device
v Virtual SCSI optical backed by a file
v Virtual SCSI tape backed by a physical tape device
After a virtual device is assigned to a client partition, the Virtual I/O Server must be available before the
client logical partitions can access it.
Creating a virtual target device on a Virtual I/O Server that maps to a physical or logical volume, tape
or physical optical device:
You can create a virtual target device on a Virtual I/O Server that maps the virtual SCSI adapter to a
physical disk, tape, or physical optical device, or to a logical volume that is based on a volume group.
The following procedure can be repeated to provide additional virtual disk storage to any client logical
partition.
Tip: If you are using the HMC, version 7 release 3.4.2 or later, you can use the HMC graphical interface
to create a virtual target device on the Virtual I/O Server.
To create a virtual target device that maps a virtual SCSI server adapter to a physical device or logical
volume, complete the following steps from the Virtual I/O Server command-line interface:
1. Use the lsdev command to ensure that the virtual SCSI adapter is available. For example, running
lsdev -virtual returns results similar to the following:
name status description
ent3 Available Virtual I/O Ethernet Adapter (l-lan)
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
vtscsi0 Available Virtual Target Device - Logical Volume
vtscsi1 Available Virtual Target Device - File-backed Disk
vtscsi2 Available Virtual Target Device - File-backed Disk
2. To create a virtual target device, which maps the virtual SCSI server adapter to a physical device or
logical volume, run the mkvdev command:
mkvdev -vdev TargetDevice -vadapter VirtualSCSIServerAdapter
Where:
v TargetDevice is the name of the target device, as follows:
To map a logical volume to the virtual SCSI server adapter, use the name of the logical volume.
For example, lv_4G.
To map a physical volume to the virtual SCSI server adapter, use hdiskx. For example, hdisk5.
To map an optical device to the virtual SCSI server adapter, use cdx. For example, cd0.
To map a tape device to a virtual SCSI adapter, use rmtx. For example, rmt1.
v VirtualSCSIServerAdapter is the name of the virtual SCSI server adapter.
Note: If needed, use the lsdev and lsmap -all commands to determine the target device and virtual
SCSI server adapter that you want to map to one another.
The storage is available to the client logical partition either the next time it starts, or the next time the
appropriate virtual SCSI client adapter is probed (on a Linux logical partition), or configured (on an
AIX logical partition), or appears as either a DDXXX or DPHXXX device (on an IBM i partition).
3. View the newly created virtual target device by running the lsdev command. For example, running
lsdev -virtual returns results similar to the following:
name status description
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
vtscsi0 Available Virtual Target Device - Logical Volume
vttape0 Available Virtual Target Device - Tape
4. View the logical connection between the newly created devices by running the lsmap command. For
example, running lsmap -vadapter vhost3 returns results similar to the following:
SVSA Physloc Client PartitionID
-------------------------------------------------------
vhost3 U9111.520.10DDEEC-V1-C20 0x00000000
VTD vtscsi0
The physical location is a combination of the slot number, in this case 20, and the logical partition ID.
The storage is now available to the client logical partition either the next time it starts, or the next
time the appropriate virtual SCSI client adapter is probed, or configured.
If you later need to remove the virtual target device, you can do so by using the rmvdev command.
Related concepts:
Virtual SCSI sizing considerations on page 61
Understand the processor and memory-sizing considerations when implementing virtual SCSI.
Related information:
Creating a virtual disk for a VIOS logical partition using the HMC
Virtual I/O Server and Integrated Virtualization Manager commands
Creating a virtual target device on a Virtual I/O Server that maps to a file or logical volume:
You can create a virtual target device on a Virtual I/O Server that maps the virtual SCSI adapter to a file
or a logical volume that is based on a storage pool.
The following procedure can be repeated to provide additional virtual disk storage to any client logical
partition.
Tip: If you are using the HMC, version 7 release 3.4.2 or later, you can use the HMC graphical interface
to create a virtual target device on the Virtual I/O Server.
To create a virtual target device that maps a virtual SCSI server adapter to a file or logical volume,
complete the following steps from the Virtual I/O Server command-line interface:
1. Use the lsdev command to ensure that the virtual SCSI adapter is available. For example, running
lsdev -virtual returns results similar to the following:
name status description
ent3 Available Virtual I/O Ethernet Adapter (l-lan)
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
vtscsi0 Available Virtual Target Device - Logical Volume
vtscsi1 Available Virtual Target Device - File-backed Disk
vtscsi2 Available Virtual Target Device - File-backed Disk
2. To create a virtual target device, which maps the virtual SCSI server adapter to a file or logical
volume, run the mkbdsp command:
mkbdsp -sp StoragePool -bd BackingDevice -vadapter VirtualSCSIServerAdapter -tn TargetDeviceName
When the virtual target device has been created, viewing its mapping (for example, with the lsmap
command) displays output similar to the following:
VTD fbvtd1
Status Available
LUN 0x8100000000000000
Backing device /var/vio/storagepools/fbPool/devFile
Physloc
The physical location is a combination of the slot number, in this case 2, and the logical partition ID.
The virtual device can now be attached from the client logical partition.
If you later need to remove the virtual target device and backing device (file or logical volume), use the
rmbdsp command. An option is available on the rmbdsp command to remove the virtual target device
without removing the backing device. A backing device file is associated with a virtual target device by
inode number rather than by file name, so do not change the inode number of a backing device file. The
inode number might change if you alter a backing device file (using the AIX rm, mv, and cp commands),
while the backing device file is associated with a virtual target device.
Related information:
Creating a virtual disk for a VIOS logical partition using the HMC
Virtual I/O Server and Integrated Virtualization Manager commands
Creating a virtual target device on a Virtual I/O Server that maps to a file-backed virtual optical
device:
You can create a virtual target device on a Virtual I/O Server that maps the virtual SCSI adapter to a
file-backed virtual optical device.
The following procedure can be repeated to provide additional virtual disk storage to any client logical
partition.
Tip: If you are using the HMC, version 7 release 3.4.2 or later, you can use the HMC graphical interface
to create a virtual target device on the Virtual I/O Server.
To create a virtual target device that maps a virtual SCSI server adapter to a file-backed virtual optical
device, complete the following steps from the Virtual I/O Server command-line interface:
1. Use the lsdev command to ensure that the virtual SCSI adapter is available. For example, running
lsdev -virtual returns results similar to the following:
name status description
ent3 Available Virtual I/O Ethernet Adapter (l-lan)
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
vtscsi0 Available Virtual Target Device - Logical Volume
vtscsi1 Available Virtual Target Device - File-backed Disk
vtscsi2 Available Virtual Target Device - File-backed Disk
2. To create a virtual target device, which maps the virtual SCSI server adapter to a file-backed virtual
optical device, run the mkvdev command:
mkvdev -fbo -vadapter VirtualSCSIServerAdapter
where VirtualSCSIServerAdapter is the name of the virtual SCSI server adapter. For example, vhost1.
Note: No backing device is specified when creating virtual target devices for file-backed virtual
optical devices because the drive is considered to contain no media. For information about loading
media into a file-backed optical drive, see the loadopt command.
The optical device is available to the client logical partition either the next time it starts, or the next
time the appropriate virtual SCSI client adapter is probed (on a Linux logical partition), or configured
(on an AIX logical partition), or appears as an OPTXXX device (on an IBM i logical partition).
3. View the newly created virtual target device by running the lsdev command. For example, running
lsdev -virtual returns results similar to the following:
name status description
vhost4 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter
vtopt0 Available Virtual Target Device - File-backed Optical
4. View the logical connection between the newly created devices by running the lsmap command. For
example, running lsmap -vadapter vhost1 returns results similar to the following:
SVSA Physloc Client PartitionID
----------------------------------------------------
vhost1 U9117.570.10C8BCE-V6-C2 0x00000000
VTD vtopt0
LUN 0x8200000000000000
Backing device
Physloc
The physical location is a combination of the slot number, in this case 2, and the logical partition ID.
The virtual device can now be attached from the client logical partition.
You can use the loadopt command to load file-backed virtual optical media into the file-backed virtual
optical device.
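For example, assuming that a virtual optical media file named mycd.iso already exists in the virtual
media repository (the file name is hypothetical), you might load it into the vtopt0 device as follows:
loadopt -disk mycd.iso -vtd vtopt0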
If you later need to remove the virtual target device, you can do so by using the rmvdev command.
Related information:
Creating a virtual disk for a VIOS logical partition using the HMC
In some configurations, you must consider the reservation policy of the device on the Virtual I/O Server
(VIOS).
The following table explains the situations in which the reservation policy of the device on the VIOS is
important for systems that are managed by the Hardware Management Console (HMC) and the
Integrated Virtualization Manager (IVM).
Table 31. Situations where the reservation policy of a device is important
HMC-managed systems:
v To use a Multipath I/O (MPIO) configuration at the client, none of the virtual SCSI devices on the
VIOS can be reserving the virtual SCSI device. Set the reserve_policy attribute of the device to
no_reserve.
v For Live Partition Mobility, the reserve attribute on the physical storage that is used by the mobile
partition can be set as follows:
You can set the reserve policy attribute to no_reserve.
You can set the reserve policy attribute to pr_shared when the following products are at the
following versions:
- HMC version 7 release 3.5.0, or later
- VIOS version 2.1.2.0, or later
- The physical adapters support the SCSI-3 Persistent Reserves standard
The reserve attribute must be the same on the source and destination VIOS partitions for successful
Partition Mobility.
v For PowerVM Active Memory Sharing, the VIOS automatically sets the reserve attribute on the
physical volume to no reserve when you add a paging space device to the shared memory pool.
IVM-managed systems:
For Live Partition Mobility, the reserve attribute on the physical storage that is used by the mobile
partition can be set as follows:
v You can set the reserve policy attribute to no_reserve.
v You can set the reserve policy attribute to pr_shared when the following products are at the following
versions:
- IVM version 2.1.2.0, or later
- The physical adapters support the SCSI-3 Persistent Reserves standard
The reserve attribute must be the same on the source and destination management partitions for
successful Partition Mobility.
1. From a VIOS partition, list the disks (or paging space devices) to which the VIOS has access. Run the
following command:
lsdev -type disk
2. To determine the reserve policy of a disk, run the following command, where hdiskX is the name of
the disk that you identified in step 1. For example, hdisk5.
lsdev -dev hdiskX -attr reserve_policy
Based on the information in Table 31, you might need to change the reserve_policy so that you can
use the disk in any of the described configurations.
3. To set the reserve_policy, run the chdev command. For example:
chdev -dev hdiskX -attr reserve_policy=reservation
where hdiskX is the name of the disk and reservation is the reserve policy that you want to set, for
example, no_reserve or pr_shared.
Requirements:
a. Although the reserve_policy attribute is an attribute of the device, each VIOS saves the value of
the attribute. You must set the reserve_policy attribute from both VIOS partitions so that both
VIOS partitions recognize the reserve_policy of the device.
b. For Partition Mobility, the reserve_policy on the destination VIOS partition must be the same as
the reserve_policy on the source VIOS partition. For example, if the reserve_policy on the source
VIOS partition is pr_shared, the reserve_policy on the destination VIOS partition must also be
pr_shared.
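For example, to set the reserve policy of a hypothetical disk named hdisk5 to no_reserve, you might run
the following command on each VIOS partition:
chdev -dev hdisk5 -attr reserve_policy=no_reserve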
Before you start, ensure that the Virtual I/O Server is at version 1.5 or later. To update the Virtual I/O
Server, see Updating the Virtual I/O Server on page 127.
Tip: If you are using the HMC, version 7 release 3.4.2 or later, you can use the HMC graphical interface
to create logical volume storage pools on the Virtual I/O Server.
Logical volume storage pools are volume groups, which are collections of one or more physical volumes.
The physical volumes that comprise a logical volume storage pool can be of varying sizes and types.
To create a logical volume storage pool, complete the following steps from the Virtual I/O Server
command-line interface:
1. Create a logical volume storage pool by running the mksp command:
mksp -f dev_clients hdisk2 hdisk4
In this example, the name of the storage pool is dev_clients and it contains hdisk2 and hdisk4.
2. Define a logical volume, which will be visible as a disk to the client logical partition. The size of this
logical volume determines the size of the disk that will be available to the client logical partition. Use the
mkbdsp command to create an 11 GB logical volume called dev_dbsrv as follows:
mkbdsp -sp dev_clients 11G -bd dev_dbsrv
If you also want to create a virtual target device, which maps the virtual SCSI server adapter to the
logical volume, add -vadapter vhostx to the end of the command. For example:
mkbdsp -sp dev_clients 11G -bd dev_dbsrv -vadapter vhost4
Related information:
Creating storage pools on a Virtual I/O Server by using the HMC
Virtual I/O Server and Integrated Virtualization Manager commands
Before you start, ensure that the Virtual I/O Server is at version 1.5 or later. To update the Virtual I/O
Server, see Updating the Virtual I/O Server on page 127.
Tip: If you are using the HMC, version 7 release 3.4.2 or later, you can use the HMC graphical interface
to create file storage pools on the Virtual I/O Server.
To create a file storage pool, complete the following steps from the Virtual I/O Server command-line
interface:
1. Create a file storage pool by running the mksp command:
mksp -fb dev_fbclt -sp dev_clients -size 7g
In this example, the name of the file storage pool is dev_fbclt and the parent storage pool is
dev_clients.
2. Define a file, which will be visible as a disk to the client logical partition. The size of the file
determines the size of the disk presented to the client logical partition. Use the mkbdsp command to
create a 3 GB file called dev_dbsrv as follows:
mkbdsp -sp dev_fbclt 3G -bd dev_dbsrv
If you also want to create a virtual target device, which maps the virtual SCSI server adapter to the
file, add -vadapter vhostx to the end of the command. For example:
mkbdsp -sp dev_fbclt 3G -bd dev_dbsrv -vadapter vhost4
Related information:
Creating storage pools on a Virtual I/O Server by using the HMC
Virtual I/O Server and Integrated Virtualization Manager commands
Before you start, ensure that the Virtual I/O Server is at version 1.5 or later. To update the Virtual I/O
Server, see Updating the Virtual I/O Server on page 127.
The virtual media repository provides a single container to store and manage file-backed virtual optical
media files. Media stored in the repository can be loaded into file-backed virtual optical devices for
exporting to client partitions.
Tip: If you are using the HMC, version 7 release 3.4.2 or later, you can use the HMC graphical interface
to create a virtual media repository on the Virtual I/O Server.
To create the virtual media repository from the Virtual I/O Server command-line interface, run the mkrep
command:
mkrep -sp prod_store -size 6g
If you are using the HMC, version 7 release 3.4.2 or later, you can use the HMC graphical interface to
create volume groups and logical volumes on a Virtual I/O Server.
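Alternatively, a volume group and a logical volume can be created from the Virtual I/O Server
command-line interface with commands similar to the following sketch, where the volume group name
rootvg_clients, the logical volume name rootvg_dbsrv, the 2 GB size, and the disk hdisk2 are hypothetical
values:
mkvg -f -vg rootvg_clients hdisk2
mklv -lv rootvg_dbsrv rootvg_clients 2G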
Virtual I/O Server versions 1.3 and later provide support for applications that are enabled to use SCSI-2
reserve functions that are controlled by the client logical partition. Typically, SCSI reserve and release is
used in clustered environments where contention for SCSI disk resources might require greater control. To
ensure that Virtual I/O Server supports these environments, configure the Virtual I/O Server to support
SCSI-2 reserve and release. If the applications you are using provide information about the policy to use
for the SCSI-2 reserve functions on the client logical partition, follow those procedures for setting the
reserve policy.
Complete the following tasks to configure the Virtual I/O Server to support SCSI-2 reserve environments:
1. Configure the Virtual I/O Server reserve_policy for single_path, using the following command:
chdev -dev hdiskN -attr reserve_policy=single_path
Note: Perform this task when the device is not in use. If you run this command while the device is
open or in use, then you must use the -perm flag with this command. If you use the -perm flag, the
changes do not take effect until the device is unconfigured and reconfigured.
2. Configure the client_reserve feature on the Virtual I/O Server.
v If you are creating a virtual target device, use the following command:
mkvdev -vdev hdiskN -vadapter vhostN -attr client_reserve=yes
where hdiskN is the name of the physical volume that backs the virtual target device and vhostN is the
virtual SCSI server adapter name.
v If the virtual target device has already been created, use the following command:
chdev -dev vtscsiN -attr client_reserve=yes
Note: Perform this task when the device is not in use. If you run this command while the device
is open or in use, then you must use the -perm flag. In that case, the changes do not take effect until
the device is unconfigured and reconfigured.
Disks with an IEEE volume attribute identifier have a value in the ieee_volname field. Output similar
to the following is displayed:
...
cache_method fast_write Write Caching method
False
ieee_volname 600A0B800012DD0D00000AB441ED6AC IEEE Unique volume name
False
lun_id 0x001a000000000000 Logical Unit Number
False
...
If the ieee_volname field does not appear, then the device does not have an IEEE volume attribute
identifier.
2. If the device does not have an IEEE volume attribute identifier, then determine whether the device
has a UDID by completing the following steps:
a. Type oem_setup_env.
b. Type odmget -qattribute=unique_id CuAt. The disks that have a UDID are listed. Output similar
to the following is displayed:
CuAt:
name = "hdisk1"
attribute = "unique_id"
value = "2708ECVBZ1SC10IC35L146UCDY10-003IBXscsi"
type = "R"
generic = ""
rep = "nl"
nls_index = 79
CuAt:
name = "hdisk2"
attribute = "unique_id"
value = "210800038FB50AST373453LC03IBXscsi"
type = "R"
generic = ""
rep = "nl"
nls_index = 79
Devices in the list that are accessible from other Virtual I/O Server partitions can be used in
virtual SCSI MPIO configurations.
c. Type exit.
3. If the device does not have either an IEEE volume attribute identifier or a UDID, then determine
whether the device has a PVID by running the following command:
lspv
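The lspv command lists each physical volume with its physical volume identifier (PVID), if one is
assigned. Output similar to the following might be displayed (the disk names and identifier values are
hypothetical); a disk without a PVID shows none in the PVID column:
NAME             PVID                                 VG               STATUS
hdisk0           00c39e8c1a2b3d4e                     rootvg           active
hdisk1           none                                 None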
If you plan to use a Shared Ethernet Adapter with a Host Ethernet Adapter (or Integrated Virtual
Ethernet), ensure that the Logical Host Ethernet Adapter (LHEA) on the Virtual I/O Server is set to
promiscuous mode. For instructions, see Setting the LHEA to promiscuous mode on page 106.
To create a virtual Ethernet adapter on the Virtual I/O Server using the Hardware Management Console
(HMC), version 7 or later, complete the following steps:
1. In the navigation area, expand Systems Management > Servers and select the server on which the
Virtual I/O Server logical partition is located.
2. In the contents area, select the Virtual I/O Server logical partition.
3. Click Tasks and select Configuration > Manage Profiles. The Managed Profiles page is displayed.
4. Select the profile in which you want to create the Shared Ethernet Adapter and click Actions > Edit.
The Logical Partition Profile Properties page is displayed.
5. Click the Virtual Adapters tab.
6. Click Actions > Create > Ethernet adapter.
7. Select IEEE 802.1Q-compatible adapter.
8. If you are using multiple VLANs, add any additional VLAN IDs for the client logical partitions that
must communicate with the external network using this virtual adapter.
9. Select Access external network to use this adapter as a gateway between VLANs and an external
network. This Ethernet adapter is configured as part of the Shared Ethernet Adapter.
10. If you are not using Shared Ethernet Adapter failover, you can use the default trunk priority. If you
are using Shared Ethernet Adapter failover, then set the trunk priority for the primary Shared Ethernet
Adapter to a lower number than that of the backup Shared Ethernet Adapter.
11. When you are finished, click OK.
12. Assign or create one of the following real adapters:
v Assign a physical Ethernet adapter to the Virtual I/O Server.
When you are finished, configure the Shared Ethernet Adapter using the Virtual I/O Server
command-line interface or the Hardware Management Console graphical interface, version 7 release 3.4.2
or later.
Related tasks:
Configuring a Shared Ethernet Adapter
Find instructions for configuring Shared Ethernet Adapters.
To use a Shared Ethernet Adapter with a Host Ethernet Adapter (or Integrated Virtual Ethernet), you
must set the Logical Host Ethernet Adapter (LHEA) to promiscuous mode.
Before you start, use the Hardware Management Console (HMC) to determine the physical port of the
Host Ethernet Adapter that is associated with the Logical Host Ethernet port. Determine this information
for the Logical Host Ethernet port that is the real adapter of the Shared Ethernet Adapter on the Virtual
I/O Server. You can find this information in the partition properties of the Virtual I/O Server, and the
managed system properties of the server on which the Virtual I/O Server is located.
To set the Logical Host Ethernet port (that is the real adapter of the Shared Ethernet Adapter) to
promiscuous mode, complete the following steps using the HMC:
1. In the navigation area, expand Systems Management and click Servers.
2. In the contents area, select the server on which the Virtual I/O Server logical partition is located.
3. Click Tasks and select Hardware (information) > Adapters > Host Ethernet. The HEAs page is
shown.
4. Select the physical location code of the Host Ethernet Adapter.
5. Select the physical port associated with the Logical Host Ethernet port on the Virtual I/O Server
logical partition, and click Configure. The HEA Physical Port Configuration page is shown.
6. Select VIOS in the Promiscuous LPAR field.
7. Click OK twice to return to the contents area.
Before you can configure a Shared Ethernet Adapter, you must first create the adapter using the
Hardware Management Console (HMC). For instructions, see Creating a virtual Ethernet adapter using
HMC version 7 on page 105.
To configure a Shared Ethernet Adapter using the HMC, version 7 release 3.4.2 or later, see Creating a
shared Ethernet adapter for a Virtual I/O Server logical partition using the Hardware Management
Console.
To configure a Shared Ethernet Adapter using versions prior to the HMC, version 7 release 3.4.2,
complete the following steps from the Virtual I/O Server command-line interface:
1. Verify that the virtual Ethernet trunk adapter is available by running the following command:
lsdev -virtual
Notes:
v Ensure that TCP/IP is not configured on the interface for the physical Ethernet adapter. If TCP/IP
is configured, the mkvdev command in the next step fails.
v You can also use a Link Aggregation, or EtherChannel, device as the Shared Ethernet Adapter.
v If you plan to use the Host Ethernet Adapter or Integrated Virtual Ethernet with the Shared
Ethernet Adapter, ensure that you use the Logical Host Ethernet Adapter to create the Shared
Ethernet Adapter.
3. Configure the Shared Ethernet Adapter by running the following command:
mkvdev -sea target_device -vadapter virtual_ethernet_adapters \
-default DefaultVirtualEthernetAdapter -defaultid SEADefaultPVID
Where:
target_device
The physical adapter being used as part of the Shared Ethernet Adapter device.
virtual_ethernet_adapters
The virtual Ethernet adapter or adapters that will use the Shared Ethernet Adapter.
DefaultVirtualEthernetAdapter
The default virtual Ethernet adapter used to handle untagged packets. If you have only one
virtual Ethernet adapter for this logical partition, use it as the default.
SEADefaultPVID
The PVID associated with your default virtual Ethernet adapter.
For example:
v To create Shared Ethernet Adapter ent3 with ent0 as the physical Ethernet adapter (or Link
Aggregation) and ent2 as the only virtual Ethernet adapter (defined with a PVID of 1), type the
following command:
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
v To obtain the value for the SEADefaultPVID attribute in the mkvdev command, type the following
command:
entstat -all ent2 | grep "Port VLAN ID:"
a. Create a VLAN pseudo-device by running the following command:
mkvdev -vlan TargetAdapter -tagid TagID
Where:
v TargetAdapter is the Shared Ethernet Adapter.
v TagID is the VLAN ID that you defined when creating the virtual Ethernet adapter associated
with the Shared Ethernet Adapter.
For example, to create a VLAN pseudo-device using the Shared Ethernet Adapter ent3 that you
just created with a VLAN ID of 1, type the following command:
mkvdev -vlan ent3 -tagid 1
b. Verify that the VLAN pseudo-device was created by running the following command:
lsdev -virtual
c. Repeat this step for any additional VLAN pseudo-devices that you need.
9. Run the following command to configure the first TCP/IP connection. The first connection must be
on the same VLAN and logical subnet as the default gateway.
mktcpip -hostname Hostname -inetaddr Address -interface Interface -netmask \
SubnetMask -gateway Gateway -nsrvaddr NameServerAddress -nsrvdomain Domain
Where:
v Hostname is the host name of the Virtual I/O Server
v Address is the IP address you want to use for the TCP/IP connection
v Interface is the interface associated with either the Shared Ethernet Adapter device or a VLAN
pseudo-device. For example, if the Shared Ethernet Adapter device is ent3, the associated interface
is en3.
v Subnetmask is the subnet mask address for your subnet.
v Gateway is the gateway address for your subnet.
v NameServerAddress is the address of your domain name server.
v Domain is the name of your domain.
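For example, with hypothetical values, the command might be similar to the following:
mktcpip -hostname vios1 -inetaddr 10.1.1.10 -interface en3 -netmask 255.255.255.0 \
-gateway 10.1.1.1 -nsrvaddr 10.1.1.2 -nsrvdomain example.com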
If you do not have additional VLANs, then you are finished with this procedure and do not need to
complete the remaining step.
10. Run the following command to configure additional TCP/IP connections:
chdev -dev interface -perm -attr netaddr=IPaddress -attr netmask=netmask
-attr state=up
When using this command, enter the interface (enX) associated with either the Shared Ethernet
Adapter device or VLAN pseudo-device.
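For example, to bring up a second connection on a hypothetical VLAN pseudo-device interface en4, you
might run:
chdev -dev en4 -perm -attr netaddr=10.1.2.10 -attr netmask=255.255.255.0 -attr state=up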
11. Enable the Shared Ethernet Adapter device to prioritize traffic. Client logical partitions must insert a
VLAN priority value in their VLAN header. For AIX clients, a VLAN pseudo-device must be created
over the Virtual I/O Ethernet Adapter, and the VLAN priority attribute must be set (the default
value is 0). Do the following steps to enable traffic prioritization on an AIX client:
a. Set the Shared Ethernet Adapter qos_mode attribute to either strict or loose mode. Use one of the
following commands: chdev -dev <SEA device name> -attr qos_mode=strict or chdev -dev <SEA
device name> -attr qos_mode=loose. For more information about the modes, see Shared Ethernet
Adapter.
b. From the HMC, create a Virtual I/O Ethernet Adapter for the AIX client with all of the tagged
VLANs that are required (specified in the Additional VLAN ID list). Packets sent over the default
The Shared Ethernet Adapter is now configured. After you configure the TCP/IP connections for the
virtual adapters on the client logical partitions using the client logical partitions' operating systems, those
logical partitions can communicate with the external network.
Related concepts:
Shared Ethernet Adapter failover on page 73
Shared Ethernet Adapter failover provides redundancy by configuring a backup Shared Ethernet Adapter
on a different Virtual I/O Server logical partition that can be used if the primary Shared Ethernet
Adapter fails. The network connectivity in the client logical partitions continues without disruption.
Shared Ethernet Adapters on page 26
With Shared Ethernet Adapters on the Virtual I/O Server logical partition, virtual Ethernet adapters on
client logical partitions can send and receive outside network traffic.
Related information:
Creating a shared Ethernet adapter for a VIOS logical partition using the HMC
Virtual I/O Server and Integrated Virtualization Manager commands
For example, to create Link Aggregation device ent5 with physical Ethernet adapters ent3, ent4, and
backup adapter ent2, type the following:
mkvdev -lnagg ent3,ent4 -attr backup_adapter=ent2
After the Link Aggregation device is configured, you can add adapters to it, remove adapters from it, or
modify its attributes using the cfglnagg command.
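For example, to add a hypothetical adapter ent6 to the Link Aggregation device ent5, a command of the
following form might be used:
cfglnagg -add ent5 ent6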
Before you start, verify that the following statements are true:
After the virtual fibre channel adapters are created, you need to connect the virtual fibre channel adapter
on the Virtual I/O Server logical partition to the physical ports of the physical fibre channel adapter. The
physical fibre channel adapter should be connected to the physical storage that you want the associated
client logical partition to access.
Tip: If you are using the HMC, version 7 release 3.4.2 or later, you can use the HMC graphical interface
to assign the virtual fibre channel adapter on a Virtual I/O Server to a physical fibre channel adapter.
To assign the virtual fibre channel adapter to a physical port on a physical fibre channel adapter,
complete the following steps from the Virtual I/O Server command-line interface:
1. Use the lsnports command to display information for the available number of NPIV ports and
available worldwide port names (WWPNs). For example, running lsnports returns results similar to
the following:
Name Physloc fabric tports aports swwpns awwpns
-----------------------------------------------------------------------------------
fcs0 U789D.001.DQDMLWV-P1-C1-T1 1 64 64 2048 2047
fcs1 U787A.001.DPM0WVZ-P1-C1-T2 1 63 62 504 496
Note: If there are no NPIV ports in the Virtual I/O Server logical partition, the error code
E_NO_NPIV_PORTS(62) is displayed.
2. To connect the virtual fibre channel adapter on the Virtual I/O Server logical partition to a physical
port on a physical fibre channel adapter, run the vfcmap command:
vfcmap -vadapter virtual fibre channel adapter -fcp fibre channel port name
where:
v Virtual fibre channel adapter is the name of the virtual fibre channel adapter created on the Virtual
I/O Server logical partition.
v Fibre channel port name is the name of the physical fibre channel port.
Note: If no parameter is specified with the -fcp flag, the command unmaps the virtual fibre channel
adapter from the physical fibre channel port.
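For example, to map a hypothetical virtual fibre channel adapter named vfchost0 to the physical fibre
channel port fcs0, you might run:
vfcmap -vadapter vfchost0 -fcp fcs0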
3. Use the lsmap command to display the mapping between virtual host adapters and the physical
devices to which they are backed. To list NPIV mapping information, type: lsmap -all -npiv. The
system displays a message similar to the following:
Name Physloc ClntID ClntName ClntOS
---------------------------------------------------------------
vfchost0 U8203.E4A.HV40026-V1-C12 1 HV-40026 AIX
Status:NOT_LOGGED_IN
FC name:fcs0 FC loc code:U789C.001.0607088-P1-C5-T1
Ports logged in:0
Flags:1 <not_mapped, not_connected>
VFC client name: VFC client DRC:
Note: To determine the WWPNs that are assigned to a logical partition, use the Hardware
Management Console (HMC) to view the partition properties or partition profile properties of the
client logical partition.
Configuring the IBM Tivoli agents and clients on the Virtual I/O Server
You can configure and start the IBM Tivoli Monitoring agent, IBM Tivoli Usage and Accounting Manager,
the IBM Tivoli Storage Manager client, and the IBM Tivoli TotalStorage Productivity Center agents.
Related concepts:
IBM Tivoli software and the Virtual I/O Server on page 37
Learn about integrating the Virtual I/O Server into your Tivoli environment for IBM Tivoli Application
Dependency Discovery Manager, IBM Tivoli Monitoring, IBM Tivoli Storage Manager, IBM Tivoli Usage
and Accounting Manager, IBM Tivoli Identity Manager, and IBM TotalStorage Productivity Center.
Related information:
cfgsvc command
With Tivoli Monitoring System Edition for System p, you can monitor the health and availability of
multiple IBM System p servers (including the Virtual I/O Server) from the Tivoli Enterprise Portal. IBM
Tivoli Monitoring System Edition for System p gathers data from the Virtual I/O Server, including data
about physical volumes, logical volumes, storage pools, storage mappings, network mappings, real
memory, processor resources, mounted file system sizes, and so on. From the Tivoli Enterprise Portal, you
can view a graphical representation of the data, use predefined thresholds to alert you on key metrics,
and resolve issues based on recommendations provided by the Expert Advice feature of Tivoli
Monitoring.
To configure and start the monitoring agent, complete the following steps:
1. List all of the available monitoring agents using the lssvc command. For example,
$lssvc
ITM_premium
2. Based on the output of the lssvc command, decide which monitoring agent you want to configure.
For example, ITM_premium
3. List all of the attributes that are associated with the monitoring agent using the cfgsvc command. For
example:
$cfgsvc ls ITM_premium
HOSTNAME
RESTART_ON_REBOOT
MANAGING_SYSTEM
4. Configure the monitoring agent with its associated attributes using the cfgsvc command:
cfgsvc ITM_agent_name -attr Restart_On_Reboot=value hostname=name_or_address1 managing_system=name_or_address2
Where:
v ITM_agent_name is the name of the monitoring agent. For example, ITM_premium.
v value must be either TRUE or FALSE as follows:
TRUE: ITM_agent_name restarts whenever the Virtual I/O Server restarts
FALSE: ITM_agent_name does not restart whenever the Virtual I/O Server restarts
v name_or_address1 is either the hostname or IP address of the Tivoli Enterprise Monitoring Server
(TEMS) server to which ITM_agent_name sends data.
v name_or_address2 is either the hostname or IP address of the Hardware Management Console
(HMC) attached to the managed system on which the Virtual I/O Server with the monitoring agent
is located.
For example:
cfgsvc ITM_premium -attr Restart_On_Reboot=TRUE hostname=tems_server managing_system=hmc_console
In this example, the ITM_premium monitoring agent is configured to send data to tems_server, and to
restart whenever the Virtual I/O Server restarts.
5. Start the monitoring agent using the startsvc command. For example:
startsvc ITM_premium
6. From the HMC, complete the following steps so that the monitoring agent can gather information
from the HMC.
Note: After you configure a secure shell connection for one monitoring agent, you do not need to
configure it again for any additional agents.
a. Determine the name of the managed system on which the Virtual I/O Server with the monitoring
agent is located.
b. Obtain the public key for the Virtual I/O Server by running the following command:
viosvrcmd -m managed_system_name -p vios_name -c "cfgsvc ITM_agent_name -key"
Where:
v managed_system_name is the name of the managed system on which the Virtual I/O Server with
the monitoring agent or client is located.
v vios_name is the name of the Virtual I/O Server logical partition (with the monitoring agent) as
defined on the HMC.
v ITM_agent_name is the name of the monitoring agent. For example, ITM_premium.
c. Update the authorized_keys2 file on the HMC by running the mkauthkeys command:
mkauthkeys --add public_key
where public_key is the output from the viosvrcmd command in step 6b.
For example:
$ viosvrcmd -m commo126041 -p VIOS7 -c "cfgsvc ITM_premium -key"
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvjDZ
sS0guWzfzfp9BbweG0QMXv1tbDrtyWsgPbA2ExHA+xduWA51K0oFGarK2F
C7e7NjKW+UmgQbrh/KSyKKwozjp4xWGNGhLmfan85ZpFR7wy9UQG1bLgXZ
xYrY7yyQQQODjvwosWAfzkjpG3iW/xmWD5PKLBmob2QkKJbxjne+wqGwHT
RYDGIiyhCBIdfFaLZgkXTZ2diZ98rL8LIv3qb+TsM1B28AL4t+1OGGeW24
2lsB+8p4kamPJCYfKePHo67yP4NyKyPBFHY3TpTrca4/y1KEBT0Va3Pebr
5JEIUvWYs6/RW+bUQk1Sb6eYbcRJFHhN5l3F+ofd0vj39zwQ== root@vi
os7.vios.austin.ibx.com
$ mkauthkeys --add ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvjDZ
sS0guWzfzfp9BbweG0QMXv1tbDrtyWsgPbA2ExHA+xduWA51K0oFGarK2F
C7e7NjKW+UmgQbrh/KSyKKwozjp4xWGNGhLmfan85ZpFR7wy9UQG1bLgXZ
When you are finished, you can view the data gathered by the monitoring agent from the Tivoli
Enterprise Portal.
Related information:
IBM Tivoli Monitoring version 6.2.1 documentation
Tivoli Monitoring Virtual I/O Server Premium Agent User's Guide
With Virtual I/O Server 1.4, you can configure the IBM Tivoli Usage and Accounting Manager agent on
the Virtual I/O Server. Tivoli Usage and Accounting Manager helps you track, allocate, and invoice your
IT costs by collecting, analyzing, and reporting on the actual resources used by entities such as cost
centers, departments, and users. Tivoli Usage and Accounting Manager can gather data from multi-tiered
datacenters that include Windows, AIX, Virtual I/O Server, HP-UX, Sun Solaris, Linux, IBM i, and
VMware.
Before you start, ensure that the Virtual I/O Server is installed. The Tivoli Usage and Accounting
Manager agent is packaged with the Virtual I/O Server and is installed when the Virtual I/O Server is
installed. For instructions, see Installing the Virtual I/O Server and client logical partitions on page 79.
To configure and start the Tivoli Usage and Accounting Manager agent, complete the following steps:
1. Optional: Add optional variables to the A_config.par file to enhance data collection. The A_config.par
file is located at /home/padmin/tivoli/ituam/A_config.par. For more information about additional
data collectors available for the ITUAM agent on the Virtual I/O Server, see the IBM Tivoli Usage and
Accounting Manager Information Center.
2. List all of the available Tivoli Usage and Accounting Manager agents using the lssvc command. For
example,
$lssvc
ITUAM_base
3. Based on the output of the lssvc command, decide which Tivoli Usage and Accounting Manager
agent you want to configure. For example, ITUAM_base
4. List all of the attributes that are associated with the Tivoli Usage and Accounting Manager agent
using the cfgsvc command. For example:
$cfgsvc -ls ITUAM_base
ACCT_DATA0
ACCT_DATA1
ISYSTEM
IPROCESS
5. Configure the Tivoli Usage and Accounting Manager agent with its associated attributes using the
cfgsvc command:
cfgsvc ITUAM_agent_name -attr ACCT_DATA0=value1 ACCT_DATA1=value2 ISYSTEM=value3 IPROCESS=value4
Where:
v ITUAM_agent_name is the name of the Tivoli Usage and Accounting Manager agent. For example,
ITUAM_base.
v value1 is the size (in MB) of the first data file that holds daily accounting information.
v value2 is the size (in MB) of the second data file that holds daily accounting information.
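For example, a configuration of the following form could be used, followed by starting the agent with the startsvc command as for the other agents in this topic. The attribute values shown are placeholders for illustration only, not recommended settings:
cfgsvc ITUAM_base -attr ACCT_DATA0=15 ACCT_DATA1=15 ISYSTEM=60 IPROCESS=60
startsvc ITUAM_base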
After you start the Tivoli Usage and Accounting Manager agent, it begins to collect data and generate log
files. You can configure the Tivoli Usage and Accounting Manager server to retrieve the log files, which
are then processed by the Tivoli Usage and Accounting Manager Processing Engine. You can work with
the data from the Tivoli Usage and Accounting Manager Processing Engine as follows:
v You can generate customized reports, spreadsheets, and graphs. Tivoli Usage and Accounting Manager
provides full data access and reporting capabilities by integrating Microsoft SQL Server Reporting
Services or Crystal Reports with a Database Management System (DBMS).
v You can view high-level and detailed cost and usage information.
v You can allocate, distribute, or charge IT costs to users, cost centers, and organizations in a manner that
is fair, understandable, and reproducible.
For more information, see the IBM Tivoli Usage and Accounting Manager Information Center.
Related reference:
Configuration attributes for IBM Tivoli agents and clients on page 162
Learn about required and optional configuration attributes and variables for the IBM Tivoli Monitoring
agent, the IBM Tivoli Usage and Accounting Manager agent, the IBM Tivoli Storage Manager client, and
the IBM TotalStorage Productivity Center agents.
With Virtual I/O Server 1.4, you can configure the Tivoli Storage Manager client on the Virtual I/O
Server. With Tivoli Storage Manager, you can protect your data from failures and other errors by storing
backup and disaster-recovery data in a hierarchy of offline storage. Tivoli Storage Manager can help
protect computers running a variety of different operating environments, including the Virtual I/O
Server, on a variety of different hardware, including IBM System p servers. If you configure the Tivoli
Storage Manager client on the Virtual I/O Server, you can include the Virtual I/O Server in your
standard backup framework.
Before you start, ensure that the Virtual I/O Server is installed. The Tivoli Storage Manager client is
packaged with the Virtual I/O Server and is installed when the Virtual I/O Server is installed. For
instructions, see Installing the Virtual I/O Server and client logical partitions on page 79.
To configure and start the Tivoli Storage Manager client, complete the following steps:
1. List all of the available Tivoli Storage Manager clients using the lssvc command. For example,
$lssvc
TSM_base
2. Based on the output of the lssvc command, decide which Tivoli Storage Manager client you want to
configure. For example, TSM_base
3. List all of the attributes that are associated with the Tivoli Storage Manager client using the cfgsvc
command. For example:
$cfgsvc -ls TSM_base
SERVERNAME
SERVERIP
NODENAME
4. Configure the Tivoli Storage Manager client with its associated attributes using the cfgsvc command:
cfgsvc TSM_client_name -attr SERVERNAME=hostname SERVERIP=name_or_address NODENAME=vios
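For example, a configuration of the following form could be used, where the server name, IP address, and node name are placeholders for your own Tivoli Storage Manager environment:
cfgsvc TSM_base -attr SERVERNAME=tsmserver1 SERVERIP=9.3.126.88 NODENAME=vios1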
After you are finished, you are ready to back up and restore the Virtual I/O Server using the Tivoli
Storage Manager. For instructions, see the following procedures:
v Backing up the Virtual I/O Server using IBM Tivoli Storage Manager on page 136
v Restoring the Virtual I/O Server using IBM Tivoli Storage Manager on page 143
With Virtual I/O Server 1.5.2, you can configure the IBM TotalStorage Productivity Center agents on the
Virtual I/O Server. TotalStorage Productivity Center is an integrated, storage infrastructure management
suite that is designed to help simplify and automate the management of storage devices, storage
networks, and capacity utilization of file systems and databases. When you configure the TotalStorage
Productivity Center agents on the Virtual I/O Server, you can use the TotalStorage Productivity Center
user interface to collect and view information about the Virtual I/O Server.
To configure and start the TotalStorage Productivity Center agents, complete the following steps:
1. List all of the available TotalStorage Productivity Center agents using the lssvc command. For
example,
$lssvc
TPC
The TPC agent includes both the TPC_data and TPC_fabric agents. When you configure the TPC
agent, you configure both the TPC_data and TPC_fabric agents.
2. List all of the attributes that are associated with the TotalStorage Productivity Center agent using the
lssvc command. For example:
$lssvc TPC
A:
S:
devAuth:
caPass:
caPort:
amRegPort:
amPubPort:
The A, S, devAuth, and caPass attributes are required. The remainder of the attributes are optional.
For more information about the attributes, see Configuration attributes for IBM Tivoli agents and
clients on page 162.
3. Configure the TotalStorage Productivity Center agent with its associated attributes using the cfgsvc
command:
cfgsvc TPC -attr S=tpc_server_hostname A=agent_manager_hostname devAuth=password_1 caPass=password_2
Where:
v tpc_server_hostname is the host name or IP address of the TotalStorage Productivity Center server
that is associated with the TotalStorage Productivity Center agent.
v agent_manager_hostname is the name or IP address of the Agent Manager.
v password_1 is the password required to authenticate to the TotalStorage Productivity Center device
server.
v password_2 is the password required to authenticate to the common agent.
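For example, a configuration of the following form could be used, where the host names and passwords are placeholders for your own TotalStorage Productivity Center environment:
cfgsvc TPC -attr S=tpcserver1.company.com A=agentmgr1.company.com devAuth=myDevPassword caPass=myCaPassword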
4. Select the language that you want to use during the installation and configuration.
5. Accept the license agreement to install the agents according to the attributes specified in step 3.
6. Start each TotalStorage Productivity Center agent using the startsvc command:
v To start the TPC_data agent, run the following command:
startsvc TPC_data
v To start the TPC_fabric agent, run the following command:
startsvc TPC_fabric
After you start the TotalStorage Productivity Center agents, you can perform the following tasks using
the TotalStorage Productivity Center user interface:
1. Run a discovery job for the agents on the Virtual I/O Server.
2. Run probes, scans, and ping jobs to collect storage information about the Virtual I/O Server.
3. Generate reports using the Fabric Manager and the Data Manager to view the storage information
gathered.
4. View the storage information gathered using the topology Viewer.
For more information, see the IBM TotalStorage Productivity Center support for agents on a Virtual I/O Server
PDF file. To view or download the PDF file, go to the IBM TotalStorage Productivity Center v3.3.1.81
Interim Fix Web site.
Before you start, use the ioslevel command to verify that the Virtual I/O Server is at version 1.5.2, or
later.
With Virtual I/O Server 1.5.2, you can configure the IBM Director agent on the Virtual I/O Server. Using
the IBM Director agent, you can view and track hardware configuration details of the system and
monitor performance and use of critical components, such as processors, disks, and memory.
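Before the agent is started, it is configured with the cfgsvc command, following the same pattern as the other agents in this topic. A sketch, assuming the RESTART_ON_REBOOT attribute is set to TRUE:
cfgsvc DIRECTOR_agent -attr RESTART_ON_REBOOT=TRUE
where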
RESTART_ON_REBOOT designates whether the IBM Director agent restarts if the Virtual I/O Server
is rebooted.
3. Start the IBM Director agent using the startsvc command. To start the DIRECTOR_agent agent, run
the following command:
startsvc DIRECTOR_agent
Related concepts:
IBM Systems Director software on page 39
Learn about integrating the Virtual I/O Server into your IBM Systems Director environment.
Related information:
cfgsvc command
To configure the Virtual I/O Server as an LDAP client, complete the following steps:
1. Change Virtual I/O Server users to LDAP users by running the following command:
chuser -ldap username
where username is the name of the user you want to change to an LDAP user.
2. Set up the LDAP client by running the following command:
mkldap -host ldapserv1 -bind cn=admin -passwd adminpwd
Where:
v ldapserv1 is the LDAP server or list of LDAP servers to which you want the Virtual I/O Server to
be an LDAP client
v cn=admin is the administrator DN of ldapserv1
v adminpwd is the password for cn=admin
Configuring the LDAP client automatically starts communication between the LDAP server and the
LDAP client (the Virtual I/O Server). To stop communication, use the stopnetsvc command.
Notes:
v A maximum that is lower than 11 can be incompatible with newer versions of the Hardware
Management Console (HMC).
v The maximum slot number can be greater than 11. Excess virtual slots use a small amount of
additional memory but have no other effects.
v All customer-defined virtual Ethernet, virtual serial, and virtual SCSI slots must use virtual slot IDs 11
or greater.
Note: For existing virtual SCSI adapters, you must map all client profiles to the new server adapters.
These configuration rules apply to partitions on POWER6 systems only. In a mixture of POWER5 and
POWER6 systems on a V7 HMC, the POWER5 systems can use slots 0 through 10.
Most of the information in this topic is specific to management in an HMC environment. For information
about management tasks in an Integrated Virtualization Manager environment, see Integrated
Virtualization Manager.
Managing storage
You can import and export volume groups and storage pools, map virtual disks to physical disks,
increase virtual SCSI device capacity, change the virtual SCSI queue depth, back up and restore files and
file systems, and collect and view information using the IBM TotalStorage Productivity Center.
Importing and exporting volume groups and logical volume storage pools
You can use the importvg and exportvg commands to move a user-defined volume group from one
system to another.
Consider the following when importing and exporting volume groups and logical volume storage pools:
v The import procedure introduces the volume group to its new system.
v You can use the importvg command to reintroduce a volume group or logical volume storage pool to
the system that it had been previously associated with and had been exported from.
v The importvg command changes the name of an imported logical volume if a logical volume of that
name already exists on the new system. If the importvg command must rename a logical volume, it
prints an error message to standard error.
v The export procedure removes the definition of a volume group from a system.
v You can use the importvg and exportvg commands to add a physical volume that contains data to a
volume group by putting the disk to be added in its own volume group.
v The rootvg volume group cannot be exported or imported.
You can use the importvg command to import a volume group or logical volume storage pool.
To import a volume group or logical volume storage pool, complete the following steps:
1. Run the following command to import the volume group or logical volume storage pool:
importvg -vg volumeGroupName physicalVolumeName
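For example, the following command (with a placeholder volume group name and the name of a physical volume that belongs to that volume group) imports the volume group vg_data from the disk hdisk5:
importvg -vg vg_data hdisk5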
To export a volume group or logical volume storage pool, see Exporting volume groups and logical
volume storage pools.
You can use the exportvg command to export a volume group or logical volume storage pool.
Before you export the volume group or logical volume storage pool, complete the following steps:
1. Determine whether the volume group or logical volume storage pool that you plan to export is a
parent of the virtual media repository or of a file storage pool, for example by using the lssp
command. The results list the parent volume group or logical volume storage pool of each file
storage pool.
2. If the volume group or logical volume storage pool that you plan to export is a parent of the virtual
media repository or a file storage pool, then complete the following steps.
If the volume group or logical volume storage pool is a parent of the virtual media repository,
complete the following steps:
1. Unload the backing device of each file-backed optical virtual target device (VTD) that has a media
file loaded, by completing the following steps:
a. Retrieve a list of the file-backed optical VTDs by running the following command:
lsmap -all -type file_opt
b. For each device that shows a backing device, run the following command to unload the backing
device:
unloadopt -vtd VirtualTargetDevice
2. Unmount the Virtual Media Repository file system by running the following command:
unmount /var/vio/VMLibrary
If the volume group or logical volume storage pool is a parent of a file storage pool, complete the
following steps:
1. Unconfigure the virtual target devices (VTDs) associated with the files contained in the file storage
pools by completing the following steps:
a. Retrieve a list of VTDs by running the following command:
lssp -bd -sp FilePoolName
where FilePoolName is the name of a file storage pool that is a child of the volume group or logical
volume storage pool that you plan to export.
b. For each file that lists a VTD, run the following command:
rmdev -dev VirtualTargetDevice -ucfg
2. Unmount the file storage pool by running the following command:
unmount /var/vio/storagepools/FilePoolName
To export the volume group or logical volume storage pool, run the following commands:
1. deactivatevg VolumeGroupName
2. exportvg VolumeGroupName
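For example, to export a volume group or storage pool named vg_data (the name is a placeholder), run:
deactivatevg vg_data
exportvg vg_data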
To import a volume group or logical volume storage pool, see Importing volume groups and logical
volume storage pools on page 118.
This procedure shows how to map a virtual SCSI disk on an AIX client logical partition to the physical
device (disk or logical volume) on the Virtual I/O Server.
To map a virtual disk to a physical disk, you need the following information. This information is
gathered during this procedure:
v Virtual device name
v Slot number of the virtual SCSI client adapter
v Logical unit number (LUN) of the virtual SCSI device
v Client logical partition ID
Follow these steps to map a virtual disk on an AIX client logical partition to its physical disk on the
Virtual I/O Server:
1. Display virtual SCSI device information on the AIX client logical partition by typing the following
command:
lscfg -l devicename
The logical partition ID is the first number listed. In this example, the logical partition ID is 2. This
number is used in the next step.
c. Type exit.
5. If you have multiple Virtual I/O Server logical partitions running on your system, determine which
Virtual I/O Server logical partition is serving the virtual SCSI device. The slot number of the client
adapter is linked to a Virtual I/O Server and to a server adapter. Use the HMC command line to list
information about virtual SCSI client adapters in the client logical partition.
Log in to the HMC, and from the HMC command line, type lshwres. Specify the managed system
name for the -m parameter and the client logical partition ID for the lpar_ids filter.
Note:
v The managed system name, which is used for the -m parameter, is determined by typing lssyscfg
-r sys -F name from the HMC command line.
v Use the client logical partition ID recorded in Step 4 for the lpar_ids filter.
For example:
lshwres -r virtualio --rsubtype scsi -m fumi --filter lpar_ids=2
Record the name of the Virtual I/O Server located in the remote_lpar_name field and the slot number
of the virtual SCSI server adapter, which is located in the remote_slot_num field. In this example, the
name of the Virtual I/O Server is fumi01 and the slot number of the virtual SCSI server adapter is 2.
6. Log in to the Virtual I/O Server.
7. List virtual adapters and devices on the Virtual I/O Server by typing the following command:
lsmap -all
8. Find the virtual SCSI server adapter (vhostX) that has a slot ID that matches the remote slot ID
recorded in Step 5. On that adapter, run the following command:
lsmap -vadapter devicename
9. From the list of devices, match the LUN recorded in Step 3 with LUNs listed. This is the physical
device.
You can increase the capacity of your virtual SCSI devices by increasing the size of physical or logical
volumes. With Virtual I/O Server version 1.3 and later, you can do this without disrupting client
operations.
Tip: If you are using the HMC, version 7 release 3.4.2 or later, you can use the HMC graphical interface
to increase the capacity of a virtual SCSI device on a Virtual I/O Server.
In this example, after you run the chvg -g vg1 command on the client logical partition, AIX examines all
the disks in volume group vg1 to see whether they have grown in size. For the disks that have grown in
size, AIX attempts to add additional physical partitions to the physical volumes. If necessary, AIX
determines the proper 1016 multiplier and converts the volume group to a big volume group.
Related information:
chvg Command
chlv Command
IBM System p Advanced POWER Virtualization Best Practices RedPaper
Changing a storage pool for a VIOS logical partition using the HMC
The virtual SCSI queue depth value determines how many requests the disk head driver will queue to
the virtual SCSI client driver at any one time. For AIX and Linux client logical partitions, you can change
this value from the default value of 3 to any value from 1 to 256. You modify this value using the chdev
command. For IBM i client logical partitions, the queue depth value is 32 and cannot be changed.
Increasing this value might improve the throughput of the disk in specific configurations. However,
several factors must be taken into consideration. These factors include the value of the queue-depth
attribute for all of the physical storage devices on the Virtual I/O Server being used as a virtual target
device by the disk instance on the client logical partition, and the maximum transfer size for the virtual
SCSI client adapter instance that is the parent device for the disk instance.
For AIX and Linux client logical partitions, the maximum transfer size for virtual SCSI client adapters is
set by the Virtual I/O Server, which determines the value based on the resources available on the server
and the maximum transfer size set for the physical storage devices on that server. Other factors include
the queue depth and maximum transfer size of other devices involved in mirrored-volume-group or
Multipath I/O (MPIO) configurations. Increasing the queue depth for some devices might reduce the
resources available for other devices on that same shared adapter and decrease the throughput for those
devices. For IBM i client logical partitions, the queue depth value is 32 and cannot be changed.
where hdiskN represents the name of a physical volume and value is the value that you assign, between 1 and 256.
To view the current setting for the queue_depth value, from the client logical partition issue the following
command:
lsattr -El hdiskN
Backing up and restoring files and file systems can be useful for tasks, such as saving IBM i to physical
tape or saving a file-backed device.
The following commands are used to back up and restore files and file systems.
Table 33. Backup and restore commands and their descriptions
Command Description
backup Backs up files and file systems to media, such as physical tape and disk. For example:
v You can back up all the files and subdirectories in a directory using full path names or
relative path names.
v You can back up the root file system.
v You can back up all the files in the root file system that have been modified since the
last backup.
v You can back up virtual optical media files from the virtual media repository.
restore Reads archives created by the backup command and extracts the files stored there. For
example:
v You can restore a specific file into the current directory.
v You can restore a specific file from tape into the virtual media repository.
v You can restore a specific directory and the contents of that directory from a file name
archive or a file system archive.
v You can restore an entire file system.
v You can restore only the permissions or only the ACL attributes of the files from the
archive.
With Virtual I/O Server 1.5.2, you can install and configure the TotalStorage Productivity Center agents
on the Virtual I/O Server. TotalStorage Productivity Center is an integrated, infrastructure management
suite for storage that is designed to help simplify and automate the management of storage devices,
storage networks, and capacity utilization of file systems and databases. When you install and configure
the TotalStorage Productivity Center agents on the Virtual I/O Server, you can use the TotalStorage
Productivity Center interface to collect and view information about the Virtual I/O Server. You can then
perform the following tasks using the TotalStorage Productivity Center interface:
1. Run a discovery job for the agents on the Virtual I/O Server.
2. Run probes, scans, and ping jobs to collect storage information about the Virtual I/O Server.
For more information, see Configuring the IBM TotalStorage Productivity Center agents on page 115.
Managing networks
You can change the network configuration of the Virtual I/O Server logical partition, enable and disable
GARP VLAN Registration Protocol (GVRP) on your Shared Ethernet Adapters, use Simple Network
Management Protocol (SNMP) to manage systems and devices in complex networks, and upgrade to
Internet Protocol version 6 (IPv6).
Changing the network configuration of the Virtual I/O Server logical partition
Follow these steps to change or remove the network settings on the Virtual I/O Server logical partition,
such as the IP address, subnet mask, gateway, and nameserver address.
In this scenario, the Virtual I/O Server logical partition already has its network configuration set. The
current configuration will be removed, and the updated configuration will then be set. If you plan to
undo your Internet Protocol version 6 (IPv6) configuration, use the following process and commands to
completely remove the TCP/IP interface and then configure a new TCP/IP interface for Internet Protocol
version 4 (IPv4).
1. View the current network configuration using the lstcpip command.
2. Remove the current network configuration by running the rmtcpip command. You can remove all
network settings or just the specific settings that need to be updated.
3. Configure the new network settings using the mktcpip command.
The following example is for IPv4 where the Virtual I/O Server logical partition needs to have its domain
name server (DNS) information updated from its current address to 9.41.88.180:
1. Run lstcpip -namesrv to view the current configuration. Ensure you want to update this
configuration.
2. Run rmtcpip -namesrv to remove the current configuration.
3. Run mktcpip -nsrvaddr 9.41.88.180 to update the nameserver address.
With Virtual I/O Server version 1.4, Shared Ethernet Adapters support GARP VLAN Registration
Protocol (GVRP) which is based on GARP (Generic Attribute Registration Protocol). GVRP allows for the
dynamic registration of VLANs over networks.
Before you start, create and configure the Shared Ethernet Adapter. For instructions, see Creating a
virtual Ethernet adapter using HMC version 7 on page 105.
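To enable or disable GVRP, a chdev command of the following form is used on the Shared Ethernet Adapter (a sketch; verify the gvrp attribute name for your Virtual I/O Server level):
chdev -dev Name -attr gvrp=yes/no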
Where:
v Name is the name of the Shared Ethernet Adapter.
v yes/no defines whether GVRP is enabled or disabled. Type yes to enable GVRP and type no to disable
GVRP.
Simple Network Management Protocol (SNMP) is a set of protocols for monitoring systems and devices
in complex networks. SNMP network management is based on the familiar client-server model that is
widely used in Internet protocol (IP) network applications. Each managed host runs a process called an
agent. The agent is a server process that maintains information about managed devices in the
Management Information Base (MIB) database for the host. Hosts that are involved in network
management decision-making can run a process called a manager. A manager is a client application that
generates requests for MIB information and processes responses. In addition, a manager might send
requests to agent servers to modify MIB information.
In general, network administrators use SNMP to more easily manage their networks for the following
reasons:
v It hides the underlying system network
v The administrator can manage and monitor all network components from one console
The following table lists the SNMP management tasks available on the Virtual I/O Server, as well as the
commands you need to run to accomplish each task.
Table 34. Tasks and associated commands for working with SNMP on the Virtual I/O Server
Task Command
Enable SNMP startnetsvc
Select which SNMP agent you want to run snmpv3_ssw
Issue SNMP requests to agents cl_snmp
Process SNMP responses returned by agents cl_snmp
Request MIB information managed by an SNMP agent snmp_info
Modify MIB information managed by an SNMP agent snmp_info
Generate a notification, or trap, that reports an event to snmp_trap
the SNMP manager with a specified message
Disable SNMP stopnetsvc
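For example, the SNMP agent can be enabled and later disabled as follows, assuming snmp is the network service name accepted by the startnetsvc and stopnetsvc commands on your Virtual I/O Server level:
startnetsvc snmp
stopnetsvc snmp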
Related information:
Network Management
IPv6 is the next generation of Internet protocol and is gradually replacing the current Internet standard,
Internet Protocol version 4 (IPv4). The key IPv6 enhancement is the expansion of the IP address space
from 32 bits to 128 bits, providing virtually unlimited, unique IP addresses. IPv6 provides several
advantages over IPv4 including expanded routing and addressing, routing simplification, header format
simplification, improved traffic control, autoconfiguration, and security.
Run the following command to upgrade the Virtual I/O Server from IPv4 to IPv6:
mktcpip -auto [-interface interface]
If you decide that you want to undo the IPv6 configuration, you must completely remove the TCP/IP
interface and then configure a new TCP/IP interface for IPv4. For instructions, see Changing the
network configuration of the Virtual I/O Server logical partition on page 125.
After subscribing, you are notified of all Virtual I/O Server news and product updates.
Notes:
v The updateios command installs all updates located in the specified directory.
v To perform Live Partition Mobility after you install an update to the VIOS, ensure that you restart
the HMC.
The VIOS contains the following types of information that you need to back up: the VIOS itself and
user-defined virtual devices.
v The VIOS includes the base code, applied fix packs, custom device drivers to support disk subsystems,
and some user-defined metadata. All this information is backed up when you use the backupios
command.
v User-defined virtual devices include metadata, such as virtual device mappings, that define the
relationship between the physical environment and the virtual environment. You can back up
user-defined virtual devices in one of the following ways:
Related tasks:
Restoring the Virtual I/O Server on page 137
You can restore the Virtual I/O Server (VIOS) and user-defined virtual devices using the installios
command, the viosbr command, or IBM Tivoli Storage Manager.
Related information:
backupios command
viosbr command
If the system is managed by the Integrated Virtualization Manager, then you need to back up your
partition profile data for the management partition and its clients before you back up the Virtual I/O
Server. For instructions, see Backing up and restoring partition data. (Alternatively, you can use the
bkprofdata command.)
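The backup itself is created with the backupios command and the -tape option; a sketch, assuming the tape device name is /dev/rmt0 (the actual name can be listed with lsdev -type tape):
backupios -tape /dev/rmt0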
This command creates a bootable tape that you can use to restore the Virtual I/O Server.
4. If you plan to restore the Virtual I/O Server to a different system from which it was backed up, you
need to back up the user-defined virtual devices. For instructions, see Backing up user-defined
virtual devices by using the backupios command on page 131.
Related information:
IBM System p Advanced POWER Virtualization Best Practices RedPaper
If the system is managed by the Integrated Virtualization Manager, then you need to back up your
partition profile data for the management partition and its clients before you back up the Virtual I/O
Server. For instructions, see Backing up and restoring partition data. (Alternatively, you can use the
bkprofdata command.)
To back up the Virtual I/O Server to one or more DVDs, follow these steps. Only DVD-RAM media can
be used to back up the Virtual I/O Server.
Note: Vendor disk drives might support burning to additional disk types, such as CD-RW and DVD-R.
Refer to the documentation for your drive to determine which disk types are supported.
1. Assign an optical drive to the Virtual I/O Server logical partition.
2. Get the device name by typing the following command:
lsdev -type optical
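3. Run the backupios command with the -cd option, specifying the path to the optical device. A sketch,
assuming the device name returned in step 2 is /dev/cd0:
backupios -cd /dev/cd0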
Note: If the Virtual I/O Server does not fit on one DVD, then the backupios command provides
instructions for disk replacement and removal until all the volumes have been created.
This command creates one or more bootable DVDs that you can use to restore the Virtual I/O Server.
4. If you plan to restore the Virtual I/O Server to a different system from which it was backed up, then
you need to back up the user-defined virtual devices. For instructions, see Backing up user-defined
virtual devices by using the backupios command on page 131.
Related information:
IBM System p Advanced POWER Virtualization Best Practices RedPaper
The backupios command empties the target_disks_stanza section of bosinst.data and sets
RECOVER_DEVICES=Default. This allows the mksysb file generated by the command to be cloned to another
logical partition. If you plan to use the nim_resources.tar image to install to a specific disk, then you need
to repopulate the target_disk_stanza section of bosinst.data and replace this file in the nim_resources.tar
image. All other parts of the nim_resources.tar image must remain unchanged.
To back up the Virtual I/O Server to a remote file system, follow these steps:
1. Create a mount directory where the backup image, nim_resources.tar, will be written. For example, to
create the directory /home/backup, type:
mkdir /home/backup
2. Mount an exported directory on the mount directory. For example:
mount server1:/export/ios_backup /home/backup
3. Run the backupios command with the -file option. Specify the path to the mounted directory. For
example:
backupios -file /home/backup
This command creates a nim_resources.tar file that you can use to restore the Virtual I/O Server from
the HMC.
4. If you plan to restore the Virtual I/O Server to a different system from which it was backed up, then
you need to back up the user-defined virtual devices. For instructions, see Backing up user-defined
virtual devices by using the backupios command on page 131.
Related information:
IBM System p Advanced POWER Virtualization Best Practices RedPaper
Backing up the Virtual I/O Server to a remote file system by creating a mksysb
image
You can back up the Virtual I/O Server base code, applied fix packs, custom device drivers to support
disk subsystems, and some user-defined metadata to a remote file system by creating a mksysb file.
Backing up the Virtual I/O Server to a remote file system will create the mksysb image in the directory
you specify. The mksysb image is an installable image of the root volume group in a file.
To back up the Virtual I/O Server to a remote file system, follow these steps:
1. Create a mount directory where the backup image, mksysb image, will be written. For example, to
create the directory /home/backup, type:
mkdir /home/backup
2. Mount an exported directory on the mount directory. For example:
mount server1:/export/ios_backup /home/backup
where server1 is the NIM server from which you plan to restore the Virtual I/O Server.
3. Run the backupios command with the -file option. Specify the path to the mounted directory. For
example:
backupios -file /home/backup/filename.mksysb -mksysb
where filename is the name of the mksysb image that this command creates in the specified directory. You
can use the mksysb image to restore the Virtual I/O Server from a NIM server.
4. If you plan to restore the Virtual I/O Server to a different system from which it was backed up, then
you need to back up the user-defined virtual devices. For instructions, see Backing up user-defined
virtual devices by using the backupios command.
User-defined virtual devices include metadata, such as virtual device mappings, that define the
relationship between the physical environment and the virtual environment. You can back up
user-defined virtual devices in one of the following ways:
v You can back up user-defined virtual devices by saving the data to a location that is automatically
backed up when you use the backupios command to back up the VIOS. Use this option in situations
where you plan to restore the VIOS to a new or different system. (For example, in the event of a
system failure or disaster.)
v You can back up user-defined virtual devices by using the viosbr command. Use this option in
situations where you plan to restore the configuration information to the same VIOS partition from
which it was backed up.
Related tasks:
Restoring user-defined virtual devices on page 141
You can restore user-defined virtual devices on the Virtual I/O Server (VIOS) by restoring volume groups
and manually re-creating virtual device mappings. Alternatively, you can restore user-defined virtual
devices by using the viosbr command.
In addition to backing up the Virtual I/O Server (VIOS), you must back up user-defined virtual devices
(such as virtual device mappings) in case you have a system failure or disaster. In this situation, back up
user-defined virtual devices by saving the data to a location that is automatically backed up when you
use the backupios command to back up the VIOS.
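The backup procedure begins by listing and activating the volume groups; a sketch of those first steps, using the lsvg and activatevg commands:
1. List the volume groups (and storage pools) by running the following command:
lsvg
2. Activate each volume group (and storage pool) that you want to back up by running the following
command for each volume group:
activatevg volume_group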
where volume_group is the name of the volume group (or storage pool) that you want to activate.
3. Back up each volume group (and storage pool) by running the following command for each volume
group:
savevgstruct volume_group
where volume_group is the name of the volume group (or storage pool) that you want to back up. This
command writes a backup of the structure of a volume group (and therefore a storage pool) to the
/home/ios/vgbackups directory.
4. Save the information about network settings, adapters, users, and security settings to the
/home/padmin directory by running each command with the tee command as follows:
command | tee /home/padmin/filename
Where:
v command is the command that produces the information you want to save.
v filename is the name of the file to which you want to save the information.
Table 36. Commands that provide the information to save
Command Information provided
cfgnamesrv -ls Shows all system configuration database entries related
to domain name server information used by local
resolver routines.
entstat -all devicename Shows Ethernet driver and device statistics for the device
specified.
devicename is the name of a device whose attributes or
statistics you want to save. Run this command for each
device whose attributes or statistics you want to save.
hostmap -ls Shows all entries in the system configuration database.
Related tasks:
Scheduling backups of the Virtual I/O Server and user-defined virtual devices by creating a script and
crontab file entry on page 134
You can schedule regular backups of the Virtual I/O Server (VIOS) and user-defined virtual devices to
ensure that your backup copy accurately reflects the current configuration.
Backing up user-defined virtual devices by using the viosbr command
You can back up user-defined virtual devices by using the viosbr command. Use the viosbr command
when you plan to restore the information to the same Virtual I/O Server (VIOS) logical partition from
which it was backed up.
Related information:
IBM System p Advanced POWER Virtualization Best Practices RedPaper
You can back up user-defined virtual devices by using the viosbr command. Use the viosbr command
when you plan to restore the information to the same Virtual I/O Server (VIOS) logical partition from
which it was backed up.
You can use the viosbr command to back up all the relevant data to recover a VIOS after an installation.
The viosbr command backs up all the device properties and the virtual devices configuration on the
VIOS. You can include information about some or all of the following devices in the backup:
v Logical devices, such as storage pools, file-backed storage pools, the virtual media repository, and
paging space devices.
v Virtual devices, such as Etherchannel, Shared Ethernet Adapter, virtual server adapters, and virtual
server fibre channel adapters.
v Device attributes for devices like disks, optical devices, tape devices, fscsi controllers, Ethernet
adapters, Ethernet interfaces, and logical Host Ethernet Adapters.
Before you start, run the ioslevel command to verify that the VIOS is at version 2.1.2.0, or later.
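To back up the configuration information, run the viosbr command with the -backup and -file options. For example (the file path is a placeholder):
viosbr -backup -file /tmp/myserverbackup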
where /tmp/myserverbackup is the file to which you want to back up the configuration information.
Related tasks:
Restoring user-defined virtual devices by using the viosbr command on page 142
You can restore user-defined virtual devices by using the viosbr command. Use the viosbr command
when you plan to restore the information to the same Virtual I/O Server (VIOS) logical partition from
which it was backed up.
Scheduling backups of user-defined virtual devices by using the viosbr command on page 135
You can schedule regular backups of the user-defined virtual devices on the Virtual I/O Server (VIOS)
logical partition. Scheduling regular backups ensures that your backup copy accurately reflects the
current configuration.
Backing up user-defined virtual devices by using the backupios command on page 131
In addition to backing up the Virtual I/O Server (VIOS), you must back up user-defined virtual devices
(such as virtual device mappings) in case you have a system failure or disaster. In this situation, back up
user-defined virtual devices by saving the data to a location that is automatically backed up when you
use the backupios command to back up the VIOS.
Related information:
ioslevel command
viosbr command
Scheduling backups of the Virtual I/O Server and user-defined virtual devices
You can schedule regular backups of the Virtual I/O Server (VIOS) and user-defined virtual devices to
ensure that your backup copy accurately reflects the current configuration.
To ensure that your backup of the VIOS accurately reflects your current running VIOS, back up the VIOS
and the user-defined virtual devices each time that the configuration changes. For example:
v Changing the VIOS, like installing a fix pack.
v Adding, deleting, or changing the external device configuration, like changing the SAN configuration.
v Adding, deleting, or changing resource allocations and assignments for the VIOS, like memory,
processors, or virtual and physical devices.
v Adding, deleting, or changing user-defined virtual device configurations, like virtual device mappings.
Scheduling backups of the Virtual I/O Server and user-defined virtual devices by creating a script and
crontab file entry:
You can schedule regular backups of the Virtual I/O Server (VIOS) and user-defined virtual devices to
ensure that your backup copy accurately reflects the current configuration.
To ensure that your backup of the VIOS accurately reflects your current running VIOS, back up the VIOS
each time that its configuration changes (see the examples of configuration changes listed previously).
Before you start, ensure that you are logged in to the VIOS as the prime administrator (padmin).
To back up the VIOS and user-defined virtual devices, complete the following tasks:
1. Create a script for backing up the VIOS, and save it in a directory that is accessible to the padmin
user ID. For example, create a script called backup and save it in the /home/padmin directory. Ensure
that your script includes the following information (a sketch of such a script follows this list):
v The backupios command for backing up the VIOS.
v Commands for saving information about user-defined virtual devices.
v Commands to save the virtual devices information to a location that is automatically backed up
when you use the backupios command to back up the VIOS.
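A minimal sketch of such a script is shown next; the mapping file names, the storage pool name clientvg, and the backup target /home/backup are placeholders, so adjust the commands for your own configuration:
# Save user-defined virtual device information to files in /home/padmin,
# which backupios includes in the VIOS backup
lsmap -all | tee /home/padmin/vscsi_mappings
lsmap -all -net | tee /home/padmin/network_mappings
# Save the structure of each user-defined volume group or storage pool
savevgstruct clientvg
# Back up the VIOS itself to a mounted remote file system
backupios -file /home/backup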
2. Create a crontab file entry that runs the backup script on a regular interval. For example, to run backup
every Saturday at 2:00 a.m., type the following commands:
a. crontab -e
b. 0 2 * * 6 /home/padmin/backup
When you are finished, remember to save and exit.
Related information:
backupios command
crontab command
IBM System p Advanced POWER Virtualization Best Practices RedPaper
You can schedule regular backups of the user-defined virtual devices on the Virtual I/O Server (VIOS)
logical partition. Scheduling regular backups ensures that your backup copy accurately reflects the
current configuration.
To ensure that your backup of the user-defined virtual devices accurately reflects your currently running
VIOS, back up the configuration information of the user-defined virtual devices each time that the
configuration changes.
Before you start, run the ioslevel command to verify that the VIOS is at version 2.1.2.0, or later.
To back up the configuration information of the user-defined virtual devices, run the viosbr command as
follows:
viosbr -backup -file /tmp/myserverbackup -frequency how_often
where:
v /tmp/myserverbackup is the file to which you want to back up the configuration information.
v how_often is the frequency with which you want to back up the configuration information. You can
specify one of the following values:
- daily: Daily backups occur every day at 00:00.
- weekly: Weekly backups occur every Sunday at 00:00.
- monthly: Monthly backups occur on the first day of every month at 00:01.
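For example, the following command (with a placeholder file name) backs up the configuration information every day at 00:00:
viosbr -backup -file /tmp/myserverbackup -frequency daily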
Related tasks:
Backing up the Virtual I/O Server using IBM Tivoli Storage Manager
You can use the IBM Tivoli Storage Manager to automatically back up the Virtual I/O Server on regular
intervals, or you can perform incremental backups.
Backing up the Virtual I/O Server using IBM Tivoli Storage Manager automated backup:
You can automate backups of the Virtual I/O Server using the crontab command and the IBM Tivoli
Storage Manager scheduler.
To automate backups of the Virtual I/O Server, complete the following steps:
1. Write a script that creates a mksysb image of the Virtual I/O Server and save it in a directory that is
accessible to the padmin user ID. For example, create a script called backup and save it in the
/home/padmin directory. If you plan to restore the Virtual I/O Server to a different system from which
it was backed up, then ensure that your script includes commands for saving information about
user-defined virtual devices. For more information, see the following tasks:
v For instructions about how to create a mksysb image, see Backing up the Virtual I/O Server to a
remote file system by creating a mksysb image on page 130.
v For instructions about how to save user-defined virtual devices, see Backing up user-defined
virtual devices by using the backupios command on page 131.
2. Create a crontab file entry that runs the backup script on a regular interval. For example, to create a
mksysb image every Saturday at 2:00 a.m., type the following commands:
a. crontab -e
b. 0 2 * * 6 /home/padmin/backup
When you are finished, remember to save and exit.
3. Work with the Tivoli Storage Manager administrator to associate the Tivoli Storage Manager client
node with one or more schedules that are part of the policy domain. This task is not performed on
the Tivoli Storage Manager client on the Virtual I/O Server. This task is performed by the Tivoli
Storage Manager administrator on the Tivoli Storage Manager server.
4. Start the client scheduler and connect to the server schedule using the dsmc command as follows:
dsmc -schedule
5. If you want the client scheduler to restart when the Virtual I/O Server restarts, then add the
following entry to the /etc/inittab file:
itsm::once:/usr/bin/dsmc sched > /dev/null 2>&1 # TSM scheduler
Related information:
IBM Tivoli Storage Manager for UNIX and Linux Backup-Archive Clients Installation and User's
Guide
You can back up the Virtual I/O Server at any time by performing an incremental backup with the IBM
Tivoli Storage Manager.
Perform incremental backups in situations where the automated backup does not suit your needs. For
example, before you upgrade the Virtual I/O Server, perform an incremental backup to ensure that you
have a backup of the current configuration. Then, after you upgrade the Virtual I/O Server, perform
another incremental backup to ensure that you have a backup of the upgraded configuration.
To perform an incremental backup of the Virtual I/O Server, run the dsmc command. For example,
dsmc -incremental sourcefilespec
Where sourcefilespec is the directory path to where the mksysb file is located. For example,
/home/padmin/mksysb_image.
Related information:
IBM Tivoli Storage Manager for UNIX and Linux Backup-Archive Clients Installation and User's
Guide
The VIOS contains the following types of information that you need to restore: the VIOS itself and
user-defined virtual devices.
v The VIOS includes the base code, applied fix packs, custom device drivers to support disk subsystems,
and some user-defined metadata. All this information is restored when you use the installios
command.
v User-defined virtual devices include metadata, such as virtual device mappings, that define the
relationship between the physical environment and the virtual environment. You can restore
user-defined virtual devices in one of the following ways:
You can restore user-defined virtual devices by using the viosbr command. Use this option in
situations where you plan to restore the configuration information to the same VIOS partition from
which it was backed up.
You can restore user-defined virtual devices by restoring the volume groups and manually
recreating virtual device mappings. Use this option in situations where you plan to restore the VIOS
to a new or different system. (For example, in the event of a system failure or disaster.) Furthermore,
in these situations, you also need to restore the following components of your environment. Back up
these components to fully recover your VIOS configuration:
- External device configurations, such as Storage Area Network (SAN) devices.
Note: To perform Live Partition Mobility after you restore the VIOS, ensure that you restart the HMC.
Related tasks:
Backing up the Virtual I/O Server on page 127
You can back up the Virtual I/O Server (VIOS) and user-defined virtual devices using the backupios
command or the viosbr command. You can also use IBM Tivoli Storage Manager to schedule backups
and to store backups on another server.
Related information:
installios command
viosbr command
If the system is managed by the Integrated Virtualization Manager, then you need to restore your
partition profile data for the management partition and its clients before you restore the Virtual I/O
Server. For instructions, see Backing up and restoring partition data. (Alternatively, you can use the
rstprofdata command.)
To restore the Virtual I/O Server from tape, follow these steps:
1. Specify the Virtual I/O Server logical partition to boot from the tape by using the bootlist command.
Alternatively, you can alter the bootlist in the System Management Services (SMS).
2. Insert the tape into the tape drive.
3. From the SMS menu, select to install from the tape drive.
4. Follow the installation steps according to the system prompts.
5. If you restored the Virtual I/O Server to a different system from which it was backed up, then you
need to restore the user-defined virtual devices. For instructions, see Restoring user-defined virtual
devices manually on page 141.
Related information:
IBM System p Advanced POWER Virtualization Best Practices RedPaper
If the system is managed by the Integrated Virtualization Manager, then you need to restore your
partition profile data for the management partition and its clients before you restore the Virtual I/O
Server. For instructions, see Backing up and restoring partition data. (Alternatively, you can use the
rstprofdata command.)
To restore the Virtual I/O Server from one or more DVDs, follow these steps:
1. Specify the Virtual I/O Server partition to boot from the DVD by using the bootlist command.
Alternatively, you can alter the bootlist in the System Management Services (SMS).
2. Insert the DVD into the optical drive.
3. From the SMS menu, select to install from the optical drive.
4. Follow the installation steps according to the system prompts.
5. If you restored the Virtual I/O Server to a different system from which it was backed up, then you
need to restore the user-defined virtual devices. For instructions, see Restoring user-defined virtual
devices manually on page 141.
Related information:
IBM System p Advanced POWER Virtualization Best Practices RedPaper
Restoring the Virtual I/O Server from the HMC using a nim_resources.tar file
You can restore the Virtual I/O Server base code, applied fix packs, custom device drivers to support
disk subsystems, and some user-defined metadata from a nim_resources.tar image stored in a remote file
system.
If the system is managed by the Integrated Virtualization Manager, then you need to restore your
partition profile data for the management partition and its clients before you restore the Virtual I/O
Server. For instructions, see Backing up and restoring partition data. (Alternatively, you can use the
rstprofdata command.)
To restore the Virtual I/O Server from a nim_resources.tar image in a file system, complete the following
steps:
1. Run the installios command from the HMC command line. This restores a backup image,
nim_resources.tar, that was created using the backupios command.
2. Follow the installation procedures according to the system prompts. The source of the installation
images is the exported directory from the backup procedure. For example, server1:/export/
ios_backup.
3. When the restoration is finished, open a virtual terminal connection (for example, using telnet) to the
Virtual I/O Server that you restored. Some additional user input might be required.
4. If you restored the Virtual I/O Server to a different system from which it was backed up, you must
restore the user-defined virtual devices. For instructions, see Restoring user-defined virtual devices
manually on page 141.
Related information:
IBM System p Advanced POWER Virtualization Best Practices RedPaper
Restoring the Virtual I/O Server from a NIM server using a mksysb file
You can restore the Virtual I/O Server base code, applied fix packs, custom device drivers to support
disk subsystems, and some user-defined metadata from a mksysb image stored in a remote file system.
To restore the Virtual I/O Server from a mksysb image in a file system, complete the following tasks:
1. Define the mksysb file as a NIM resource, specifically, a NIM object, by running the nim command. To
view a detailed description of the nim command, see nim Command. For example:
nim -o define -t mksysb -a server=servername -a location=/export/ios_backup/filename.mksysb objectname
Where:
v servername is the name of the server that holds the NIM resource.
v filename is the name of the mksysb file.
v objectname is the name by which NIM registers and recognizes the mksysb file.
2. Define a Shared Product Object Tree (SPOT) resource for the mksysb file by running the nim
command. For example:
nim -o define -t spot -a server=servername -a location=/export/ios_backup/SPOT -a source=objectname SPOTname
Where:
v servername is the name of the server that holds the NIM resource.
v objectname is the name by which NIM registers and recognizes the mksysb file.
v SPOTname is the name of the SPOT resource that is created from the mksysb image defined in the previous step.
3. Install the Virtual I/O Server from the mksysb file using the smit command. For example:
smit nim_bosinst
4. Start the Virtual I/O Server logical partition. For instructions, see step 3, Boot the Virtual I/O Server,
of Installing the Virtual I/O Server using NIM.
5. If you restored the Virtual I/O Server to a different system from which it was backed up, you must
restore the user-defined virtual devices. For instructions, see Restoring user-defined virtual devices
manually on page 141.
Related information:
IBM System p Advanced POWER Virtualization Best Practices RedPaper
Using the NIM define operation
Defining a SPOT resource
Installing a client using NIM
User-defined virtual devices include metadata, such as virtual device mappings, that define the
relationship between the physical environment and the virtual environment. You can restore user-defined
virtual devices in one of the following ways:
v You can restore user-defined virtual devices by restoring volume groups and manually re-creating
virtual device mappings. Use this option in situations where you plan to restore the VIOS to a new or
different system. (For example, use this option in the event of a system failure or disaster.)
v You can restore user-defined virtual devices by using the viosbr command. Use this option in
situations where you plan to restore the configuration information to the same VIOS partition from
which it was backed up.
Related tasks:
Backing up user-defined virtual devices on page 131
You can back up user-defined virtual devices by saving the data to a location that is automatically backed
up when you use the backupios command to back up the Virtual I/O Server (VIOS). Alternatively, you
can back up user-defined virtual devices by using the viosbr command.
In addition to restoring the Virtual I/O Server (VIOS), you might need to restore user-defined virtual
devices (such as virtual device mappings). For example, in the event of a system failure, system
migration, or disaster, you need to restore both the VIOS and user-defined virtual devices. In this
situation, restore the volume groups by using the restorevgstruct command and manually re-create the
virtual device mappings by using the mkvdev command.
User-defined virtual devices include metadata, such as virtual device mappings, that define the
relationship between the physical environment and the virtual environment. In situations where you plan
to restore the VIOS to a new or different system, you need to back up both the VIOS and user-defined
virtual devices. (For example, in the event of a system failure or disaster, you must restore both the VIOS
and user-defined virtual devices.)
Before you start, restore the VIOS from tape, DVD, or a remote file system. For instructions, see one of
the following procedures:
v Restoring the Virtual I/O Server from tape on page 138
v Restoring the Virtual I/O Server from one or more DVDs on page 139
v Restoring the Virtual I/O Server from the HMC using a nim_resources.tar file on page 139
v Restoring the Virtual I/O Server from a NIM server using a mksysb file on page 139
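The restore steps end with the restorevgstruct command, run once for each volume group; a sketch, using the placeholder names referenced below:
restorevgstruct -vg volumegroup hdiskx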
Where:
v volumegroup is the name of a volume group (or storage pool) from step 1.
v hdiskx is the name of an empty disk from step 2.
4. Re-create the mappings between the virtual devices and physical devices by using the mkvdev
command. Re-create mappings for storage device mappings, shared Ethernet and Ethernet adapter
mappings, and virtual LAN settings. You can find mapping information in the file that you specified
in the tee command from the backup procedure. For example, /home/padmin/filename.
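For example, a virtual SCSI mapping might be re-created with a command of the following form, where the disk, virtual adapter, and device names are placeholders taken from your saved mapping file:
mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtscsi0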
Related tasks:
Restoring user-defined virtual devices by using the viosbr command
You can restore user-defined virtual devices by using the viosbr command. Use the viosbr command
when you plan to restore the information to the same Virtual I/O Server (VIOS) logical partition from
which it was backed up.
Related information:
mkvdev command
restorevgstruct command
tee command
IBM System p Advanced POWER Virtualization Best Practices RedPaper
You can restore user-defined virtual devices by using the viosbr command. Use the viosbr command
when you plan to restore the information to the same Virtual I/O Server (VIOS) logical partition from
which it was backed up.
The viosbr command restores the VIOS partition to the same state as when the backup was taken. With
the information available from the backup, the command performs the following actions:
v Sets the attribute values for physical devices, such as controllers, adapters, disks, optical devices, tape
devices, and Ethernet interfaces.
v Imports logical devices, such as volume groups or storage pools, logical volumes, file systems, and
repositories.
v Creates virtual devices and their corresponding mappings for devices like Etherchannel, Shared
Ethernet Adapter, virtual target devices, virtual fiber channel adapters, and paging space devices.
To restore all the possible devices and display a summary of deployed and nondeployed devices, run the
following command:
viosbr -restore -file /home/padmin/cfgbackups/myserverbackup.002.tar.gz
Restoring the Virtual I/O Server using IBM Tivoli Storage Manager
You can use the IBM Tivoli Storage Manager to restore the mksysb image of the Virtual I/O Server.
You can restore the Virtual I/O Server to the system from which it was backed up, or to a new or
different system (for example, in the event of a system failure or disaster). The following procedure
applies to restoring the Virtual I/O Server to the system from which it was backed up. First, you restore
the mksysb image to the Virtual I/O Server using the dsmc command on the Tivoli Storage Manager
client. However, restoring the mksysb image alone does not restore the Virtual I/O Server. You then need
to transfer the mksysb image to another system and convert it to an installable format.
To restore the Virtual I/O Server to a new or different system, use one of the following procedures:
v Restoring the Virtual I/O Server from tape on page 138
v Restoring the Virtual I/O Server from one or more DVDs on page 139
v Restoring the Virtual I/O Server from the HMC using a nim_resources.tar file on page 139
v Restoring the Virtual I/O Server from a NIM server using a mksysb file on page 139
Restriction: Interactive mode is not supported on the Virtual I/O Server. You can view session
information by typing dsmc on the Virtual I/O Server command line.
To restore the Virtual I/O Server using Tivoli Storage Manager, complete the following tasks:
1. Determine which file you want to restore by running the dsmc command to display the files that have
been backed up to the Tivoli Storage Manager server:
dsmc -query
2. Restore the mksysb image using the dsmc command. For example:
dsmc -restore sourcefilespec
Where sourcefilespec is the directory path to the location where you want to restore the mksysb image.
For example, /home/padmin/mksysb_image
3. Transfer the mksysb image to a server with a DVD-RW or CD-RW drive by running the following File
Transfer Protocol (FTP) commands:
a. Run the following command to make sure that the FTP server is started on the Virtual I/O Server:
startnetsvc ftp
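The remaining transfer steps are not shown in this excerpt. A minimal sketch of the FTP transfer,
assuming a target server named dvdserver and the image location used in the previous example (both
names are illustrative), might look like the following:
ftp dvdserver
binary
put /home/padmin/mksysb_image
quit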
The Virtual I/O Server includes a PCI Hot Plug Manager that is similar to the PCI Hot Plug Manager in
the AIX operating system. The PCI Hot Plug Manager allows you to hot plug PCI adapters into the
server and then activate them for the logical partition without having to reboot the system. Use the PCI
Hot Plug Manager for adding, identifying, or replacing PCI adapters in the system that are currently
assigned to the Virtual I/O Server.
Getting started
Prerequisites:
v If you are installing a new adapter, an empty system slot must be assigned to the Virtual I/O Server
logical partition. This task can be done through dynamic logical partitioning (DLPAR) operations.
If you are using a Hardware Management Console (HMC), you must also update the logical
partition profile of the Virtual I/O Server so that the new adapter is configured to the Virtual I/O
Server after you restart the system.
If you are using the Integrated Virtualization Manager, an empty slot is probably already assigned to
the Virtual I/O Server logical partition because all slots are assigned to the Virtual I/O Server by
default. You only need to assign an empty slot to the Virtual I/O Server logical partition if you
previously assigned all empty slots to other logical partitions.
v If you are installing a new adapter, ensure that you have the software required to support the new
adapter and determine whether there are any existing PTF prerequisites to install. To do this, use the
IBM Prerequisite Web site at http://www-912.ibm.com/e_dir/eServerPrereq.nsf
v If you need help determining the PCI slot in which to place a PCI adapter, see the PCI adapter
placement for machine types 82xx and 91xx or the PCI adapter placement for machine type 94xx.
Follow these steps to access the Virtual I/O Server PCI Hot Plug Manager:
1. If you are using the Integrated Virtualization Manager, connect to the command-line interface.
2. Use the diagmenu command to open the Virtual I/O Server diagnostic menu. The menus are similar
to the AIX diagnostic menus.
If you are installing a PCI Fibre Channel adapter, it is now ready to be attached to a SAN and have
LUNs assigned to the Virtual I/O Server for virtualization.
To replace a PCI adapter with the system power on in Virtual I/O Server, do the following steps:
1. From the PCI Hot Plug Manager, select Unconfigure a Device, then press Enter.
2. Press F4 (or Esc +4) to display the Device Names menu.
3. Select the adapter you are removing in the Device Names menu.
4. In the Keep Definition field, use the Tab key to answer Yes. In the Unconfigure Child Devices
field, use the Tab key again to answer Yes, then press Enter.
5. Press Enter to verify the information on the ARE YOU SURE screen. Successful unconfiguration is
indicated by the OK message displayed next to the Command field at the top of the screen.
6. Press F4 (or Esc +4) twice to return to the Hot Plug Manager.
7. Select replace/remove PCI Hot Plug adapter.
8. Select the slot that has the device to be removed from the system.
9. Select replace. A fast-blinking amber LED located at the back of the machine near the adapter
indicates that the slot has been identified.
10. Press Enter. This places the adapter in the action state, meaning that it is ready to be removed from
the system.
If the adapter supports physical volumes that are in use by a client logical partition, you can perform
steps on the client logical partition before unconfiguring the storage adapter. For instructions, see
Preparing the client logical partitions. For example, the adapter might be in use because the physical
volume was used to create a virtual target device, or it might be part of a volume group used to create a
virtual target device.
Follow these steps to unconfigure SCSI, SSA, and Fibre Channel storage adapters:
1. Connect to the Virtual I/O Server command-line interface.
2. Use the oem_setup_env command to close all applications that are using the adapter you are
unconfiguring.
3. Type lsslot -c pci to list all the hot plug slots in the system unit and display their characteristics.
4. Type lsdev -C to list the current state of all the devices in the system unit.
5. Type unmount to unmount any previously mounted file systems, directories, or files that use this adapter.
6. Type rmdev -l adapter -R to make the adapter unavailable.
Attention: Do not use the -d flag with the rmdev command for hot plug operations because this
action removes your configuration.
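For example, assuming that the adapter to be unconfigured is the Fibre Channel adapter fcs1 (the device
name is illustrative), the command sequence might look like the following:
lsslot -c pci
lsdev -C
rmdev -l fcs1 -R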
The virtual target devices must be in the Defined state before the Virtual I/O Server adapter can be
replaced. Do not remove the virtual devices permanently.
To prepare the client logical partitions so that you can unconfigure an adapter, complete the following
steps depending on your situation.
Table 39. Situations and steps for preparing the client logical partitions
Situation: You have redundant hardware on the Virtual I/O Server for the adapter.
Steps: No action is required on the client logical partition.

Situation: HMC-managed systems only: You have redundant Virtual I/O Server logical partitions that, in
conjunction with virtual client adapters, provide multiple paths to the physical volume on the client
logical partition.
Steps: No action is required on the client logical partition. However, path errors might be logged on the
client logical partition.

Situation: HMC-managed systems only: You have redundant Virtual I/O Server logical partitions that, in
conjunction with virtual client adapters, provide multiple physical volumes that are used to mirror a
volume group.
Steps: See the procedures for your client operating system. For example, for AIX, see Replacing a disk on
the Virtual I/O Server in the IBM System p Advanced POWER Virtualization Best Practices Redpaper. The
procedure for Linux is similar to this procedure for AIX.
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphdx/power_systems.htm .
Use any role other than View Only to perform this task.
The Integrated Virtualization Manager provides the following types of shutdown options for logical
partitions:
v Operating System (recommended)
v Delayed
v Immediate
The recommended shutdown method is to use the client operating system's shutdown command. Use the
immediate shutdown method only as a last resort because using this method causes an abnormal
shutdown which might result in data loss.
If you choose the Delayed shutdown method, then be aware of the following considerations:
v Shutting down the logical partitions is equivalent to pressing and holding the white control-panel
power button on a server that is not partitioned.
v Use this procedure only if you cannot successfully shut down the logical partitions through operating
system commands. When you use this procedure to shut down the selected logical partitions, the
logical partitions wait a predetermined amount of time to shut down. This allows the logical partitions
time to end jobs and write data to disks. If the logical partition is unable to shut down within the
predetermined amount of time, it ends abnormally, and the next restart might take a long time.
If you plan to shut down the entire managed system, shut down each client logical partition, then shut
down the Virtual I/O Server management partition.
To shut down a logical partition, complete the following steps in the Integrated Virtualization Manager:
1. In the navigation area, select View/Modify Partitions under Partition Management. The
View/Modify Partitions page is displayed.
2. Select the logical partition that you want to shut down.
3. From the Tasks menu, click Shutdown. The Shutdown Partitions page is displayed.
4. Select the shutdown type.
Viewing information and statistics about the Virtual I/O Server, the
server, and virtual resources
You can view information and statistics about the Virtual I/O Server, the server, and virtual resources to
help you manage and monitor the system, and troubleshoot problems.
The following table lists the information and statistics available on the Virtual I/O Server, as well as the
commands you need to run to view the information and statistics.
Table 40. Information and associated commands for the Virtual I/O Server
Information to view: Statistics about kernel threads, virtual memory, disks, traps, and processor activity.
Command: vmstat

Information to view: Statistics for a Fibre Channel device driver.
Command: fcstat

Information to view: A summary of virtual memory usage.
Command: svmon

Information to view: Information about the Virtual I/O Server and the server, such as the server model,
machine ID, Virtual I/O Server logical partition name and ID, and the LAN network number.
Command: uname

Information to view: Generic and device-specific statistics for an Ethernet driver or device, including the
following information for a Shared Ethernet Adapter:
v Shared Ethernet Adapter statistics:
Number of real and virtual adapters (If you are using Shared Ethernet Adapter failover, this number
does not include the control channel adapter)
Shared Ethernet Adapter flags
VLAN IDs
Information about real and virtual adapters
v Shared Ethernet Adapter failover statistics:
High availability statistics
Packet types
State of the Shared Ethernet Adapter
Bridging mode
v GARP VLAN Registration Protocol (GVRP) statistics:
Bridge Protocol Data Unit (BPDU) statistics
Generic Attribute Registration Protocol (GARP) statistics
GARP VLAN Registration Protocol (GVRP) statistics
v Listing of the individual adapter statistics for the adapters associated with the Shared Ethernet Adapter
Command: entstat
The vmstat, fcstat, svmon, and uname commands are available with Virtual I/O Server version 1.5 or later.
To update the Virtual I/O Server, see Updating the Virtual I/O Server on page 127.
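For example, the following commands display processor and virtual memory statistics, statistics for a
Fibre Channel adapter, and Shared Ethernet Adapter statistics. The device names fcs0 and ent3 are
illustrative:
vmstat 2 5
fcstat fcs0
entstat -all ent3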
Error logs
AIX and Linux client logical partitions log errors against failing I/O operations. Hardware errors on the
client logical partitions associated with virtual devices usually have corresponding errors logged on the
server. However, if the failure is within the client logical partition, there will not be errors on the server.
Also, on Linux client logical partitions, if the algorithm for retrying SCSI temporary errors is different
from the algorithm used by AIX, the errors might not be recorded on the server.
With Virtual I/O Server V1.3.0.1 (fix pack 8.1), you can install and configure the IBM Tivoli Monitoring
System Edition for System p agent on the Virtual I/O Server. With Tivoli Monitoring System Edition for
System p, you can monitor the health and availability of multiple System p servers (including the Virtual
I/O Server) from the Tivoli Enterprise Portal. Tivoli Monitoring System Edition for System p gathers data
from the Virtual I/O Server, including data about physical volumes, logical volumes, storage pools,
storage mappings, network mappings, real memory, processor resources, mounted file system sizes, and
so on. From the Tivoli Enterprise Portal, you can view a graphical representation of the data, use
predefined thresholds to alert you on key metrics, and resolve issues based on recommendations
provided by the Expert Advice feature of Tivoli Monitoring.
Beginning with version 1.3 of the Virtual I/O Server, you can set security options that provide tighter
security controls over your Virtual I/O Server environment. These options allow you to select a level of
system security hardening and specify the settings allowable within that level. The Virtual I/O Server
security feature also allows you to control network traffic by enabling the Virtual I/O Server firewall. You
can configure these options using the viosecure command. To help you set up system security when you
initially install the Virtual I/O Server, the Virtual I/O Server provides the configuration assistance menu.
You can access the configuration assistance menu by running the cfgassist command.
Using the viosecure command, you can set, change, and view current security settings. By default, no
Virtual I/O Server security levels are set. You must run the viosecure command to change the settings.
The system security hardening feature protects all elements of a system by tightening security or
implementing a higher level of security. Although hundreds of security configurations are possible with
the Virtual I/O Server security settings, you can easily implement security controls by specifying a high,
medium, or low security level.
Using the system security hardening features provided by Virtual I/O Server, you can specify values such
as the following:
v Password policy settings
v usrck, pwdck, grpck, and sysck actions
v Default file-creation settings
v Settings included in the crontab command
Using the Virtual I/O Server firewall, you can enforce limitations on IP activity in your virtual
environment. With this feature, you can specify which ports and network services are allowed access to
the Virtual I/O Server system. For example, if you need to restrict login activity from an unauthorized
port, you can specify the port name or number and specify deny to remove it from the allow list. You can
also restrict a specific IP address.
You can use the Open Source Secure Sockets Layer (OpenSSL) and Portable Secure Shell (OpenSSH)
software to connect to the Virtual I/O Server using secure connections. For more information about
OpenSSL and OpenSSH, see the OpenSSL Project and Portable SSH Web sites.
To connect to the Virtual I/O Server using OpenSSH, complete the following tasks:
1. If you are using a version of Virtual I/O Server prior to version 1.3.0, then install OpenSSH before
you connect. For instructions, see Downloading, installing, and updating OpenSSH and OpenSSL
on page 151.
2. Connect to the Virtual I/O Server. If you are using version 1.3.0 or later, then connect using either an
interactive or noninteractive shell. If you are using a version prior to 1.3.0, then connect using only an
interactive shell.
v To connect using an interactive shell, type the following command from the command line of a
remote system:
ssh username@vioshostname
where username is your user name for the Virtual I/O Server and vioshostname is the name of the
Virtual I/O Server.
v To connect using a noninteractive shell, run the following command:
ssh username@vioshostname command
Where:
username is your user name for the Virtual I/O Server.
vioshostname is the name of the Virtual I/O Server.
command is the command that you want to run. For example, ioscli lsmap -all.
Note: When using a noninteractive shell, remember to use the full command form (including the
ioscli prefix) for all Virtual I/O Server commands.
3. Authenticate SSH. If you are using version 1.3.0 or later, then authenticate using either passwords or
keys. If you are using a version prior to 1.3.0, then authenticate using only passwords.
v To authenticate using passwords, enter your user name and password when prompted by the SSH
client.
v To authenticate using keys, perform the following steps on the SSH client's operating system:
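The key setup steps themselves are not included in this excerpt. A minimal sketch, assuming an RSA
key pair, the OpenSSH default file names, and an authorized keys file named .ssh/authorized_keys2 in
the padmin home directory (these names can vary by OpenSSH level), is as follows:
a. Generate the key pair on the SSH client:
ssh-keygen -t rsa
b. Copy the public key file to the Virtual I/O Server:
scp ~/.ssh/id_rsa.pub padmin@vioshostname:
c. Append the public key to the authorized keys file on the Virtual I/O Server:
ssh padmin@vioshostname 'cat id_rsa.pub >> .ssh/authorized_keys2'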
Where:
public_key_file is the public key file that is generated in the previous step. For example,
id_rsa.pub.
username is your user name for the Virtual I/O Server.
vioshostname is the name of the Virtual I/O Server.
The Virtual I/O Server might not include the latest version of OpenSSH or OpenSSL with each release. In
addition, there might be OpenSSH or OpenSSL updates released in between Virtual I/O Server releases.
In these situations, you can update OpenSSH and OpenSSL on the Virtual I/O Server by downloading
and installing OpenSSH and OpenSSL. For instructions, see Downloading, installing, and updating
OpenSSH and OpenSSL.
Downloading, installing, and updating OpenSSH and OpenSSL
OpenSSH and OpenSSL might need to be updated on your Virtual I/O Server if the Virtual I/O Server
did not include the latest version of OpenSSH or OpenSSL, or if there were OpenSSH or OpenSSL
updates released in between Virtual I/O Server releases. In these situations, you can update OpenSSH
and OpenSSL on the Virtual I/O Server by downloading and installing OpenSSH and OpenSSL using the
following procedure.
For more information about OpenSSL and OpenSSH, see the OpenSSL Project and Portable SSH Web
sites.
Note: Alternatively, you can install the software from the AIX Expansion Pack.
Restrictions:
v The sftp command is not supported on versions of Virtual I/O Server earlier than 1.3.
v Noninteractive shells are not supported using OpenSSH with the Virtual I/O Server versions earlier
than 1.3.
To implement system security hardening rules, you can use the viosecure command to specify a security
level of high, medium, or low. A default set of rules is defined for each level. You can also set a level of
default, which returns the system to the system standard settings and removes any level settings that
have been applied.
The low level security settings are a subset of the medium level security settings, which are a subset of
the high level security settings. Therefore, the high level is the most restrictive and provides the greatest
level of control. You can apply all of the rules for a specified level or select which rules to activate for
your environment. By default, no Virtual I/O Server security levels are set; you must run the viosecure
command to modify the settings.
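For example, the following commands apply all of the rules for the high security level and then display
the current settings. This is a sketch; review the rules for your environment before applying them:
viosecure -level high -apply
viosecure -view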
The Virtual I/O Server firewall is not enabled by default. To enable the Virtual I/O Server firewall, you
must turn it on by using the viosecure command with the -firewall option. When you enable it, the
default setting is activated, which allows access for the following IP services:
v ftp
v ftp-data
v ssh
v web
v https
v rmc
v cimom
Note: The firewall settings are contained in the file viosecure.ctl in the /home/ios/security directory. If
for some reason the viosecure.ctl file does not exist when you run the command to enable the firewall,
you receive an error. You can use the -force option to enable the standard firewall default ports.
You can use the default setting or configure the firewall settings to meet the needs of your environment
by specifying which ports or port services to allow. You can also turn off the firewall to deactivate the
settings.
Complete the following steps at the Virtual I/O Server command line to configure the Virtual I/O Server
firewall settings:
1. Enable the Virtual I/O Server firewall by running the following command:
viosecure -firewall on
2. Specify the ports to allow or deny by using the viosecure command with the -firewall allow or -firewall deny option.
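The allow and deny forms are not shown in this excerpt. A sketch, assuming that you want to allow
Secure Shell traffic on port 22, deny Telnet traffic on port 23, and then display the resulting settings
(the port choices are illustrative), is as follows:
viosecure -firewall allow -port 22
viosecure -firewall deny -port 23
viosecure -firewall view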
Before you start, ensure that the Virtual I/O Server is at version 1.5 or later. To update the Virtual I/O
Server, see Updating the Virtual I/O Server on page 127.
Kerberos is a network authentication protocol that provides authentication for client and server
applications by using secret-key cryptography. It negotiates authenticated, and optionally encrypted,
communications between two points anywhere on the Internet. Kerberos authentication generally works
as follows:
1. A Kerberos client sends a request for a ticket to the Key Distribution Center (KDC).
2. The KDC creates a ticket-granting ticket (TGT) for the client and encrypts it using the client's
password as the key.
3. The KDC returns the encrypted TGT to the client.
4. The client attempts to decrypt the TGT, using its password.
5. If the client successfully decrypts the TGT (for example, if the client gives the correct password), the
client keeps the decrypted TGT. The TGT indicates proof of the client's identity.
To configure a Kerberos client on the Virtual I/O Server, run the following command:
mkkrb5clnt -c KDC_server -r realm_name -s Kerberos_server -d Kerberos_client
Where:
v KDC_server is the name of the KDC server.
v realm_name is the name of the realm to which you want to configure the Kerberos client.
v Kerberos_server is the fully qualified host name of the Kerberos server.
v Kerberos_client is the domain name of the Kerberos client.
For example:
mkkrb5clnt -c bob.kerberso.com -r KERBER.COM -s bob.kerberso.com -d testbox.com
In this example, you configure the Kerberos client, testbox.com, to the Kerberos server, bob.kerberso.com.
The KDC is running on bob.kerberso.com.
When the Virtual I/O Server is installed, the only user type that is active is the prime administrator
(padmin). The prime administrator can create additional user IDs with types of system administrator,
service representative, or development engineer.
Note: You cannot create the prime administrator (padmin) user ID. It is automatically created and
enabled after the Virtual I/O Server is installed.
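For example, the prime administrator can create a system administrator user ID with the mkuser
command. The user name below is illustrative, and you are typically prompted to set an initial password
for the new user:
mkuser sysadmin1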
You can use the IBM Tivoli Identity Manager to automate the management of Virtual I/O Server users.
Tivoli Identity Manager provides a Virtual I/O Server adapter that acts as an interface between the
Virtual I/O Server and the Tivoli Identity Manager Server. The adapter acts as a trusted virtual
administrator on the Virtual I/O Server, performing tasks like the following:
v Creating a user ID to authorize access to the Virtual I/O Server.
v Modifying an existing user ID to access the Virtual I/O Server.
v Removing access from a user ID. This deletes the user ID from the Virtual I/O Server.
v Suspending a user account by temporarily deactivating access to the Virtual I/O Server.
v Restoring a user account by reactivating access to the Virtual I/O Server.
v Changing a user account password on the Virtual I/O Server.
v Reconciling the user information of all current users on the Virtual I/O Server.
v Reconciling the user information of a particular user account on the Virtual I/O Server by performing
a lookup.
For more information, see the IBM Tivoli Identity Manager product manuals.
This section includes information about troubleshooting the Virtual I/O Server. For information about
troubleshooting the Integrated Virtualization Manager, see Troubleshooting the Integrated Virtualization
Manager.
If you are still having problems after using the diagmenu command, contact your next level of support
and ask for assistance.
Refer to the AIX fast-path problem-isolation documentation in the Service provider information
because, in certain cases, the diagnostic procedures that it describes are not available from the diagmenu
command menu.
When you configure a Shared Ethernet Adapter, the configuration can fail with the following error:
Method error (/usr/lib/methods/cfgsea):
0514-040 Error initializing a device into the kernel.
Important: None of the interfaces of the adapters must be listed in the output. If any interface name
(for example, en0) is listed in the output, detach it as follows:
chdev -dev interface_name -attr state=detach
You might want to perform this step from a console connection because it is possible that detaching
this interface will end your network connection to the Virtual I/O Server.
3. Verify that the virtual adapters that are used for data are trunk adapters by running the following
command:
entstat -all entX | grep Trunk
Note:
v The trunk adapter does not apply to the virtual adapter that is used as the control channel in a
Shared Ethernet Adapter Failover configuration.
v If any of the virtual adapters that are used for data are not trunk adapters, you need to enable
them to access external networks from the HMC.
4. Verify that the physical device and the virtual adapters in the Shared Ethernet Adapter are in
agreement on the checksum offload setting:
a. Determine the checksum offload setting on physical device by running the following command:
lsdev -dev device_name -attr chksum_offload
where device_name is the name of the physical device. For example, ent0.
b. If chksum_offload is set to yes, enable checksum offload for all of the virtual adapters in the
Shared Ethernet Adapter by running the following command:
chdev -dev device_name -attr chksum_offload=yes
where device_name is the name of a virtual adapter in the Shared Ethernet Adapter.
d. If there is no output, the physical device does not support checksum offload and therefore does
not have the attribute. To resolve the error, disable checksum offload for all of the virtual adapters
in the Shared Ethernet Adapter by running the following command:
chdev -dev device_name -attr chksum_offload=no
where device_name is the name of a virtual adapter in the Shared Ethernet Adapter.
5. If the real adapter is a Host Ethernet Adapter port, also known as a Logical Integrated Virtual
Ethernet adapter port, make sure that the Virtual I/O Server has been configured as the promiscuous
logical partition for the physical port of the logical Integrated Virtual Ethernet adapter from the HMC.
If you installed OpenSSH on a level of the Virtual I/O Server prior to 1.3, and then upgraded to 1.3 or
later, noninteractive shells might not work because the SSH configuration file needs modification.
To enable noninteractive shells in Virtual I/O Server 1.3 or later, run the following command from the
SSH client:
ioscli startnetsvc ssh
Note: You can run the startnetsvc command even when the SSH service is already running. In this
situation, the command appears to fail, but it is successful.
Occasionally, the disk that is needed to install the client logical partition cannot be located. In this
situation, if the client is already installed, start the client logical partition. Ensure that you have the latest
levels of the software and firmware. Then ensure that the Slot number of the virtual SCSI server adapter
matches the Remote partition virtual slot number of the virtual SCSI client adapter.
1. Ensure that you have the latest levels of the Hardware Management Console, firmware, and Virtual
I/O Server. Follow these steps:
a. To check whether you have the latest level of the HMC, see the Installing and configuring the
Hardware Management Console. To view the PDF file of Installing and configuring the Hardware
Management Console, approximately 3 MB in size, see
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphai/iphai.pdf .
b. Ensure that you have the latest firmware.
c. To check whether you have the latest level of the Virtual I/O Server, see Updating the Virtual
I/O Server on page 127.
2. Ensure the server virtual SCSI adapter slot number is mapped correctly to the client logical partition
remote slot number:
a. In the navigation area, expand Systems Management > Servers and click the server on which the
Virtual I/O Server logical partition is located.
b. In the contents area, select the Virtual I/O Server logical partition.
c. Click Tasks and select Properties.
d. Click the Virtual Adapters tab.
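The intermediate steps are not shown in this excerpt. On the Virtual I/O Server, you can display the
virtual SCSI mappings that produce output like the following by running the lsmap command, for
example (vhost0 is illustrative):
lsmap -all
lsmap -vadapter vhost0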
VTD vhdisk0
LUN 0x8100000000000000
Backing device hdisk5
Physloc U787B.001.DNW025F-P1-C5-T1-W5005076300C10899-L536F000000000000
Note: If the client logical partition is not yet installed, the Client Partition ID is 0x00000000.
The slot number of the server virtual SCSI adapter is displayed in the Physloc column. The digits
following the -C specify the slot number. In this case, the slot number is 3.
8. From the Virtual I/O Server command line, type lsdev -virtual. You see results similar to the
following:
name status description
If your client partition is using virtual I/O resources, check the Service Focal Point and Virtual I/O
Server first to ensure that the problem is not on the server.
On client partitions running the current level of AIX, when a hardware error is logged on the server and
a corresponding error is logged on the client partition, the Virtual I/O Server provides a correlation error
message in the error report.
Description
Underlying transport error
Probable Causes
PROCESSOR
Failure Causes
PROCESSOR
Recommended Actions
PERFORM PROBLEM DETERMINATION PROCEDURES
Detail Data
Error Log Type
01
Reserve
00
Error Number
0006
RC
0000 0002
VSCSI Pointer
Compare the LABEL, IDENTIFIER, and Error Number values from your error report to the values in the
following table to help identify the problem and determine a resolution.
The Virtual I/O Server version 2.1.2.0 provides commands that you can use to capture performance data.
You can then convert this data into a format and file for diagnostic use by the IBM Electronic Service
Agent.
You can use the cfgassist command to manage the various types of data recording that the topas and
topasrec commands provide. You can use the wkldout command to convert recording data from binary
format to ASCII text format. You also can configure the performance management agent to gather data
about performance of the Virtual I/O Server.
With the topasrec command, the Virtual I/O Server supports local, central electronics complex (CEC), and
cluster recording capabilities. These recordings can be either persistent or normal. Persistent recordings
are recordings that run on the Virtual I/O Server and continue to run after the Virtual I/O Server
reboots. Normal recordings are recordings that run for a specified time interval. The recording data files
that are generated are stored in the /home/ios/perf/topas directory path.
Local recordings gather data about the Virtual I/O Server. CEC recordings gather data about any AIX
logical partitions that are running on the same CEC as the Virtual I/O Server. The data collected consists
of dedicated and shared logical partition data and includes a set of aggregated values that provide an
overview of the partition set. Cluster recordings gather data from a list of hosts that are specified in a
cluster configuration file.
See the Virtual I/O Server and Integrated Virtualization Manager commands. To view the PDF file of
Virtual I/O Server and Integrated Virtualization Manager commands, approximately 4 MB in size, see
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphcg/iphcg.pdf .
In the following tables, the term attribute refers to an option that you can add to a Virtual I/O Server
command. The term variable refers to an option that you can specify in a configuration file for Tivoli
Storage Manager or Tivoli Usage and Accounting Manager.
Related information:
IBM Tivoli Application Dependency Discovery Manager Information Center
IBM Tivoli Identity Manager
IBM Tivoli Monitoring version 6.2.1 documentation
IBM Tivoli Monitoring Virtual I/O Server Premium Agent User's Guide
IBM Tivoli Storage Manager
IBM Tivoli Usage and Accounting Manager Information Center
IBM TotalStorage Productivity Center Information Center
BPDU refers to all protocol packets that are exchanged between the switch and the Shared Ethernet
Adapter. The only bridge protocol currently available with the Shared Ethernet Adapter is GARP. GARP
is a generic protocol used to exchange attribute information between two entities. The only type of GARP
currently available on the Shared Ethernet Adapter is GVRP. With GVRP, the attributes exchanged are
VLAN values.
The GARP statistics include those BPDU packets sent or received that are of type GARP.
Table 49. Descriptions of GARP statistics
GARP statistic Description
Transmit
Packets
Number of packets sent.
Failed packets
Number of packets that could not be sent (for
example, packets that could not be sent because
there was no memory to allocate the outgoing
packet).
Leave All Events
Packets sent with event type Leave All.
Join Empty Events
Packets sent with event type Join Empty
Join In Events
Packets sent with event type Join In
Leave Empty Events
Packets sent with event type Leave Empty
Leave In Events
Packets sent with event type Leave In
Empty Events
Packets sent with event type Empty
Receive
Packets
Number of packets received
Unprocessed Packets
Packets that could not be processed because the
protocol was not running at the time.
Packets with Unknown Attr Type:
Packets with an unsupported attribute type. A
high number is typical because the switch might
be exchanging other GARP protocol packets that
the Shared Ethernet Adapter does not support.
For example, GARP Multicast Registration
Protocol (GMRP).
Leave All Events
Packets received with event type Leave All
Join Empty Events
Packets received with event type Join Empty
Join In Events
Packets received with event type Join In
Leave Empty Events
Packets received with event type Leave Empty
Leave In Events
Packets received with event type Leave In
Empty Events
Packets received with event type Empty
The GVRP statistics include those GARP packets sent or received that are exchanging VLAN information
using GVRP.
Table 50. Descriptions of GVRP statistics
GVRP statistic Description
Transmit
Packets
Number of packets sent
Failed packets
Number of packets that could not be sent (for
example, packets that could not be sent because
there was no memory to allocate the outgoing
packet).
Leave All Events
Packets sent with event type Leave All.
Join Empty Events
Packets sent with event type Join Empty
Join In Events
Packets sent with event type Join In
Leave Empty Events
Packets sent with event type Leave Empty
Leave In Events
Packets sent with event type Leave In
Empty Events
Packets sent with event type Empty
Example statistics
Running the entstat -all command returns results similar to the following:
--------------------------------------------------------------
Statistics for adapters in the Shared Ethernet Adapter ent3
--------------------------------------------------------------
Number of adapters: 2
SEA Flags: 00000009
< THREAD >
< GVRP >
VLAN IDs :
ent2: 1
Real Side Statistics:
Packets received: 0
Packets bridged: 0
--------------------------------------------------------------
Bridge Protocol Data Units (BPDU) Statistics:
---------------------------------------------------------------
General Attribute Registration Protocol (GARP) Statistics:
---------------------------------------------------------------
GARP VLAN Registration Protocol (GVRP) Statistics:
You can use several of the Virtual I/O Server commands, including chdev, mkvdev, and cfglnagg, to
change device or network attributes. This section defines attributes that can be modified.
Ethernet attributes
Attribute Description
Maximum Transmission Unit (mtu)
Specifies the maximum transmission unit (MTU). This value can be any number from 60
through 65535, but it is media dependent.
Interface State (state)
detach Removes an interface from the network interface list. If the last interface is
detached, the network interface driver code is unloaded. To change the
interface route of an attached interface, that interface must be detached and
added again with the chdev -dev Interface -attr state=detach command.
down Marks an interface as inactive, which keeps the system from trying to
transmit messages through that interface. Routes that use the interface,
however, are not automatically disabled. (chdev -dev Interface -attr state=down)
up Marks an interface as active. This parameter is used automatically when
setting the first address for an interface. It can also be used to enable an
interface after the chdev -dev Interface -attr state=up command.
Network Mask (netmask) Specifies how much of the address to reserve for subdividing networks into
subnetworks.
The mask includes both the network part of the local address and the subnet part,
which is taken from the host field of the address. The mask can be specified as a single hexadecimal
number beginning with 0x, or in standard Internet dotted-decimal notation.
In the 32-bit address, the mask contains bits with a value of 1 for the bit positions
reserved for the network and subnet parts, and a bit with the value of 0 for the bit
positions that specify the host. The mask contains the standard network portion, and
the subnet segment is contiguous with the network segment.
Attribute Description
PVID (pvid) Specifies the PVID to use for the Shared Ethernet Adapter.
PVID adapter Specifies the default virtual adapter to use for non-VLAN tagged packets.
(pvid_adapter)
Physical adapter Specifies the physical adapter associated with the Shared Ethernet Adapter.
(real_adapter)
Thread (thread) Threaded mode should be used when virtual SCSI will be run on the same Virtual I/O
Server logical partition as Shared Ethernet Adapter. Threaded mode helps ensure that
virtual SCSI and the Shared Ethernet Adapter can share the processor resource
appropriately. However, threading adds more instruction path length, which uses
additional processor cycles. If the Virtual I/O Server logical partition will be dedicated
to running shared Ethernet devices (and associated virtual Ethernet devices) only, the
adapters should be configured with threading disabled.
You can enable or disable threading using the -attr thread option of the mkvdev
command. To enable threading, use the -attr thread=1 option. To disable threading,
use the -attr thread=0 option. For example, the following command disables
threading for Shared Ethernet Adapter ent1:
mkvdev -sea ent1 -vadapter ent5 -default ent5 -defaultid 1 -attr thread=0
Virtual adapters Lists the virtual Ethernet adapters associated with the Shared Ethernet Adapter.
(virt_adapter)
TCP segmentation offload Enables TCP largesend capability (also known as segmentation offload) from logical
(largesend) partitions to the physical adapter. The physical adapter must be enabled for TCP
largesend for the segmentation offload from the logical partition to the Shared Ethernet
Adapter to work. Also, the logical partition must be capable of performing a largesend
operation. On AIX, largesend can be enabled on a logical partition using the ifconfig
command.
You can enable or disable TCP largesend using the -a largesend option of the chdev
command. To enable it, use the '-a largesend=1' option. To disable it, use the '-a
largesend=0' option.
For example, the following command enables largesend for Shared Ethernet Adapter
ent1:
chdev -l ent1 -a largesend=1
You can modify the following Shared Ethernet Adapter failover attributes.
Attribute Description
High availability mode Determines whether the devices participate in a failover setup. The default is disabled.
(ha_mode) Typically, a Shared Ethernet Adapter in a failover setup is operating in auto mode, and
the primary adapter is decided based on which adapter has the highest priority
(lowest numerical value). A shared Ethernet device can be forced into the standby
mode, where it will behave as the backup device as long as it can detect the presence
of a functional primary.
Control Channel (ctl_chan) Sets the virtual Ethernet device that is required for a Shared Ethernet Adapter in a
failover setup so that it can communicate with the other adapter. There is no default
value for this attribute, and it is required when the ha_mode is not set to disabled.
Internet address to ping Optional attribute that can be specified for a Shared Ethernet Adapter that has been
(netaddr) configured in a failover setup. When this attribute is specified, a shared Ethernet
device will periodically ping the IP address to verify connectivity (in addition to
checking for link status of the physical devices). If it detects a loss of connectivity to
the specified ping host, it will initiate a failover to the backup Shared Ethernet
Adapter. This attribute is not supported when you use a Shared Ethernet Adapter with
a Host Ethernet Adapter (or Integrated Virtual Ethernet).
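For example, a Shared Ethernet Adapter that uses these failover attributes might be created with the
following command. The adapter names and the ping address are illustrative:
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3 netaddr=9.3.1.1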
INET attributes
Attribute Description
Host Name (hostname) Specifies the host name that you want to assign to the current machine.
When specifying the host name, use ASCII characters, preferably alphanumeric only.
Do not use a period in the host name. Avoid using hexadecimal or decimal values as
the first character (for example 3Comm, where 3C might be interpreted as a hexadecimal
character). For compatibility with earlier hosts, use an unqualified host name of fewer
than 32 characters.
If the host uses a domain name server for name resolution, the host name must
contain the full domain name.
In a hierarchical network, certain hosts are designated as name servers that resolve
names into Internet addresses for other hosts. This arrangement has two advantages
over the flat name space: resources of each host on the network are not consumed in
resolving names, and the person who manages the system does not need to maintain
name-resolution files on each machine on the network. The set of names managed by a
single name server is known as its zone of authority.
Gateway (gateway) Identifies the gateway to which packets are addressed. The Gateway parameter can be
specified either by symbolic name or numeric address.
Adapter attributes
You can modify the following adapter attributes. The attribute behavior can vary, based on the adapter
and driver you have.
Attribute Description
Link Aggregation adapters The adapters that currently make up the Link Aggregation device. If you want to
(adapter_names) modify these adapters, modify this attribute and select all the adapters that should
belong to the Link Aggregation device. When you use this attribute to select all of the
adapters that should belong to the Link Aggregation device, its interface must not
have an IP address configured.
Mode (mode) The type of channel that is configured. In standard mode, the channel sends the
packets to the adapter based on an algorithm (the value used for this calculation is
determined by the Hash Mode attribute). In round_robin mode, the channel gives one
packet to each adapter before repeating the loop. The default mode is standard.
Using the 802.3ad mode, the Link Aggregation Control Protocol (LACP) negotiates the
adapters in the Link Aggregation device with an LACP-enabled switch.
If the Hash Mode attribute is set to anything other than the default, this attribute must
be set to standard or 802.3ad. Otherwise, the configuration of the Link Aggregation
device will fail.
Hash Mode (hash_mode) If operating under standard or IEEE 802.3ad mode, the hash mode attribute
determines how the outgoing adapter for each packet is chosen. Following are the
different modes:
v default: uses the destination IP address to determine the outgoing adapter.
v src_port: uses the source TCP or UDP port for that connection.
v dst_port: uses the destination TCP or UDP port for that connection.
v src_dst_port: uses both the source and destination TCP or UDP ports for that
connection to determine the outgoing adapter.
You cannot use round-robin mode with any hash mode value other than default. The
Link Aggregation device configuration will fail if you attempt this combination.
If the packet is not TCP or UDP, it uses the default hashing mode (destination IP
address).
Using TCP or UDP ports for hashing can make better use of the adapters in the Link
Aggregation device, because connections to the same destination IP address can be sent
over different adapters (while still retaining the order of the packets), thus increasing
the bandwidth of the Link Aggregation device.
Internet Address to Ping This field is optional. The IP address that the Link Aggregation device should ping to
(netaddr) verify that the network is up. This is only valid when there is a backup adapter and
when there are one or more adapters in the Link Aggregation device. An address of
zero (or all zeros) is ignored and disables the sending of ping packets if a valid
address was previously defined. The default is to leave this field blank.
Retry Timeout (retry_time) This field is optional. It controls how often the Link Aggregation device sends out a
ping packet to poll the current adapter for link status. This is valid only when the Link
Aggregation device has one or more adapters, a backup adapter is defined, and the
Internet Address to Ping field contains a non-zero address. Specify the timeout value
in seconds. The range of valid values is 1 to 100 seconds. The default value is 1
second.
Number of Retries This field is optional. It specifies the number of lost ping packets before the Link
(num_retries) Aggregation device switches adapters. This is valid only when the Link Aggregation
device has one or more adapters, a backup adapter is defined, and the Internet
Address to Ping field contains a non-zero address. The range of valid values is 2 to
100 retries. The default value is 3.
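For example, a Link Aggregation device that uses several of these attributes might be created with the
following command. This is a sketch; the adapter names are illustrative, and the 802.3ad mode assumes
an LACP-capable switch:
mkvdev -lnagg ent0,ent1 -attr mode=8023ad hash_mode=src_dst_port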
VLAN attributes
Attribute Value
VLAN Tag ID (vlan_tag_id) The unique ID associated with the VLAN driver. You can specify from 1 to 4094.
Base Adapter (base_adapter) The network adapter to which the VLAN device driver is connected.
To gather network statistics at a client level, enable advanced accounting on the Shared Ethernet Adapter
to provide more information about its network traffic. To enable client statistics, set the Shared Ethernet
Adapter accounting attribute to enabled (the default value is disabled). When advanced accounting is
enabled, the Shared Ethernet Adapter keeps track of the hardware (MAC) addresses of all of the packets
it receives from the LPAR clients, and increments packet and byte counts for each client independently.
After advanced accounting is enabled on the Shared Ethernet Adapter, you can generate a report to view
per-client statistics by running the seastat command.
Note: Advanced accounting must be enabled on the Shared Ethernet Adapter before you can use the
seastat command to print any statistics.
To enable advanced accounting on the Shared Ethernet Adapter, enter the following command:
chdev -dev <sea device name> -attr accounting=enabled
The following command clears all of the per-client Shared Ethernet Adapter statistics that have been
gathered:
seastat -d <sea device name> -c
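To display the per-client statistics after advanced accounting has gathered them, run the seastat
command without the clear option, for example:
seastat -d <sea device name>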
Statistic descriptions
Table 51. Descriptions of Shared Ethernet Adapter failover statistics
Statistic Description
High availability
Control Channel PVID
Port VLAN ID of the virtual Ethernet adapter
used as the control channel.
Control Packets in
Number of packets received on the control
channel.
Control Packets out
Number of packets sent on the control channel.
Packet types
Keep-Alive Packets
Number of keep-alive packets received on the
control channel. Keep-alive packets are received
on the backup Shared Ethernet Adapter while
the primary Shared Ethernet Adapter is active.
Recovery Packets
Number of recovery packets received on the
control channel. Recovery packets are sent by
the primary Shared Ethernet Adapter when it
recovers from a failure and is ready to be active
again.
Notify Packets
Number of notify packets received on the
control channel. Notify packets are sent by the
backup Shared Ethernet Adapter when it detects
that the primary Shared Ethernet Adapter has
recovered.
Limbo Packets
Number of limbo packets received on the
control channel. Limbo packets are sent by the
primary Shared Ethernet Adapter when it
detects that its physical network is not
operational, or when it cannot ping the specified
remote host (to inform the backup that it needs
to become active).
Example statistics
Running the entstat -all command returns results similar to the following:
ETHERNET STATISTICS (ent8) :
Device Type: Shared Ethernet Adapter
Hardware Address: 00:0d:60:0c:05:00
Elapsed Time: 3 days 20 hours 34 minutes 26 seconds
General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 0
Driver Flags: Up Broadcast Running
Simplex 64BitSupport ChecksumOffLoad
DataRateSet
--------------------------------------------------------------
Statistics for adapters in the Shared Ethernet Adapter ent8
--------------------------------------------------------------
Number of adapters: 2
SEA Flags: 00000001
< THREAD >
VLAN IDs :
ent7: 1
Real Side Statistics:
Packets received: 5701344
Packets bridged: 5673198
Packets consumed: 3963314
Packets fragmented: 0
Packets transmitted: 28685
Packets dropped: 0
Virtual Side Statistics:
Packets received: 0
Packets bridged: 0
Packets consumed: 0
Packets fragmented: 0
Packets transmitted: 5673253
Packets dropped: 0
Other Statistics:
Output packets generated: 28685
Output packets dropped: 0
Device output failures: 0
Memory allocation failures: 0
ICMP error packets sent: 0
Non IP packets larger than MTU: 0
Thread queue overflow packets: 0
High Availability Statistics:
Control Channel PVID: 99
Control Packets in: 0
Control Packets out: 818825
Type of Packets Received:
Keep-Alive Packets: 0
Recovery Packets: 0
Notify Packets: 0
Limbo Packets: 0
State: LIMBO
Bridge Mode: All
Number of Times Server became Backup: 0
Number of Times Server became Primary: 0
High Availability Mode: Auto
Priority: 1
--------------------------------------------------------------
Real Adapter: ent2
General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 200
Driver Flags: Up Broadcast Running
Simplex Promiscuous AlternateAddress
64BitSupport ChecksumOffload PrivateSegment LargeSend DataRateSet
--------------------------------------------------------------
Virtual Adapter: ent7
General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 20000
Driver Flags: Up Broadcast Running
Simplex Promiscuous AllMulticast
64BitSupport ChecksumOffload DataRateSet
--------------------------------------------------------------
Control Adapter: ent9
General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 20000
Driver Flags: Up Broadcast Running
Simplex 64BitSupport ChecksumOffload DataRateSet
Statistic descriptions
Table 52. Descriptions of Shared Ethernet Adapter statistics
Statistic Description
Number of adapters Includes the real adapter and all of the virtual adapters.
Note: If you are using Shared Ethernet Adapter failover,
then the control channel adapter is not included.
Shared Ethernet Adapter flags Denotes the features that the Shared Ethernet Adapter is
currently running.
THREAD
The Shared Ethernet Adapter is operating in
threaded mode, where incoming packets are
queued and processed by different threads; its
absence denotes interrupt mode, where packets
are processed in the same interrupt where they
are received.
LARGESEND
The large send feature has been enabled on the
Shared Ethernet Adapter.
JUMBO_FRAMES
The jumbo frames feature has been enabled on
the Shared Ethernet Adapter.
GVRP The GVRP feature has been enabled on the
Shared Ethernet Adapter.
VLAN IDs List of VLAN IDs that have access to the network
through the Shared Ethernet Adapter (this includes PVID
and all tagged VLANs).
Output packets generated Number of packets with a valid VLAN tag or no VLAN
tag sent out of the interface configured over the Shared
Ethernet Adapter.
Output packets dropped Number of packets sent out of the interface configured
over the Shared Ethernet Adapter that are dropped
because of an invalid VLAN tag.
Device output failures Number of packets that could not be sent due to
underlying device errors. This includes errors sent on the
physical network and virtual network, including
fragments and Internet Control Message Protocol (ICMP)
error packets generated by the Shared Ethernet Adapter.
Memory allocation failures Number of packets that could not be sent because there
was insufficient network memory to complete an
operation.
Example statistics
ETHERNET STATISTICS (ent8) :
Device Type: Shared Ethernet Adapter
Hardware Address: 00:0d:60:0c:05:00
Elapsed Time: 3 days 20 hours 34 minutes 26 seconds
General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 0
Driver Flags: Up Broadcast Running
Simplex 64BitSupport ChecksumOffLoad
DataRateSet
--------------------------------------------------------------
Statistics for adapters in the Shared Ethernet Adapter ent8
--------------------------------------------------------------
Number of adapters: 2
SEA Flags: 00000001
< THREAD >
VLAN IDs :
ent7: 1
--------------------------------------------------------------
Real Adapter: ent2
General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 200
Driver Flags: Up Broadcast Running
Simplex Promiscuous AlternateAddress
64BitSupport ChecksumOffload PrivateSegment LargeSend DataRateSet
--------------------------------------------------------------
Virtual Adapter: ent7
General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 20000
Driver Flags: Up Broadcast Running
Simplex Promiscuous AllMulticast
64BitSupport ChecksumOffload DataRateSet
The Virtual I/O Server has the following user types: prime administrator, system administrator, service
representative user, and development engineer user. After installation, the only user type that is active is
the prime administrator.
Prime administrator
The prime administrator (padmin) user ID is the only user ID that is enabled after installation of the
Virtual I/O Server and can run every Virtual I/O Server command. There can be only one prime
administrator in the Virtual I/O Server.
System administrator
The system administrator user ID has access to all commands except the following commands:
v lsfailedlogin
v lsgcl
v mirrorios
v mkuser
v oem_setup_env
v rmuser
v shutdown
v unmirrorios
The prime administrator can create an unlimited number of system administrator IDs.
Service representative
Create the service representative (SR) user so that an IBM service representative can log in to the system
and perform diagnostic routines. Upon logging in, the SR user is placed directly into the diagnostic
menus.
Development engineer
Create a Development engineer (DE) user ID so that an IBM development engineer can log in to the
system and debug problems.
View
This role is a read-only role and can perform only list-type (ls) functions. Users with this role do not have
the authority to change the system configuration and do not have write permission to their home
directories.
The manufacturer may not offer the products, services, or features discussed in this document in other
countries. Consult the manufacturer's representative for information on the products and services
currently available in your area. Any reference to the manufacturer's product, program, or service is not
intended to state or imply that only that product, program, or service may be used. Any functionally
equivalent product, program, or service that does not infringe any intellectual property right of the
manufacturer may be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any product, program, or service.
The manufacturer may have patents or pending patent applications covering subject matter described in
this document. The furnishing of this document does not grant you any license to these patents. You can
send license inquiries, in writing, to the manufacturer.
For license inquiries regarding double-byte character set (DBCS) information, contact the Intellectual
Property Department in your country or send inquiries, in writing, to the manufacturer.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: THIS INFORMATION IS PROVIDED AS IS WITHOUT
WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain
transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
The manufacturer may make improvements and/or changes in the product(s) and/or the program(s)
described in this publication at any time without notice.
Any references in this information to Web sites not owned by the manufacturer are provided for
convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at
those Web sites are not part of the materials for this product and use of those Web sites is at your own
risk.
The manufacturer may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact the
manufacturer.
Such information may be available, subject to appropriate terms and conditions, including in some cases,
payment of a fee.
The licensed program described in this document and all licensed material available for it are provided
by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement,
IBM License Agreement for Machine Code, or any equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the
results obtained in other operating environments may vary significantly. Some measurements may have
been made on development-level systems, and there is no guarantee that these measurements will be the
same on generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning products not produced by this manufacturer was obtained from the suppliers of
those products, their published announcements or other publicly available sources. This manufacturer has
not tested those products and cannot confirm the accuracy of performance, compatibility or any other
claims related to products not produced by this manufacturer. Questions on the capabilities of products
not produced by this manufacturer should be addressed to the suppliers of those products.
All statements regarding the manufacturer's future direction or intent are subject to change or withdrawal
without notice, and represent goals and objectives only.
The manufacturer's prices shown are the manufacturer's suggested retail prices, are current and are
subject to change without notice. Dealer prices may vary.
This information is for planning purposes only. The information herein is subject to change before the
products described become available.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs
in any form without payment to the manufacturer, for the purposes of developing, using, marketing or
distributing application programs conforming to the application programming interface for the operating
platform for which the sample programs are written. These examples have not been thoroughly tested
under all conditions. The manufacturer, therefore, cannot guarantee or imply reliability, serviceability, or
function of these programs. The sample programs are provided "AS IS", without warranty of any kind.
The manufacturer shall not be liable for any damages arising out of your use of the sample programs.
Each copy or any portion of these sample programs or any derivative work, must include a copyright
notice as follows:
(your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs.
Copyright IBM Corp. _enter the year or years_.
If you are viewing this information in softcopy, the photographs and color illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
Copyright and trademark information at www.ibm.com/legal/copytrade.shtml.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or
both.
Red Hat, the Red Hat "Shadow Man" logo, and all Red Hat-based trademarks and logos are trademarks
or registered trademarks of Red Hat, Inc., in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Terms and conditions
Permissions for the use of these publications are granted subject to the following terms and conditions.
Personal Use: You may reproduce these publications for your personal, noncommercial use provided that
all proprietary notices are preserved. You may not distribute, display or make derivative works of these
publications, or any portion thereof, without the express consent of the manufacturer.
Commercial Use: You may reproduce, distribute and display these publications solely within your
enterprise provided that all proprietary notices are preserved. You may not make derivative works of
these publications, or reproduce, distribute or display these publications or any portion thereof outside
your enterprise, without the express consent of the manufacturer.
Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either
express or implied, to the publications or any information, data, software or other intellectual property
contained therein.
The manufacturer reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the publications is detrimental to its interest or, as determined by the manufacturer,
the above instructions are not being properly followed.
You may not download, export or re-export this information except in full compliance with all applicable
laws and regulations, including all United States export laws and regulations.
Printed in USA