Storage Foundation and High Availability 8.0.2 Configuration and Upgrade Guide - AIX
Legal Notice
Copyright © 2023 Veritas Technologies LLC. All rights reserved.
Veritas and the Veritas Logo are trademarks or registered trademarks of Veritas Technologies
LLC or its affiliates in the U.S. and other countries. Other names may be trademarks of their
respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third-party (“Third-Party Programs”). Some of the Third-Party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the third-party legal notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
Veritas Technologies LLC
2625 Augustine Drive
Santa Clara, CA 95054
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan: CustomerCare_Japan@veritas.com
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Veritas
website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
infoscaledocs@veritas.com
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
Upgrading the operating system from AIX 7.2 to AIX 7.3 in Veritas InfoScale 8.0.2 ..... 183
Upgrading the AIX operating system ..... 184
Upgrading Volume Replicator ..... 185
Upgrading VVR without disrupting replication ..... 185
Upgrading SFDB ..... 191
Changing NFS server major numbers for VxVM volumes ..... 311
Cluster Server (VCS): Cluster Server is a clustering solution that provides the following benefits:
Veritas agents: Veritas agents provide high availability for specific resources and applications. Each agent manages resources of a particular type. For example, the Oracle agent manages Oracle databases. Agents typically start, stop, and monitor resources and report state changes.
Disk-based I/O fencing: I/O fencing that uses coordinator disks is referred to as disk-based I/O fencing.
Server-based I/O fencing: I/O fencing that uses at least one CP server system is referred to as server-based I/O fencing. Server-based fencing can include only CP servers, or a mix of CP servers and coordinator disks.
Majority-based I/O fencing: Majority-based I/O fencing does not need coordination points to protect against data corruption and to ensure data consistency in a clustered environment.
Note: Veritas recommends that you use I/O fencing to protect your cluster against
split-brain situations.
Prepare for your next installation or upgrade:
■ List product installation and upgrade requirements, including operating system versions, memory, disk space, and architecture.
■ Analyze systems to determine if they are ready to install or upgrade Veritas products.
■ Download the latest patches, documentation, and high availability agents from a central repository.
■ Access up-to-date compatibility lists for hardware, software, databases, and operating systems.
Improve efficiency:
■ Find and download patches based on product version and platform.
■ List installed Veritas products and license keys.
■ Tune and optimize your environment.
Note: Certain features of SORT are not available for all products. Access to SORT
is available at no extra cost.
About I/O fencing for SFHA in virtual machines that do not support
SCSI-3 PR
In a traditional I/O fencing implementation, where the coordination points are
coordination point servers (CP servers) or coordinator disks, Clustered Volume
Manager (CVM) and Veritas I/O fencing modules provide SCSI-3 persistent
reservation (SCSI-3 PR) based protection on the data disks. This SCSI-3 PR
protection ensures that the I/O operations from the losing node cannot reach a disk
that the surviving sub-cluster has already taken over.
See the Cluster Server Administrator's Guide for more information on how I/O
fencing works.
In virtualized environments that do not support SCSI-3 PR, SFHA attempts to
provide reasonable safety for the data disks. SFHA requires you to configure
non-SCSI-3 I/O fencing in such environments. Non-SCSI-3 fencing either uses
server-based I/O fencing with only CP servers as coordination points or
majority-based I/O fencing, which does not use coordination points, along with some
additional configuration changes to support such environments.
See “Setting up non-SCSI-3 I/O fencing in virtual environments using installer”
on page 104.
See “Setting up non-SCSI-3 fencing in virtual environments manually” on page 129.
Note: Typically, a fencing configuration for a cluster must have three coordination
points. Veritas also supports server-based fencing with a single CP server as its
only coordination point with a caveat that this CP server becomes a single point of
failure.
Note: The dmp disk policy for I/O fencing supports both single and multiple
hardware paths from a node to the coordinator disks. If some coordinator disks
have multiple hardware paths and others have a single hardware path, only the
dmp disk policy is supported. For new installations, Veritas supports only the
dmp disk policy for I/O fencing, even for a single hardware path.
The coordination point server (CP server) is a software solution which runs on
a remote system or cluster. CP server provides arbitration functionality by
allowing the SFHA cluster nodes to perform the following tasks:
■ Self-register to become a member of an active SFHA cluster (registered with
CP server) with access to the data drives
■ Check which other nodes are registered as members of this active SFHA
cluster
■ Self-unregister from this active SFHA cluster
■ Forcefully unregister other nodes (preempt) as members of this active SFHA
cluster
In short, the CP server functions as another arbitration mechanism that integrates
within the existing I/O fencing module.
Note: With the CP server, the fencing arbitration logic still remains on the SFHA
cluster.
■ Coordinator devices can be attached over iSCSI protocol but they must be DMP
devices and must support SCSI-3 persistent reservations.
■ Veritas recommends using hardware-based mirroring for coordinator disks.
■ Coordinator disks must not be used to store data and must not be included in disk
groups that store user data.
■ Coordinator disks cannot be the special devices that array vendors use. For
example, you cannot use EMC gatekeeper devices as coordinator disks.
■ The coordinator disk size must be at least 128 MB.
CP server requirements
SFHA 8.0.2 clusters (application clusters) support coordination point servers (CP
servers) that are hosted on the following VCS and SFHA versions:
■ VCS 7.3.1 or later single-node cluster
■ SFHA 7.3.1 or later cluster
Upgrade considerations for CP servers
■ Upgrade VCS or SFHA on CP servers to version 8.0.2 if the current release
version is prior to version 7.3.1.
■ You do not need to upgrade CP servers to version 8.0.2 if the release version
is 7.3.1 or later.
■ CP servers on version 7.3.1 or later support HTTPS-based communication with
application clusters on version 7.3.1 or later.
■ You need to configure VIPs for HTTPS-based communication if the release version
of the application clusters is 7.3.1 or later.
Make sure that you meet the basic hardware requirements for the VCS/SFHA cluster
to host the CP server.
See the Veritas InfoScale Installation Guide.
Note: While Veritas recommends at least three coordination points for fencing, a
single CP server as coordination point is a supported server-based fencing
configuration. Such single CP server fencing configuration requires that the
coordination point be a highly available CP server that is hosted on an SFHA cluster.
Make sure you meet the following additional CP server requirements which are
covered in this section before you install and configure CP server:
■ Hardware requirements
Table 2-2 displays the CP server supported operating systems and versions. An
application cluster can use a CP server that runs any of the following supported
operating systems.
CP server hosted on a VCS single-node cluster or on an SFHA cluster: CP server supports any of the following operating systems:
■ AIX 7.2 and 7.3
Review other details such as supported operating system levels and architecture for the supported operating systems.
■ The CP server uses the TCP/IP protocol to connect to and communicate with
the application clusters over these network paths. The CP server listens for
messages from the application clusters on TCP port 443 if the communication
happens over the HTTPS protocol. TCP port 443 is the default port and can be
changed while you configure the CP server.
Veritas recommends that you configure multiple network paths to access a CP
server. If a network path fails, CP server does not require a restart and continues
to listen on all the other available virtual IP addresses.
■ The CP server only supports Internet Protocol version 4 (IPv4) when
communicating with the application clusters over the HTTPS protocol.
■ When placing the CP servers within a specific network configuration, you must
take into consideration the number of hops from the different application cluster
nodes to the CP servers. As a best practice, Veritas recommends that the
number of hops and network latency from the different application cluster nodes
to the CP servers should be equal. This ensures that if an event occurs that
results in an I/O fencing scenario, there is no bias in the race due to difference
in number of hops or network latency between the CPS and various nodes.
Use the majority fencing mechanism if you do not want to use coordination points
to protect your cluster. Veritas recommends that you configure I/O fencing in majority
mode if you have a smaller cluster environment and do not want to invest in
additional disks or servers for the purpose of configuring fencing.
If you have installed SFHA in a virtual environment that is not SCSI-3 PR compliant,
you can configure non-SCSI-3 fencing.
See Figure 3-2 on page 31.
Figure 3-1 illustrates a high-level flowchart to configure I/O fencing for the SFHA
cluster.
Figure 3-1 (flowchart summary): For disk-based fencing, initialize disks as VxVM disks, check the disks for I/O fencing compliance with the vxfenadm and vxfentsthdw utilities, and then either run the installer -fencing and choose option 2, or manually configure disk-based I/O fencing. For server-based fencing, set up a CP server: establish a TCP/IP connection between the CP server and the SFHA cluster, install and configure VCS or SFHA on the CP server systems, run -configcps and follow the prompts (or manually configure the CP server), and then run the installer -fencing and choose option 1.
Figure 3-2 illustrates a high-level flowchart to configure non-SCSI-3 I/O fencing for
the SFHA cluster in virtual environments that do not support SCSI-3 PR.
Figure 3-2 (flowchart summary): For SFHA in a non-SCSI-3 compliant virtual environment, complete the configuration tasks using one of the available methods.
After you perform the preparatory tasks, you can use any of the following methods
to configure I/O fencing:
Using the installer: See “Setting up disk-based I/O fencing using installer” on page 81.
Using response files: See “Response file variables to configure disk-based I/O fencing” on page 152.
Manually editing configuration files: See “Setting up disk-based I/O fencing manually” on page 111.
You can also migrate from one I/O fencing configuration to another.
See the Storage Foundation High Availability Administrator's Guide for more details.
In these configurations, the CP servers connect to the application clusters (clusters which run VCS, SFHA, SFCFS, or SF Oracle RAC to provide high availability for applications) over TCP/IP on the public network; the cluster nodes are connected by LLT links and access the application storage and coordinator disks over Fibre Channel.
Figure 3-5 Single CP server with two coordinator disks for each application cluster
See “Configuration diagrams for setting up server-based I/O fencing” on page 306.
Set up shared storage for the CP server database: See “Setting up shared storage for the CP server database” on page 38.
CP server setup uses a single system: Install Veritas InfoScale Enterprise or Veritas InfoScale Availability and configure VCS to create a single-node VCS cluster.
See the Veritas InfoScale Installation Guide for instructions on CP server installation.
See the Cluster Server Configuration and Upgrade Guide for configuring VCS.
See “Configuring the CP server using the installer program” on page 39.
CP server setup uses multiple systems: Install Veritas InfoScale Enterprise and configure SFHA to create an SFHA cluster. This makes the CP server highly available.
Note: If you already configured the CP server cluster in secure mode during the
VCS configuration, then skip this section.
# /opt/VRTS/install/installer -security
For CP servers on a single-node VCS cluster: See “To configure the CP server on a single-node VCS cluster” on page 39.
For CP servers on an SFHA cluster: See “To configure the CP server on an SFHA cluster” on page 43.
# /opt/VRTS/install/installer -configcps
3 Installer checks the cluster information and prompts if you want to configure
CP Server on the cluster.
Enter y to confirm.
4 Select an option based on how you want to configure Coordination Point server.
8 Enter valid virtual IP addresses for the CP Server with HTTPS-based secure
communication. A CP Server can be configured with more than one virtual IP
address.
Note: Ensure that the virtual IP address of the CP server and the IP address
of the NIC interface on the CP server belong to the same subnet of the IP
network. This is required for communication to happen between the client nodes
and the CP server.
9 Enter the corresponding CP server port number for each virtual IP address or
press Enter to accept the default value (443).
10 Enter the absolute path of the CP server database or press Enter to accept
the default value (/etc/VRTScps/db).
12 The installer proceeds with the configuration process, and creates a vxcps.conf
configuration file.
Answer the following questions for each NIC resource that you want to
configure.
14 Enter a valid network interface for the virtual IP address for the CP server
process.
15 Enter the NIC resource you want to associate with the virtual IP addresses.
Enter the NIC resource you want to associate with the virtual IP 10.200.58.231 (1 to 2): 1
Enter the NIC resource you want to associate with the virtual IP 10.200.58.232 (1 to 2): 2
Do you want to add NetworkHosts attribute for the NIC device en0
on system sys1? [y,n,q] y
Enter a valid IP address to configure NetworkHosts for NIC en0
on system sys1: 10.200.56.22
17 Enter the netmask for virtual IP addresses. If you entered an IPv6 address,
enter the prefix details at the prompt.
For example:
Updating main.cf with CPSSG service group.. Done
Successfully added the CPSSG service group to VCS configuration.
Trying to bring CPSSG service group
ONLINE and will wait for upto 120 seconds
19 Run the hagrp -state command to ensure that the CPSSG service group
has been added.
For example:
# hagrp -state CPSSG
#Group Attribute System Value
CPSSG State cps1 |ONLINE|
# ./installer -configcps
6 Select an option based on how you want to configure Coordination Point server.
10 Enter the corresponding CP server port number for each virtual IP address or
press Enter to accept the default value (443).
Enter the default port '443' to be used for all the virtual IP addresses
for HTTPS communication or assign the corresponding port number in the range [49152,
65535] for each virtual IP address. Ensure that each port number is separated by
a single space: [b] (443) 65535 65534 65533
13 The installer proceeds with the configuration process, and creates a vxcps.conf
configuration file.
Answer the following questions for each NIC resource that you want to configure.
15 Enter a valid network interface for the virtual IP address for the CP server
process.
16 Enter the NIC resource you want to associate with the virtual IP addresses.
Enter the NIC resource you want to associate with the virtual IP 10.200.58.231 (1 to 2): 1
Enter the NIC resource you want to associate with the virtual IP 10.200.58.232 (1 to 2): 2
Do you want to add NetworkHosts attribute for the NIC device en0
on system sys1? [y,n,q] y
Enter a valid IP address to configure NetworkHosts for NIC en0
on system sys1: 10.200.56.22
18 Enter the netmask for virtual IP addresses. If you entered an IPv6 address,
enter the prefix details at the prompt.
19 Configure a disk group for CP server database. You can choose an existing
disk group or create a new disk group.
24 After the VCS configuration files are updated, a success message appears.
For example:
Updating main.cf with CPSSG service group .... Done
Successfully added the CPSSG service group to VCS configuration.
27 Run the hagrp -state command to ensure that the CPSSG service group
has been added.
For example:
# hagrp -state CPSSG
#Group Attribute System Value
CPSSG State cps1 |ONLINE|
CPSSG State cps2 |OFFLINE|
Note: If a CP server should support pure IPv6 communication, use only IPv6
addresses in the /etc/vxcps.conf file. If the CP server should support both IPv6
and IPv4 communications, use both IPv6 and IPv4 addresses in the configuration
file.
# hastop -local
2 Edit the main.cf file to add the CPSSG service group on any node. Use the
CPSSG service group in the sample main.cf as an example:
See “Sample configuration files for CP server” on page 288.
Customize the resources under the CPSSG service group as per your
configuration.
3 Verify the main.cf file using the following command:
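For example, assuming the standard VCS configuration directory:
# hacf -verify /etc/VRTSvcs/conf/config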
Note that the IP address, VIP, and FQDN values used in the [alt_names] section
of the configuration file are sample values. Replace the sample values with
your configuration values. Do not change the rest of the values in the
configuration file.
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = US
localityName = Locality Name (eg, city)
organizationalUnitName = Organizational Unit Name (eg, section)
commonName = Common Name (eg, YOUR name)
commonName_max = 64
emailAddress = Email Address
emailAddress_max = 40
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = cpsone.company.com
DNS.2 = cpsone
DNS.3 = 192.168.1.201
'/C=countryname/L=localityname/OU=COMPANY/CN=CACERT' -out \
/var/VRTScps/security/certs/ca.crt
Where days is the number of days that you want the certificate to remain valid,
countryname is the name of the country, localityname is the city, and CACERT is
the certificate name.
5 Generate a 2048-bit private key for CP server.
The key must be stored at /var/VRTScps/security/keys/server_private.key.
# /opt/VRTSperl/non-perl-libs/bin/openssl genrsa -out \
/var/VRTScps/security/keys/server_private.key 2048
'/C=CountryName/L=LocalityName/OU=COMPANY/CN=UUID' \
-out /var/VRTScps/security/certs/server.csr
Where countryname is the name of the country, localityname is the city, and UUID
is the certificate name.
7 Generate the server certificate by using the key certificate of the CA.
# /opt/VRTSperl/non-perl-libs/bin/openssl x509 -req -days days \
-sha256 -in /var/VRTScps/security/certs/server.csr \
-CA /var/VRTScps/security/certs/ca.crt -CAkey \
/var/VRTScps/security/keys/ca.key \
-set_serial 01 -extensions v3_req -extfile https_ssl_cert.conf \
-out /var/VRTScps/security/certs/server.crt
Where days is the number of days that you want the certificate to remain valid,
and https_ssl_cert.conf is the configuration file name.
You successfully created the key and certificate required for the CP server.
8 Ensure that no other user except the root user can read the keys and
certificates.
9 Complete the CP server configuration.
See “Completing the CP server configuration” on page 53.
# hastart
On an SFHA cluster:
◆ Run the installer command with the responsefile option to configure the CP
server on an SFHA cluster.
# /opt/VRTS/install/installer -responsefile '/tmp/sample1.res'
# Configuration Values:
#
our %CFG;
$CFG{cps_db_dir}="/etc/VRTScps/db";
$CFG{cps_https_ports}=[ 443 ];
$CFG{cps_https_vips}=[ "192.168.59.77" ];
$CFG{cps_netmasks}=[ "255.255.248.0" ];
$CFG{cps_network_hosts}{cpsnic1}=
[ "10.200.117.70" ];
$CFG{cps_nic_list}{cpsvip1}=[ "en0" ];
$CFG{cps_singlenode_config}=1;
$CFG{cps_vip2nicres_map}{"192.168.59.77"}=1;
$CFG{cpsname}="cps1";
$CFG{opt}{configcps}=1;
$CFG{opt}{configure}=1;
$CFG{opt}{noipc}=1;
$CFG{opt}{redirect}=1;
$CFG{prod}="AVAILABILITY802";
$CFG{systems}=[ "aix1" ];
$CFG{vcs_clusterid}=23172;
$CFG{vcs_clustername}="clus72";
1;
#
# Configuration Values:
#
our %CFG;
$CFG{cps_db_dir}="/cpsdb";
$CFG{cps_diskgroup}="cps_dg1";
$CFG{cps_https_ports}=[ qw(50006 50007) ];
$CFG{cps_https_vips}=[ qw(10.198.90.6 10.198.90.7) ];
$CFG{cps_netmasks}=[ qw(255.255.248.0 255.255.248.0 255.255.248.0) ];
$CFG{cps_network_hosts}{cpsnic1}=[ qw(10.198.88.18) ];
$CFG{cps_network_hosts}{cpsnic2}=[ qw(10.198.88.18) ];
$CFG{cps_newdg_disks}=[ qw(emc_clariion0_249) ];
$CFG{cps_newvol_volsize}=10;
$CFG{cps_nic_list}{cpsvip1}=[ qw(en0 en0) ];
$CFG{cps_sfha_config}=1;
$CFG{cps_vip2nicres_map}{"10.198.90.6"}=1;
$CFG{cps_volume}="volcps";
$CFG{cpsname}="cps1";
$CFG{opt}{configcps}=1;
$CFG{opt}{configure}=1;
$CFG{opt}{noipc}=1;
$CFG{prod}="ENTERPRISE802";
1;
2 Run the cpsadm command to check if the vxcpserv process is listening on the
configured virtual IP.
If the application cluster is configured for HTTPS-based communication, you
do not need to provide the port number that is assigned for HTTP communication.
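For example, assuming the CP server is reachable at the host name cps1.example.com (a placeholder for your CP server's virtual IP or FQDN):
# cpsadm -s cps1.example.com -a ping_cps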
■ Configuring SFDB
Specify the systems where you want to configure SFHA: See “Specifying systems for configuration” on page 60.
Configure the virtual IP address of the cluster (optional): See “Configuring the virtual IP of the cluster” on page 66.
Configure the cluster in secure mode (optional): See “Configuring SFHA in secure mode” on page 67.
Add VCS users (required if you did not configure the cluster in secure mode): See “Adding VCS users” on page 72.
Configure SMTP email notification (optional): See “Configuring SMTP email notification” on page 73.
Configure SNMP trap notification (optional): See “Configuring SNMP trap notification” on page 74.
Configure global clusters (optional): See “Configuring global clusters” on page 76.
Note: If you want to reconfigure SFHA, before you start the installer you must stop
all the resources that are under VCS control using the hastop command or the
hagrp -offline command.
# /opt/VRTS/install/installer -configure
The installer starts the product installation program with a copyright message
and specifies the directory where the logs are created.
3 Select the component to configure.
4 Continue with the configuration procedure by responding to the installer
questions.
2 Review the output as the installer verifies the systems you specify.
The installer does the following tasks:
■ Checks that the local node running the installer can communicate with
remote nodes
If the installer finds ssh binaries, it confirms that ssh can operate without
requests for passwords or passphrases. If the ssh binaries cannot communicate
with the remote nodes, the installer tries the rsh binaries. If both the ssh and
rsh binaries fail, the installer prompts you to help set up ssh or rsh binaries.
■ Makes sure that the systems are running a supported operating system
■ Checks whether Veritas InfoScale Enterprise is installed
■ Exits if Veritas InfoScale Enterprise 8.0.2 is not installed
3 Review the installer output about the I/O fencing configuration and confirm
whether you want to configure fencing in enabled mode.
VCS provides the option to use LLT over Ethernet or LLT over UDP (User Datagram
Protocol). Veritas recommends that you configure heartbeat links that use LLT over
Ethernet for high performance, unless hardware requirements force you to use LLT
over UDP. If you want to configure LLT over UDP, make sure you meet the
prerequisites.
You must not configure LLT heartbeat using the links that are part of aggregated
links. For example, link1, link2 can be aggregated to create an aggregated link,
aggr1. You can use aggr1 as a heartbeat link, but you must not use either link1 or
link2 as heartbeat links.
See “Using the UDP layer for LLT” on page 313.
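For illustration only, a minimal /etc/llttab for LLT over Ethernet on AIX might resemble the following; the node name sys1, cluster ID 2, and the interfaces en2 and en3 are assumptions, not values from your environment:
set-node sys1
set-cluster 2
link en2 /dev/dlpi/en:2 - ether - -
link en3 /dev/dlpi/en:3 - ether - -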
The following procedure helps you configure LLT heartbeat links.
To configure private heartbeat links
1 Choose one of the following options at the installer prompt based on whether
you want to configure LLT over Ethernet or LLT over UDP.
■ Option 1: Configure the heartbeat links using LLT over Ethernet (answer
installer questions)
Enter the heartbeat link details at the installer prompt to configure LLT over
Ethernet.
Skip to step 2.
■ Option 2: Configure the heartbeat links using LLT over UDP (answer installer
questions)
Make sure that each NIC you want to use as heartbeat link has an IP
address configured. Enter the heartbeat link details at the installer prompt
to configure LLT over UDP. If you have not already configured IP addresses
on the NICs, the installer provides you with an option to detect the IP address
for a given NIC.
Skip to step 3.
■ Option 3: Automatically detect configuration for LLT over Ethernet
Allow the installer to automatically detect the heartbeat link details to
configure LLT over Ethernet. The installer tries to detect all connected links
between all systems.
Skip to step 5.
2 If you chose option 1, enter the network interface card details for the private
heartbeat links.
The installer discovers and lists the network interface cards.
You must not enter the network interface card that is used for the public network
(typically en0.)
Enter the NIC for the first private heartbeat link on sys1:
[b,q,?] en2
Would you like to configure a second private heartbeat link?
[y,n,q,b,?] (y)
Enter the NIC for the second private heartbeat link on sys1:
[b,q,?] en3
Would you like to configure a third private heartbeat link?
[y,n,q,b,?](n)
3 If you chose option 2, enter the NIC details for the private heartbeat links. This
step uses examples such as private_NIC1 or private_NIC2 to refer to the
available names of the NICs.
Enter the NIC for the first private heartbeat link on sys1: [b,q,?]
private_NIC1
Some configured IP addresses have been found on
the NIC private_NIC1 in sys1,
Do you want to choose one for the first private heartbeat link? [y,n,q,?]
Please select one IP address:
1) 192.168.0.1/24
2) 192.168.1.233/24
b) Back to previous menu
Enter the NIC for the second private heartbeat link on sys1: [b,q,?]
private_NIC2
Some configured IP addresses have been found on the
NIC private_NIC2 in sys1,
Do you want to choose one for the second
private heartbeat link? [y,n,q,?] (y)
Please select one IP address:
1) 192.168.1.1/24
2) 192.168.2.233/24
b) Back to previous menu
Enter the NIC for the low-priority heartbeat link on sys1: [b,q,?]
private_NIC0
Some configured IP addresses have been found on
4 Choose whether to use the same NIC details to configure private heartbeat
links on other systems.
Are you using the same NICs for private heartbeat links on all
systems? [y,n,q,b,?] (y)
If you want to use the NIC details that you entered for sys1, make sure the
same NICs are available on each system. Then, enter y at the prompt.
For LLT over UDP, if you want to use the same NICs on other systems, you
still must enter unique IP addresses on each NIC for other systems.
If the NIC device names are different on some of the systems, enter n. Provide
the NIC details for each system as the program prompts.
5 If you chose option 3, the installer detects NICs on each system and network
links, and sets link priority.
If the installer fails to detect heartbeat links or fails to find any high-priority links,
then choose option 1 or option 2 to manually configure the heartbeat links.
See step 2 for option 1, or step 3 for option 2.
6 Enter a unique cluster ID:
3 Confirm whether you want to use the discovered public NIC on the first system.
Do one of the following:
■ If the discovered NIC is the one to use, press Enter.
■ If you want to use a different NIC, type the name of a NIC to use and press
Enter.
4 Confirm whether you want to use the same public NIC on all nodes.
Do one of the following:
■ If all nodes use the same public NIC, enter y.
■ If unique NICs are used, enter n and enter a NIC for each node.
If you want to set up trust relationships for your secure cluster, refer to the following
topics:
See “Configuring a secure cluster node by node” on page 67.
# ./installer -security
2 The installer displays the following question before the installer stops the product
processes:
■ Do you want to grant read access to everyone? [y,n,q,?]
■ To grant read access to all authenticated users, type y.
■ To grant usergroup specific permissions, type n.
■ Do you want to provide any usergroups that you would like to grant read
access?[y,n,q,?]
■ To specify usergroups and grant them read access, type y
■ To grant read access only to root users, type n. The installer grants read
access to the root users.
■ Enter the usergroup names separated by spaces that you would like to
grant read access. If you would like to grant read access to a usergroup on
a specific node, enter like 'usrgrp1@node1', and if you would like to grant
read access to usergroup on any cluster node, enter like 'usrgrp1'. If some
usergroups are not created yet, create the usergroups after configuration
if needed. [b]
3 To verify the cluster is in secure mode after configuration, run the command:
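A typical check, assuming the cluster was configured in secure mode, is to query the SecureClus attribute; a return value of 1 indicates secure mode:
# haclus -value SecureClus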
Table 4-2 lists the tasks that you must perform to configure a secure cluster.
Configure security on one node: See “Configuring the first node” on page 68.
Configure security on the remaining nodes: See “Configuring the remaining nodes” on page 69.
# /opt/VRTS/install/installer -securityonenode
The installer lists information about the cluster, nodes, and service groups. If
VCS is not configured or if VCS is not running on all nodes of the cluster, the
installer prompts whether you want to continue configuring security. It then
prompts you for the node that you want to configure.
VCS is not running on all systems in this cluster. All VCS systems
must be in RUNNING state. Do you want to continue? [y,n,q] (n) y
Warning: All VCS configurations about cluster users are deleted when you
configure the first node. You can use the /opt/VRTSvcs/bin/hauser command
to create cluster users manually.
3 The installer completes the secure configuration on the node. It specifies the
location of the security configuration files and prompts you to copy these files
to the other nodes in the cluster. The installer also specifies the location of log
files, summary file, and response file.
4 Copy the security configuration files from the location specified by the installer
to temporary directories on the other nodes in the cluster.
# /opt/VRTS/install/installer -securityonenode
The installer lists information about the cluster, nodes, and service groups. If
VCS is not configured or if VCS is not running on all nodes of the cluster, the
installer prompts whether you want to continue configuring security. It then
prompts you for the node that you want to configure. Enter 2.
VCS is not running on all systems in this cluster. All VCS systems
must be in RUNNING state. Do you want to continue? [y,n,q] (n) y
The installer completes the secure configuration on the node. It specifies the
location of log files, summary file, and response file.
# /opt/VRTSvcs/bin/haconf -makerw
For example:
To grant read access to everyone:
cluster clus1 (
SecureClus=1
DefaultGuestAccess=1
)
Or
To grant access to only root:
cluster clus1 (
SecureClus=1
)
Or
To grant read access to specific user groups, add or modify SecureClus=1 and
GuestGroups={} to the cluster definition.
For example:
cluster clus1 (
SecureClus=1
GuestGroups={staff, guest}
)
For example:
Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart -secure"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = {"/opt/VRTSvcs/bin/wac -secure"}
RestartLimit = 3
)
# touch /etc/VRTSvcs/conf/config/.secure
7 On the first node, start VCS. Then start VCS on the remaining nodes.
# /opt/VRTSvcs/bin/hastart
# /opt/VRTSvcs/bin/haconf -makerw
Enter Again:*******
Enter the privilege for user smith (A=Administrator, O=Operator,
G=Guest): [b,q,?] a
6 Review the summary of the newly added users and confirm the information.
Note: If you installed a HA/DR license to set up replicated data cluster or campus
cluster, skip this installer option.
The installer configures a cluster UUID value for the cluster at the end of the configuration. After
the installer successfully configures SFHA, it restarts SFHA and its related
processes.
To complete the SFHA configuration
1 If prompted, press Enter at the following prompt.
2 Review the output as the installer stops various processes and performs the
configuration. The installer then restarts SFHA and its related processes.
3 Enter y at the prompt to send the installation information to Veritas.
4 After the installer configures SFHA successfully, note the location of summary,
log, and response files that installer creates.
The files provide the useful information that can assist you with the configuration
and can also assist future configurations.
response file: Contains the configuration information that can be used to perform secure or unattended installations on other systems.
The Veritas License Audit Tool's robust reporting framework enables you to capture
information such as Product Name, Product Version, Licensing Key, License Type,
Operating System, Operating System Version, and CPU Name.
To download the Veritas License Audit Tool and its Installation and User Guide,
click the following link:
https://sort.veritas.com/public/utilities/infoscale/latool/linux/LATool-rhel7.tar
# vxlicrep
# ./installer -license
Example:
Note: The license key file must not be saved in the root directory (/) or the
default license directory on the local host (/etc/vx/licenses/lic). You can
save the license key file inside any other directory on the local host.
3 Make sure keyless licenses are replaced on all cluster nodes before starting
SFHA.
# vxlicrep
Configuring SFDB
By default, the SFDB tools are disabled; that is, the vxdbd daemon is not configured.
You can check whether the SFDB tools are enabled or disabled by using
the /opt/VRTS/bin/sfae_config status command.
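For example, a minimal sequence to check the status and, if required, enable the SFDB tools (assuming the default installation path) is:
# /opt/VRTS/bin/sfae_config status
# /opt/VRTS/bin/sfae_config enable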
# /usr/sbin/cfgmgr
2 List the new external disks or the LUNs as recognized by the operating system.
On each node, enter:
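For example, on AIX you can list the disks that the operating system recognizes by entering:
# lsdev -Cc disk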
3 Determine the VxVM name by which a disk drive (or LUN) is known.
In the following example, VxVM identifies a disk with the AIX device name
/dev/rhdisk75 as EMC0_17:
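As an illustrative sketch (the output below is representative, not from the original example), you can query the DMP node name for an operating system device:
# vxdmpadm getdmpnode nodename=hdisk75
NAME     STATE    ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
EMC0_17  ENABLED  EMC         1      1     0     EMC0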
4 To initialize the disks as VxVM disks, use one of the following methods:
■ Use the interactive vxdiskadm utility to initialize the disks as VxVM disks.
For more information, see the Storage Foundation Administrator’s Guide.
■ Use the vxdisksetup command to initialize a disk as a VxVM disk.
# vxdisksetup -i device_name
# vxdisksetup -i EMC0_17
Repeat this command for each disk you intend to use as a coordinator disk.
■ If you test disks in DMP format, use the VxVM command vxdisk list to get
the DMP path name.
■ If you test disks in raw format for Active/Passive disk arrays, you must use an
active enabled path with the vxfentsthdw command. Run the vxdmpadm
getsubpaths dmpnodename=enclosure-based_name command to list the active
enabled paths.
DMP opens the secondary (passive) paths with an exclusive flag in
Active/Passive arrays. So, if you test the secondary (passive) raw paths of the
disk, the vxfentsthdw command may fail due to DMP’s exclusive flag.
The vxfentsthdw utility has additional options suitable for testing many disks. Review
the options for testing the disk groups (-g) and the disks that are listed in a file (-f).
You can also test disks without destroying data using the -r option.
See the Cluster Server Administrator's Guide.
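For instance, a non-destructive, read-only check of all disks in a disk group (the disk group name vxfencoorddg here is a placeholder) might look like:
# vxfentsthdw -rg vxfencoorddg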
Checking that disks support SCSI-3 involves the following tasks:
■ Verifying the Array Support Library (ASL)
See “Verifying Array Support Library (ASL)” on page 83.
■ Verifying that nodes have access to the same disk
See “Verifying that the nodes have access to the same disk” on page 84.
■ Testing the shared disks for SCSI-3
See “Testing the disks using vxfentsthdw utility” on page 85.
3 Scan all disk drives and their attributes, update the VxVM device list, and
reconfigure DMP with the new devices. Type:
# vxdisk scandisks
See the Veritas Volume Manager documentation for details on how to add and
configure disks.
# vxfenadm -i diskpath
For A/P arrays, run the vxfentsthdw command only on active enabled paths.
Refer to the vxfenadm (1M) manual page.
For example, an EMC disk is accessible by the /dev/rhdisk75 path on node A
and the /dev/rhdisk76 path on node B.
From node A, enter:
# vxfenadm -i /dev/rhdisk75
Vendor id : EMC
Product id : SYMMETRIX
Revision : 5567
Serial Number : 42031000a
The same serial number information should appear when you enter the
equivalent command on node B using the /dev/rhdisk76 path.
On a disk from another manufacturer, Hitachi Data Systems, the output is
different and may resemble:
Vendor id : HITACHI
Product id : OPEN-3
Revision : 0117
Serial Number : 0401EB6F0002
For more information on how to replace coordinator disks, refer to the Cluster Server
Administrator's Guide.
Warning: The tests overwrite and destroy data on the disks unless you use
the -r option.
4 Review the output as the utility performs the checks and reports its activities.
5 If a disk is ready for I/O fencing on each node, the utility reports success for
each node. For example, the utility displays the following message for the node
sys1.
6 Run the vxfentsthdw utility for each disk you intend to verify.
Note: The installer stops and starts SFHA to complete I/O fencing configuration.
Make sure to unfreeze any frozen VCS service groups in the cluster for the installer
to successfully stop SFHA.
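As an illustration (the service group name mygroup is a placeholder), a persistently frozen service group can be unfrozen as follows:
# haconf -makerw
# hagrp -unfreeze mygroup -persistent
# haconf -dump -makero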
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note the location of log files which you can access in the event of any problem
with the configuration process.
2 Enter the host name of one of the systems in the cluster.
3 Confirm that you want to proceed with the I/O fencing configuration at the
prompt.
The program checks that the local node running the script can communicate
with remote nodes and checks whether SFHA 8.0.2 is configured properly.
4 Review the I/O fencing configuration options that the program presents. Type
2 to configure disk-based I/O fencing.
6 Choose whether to use an existing disk group or create a new disk group to
configure as the coordinator disk group.
The program lists the available disk group names and provides an option to
create a new disk group. Perform one of the following:
■ To use an existing disk group, enter the number corresponding to the disk
group at the prompt.
The program verifies whether the disk group you chose has an odd number
of disks and that the disk group has a minimum of three disks.
■ To create a new disk group, perform the following steps:
■ Enter the number corresponding to the Create a new disk group option.
The program lists the available disks that are in the CDS disk format in
the cluster and asks you to choose an odd number of disks with at least
three disks to be used as coordinator disks.
Veritas recommends that you use three disks as coordination points for
disk-based I/O fencing.
■ If the available VxVM CDS disks are fewer than required, the installer
asks whether you want to initialize more disks as VxVM disks. Choose
the disks that you want to initialize as VxVM disks and then use them to
create the new disk group.
■ Enter the numbers corresponding to the disks that you want to use as
coordinator disks.
■ Enter the disk group name.
7 Verify that the coordinator disks you chose meet the I/O fencing requirements.
You must verify that the disks are SCSI-3 PR compatible using the vxfentsthdw
utility and then return to this configuration program.
See “Checking shared disks for I/O fencing” on page 82.
8 After you confirm the requirements, the program creates the coordinator disk
group with the information you provided.
9 Verify and confirm the I/O fencing configuration information that the installer
summarizes.
10 Review the output as the configuration program does the following:
■ Stops VCS and I/O fencing on each node.
■ Configures disk-based I/O fencing and starts the I/O fencing process.
■ Updates the VCS configuration file main.cf if necessary.
■ Copies the /etc/vxfenmode file to a date and time suffixed file
/etc/vxfenmode-date-time. This backup file is useful if any future fencing
configuration fails.
11 Review the output as the configuration program displays the location of the log
files, the summary files, and the response files.
12 Configure the Coordination Point Agent.
Do you want to configure Coordination Point Agent on
the client cluster? [y,n,q] (y)
13 Enter a name for the service group for the Coordination Point Agent.
Enter a non-existing name for the service group for
Coordination Point Agent: [b] (vxfen) vxfen
The installer adds the Coordination Point Agent and updates the main configuration
file.
16 Enable auto refresh of coordination points.
Do you want to enable auto refresh of coordination points
if registration keys are missing
on any of them? [y,n,q,b,?] (n)
Warning: Refreshing keys might cause the cluster to panic if a node leaves
membership before the coordination points refresh is complete.
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note down the location of log files that you can access if there is a problem
with the configuration process.
2 Confirm that you want to proceed with the I/O fencing configuration at the
prompt.
The program checks that the local node running the script can communicate
with the remote nodes and checks whether SFHA 8.0.2 is configured properly.
3 Review the I/O fencing configuration options that the program presents. Type
the number corresponding to the option that refreshes registrations or keys on
the existing coordination points.
4 Ensure that the disk group constitution that is used by the fencing module
contains the same disks that are currently used as coordination disks.
For example,
Disk Group: fendg
Fencing disk policy: dmp
Fencing disks:
emc_clariion0_62
emc_clariion0_65
emc_clariion0_66
Mix of CP servers and coordinator disks: See “To configure server-based fencing for the SFHA cluster (one CP server and two coordinator disks)” on page 92.
Single CP server: See “To configure server-based fencing for the SFHA cluster” on page 96.
To configure server-based fencing for the SFHA cluster (one CP server and
two coordinator disks)
1 Depending on the server-based configuration model in your setup, make sure
of the following:
■ CP servers are configured and are reachable from the SFHA cluster. The
SFHA cluster is also referred to as the application cluster or the client cluster.
See “Setting up the CP server” on page 36.
■ The coordination disks are verified for SCSI3-PR compliance.
See “Checking shared disks for I/O fencing” on page 82.
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note the location of log files which you can access in the event of any problem
with the configuration process.
3 Confirm that you want to proceed with the I/O fencing configuration at the
prompt.
The program checks that the local node running the script can communicate
with remote nodes and checks whether SFHA 8.0.2 is configured properly.
4 Review the I/O fencing configuration options that the program presents. Type
1 to configure server-based I/O fencing.
5 Make sure that the storage supports SCSI3-PR, and answer y at the following
prompt.
6 Provide the following details about the coordination points at the installer prompt:
■ Enter the total number of coordination points including both servers and
disks. This number should be at least 3.
■ Enter the total number of coordinator disks among the coordination points.
■ Enter the virtual IP addresses or the fully qualified host name for each of
the CP servers. The installer assumes these values to be identical as viewed
from all the application cluster nodes.
The installer prompts for this information for the number of virtual IP
addresses you want to configure for each CP server.
■ Enter the port that the CP server would be listening on.
1) rhdisk75
2) rhdisk76
3) rhdisk77
■ If you have not already checked the disks for SCSI-3 PR compliance in
step 1, check the disks now.
The installer displays a message that recommends you to verify the disks
in another window and then return to this configuration procedure.
Press Enter to continue, and confirm your disk selection at the installer
prompt.
■ Enter a disk group name for the coordinator disks or accept the default.
9 Verify and confirm the coordination points information for the fencing
configuration.
For example:
The installer initializes the disks and the disk group and deports the disk group
on the SFHA (application cluster) node.
10 Verify and confirm the I/O fencing configuration information.
CPS Admin utility location: /opt/VRTScps/bin/cpsadm
Cluster ID: 2122
Cluster Name: clus1
UUID for the above cluster: {ae5e589a-1dd1-11b2-dd44-00144f79240c}
11 Review the output as the installer updates the application cluster information
on each of the CP servers to ensure connectivity between them. The installer
then populates the /etc/vxfenmode file with the appropriate details in each of
the application cluster nodes.
Adding the client cluster to the Coordination Point Server 10.209.80.197 .......... Done
Registering client node sys1 with Coordination Point Server 10.209.80.197...... Done
Adding CPClient user for communicating to Coordination Point Server 10.209.80.197 .... Done
Adding cluster clus1 to the CPClient user on Coordination Point Server 10.209.80.197 .. Done
Registering client node sys2 with Coordination Point Server 10.209.80.197 ..... Done
Adding CPClient user for communicating to Coordination Point Server 10.209.80.197 .... Done
Adding cluster clus1 to the CPClient user on Coordination Point Server 10.209.80.197 ..Done
14 Additionally, the Coordination Point agent can monitor changes to the
Coordinator Disk Group constitution, such as a disk being accidentally deleted
from the Coordinator Disk Group. The frequency of this detailed monitoring
can be tuned with the LevelTwoMonitorFreq attribute. For example, if you set
this attribute to 5, the agent monitors the Coordinator Disk Group constitution
every five monitor cycles.
Note that for the LevelTwoMonitorFreq attribute to be applicable, there must
be disks as part of the Coordinator Disk Group.
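As a sketch, assuming the Coordination Point agent resource is named coordpoint (verify the actual resource name in your configuration), the attribute could be set as follows:
# haconf -makerw
# hares -override coordpoint LevelTwoMonitorFreq
# hares -modify coordpoint LevelTwoMonitorFreq 5
# haconf -dump -makero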
16 Note the location of the configuration log files, summary files, and response
files that the installer displays for later use.
17 Verify the fencing configuration using:
# vxfenadm -d
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note the location of log files which you can access in the event of any problem
with the configuration process.
4 Confirm that you want to proceed with the I/O fencing configuration at the
prompt.
The program checks that the local node running the script can communicate
with remote nodes and checks whether SFHA 8.0.2 is configured properly.
5 Review the I/O fencing configuration options that the program presents. Type
1 to configure server-based I/O fencing.
6 Make sure that the storage supports SCSI3-PR, and answer y at the following
prompt.
Read the installer warning carefully before you proceed with the configuration.
8 Provide the following CP server details at the installer prompt:
■ Enter the total number of virtual IP addresses or the total number of fully
qualified host names for each of the CP servers.
■ Enter the virtual IP address or the fully qualified host name for the CP server.
The installer assumes these values to be identical as viewed from all the
application cluster nodes.
The installer prompts for this information for the number of virtual IP
addresses you want to configure for each CP server.
■ Enter the port that the CP server would be listening on.
9 Verify and confirm the coordination points information for the fencing
configuration.
For example:
11 Review the output as the installer updates the application cluster information
on each of the CP servers to ensure connectivity between them. The installer
then populates the /etc/vxfenmode file with the appropriate details in each of
the application cluster nodes.
The installer also populates the /etc/vxfenmode file with the entry single_cp=1
for such single CP server fencing configuration.
Adding the client cluster to the Coordination Point Server 10.209.80.197 .......... Done
Registering client node sys1 with Coordination Point Server 10.209.80.197...... Done
Adding CPClient user for communicating to Coordination Point Server 10.209.80.197 .... Done
Adding cluster clus1 to the CPClient user on Coordination Point Server 10.209.80.197 .. Done
Registering client node sys2 with Coordination Point Server 10.209.80.197 ..... Done
Adding CPClient user for communicating to Coordination Point Server 10.209.80.197 .... Done
Adding cluster clus1 to the CPClient user on Coordination Point Server 10.209.80.197 .. Done
12 Review the output as the installer stops and restarts the VCS and the fencing
processes on each application cluster node, and completes the I/O fencing
configuration.
13 Configure the CP agent on the SFHA (application cluster).
Do you want to configure Coordination Point Agent on the
client cluster? [y,n,q] (y)
15 Note the location of the configuration log files, summary files, and response
files that the installer displays for later use.
Warning: Refreshing keys might cause the cluster to panic if a node leaves
membership before the coordination points refresh is complete.
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note the location of log files that you can access if there is a problem with the
configuration process.
2 Confirm that you want to proceed with the I/O fencing configuration at the
prompt.
The program checks that the local node running the script can communicate
with the remote nodes and checks whether SFHA 8.0.2 is configured properly.
3 Review the I/O fencing configuration options that the program presents. Type
the number corresponding to the option that refreshes registrations or keys on
the existing coordination points.
4 Ensure that the /etc/vxfentab file contains the same coordination point
servers that are currently used by the fencing module.
Also, ensure that the disk group mentioned in the /etc/vxfendg file contains
the same disks that are currently used by the fencing module as coordination
disks.
5 Verify the coordination points.
For example,
Total number of coordination points being used: 3
Coordination Point Server ([VIP or FQHN]:Port):
1. 10.198.94.146 ([10.198.94.146]:443)
2. 10.198.94.144 ([10.198.94.144]:443)
SCSI-3 disks:
1. emc_clariion0_61
Disk Group name for the disks in customized fencing: vxfencoorddg
Disk policy used for customized fencing: dmp
Note: Disk-based fencing does not support setting the order of existing coordination
points.
■ The first coordination point in the order must be the one that has the best chance
to win the race. The next coordination point that you list in the order must have
a relatively lower chance to win the race. Complete the order such that the last
coordination point has the least chance to win the race.
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note the location of log files that you can access if there is a problem with the
configuration process.
2 Confirm that you want to proceed with the I/O fencing configuration at the
prompt.
The program checks that the local node running the script can communicate
with remote nodes and checks whether SFHA 8.0.2 is configured properly.
3 Review the I/O fencing configuration options that the program presents. Type
the number that corresponds to the option to set the order of the existing
coordination points.
For example:
Warning: The cluster might panic if a node leaves membership before the
coordination points change is complete.
5 Enter the new order of the coordination points as numbers separated by
spaces. For example: [1-3,b,q] 3 1 2
7 Do you want to send the information about this installation to us to help improve
installation in the future? [y,n,q,?] (y).
8 Do you want to view the summary file? [y,n,q] (n).
For example,
vxfen_mode=customized
vxfen_mechanism=cps
port=443
scsi3_disk_policy=dmp
cps1=[10.198.94.146]
cps2=[10.198.94.144]
vxfendg=vxfencoorddg
vxfen_honor_cp_order=1
10 Verify that the coordination point order is updated in the output of the
vxfenconfig -l command.
For example,
I/O Fencing Configuration Information:
======================================
single_cp=0
[10.198.94.146]:443 {e7823b24-1dd1-11b2-8814-2299557f1dc0}
/dev/vx/rdmp/emc_clariion0_65 60060160A38B1600386FD87CA8FDDD11
/dev/vx/rdmp/emc_clariion0_66 60060160A38B1600396FD87CA8FDDD11
/dev/vx/rdmp/emc_clariion0_62 60060160A38B16005AA00372A8FDDD11
[10.198.94.144]:443 {01f18460-1dd2-11b2-b818-659cbc6eb360}
1 Start the installer with the -fencing option.
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
2 Confirm that you want to proceed with the I/O fencing configuration at the
prompt.
The program checks that the local node running the script can communicate
with remote nodes and checks whether SFHA 8.0.2 is configured properly.
3 For server-based fencing, review the I/O fencing configuration options that the
program presents. Type 1 to configure server-based I/O fencing.
4 Enter n to confirm that your storage environment does not support SCSI-3 PR.
5 Confirm that you want to proceed with the non-SCSI-3 I/O fencing configuration
at the prompt.
6 For server-based fencing, enter the number of CP server coordination points
you want to use in your setup.
7 For server-based fencing, enter the following details for each CP server:
■ Enter the virtual IP address or the fully qualified host name.
■ Enter the port address on which the CP server listens for connections.
The default value is 443. You can enter a different port address. Valid values
are between 49152 and 65535.
The installer assumes that these values are identical from the view of the SFHA
cluster nodes that host the applications for high availability.
8 For server-based fencing, verify and confirm the CP server information that
you provided.
9 Verify and confirm the SFHA cluster configuration information.
Review the output as the installer performs the following tasks:
■ Updates the following configuration files on each node of the SFHA cluster
■ /etc/vxfenmode file
■ /etc/default/vxfen file
■ /etc/vxenviron file
■ /etc/llttab file
10 Review the output as the installer stops SFHA on each node, starts I/O fencing
on each node, updates the VCS configuration file main.cf, and restarts SFHA
with non-SCSI-3 fencing.
For server-based fencing, confirm to configure the CP agent on the SFHA
cluster.
11 Confirm whether you want to send the installation information to us.
12 After the installer configures I/O fencing successfully, note the location of
summary, log, and response files that installer creates.
The files provide useful information which can assist you with the configuration,
and can also assist future configurations.
1 Start the installer with the -fencing option.
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note: Make a note of the log file location which you can access in the event
of any issues with the configuration process.
2 Confirm that you want to proceed with the I/O fencing configuration at the
prompt. The program checks that the local node running the script can
communicate with remote nodes and checks whether SFHA is configured
properly.
3 Review the I/O fencing configuration options that the program presents. Type
3 to configure majority-based I/O fencing.
Note: The installer asks the following question: Does your storage
environment support SCSI3 PR? [y,n,q,?] Enter 'y' if your storage environment
supports SCSI-3 PR. Otherwise, the installer configures non-SCSI-3
fencing (NSF).
4 The installer then populates the /etc/vxfenmode file with the appropriate details
in each of the application cluster nodes.
5 Review the output as the installer stops and restarts the VCS and the fencing
processes on each application cluster node, and completes the I/O fencing
configuration.
6 Note the location of the configuration log files, summary files, and response
files that the installer displays for later use.
7 Verify the fencing configuration.
# vxfenadm -d
# vxfenadm -d
2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
# haconf -makerw
■ Set the value of the system-level attribute FencingWeight for each node in
the cluster.
For example, in a two-node cluster, where you want to assign sys1 five
times more weight compared to sys2, run the following commands:
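For example, the following commands assign the illustrative weights 50 and
10 (a 5:1 ratio); adjust the values for your cluster:
# hasys -modify sys1 FencingWeight 50
# hasys -modify sys2 FencingWeight 10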
# vxfenconfig -a
# haconf -makerw
■ Set the value of the group-level attribute Priority for each service group.
For example, run the following command:
Make sure that you assign a parent service group an equal or lower priority
than its child service group. If the parent and the child service groups are
hosted in different subclusters, the subcluster that hosts the child service
group gets higher preference.
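For illustration, assuming a service group named sg1, the group-level race
policy and priority might be set as follows:
# haclus -modify PreferredFencingPolicy Group
# hagrp -modify sg1 Priority 1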
■ Save the VCS configuration.
# haconf -dump -makero
■ To enable preferred fencing using the site-level race policy, make the VCS
configuration writable:
# haconf -makerw
■ Set the value of the site-level attribute Preference for each site.
For example,
# hasite -modify Pune Preference 2
6 To view the fencing node weights that are currently set in the fencing driver,
run the following command:
# vxfenconfig -a
# vxfenadm -d
2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
3 To disable preferred fencing and use the default race policy, set the value of
the cluster-level attribute PreferredFencingPolicy as Disabled.
# haconf -makerw
# haclus -modify PreferredFencingPolicy Disabled
# haconf -dump -makero
Chapter 6
Manually configuring
SFHA clusters for data
integrity
This chapter includes the following topics:
Table 6-1
■ Initializing disks as VxVM disks: See “Initializing disks as VxVM disks” on page 81.
■ Checking shared disks for I/O fencing: See “Checking shared disks for I/O fencing” on page 82.
■ Setting up coordinator disk groups: See “Setting up coordinator disk groups” on page 113.
■ Creating I/O fencing configuration files: See “Creating I/O fencing configuration files” on page 113.
■ Modifying SFHA configuration to use I/O fencing: See “Modifying VCS configuration to use I/O fencing” on page 114.
■ Verifying I/O fencing configuration: See “Verifying I/O fencing configuration” on page 116.
2 Set the coordinator attribute value as "on" for the coordinator disk group.
4 Import the disk group with the -t option to avoid automatically importing it when
the nodes restart:
5 Deport the disk group. Deporting the disk group prevents the coordinator disks
from serving other purposes:
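For illustration, assuming the disk group name vxfencoorddg that is used
elsewhere in this guide, the commands for these steps would look like the
following:
# vxdg -g vxfencoorddg set coordinator=on
# vxdg -t import vxfencoorddg
# vxdg deport vxfencoorddg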
# more /etc/vxfenmode
4 Ensure that you edit the following file on each node in the cluster to change
the values of the VXFEN_START and the VXFEN_STOP environment variables
to 1:
/etc/default/vxfen
# hastop -all
# /etc/init.d/vxfen.rc stop
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.orig
6 On one node, use vi or another text editor to edit the main.cf file. To modify
the list of cluster attributes, add the UseFence attribute and assign its value
as SCSI3.
cluster clus1(
UserNames = { admin = "cDRpdxPmHpzS." }
Administrators = { admin }
HacliUserLevel = COMMANDROOT
CounterInterval = 5
UseFence = SCSI3
)
9 Start the I/O fencing driver and VCS. Perform the following steps on each node:
■ Start the I/O fencing driver.
The vxfen startup script also invokes the vxfenconfig command, which
configures the vxfen driver to start and use the coordination points that are
listed in /etc/vxfentab.
# /etc/init.d/vxfen.rc start
# /opt/VRTS/bin/hastart
■ Start VCS on all other nodes once VCS on first node reaches RUNNING
state.
# /opt/VRTS/bin/hastart
# vxfenadm -d
Output similar to the following appears if the fencing mode is SCSI3 and the
SCSI3 disk policy is dmp:
* 0 (sys1)
1 (sys2)
2 Verify that the disk-based I/O fencing is using the specified disks.
# vxfenconfig -l
■ Preparing the CP servers for use by the SFHA cluster: See “Preparing the CP servers manually for use by the SFHA cluster” on page 117.
■ Generating the client key and certificates on the client nodes manually: See “Generating the client key and certificates manually on the client nodes” on page 119.
■ Modifying I/O fencing configuration files to configure server-based I/O fencing: See “Configuring server-based fencing on the SFHA cluster manually” on page 121.
■ Modifying SFHA configuration to use I/O fencing: See “Modifying VCS configuration to use I/O fencing” on page 114.
■ Verifying the server-based I/O fencing configuration: See “Verifying server-based I/O fencing configuration” on page 129.
CP server cps1
cluster clus1
# cat /etc/vx/.uuids/clusuuid
{f0735332-1dd1-11b2-bb31-00306eea460a}
2 Use the cpsadm command to check whether the SFHA cluster and nodes are
present in the CP server.
For example:
If the output does not show the cluster and nodes, then add them as described
in the next step.
For detailed information about the cpsadm command, see the Cluster Server
Administrator's Guide.
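For example, a sketch of the check using the CP server name cps1 from the
table above (the exact cpsadm options may vary by release):
# cpsadm -s cps1 -a list_nodes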
See “Generating the client key and certificates manually on the client nodes ”
on page 119.
Note: Since the openssl utility might not be available on client nodes, Veritas
recommends that you access the CP server using SSH to generate the client
keys or certificates on the CP server and copy the certificates to each of the
nodes.
3 Generate the client CSR for the cluster. CN is the UUID of the client's cluster.
# /opt/VRTSperl/non-perl-libs/bin/openssl req -new -sha256 \
-key client_private.key \
-subj '/C=countryname/L=localityname/OU=COMPANY/CN=CLUS_UUID' \
-out client_192.168.1.201.csr
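The command that signs the CSR to produce the client certificate is not shown
here; a typical sketch, run on the CP server, follows (the CA certificate and
key paths are assumptions and may differ on your CP server):
# /opt/VRTSperl/non-perl-libs/bin/openssl x509 -req -days days -sha256 \
-in client_192.168.1.201.csr -CA /var/VRTScps/security/certs/ca.crt \
-CAkey /var/VRTScps/security/keys/ca.key -set_serial 01 \
-out client_192.168.1.201.crt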
Where days is the number of days you want the certificate to remain valid,
and 192.168.1.201 is the VIP or FQHN of the CP server.
5 Copy the client key, the client certificate, and the CA certificate to each of the
client nodes at the following locations.
Copy the client key to
/var/VRTSvxfen/security/keys/client_private.key. The client key is
common to all the client nodes, so you need to generate it only once.
Copy the client certificate to
/var/VRTSvxfen/security/certs/client_192.168.1.201.crt.
Note: Copy the certificates and the key to all the nodes at the locations that
are listed in this step.
6 If the client nodes need to access the CP server using the FQHN or the host
name, make a copy of the certificates you generated and replace the VIP
with the FQHN or host name. Make sure that you copy these certificates to
all the nodes.
7 Repeat the procedure for every CP server.
8 After you copy the key and certificates to each client node, delete the client
keys and client certificates on the CP server.
Note: Whenever coordinator disks are used as coordination points in your I/O
fencing configuration, you must create a disk group (vxfencoorddg). You must
specify this disk group in the /etc/vxfenmode file.
See “Setting up coordinator disk groups” on page 113.
The customized fencing framework also generates the /etc/vxfentab file which
has coordination points (all the CP servers and disks from disk group specified in
/etc/vxfenmode file).
You must change the values of the VXFEN_START and the VXFEN_STOP
environment variables to 1.
2 Use a text editor to edit the /etc/vxfenmode file values to meet your
configuration specifications.
■ If your server-based fencing configuration uses a single highly available
CP server as its only coordination point, make sure to add the single_cp=1
entry in the /etc/vxfenmode file.
■ If you want the vxfen module to use a specific order of coordination points
during a network partition scenario, set the vxfen_honor_cp_order value
to be 1. By default, the parameter is disabled.
The following sample file output displays what the /etc/vxfenmode file contains:
See “Sample vxfenmode file output for server-based fencing” on page 122.
3 After editing the /etc/vxfenmode file, run the vxfen init script to start fencing.
For example:
# /etc/init.d/vxfen.rc start
#
# vxfen_mode determines in what mode VCS I/O Fencing should work.
#
# available options:
#
# scsi3_disk_policy determines the way in which I/O fencing
# communicates with the coordination disks. This field is
# required only if customized coordinator disks are being used.
#
# available options:
# dmp - use dynamic multipathing
#
scsi3_disk_policy=dmp
#
# vxfen_honor_cp_order determines the order in which vxfen
# should use the coordination points specified in this file.
#
# available options:
# 0 - vxfen uses a sorted list of coordination points specified
# in this file,
# the order in which coordination points are specified does not matter.
# (default)
# 1 - vxfen uses the coordination points in the same order they are
# specified in this file
# cps1=[192.168.0.23],[192.168.0.24]:58888,[cps1.company.com]
# cps2=[192.168.0.25]
# cps3=[cps2.company.com]:59999
#
# In the above example,
# - port 58888 will be used for vip [192.168.0.24]
# - port 59999 will be used for vhn [cps2.company.com], and
# - default port 57777 will be used for all remaining <vip/vhn>s:
# [192.168.0.23]
# [cps1.company.com]
# [192.168.0.25]
# - if default port 57777 were not specified, port 14250
# would be used for all remaining <vip/vhn>s:
# [192.168.0.23]
# [cps1.company.com]
# [192.168.0.25]
#
# SCSI-3 compliant coordinator disks are specified as:
#
# vxfendg=<coordinator disk group name>
# Example:
# vxfendg=vxfencoorddg
#
# Examples of different configurations:
# 1. All CP server coordination points
# cps1=
# cps2=
# cps3=
#
# 2. A combination of CP server and a disk group having two SCSI-3
# coordinator disks
# cps1=
# vxfendg=
# Note: The disk group specified in this case should have two disks
#
# 3. All SCSI-3 coordinator disks
# vxfendg=
# Note: The disk group specified in this case should have three disks
# cps1=[cps1.company.com]
# cps2=[cps2.company.com]
# cps3=[cps3.company.com]
# port=443
cps<number>=[virtual_ip_address/virtual_host_name]:port
cps1=[192.168.0.23],[192.168.0.24]:58888,[cps1.company.com]
vxfen_honor_cp_order Set the value to 1 for vxfen module to use a specific order of
coordination points during a network partition scenario.
# haconf -makerw
# hagrp -add vxfen
# hagrp -modify vxfen SystemList sys1 0 sys2 1
# hagrp -modify vxfen AutoFailOver 0
# hagrp -modify vxfen Parallel 1
# hagrp -modify vxfen SourceFile "./main.cf"
# hares -add coordpoint CoordPoint vxfen
# hares -modify coordpoint FaultTolerance 0
# hares -override coordpoint LevelTwoMonitorFreq
# hares -modify coordpoint LevelTwoMonitorFreq 5
# hares -modify coordpoint Enabled 1
# haconf -dump -makero
# haconf -makerw
# hares -add RES_phantom_vxfen Phantom vxfen
# hares -modify RES_phantom_vxfen Enabled 1
# haconf -dump -makero
4 Verify the status of the agent on the SFHA cluster using the hares commands.
For example:
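A typical check, shown for illustration (coordpoint is the CoordPoint resource
name added earlier):
# hares -state coordpoint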
5 Access the engine log to view the agent log. The agent log is written to the
engine log.
The agent log contains detailed CoordPoint agent monitoring information;
including information about whether the CoordPoint agent is able to access all
the coordination points, information to check on which coordination points the
CoordPoint agent is reporting missing keys, etc.
To view the debug logs in the engine log, change the dbg level for that node
using the following commands:
# haconf -makerw
Note: The CoordPoint agent is always in the online state when I/O fencing is
configured in the majority or the disabled mode, because in these modes I/O
fencing does not have any coordination points to monitor.
# vxfenadm -d
2 Verify that I/O fencing is using the specified coordination points by running the
vxfenconfig command. For example, run the following command:
# vxfenconfig -l
If the output displays single_cp=1, it indicates that the application cluster uses
a CP server as the single coordination point for server-based fencing.
# vxfenadm -d
data_disk_fencing=off
loser_exit_delay=55
vxfen_script_timeout=25
# lltconfig -T sendhbcap:3000
■ Add the following line to the /etc/llttab file so that the changes remain
persistent after any reboot:
set-timer sendhbcap:3000
# haconf -makerw
■ For each resource of the type DiskGroup, set the value of the
MonitorReservation attribute to 0 and the value of the Reservation attribute
to NONE.
9 Make sure that the UseFence attribute in the VCS configuration file main.cf is
set to SCSI3.
10 To make these VxFEN changes take effect, stop and restart VxFEN and the
dependent modules:
■ On each node, run the following command to stop VCS:
# /etc/init.d/vcs.rc stop
■ After VCS takes all services offline, run the following command to stop
VxFEN:
# /etc/init.d/vxfen.rc stop
■ On each node, run the following commands to restart VxFEN and VCS:
# /etc/init.d/vxfen.rc start
# /etc/init.d/vcs.rc start
#
# vxfen_mode determines in what mode VCS I/O Fencing should work.
#
# available options:
# scsi3 - use scsi3 persistent reservation disks
# customized - use script based customized fencing
# disabled - run the driver but don't do any actual fencing
#
vxfen_mode=customized
# available options:
# cps - use a coordination point server with optional script
# controlled scsi3 disks
#
vxfen_mechanism=cps
#
# scsi3_disk_policy determines the way in which I/O fencing
# communicates with the coordination disks. This field is
# required only if customized coordinator disks are being used.
#
# available options:
# dmp - use dynamic multipathing
#
scsi3_disk_policy=dmp
#
# Seconds for which the winning sub cluster waits to allow for the
# losing subcluster to panic & drain I/Os. Useful in the absence of
# SCSI3 based data disk fencing
loser_exit_delay=55
#
# Seconds for which vxfend process wait for a customized fencing
# script to complete. Only used with vxfen_mode=customized
vxfen_script_timeout=25
#
# vxfen_honor_cp_order determines the order in which vxfen
# should use the coordination points specified in this file.
#
# available options:
# 0 - vxfen uses a sorted list of coordination points specified
# in this file, the order in which coordination points are specified
# does not matter.
# (default)
# 1 - vxfen uses the coordination points in the same order they are
# specified in this file
#
# Coordination Point Server(CPS) is specified as follows:
#
# cps<number>=[<vip/vhn>]:<port>
#
# If a CPS supports multiple virtual IPs or virtual hostnames
# over different subnets, all of the IPs/names can be specified
# in a comma separated list as follows:
#
# cps<number>=[<vip_1/vhn_1>]:<port_1>,[<vip_2/vhn_2>]:<port_2>,
# ...,[<vip_n/vhn_n>]:<port_n>
#
# Where,
# <number>
# is the serial number of the CPS as a coordination point; must
# start with 1.
# <vip>
# is the virtual IP address of the CPS, must be specified in
# square brackets ("[]").
# <vhn>
# is the virtual hostname of the CPS, must be specified in square
# brackets ("[]").
# <port>
# is the port number bound to a particular <vip/vhn> of the CPS.
# It is optional to specify a <port>. However, if specified, it
# must follow a colon (":") after <vip/vhn>. If not specified, the
# colon (":") must not exist after <vip/vhn>.
#
# For all the <vip/vhn>s which do not have a specified <port>,
# a default port can be specified as follows:
#
# port=<default_port>
#
# Where <default_port> is applicable to all the <vip/vhn>s for which a
# <port> is not specified. In other words, specifying <port> with a
# <vip/vhn> overrides the <default_port> for that <vip/vhn>.
# If the <default_port> is not specified, and there are <vip/vhn>s for
# which <port> is not specified, then port number 14250 will be used
# for such <vip/vhn>s.
#
# Example of specifying CP Servers to be used as coordination points:
# port=57777
# cps1=[192.168.0.23],[192.168.0.24]:58888,[cps1.company.com]
# cps2=[192.168.0.25]
# cps3=[cps2.company.com]:59999
#
# In the above example,
# - port 58888 will be used for vip [192.168.0.24]
# - port 59999 will be used for vhn [cps2.company.com], and
# - default port 57777 will be used for all remaining <vip/vhn>s:
# [192.168.0.23]
# [cps1.company.com]
# [192.168.0.25]
# - if default port 57777 were not specified, port 14250 would be
# used for all remaining <vip/vhn>s:
# [192.168.0.23]
# [cps1.company.com]
# [192.168.0.25]
#
# SCSI-3 compliant coordinator disks are specified as:
#
# vxfendg=<coordinator disk group name>
# Example:
# vxfendg=vxfencoorddg
#
# Examples of different configurations:
# 1. All CP server coordination points
# cps1=
# cps2=
# cps3=
#
# 2. A combination of CP server and a disk group having two SCSI-3
# coordinator disks
# cps1=
# vxfendg=
# Note: The disk group specified in this case should have two disks
#
# 3. All SCSI-3 coordinator disks
# vxfendg=
# Note: The disk group specified in this case should have three disks
# cps1=[cps1.company.com]
# cps2=[cps2.company.com]
# cps3=[cps3.company.com]
# port=443
■ Creating I/O fencing configuration files
■ Modifying VCS configuration to use I/O fencing
# cp /etc/vxfen.d/vxfenmode_majority /etc/vxfenmode
# cat /etc/vxfenmode
3 Ensure that you edit the following file on each node in the cluster to change
the values of the VXFEN_START and the VXFEN_STOP environment variables to
1.
/etc/default/vxfen
# hastop -all
# /etc/init.d/vxfen.rc stop
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.orig
6 On one node, use vi or another text editor to edit the main.cf file. To modify
the list of cluster attributes, add the UseFence attribute and assign its value
as SCSI3.
cluster clus1(
UserNames = { admin = "cDRpdxPmHpzS." }
Administrators = { admin }
HacliUserLevel = COMMANDROOT
CounterInterval = 5
UseFence = SCSI3
)
For fencing configuration in any mode except the disabled mode, the value of
the cluster-level attribute UseFence is set to SCSI3.
7 Save and close the file.
8 Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:
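For example, using the VCS configuration verifier:
# hacf -verify /etc/VRTSvcs/conf/config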
9 Using rcp or another utility, copy the VCS configuration file from a node (for
example, sys1) to the remaining cluster nodes.
For example, on each remaining node, enter:
# rcp sys1:/etc/VRTSvcs/conf/config/main.cf \
/etc/VRTSvcs/conf/config
10 Start the I/O fencing driver and VCS. Perform the following steps on each node:
■ Start the I/O fencing driver.
The vxfen startup script also invokes the vxfenconfig command, which
configures the vxfen driver.
# /etc/init.d/vxfen.rc start
# /opt/VRTS/bin/hastart
■ Start VCS on all other nodes once VCS on first node reaches RUNNING
state.
# /opt/VRTS/bin/hastart
# vxfenadm -d
* 0 (sys1)
1 (sys2)
# /opt/VRTS/install/installer -responsefile
/tmp/response_file
Note that some optional variables make it necessary to define other optional
variables. For example, all the variables that are related to the cluster service group
(csgnic, csgvip, and csgnetmask) must be defined if any are defined. The same is
true for the SMTP notification (smtpserver, smtprecp, and smtprsev), the SNMP
trap notification (snmpport, snmpcons, and snmpcsev), and the Global Cluster
Option (gconic, gcovip, and gconetmask).
Table 7-2 lists the response file variables that specify the required information to
configure a basic SFHA cluster.
Table 7-3 lists the response file variables that specify the required information to
configure LLT over Ethernet.
Table 7-3 Response file variables specific to configuring private LLT over
Ethernet
Table 7-4 lists the response file variables that specify the required information to
configure LLT over UDP.
Table 7-4 Response file variables specific to configuring LLT over UDP
Table 7-5 lists the response file variables that specify the required information to
configure virtual IP for SFHA cluster.
Table 7-5 Response file variables specific to configuring virtual IP for SFHA
cluster
Table 7-6 lists the response file variables that specify the required information to
configure the SFHA cluster in secure mode.
Table 7-7 lists the response file variables that specify the required information to
configure VCS users.
Table 7-8 lists the response file variables that specify the required information to
configure VCS notifications using SMTP.
Table 7-9 lists the response file variables that specify the required information to
configure VCS notifications using SNMP.
Table 7-10 lists the response file variables that specify the required information to
configure SFHA global clusters.
##############################################
#Auto generated sfha responsefile #
##############################################
our %CFG;
$CFG{accepteula}=1;
$CFG{opt}{rsh}=1;
$CFG{vcs_allowcomms}=1;
$CFG{opt}{gco}=1;
$CFG{opt}{vvr}=1;
$CFG{opt}{configure}=1;
$CFG{activecomponent}=[ qw(SFHA802) ];
$CFG{prod}="ENTERPRISE802";
$CFG{systems}=[ qw( sys1 sys2 ) ];
$CFG{vm_restore_cfg}{sys1}=0;
$CFG{vm_restore_cfg}{sys2}=0;
$CFG{vcs_clusterid}=127;
$CFG{vcs_clustername}="clus1";
$CFG{vcs_username}=[ qw(admin operator) ];
$CFG{vcs_userenpw}=[ qw(JlmElgLimHmmKumGlj bQOsOUnVQoOUnTQsOSnUQuOUnPQtOS) ];
$CFG{vcs_userpriv}=[ qw(Administrators Operators) ];
$CFG{vcs_lltlink1}{"sys1"}="en1";
$CFG{vcs_lltlink2}{"sys1"}="en2";
$CFG{vcs_lltlink1}{"sys2"}="en3";
$CFG{vcs_lltlink2}{"sys2"}="en4";
$CFG{opt}{logpath}="/opt/VRTS/install/logs/installer-xxxxxx/installer-xxxxxx.response";
1;
Chapter 8
Performing an automated
I/O fencing configuration
using response files
This chapter includes the following topics:
# /opt/VRTS/install/installer
-responsefile /tmp/response_file
Note: You must define the
fencing_dgname variable to use an
existing disk group. If you want to create
a new disk group, you must use both the
fencing_dgname variable and the
fencing_newdg_disks variable.
# Configuration Values:
#
our %CFG;
$CFG{fencing_config_cpagent}=1;
$CFG{fencing_auto_refresh_reg}=1;
$CFG{fencing_cpagent_monitor_freq}=5;
$CFG{fencing_cpagentgrp}="vxfen";
$CFG{fencing_dgname}="fencingdg1";
$CFG{fencing_newdg_disks}=[ qw(emc_clariion0_155
emc_clariion0_162 emc_clariion0_163) ];
$CFG{fencing_option}=2;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="ENTERPRISE802";
$CFG{activecomponent}="SFRAC802";
$CFG{systems}=[ qw(sys1 sys2) ];
$CFG{vcs_clusterid}=32283;
$CFG{vcs_clustername}="clus1";
1;
Table 8-2 Coordination point server (CP server) based fencing response
file definitions
$CFG{fencing_config_cpagent}=0;
$CFG{fencing_cps}=[ qw(10.200.117.145) ];
$CFG{fencing_cps_vips}{"10.200.117.145"}=[ qw(10.200.117.145) ];
$CFG{fencing_dgname}="vxfencoorddg";
$CFG{fencing_disks}=[ qw(emc_clariion0_37 emc_clariion0_12) ];
$CFG{fencing_ncp}=3;
$CFG{fencing_ndisks}=2;
$CFG{fencing_cps_ports}{"10.200.117.145"}=443;
$CFG{fencing_reusedg}=1;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="ENTERPRISE802";
$CFG{systems}=[ qw(sys1 sys2) ];
$CFG{vcs_clusterid}=1256;
$CFG{vcs_clustername}="clus1";
$CFG{fencing_option}=1;
CFG{fencing_config_cpagent}: Enter '1' or '0' depending upon whether you want
to configure the Coordination Point agent using the installer or not.
CFG{fencing_cpagentgrp}: Name of the service group which will have the
Coordination Point agent resource as part of it.
Note: This field is obsolete if the fencing_config_cpagent field is given a value
of '0'. This variable does not apply to majority-based fencing.
# Configuration Values:
#
our %CFG;
$CFG{fencing_config_cpagent}=0;
$CFG{fencing_cps}=[ qw(10.198.89.251 10.198.89.252 10.198.89.253) ];
$CFG{fencing_cps_vips}{"10.198.89.251"}=[ qw(10.198.89.251) ];
$CFG{fencing_cps_vips}{"10.198.89.252"}=[ qw(10.198.89.252) ];
$CFG{fencing_cps_vips}{"10.198.89.253"}=[ qw(10.198.89.253) ];
$CFG{fencing_ncp}=3;
$CFG{fencing_ndisks}=0;
$CFG{fencing_cps_ports}{"10.198.89.251"}=443;
$CFG{fencing_cps_ports}{"10.198.89.252"}=443;
$CFG{fencing_cps_ports}{"10.198.89.253"}=443;
$CFG{non_scsi3_fencing}=1;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="ENTERPRISE802";
$CFG{fencing_option}=7;
$CFG{config_majority_based_fencing}=1;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="ENTERPRISE802";
$CFG{systems}=[ qw(sys1 sys2) ];
$CFG{vcs_clusterid}=59082;
$CFG{vcs_clustername}="clus1";
Section 3
Upgrade of SFHA
During the upgrade, the installation program performs the following tasks:
1. Stops the product before starting the upgrade
2. Upgrades the installed packages and installs additional packages
Slf license key files are required while upgrading to version 7.4 and later. The
text-based license keys that are used in previous product versions are not
supported when upgrading to version 7.4 and later. If you plan to upgrade any
of the InfoScale products from a version earlier than 7.4, first contact Customer
Care for your region to procure an applicable slf license key file. Refer to the
following link for contact information of the Customer Care center for your
region: https://www.veritas.com/content/support/en_US/contact-us.html.
If your current installation uses a permanent license key, you will be prompted
to update the license to 8.0.2. Ensure that the license key file is downloaded
on the local host, where you want to upgrade the product. The license key file
must not be saved in the root directory (/) or the default license directory on
the local host (/etc/vx/licenses/lic). You can save the license key file
inside any other directory on the local host.
If you choose not to update your license, you will be registered with a keyless
license. Within 60 days of choosing this option, you must install a valid license
key file corresponding to the entitled license level.
3. You must configure the Veritas Telemetry Collector while upgrading, if you
do not already have it configured. For more information, refer to the About
telemetry data collection in InfoScale section in the Veritas Installation Guide.
4. Restores the existing configuration.
For example, if your setup contains an SFHA installation, the installer upgrades
and restores the configuration to SFHA. If your setup included multiple
components, the installer upgrades and restores the configuration of the
components.
5. Starts the configured components.
From product version   From OS version                                     To OS version              To product version
7.3.1                  AIX 7.1 TL4, TL5; AIX 7.2 TL0, TL1, TL2, TL3, TL4   AIX 7.2 TL5; AIX 7.3 TL0   Veritas InfoScale SFHA Enterprise 8.0.2
7.4                    AIX 7.1 TL4, TL5; AIX 7.2 TL0, TL1, TL2             AIX 7.2 TL5; AIX 7.3 TL0   Veritas InfoScale SFHA Enterprise 8.0.2
7.4.1                  AIX 7.1 TL4, TL5; AIX 7.2 TL0, TL1, TL2, TL3, TL4   AIX 7.2 TL5; AIX 7.3 TL0   Veritas InfoScale SFHA Enterprise 8.0.2
7.4.2                  AIX 7.1 TL4, TL5; AIX 7.2 TL3, TL4, TL5             AIX 7.2 TL5; AIX 7.3 TL0   Veritas InfoScale SFHA Enterprise 8.0.2
8.0                    AIX 7.1 TL5; AIX 7.2 TL4, TL5                       AIX 7.2 TL5; AIX 7.3 TL0   Veritas InfoScale SFHA Enterprise 8.0.2
■ You can configure the Veritas Telemetry Collector while upgrading, if you do
not already have it configured. For more information, refer to the About
telemetry data collection in InfoScale section in the Veritas Installation Guide.
■ Make sure that the administrator who performs the upgrade has root access
and a good knowledge of the operating system's administration.
■ Make sure that all users are logged off and that all major user applications are
properly shut down.
■ Make sure that you have created a valid backup.
See “Creating backups” on page 168.
■ Ensure that you have enough file system space to upgrade. Identify where you
want to copy the filesets, for example /packages/Veritas when the root file
system has enough space or /var/tmp/packages if the /var file system has
enough space.
Do not put the files on a file system that is inaccessible before running the
upgrade script.
You can use a Veritas-supplied disc for the upgrade as long as modifications
to the upgrade script are not required.
If /usr/local was originally created as a slice, modifications are required.
■ For any startup scripts in /etc/init.d/, comment out any application commands
or processes that are known to hang if their file systems are not present.
■ Make sure that the current operating system supports version 8.0.2 of the
product. If the operating system does not support it, plan for a staged upgrade.
■ Schedule sufficient outage time and downtime for the upgrade and any
applications that use the Veritas InfoScale products. Depending on the
configuration, the outage can take several hours.
■ Make sure that the file systems are clean before upgrading.
See “Verifying that the file systems are clean” on page 175.
■ Upgrade arrays (if required).
See “Upgrading the array support” on page 176.
■ To reliably save information on a mirrored disk, shut down the system and
physically remove the mirrored disk. Removing the disk in this manner offers a
failback point.
■ Make sure that DMP support for native stack is disabled
(dmp_native_support=off). If DMP support for native stack is enabled
(dmp_native_support=on), the installer may detect it and ask you to restart the
system.
■ If you want to upgrade the application clusters that use CP server-based fencing
to version 7.3.1 or later, make sure that you first upgrade VCS or SFHA on
the CP server systems to version 7.3.1 or later. From 7.3.1 onwards, the
CP server supports only HTTPS-based communication with its clients;
IPM-based communication is no longer supported. You must reconfigure the
CP server if you upgrade a CP server that uses an IPM-based configuration.
For instructions to upgrade VCS or SFHA on the CP server systems, refer to
the relevant Configuration and Upgrade Guides.
# umount mnt_point
3 Stop all the volumes by entering the following command for each disk group:
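For example, the usual VxVM command for this step (diskgroup is a placeholder
for your disk group name):
# vxvol -g diskgroup stopall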
4 Before the upgrade of a high availability (HA) product, take all service groups
offline.
List all service groups:
# /opt/VRTSvcs/bin/hagrp -list
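The offline command itself is not shown above; a typical invocation, with
groupname as a placeholder, is:
# /opt/VRTSvcs/bin/hagrp -offline groupname -any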
Creating backups
Save relevant system information before the upgrade.
To create backups
1 Log in as superuser.
2 Make a record of the mount points for VxFS file systems and the VxVM volumes
that are defined in the /etc/filesystems file. You need to recreate these
entries in the /etc/filesystems file on the freshly upgraded system.
3 Before the upgrade, ensure that you have made backups of all data that you
want to preserve.
4 The installer verifies that recent backups of the configuration files in the VxVM
private region have been saved in /etc/vx/cbr/bk.
If a backup is not found, a warning message is displayed.
# cp /etc/filesystems /etc/filesystems.orig
6 Run the vxlicrep, vxdisk list, and vxprint -ht commands and record
the output. Use this information to reconfigure your system after the upgrade.
7 If you install Veritas InfoScale Enterprise 8.0.2 software, follow the guidelines
that are given in the Cluster Server Configuration and Upgrade Guide for
information on preserving your VCS configuration across the installation
procedure.
8 Back up the external quotas and quotas.grp files.
9 Verify that quotas are turned off on all the mounted file systems.
■ If replication using VVR is configured, make sure the size of the SRL volume is
greater than 110 MB.
Refer to the Veritas InfoScale™ Replication Administrator’s Guide.
■ If replication using VVR is configured, verify that all the Primary RLINKs are
up-to-date on all the hosts.
For information regarding VVR support for replicating across Storage Foundation
versions, refer to the Veritas InfoScale Release Notes.
Replicating between versions is intended to remove the restriction of upgrading the
Primary and Secondary at the same time. VVR can continue to replicate an existing
RDS with Replicated Volume Groups (RVGs) on the systems that you want to
upgrade. When the Primary and Secondary are at different versions, VVR does not
support changing the configuration with the vradmin command or creating a new
RDS.
Also, if you specify TCP as the network protocol, the VVR versions on the Primary
and the Secondary determine whether the checksum is calculated. As shown in
Table 9-2, if either the Primary or the Secondary is running a version of VVR prior
to 8.0.2 and you use the TCP protocol, VVR calculates the checksum for every data
packet it replicates. If both the Primary and the Secondary are at VVR 8.0.2, VVR
does not calculate the checksum. Instead, it relies on the TCP checksum mechanism.
Note: When replicating between versions of VVR, avoid using commands associated
with new features. The earlier version may not support new features and problems
could occur.
If you do not need to upgrade all the hosts in the RDS simultaneously, you can use
replication between versions after you upgrade one host. You can then upgrade
the other hosts in the RDS later at your convenience.
Note: If you have a cluster setup, you must upgrade all the nodes in the cluster at
the same time.
Note: You must also stop any remaining applications not managed by VCS.
# haconf -makerw
# hagrp -list
6 On any node in the cluster, freeze all service groups except the ClusterService
group by typing the following command for each group name displayed in the
output from step 5.
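A sketch of the freeze command, with group_name as a placeholder:
# hagrp -freeze group_name -persistent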
Note: Make a note of the list of frozen service groups for future use.
7 On any node in the cluster, save the configuration file (main.cf) with the groups
frozen:
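For example, the standard command to save the configuration:
# haconf -dump -makero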
Note: Continue only after you have performed steps 3 to step 7 for each node
of the cluster.
8 Display the list of service groups that have RVG resources and the nodes on
which each service group is online by typing the following command on any
node in the cluster:
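One way to list the RVG resources and their state per system, shown as an
illustration (the exact command in your environment may differ):
# hares -display -type RVG -attribute State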
Note: For the resources that are ONLINE, write down the nodes displayed in
the System column of the output.
Note: Write down the list of the disk groups that are under VCS control.
2 For each disk group listed in the output in step 1, list its corresponding disk
group resource name:
3 For each disk group resource name listed in the output in step 2, get and note
down the node on which the disk group is imported by typing the following
command:
The output displays the disk groups that are under VCS control and nodes on
which the disk groups are imported.
The output displays a list of the disk groups that are under VCS control and
the disk groups that are not under VCS control.
Note: The disk groups that are not locally imported are displayed in
parentheses.
2 If any of the disk groups have not been imported on any node, import them.
For disk groups in your VCS configuration, you can import them on any node.
For disk groups that are not under VCS control, choose an appropriate node
on which to import the disk group. Enter the following command on the
appropriate node:
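For example, to import a disk group temporarily (diskgroup is a placeholder):
# vxdg -t import diskgroup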
3 If a disk group is already imported, then recover the disk group by typing the
following command on the node on which it is imported:
# vxrecover -bs
A clean_value of 0x5a indicates that the file system is clean. A value of 0x3c
indicates that the file system is dirty. A value of 0x69 indicates that the file system
is dusty. A dusty file system has pending extended operations.
2 If a file system is not clean, enter the following commands for that file system:
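A sketch of the typical sequence on AIX; the device and mount point names are
placeholders:
# fsck -V vxfs /dev/vx/rdsk/diskgroup/volume
# mount -V vxfs /dev/vx/dsk/diskgroup/volume /mountpoint
# umount /mountpoint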
These commands should complete any extended operations on the file system
and unmount the file system cleanly.
A pending large fileset clone removal extended operation might be in progress
if the umount command fails with the following error:
3 If an extended operation is in progress, you must leave the file system mounted
for a longer time to allow the operation to complete. Removing a very large
fileset clone can take several hours.
4 Repeat step 1 to verify that the unclean file system is now clean.
When you upgrade Storage Foundation products with the product installer, the
installer automatically upgrades the array support. If you upgrade Storage
Foundation products with manual steps, you should remove any external ASLs or
APMs that were installed previously on your system. Installing the VRTSvxvm fileset
exits with an error if external ASLs or APMs are detected.
After you have installed Veritas InfoScale 8.0.2, Veritas provides support for new
disk arrays through updates to the VRTSaslapm fileset.
For more information about array support, see the Storage Foundation
Administrator's Guide.
2. Base + patch:
This integration method can be used when you install or upgrade from a lower
version to 8.0.2.0.100.
Enter the following command:
3. Maintenance + patch:
This integration method can be used when you upgrade from version 8.0.2 to
8.0.2.1.100.
Enter the following command:
■ Upgrading Storage Foundation and High Availability with the product installer
■ Upgrading SFDB
3 If you want to upgrade Storage Foundation and High Availability, take all service
groups offline.
List all service groups:
# /opt/VRTSvcs/bin/hagrp -list
# haconf -makerw
# hasys -freeze -persistent nodename
# haconf -dump -makero
5 If replication using VVR is configured, verify that all the Primary RLINKs are
up-to-date:
6 Load and mount the disc. If you downloaded the software, navigate to the top
level of the download directory.
7 From the disc (or if you downloaded the software) , run the installer
command.
# ./installer
10 The installer asks if you agree with the terms of the End User License
Agreement. Press y to agree and continue.
11 Stop the product's processes.
Do you want to stop SFHA processes now? [y,n,q] (y) y
If you select y, the installer stops the product processes and makes some
configuration updates before it upgrades.
12 The installer stops, uninstalls, reinstalls, and starts specified filesets.
13 The Storage Foundation and High Availability software is verified and
configured.
14 The installer prompts you to provide feedback, and provides the log location
for the upgrade.
15 Restart the nodes when the installer prompts restart. Then, unfreeze the nodes
and start the cluster by entering the following:
# haconf -makerw
# hasys -unfreeze -persistent nodename
# haconf -dump -makero
# hastart
Previous version of SFHA on AIX 7.1: See “Upgrading from prior version of SFHA on AIX 7.3 to SFHA 8.0.2 on a DMP-enabled rootvg” on page 183.
Upgrade from AIX 7.2 to AIX 7.3 in Veritas InfoScale 8.0.2: See “Upgrading the operating system from AIX 7.2 to AIX 7.3 in Veritas InfoScale 8.0.2” on page 183.
6 Restart the system. After the restart, the system has DMP root support enabled.
Upgrading the operating system from AIX 7.2 to AIX 7.3 in Veritas
InfoScale 8.0.2
In Veritas InfoScale 8.0.2, when you upgrade the operating system from AIX 7.2
to AIX 7.3, DMP root support is not automatically enabled.
To upgrade AIX and enable DMP support for rootvg
1 Disable DMP support for rootvg.
2 Restart the system.
# touch /etc/vx/reconfig.d/state.d/install-db
LLT_START=0
3 Stop activity to all file systems and raw volumes, for example by unmounting
any file systems that have been created on volumes.
# umount mnt_point
4 Stop all the volumes by entering the following command for each disk group:
5 If you want to upgrade a high availability (HA) product, take all service groups
offline.
List all service groups:
# /opt/VRTSvcs/bin/hagrp -list
6 Upgrade the AIX operating system. See the operating system documentation
for more information.
7 Apply the necessary APARs.
For information about APARs required for Veritas InfoScale Storage 8.0.2,
refer to the Veritas InfoScale 8.0.2 Release Notes.
8 Restart the system.
# shutdown -Fr
# rm /etc/vx/reconfig.d/state.d/install-db
LLT_START=1
Note: If you have a cluster setup, you must upgrade all the nodes in the cluster at
the same time.
3 Upgrade VVR on the Secondary from any version 7.3.1 or later to the latest version.
4 Start the replication to the Secondary host by initiating startrep on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> startrep <RVG_name>
<secondary_hostname>
5 Upgrade VVR on the Primary from any version 7.3.1 or later to the latest version.
6 Start the replication to the Secondary host by initiating startrep on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> startrep <RVG_name>
<secondary_hostname>
8 Mount all the file systems and start all the applications on the Primary.
4 Upgrade VVR from any supported older version to the latest VVR version on
the Secondary.
VCS starts automatically after the upgrade.
5 Start the replication to the Secondary host by initiating startrep on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> startrep <RVG_name>
<secondary_hostname>
2 Take the applications and the mount points down by using the VCS Application
or the VCS Mount service groups.
3 Stop the replication to a Secondary by initiating stoprep on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> stoprep <RVG_name>
<secondary_hostname>
5 Upgrade VVR from any supported older version to the latest VVR version on
the Secondary.
VCS starts automatically after the upgrade.
6 Start the replication to the Secondary host by initiating startrep on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> startrep <RVG_name>
<secondary_hostname>
8 Mount all the file systems and start all the applications on the Primary.
2 Take the applications and the mount points down by using the VCS Application
or the VCS Mount service groups.
3 Pause the replication to the Secondary by initiating pauserep on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> pauserep <RVG_name>
<secondary_hostname>
9 Mount all the file systems and start all the applications on the Primary.
2 Upgrade the disk group version on the Primary for all the disk groups.
# /usr/sbin/vxdg upgrade <disk_group_name>
3 Upgrade the disk layout version (DLV) on the Primary for all the VxFS file
systems.
# /opt/VRTS/bin/vxupgrade -n 17 <vxfs_mount_point_name>
# /opt/VRTS/bin/fstyp -v <disk_path_for_mount_point_volume>
Upgrading SFDB
While upgrading to 8.0.2, the SFDB tools are enabled by default, which implies that
the vxdbd daemon is configured. You can enable the SFDB tools, if they are
disabled.
To enable SFDB tools
1 Log in as root.
2 Run the following command to configure and start the vxdbd daemon.
# /opt/VRTS/bin/sfae_config enable
Note: If any SFDB installation with authentication setup is upgraded to 8.0.2, the
commands fail with an error. To resolve the issue, setup the SFDB authentication
again. For more information, see the Veritas InfoScale™ Storage and Availability
Management for Oracle Databases or Veritas InfoScale™ Storage and Availability
Management for DB2 Databases.
Chapter 11
Performing a rolling
upgrade of SFHA
This chapter includes the following topics:
2. Application downtime occurs during the first phase as the installer moves service
groups to free nodes for the upgrade. The only downtime that is incurred is the
normal time required for the service group to fail over. The downtime is limited to
the applications that are failed over and not the entire cluster.
3. The installer performs the second phase of the upgrade on all of the nodes in the
cluster. The second phase of the upgrade includes downtime of the Cluster Server
(VCS) engine HAD, but does not include application downtime.
Figure: Example of a rolling upgrade on a two-node cluster (Node A and Node B)
running the failover service groups SG1 and SG2. In Phase 1, each node is
upgraded in turn while its service groups fail over to the other node; Phase 1
upgrades the kernel packages. In Phase 2, the remaining VCS and VCS agent
packages are upgraded on all nodes simultaneously, and HAD stops and starts.
■ Rolling upgrades are not compatible with phased upgrades. Do not mix rolling
upgrades and phased upgrades.
■ You can perform a rolling upgrade from 7.3.1 or later versions.
■ The rolling upgrade procedures support only minor operating system upgrades.
■ The rolling upgrade procedure requires the product to be started before and
after upgrade. If the current release does not support your current operating
system version and the installed old release version does not support the
operating system version that the current release supports, then rolling upgrade
is not supported.
# umount mount_point
# ./installer
5 From the menu, select Upgrade a Product and from the sub menu, select
Rolling Upgrade.
6 The installer suggests system names for the upgrade. Press Enter to upgrade
the suggested systems, or enter the name of any one system in the cluster on
which you want to perform a rolling upgrade and then press Enter.
7 The installer checks system communications, release compatibility, version
information, and lists the cluster name, ID, and cluster nodes. Type y to
continue.
8 The installer inventories the running service groups and determines the node
or nodes to upgrade in phase 1 of the rolling upgrade. Type y to continue. If
you choose to specify the nodes, type n and enter the names of the nodes.
9 The installer performs further prechecks on the nodes in the cluster and may
present warnings. You can type y to continue or quit the installer and address
the precheck's warnings.
10 Review the end-user license agreement, and type y if you agree to its terms.
11 After the installer detects the online service groups, the installer prompts the
user to do one of the following:
■ Manually switch service groups
■ Use the CPI to automatically switch service groups
The downtime is the time that it normally takes for the service group's failover.
12 The installer prompts you to stop the applicable processes. Type y to continue.
The installer evacuates all service groups to the node or nodes that are not
upgraded at this time. The installer stops parallel service groups on the nodes
that are to be upgraded.
13 The installer stops relevant processes, uninstalls old kernel filesets, and installs
the new filesets. The installer asks if you want to update your licenses to the
current version. Select Yes or No. Veritas recommends that you update your
licenses to fully use the new features in the current release.
14 If the cluster has configured Coordination Point Server based fencing, then
during upgrade, installer may ask the user to provide the new HTTPS
Coordination Point Server.
The installer performs the upgrade configuration and starts the processes. If
the boot disk is encapsulated before the upgrade, installer prompts the user
to reboot the node after performing the upgrade configuration.
15 Complete the preparatory steps on the nodes that you have not yet upgraded.
Unmount all VxFS file systems not under VCS control on all the nodes.
# umount mount_point
If the installer prompts to restart nodes, restart the nodes. Restart the installer.
The installer repeats step 8 through step 14.
For clusters with larger number of nodes, this process may repeat several
times. Service groups come down and are brought up to accommodate the
upgrade.
18 When Phase 1 of the rolling upgrade completes, manually mount all the VxFS
file systems that are not under VCS control. Begin Phase 2 of the upgrade.
Phase 2 of the upgrade includes downtime for the VCS engine (HAD), but it
does not include application downtime. Type y to continue. Phase 2 of the
rolling upgrade begins here.
19 The installer determines the remaining filesets to upgrade. Press Enter to
continue.
20 Before the installer stops the product processes, it displays the following
questions. These questions are displayed only if the cluster was configured
in secure mode with a version earlier than 6.2 before the upgrade.
■ Do you want to grant read access to everyone? [y,n,q,?]
■ To grant read access to all authenticated users, type y.
■ To grant usergroup specific permissions, type n.
■ Do you want to provide any usergroups that you would like to grant read
access?[y,n,q,?]
■ To specify usergroups and grant them read access, type y
■ To grant read access only to root users, type n. The installer grants read
access to the root users.
■ Enter the usergroup names separated by spaces that you would like to
grant read access. If you would like to grant read access to a usergroup on
a specific node, enter like 'usrgrp1@node1', and if you would like to grant
read access to usergroup on any cluster node, enter like 'usrgrp1'. If some
usergroups are not created yet, create the usergroups after configuration
if needed. [b]
21 The installer stops Cluster Server (VCS) processes but the applications continue
to run. Type y to continue.
The installer performs prestop, uninstalls old filesets, and installs the new
filesets. It performs post-installation tasks, and the configuration for the upgrade.
22 If you have network connection to the Internet, the installer checks for updates.
If updates are discovered, you can apply them now.
23 A prompt message appears to ask if the user wants to read the summary file.
You can choose y if you want to read the install summary file.
Chapter 12
Performing a phased upgrade of SFHA
This chapter includes the following topics:
Table 12-1
Fail over condition: You can fail over all your service groups to the nodes that
are up.
Downtime: Downtime equals the time that is taken to offline and online the service
groups.

Fail over condition: You have a service group that you cannot fail over to a node
that runs during upgrade.
Downtime: Downtime for that service group equals the time that is taken to perform
an upgrade and restart the node.
■ Before you start the upgrade, make sure that all the disk groups have the latest
backup of configuration files in the /etc/vx/cbr/bk directory. If not, then run
the following command to take the latest backup.
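A sketch of the backup command, assuming the vxconfigbackup utility in
/etc/vx/bin; diskgroup is a placeholder for each disk group name:
# /etc/vx/bin/vxconfigbackup diskgroup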
sg1, sg2, sg3, and sg4. For the purposes of this example, the cluster is split into
two subclusters. The nodes node01 and node02 are in the first subcluster, which
you first upgrade. The nodes node03 and node04 are in the second subcluster,
which you upgrade last.
# hagrp -state
2 Offline the parallel service groups (sg1 and sg2) from the first subcluster. Switch
the failover service groups (sg3 and sg4) from the first subcluster (node01 and
node02) to the nodes on the second subcluster (node03 and node04). For
SFHA, the vxfen service group is the parallel service group.
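For example, the following commands are a sketch using the service group and
node names from this example; adjust them to your configuration:
# hagrp -offline sg1 -sys node01
# hagrp -offline sg1 -sys node02
# hagrp -offline sg2 -sys node01
# hagrp -offline sg2 -sys node02
# hagrp -switch sg3 -to node03
# hagrp -switch sg4 -to node04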
3 On the nodes in the first subcluster, unmount all the VxFS file systems that
VCS does not manage, for example:
# df -k
# umount /mnt/dg2/dg2vol1
# umount /mnt/dg2/dg2vol2
# umount /mnt/dg2/dg2vol3
4 On the nodes in the first subcluster, stop all VxVM volumes (for each disk
group) that VCS does not manage.
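For example, for each such disk group (diskgroup is a placeholder):
# vxvol -g diskgroup stopall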
5 Make the configuration writable on the first subcluster.
# haconf -makerw
8 Verify that the service groups are offline on the first subcluster that you want
to upgrade.
# hagrp -state
Output resembles:
# mv /etc/llttab /etc/llttab.save
The program starts with a copyright message and specifies the directory where
it creates the logs. It performs a system verification and outputs upgrade
information.
5 Enter y to agree to the End User License Agreement (EULA).
Do you agree with the terms of the End User License Agreement
as specified in the EULA/en/EULA_InfoScale_Ux_8.0.2.pdf file present
on media? [y,n,q,?] y
# hastatus -summ
-- SYSTEM STATE
-- System State Frozen
A node01 EXITED 1
A node02 EXITED 1
A node03 RUNNING 0
A node04 RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
2 Unmount all the VxFS file systems that VCS does not manage, for example:
# df -k
# umount /mnt/dg2/dg2vol1
# umount /mnt/dg2/dg2vol2
# umount /mnt/dg2/dg2vol3
# haconf -makerw
# hagrp -state
#Group Attribute System Value
SG1 State node01 |OFFLINE|
SG1 State node02 |OFFLINE|
SG1 State node03 |OFFLINE|
SG1 State node04 |OFFLINE|
SG2 State node01 |OFFLINE|
SG2 State node02 |OFFLINE|
SG2 State node03 |OFFLINE|
SG2 State node04 |OFFLINE|
SG3 State node01 |OFFLINE|
SG3 State node02 |OFFLINE|
SG3 State node03 |OFFLINE|
SG3 State node04 |OFFLINE|
8 Stop all VxVM volumes (for each disk group) that VCS does not manage.
9 Stop VCS, I/O Fencing, GAB, and LLT on node03 and node04.
# /opt/VRTSvcs/bin/hastop -local
# /etc/init.d/vxfen.rc stop
# /etc/init.d/gab.rc stop
# /etc/init.d/llt.rc stop
10 Make sure that the VXFEN, GAB, and LLT modules on node03 and node04
are not loaded.
# /sbin/vxfenconfig -l
VXFEN vxfenconfig ERROR V-11-2-1087 There are 0 active
coordination points for this node
# /sbin/gabconfig -l
GAB Driver Configuration
Driver state : Unconfigured
Partition arbitration: Disabled
Control port seed : Disabled
Halt on process death: Disabled
Missed heartbeat halt: Disabled
Halt on rejoin : Disabled
Keep on killing : Disabled
Quorum flag : Disabled
Restart : Disabled
Node count : 0
Disk HB interval (ms): 1000
Disk HB miss count : 4
IOFENCE timeout (ms) : 15000
Stable timeout (ms) : 5000
# /usr/sbin/strload -q -d /usr/lib/drivers/pse/llt
/usr/lib/drivers/pse/llt: no
# gabconfig -x
# cd /opt/VRTS/install
# haconf -makerw
# mv /etc/llttab /etc/llttab.save
The program starts with a copyright message and specifies the directory where
it creates the logs.
4 Enter y to agree to the End User License Agreement (EULA).
Do you agree with the terms of the End User License Agreement
as specified in the EULA/en/EULA_InfoScale_Ux_8.0.2.pdf file present
on media? [y,n,q,?] y
■ Check whether the current version is compatible with the newer cluster
version and whether it can be upgraded successfully.
haclus -version -verify <newer-cluster-version>
For example:
# /opt/VRTSvcs/bin/haclus -version -verify 8.0.0.0000
2 Verify that the cluster UUID is the same on the nodes in the second subcluster
and the first subcluster. Run the following command to display the cluster UUID:
# /opt/VRTSvcs/bin/uuidconfig.pl
-clus -display node1 [node2 ...]
If the cluster UUID differs, manually copy the cluster UUID from a node in the
first subcluster to the nodes in the second subcluster. For example:
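A sketch of the copy command, assuming the -clus -copy form of the
uuidconfig.pl utility; verify the options for your release:
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -copy \
-from_sys node01 -to_sys node03 node04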
# /usr/sbin/shutdown -r
The nodes in the second subcluster join the nodes in the first subcluster.
4 In the /etc/default/llt file, change the value of the LLT_START attribute.
In the /etc/default/gab file, change the value of the GAB_START attribute.
In the /etc/default/vxfen file, change the value of the VXFEN_START
attribute.
In the /etc/default/vcs file, change the value of the VCS_START attribute.
LLT_START = 1
GAB_START = 1
VXFEN_START = 1
VCS_START = 1
# /etc/init.d/llt.rc start
# /etc/init.d/gab.rc start
# gabconfig -x
# cd /opt/VRTS/install
8 Check to see if SFHA and its components are up.
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen nxxxnn membership 0123
Port b gen nxxxnn membership 0123
Port h gen nxxxnn membership 0123
9 Run the hastatus -sum command to determine the status of the nodes, service
groups, and cluster.
# hastatus -sum
-- SYSTEM STATE
-- System State Frozen
A node01 RUNNING 0
A node02 RUNNING 0
A node03 RUNNING 0
A node04 RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B sg1 node01 Y N ONLINE
B sg1 node02 Y N ONLINE
B sg1 node03 Y N ONLINE
B sg1 node04 Y N ONLINE
B sg2 node01 Y N ONLINE
B sg2 node02 Y N ONLINE
B sg2 node03 Y N ONLINE
B sg2 node04 Y N ONLINE
B sg3 node01 Y N ONLINE
B sg3 node02 Y N OFFLINE
B sg3 node03 Y N OFFLINE
B sg3 node04 Y N OFFLINE
B sg4 node01 Y N OFFLINE
B sg4 node02 Y N ONLINE
B sg4 node03 Y N OFFLINE
B sg4 node04 Y N OFFLINE
10 After the upgrade is complete, start the VxVM volumes (for each disk group)
and mount the VxFS file systems.
In this example, you have performed a phased upgrade of SFHA. The service
groups were down from the time you took them offline on node03 and node04
until SFHA brought them online on node01 or node02.
Chapter 13
Performing an automated SFHA upgrade using response files
This chapter includes the following topics:
5 Mount the product disc and navigate to the folder that contains the installation
program.
6 Start the upgrade from the system to which you copied the response file. For
example:
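Assuming the response file was copied to /tmp/response_file (a placeholder
path), the command resembles:
# ./installer -responsefile /tmp/response_file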
our %CFG;
$CFG{accepteula}=1;
$CFG{opt}{gco}=1;
$CFG{opt}{redirect}=1;
$CFG{opt}{upgrade}=1;
$CFG{opt}{vr}=1;
$CFG{prod}="ENTERPRISE802";
$CFG{systems}=[ "sys01","sys02" ];
$CFG{vcs_allowcomms}=1;
1;
The vcs_allowcomms variable is set to 0 if it is a single-node cluster, and the llt and
gab processes are not started before upgrade.
■ Updating variables
■ About enabling LDAP authentication for clusters that run in secure mode
■ To upgrade VxFS Disk Layout versions and VxVM Disk Group versions, follow
the upgrade instructions.
See “Upgrading VxVM disk group versions” on page 226.
# restoresrl
# adddcm
# srlprot
# attrlink
# start.rvg
# haconf -makerw
# hacf -verify
4 Unfreeze all service groups that you froze previously. Enter the following
command on any node in the cluster:
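A sketch, where service_group is a placeholder and the -persistent option
matches a persistent freeze:
# hagrp -unfreeze service_group -persistent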
6 If you are upgrading in a shared disk group environment, bring online the
RVGShared groups with the following commands:
This IP is the virtual IP that is used for replication within the cluster.
8 In a shared disk group environment, online the virtual IP resource on the master
node.
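A sketch of the command, where ip_resource and master_node are placeholders
for the virtual IP resource name and the CVM master node:
# hares -online ip_resource -sys master_node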
Note: Restore the original configuration only after you have upgraded VVR on all
nodes for the Primary and Secondary cluster.
Each disk group should be imported onto the same node on which it was online
when the upgrade was performed. The reboot after the upgrade could result
in another node being online; for example, because of the order of the nodes
in the AutoStartList. In this case, switch the VCS group containing the disk
groups to the node on which the disk group was online while preparing for the
upgrade.
2 Recover all the disk groups by typing the following command on the node on
which the disk group was imported in step 1.
# vxrecover -bs
3 Upgrade all the disk groups on all the nodes on which VVR has been upgraded:
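A sketch, where diskgroup is a placeholder for each disk group name:
# vxdg upgrade diskgroup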
4 On all nodes that are Secondary hosts of VVR, make sure the data volumes
on the Secondary are the same length as the corresponding ones on the
Primary. To shrink volumes that are longer on the Secondary than the Primary,
use the following command on each volume on the Secondary:
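A sketch, assuming the vxassist shrinkto operation; diskgroup, volume_name,
and primary_length are placeholders for the disk group, the volume, and the
length of the corresponding Primary volume:
# vxassist -g diskgroup shrinkto volume_name primary_length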
Note: Do not continue until you complete this step on all the nodes in the
Primary and Secondary clusters on which VVR is upgraded.
5 Restore the configuration according to the method you used for upgrade:
If you upgraded with the VVR upgrade scripts
Complete the upgrade by running the vvr_upgrade_finish script on all the
nodes on which VVR was upgraded. We recommend that you first run the
vvr_upgrade_finish script on each node that is a Secondary host of VVR.
# /disc_path/scripts/vvr_upgrade_finish
where disc_path is the location where the Veritas software disc is mounted.
■ Attach the RLINKs on the nodes on which the messages were displayed:
7 If you plan on using IPv6, you must bring up IPv6 addresses for virtual
replication IP on primary/secondary nodes and switch from using IPv4 to IPv6
host names or addresses, enter:
CVM master node needs to assume the logowner role for VCS
managed VVR resources
If you use VCS to manage RVGLogowner resources in an SFCFSHA environment
or an SF Oracle RAC environment, Veritas recommends that you perform the
following procedures. These procedures ensure that the CVM master node always
assumes the logowner role. Not performing these procedures can result in
unexpected issues that are due to a CVM slave node that assumes the logowner
role.
For a service group that contains an RVGLogowner resource, change the value of
its TriggersEnabled attribute to PREONLINE to enable it.
To enable the TriggersEnabled attribute from the command line on a service
group that has an RVGLogowner resource
◆ On any node in the cluster, perform the following command:
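A sketch, where service_group and node_name are placeholders; verify the exact
syntax for your release:
# hagrp -modify service_group TriggersEnabled PREONLINE -sys node_name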
Note: If you plan to use 64-bit quotas, you must upgrade to the disk layout version
10 or later.
Disk layout version 7, 8, 9, 10, 11, and 12 are deprecated and you cannot cluster
mount an existing file system that has any of these versions. To upgrade a cluster
file system from any of these deprecated versions, you must local mount the file
system and then upgrade it using the vxupgrade utility or the vxfsconvert utility.
The vxupgrade utility enables you to upgrade the disk layout while the file system
is online. However, the vxfsconvert utility enables you to upgrade the disk layout
while the file system is offline.
If you use the vxupgrade utility, you must incrementally upgrade the disk layout
versions. However, you can directly upgrade to a desired version, using the
vxfsconvert utility.
For example, to upgrade from disk layout version 7 to a disk layout version 17,
using the vxupgrade utility:
# vxupgrade -n 8 /mnt
# vxupgrade -n 9 /mnt
# vxupgrade -n 10 /mnt
# vxupgrade -n 11 /mnt
# vxupgrade -n 12 /mnt
# vxupgrade -n 13 /mnt
# vxupgrade -n 14 /mnt
# vxupgrade -n 15 /mnt
# vxupgrade -n 16 /mnt
# vxupgrade -n 17 /mnt
Note: Veritas recommends that before you begin to upgrade the product version,
you must upgrade the existing file system to the highest supported disk layout
version. Once a disk layout version has been upgraded, it is not possible to
downgrade to the previous version.
For more information about disk layout versions, see the Storage Foundation
Administrator's Guide.
work only on disk groups with the current disk group version. Before you can perform
the tasks or use the features, upgrade the existing disk groups.
For 8.0.2, the Veritas Volume Manager disk group version is different than in
previous VxVM releases. Veritas recommends that you upgrade the disk group
version if you upgraded from a previous VxVM release.
After upgrading to SFHA 8.0.2, you must upgrade any existing disk groups that are
organized by ISP. Without the version upgrade, configuration query operations
continue to work fine. However, configuration change operations will not function
correctly.
For more information about ISP disk groups, refer to the Storage Foundation
Administrator's Guide.
Use the following command to find the version of a disk group:
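A sketch, where diskgroup is a placeholder; the disk group version appears in
the command output:
# vxdg list diskgroup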
To upgrade a disk group to the current disk group version, use the following
command:
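A sketch, using the same placeholder:
# vxdg upgrade diskgroup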
For more information about disk group versions, see the Storage Foundation
Administrator's Guide.
Updating variables
In /etc/profile, update the PATH and MANPATH variables as needed.
MANPATH can include /opt/VRTS/man and PATH can include /opt/VRTS/bin.
The accompanying figure depicts a VCS client that communicates with a VCS node,
which acts as the authentication broker.
The LDAP schema and syntax for LDAP commands (such as, ldapadd, ldapmodify,
and ldapsearch) vary based on your LDAP implementation.
Before adding the LDAP domain in Veritas Product Authentication Service, note
the following information about your LDAP environment:
■ The type of LDAP schema used (the default is RFC 2307)
■ UserObjectClass (the default is posixAccount)
■ UserObject Attribute (the default is uid)
■ User Group Attribute (the default is gidNumber)
■ Group Object Class (the default is posixGroup)
■ GroupObject Attribute (the default is cn)
■ Group GID Attribute (the default is gidNumber)
■ Group Membership Attribute (the default is memberUid)
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat showversion
vssat version: 6.1.14.26
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \
-d -s domain_controller_name_or_ipaddress -u domain_user
You can use the cat command to view the entries in the attributes file.
2 Run the LDAP configuration tool using the -c option. The -c option creates a
CLI file to add the LDAP domain.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \
-c -d LDAP_domain_name
3 Run the LDAP configuration tool atldapconf using the -x option. The -x option
reads the CLI file and executes the commands to add a domain to the AT.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf -x
4 Check the AT version and list the LDAP domains to verify that the Windows
Active Directory server integration is complete.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat showversion
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat listldapdomains
SSL Enabled : No
User Attribute : cn
Group Attribute : cn
Admin User :
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat showdomains -p vx
The command output lists the number of domains that are found, with the
domain names and domain types.
# unset EAT_LOG
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat authenticate \
-d ldap:LDAP_domain_name -p user_name -s user_password -b \
localhost:14149
# useradd user1
# passwd user1
# su user1
# bash
# id
uid=204(user1) gid=1(staff)
# pwd
# mkdir /home/user1
# haconf -makerw
# hauser -add user1
# haconf -dump -makero
# cd /home/user1
# ls
# cat .vcspwd
# unset VCS_DOMAINTYPE
# unset VCS_DOMAIN
# /opt/VRTSvcs/bin/hasys -state
■ Switching on Quotas
Switching on Quotas
This turns on the group and user quotas once all the nodes are upgraded to 8.0.2,
if they were turned off earlier.
To turn on the group and user quotas
◆ Switch on quotas:
# vxquotaon -av
Configure the vxdbd daemon to require authentication: See “Configuring vxdbd
for SFDB tools authentication” on page 236.
Add a node to a cluster that is using authentication for SFDB tools: See “Adding
nodes to a cluster that is using authentication for SFDB tools” on page 252.
# /opt/VRTS/bin/sfae_auth_op -o setup
Setting up AT
Starting SFAE AT broker
Creating SFAE private domain
Backing up AT configuration
Creating principal for vxdbd
# /opt/VRTS/bin/sfae_config disable
vxdbd has been disabled and the daemon has been stopped.
# /opt/VRTS/bin/sfae_config enable
vxdbd has been enabled and the daemon has been started.
It will start automatically on reboot.
■ Updating the Storage Foundation for Databases (SFDB) repository after adding
a node
The following table provides a summary of the tasks required to add a node to an
existing SFHA cluster.
Step: Complete the prerequisites and preparatory tasks before adding a node to
the cluster.
Description: See “Before adding a node to a cluster” on page 239.

Step: Add a new node to the cluster.
Description: See “Adding a node to a cluster using the Veritas InfoScale installer”
on page 241.

Step: If you are using the Storage Foundation for Databases (SFDB) tools, you
must update the repository database.
Description: See “Adding nodes to a cluster that is using authentication for SFDB
tools” on page 252.
See “Updating the Storage Foundation for Databases (SFDB) repository after adding
a node” on page 253.
The example procedures describe how to add a node to an existing cluster with
two nodes.
The accompanying figure depicts a new node being added to an existing two-node
cluster (existing node 1 and existing node 2) that shares storage. The nodes
connect to the public network and to the private network through a hub or switch.
For more information, see the Cluster Server Configuration and Upgrade
Guide.
■ The network interface names used for the private interconnects on the new
node must be the same as that of the existing nodes in the cluster.
Complete the following preparatory steps on the new node before you add it to an
existing SFHA cluster.
To prepare the new node
1 Navigate to the folder that contains the installer program. Verify that the new
node meets installation requirements.
# ./installer -precheck
# ./installer
■ Updates and copies the following files to the new node from the existing node:
/etc/llthosts
/etc/gabtab
/etc/VRTSvcs/conf/config/main.cf
■ Copies the following files from the existing cluster to the new node
/etc/vxfenmode
/etc/vxfendg
/etc/vx/.uuids/clusuuid
/etc/default/llt
/etc/default/gab
/etc/default/vxfen
■ Configures disk-based or server-based fencing depending on the fencing mode
in use on the existing cluster.
At the end of the process, the new node joins the SFHA cluster.
Note: If you have configured server-based fencing on the existing cluster, make
sure that the CP server does not contain entries for the new node. If the CP server
already contains entries for the new node, remove these entries before adding the
node to the cluster, otherwise the process may fail with an error.
See “Removing the node configuration from the CP server” on page 259.
# cd /opt/VRTS/install
# ./installer -addnode
The installer displays the copyright message and the location where it stores
the temporary installation logs.
3 Enter the name of a node in the existing SFHA cluster.
The installer uses the node information to identify the existing cluster.
Enter the name of any one node of the InfoScale ENTERPRISE cluster
where you would like to add one or more new nodes: sys1
5 Enter the names of the systems that you want to add as new nodes to the cluster.
Confirm when the installer prompts whether you want to add the node to the cluster.
The installer checks the installed products and filesets on the nodes and
discovers the network interfaces.
6 Enter the name of the network interface that you want to configure as the first
private heartbeat link.
Note: At least two private heartbeat links must be configured for high availability
of the cluster.
10 If the existing cluster uses server-based fencing, the installer will configure
server-based fencing on the new nodes.
The installer then starts all the required processes and joins the new node to
the cluster.
The installer indicates the location of the log file, summary file, and response
file with details of the actions performed.
If you have enabled security on the cluster, the installer displays the following
message:
# haconf -makerw
11 Confirm that the new node has joined the SFHA cluster using lltstat -n and
gabconfig -a commands.
Step: Start the Veritas Volume Manager (VxVM) on the new node.
Description: See “Starting Veritas Volume Manager (VxVM) on the new node” on
page 245.

Step: Configure the cluster processes on the new node.
Description: See “Configuring cluster processes on the new node” on page 246.

Step: Configure fencing for the new node to match the fencing configuration on
the existing cluster.
Description: See “Starting fencing on the new node” on page 248.

Step: Start VCS.
Description: See “To start VCS on the new node” on page 252.

Step: If the ClusterService group is configured on the existing cluster, add the
node to the group.
Description: See “Configuring the ClusterService group for the new node” on
page 248.
# vxinstall
2 Enter n when prompted to set up a system wide disk group for the system.
The installation completes.
3 Verify that the daemons are up and running. Enter the command:
# vxdisk list
Make sure the output displays the shared disks without errors.
0 sys1
1 sys2
2 sys5
2 Copy the /etc/llthosts file from one of the existing systems over to the new
system. The /etc/llthosts file must be identical on all nodes in the cluster.
3 Create an /etc/llttab file on the new system. For example:
set-node Sys5
set-cluster 101
Except for the first line that refers to the node, the file resembles the /etc/llttab
files on the existing nodes. The second line, the cluster ID, must be the same
as in the existing nodes.
4 Use vi or another text editor to create the file /etc/gabtab on the new node.
This file must contain a line that resembles the following example:
/sbin/gabconfig -c -nN
Where N represents the number of systems in the cluster including the new
node. For a three-system cluster, N would equal 3.
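For example, for a three-system cluster the line resembles:
/sbin/gabconfig -c -n3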
5 Edit the /etc/gabtab file on each of the existing systems, changing the content
to match the file on the new system.
6 Use vi or another text editor to create the file /etc/VRTSvcs/conf/sysname
on the new node. This file must contain the name of the new node added to
the cluster.
For example:
Sys5
8 Start the LLT, GAB, and ODM drivers on the new node:
# /etc/init.d/llt.rc start
# /etc/init.d/gab.rc start
# /etc/rc.d/rc2.d/S99odm start
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen df204 membership 012
/etc/default/vxfen
/etc/vxfendg
/etc/vxfenmode
# /etc/init.d/vxfen.rc start
# haconf -makerw
3 Modify the IP address and NIC resource in the existing group for the new node.
4 Save the configuration by running the following command from any node.
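For example:
# haconf -dump -makero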
Table 16-4 Response file variables for adding a node to an SFHA cluster
our %CFG;
$CFG{clustersystems}=[ qw(sys1) ];
$CFG{newnodes}=[ qw(sys5) ];
$CFG{opt}{addnode}=1;
$CFG{opt}{configure}=1;
$CFG{opt}{vr}=1;
$CFG{prod}="ENTERPRISE802";
1;
# cpsadm -s cps1.example.com \
-a add_node -c clus1 -h sys5 -n2
# haconf -makerw
3 Save the configuration by running the following command from any node in
the SFHA cluster:
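For example:
# haconf -dump -makero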
# hastart
# /opt/VRTS/bin/sfae_auth_op \
-o export_broker_config -f exported-data
2 Copy the exported file to the new node by using any available copy mechanism
such as scp or rcp.
3 Import the authentication data on the new node by using the -o
import_broker_config option of the sfae_auth_op command.
Use the -f option to provide the name of the file copied in Step 2.
# /opt/VRTS/bin/sfae_auth_op \
-o import_broker_config -f exported-data
Setting up AT
Importing broker configuration
Starting SFAE AT broker
# /opt/VRTS/bin/sfae_config disable
vxdbd has been disabled and the daemon has been stopped.
# /opt/VRTS/bin/sfae_config enable
vxdbd has been enabled and the daemon has been started.
It will start automatically on reboot.
The new node is now authenticated to interact with the cluster to run SFDB
commands.
Task:
■ Back up the configuration file.
■ Check the status of the nodes and the service groups.
Reference: See “Verifying the status of nodes and service groups” on page 255.

Task:
■ Switch or remove any SFHA service groups on the node departing the cluster.
■ Delete the node from SFHA configuration.
Reference: See “Deleting the departing node from SFHA configuration” on page 256.

Task: Modify the llthosts(4) and gabtab(4) files to reflect the change.
Reference: See “Modifying configuration files on each remaining node” on page 259.

Task: For a cluster that is running in a secure mode, remove the security
credentials from the leaving node.
Reference: See “Removing security credentials from the leaving node” on page 260.

Task: On the node departing the cluster:
■ Modify startup scripts for LLT, GAB, and SFHA to allow reboot of the node
without affecting the cluster.
■ Unconfigure and unload the LLT and GAB utilities.
■ Remove the Veritas InfoScale filesets.
Reference: See “Unloading LLT and GAB and removing Veritas InfoScale Availability
or Enterprise on the departing node” on page 260.
# cp -p /etc/VRTSvcs/conf/config/main.cf\
/etc/VRTSvcs/conf/config/main.cf.goodcopy
# hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A sys1 RUNNING 0
A sys2 RUNNING 0
A sys5 RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B grp1 sys1 Y N ONLINE
B grp1 sys2 Y N OFFLINE
B grp2 sys1 Y N ONLINE
B grp3 sys2 Y N OFFLINE
B grp3 sys5 Y N ONLINE
B grp4 sys5 Y N ONLINE
The example output from the hastatus command shows that nodes sys1,
sys2, and sys5 are the nodes in the cluster. Also, service group grp3 is
configured to run on node sys2 and node sys5, the departing node. Service
group grp4 runs only on node sys5. Service groups grp1 and grp2 do not run
on node sys5.
2 Check for any dependencies involving any service groups that run on the
departing node; for example, grp4 runs only on the departing node.
# hagrp -dep
3 If the service group on the departing node requires other service groups—if it
is a parent to service groups on other nodes—unlink the service groups.
# haconf -makerw
# hagrp -unlink grp4 grp1
These commands enable you to edit the configuration and to remove the
requirement grp4 has for grp1.
4 Stop SFHA on the departing node:
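For example:
# hastop -local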
5 Check the status again. The state of the departing node should be EXITED.
Make sure that any service group that you want to fail over is online on other
nodes.
# hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A sys1 RUNNING 0
A sys2 RUNNING 0
A sys5 EXITED 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B grp1 sys1 Y N ONLINE
B grp1 sys2 Y N OFFLINE
B grp2 sys1 Y N ONLINE
B grp3 sys2 Y N ONLINE
B grp3 sys5 Y Y OFFLINE
B grp4 sys5 Y N OFFLINE
6 Delete the departing node from the SystemList of service groups grp3 and
grp4.
# haconf -makerw
# hagrp -modify grp3 SystemList -delete sys5
# hagrp -modify grp4 SystemList -delete sys5
Note: If sys5 was in the autostart list, then you need to manually add another
system in the autostart list so that after reboot, the group comes online
automatically.
7 For the service groups that run only on the departing node, delete the resources
from the group before you delete the group.
8 Delete the service group that is configured to run on the departing node.
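For example, in this sample configuration grp4 runs only on the departing node:
# hagrp -delete grp4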
# hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A sys1 RUNNING 0
A sys2 RUNNING 0
A sys5 EXITED 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B grp1 sys1 Y N ONLINE
B grp1 sys2 Y N OFFLINE
B grp2 sys1 Y N ONLINE
B grp3 sys2 Y N ONLINE
0 sys1
1 sys2
2 sys5
To:
0 sys1
1 sys2
Note: The cpsadm command is used to perform the steps in this procedure. For
detailed information about the cpsadm command, see the Cluster Server
Administrator's Guide.
4 View the list of nodes on the CP server to ensure that the node entry was
removed:
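A sketch, assuming the cpsadm list_nodes action and the CP server name used
earlier in this example:
# cpsadm -s cps1.example.com -a list_nodes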
# /etc/init.d/vxfen.rc stop
# /etc/init.d/gab.rc stop
# /etc/init.d/llt.rc stop
# installp -u VRTSsfcpi
# installp -u VRTSvcswiz
# installp -u VRTSvbs
# installp -u VRTSsfmh
# installp -u VRTSvcsea
# installp -u VRTSvcsag
# installp -u VRTScps
# installp -u VRTSvcs
# installp -u VRTSamf
# installp -u VRTSvxfen
# installp -u VRTSgab
# installp -u VRTSllt
# installp -u VRTSspt
# installp -u VRTSvlic
# installp -u VRTSperl
# rm /etc/llttab
# rm /etc/gabtab
# rm /etc/llthosts
■ Appendix E. Configuring the secure shell or the remote shell for communications
Note: If the Live Update operation fails due to any AIX-specific error, Veritas
does not guarantee the sanity of the machine after the LKU operation is completed.
general:
kext_check = yes
aix_mpio = no
disks:
nhdisk = <hdisk1>
mhdisk = <hdisk2>
hmc:
lpar_id = <lparid>
management_console = <management console ip>
user = <user>
■ nhdisk: The names of disks to be used to make a copy of the original rootvg
which will be used to boot the Surrogate.
■ mhdisk: The names of disks to be used to temporarily mirror rootvg on the
Original LPAR.
■ The size of the specified disks must match the total size of the original rootvg.
■ These disks should be free. Application or Administrator should not use these
disks for any other operation during the Live update operation.
■ These disks should not be a part of any active or disabled Logical Volume
Manager (LVM) volume groups.
■ These disks should not be a part of any VxVM disk group and should not have
any VxVM tag.
■ Partition Directories
■ InfoScale product upgrades are not supported through the LKU operation
■ LKU operation is not supported in high availability configurations for InfoScale
■ LKU operation is not supported in presence of VxVM swap devices
■ LKU operation is not supported if any administrative tasks such as fsadm or
fsck are running
■ LKU operation fails if any changes like volume creation, deletion and so on are
made to the VxVM configuration within the LKU start and MCR phase
■ LKU operation is not supported in presence of vSCSI disk
■ The integration of InfoScale products and LKU framework is supported only for
the Local Mount filesystem
Known issues
LKU operation fails with the "kernel extensions are not known to be safe for
Live Update: vxglm.ext(vxglm.ext64)" error.
A Live Update operation fails if a loaded kernel extension is not marked as safe in
the safe list.
If the Group Lock Manager (GLM) is installed on a system, but the VRTSglm
package is not marked with the SYS_LUSAFE flag, the LKU operation fails with
the "kernel extensions are not known to be safe for Live Update:
vxglm.ext(vxglm.ext64)" error.
Workaround:
Mark the VRTSglm package SYS_LUSAFE before initiating the LKU operation.
To add the VRTSglm package to the safe list for the Live Update operation, use
the following command:
# lvupdateSafeKE -a /usr/lib/drivers/vxglm.ext\(vxglm.ext64\)
-keyfile ssh_key_file Specifies a key file for secure shell (SSH) installs.
This option passes -i ssh_key_file to every
SSH invocation.
-rsh Specify this option when you want to use RSH and
RCP for communication between systems instead
of the default SSH and SCP.
When you use the postcheck option, it can help you troubleshoot the following
VCS-related issues:
■ The heartbeat link does not exist.
■ The heartbeat link cannot communicate.
■ The heartbeat link is a part of a bonded or aggregated NIC.
■ A duplicated cluster ID exists (if LLT is not running at the check time).
■ The VRTSllt pkg version is not consistent on the nodes.
■ The llt-linkinstall value is incorrect.
■ Volume Manager cannot start because the volboot file is not loaded.
■ Volume Manager cannot start because no license exists.
■ Cluster Volume Manager cannot start because the CVM configuration is incorrect
in the main.cf file. For example, the Autostartlist value is missing on the nodes.
■ Cluster Volume Manager cannot come online because the node ID in the
/etc/llthosts file is not consistent.
■ Cluster Volume Manager cannot come online because Vxfen is not started.
■ Cluster Volume Manager cannot start because gab is not configured.
■ Cluster Volume Manager cannot come online because of a CVM protocol
mismatch.
■ Cluster Volume Manager group name has changed from "cvm", which causes
CVM to go offline.
You can use the installer’s post-check option to perform the following checks:
General checks for all products:
■ All the required filesets are installed.
■ Lists the disks which are not in 'online' or 'online shared' state (vxdisk list).
■ Lists the diskgroups which are not in 'enabled' state (vxdg list).
■ Lists the volumes which are not in 'enabled' state (vxprint -g <dgname>).
■ Lists the volumes which are in 'Unstartable' state (vxinfo -g <dgname>).
■ Lists the volumes which are not configured in /etc/filesystems .
Checks for File System (FS):
■ Lists the VxFS kernel modules which are not loaded (vxfs/fdd/vxportal.).
■ Whether all VxFS file systems present in /etc/filesystems file are mounted.
■ Whether all VxFS file systems present in /etc/filesystems are in disk layout
12 or higher.
■ Whether all mounted VxFS file systems are in disk layout 12 or higher.
Checks for Cluster File System:
■ Whether FS and ODM are running at the latest protocol level.
■ Whether all mounted CFS file systems are managed by VCS.
■ Whether cvm service group is online.
Appendix C
SFHA services and ports
This appendix includes the following topics:
Note: The port numbers that appear in bold are mandatory for configuring InfoScale
Enterprise.
File Description
/etc/default/llt This file stores the start and stop environment variables for LLT:
■ LLT_START—Defines the startup behavior for the LLT module after a system reboot.
Valid values include:
1—Indicates that LLT is enabled to start up.
0—Indicates that LLT is disabled to start up.
■ LLT_STOP—Defines the shutdown behavior for the LLT module during a system
shutdown. Valid values include:
1—Indicates that LLT is enabled to shut down.
0—Indicates that LLT is disabled to shut down.
The installer sets the value of these variables to 1 at the end of SFHA configuration.
If you manually configured VCS, make sure you set the values of these environment
variables to 1.
/etc/llthosts The file llthosts is a database that contains one entry per system. This file links the LLT
system ID (in the first column) with the LLT host name. This file must be identical on each
node in the cluster. A mismatch of the contents of the file can cause indeterminate behavior
in the cluster.
For example, the file /etc/llthosts contains the entries that resemble:
0 sys1
1 sys2
/etc/llttab The file llttab contains the information that is derived during installation and used by
the utility lltconfig(1M). After installation, this file lists the LLT network links that
correspond to the specific system.
For example, the file /etc/llttab contains the entries that resemble:
set-node sys1
set-cluster 2
link en1 /dev/dlpi/en:1 - ether - -
link en2 /dev/dlpi/en:2 - ether - -
set-node sys1
set-cluster 2
link en1 /dev/en:1 - ether - -
link en2 /dev/en:2 - ether - -
The first line identifies the system. The second line identifies the cluster (that is, the cluster
ID you entered during installation). The next two lines begin with the link command.
These lines identify the two network cards that the LLT protocol uses.
If you configured a low priority link under LLT, the file also includes a "link-lowpri" line.
Refer to the llttab(4) manual page for details about how the LLT configuration may be
modified. The manual page describes the ordering of the directives in the llttab file.
Table D-2 lists the GAB configuration files and the information that these files
contain.
File Description
/etc/default/gab This file stores the start and stop environment variables for GAB:
The installer sets the value of these variables to 1 at the end of SFHA
configuration.
/etc/gabtab After you install SFHA, the file /etc/gabtab contains a gabconfig(1)
command that configures the GAB driver for use.
/sbin/gabconfig -c -nN
The -c option configures the driver for use. The -nN specifies that the
cluster is not formed until at least N nodes are ready to form the cluster.
Veritas recommends that you set N to be the total number of nodes in
the cluster.
Note: Veritas does not recommend the use of the -c -x option for
/sbin/gabconfig. Using -c -x can lead to a split-brain condition.
Use the -c option for /sbin/gabconfig to avoid a split-brain
condition.
File Description
/etc/default/amf This file stores the start and stop environment variables for AMF:
The AMF init script uses this /etc/amftab file to configure the
AMF driver. The /etc/amftab file contains the following line by
default:
/opt/VRTSamf/bin/amfconfig -c
■ The installer creates the ClusterService service group if you configured the
virtual IP, SMTP, SNMP, or global cluster options.
The service group also has the following characteristics:
■ The group includes the IP and NIC resources.
■ The service group also includes the notifier resource configuration, which is
based on your input to installer prompts about notification.
■ The installer also creates a resource dependency tree.
■ If you set up global clusters, the ClusterService service group contains an
Application resource, wac (wide-area connector). This resource’s attributes
contain definitions for controlling the cluster in a global cluster environment.
Refer to the Cluster Server Administrator's Guide for information about
managing VCS global clusters.
include "types.cf"
include "OracleTypes.cf"
include "OracleASMTypes.cf"
include "Db2udbTypes.cf"
include "SybaseTypes.cf"
cluster vcs02 (
SecureClus = 1
)
system sysA (
)
system sysB (
)
system sysC (
)
group ClusterService (
SystemList = { sysA = 0, sysB = 1, sysC = 2 }
NIC csgnic (
Device = en0
NetworkHosts = { "10.182.13.1" }
)
NotifierMngr ntfr (
SnmpConsoles = { sys4" = SevereError }
SmtpServer = "smtp.example.com"
SmtpRecipients = { "ozzie@example.com" = SevereError }
)
include "types.cf"
cluster vcs03 (
ClusterAddress = "10.182.13.50"
SecureClus = 1
)
system sysA (
)
system sysB (
)
system sysC (
)
group ClusterService (
SystemList = { sysA = 0, sysB = 1, sysC = 2 }
AutoStartList = { sysA, sysB, sysC }
OnlineRetryLimit = 3
OnlineRetryInterval = 120
)
Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart -secure"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = { "/opt/VRTSvcs/bin/wac -secure" }
RestartLimit = 3
)
IP gcoip (
Device = en0
Address = "10.182.13.50"
NetMask = "255.255.240.0"
)
NIC csgnic (
Device = en0
NetworkHosts = { "10.182.13.1" }
)
NotifierMngr ntfr (
SnmpConsoles = { sys4 = SevereError }
SmtpServer = "smtp.example.com"
SmtpRecipients = { "ozzie@example.com" = SevereError }
)
File Description
/etc/default/vxfen This file stores the start and stop environment variables for I/O fencing:
■ VXFEN_START—Defines the startup behavior for the I/O fencing module after a system
reboot. Valid values include:
1—Indicates that I/O fencing is enabled to start up.
0—Indicates that I/O fencing is disabled to start up.
■ VXFEN_STOP—Defines the shutdown behavior for the I/O fencing module during a system
shutdown. Valid values include:
1—Indicates that I/O fencing is enabled to shut down.
0—Indicates that I/O fencing is disabled to shut down.
The installer sets the value of these variables to 1 at the end of SFHA configuration.
This file is not applicable for server-based fencing and majority-based fencing.
■ vxfen_mode
■ scsi3—For disk-based fencing.
■ customized—For server-based fencing.
■ disabled—To run the I/O fencing driver but not do any fencing operations.
■ majority— For fencing without the use of coordination points.
■ vxfen_mechanism
This parameter is applicable only for server-based fencing. Set the value as cps.
■ scsi3_disk_policy
■ dmp—Configure the vxfen module to use DMP devices
The disk policy is dmp by default. If you use iSCSI devices, you must set the disk policy
as dmp.
Note: You must use the same SCSI-3 disk policy on all the nodes.
■ List of coordination points
This list is required only for server-based fencing configuration.
Coordination points in server-based fencing can include coordinator disks, CP servers, or
both. If you use coordinator disks, you must create a coordinator disk group containing the
individual coordinator disks.
Refer to the sample file /etc/vxfen.d/vxfenmode_cps for more information on how to specify
the coordination points and multiple IP addresses for each CP server.
■ single_cp
This parameter is applicable for server-based fencing which uses a single highly available
CP server as its coordination point. Also applicable for when you use a coordinator disk
group with single disk.
■ autoseed_gab_timeout
This parameter enables GAB automatic seeding of the cluster even when some cluster
nodes are unavailable.
This feature is applicable for I/O fencing in SCSI3 and customized mode.
0—Turns the GAB auto-seed feature on. Any value greater than 0 indicates the number of
seconds that GAB must delay before it automatically seeds the cluster.
-1—Turns the GAB auto-seed feature off. This setting is the default.
■ detect_false_pesb
0—Disables stale key detection.
1—Enables stale key detection to determine whether a preexisting split brain is a true
condition or a false alarm.
Default: 0
Note: This parameter is considered only when vxfen_mode=customized.
/etc/vxfentab When I/O fencing starts, the vxfen startup script creates this /etc/vxfentab file on each node.
The startup script uses the contents of the /etc/vxfendg and /etc/vxfenmode files. Any time a
system is rebooted, the fencing driver reinitializes the vxfentab file with the current list of all the
coordinator points.
Note: The /etc/vxfentab file is a generated file; do not modify this file.
For disk-based I/O fencing, the /etc/vxfentab file on each node contains a list of all paths to
each coordinator disk along with its unique disk identifier. A space separates the path and the
unique disk identifier. An example of the /etc/vxfentab file in a disk-based fencing configuration
on one node resembles as follows:
■ DMP disk:
/dev/vx/rdmp/rhdisk75 HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F6
00A0B8000215A5D000006804E795D075
/dev/vx/rdmp/rhdisk76 HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F6
00A0B8000215A5D000006814E795D076
/dev/vx/rdmp/rhdisk77 HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F6
00A0B8000215A5D000006824E795D077
For server-based fencing, the /etc/vxfentab file also includes the security settings information.
For server-based fencing with single CP server, the /etc/vxfentab file also includes the single_cp
settings information.
See “Sample main.cf file for CP server hosted on a two-node SFHA cluster”
on page 291.
The example main.cf files use IPv4 addresses.
Sample main.cf file for CP server hosted on a single node that runs
VCS
The following is an example of a single CP server node main.cf.
For this CP server single node main.cf, note the following values:
■ Cluster name: cps1
■ Node name: cps1
include "types.cf"
include "/opt/VRTScps/bin/Quorum/QuorumTypes.cf"
cluster cps1 (
UserNames = { admin = bMNfMHmJNiNNlVNhMK, haris = fopKojNvpHouNn,
"cps1.example.com@root@vx" = aj,
"root@cps1.example.com" = hq }
Administrators = { admin, haris,
"cps1.example.com@root@vx",
"root@cps1.example.com" }
SecureClus = 1
HacliUserLevel = COMMANDROOT
)
system cps1 (
)
group CPSSG (
SystemList = { cps1 = 0 }
AutoStartList = { cps1 }
)
IP cpsvip1 (
Critical = 0
Device @cps1 = en0
Address = "10.209.3.1"
NetMask = "255.255.252.0"
)
IP cpsvip2 (
Critical = 0
Device @cps1 = en1
Address = "10.209.3.2"
NetMask = "255.255.252.0"
)
NIC cpsnic1 (
Critical = 0
Device @cps1 = en0
PingOptimize = 0
NetworkHosts @cps1 = { "10.209.3.10 }
)
NIC cpsnic2 (
Critical = 0
Device @cps1 = en1
PingOptimize = 0
)
Process vxcpserv (
PathName = "/opt/VRTScps/bin/vxcpserv"
ConfInterval = 30
RestartLimit = 3
)
Quorum quorum (
QuorumResources = { cpsvip1, cpsvip2 }
)
// {
// NIC cpsnic1
// }
// IP cpsvip2
// {
// NIC cpsnic2
// }
// Process vxcpserv
// {
// Quorum quorum
// }
// }
include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "/opt/VRTScps/bin/Quorum/QuorumTypes.cf"
// cluster: cps1
// CP servers:
// cps1
// cps2
cluster cps1 (
UserNames = { admin = ajkCjeJgkFkkIskEjh,
"cps1.example.com@root@vx" = JK,
"cps2.example.com@root@vx" = dl }
Administrators = { admin, "cps1.example.com@root@vx",
"cps2.example.com@root@vx" }
SecureClus = 1
)
system cps1 (
)
system cps2 (
)
group CPSSG (
SystemList = { cps1 = 0, cps2 = 1 }
AutoStartList = { cps1, cps2 } )
DiskGroup cpsdg (
DiskGroup = cps_dg
)
IP cpsvip1 (
Critical = 0
Device @cps1 = en0
Device @cps2 = en0
Address = "10.209.81.88"
NetMask = "255.255.252.0"
)
IP cpsvip2 (
Critical = 0
Device @cps1 = en1
Device @cps2 = en1
Address = "10.209.81.89"
NetMask = "255.255.252.0"
)
Mount cpsmount (
MountPoint = "/etc/VRTScps/db"
BlockDevice = "/dev/vx/dsk/cps_dg/cps_volume"
FSType = vxfs
FsckOpt = "-y"
)
NIC cpsnic1 (
Critical = 0
Device @cps1 = en0
Device @cps2 = en0
PingOptimize = 0
NetworkHosts @cps1 = { "10.209.81.10 }
)
NIC cpsnic2 (
Critical = 0
Device @cps1 = en1
Device @cps2 = en1
PingOptimize = 0
)
Process vxcpserv (
PathName = "/opt/VRTScps/bin/vxcpserv"
)
Quorum quorum (
QuorumResources = { cpsvip1, cpsvip2 }
)
Volume cpsvol (
Volume = cps_volume
DiskGroup = cps_dg
)
// Quorum quorum
// Mount cpsmount
// {
// Volume cpsvol
// {
// DiskGroup cpsdg
// }
// }
// }
// }
■ Setting up ssh and rsh connection using the installer -comsetup command
Figure E-1 Creating the DSA key pair and appending it to target systems
Read the ssh documentation and online manual pages before enabling ssh. Contact
your operating system support provider for issues regarding ssh configuration.
Visit the Openssh website that is located at: http://www.openssh.com/ to access
online manuals and other resources.
To create the DSA key pair
1 On the source system (sys1), log in as root, and navigate to the root directory.
sys1 # cd /
2 Make sure the /.ssh directory is present on all the target installation systems
(sys2 in this example). If that directory is not present, create it on all the
target systems and set the write permission to root only. Change the permissions
of this directory to secure it.
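One way to create and secure the directory on the target system (a sketch):
sys2 # mkdir /.ssh
sys2 # chmod 700 /.ssh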
3 To generate a DSA key pair on the source system, type the following command:
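For example (accept the defaults at the prompts):
sys1 # ssh-keygen -t dsa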
To append the public key from the source system to the authorized_keys file
on the target system, using secure file transfer
1 From the source system (sys1), move the public key to a temporary file on the
target system (sys2).
Use the secure file transfer program.
In this example, the file name id_dsa.pub in the root directory is the name for
the temporary file for the public key.
Use the following command for secure file transfer:
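For example (a sketch; sys2 is the target system):
sys1 # sftp sys2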
If the secure file transfer is set up for the first time on this system, output similar
to the following lines is displayed:
2 Enter yes.
Output similar to the following is displayed:
sftp> quit
6 To begin the ssh session on the target system (sys2 in this example), type the
following command on sys1:
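For example:
sys1 # ssh sys2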
password:
7 After you log in to sys2, enter the following command to append the id_dsa.pub
file to the authorized_keys file:
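A sketch, assuming the temporary file /id_dsa.pub and an authorized_keys file
under /.ssh in root's home directory (/):
sys2 # cat /id_dsa.pub >> /.ssh/authorized_keys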
8 After the id_dsa.pub public key file is copied to the target system (sys2), and
added to the authorized keys file, delete it. To delete the id_dsa.pub public
key file, enter the following command on sys2:
sys2 # rm /id_dsa.pub
sys2 # exit
10 Run the following commands on the source installation system. If your ssh
session has expired or terminated, you can also run these commands to renew
the session. These commands bring the private key into the shell environment
and make the key globally available to the user root:
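A sketch of these commands, assuming ssh-agent at /usr/bin:
sys1 # exec /usr/bin/ssh-agent $SHELL
sys1 # ssh-add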
This shell-specific step is valid only while the shell is active. You must execute
the procedure again if you close the shell during the session.
# ./installer -comsetup
Either ssh or rsh needs to be set up between the local system and
sys2 for communication
# ./pwdutil.pl -h
Usage:
pwdutil.pl -h | -?
Option Usage
--user|-u '<user>' Specifies user id, default is the local user id.
<hostname>
<user>:<password>@<hostname>
<user>:<password>@<hostname>:<port>
You can check, configure, and unconfigure ssh or rsh using the pwdutil.pl utility.
For example:
■ To check ssh connection for only one host:
■ To configure ssh for multiple hosts with same user ID and password:
■ To configure ssh or rsh for different hosts with different user ID and password:
■ To check or configure ssh or rsh for multiple hosts with one configuration file:
■ To keep the host configuration file secret, you can use a third-party utility to
encrypt and decrypt the host file with a password.
For example:
■ To use the ssh authentication keys which are not under the default $HOME/.ssh
directory, you can use --keyfile option to specify the ssh keys. For example:
### generate private and public key pair under the directory:
# ssh-keygen -t rsa -f /keystore/id_rsa
### setup ssh connection with the new generated key pair under
the directory:
# pwdutil.pl -a configure -t ssh --keyfile /keystore/id_rsa
user:password@hostname
You can see the contents of the configuration file by using the following command:
# cat /tmp/sshrsh_hostfile
user1:password1@hostname1
user2:password2@hostname2
user3:password3@hostname3
user4:password4@hostname4
0 Successful completion.
1 Command syntax error.
2 Ssh or rsh binaries do not exist.
3 Ssh or rsh service is down on the remote machine.
4 Ssh or rsh command execution is denied due to password is required.
5 Invalid password is provided.
255 Other unknown error.
sys1 # ssh-add
sysname.domainname.com root
Change permissions on the /.rhosts file to 600 by typing the following command:
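For example:
# chmod 600 /.rhosts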
After you complete an installation procedure, delete the .rhosts file from each
target system to ensure security:
# rm -f /.rhosts
Appendix F
Sample SFHA cluster setup diagrams for CP server-based I/O fencing
This appendix includes the following topics:
■ Two node campus cluster that is served by remote CP server and 2 SCSI-3
disks:
■ Multiple client clusters that are served by highly available CP server and 2
SCSI-3 disks:
Figure F-1 Client cluster served by highly available CP server and 2 SCSI-3
disks
The figure depicts a two-node client cluster (node 1 and node 2 of Cluster-1)
that connects over the public network and a private VLAN network to a CP server
that is hosted on a two-node SFHA cluster (CPS-Primary and CPS-standby nodes)
and is reachable at cps1=[VIP]:14250. On the client cluster, /etc/vxfenmode
specifies vxfenmode=customized, vxfen_mechanism=cps, cps1=[VIP]:14250, and
vxfendg=vxfencoorddg. Two SCSI-3 LUNs on the SAN act as the remaining
coordination points; the coordinator disk group specified in /etc/vxfenmode
should have these 2 disks. The CPS database resides in /etc/VRTScps/db.
The two SCSI-3 disks (one from each site) are part of disk group vxfencoorddg.
The third coordination point is a CP server on a single node VCS cluster.
Figure F-2 Two node campus cluster served by remote CP server and 2 SCSI-3
disks
Client Client
SITE 1 Applications SITE 2 Applications
et
th ern h
et E witc
ern S et
Eth witch LAN ern
S Eth witch
et LAN S
ern
Eth witch
S et
et ern
ern Eth witch Cluster
Eth witch
Cluster
NIC 1NIC 2HBA 1HBA 2
node 1 node 2
3
IC
3
N
IC
N
3
IC
3
N
IC
N
ch
Swit
SAN FC
it ch SAN
Sw
FC
itch ch
Sw S wit
FC FC
DWDM
Dark Fibre
Coordinator Coordinator
Data LUN 2 Data
LUN 1
Storage Array LUNs Storage Array LUNs
SITE 3
.) et Legends
On the client cluster: CPS hosted rt n
o
ern
vxfenmode=customized on single node po om
0 ( any.c Eth witch
5
2 p S Private Interconnects
vxfen_mechanism=cps VCS cluster :14 m
cps1=[VIP]:443 (default) or in the IP] s.co (GigE)
=[V cp
range [49152, 65535] s1 Public Links (GigE)
cp
vxfendg=vxfencoorddg rv
se
The coordinator disk group cp Dark Fiber
vx
specified in /etc/vxfenmode Connections
should have one SCSI3 disk CPS database VIP
/etc/VRTScps/db
from site1 and another from
C
San 1 Connections
site2. NI
San 2 Connections
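The fencing settings called out in both figures correspond to entries such as the following in /etc/vxfenmode on each client cluster node (a minimal sketch; [VIP] is a placeholder for the CP server virtual IP, the port is the one labeled in Figure F-1, and the exact key names should be taken from the vxfenmode template that ships with the product):
# fencing mode and mechanism for CP server-based (customized) fencing
vxfen_mode=customized
vxfen_mechanism=cps
# first coordination point: the CP server virtual IP and port
cps1=[VIP]:14250
# coordinator disk group that contains the two SCSI-3 coordination disks
vxfendg=vxfencoorddg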
Appendix G
Changing NFS server major numbers for VxVM volumes

To list the major number currently in use on a system
◆ Use the command:
# haremajor -v
55
Run this command on each cluster node. If the major numbers are not the same on
each node, you must change them on the nodes so that they are identical.
To list the available major numbers for a system
◆ Use the command:
# haremajor -a
54,56..58,60,62..
The output shows the numbers that are not in use on the system where the
command is issued.
To reset the major number on a system
◆ You can reset the major number to an available number on a system. For
example, to set the major number to 75, type:
# haremajor -s 75
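Putting the three steps together, a typical session might look like the following (the host names, the available numbers, and the chosen value 75 are illustrative):
### list the major number currently in use on each node
sys1 # haremajor -v
55
sys2 # haremajor -v
77
### the numbers differ, so find a number that is free on both nodes
sys1 # haremajor -a
54,56..58,60,62..
sys2 # haremajor -a
54,56..58,60,62..
### set the same major number on both nodes (the change typically
### takes effect after the systems are restarted)
sys1 # haremajor -s 75
sys2 # haremajor -s 75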
Appendix H
Configuring LLT over UDP
This appendix includes the following topics:
■ Manually configuring LLT over UDP using IPv4
■ Manually configuring LLT over UDP using IPv6
■ Display the content of the /etc/llttab file on the first node sys1:
set-node sys1
set-cluster 1
link link1 /dev/xti/udp - udp 50000 - 192.168.9.1 192.168.9.255
link link2 /dev/xti/udp - udp 50001 - 192.168.10.1 192.168.10.255
Verify the subnet mask using the ifconfig command to ensure that the two links
are on separate subnets.
■ Display the content of the /etc/llttab file on the second node sys2:
set-node sys2
set-cluster 1
link link1 /dev/xti/udp - udp 50000 - 192.168.9.2 192.168.9.255
link link2 /dev/xti/udp - udp 50001 - 192.168.10.2 192.168.10.255
Verify the subnet mask using the ifconfig command to ensure that the two links
are on separate subnets.
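For example, on sys1 the two link interfaces might report something like the following (the interface names and flags are illustrative and the output is abridged):
sys1 # ifconfig en1
en1: flags=...<UP,BROADCAST,RUNNING,...>
        inet 192.168.9.1 netmask 0xffffff00 broadcast 192.168.9.255
sys1 # ifconfig en2
en2: flags=...<UP,BROADCAST,RUNNING,...>
        inet 192.168.10.1 netmask 0xffffff00 broadcast 192.168.10.255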
The link command in the /etc/llttab file has the following fields:

Field           Description

tag-name        A unique string that is used as a tag by LLT; for example link1,
                link2,....

device          The device path of the UDP protocol; for example /dev/xti/udp.

node-range      Nodes using the link. "-" indicates all cluster nodes are to be
                configured for this link.

udp-port        Unique UDP port in the range of 49152-65535 for the link.

MTU             "-" is the default, which has a value of 8192. The value may be
                increased or decreased depending on the configuration. Use the
                lltstat -l command to display the current value.

bcast-address   ■ For clusters with enabled broadcasts, specify the value of the
                  subnet broadcast address.
                ■ "-" is the default for clusters spanning routers.
The set-addr command in the /etc/llttab file has the following field, among others:

Field           Description

link tag-name   The string that LLT uses to identify the link; for example link1,
                link2,....
To check which ports are defined as defaults for a node, examine the file
/etc/services. You should also use the netstat command to list the UDP ports
currently in use. For example:
# netstat -a | more
UDP
Local Address Remote Address State
-------------------- ------------------- ------
*.* Unbound
*.32771 Idle
*.32776 Idle
*.32777 Idle
*.name Idle
*.biff Idle
*.talk Idle
*.32779 Idle
.
.
.
*.55098 Idle
*.syslog Idle
*.58702 Idle
*.* Unbound
Look in the UDP section of the output; the UDP ports that are listed under Local
Address are already in use. If a port is listed in the /etc/services file, its associated
name is displayed rather than the port number in the output.
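If you want to check one specific port, for example 50000 (an arbitrary value in the 49152-65535 range), you can filter the same sources directly; no output means that the port is not listed:
# netstat -a | grep 50000
# grep 50000 /etc/services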
For example:
■ For the first network interface on the node sys1:
# cat /etc/llttab
set-node nodexyz
set-cluster 100
Figure H-1 A typical configuration of direct-attached links that use LLT over
UDP
[Diagram: Node0 and Node1 are connected through switches. On Node0, the UDP
endpoint on interface en2 uses UDP port 50001, IP address 192.1.3.1, and link
tag link2; the corresponding endpoint on Node1 is interface en2 with IP address
192.1.3.2 and link tag link2.]
The configuration that the /etc/llttab file for Node 0 represents has directly attached
crossover links. It might also have the links that are connected through a hub or
switch. These links do not cross routers.
LLT sends broadcast requests to peer nodes to discover their addresses. So the
addresses of peer nodes do not need to be specified in the /etc/llttab file using the
set-addr command. For direct attached links, you do need to set the broadcast
address of the links in the /etc/llttab file. Verify that the IP addresses and broadcast
addresses are set correctly by using the ifconfig -a command.
The /etc/llttab file on Node 0 resembles:
set-node Node0
set-cluster 1
#configure Links
#link tag-name device node-range link-type udp port MTU \
IP-address bcast-address
link link1 /dev/xti/udp - udp 50000 - 192.1.2.1 192.1.2.255
link link2 /dev/xti/udp - udp 50001 - 192.1.3.1 192.1.3.255
The /etc/llttab file on Node 1 resembles:
set-node Node1
set-cluster 1
# configure Links
# link tag-name device node-range link-type udp port MTU \
IP-address bcast-address
link link1 /dev/xti/udp - udp 50000 - 192.1.2.2 192.1.2.255
link link2 /dev/xti/udp - udp 50001 - 192.1.3.2 192.1.3.255
[Figure: links that cross IP routers; the surviving labels show interface en2
with IP address 192.1.4.1 and link tag link2.]
The configuration that the following /etc/llttab file represents for Node 1 has
links crossing IP routers. Notice that IP addresses are shown for each link on each
peer node. In this configuration broadcasts are disabled. Hence, the broadcast
address does not need to be set in the link command of the /etc/llttab file.
set-node Node1
set-cluster 1
link link1 /dev/xti/udp - udp 50000 - 192.1.3.1 -
link link2 /dev/xti/udp - udp 50001 - 192.1.4.1 -
#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 0 link1 192.1.1.1
set-addr 0 link2 192.1.2.1
set-addr 2 link1 192.1.5.2
set-addr 2 link2 192.1.6.2
set-addr 3 link1 192.1.7.3
set-addr 3 link2 192.1.8.3
The /etc/llttab file on Node 0 resembles:
set-node Node0
set-cluster 1
link link1 /dev/xti/udp - udp 50000 - 192.1.1.1 -
link link2 /dev/xti/udp - udp 50001 - 192.1.2.1 -
#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 1 link1 192.1.3.1
set-addr 1 link2 192.1.4.1
set-addr 2 link1 192.1.5.2
set-addr 2 link2 192.1.6.2
set-addr 3 link1 192.1.7.3
set-addr 3 link2 192.1.8.3
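After LLT starts with these files, you can verify the configured links and peer visibility with the lltstat command (lltstat -l is referenced in the field table above; lltstat -nvv is the usual verbose node and link status query):
### list the configured links and their parameters (including the MTU)
# lltstat -l
### show the state of each configured link for every node in the cluster
# lltstat -nvv | more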
■ For the links that cross an IP router, disable multicast features and specify the
IPv6 address of each link manually in the /etc/llttab file.
See “Sample configuration: links crossing IP routers” on page 323.
Figure H-3 A typical configuration of direct-attached links that use LLT over
UDP
[Diagram: Node0 and Node1 are connected through switches by direct-attached
links; the surviving labels show an endpoint with IPv6 address
fe80::21a:64ff:fe92:1a93 and link tag link2.]
The configuration that the /etc/llttab file for Node 0 represents has directly attached
crossover links. It might also have the links that are connected through a hub or
switch. These links do not cross routers.
LLT uses IPv6 multicast requests for peer node address discovery. So the addresses
of peer nodes do not need to be specified in the /etc/llttab file using the set-addr
command. Use the ifconfig -a command to verify that the IPv6 address is set
correctly.
The /etc/llttab file on Node 0 resembles:
set-node Node0
set-cluster 1
#configure Links
#link tag-name device node-range link-type udp port MTU \
IP-address mcast-address
link link1 /dev/xti/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1b46 -
link link2 /dev/xti/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1b47 -
The /etc/llttab file on Node 1 resembles:
set-node Node1
set-cluster 1
# configure Links
# link tag-name device node-range link-type udp port MTU \
IP-address mcast-address
link link1 /dev/xti/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1a92 -
link link2 /dev/xti/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1a93 -
[Figure: links that cross IP routers (IPv6); the surviving labels show an
endpoint with IPv6 address fe80::21a:64ff:fe92:1b47, link tag link2, and routers
between the nodes.]
The configuration that the following /etc/llttab file represents for Node 1 has
links crossing IP routers. Notice that IPv6 addresses are shown for each link on
each peer node. In this configuration multicasts are disabled.
set-node Node1
set-cluster 1
link link1 /dev/xti/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1a92 -
link link2 /dev/xti/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1a93 -
#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 0 link1 fe80::21a:64ff:fe92:1b46
set-addr 0 link2 fe80::21a:64ff:fe92:1b47
set-addr 2 link1 fe80::21a:64ff:fe92:1d70
set-addr 2 link2 fe80::21a:64ff:fe92:1d71
set-addr 3 link1 fe80::209:6bff:fe1b:1c94
set-addr 3 link2 fe80::209:6bff:fe1b:1c95
The /etc/llttab file on Node 0 resembles:
set-node Node0
set-cluster 1
link link1 /dev/xti/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1b46 -
link link2 /dev/xti/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1b47 -
#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 1 link1 fe80::21a:64ff:fe92:1a92
set-addr 1 link2 fe80::21a:64ff:fe92:1a93
set-addr 2 link1 fe80::21a:64ff:fe92:1d70
set-addr 2 link2 fe80::21a:64ff:fe92:1d71
set-addr 3 link1 fe80::209:6bff:fe1b:1c94
set-addr 3 link2 fe80::209:6bff:fe1b:1c95