Upgrading controller hardware in a single-node cluster running clustered Data ONTAP 8.3 by moving storage
You are upgrading a node running clustered Data ONTAP 8.3 to a new node running clustered Data ONTAP 8.3.
If you want to upgrade a node that is not currently running Data ONTAP 8.3, you can perform this procedure, provided
that the node can run Data ONTAP 8.3, and that you follow the instructions in the section Ensuring hardware and
software compatibility between original and new nodes for upgrading Data ONTAP.
You are reusing the IP addresses, netmasks, and gateways of the original node on the new node.
After the procedure, you will use the root volume of the original system.
If you are replacing a single failed node with the same model instead of upgrading the node, use the appropriate controller replacement procedure on the NetApp Support Site at mysupport.netapp.com.
If you are replacing an individual component, see the field-replaceable unit (FRU) flyer for that component on the NetApp
Support Site at mysupport.netapp.com.
This procedure uses the term boot environment prompt to refer to the prompt on a node from which you can perform certain
tasks, such as rebooting the node and printing or setting environment variables.
The prompt is shown in the following example:
LOADER>
This procedure applies to FAS systems, V-Series systems, and systems with FlexArray Virtualization Software.
Note: Most Data ONTAP platforms released before Data ONTAP 8.2.1 were released as separate FAS and V-Series hardware
systems (for example, a FAS6280 and a V6280). Only the V-Series systems (a V or GF prefix) could attach to storage arrays.
Starting in Data ONTAP 8.2.1, only one hardware system is being released for new platforms. These new platforms, which
have only a FAS prefix, can attach to storage arrays if the required license is installed. These new platforms are the FAS80xx
and FAS25xx systems.
This document uses the term systems with FlexArray Virtualization Software to refer to systems belonging to these new
platforms and the term V-Series system to refer to the separate hardware systems that can attach to storage arrays.
Original node                                      New node
FAS22xxA (See the note below the table.)           FAS2520, FAS255x
3220, 3240, 3270                                   3250
6210, 6220                                         FAS8060, FAS8080 EX
6240, 6280                                         FAS8080 EX
Note: If your FAS80xx controllers are running Data ONTAP 8.3 or later and one or both are All-Flash FAS models, make
sure that both controllers have the same All-Flash Optimized personality set:
system node show -instance node_name
Both nodes must either have the personality enabled or disabled; you cannot combine a node with the All-Flash Optimized
personality enabled with a node that does not have the personality enabled in the same HA pair. If the personalities are
different, refer to KB Article 1015157 in the NetApp Knowledge Base for instructions on how to sync node personality.
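For example, a quick way to compare the two nodes is to run the command against each one and check the All-Flash Optimized field; the node name used here is illustrative, and the exact field label can vary slightly by release:

node::> system node show -instance -node node-a
  ...
  All-Flash Optimized: true
  ...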
Note: If the root aggregate on a FAS22xx system is on partitioned disks in clustered Data ONTAP 8.3, you cannot use this
procedure. Instead, you can upgrade by using the procedure Upgrading the controller hardware on a pair of nodes running
clustered Data ONTAP by moving volumes. After the data is removed, you can delete the existing partitions, remove
ownership of the disks, and then repurpose the FAS22xx as a data shelf.
Note: The system being upgraded cannot have volumes or aggregates on any internal drives.
Note: You can upgrade a FAS2220A system under one of the following circumstances:
The system has internal SATA drives or SSDs and you plan to transfer the drives to a disk shelf attached to the new
system.
If you have a FAS2220A system with internal SAS drives, you can upgrade the system using the procedure Upgrading the
controller hardware on a pair of nodes running clustered Data ONTAP 8.3 by moving volumes.
You can upgrade a FAS2240 system under one of the following circumstances:
The system has internal storage and you plan to convert the FAS2240 to a disk shelf and attach it to the new system.
Note: If the new system has fewer slots than the original system, or if it has fewer or different types of ports, you might need
to add an adapter to the new system. See the Hardware Universe on the NetApp Support Site for details about specific
platforms.
Licensing in Data ONTAP 8.3
When you set up a cluster, the setup wizard prompts you to enter the cluster base license key. However, some features require
additional licenses, which are issued as packages that include one or more features. Each node in the cluster must have its own
key for each feature to be used in the cluster.
Starting with Data ONTAP 8.3, all license keys are 28 uppercase alphabetic characters in length. If you need to install a license
apart from a Data ONTAP upgrade, you must enter the license key in the 28-character uppercase alphabetic format.
If you do not have new license keys, currently licensed features in the cluster will be available to the new controller. However,
using features unlicensed on the controller might put you out of compliance with your license agreement, so you should install
the new license key or keys on the new controller. You can obtain new license keys on the NetApp Support Site in the My
Support section under Software licenses. If the site does not have the license keys you need, contact your NetApp sales
representative.
For detailed information about licensing, see the Clustered Data ONTAP System Administration Guide for Cluster
Administrators and the KB article Data ONTAP 8.2 and 8.3 Licensing Overview and References on the NetApp Support Site.
Storage Encryption
Storage Encryption is available beginning with clustered Data ONTAP 8.2.1. The original nodes or the new nodes might be
enabled for Storage Encryption. In that case, you need to take additional steps in this procedure to ensure that Storage
Encryption is set up properly.
If you want to use Storage Encryption, all the disk drives associated with the nodes must be self-encrypting disk drives.
Grounding strap
#2 Phillips screwdriver
You need information in the following documents, which are available from the NetApp Support Site at mysupport.netapp.com:
Note: If you cannot access the NetApp Support Site when you are installing the new node, download the appropriate
documents before beginning this procedure.
Document                                        Contents
Clustered Data ONTAP SAN Administration Guide   Describes how to configure and manage iSCSI and FC protocols for SAN environments
The NetApp Support Site also contains documentation about disk shelves, NICs, and other hardware that you might use with
your system. It also contains the Hardware Universe, which provides information about the hardware that the new system
supports.
1. Ensuring hardware and software compatibility between the original and new nodes
2. Checking license information and getting license keys
3. Rekeying disks for Storage Encryption
4. Recording original node port information
5. Planning port migration
6. Sending an AutoSupport message for the original node
7. Recording system ID information and shutting down the node
Ensuring hardware and software compatibility between the original and new nodes
You need to ensure that the original hardware is compatible with the new hardware, and that both the original and new nodes are
running Data ONTAP 8.3.
Steps
1. Check the Hardware Universe at hwu.netapp.com to verify that your existing and new hardware components are compatible
and supported.
Note: You might need to add adapters to the new node.
2. Ensure that the original node has access to an HTTP server to download the latest software image from the NetApp Support
Site.
3. Make sure that Data ONTAP 8.3 is installed on both the old node and the new node before performing the upgrade.
Both the old and new nodes must be running Data ONTAP 8.3 before the upgrade. Using a different version of the software
on the new node results in storage subsystem panics. You can download Data ONTAP software images from the NOW site
or request them from technical support. See the Clustered Data ONTAP Upgrade and Revert/Downgrade Guide for
information about downloading and installing Data ONTAP.
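If you want to double-check the running release from the clustershell before you continue, the version command reports it; this example is illustrative and is not part of the original procedure:

node::> version
NetApp Release 8.3: ...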
1. Enter the following command to see which licenses are on the original system, and record the information:
system license show
You might need the information if you want the new system to have the same licensed features as the original system.
2. Obtain new license keys for the new node from the NetApp Support Site at mysupport.netapp.com.
Contact technical support to perform an optional step to preserve the security of the encrypted drives by rekeying all drives to a
known authentication key.
Steps
1. Access the nodeshell:
system node run -node node_name
The nodeshell is a special shell for commands that take effect only at the node level.
2. Display the status information to check for disk encryption:
disk encrypt show
Example
The system displays the key ID for each self-encrypting disk, as shown in the following example:
node> disk encrypt show
Disk      Key ID                                                            Locked?
0c.00.1   0x0                                                               No
0c.00.0   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.3   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.4   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.2   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.5   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
...
If you get the following error message, proceed to the Preparing for netboot section; if you do not get an error message,
continue with these steps.
node> disk encrypt show
ERROR: The system is not configured to run this command.
3. Examine the output of the disk encrypt show command, and if any disks are associated with a non-MSID key, rekey
them to an MSID key by entering the following command, once for each disk:
disk encrypt rekey 0x0 disk_name
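For example, to return a single self-encrypting disk to the default MSID key, you might enter the following nodeshell command; the disk name 0c.00.2 is taken from the earlier sample output and is illustrative only:

node> disk encrypt rekey 0x0 0c.00.2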
4. Verify that all the self-encrypting disks are associated with an MSID:
disk encrypt show
Example
The following example shows the output of the disk encrypt show command when all self-encrypting disks are
associated with an MSID:
node> disk encrypt show
Disk       Key ID                                                            Locked?
---------- ----------------------------------------------------------------  -------
0b.10.23   0x0                                                               No
0b.10.18   0x0                                                               No
0b.10.0    0x0                                                               Yes
0b.10.12   0x0                                                               Yes
0b.10.3    0x0                                                               No
0b.10.15   0x0                                                               No
0a.00.1    0x0                                                               Yes
0a.00.2    0x0                                                               Yes
1. Enter the following command on both node1 and node2 and record the information about the shelves, numbers of disks in
each shelf, flash storage details, memory, NVRAM, and network cards from the output:
run -node node_name sysconfig
Note: You can use this information to identify parts or accessories that you might want to transfer to other nodes. If you
do not know whether the nodes are V-Series systems or have FlexArray Virtualization, you can learn that from the output
as well.
2. If you are upgrading a V-Series system or a system with FlexArray Virtualization software, capture information about the
topology of the original nodes by entering the following command and recording the output:
storage array config show -switch
Example
For each FC initiator port on the node (0c, 0d, 2a, and 2b in this example), the output lists the target-side and initiator-side switch ports (for example, vgbr6510s164:5 and vgbr6510s163:6) through which the node connects to the storage array.
3. Find the node-management port on the node by entering the following command:
network interface show -role node-mgmt
The system displays the LIFs for the node, as shown in the following example:
node::> network interface show -role node-mgmt
            Logical    Status     Network            Current  Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node     Port    Home
----------- ---------- ---------- ------------------ -------- ------- ----
node0
            mgmt1      up/up      10.225.248.88/25   node0    e0M     true
4. Capture or record the information in the output to use later in the procedure.
5. Enter the following command:
network port show -node node_name -type physical
Example
The system displays the physical ports on the node, as shown in the following example for node0:
The output lists each physical port with its broadcast domain, link status, MTU, and speed; in this example, the data ports use MTU 1500 at speeds of auto/100 and auto/1000, and the Cluster ports use MTU 9000 at auto/10000.
Different platforms have different ports, so you might need to change the old node's port and LIF configuration to be compatible
with the new node's configuration before you power down the old node. This is because the new node will replay the same
configuration when it boots unless the configuration is changed before the upgrade.
For example, if the node-management LIF is hosted on port e1a of the old node, then booting the new node will cause it to also
host the node management LIF on port e1a. However, if you want the new node to have its node management LIF on port e4a,
then you must make the change on the original node before you boot it. If the two nodes have ports in common, you can choose
the same port on the new node that is on the original node. If they do not, you need to choose the port on the old node that
corresponds to the correct port on the new node.
Steps
1. Identify the data and node-management ports on both the original and the new nodes and record the information.
The node-management port on each of these platforms is e0M:
FAS2220A: e0M
FAS2240-2A: e0M
FAS2240-4A: e0M
FAS25xxA: e0M
32xx/FAS32xxA: e0M
62xx/FAS62xxA: e0M
FAS80xxA: e0M
2. Determine which LIFs are present on which ports by entering the network interface show and the network port
show commands and noting their output.
3. Consult the list in Step 1 or the Hardware Universe for port information on the new node.
4. Use the network port modify commands to adjust network broadcast domains, making the old node's configuration
compatible with the new node's configuration.
The goal of this step is to obtain a compatible configuration for at least one port of each role. Compatible means that the
configuration settings of the old node can be applied to the new node to enable connectivity when the new node boots.
See the Clustered Data ONTAP Commands: Manual Page Reference for more information about the network port
modify command. Also see the lists in Step 1 for each platform's port assignments.
5. Complete the following substeps to ensure that physical ports can be mapped correctly later in the procedure:
a. Enter the following command to see if there are any VLANs configured on the node:
network port vlan show -node node_name
VLANs are configured over physical ports. If the physical ports change, then the VLANs need to be re-created later in
the procedure.
Example
The system displays VLANs configured on the node, as shown in the following example:
node::> network port vlan show -node node0
                  Network Network
Node   VLAN Name  Port    VLAN ID MAC Address
------ ---------- ------- ------- -----------------
node0  e1b-70     e1b     70      00:15:17:76:7b:69
b. If there are VLANs configured on the node, take note of each network port and VLAN ID pairing.
c. If there are VLANs configured on interface groups (ifgrps), remove the VLANs from the interface groups by entering the
following command:
network port vlan delete -node node_name -port ifgrp -vlan-id VLAN_ID
d. Enter the following command and examine its output to see if there are any interface groups configured on the node:
network port ifgrp show -node node_name -instance
e. If any interface groups are configured on the node, record the names of the interface groups and the ports assigned to
them, and then delete the ports by entering the following command, once for each port:
network port ifgrp remove-port -node node_name -ifgrp ifgrp_name -port port_name
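For example, assuming a hypothetical interface group a0a that contains port e0c on a node named node0, the command might look like this:

node::> network port ifgrp remove-port -node node0 -ifgrp a0a -port e0c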
1. Halt the node by entering the following command at the command prompt:
system node halt -node node_name
2. Boot to Maintenance mode by entering the following command at the boot environment prompt:
boot_ontap maint
The disk ownership output for this example shows the node's disks assigned to Pool0, with their serial numbers, all owned by and homed to node1 (system ID 118049495).
6. Enter the following command at the prompt to confirm that you want to destroy the local mailbox:
y
7. Exit Maintenance mode on the node that you are upgrading by entering the following command at the Maintenance mode
prompt of the node:
halt
8. Turn off the power to the original node and then unplug it from the power source.
You must have completed the steps in Preparing to upgrade the node.
Steps
1. Turn on the power to the first new node, and then immediately press Ctrl-C at the console terminal to access the boot
environment prompt.
Note: If you are upgrading to a system with both nodes in the same chassis, the other node also reboots. However, you can
disregard booting the other node until Step 10.
Attention: When you boot the new node, you might see the following message:
WARNING: The battery is unfit to retain data during a power
outage. This is likely because the battery is
discharged but could be due to other temporary
conditions.
When the battery is ready, the boot process will
complete and services will be engaged.
To override this delay, press 'c' followed by 'Enter'
2. If you see the warning message in Step 1, take the following actions:
a. Check for any console messages that might indicate a problem other than a low NVRAM battery, and, if necessary, take
any required corrective action.
b. Allow the battery to charge and the boot process to complete.
Attention: Do not override the delay; failure to allow the battery to charge could result in a loss of data.
Return to this section and complete the remaining steps, beginning with Step 4.
Important: You must reconfigure FC onboard ports, CNA onboard ports, and CNA cards
before you boot Data ONTAP on the V-Series system or system with FlexArray
Virtualization software.
4. Add the FC initiator ports of the new node to the switch zones.
If your system has a tape SAN, then you need zoning for the initiators. See your storage array and zoning documentation for
instructions.
5. Add the FC initiator ports to the storage array as new hosts, mapping the array LUNs to the new hosts.
See your storage array and zoning documentation for instructions.
6. Adjust the World Wide Port Name (WWPN) values in the host or volume groups associated with array LUNs on the storage
array.
Installing a new controller module changes the WWPN and World Wide Node Name (WWNN) values associated with each
onboard FC port.
7. If your configuration uses switch-based zoning, adjust the zoning to reflect the new WWNN values.
8. At the Maintenance mode prompt, halt the system:
halt
The system displays all the array LUNs visible to each of the FC initiator ports. If the array LUNs are not visible, you will
not be able to reassign disks from node1 to node3 later in this section.
10. Take one of the following actions:

If the system you are upgrading to is in a...   Then...
Chassis separate from the other new node        Go to Step 11.
Chassis shared with the other new node          Complete the following substeps:
a. Switch the console cable from the first new node to the other new node.
b. Turn on the power to the second new node, and then interrupt the boot process by pressing Ctrl-C at the console terminal to access the boot environment prompt.
The power should already be on if both controllers are in the same chassis.
Note: Leave the second new node at the boot environment prompt; you return to this procedure and repeat these steps after the first new node is installed.
c. If you see the warning message displayed in Step 1, follow the instructions in Step 2.
d. Switch the console cable back from the other new node to the first new node.
e. Go to Step 11.
13. Configure the netboot connection by choosing one of the following actions.
Note: You should use the management port and IP as the netboot connection. Do not use a data LIF IP address, or a data
outage might occur while the upgrade is being performed.
If DHCP is...    Then...
Running          Configure the connection automatically by entering the following command at the boot
                 environment prompt:
                 ifconfig e0M -auto
Not running      Manually configure the connection by entering the following command at the boot
                 environment prompt:
                 ifconfig e0M -addr=filer_addr -mask=netmask -gw=gateway -dns=dns_addr -domain=dns_domain
                 filer_addr is the IP address of the storage system.
                 netmask is the network mask of the storage system.
                 gateway is the gateway for the storage system.
                 dns_addr is the IP address of a name server on your network.
                 dns_domain is the Domain Name Service (DNS) domain name. If you use this optional
                 parameter, you do not need a fully qualified domain name in the netboot server URL; you
                 need only the server's host name.
                 Note: Other parameters might be necessary for your interface. Entering the help
                 ifconfig command at the firmware prompt provides details.
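A manually configured connection might look like the following sketch, with hypothetical addresses that you would replace with values from your own network:

ifconfig e0M -addr=10.98.1.20 -mask=255.255.255.0 -gw=10.98.1.1 -dns=10.98.1.5 -domain=example.com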
14. Perform netboot by entering the following command on the first new node:
netboot http://path_to_the_web-accessible_directory/netboot/kernel
The path_to_the_web-accessible_directory should lead to where you downloaded the netboot.tgz file in Step
1a in the Preparing for netboot section.
Note: Do not interrupt the boot.
15. From the boot menu, select option (7) Install new software first.
This menu option downloads and installs the new Data ONTAP image to the boot device.
Note: Disregard the following message: "This procedure is not supported for Non-Disruptive Upgrade
on an HA pair". The note applies to nondisruptive upgrades of Data ONTAP, not to upgrades of controllers.
16. If you are prompted to continue the procedure, enter y, and when prompted for the package, enter the URL http://
path_to_the_web-accessible_directory/image.tgz.
17. Complete the following substeps:
a. Enter n to skip the backup recovery when you see the following prompt:
Do you want to restore the backup configuration now? {y|n}
n
The controller module reboots but stops at the boot menu because the boot device was reformatted and the configuration
data needs to be restored.
18. Boot the new node:
boot_ontap
The system displays the system ID of the node and information about its disks, as shown in the following example:
*> disk show -a
Local System ID: 536881109
DISK      OWNER                      POOL   SERIAL NUMBER  HOME                       DR HOME
--------  -------------------------  -----  -------------  -------------------------  -------
0b.02.23  nst-fas2520-2(536880939)   Pool0  KPG2RK6F       nst-fas2520-2(536880939)
0b.02.13  nst-fas2520-2(536880939)   Pool0  KPG3DE4F       nst-fas2520-2(536880939)
0b.01.13  nst-fas2520-2(536880939)   Pool0  PPG4KLAA       nst-fas2520-2(536880939)
......
0a.00.0   (536881109)                Pool0  YFKSX6JG       (536881109)
......
23. Reassign the new node's spares, disks belonging to the root, and any non-root aggregates that were not relocated to the node
earlier:
disk reassign -s new_node_sysid -d node3_sysid -p node2_sysid
When you run the disk reassign command on the new node, the -p option is the node2_sysid; when you run the disk
reassign command on the other node, the -p option is the node3_sysid.
For the new_node_sysid value, use the information captured in Step 2 of the Recording original node port information
section. To obtain the value for node3_sysid, use the sysconfig command.
The disk reassign command reassigns only those disks for which node1_sysid is the current owner.
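For example, using the system IDs shown elsewhere in this document purely as illustrative values, reassigning the original node's disks to the new node might look like this (the -p option applies only when an HA partner is involved):

*> disk reassign -s 118049495 -d 536881109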
The system displays the following message:
Partner node must not be in Takeover mode during disk reassignment from maintenance mode.
Serious problems could result!!
Do not proceed with reassignment if the partner is in takeover mode. Abort reassignment
(y/n)? n
24. Enter n.
The system displays the following message:
After the node becomes operational, you must perform a takeover and giveback of the HA
partner node to ensure disk reassignment is successful.
Do you want to continue (y/n)? y
25. Enter y.
The system displays the following message:
Disk ownership will be updated on all disks previously belonging to Filer with sysid
<sysid>.
Do you want to continue (y/n)? y
26. Enter y.
27. Verify that the controller and chassis are configured as ha:
ha-config show
Example
The following example shows the output of the ha-config show command:
*> ha-config show
Chassis HA configuration: ha
Controller HA configuration: ha
Systems record in a PROM whether they are in an HA pair or stand-alone configuration. The state must be the same on all
components within the stand-alone system or HA pair.
If the controller and chassis are not configured as ha, use the following commands to correct the configuration: ha-config
modify controller ha and ha-config modify chassis ha.
If you have a MetroCluster configuration, use the following commands to modify the controller and chassis: ha-config
modify controller mcc and ha-config modify chassis mcc.
28. Destroy the mailboxes on node3:
mailbox destroy local
29. Enter y at the prompt to confirm that you want to destroy the local mailboxes.
30. Exit Maintenance mode:
halt
33. If the date is incorrect, set the date by entering the following command at the boot environment prompt:
set date mm/dd/yyyy
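For example, to set the date to July 1, 2015 (an illustrative value), you would enter:

LOADER> set date 07/01/2015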
34. At the boot environment prompt, verify that the bootarg variable was set correctly:
printenv
35. Transfer all remaining cables from the old node to the corresponding ports on the new node.
This includes Fibre Channel or other external disk shelf cables, and Ethernet cables. You need to ensure that all the cables
are correctly connected. For information about cabling, see the installation and setup instructions for the node model.
You must have completed the previous steps in this procedure up to this point.
Choices
You must have done the following before proceeding with this section:
Made sure that the SATA or SSD drive carriers from the FAS2220 system are compatible with the new disk shelf
Check the Hardware Universe on the NetApp Support Site for compatible disk shelves
Made sure that there is a compatible disk shelf attached to the new system
Made sure that the disk shelf has enough free bays to accommodate the SATA or SSD drive carriers from the FAS2220
system
You cannot transfer SAS disk drives from a FAS2220 system to a disk shelf attached to the new nodes.
Steps
The cam handle on the carrier springs open partially, and the carrier releases from the midplane.
3. Pull the cam handle to its fully open position to unseat the carrier from the midplane and gently slide the carrier out of the
disk shelf.
Attention: Always use two hands when removing, installing, or carrying a disk drive. However, do not place your hands on the disk drive boards exposed on the underside of the carrier.
4. With the cam handle in the open position, insert the carrier into a slot in the new disk shelf, firmly pushing until the carrier
stops.
Caution: Use two hands when inserting the carrier.
5. Close the cam handle so that the carrier is fully seated in the midplane and the handle clicks into place.
Be sure you close the handle slowly so that it aligns correctly with the face of the carrier.
6. Repeat Step 2 through Step 5 for all of the disk drives that you are moving to the new system.
Converting the FAS2240 system to a disk shelf and attaching it to the new system
After you complete the upgrade, you can convert the FAS2240 system to a disk shelf and attach it to the new system to provide
additional storage.
Before you begin
You must have upgraded the FAS2240 system before converting it to a disk shelf. The FAS2240 system must be powered down
and uncabled.
Steps
1. Replace the controller modules in the FAS2240 system with IOM6 modules.
2. Set the disk shelf ID.
Each disk shelf, including the FAS2240 chassis, requires a unique ID.
3. Reset other disk shelf IDs as needed.
4. Turn off power to any disk shelves connected to the new node and then turn the shelves back on.
5. Cable the converted FAS2240 disk shelf to a SAS port on the new system, and, if you are using ACP cabling, to the ACP
port on the new system.
Note: If the new system does not have a dedicated onboard network interface for ACP for each controller, you must
dedicate one for each controller at system setup. See the Installation and Setup Instructions for the new system and the
Universal SAS and ACP Cabling Guide for cabling information. Also consult the Clustered Data ONTAP HighAvailability Configuration Guide.
6. Turn on the power to the converted FAS2240 disk shelf and any other disk shelves attached to the new node.
7. Turn on the power to the new node and then interrupt the boot process by pressing Ctrl-C to access the boot environment
prompt.
1. Boot Data ONTAP on the new node by entering the following command at the boot environment prompt:
boot_ontap maint
3. Display the node system ID and then reassign the node's spares, disks belonging to the root volume, and any SFO aggregates
by entering the following commands:
disk reassign -s original_sysid -d new_sysid
The disk reassign command will reassign only those disks for which original_sysid is the current owner.
4. Ensure that the old root volume is still marked as root:
aggr options aggr0 root
halt
The node reboots and then restarts and displays the boot menu.
2. From the boot menu, select (6) Update flash from backup config., as shown in the following example:
Please choose one of the following:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 6
3. Enter y.
The update flash process runs for several minutes, and then the system reboots. The startup process then asks you to confirm
the system ID mismatch.
4. Confirm the mismatch as shown in the following example:
WARNING: System id mismatch. This usually occurs when replacing CF or NVRAM cards!
Override system id? {y|n} [n] y
You can capture information about the ports for the new node from the Hardware Universe at hwu.netapp.com. You use the
information later in this section.
About this task
The software configuration of the new node must match the physical connectivity of the node and IP connectivity must be
restored before you continue with the upgrade.
Port settings might vary, depending on the model of the node. You must make the original node's port and LIF configuration
compatible with what you plan the new node's configuration to be. This is because the new node replays the same configuration
when it boots: when you boot the new node, Data ONTAP tries to host LIFs on the same ports that were used on the original node.
If the physical ports on the original node do not map directly to the physical ports on the new node, software configuration
changes will be required to restore cluster, management, and network connectivity after the boot. In addition, if the cluster ports
on the original node do not directly map to the cluster ports on the new node, the new node may not automatically rejoin quorum
when it is rebooted until a software configuration change is made to host the cluster LIFs on the correct physical ports.
Steps
1. Record all of the original node's cabling information for that node, the ports, broadcast domains, and IPspaces, in this table:
LIF                  Original node  Original node  Original broadcast  New node  New node  New node broadcast  SAN/NAS
                     ports          IPspaces       domains             ports     IPspaces  domains
Cluster 1                                                                                                      NA
Cluster 2                                                                                                      NA
Cluster 3                                                                                                      NA
Cluster 4                                                                                                      NA
Cluster 5                                                                                                      NA
Cluster 6                                                                                                      NA
Node management                                                                                                NA
Cluster management                                                                                             NA
Data 1
Data 2
Data 3
Data 4
See the Recording original node port information section for the steps to obtain this information.
2. Record all the cabling information for the new node, the ports, broadcast domains, and IPspaces in the previous table using
the same procedure; see the Recording original node port information section for the steps to obtain this information.
3. To place the new node into quorum, perform the following steps:
a. Boot the new node. See the Booting and setting up the new nodes section to boot the node if you have not already done
so.
b. Add the correct ports to the Cluster broadcast domain, entering the following command once for each cluster port:
network port modify -node node_name -port port_name -ipspace Cluster -mtu 9000
Example
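A hypothetical command for this substep, assuming the new node's cluster ports are e1a and e1b (names chosen only for illustration), might look like the following, run once for each port:

node::> network port modify -node node-new -port e1a -ipspace Cluster -mtu 9000
node::> network port modify -node node-new -port e1b -ipspace Cluster -mtu 9000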
c. Migrate the LIFs to the new ports, once for each LIF:
network interface migrate -vserver vserver_name -lif lif_name -source-node node-new -destination-node node-new -destination-port port_name
Note: If you cannot migrate a cluster LIF, make sure that the node has another network port that can communicate on the
Cluster network, and migrate the cluster LIF to that network port. After the node has joined quorum, you can reboot the node
to resolve the issue if Data ONTAP has not discovered the network port.
If you cannot migrate a data LIF, delete the LIF and re-create it on a functioning network port.
Note: You cannot migrate SAN data LIFs. Those LIFs need to be taken offline before you can change their home
ports.
d. Modify the home port of the cluster LIFs:
network interface modify -vserver Cluster -lif lif_name -home-port port_name
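For example, assuming a hypothetical cluster LIF named clus1 that should now be homed on port e1a, the command might look like this:

node::> network interface modify -vserver Cluster -lif clus1 -home-port e1a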
4. If there were any ports on the original node that no longer exist on the new node, delete them:
network port delete -node node_name -port port_name
5. Adjust the node-management broadcast domain and migrate the node-management and cluster-management LIFs, if
necessary, as follows:
a. Display the home port of a LIF:
network interface show -fields home-node,home-port
6. Adjust the intercluster broadcast domains and migrate the intercluster LIFs, if necessary, using the same commands shown in
Step 5.
7. Adjust any other broadcast domains and migrate the data LIFs, if necessary, using the same commands shown in Step 5.
8. Adjust all the LIF failover groups:
network interface modify -vserver vserver_name -lif lif_name -failover-group failover_group -failover-policy failover_policy
Example
The following command sets the failover policy to broadcast-domain-wide and uses the ports in failover group fg1 as
failover targets for LIF data1 on node3:
network interface modify -vserver node3 -lif data1 -failover-policy broadcast-domain-wide -failover-group fg1
See the Configuring failover settings on a LIF section in the Clustered Data ONTAP Network Management Guide or
Clustered Data ONTAP Commands: Manual Page Reference for more information.
9. Verify the changes on the new node:
network port show -node node-new
2. Examine the output of the network interface show command to verify that the logical interfaces are operational:
Example
node::> network interface show
            Logical      Status      Network           Current  Current  Is
Vserver     Interface    Admin/Oper  Address/Mask      Node     Port     Home
----------- -----------  ----------  ----------------  -------  -------  ----
node1
            mgmt1        up/down     10.98.201.97/21   node1    e0M      true
vs9
            node1_data1  up/up       10.90.225.50/20   node1    e0d      true
            node1_data2  up/up       10.90.225.51/20   node1    e0f      true
            node2_data1  up/up       10.90.225.53/20   node2    e0d      true
            node2_data2  up/up       10.90.225.52/20   node2    e0f      true
5 entries were displayed.
Note: After you enter the network interface show command, you might not see the output because the node is
completing startup. If you do not see the output, wait 5 to 10 seconds and then run the network interface show
command again.
3. Delete any unused ports on the new node by completing the following substeps:
a. Access the advanced privilege level by entering the following command on either node:
set -privilege advanced
b. Enter y.
c. Enter the following command, once for each port that you want to delete:
network port delete -node node_name -port port_name
4. Delete any unused network interfaces on the new node by completing the following substeps:
If a LIF you want to remove is in a port set, you must delete the LIF from the port set before you can remove it.
a. If you are in a SAN environment, check to see if any unused LIFs are in port sets by entering the following command and
examining its output:
lun portset show
b. If any unused LIFs are in port sets, remove the LIFs from the port sets by entering the following command:
lun portset remove -vserver vserver_name -portset portset_name -port-name port_name
c. Remove each unused LIF by entering the following command, once for each LIF:
network interface delete -vserver vserver_name -lif LIF_name
5. If the name of the node includes the platform name (for example, node-8060-1), rename the node by entering the following
command at the command prompt:
system node rename -node current_name -newname new_name
See the Clustered Data ONTAP System Administration Guide for Cluster Administrators for more information about the
system node rename command.
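For example, assuming the node is currently named node-8060-1 and you want the hypothetical name node1, you would enter:

node::> system node rename -node node-8060-1 -newname node1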
6. If the node names are changing, enter the following command to find the reporting nodes list:
lun mapping show -vserver vserver_name -path lun_path -igroup igroup_name
7. To modify the reporting list to add the local HA pairs for each LUN:
lun mapping add-reporting-nodes -path * -local-nodes
9. If you renamed the node in Step 5, check that the name took effect by entering the following command and examining its
output:
system node show
The system node show command displays information about the node.
10. If the name of the Storage Virtual Machine (SVM, formerly known as Vserver) includes a platform name, rename it by
entering the following command:
vserver rename -vserver current_vserver_name -newname new_vserver_name
See the Clustered Data ONTAP System Administration Guide for Cluster Administrators for information about the SPs and
the Clustered Data ONTAP Commands: Manual Page Reference for detailed information about the system service-processor network modify command.
12. Install new licenses on the new node by entering the following command:
system license add license_code,license_code,license_code...
The parameter accepts a list of 28-character uppercase alphabetic keys. You can add one license at a time or multiple
licenses, separating each license key with a comma or space.
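For example, with placeholder keys (real keys are 28 uppercase letters issued for your system), the command might look like this:

node::> system license add AAAAAAAAAAAAAAAAAAAAAAAAAAAA,BBBBBBBBBBBBBBBBBBBBBBBBBBBB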
1. Set up AutoSupport, following the instructions in the Clustered Data ONTAP System Administration Guide for Cluster
Administrators.
2. Send a post-upgrade AutoSupport message to NetApp by entering the following command:
system node autosupport invoke -node node_name -type all -message "node_name successfully
upgraded from platform_old to platform_new"
All the disks on the storage system must be encryption-enabled before you set up Storage Encryption on the new node.
About this task
You can skip this section if the system that you upgraded to does not have Storage Encryption enabled.
If you used Storage Encryption on the original system and migrated the disk shelves to the new system, you can reuse the SSL
certificates that are stored on migrated disk drives for Storage Encryption functionality on the upgraded system. However, you
should check that the SSL certificates are present on the migrated disk drives. If they are not present you will need to obtain
them.
Note: Step 1 through Step 3 are only the overall tasks required for configuring Storage Encryption. You need to follow the
detailed instructions for each task in the Clustered Data ONTAP Software Setup Guide.
Steps
1. Obtain and install private and public SSL certificates for the storage system and a private SSL certificate for each key
management server that you plan to use.
Requirements for obtaining the certificates and instructions for installing them are contained in the Clustered Data ONTAP
Software Setup Guide.
2. Collect the information required to configure Storage Encryption on the new node.
This includes the network interface name, the network interface IP address, and the IP address for external key management
server. The required information is contained in the Clustered Data ONTAP Software Setup Guide.
3. Launch and run the Storage Encryption setup wizard, responding to the prompts as appropriate.
After you finish
See the Clustered Data ONTAP Physical Storage Management Guide for information about managing Storage Encryption on
the updated system.
Configuring FC ports
If the new node has Fibre Channel (FC) ports, either onboard or on an FC adapter, you must set port configurations on the node
before you bring it into service because the ports are not preconfigured. If the ports are not configured, you might experience a
disruption in service.
Before you begin
You must have the values of the FC port settings for the new node that you saved in the Preparing to upgrade the node section.
About this task
You can skip this section if your system does not have FC configurations. If your system has onboard CNA ports or a CNA
adapter, you configure them in the next section.
Important: If your system has storage disks, you enter the commands in this section at the cluster prompt. If you have a V-Series system connected to storage arrays, you enter the commands in this section in Maintenance mode.
Steps
Then...
Go to Step 3.
Go to Step 2.
Then...
ucadmin show
The system displays information about all FC and converged network adapters on the system.
5. Compare the FC settings on the new node with the settings that you recorded from the original node.
6. Take one of the following actions:

If the default FC settings on the new node are...                   Then...
The same as the settings that you recorded from the original node   Go to Step 11.
Different from the settings that you recorded from the original node  Go to Step 7.
10. After you enter the command, wait until the system stops at the boot environment prompt.
11. Boot the new node and access Maintenance mode by entering the following command at the boot environment prompt:
boot_ontap maint
Then...
If the new node has a CNA card or CNA onboard ports, go to the Configuring CNA ports section.
If the new node does not have a CNA card or CNA onboard ports, skip the Configuring CNA ports section and go to the Mapping network ports section.

Then...
If the new node has a CNA card or CNA onboard ports, go to the Configuring CNA ports section.
If the new node does not have a CNA card or CNA onboard ports, skip the Configuring CNA ports section and return to Step 4 in the Booting and setting up the new node section.
Configuring CNA ports
You must have the correct SFP+ modules for the CNA ports.
CNA ports can be configured into native Fibre Channel (FC) mode or CNA mode. FC mode supports FC initiator and FC target;
CNA mode allows concurrent NIC and FCoE traffic over the same 10-GbE SFP+ interface and supports FC target.
Note: NetApp marketing materials might use the term UTA2 to refer to CNA adapters and ports. However, the CLI and
product documentation use the term CNA.
CNA ports might be on an adapter or onboard the controller and have the following configurations:
CNA cards ordered when the controller is ordered are configured before shipment to have the personality you request.
CNA cards ordered separately from the controller are shipped with the default FC target personality.
Onboard CNA ports on new controllers are configured before shipment to have the personality you request.
However, you should check the configuration of the CNA ports on the node and change them, if necessary.
Steps
3. Check how the ports are currently configured by entering the following command on the new node:
If the system you are upgrading...
Then...
unified-connect show
The system displays information about all the FC and converged network adapters, as shown in the following example:

*> ucadmin show
         Current  Current    Pending  Pending
Adapter  Mode     Type       Mode     Type     Status
-------  -------  ---------  -------  -------  ------
...
0f       fc       initiator  -        -        online
0g       cna      target     -        -        online
0h       cna      target     -        -        online
4. If the current SFP+ module does not match the desired use, replace it with the correct SFP+ module.
5. Examine the output of the ucadmin show or system node hardware unified-connect show command and
determine whether the CNA ports have the personality you want.
6. Take one of the following actions:

If the CNA ports...                        Then...
Do not have the personality you want       Go to Step 7.
Already have the personality you want      Go to Step 9.
7. If the CNA adapter is online, take it offline by entering one of the following commands:
Adapters in target mode are automatically placed offline in Maintenance mode.
8. If the current configuration does not match the desired use, enter the following commands to change the configuration as
needed:
If the system that you are
upgrading...
Then...
In either command:
9. Verify the settings by entering one of the following commands and examining its output:
If the system that you are
upgrading...
Then...
Example
The output in the following example shows that the FC4 type of adapter 1b is changing to initiator and that the mode of
adapters 2a and 2b is changing to cna:
         Current  Current    Pending  Pending
Adapter  Mode     Type       Mode     Type       Status
-------  -------  ---------  -------  ---------  ------
1a       fc       initiator  -        -          online
1b       fc       target     -        initiator  online
2a       fc       target     cna      -          online
2b       fc       target     cna      -          online
If the information displayed is...    Then...
Correct                               a. Select Decommission this system in the Product Tool Site drop-down menu.
                                      b. Go to Step 5.
Not correct                           a. Click the feedback link to open the form for reporting the problem.
5. On the Decommission Form page, fill out the form and click Submit.
Trademark information
NetApp, the NetApp logo, Go Further, Faster, ASUP, AutoSupport, Campaign Express, Cloud ONTAP, clustered Data ONTAP,
Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache,
FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster,
MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, SANtricity, SecureShare, Simplicity, Simulate
ONTAP, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect,
SnapRestore, Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, and WAFL are trademarks or
registered trademarks of NetApp, Inc., in the United States, and/or other countries. A current list of NetApp trademarks is
available on the web at http://www.netapp.com/us/legal/netapptmlist.aspx.
Cisco and the Cisco logo are trademarks of Cisco in the U.S. and other countries. All other brands or products are trademarks or
registered trademarks of their respective holders and should be treated as such.