AHV Admin Guide v6.0
VLAN Configuration................................................................................................................................... 55
IGMP Snooping.............................................................................................................................................57
Switch Port Analyzer on AHV Hosts..................................................................................................59
Enabling RSS Virtio-Net Multi-Queue by Increasing the Number of VNIC Queues...................... 66
Changing the IP Address of an AHV Host................................................................................................... 69
Copyright................................................................................................................. 148
License........................................................................................................................................................................148
Conventions..............................................................................................................................................................148
Version........................................................................................................................................................................148
1
AHV OVERVIEW
As the default option for Nutanix HCI, the native Nutanix hypervisor, AHV, represents a unique
approach to virtualization that offers the powerful virtualization capabilities needed to deploy
and manage enterprise applications. AHV complements the HCI value by integrating native
virtualization with networking, infrastructure, and operations management in a single
intuitive interface: Nutanix Prism.
Virtualization teams find AHV easy to learn and to transition to from legacy virtualization
solutions because it offers familiar workflows for VM operations, live migration, VM high
availability, and virtual network management. AHV includes resiliency features such as high availability and dynamic
scheduling without the need for additional licensing, and security is integral to every aspect
of the system from the ground up. AHV also incorporates the optional Flow Security and
Networking, allowing easy access to hypervisor-based network microsegmentation and
advanced software-defined networking.
See the Field Installation Guide for information about how to deploy and create a cluster. Once
you create the cluster by using Foundation, you can use this guide to perform day-to-day
management tasks.
Limitations
Nested Virtualization
Nutanix does not support nested virtualization (nested VMs) in an AHV cluster.
Storage Overview
AHV uses a Distributed Storage Fabric to deliver data services such as storage provisioning,
snapshots, clones, and data protection to VMs directly.
In AHV clusters, AOS passes all disks to the VMs as raw SCSI block devices, which keeps
the I/O path lightweight and optimized. Each AHV host runs an iSCSI redirector, which
establishes a highly resilient storage path from each VM to storage across the Nutanix cluster.
QEMU is configured with the iSCSI redirector as the iSCSI target portal. Upon a login request,
the redirector performs an iSCSI login redirect to a healthy Stargate (preferably the local one).
AHV Turbo
AHV Turbo represents significant advances to the data path in AHV. AHV Turbo provides an
I/O path that bypasses QEMU and services storage I/O requests, which lowers CPU usage and
increases the amount of storage I/O available to VMs.
AHV Turbo is enabled by default on VMs running in AHV clusters.
When using QEMU, all I/O travels through a single queue, which can impact performance. The
AHV Turbo design introduces a multi-queue approach to allow data to flow from a VM to the
storage, resulting in a much higher I/O capacity. The storage queues scale out automatically
to match the number of vCPUs configured for a given VM, making even higher performance
possible as the workload scales up.
The AHV Turbo technology is transparent to the VMs, but you can achieve even greater
performance if:
• The VM has multi-queue enabled and an optimum number of queues. See Enabling RSS
Virtio-Net Multi-Queue by Increasing the Number of VNIC Queues on page 66 for
instructions about how to enable multi-queue and set an optimum number of queues.
Consult your Linux distribution documentation to make sure that the guest operating system
fully supports multi-queue before you enable it.
• You have installed the latest Nutanix VirtIO package for Windows VMs. Download the VirtIO
package from the Downloads section of Nutanix Support Portal. No additional configuration
is required.
• The VM has more than one vCPU.
• The workloads are multi-threaded.
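For example, from within a Linux guest you can confirm how many queue pairs a VNIC currently exposes before and after you enable multi-queue. A minimal check, assuming the guest interface is named eth0 and that the ethtool utility is installed in the guest:
$ ethtool -l eth0    # lists the pre-set maximum and currently configured channel (queue) counts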
Acropolis Dynamic Scheduling (ADS)
• ADS improves the initial placement of the VMs depending on the VM configuration.
• Nutanix Volumes uses ADS for balancing sessions of the externally available iSCSI targets.
Note: ADS honors all the configured host affinities, VM-host affinities, VM-VM anti-affinity policies,
and HA policies.
By default, ADS is enabled, and Nutanix recommends that you keep this feature enabled.
However, see Disabling Acropolis Dynamic Scheduling on page 7 for information about how to
disable the ADS feature. See Enabling Acropolis Dynamic Scheduling on page 7 for
information about how to re-enable the ADS feature if you previously disabled it.
ADS monitors the following resources:
• VM CPU utilization: the total CPU usage of each guest VM.
• Storage CPU utilization: the storage controller (Stargate) CPU usage.
Note: For a storage hotspot, ADS looks at the last 40 minutes of data and uses a smoothing
algorithm that weights the most recent data most heavily. For a CPU hotspot, ADS looks at the
last 10 minutes of data only, that is, the average CPU usage over the last 10 minutes.
Following are possible reasons why the VMs did not migrate even though there is an obvious hotspot:
• A large VM (for example, 16 vCPUs) is running at 100% usage and accounts for 75% of the
usage of its AHV host (which is also at 100% usage).
• The other hosts are loaded at approximately 40% usage.
In these situations, the other hosts cannot accommodate the large VM without causing
contention there as well. Lazan does not prioritize one host or VM over others for contention,
so it leaves the VM where it is hosted.
• The number of all-flash nodes in the cluster is less than the replication factor.
If the cluster has an RF2 configuration, the cluster must have a minimum of two all-flash
nodes for successful migration of VMs on all the all-flash nodes.
Migrations Audit
Prism Central displays the list of all the VM migration operations generated by ADS. In Prism
Central, go to Menu -> Activity -> Audits to display the VM migrations list. You can filter the
migrations by clicking Filters and selecting Migrate in the Operation Type tab. The list displays
all the VM migration tasks created by ADS with details such as the source and target host, VM
name, and time of migration.
Procedure
2. Disable ADS.
nutanix@cvm$ acli ads.update enable=false
Even after you disable the feature, the checks for contentions or hotspots continue to run in
the background, and if any anomalies are detected, an alert is raised in the Alerts dashboard.
However, ADS takes no action to resolve the contentions. You must take remedial action
manually or re-enable the feature.
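If you later decide to turn the feature back on, the same aCLI setting can be switched back. A minimal sketch:
nutanix@cvm$ acli ads.update enable=true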
Procedure
2. The Hypervisor Summary widget on the top left side of the Home page displays the
AHV version.
Procedure
3. Click the host whose hypervisor version you want to view.
4. The Host detail view page displays the Properties widget that lists the Hypervisor Version.
Nutanix Software
Modifying any of the following Nutanix software settings may inadvertently constrain
performance of your Nutanix cluster or render the Nutanix cluster inoperable.
AHV Settings
Nutanix AHV is a cluster-optimized hypervisor appliance.
Alteration of the hypervisor appliance (unless advised by Nutanix Technical Support) is
unsupported and may result in the hypervisor or VMs functioning incorrectly.
Unsupported alterations include (but are not limited to):
Controller VM Access
Although each host in a Nutanix cluster runs a hypervisor independent of other hosts in the
cluster, some operations affect the entire cluster.
Most administrative functions of a Nutanix cluster can be performed through the web console
(Prism); however, some management tasks require access to the Controller
VM (CVM) over SSH. Nutanix recommends restricting CVM SSH access with password or key
authentication.
This topic provides information about how to access the Controller VM as an admin user and
nutanix user.
You can perform most administrative functions of a Nutanix cluster through the Prism web
console or REST API. Nutanix recommends using these interfaces whenever possible and
disabling Controller VM SSH access with password or key authentication. Some functions,
however, require logging on to a Controller VM with SSH. Exercise caution whenever
connecting directly to a Controller VM as it increases the risk of causing cluster issues.
Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does not
import or change any locale settings. The Nutanix software is not localized, and running the
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment
variables are set to anything other than en_US.UTF-8, reconnect with an SSH
configuration that does not import or change any locale settings.
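For example, a quick check from within the SSH session (the output shown here is illustrative and trimmed):
nutanix@cvm$ /usr/bin/locale
LANG=en_US.UTF-8
...
Every LANG and LC_* variable in the output should report en_US.UTF-8.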
• As an admin user, you cannot access nCLI by using the default credentials. If you are
logging in as the admin user for the first time, you must log on through the Prism web
console or SSH to the Controller VM. Also, you cannot change the default password
of the admin user through nCLI. To change the default password of the admin user,
you must log on through the Prism web console or SSH to the Controller VM.
• When you attempt to log in to the Prism web console for the first time after
you upgrade to AOS 5.1 from an earlier AOS version, you can use your existing admin
user password to log in and then change the existing password (you are prompted)
to adhere to the password complexity requirements. However, if you are logging
in to the Controller VM with SSH for the first time after the upgrade as the admin
user, you must use the default admin user password (Nutanix/4u) and then change
the default password (you are prompted) to adhere to the Controller VM Password
Complexity Requirements.
• You cannot delete the admin user account.
• The default password expiration age for the admin user is 60 days. You can configure
the minimum and maximum password expiration days based on your security
requirement.
When you change the admin user password, you must update any applications and scripts
using the admin user credentials for authentication. Nutanix recommends that you create a user
assigned with the admin role instead of using the admin user for authentication. The Prism Web
Console Guide describes authentication and roles.
Following are the default credentials to access a Controller VM:
admin Nutanix/4u
1. Log on to the Controller VM with SSH by using the management IP address of the Controller
VM and the following credentials.
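For example, a typical SSH login takes the following form (the default admin password Nutanix/4u applies only if it has not yet been changed):
$ ssh admin@<CVM-management-IP-address>
admin@<CVM-management-IP-address> password: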
2. Respond to the prompts, providing the current and new admin user password.
Changing password for admin.
Old Password:
New password:
Retype new password:
Password changed.
Note:
• As a nutanix user, you cannot access nCLI by using the default credentials. If you
are logging in as the nutanix user for the first time, you must log on through the
Prism web console or SSH to the Controller VM. Also, you cannot change the default
password of the nutanix user through nCLI. To change the default password of
the nutanix user, you must log on through the Prism web console or SSH to the
Controller VM.
• When you attempt to log in to the Prism web console for the first time after
you upgrade the AOS from an earlier AOS version, you can use your existing nutanix
user password to log in and then change the existing password (you are prompted)
to adhere to the password complexity requirements. However, if you are logging
in to the Controller VM with SSH for the first time after the upgrade as the nutanix
user, you must use the default nutanix user password (nutanix/4u) and then change
the default password (you are prompted) to adhere to the Controller VM Password
Complexity Requirements on page 15.
• You cannot delete the nutanix user account.
When you change the nutanix user password, you must update any applications and scripts
using the nutanix user credentials for authentication. Nutanix recommends that you create a
user assigned with the nutanix role instead of using the nutanix user for authentication. The
Prism Web Console Guide describes authentication and roles.
Following are the default credentials to access a Controller VM:
nutanix nutanix/4u
Procedure
1. Log on to the Controller VM with SSH by using the management IP address of the Controller
VM and the following credentials.
2. Respond to the prompts, providing the current and new nutanix user password.
Changing password for nutanix.
Old Password:
New password:
Retype new password:
Password changed.
Note: From AOS 5.15.5, AHV has two new user accounts—admin and nutanix.
• root—It is used internally by the AOS. The root user is used for the initial access and
configuration of the AHV host.
• admin—It is used to log on to an AHV host. The admin user is recommended for accessing the
AHV host.
• nutanix—It is used internally by the AOS and must not be used for interactive logon.
Exercise caution whenever connecting directly to an AHV host, as it increases the risk of causing
cluster issues.
Following are the default credentials to access an AHV host:
nutanix nutanix/4u
Procedure
1. Use SSH and log on to the AHV host using the root account.
$ ssh root@<AHV Host IP Address>
Nutanix AHV
root@<AHV Host IP Address> password: # default password nutanix/4u
Procedure
1. Log on to the AHV host with SSH using the admin account.
$ ssh admin@<AHV Host IP Address>
2. Enter the admin user password configured in the Initial Configuration on page 16.
admin@<AHV Host IP Address> password:
Procedure
1. Log on to the AHV host using the admin account with SSH.
2. Enter the admin user password configured in the Initial Configuration on page 16.
See AHV Host Password Complexity Requirements on page 18 to set a secure password.
Procedure
1. Log on to the AHV host using the admin account with SSH.
4. Respond to the prompts and provide the current and new root password.
Changing password for root.
New password:
Retype new password:
See AHV Host Password Complexity Requirements on page 18 to set a secure password.
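The password change step itself is not shown above; a hedged sketch, assuming the admin account can elevate with sudo on the AHV host:
admin@ahv$ sudo passwd root    # assumption: admin has sudo rights; this produces the prompts shown above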
Procedure
1. Log on to the AHV host using the admin account with SSH.
4. Respond to the prompts and provide the current and new root password.
Changing password for nutanix.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
See AHV Host Password Complexity Requirements on page 18 to set a secure password.
• At least 15 characters.
• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
• At least eight characters different from the previous password.
• At most three consecutive occurrences of any given character.
• At most four consecutive occurrences of any given character class.
The password cannot be the same as the last 24 passwords.
Note: If you see any critical alerts, resolve the issues by referring to the indicated KB articles. If
you are unable to resolve any issues, contact Nutanix Support.
Procedure
Note: If you receive alerts indicating expired encryption certificates or that a key manager is not
reachable, resolve these issues before you shut down the cluster. If you do not resolve these
issues, cluster data loss might occur.
» In the Prism Element web console, in the Home page, check the status of the Data
Resiliency Status dashboard.
Verify that the status is OK. If the status is anything other than OK, resolve the indicated
issues before you perform any maintenance activity.
» Log on to a Controller VM (CVM) with SSH and check the fault tolerance status of the
cluster.
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node
The value of the Current Fault Tolerance column must be at least 1 for all the nodes in the
cluster.
CAUTION: Verify the data resiliency status of your cluster. If the cluster has only replication
factor 2 (RF2), you can shut down only one node in each cluster. If more than one node in an
RF2 cluster must be shut down, shut down the entire cluster instead.
See Verifying the Cluster Health on page 19 to check if the cluster can tolerate a single-node
failure. Do not proceed if the cluster cannot tolerate a single-node failure.
Procedure
2. Determine the IP address of the node you want to put into maintenance mode.
nutanix@cvm$ acli host.list
Note the value of Hypervisor IP for the node you want to put in maintenance mode.
Note: Never put the Controller VM and AHV host into maintenance mode on single-node
clusters. Nutanix recommends shutting down user VMs before proceeding with disruptive changes.
Replace hypervisor-IP-address with either the IP address or host name of the AHV host you
want to shut down.
By default, the non_migratable_vm_action parameter is set to block, which means VMs with
CPU passthrough, PCI passthrough, and host affinity policies are not migrated or shut down
when you put a node into maintenance mode.
If you want to automatically shut down such VMs, set the non_migratable_vm_action
parameter to acpi_shutdown.
Agent VMs are always shut down if you put a node in maintenance mode and are powered
on again after exiting maintenance mode.
For example:
nutanix@cvm$ acli host.enter_maintenance_mode 10.x.x.x
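If you also want non-migratable VMs to be shut down automatically, the parameter described above can be appended to the same call; a hedged sketch:
nutanix@cvm$ acli host.enter_maintenance_mode 10.x.x.x non_migratable_vm_action=acpi_shutdown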
5. See Verifying the Cluster Health on page 19 to once again check if the cluster can tolerate
a single-node failure.
Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.2
Only the Scavenger, Genesis, and Zeus processes must be running (process ID is displayed
next to the process name).
Do not continue if the CVM has failed to enter the maintenance mode, because it can cause a
service interruption.
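One way to confirm which services are still running on that CVM is to check the service status locally; a hedged example:
nutanix@cvm$ genesis status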
What to do next
Proceed to remove the node from the maintenance mode. See Exiting a Node from the
Maintenance Mode on page 23 for more information.
Procedure
a. From any other CVM in the cluster, run the following command to exit the CVM from the
maintenance mode.
nutanix@cvm$ ncli host edit id=host-ID enable-maintenance-mode=false
Note: The command fails if you run the command from the CVM that is in the maintenance
mode.
Do not continue if the CVM has failed to exit the maintenance mode.
a. From any CVM in the cluster, run the following command to exit the AHV host from the
maintenance mode.
nutanix@cvm$ acli host.exit_maintenance_mode host-ip
In the output that is displayed, ensure that node_state equals kAcropolisNormal and
schedulable equals True.
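The output referred to above comes from querying the host with aCLI; a hedged example of such a check:
nutanix@cvm$ acli host.get host-ip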
Contact Nutanix Support if any of the steps described in this document produce unexpected
results.
CAUTION: Verify the data resiliency status of your cluster. If the cluster has only replication
factor 2 (RF2), you can shut down only one node in each cluster. If more than one node in an
RF2 cluster must be shut down, shut down the entire cluster instead.
See Verifying the Cluster Health on page 19 to check if the cluster can tolerate a single-node
failure. Do not proceed if the cluster cannot tolerate a single-node failure.
• VMs with CPU passthrough, PCI passthrough, and host affinity policies are not
migrated to other hosts in the cluster. You can shut down such VMs by setting the
non_migratable_vm_action parameter to acpi_shutdown.
• Agent VMs are always shut down if you put a node in maintenance mode and are powered
on again after exiting maintenance mode.
Perform the following procedure to shut down a node.
Note the value of Hypervisor IP for the node you want to shut down.
c. Put the node into maintenance mode.
See Putting a Node into Maintenance Mode on page 20 for instructions about how to
put a node into maintenance mode.
d. Shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now
See Starting a Node in a Cluster (AHV) on page 25 for instructions about how to start a
CVM.
What to do next
See Starting a Node in a Cluster (AHV) on page 25 for instructions about how to start a
node, including how to start a CVM and how to exit a node from maintenance mode.
Procedure
If the Nutanix cluster is running properly, output similar to the following is displayed for each
node in the Nutanix cluster.
CVM: <host IP-Address> Up
Zeus UP [9935, 9980, 9981, 9994, 10015, 10037]
Scavenger UP [25880, 26061, 26062]
Xmount UP [21170, 21208]
SysStatCollector UP [22272, 22330, 22331]
IkatProxy UP [23213, 23262]
IkatControlPlane UP [23487, 23565]
SSLTerminator UP [23490, 23620]
SecureFileSync UP [23496, 23645, 23646]
Medusa UP [23912, 23944, 23945, 23946, 24176]
DynamicRingChanger UP [24314, 24404, 24405, 24558]
Pithos UP [24317, 24555, 24556, 24593]
InsightsDB UP [24322, 24472, 24473, 24583]
Athena UP [24329, 24504, 24505]
Mercury UP [24338, 24515, 24516, 24614]
Mantle UP [24344, 24572, 24573, 24634]
VipMonitor UP [18387, 18464, 18465, 18466, 18474]
Stargate UP [24993, 25032]
InsightsDataTransfer UP [25258, 25348, 25349, 25388, 25391, 25393,
25396]
Ergon UP [25263, 25414, 25415]
Cerebro UP [25272, 25462, 25464, 25581]
Chronos UP [25281, 25488, 25489, 25547]
Curator UP [25294, 25528, 25529, 25585]
Prism UP [25718, 25801, 25802, 25899, 25901, 25906,
25941, 25942]
CIM UP [25721, 25829, 25830, 25856]
AlertManager UP [25727, 25862, 25863, 25990]
Arithmos UP [25737, 25896, 25897, 26040]
Catalog UP [25749, 25989, 25991]
Acropolis UP [26011, 26118, 26119]
Uhura UP [26037, 26165, 26166]
Snmp UP [26057, 26214, 26215]
NutanixGuestTools UP [26105, 26282, 26283, 26299]
MinervaCVM UP [27343, 27465, 27466, 27730]
ClusterConfig UP [27358, 27509, 27510]
Aequitas UP [27368, 27567, 27568, 27600]
APLOSEngine UP [27399, 27580, 27581]
APLOS UP [27853, 27946, 27947]
Lazan UP [27865, 27997, 27999]
Delphi UP [27880, 28058, 28060]
Flow UP [27896, 28121, 28124]
Anduril UP [27913, 28143, 28145]
XTrim UP [27956, 28171, 28172]
ClusterHealth UP [7102, 7103, 27995, 28209,28495, 28496,
28503, 28510,
28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646,
28648, 28792,
If you receive any failure or error messages, resolve those issues by referring to the KB
articles indicated in the output of the NCC check results. If you are unable to resolve these
issues, contact Nutanix Support.
Warning: If you receive alerts indicating expired encryption certificates or that a key
manager is not reachable, resolve these issues before you shut down the cluster. If you do not
resolve these issues, cluster data loss might occur.
Procedure
1. Shut down the services or VMs associated with AOS features or Nutanix products. For
example, shut down all the Nutanix file server VMs (FSVMs). See the documentation of those
features or products for more information.
2. Shut down all the guest VMs in the cluster in one of the following ways.
» Shut down the guest VMs from within the guest OS.
» Shut down the guest VMs by using the Prism Element web console.
» If you are running many VMs, shut down the VMs by using aCLI:
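The exact loop is not reproduced here; a hedged sketch that mirrors the power-on loop used later in this procedure (acli vm.shutdown requests a graceful ACPI shutdown, and the grep -v NTNX filter skips Nutanix service VMs):
nutanix@cvm$ for i in `acli vm.list power_state=on | awk '{print $1}' | grep -v NTNX` ;
do acli vm.shutdown $i; done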
d. If any VMs are on, consider powering off the VMs from within the guest OS. To force shut
down through AHV, run the following command:
nutanix@cvm$ acli vm.off vm-name
Replace vm-name with the name of the VM you want to shut down.
The output displays the message The state of the cluster: stop, which confirms that
the cluster has stopped.
Note: Some system services continue to run even if the cluster has stopped.
4. Shut down all the CVMs in the cluster. Log on to each CVM in the cluster with SSH and shut
down that CVM.
nutanix@cvm$ sudo shutdown -P now
5. Shut down each node in the cluster. Perform the following steps for each node in the cluster.
a. Press the power button on the front of the block for each node.
b. Log on to the IPMI web console of each node.
c. On the System tab, check the Power Control status to verify if the node is powered on.
a. Wait for approximately 5 minutes after you start the last node to allow the cluster
services to start.
All CVMs start automatically after you start all the nodes.
b. Log on to any CVM in the cluster with SSH.
c. Start the cluster.
nutanix@cvm$ cluster start
e. Start the guest VMs from within the guest OS or use the Prism Element web console.
If you are running many VMs, start the VMs by using aCLI:
nutanix@cvm$ for i in `acli vm.list power_state=off | awk '{print $1}' | grep -v NTNX` ;
do acli vm.on $i; done
f. Start the services or VMs associated with AOS features or Nutanix products. For example,
start all the FSVMs. See the documentation of those features or products for more
information.
g. Verify if all guest VMs are powered on by using the Prism Element web console.
Procedure
Replace host-IP-address with the IP address of the host whose name you want to change and
new-host-name with the new hostname for the AHV host.
If you want to update the hostname of multiple hosts in the cluster, run the script for one
host at a time (sequentially).
Note: The Prism Element web console displays the new hostname after a few minutes.
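The script invocation itself is not shown above; on recent AOS releases it generally takes a form similar to the following (a hedged sketch; verify the script name and flags for your release):
nutanix@cvm$ change_ahv_hostname --host_ip=host-IP-address --host_name=new-host-name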
Changing the Name of the CVM Displayed in the Prism Web Console
You can change the CVM name that is displayed in the Prism web console. The procedure
described in this document does not change the CVM name that is displayed in the terminal or
console of an SSH session.
The script performs the following steps:
1. Checks if the new name starts with NTNX- and ends with -CVM. The CVM name must
have only letters, numbers, and dashes (-).
2. Checks if the CVM has received a shutdown token.
3. Powers off the CVM. The script does not put the CVM or host into maintenance mode.
Therefore, the VMs are not migrated from the host and continue to run with the I/O
operations redirected to another CVM while the current CVM is in a powered off state.
4. Changes the CVM name, enables autostart, and powers on the CVM.
Perform the following to change the CVM name displayed in the Prism web console.
Procedure
1. Use SSH to log on to a CVM other than the CVM whose name you want to change.
Replace CVM-IP with the IP address of the CVM whose name you want to change and new-name
with the new name for the CVM.
The CVM name must have only letters, numbers, and dashes (-), and must start with NTNX-
and end with -CVM.
Note: Do not run this command from the CVM whose name you want to change, because the
script powers off the CVM. In this case, when the CVM is powered off, you lose connectivity to
the CVM from the SSH console and the script abruptly ends.
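The rename command referenced above is a CVM display-name script; a hedged sketch of its usual form (verify the script name and flags for your AOS release):
nutanix@cvm$ change_cvm_display_name --cvm_ip=CVM-IP --cvm_name=new-name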
Note:
• Ensure that, at any given time, the cluster has a minimum of three functioning nodes
(never-schedulable or otherwise). To add your first never-schedulable node to
your Nutanix cluster, the cluster must comprise at least three schedulable nodes.
• You can add any number of never-schedulable nodes to your Nutanix cluster.
• If you want a node that is already a part of the cluster to work as a never-
schedulable node, remove that node from the cluster and then add that node as a
never-schedulable node.
• If you no longer need a node to work as a never-schedulable node, remove the node
from the cluster.
Note: *
If you want a node that is already a part of the cluster to work as a never-schedulable
node, see step 1, skip step 2, and then proceed to step 3.
If you want to add a new node as a never-schedulable node, skip step 1 and proceed to
step 2.
1. * Perform the following steps if you want a node that is already a part of the cluster to work
as a never-schedulable node:
a. Determine the UUID of the node that you want to use as a never-schedulable node:
nutanix@cvm$ ncli host ls
You require the UUID of the node when you are adding the node back to the cluster as a
never-schedulable node.
b. Remove the node from the cluster.
For information about how to remove a node from a cluster, see the Modifying a Cluster
topic in Prism Web Console Guide.
c. Proceed to step 3.
2. * Perform the following steps if you want to add a new node as a never-schedulable node:
• username: nutanix
• password: nutanix/4u
c. Determine the UUID of the node.
nutanix@cvm$ cat /etc/nutanix/factory_config.json
Replace uuid-of-the-node with the UUID of the node you want to add as a never-schedulable
node.
The never-schedulable-node parameter is optional and is required only if you want to add
a never-schedulable node.
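The add-node command itself is elided above; a hedged sketch of what it typically looks like with this parameter (verify the exact nCLI syntax for your AOS release):
nutanix@cvm$ ncli cluster add-node node-uuid=uuid-of-the-node never-schedulable-node=true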
If you no longer need a node to work as a never-schedulable node, remove the node from
the cluster.
If you want the never-schedulable node to now work as a schedulable node, remove the
node from the cluster and add the node back to the cluster by using the Prism Element web
console.
Note: For information about how to add a node (other than a never-schedulable node) to a
cluster, see the Expanding a Cluster topic in Prism Web Console Guide.
Note: If you want an existing HC node that is already a part of the cluster to work as a CO node,
remove that node from the cluster, image that node as CO by using Foundation, and add that
node back to the cluster. See the Modifying a Cluster topic in Prism Web Console Guide for
information about how to remove a node.
• The Nutanix cluster must be at least a three-node cluster before you add a compute-only
node.
However, Nutanix recommends that the cluster has four nodes before you add a compute-
only node.
• The ratio of compute-only to hyperconverged nodes in a cluster must not exceed the
following:
1 compute-only : 2 hyperconverged
• All the hyperconverged nodes in the cluster must be all-flash nodes.
Restrictions
Nutanix does not support the following features or tasks on a CO node in this release:
1. Host boot disk replacement
2. Network segmentation
Networking Configuration
To perform network tasks on a compute-only node such as creating an Open vSwitch bridge or
changing uplink load balancing, you must add the --host flag to the manage_ovs commands as
shown in the following example:
nutanix@cvm$ manage_ovs --host IP_address_of_co_node --bridge_name bridge_name
create_single_bridge
Replace IP_address_of_co_node with the IP address of the CO node and bridge_name with the
name of bridge you want to create.
Note: Run the manage_ovs commands for a CO node from any CVM running on a hyperconverged
node.
Perform the networking tasks for each CO node in the cluster individually.
See the Host Network Management section in the AHV Administration Guide for more
information about networking configuration of the AHV hosts.
Procedure
» Click the gear icon in the main menu and select Expand Cluster in the Settings page.
» Go to the hardware dashboard (see the Hardware Dashboard topic in Prism Web
Console Guide) and click Expand Cluster.
3. In the Select Host screen, scroll down and, under Manual Host Discovery, click Discover
Hosts Manually.
5. Under Host or CVM IP, type the IP address of the AHV host and click Save.
This node does not have a Controller VM and you must therefore provide the IP address of
the AHV host.
8. Click Next.
10. Check the Hardware Diagram view to verify if the node is added to the cluster.
You can identify a node as a CO node if the Prism Element web console does not display
an IP address for the CVM.
3
HOST NETWORK MANAGEMENT
Network management in an AHV cluster consists of the following tasks:
• Configuring Layer 2 switching through virtual switches and Open vSwitch bridges. When
configuring a virtual switch, you configure bridges, bonds, and VLANs.
• Optionally changing the IP address, netmask, and default gateway that were specified for the
hosts during the imaging process.
Virtual Switch Do not modify the OpenFlow tables of any bridges configured in
any VS configurations in the AHV hosts.
Do not rename default virtual switch vs0. You cannot delete the
default virtual switch vs0.
Do not delete or rename OVS bridge br0.
Do not modify the native Linux bridge virbr0.
Switch Hops Nutanix nodes send storage replication traffic to each other
in a distributed fashion over the top-of-rack network. One
Nutanix node can, therefore, send replication traffic to any other
Nutanix node in the cluster. The network should provide low and
predictable latency for this traffic. Ensure that there are no more
than three switches between any two Nutanix nodes in the same
cluster.
WAN Links A WAN (wide area network) or metro link connects different
physical sites over a distance. As an extension of the switch fabric
requirement, do not place Nutanix nodes in the same cluster if they
are separated by a WAN.
VLANs Add the Controller VM and the AHV host to the same VLAN. Place
all CVMs and AHV hosts in a cluster in the same VLAN. By default
the CVM and AHV host are untagged, shown as VLAN 0, which
effectively places them on the native VLAN configured on the
upstream physical switch.
Note: Do not add any other device (including guest VMs) to the
VLAN to which the CVM and hypervisor host are assigned. Isolate
guest VMs on one or more separate VLANs.
Default VS bonded port (br0-up)
Aggregate the fastest links of the same speed on the physical host to a VS bond on the
default vs0 and provision VLAN trunking for these interfaces on the physical switch.
By default, interfaces in the bond in the virtual switch operate in
the recommended active-backup mode.
Note: The mixing of bond modes across AHV hosts in the same
cluster is not recommended and not supported.
1 GbE and 10 GbE interfaces (physical host)
If 10 GbE or faster uplinks are available, Nutanix recommends that you use them instead of
1 GbE uplinks.
Recommendations for 1 GbE uplinks are as follows:
• If you plan to use 1 GbE uplinks, do not include them in the same
bond as the 10 GbE interfaces. Nutanix recommends that you do not use uplinks of
different speeds in the same bond.
• If you choose to configure only 1 GbE uplinks, power the VMs off and on again on a
new host instead of using live migration when migration of memory-intensive VMs
becomes necessary. In this context, memory-intensive VMs are VMs whose memory
changes at a rate that exceeds the bandwidth offered by the 1 GbE uplinks.
Nutanix recommends the manual procedure for memory-intensive VMs because live
migration, which you initiate either manually or by placing the host in maintenance
mode, might appear prolonged or unresponsive and might eventually fail.
Use the aCLI on any CVM in the cluster to start the VMs on another AHV host:
nutanix@cvm$ acli vm.on vm_list host=host
IPMI port on the hypervisor host
Do not use VLAN trunking on switch ports that connect to the IPMI interface. Configure the
switch ports as access ports for management simplicity.
Upstream physical switch Nutanix does not recommend the use of Fabric Extenders (FEX)
or similar technologies for production use cases. While initial, low-
load implementations might run smoothly with such technologies,
poor performance, VM lockups, and other issues might occur
as implementations scale upward (see Knowledge Base article
KB1612). Nutanix recommends the use of 10Gbps, line-rate, non-
blocking switches with larger buffers for production workloads.
Cut-through versus store-and-forward selection depends on
network design. In designs with no oversubscription and no speed
mismatches you can use low-latency cut-through switches. If you
have any oversubscription or any speed mismatch in the network
design, then use a switch with larger buffers. Port-to-port latency
should be no higher than 2 microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on
switch ports that are connected to the hypervisor host.
Jumbo Frames The Nutanix CVM uses the standard Ethernet MTU (maximum
transmission unit) of 1,500 bytes for all the network interfaces
by default. The standard 1,500 byte MTU delivers excellent
performance and stability. Nutanix does not support configuring
the MTU on network interfaces of a CVM to higher values.
You can enable jumbo frames (MTU of 9,000 bytes) on the
physical network interfaces of AHV hosts and guest VMs if the
applications on your guest VMs require them. If you choose to use
jumbo frames on hypervisor hosts, be sure to enable them end
to end in the desired network and consider both the physical and
virtual network infrastructure impacted by the change.
Controller VM Do not remove the Controller VM from either the OVS bridge br0
or the native Linux bridge virbr0.
Rack Awareness and Block Awareness
Block awareness and rack awareness provide smart placement of Nutanix cluster services,
metadata, and VM data to help maintain data availability, even when you lose an entire block
or rack. The same network requirements for low latency and high throughput between servers
in the same cluster still apply when using block and rack awareness.
This diagram shows the recommended network configuration for an AHV cluster. The interfaces
in the diagram are connected with colored lines to indicate membership in different VLANs:
Figure 7: Recommended network configuration for an AHV cluster
IP Address Management
IP Address Management (IPAM) is a feature of AHV that allows it to assign IP addresses
automatically to VMs by using DHCP. You can configure each virtual network with a specific IP
address subnet, associated domain settings, and IP address pools available for assignment to
VMs.
An AHV network is defined as a managed network or an unmanaged network based on the
IPAM setting.
Note: You can enable IPAM only when you are creating a virtual network. You cannot enable or
disable IPAM for an existing virtual network.
IPAM enabled or disabled status has implications. For example, when you want to reconfigure
the IP address of a Prism Central VM, the procedure to do so may involve additional steps for
managed networks (that is, networks with IPAM enabled) where the new IP address belongs
to an IP address range different from the previous IP address range. See Reconfiguring the IP
Address and Gateway of Prism Central VMs in Prism Central Guide.
Uplink configuration uses bonds to improve traffic management. The bond types are defined
for the aggregated OVS bridges. A new bond type, No uplink bond, provides a no-bonding
option. A virtual switch configured with the No uplink bond type has zero or one uplinks.
When you configure a virtual switch with any other bond type, you must select at least two
uplink ports on every node.
If you change the uplink configuration of vs0, AOS applies the updated settings to all the
nodes in the cluster one after the other (a rolling update process). To update the settings in a
cluster, AOS performs the following tasks when the applied configuration method is Standard:
1. Puts the node in maintenance mode (migrates VMs out of the node)
2. Applies the updated settings
3. Checks connectivity with the default gateway
4. Exits maintenance mode
5. Proceeds to apply the updated settings to the next node
AOS does not put the nodes in maintenance mode when the Quick configuration method is
applied.
• Speed—Fast (1s)
• Mode—Active fallback-active-backup
• Priority—Default. This is not configurable.
• Virtual switches are not enabled in a cluster that has one or more compute-only nodes. See
Virtual Switch Limitations on page 53 and Virtual Switch Requirements on page 53.
Note:
If you are modifying an existing bond, AHV removes the bond and then re-creates the
bond with the specified interfaces.
Ensure that the interfaces you want to include in the bond are physically connected
to the Nutanix appliance before you run the command described in this topic. If the
interfaces are not physically connected to the Nutanix appliance, the interfaces are not
added to the bond.
Bridge Migration
After upgrading to a compatible version of AOS, you can migrate bridges other than br0 that
existed on the nodes. When you migrate the bridges, the system converts the bridges to virtual
switches.
See Virtual Switch Migration Requirements in Virtual Switch Requirements on page 53.
Note: You can migrate only those bridges that are present on every compute node in the cluster.
See Migrating Bridges after Upgrade topic in Network Management in the Prism Web Console
Guide.
Note: If a host already included in a cluster is removed and then added back, it is treated
as a new host.
• The system validates the default bridge br0 and uplink bond br0-up to check if it
conforms to the default virtual switch vs0 already present on the cluster.
If br0 and br0-up conform, the system includes the new host and its uplinks in vs0.
If br0 and br0-up do not conform, then the system generates an NCC alert.
• The system does not automatically add any other bridge configured on the new host
to any other virtual switch in the cluster.
It generates NCC alerts for all the other non-default virtual switches.
VS Management
You can manage virtual switches from Prism Central or Prism Web Console. You can also use
aCLI or REST APIs to manage them. See the Acropolis API Reference and Command Reference
guides for more information.
You can also use the appropriate aCLI commands for virtual switches from the following list:
• net.create_virtual_switch
• net.list_virtual_switch
• net.get_virtual_switch
• net.update_virtual_switch
• net.delete_virtual_switch
• net.migrate_br_to_virtual_switch
• net.disable_virtual_switch
• An internal port with the same name as the default bridge; that is, an internal port named
br0. This is the access port for the hypervisor host.
• A bonded port named br0-up. The bonded port aggregates all the physical interfaces
available on the node. For example, if the node has two 10 GbE interfaces and two 1 GbE
interfaces, all four interfaces are aggregated on br0-up. This configuration is necessary for
Foundation to successfully image the node regardless of which interfaces are connected to
the network.
Note:
Before you begin configuring a virtual network on a node, you must disassociate
the 1 GbE interfaces from the br0-up port. This disassociation occurs when you
modify the default virtual switch (vs0) and create new virtual switches. Nutanix
recommends that you aggregate only the 10 GbE or faster interfaces on br0-up and
use the 1 GbE interfaces on a separate OVS bridge deployed in a separate virtual
switch.
See Virtual Switch Management on page 54 for information about virtual switch
management.
The following diagram illustrates the default factory configuration of OVS on an AHV node:
• Before migrating to Virtual Switch, all bridge br0 bond interfaces should have the same
bond type on all hosts in the cluster. For example, all hosts should use the Active-backup
bond type or balance-tcp. If some hosts use Active-backup and other hosts use balance-tcp,
virtual switch migration fails.
• Before migrating to Virtual Switch, if using LACP:
• Confirm that all bridge br0 lacp-fallback parameters on all hosts are set to the case
sensitive value True with manage_ovs show_uplinks |grep lacp-fallback:. Any host with
lowercase true causes virtual switch migration failure.
• Confirm that the LACP speed on the physical switch is set to fast or 1 second. Also
ensure that the switch ports are ready to fallback to individual mode if LACP negotiation
fails due to a configuration such as no lacp suspend-individual.
• Before migrating to the Virtual Switch, confirm that the upstream physical switch is set to
spanning-tree portfast or spanning-tree port type edge trunk. Failure to do so may lead
to a 30-second network timeout, and the virtual switch migration may fail because it uses a
20-second non-modifiable timer.
• Nutanix recommends increasing the MTU to 9000 bytes on the virtual switch vs0 and
ensuring that the physical networking infrastructure supports higher MTU values (jumbo
frame support). The recommended MTU range is 1600-9000 bytes.
Nutanix CVMs use the standard Ethernet MTU (maximum transmission unit) of 1,500
bytes for all the network interfaces by default. The system advertises the MTU of 1442
bytes to guest VMs using DHCP to account for the extra 58 bytes used by Generic
Network Virtualization Encapsulation (Geneve). However, some VMs ignore the MTU
advertisements in the DHCP response. Therefore, to ensure that Flow Networking
functions properly with such VMs, enable jumbo frame support on the physical
network and the default virtual switch vs0.
If you cannot increase the MTU of the physical network, you can decrease the MTU of
every VM vNIC to 1442 bytes.
Note: Do not decrease the MTU of virtual switch vs0 lower than 1500 bytes without
first decreasing the MTU of CVM and guest VM vNICs.
1. Put the node in maintenance mode. This is in addition to the previous maintenance mode
that enabled Active-Active on the node.
2. Enable LAG and LACP on the ToR switch connected to that node.
VLAN Configuration
You can set up a VLAN-based segmented virtual network on an AHV node by assigning
the ports on virtual bridges managed by virtual switches to different VLANs. VLAN port
assignments are configured from the Controller VM that runs on each node.
For best practices associated with VLAN assignments, see AHV Networking Recommendations
on page 38. For information about assigning guest VMs to a virtual switch and VLAN, see the
Network Connections section in the Prism Central Guide.
Note: Perform the following procedure during a scheduled downtime. Before you begin, stop the
cluster. Once the process begins, hosts and CVMs partially lose network access to each other and
VM data or storage containers become unavailable until the process completes.
To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:
Procedure
3. Assign port br0 (the internal port on the default OVS bridge br0, on the default virtual switch
vs0) to the VLAN that you want the host to be on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag
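To confirm that the tag was applied, you can inspect the port record; for example:
root@ahv# ovs-vsctl list port br0 | grep tag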
6. Verify connectivity to the IP address of the AHV host by performing a ping test.
7. Exit the AHV host and the CVM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 23 for more information.
Note: Perform the following procedure during a scheduled downtime. Before you begin, stop the
cluster. Once the process begins, hosts and CVMs partially lose network access to each other and
VM data or storage containers become unavailable until the process completes.
Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you
are logged on to the Controller VM through its public interface. To change the VLAN ID, log on to
the internal interface that has IP address 192.168.5.254.
Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a
VLAN, do the following:
Procedure
root@host# logout
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.
Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 201.
nutanix@cvm$ change_cvm_vlan 201
Replace cvm_name with the CVM name or CVM ID to view the VLAN tagging information.
9. Verify connectivity to the Controller VMs external IP address by performing a ping test
from the same subnet. For example, perform a ping from another Controller VM or directly
from the host itself.
10. Exit the AHV host and the Controller VM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 23 for more information.
IGMP Snooping
On an AHV host, when multicast traffic flows to a virtual switch, the host floods the multicast
traffic to all the VMs on the specific VLAN. This mechanism is inefficient when many of the VMs
on the VLAN do not need that multicast traffic. IGMP snooping allows the host to track which
VMs on the VLAN need the multicast traffic and to send the multicast traffic to only those VMs.
For example, assume there are 50 VMs on VLAN 100 on virtual switch vs1 and only 25 of them
(the receiver VMs) need to receive the multicast traffic. Turn on IGMP snooping to help the AHV
host track the 25 receiver VMs and deliver the multicast traffic to only those 25 receiver VMs
instead of pushing the multicast traffic to all 50 VMs.
When IGMP snooping is enabled in a virtual switch on a VLAN, the ToR switch or router queries
the VMs about the multicast traffic that the VMs are interested in. When the switch receives a join
request from a VM in response to the query, it adds the VM to the multicast list for that source
Provide:
• virtual-switch-name—The name of the virtual switch in which the VLANs are configured.
For example, the name of the default virtual switch is vs0. Provide the name of the virtual
switch exactly as it is configured.
• enable_igmp_snooping=[true | false]—true to enable IGMP snooping. Provide false to
disable IGMP snooping. The default setting is false.
• enable_igmp_querier=[true | false]—true to enable the native IGMP querier. Provide false
to disable the native IGMP querier. The default setting is false.
• igmp_query_vlan_list=VLAN IDs—List of VLAN IDs mapped to the virtual switch for which
the IGMP querier is enabled. When this parameter is not set or is set to an empty list, the
querier is enabled for all VLANs of the virtual switch.
• igmp_snooping_timeout=timeout—An integer indicating time in seconds. For example, you
can provide 30 to indicate IGMP snooping timeout of 30 seconds.
The default timeout is 300 seconds.
You can set the timeout in the range of 15 - 3600 seconds.
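These parameters belong to the virtual switch update command listed earlier in this chapter; a hedged example that enables snooping and the querier on virtual switch vs1 with a 30-second timeout:
nutanix@cvm$ acli net.update_virtual_switch vs1 enable_igmp_snooping=true enable_igmp_querier=true igmp_snooping_timeout=30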
What to do next
You can verify whether IGMP snooping is enabled or disabled by running the following
command:
net.get_virtual_switch virtual-switch-name
The above sample shows that IGMP snooping and the native acropolis IGMP querier are
enabled.
• In this release, AHV supports mirroring of traffic only from physical interfaces.
• The SPAN destination VM or guest VM must be running on the same AHV host where the
source ports are located.
• Delete the SPAN session before you delete the SPAN destination VM or VNIC. Otherwise, the
state of the SPAN session is displayed as error.
• AHV does not support SPAN from a member of a bond port. For example, if you have
mapped br0-up to bridge br0 with members eth0 and eth1, you cannot create a SPAN
session with either eth0 or eth1 as the source port. You must use only br0-up as the source
port.
• AHV supports different types of source ports in one session. For example, you can create
a session with br0-up (bond port) and eth5 (single uplink port) on the same host as two
different source ports in the same session. You can even have two different bond ports in
the same session.
• One SPAN session supports up to two source and two destination ports.
• One host supports up to two SPAN sessions.
• You cannot create a SPAN session on an AHV host that is in the maintenance mode.
• If you move the uplink interface to another Virtual Switch, the SPAN session fails. Note that
the system does not generate an alert in this situation.
• With TCP Segmentation Offload, multiple packets belonging to the same stream can be
coalesced into a single one before being delivered to the SPAN destination VM. With TCP
Segmentation Offload enabled, there can be a difference between the number of packets
received on the uplink interface and packets forwarded to the SPAN destination VM (session
packet count <= uplink interface packet count). However, the byte count at the SPAN
destination VM is closer to the number at the uplink interface.
Note: The SPAN destination VM must run on the same AHV host where the source ports are
located. Therefore, Nutanix highly recommends that you create or modify the guest VM as an
agent VM so that the VM is not migrated from the host.
In this example, span-dest-VM is the name of the guest VM that you are modifying as an agent
VM.
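The command that marks the guest VM as an agent VM is not shown above; a hedged sketch using aCLI:
nutanix@cvm$ acli vm.update span-dest-VM agent_vm=true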
Procedure
2. Determine the name and UUID of the guest VM that you want to configure as the SPAN
destination VM.
nutanix@cvm$ acli vm.list
Example:
nutanix@cvm$ acli vm.list
VM name VM UUID
span-dest-VM 85abfdd5-7419-4f7c-bffa-8f961660e516
Note: If you delete the SPAN destination VM without deleting the SPAN session you create
with this SPAN destination VM, the SPAN session State displays kError.
Replace vm-name with the name of the guest VM on which you want to configure SPAN.
Note: Do not include any other parameter when you are creating a SPAN destination VNIC.
Example:
nutanix@cvm$ acli vm.nic_create span-dest-VM type=kSpanDestinationNic
NicCreate: complete
Note: If you delete the SPAN destination VNIC without deleting the SPAN session you create
with this SPAN destination VNIC, the SPAN session State displays kError.
Replace vm-name with the name of the guest VM to which you assigned the VNIC.
Example:
Note the MAC address (value of mac_addr) of the VNIC whose type is set to
kSpanDestinationNic.
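The elided example above lists the VNICs of the guest VM so that you can read the mac_addr value; a hedged sketch:
nutanix@cvm$ acli vm.nic_get span-dest-VM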
5. Determine the UUID of the host whose traffic you want to monitor by using SPAN.
nutanix@cvm$ acli host.list
Replace the variables mentioned in the command for the following parameters with their
appropriate values as follows:
Note:
All source_list and dest_list parameters are mandatory inputs. The parameters do
not have default values. Provide an appropriate value for each parameter.
• uuid: Replace host-uuid with the UUID of the host whose traffic you want to monitor by
using SPAN. (determined in step 5).
• type: Specify kHostNic as the type. Only the kHostNic type is supported in this release.
• identifier: Replace source-port-name with the name of the source port whose traffic you want
to mirror. For example, br0-up, eth0, or eth1.
• direction: Replace traffic-type with kIngress if you want to mirror inbound traffic,
kEgress for outbound traffic, or kBiDir for bidirectional traffic.
Destination list parameters:
• uuid: Replace vm-uuid with the UUID of the guest VM that you want to configure as the
SPAN destination VM. (determined in step 2).
• type: Specify kVmNic as the type. Only the kVmNic type is supported in this release.
• identifier: Replace vnic-mac-address with the MAC address of the destination port where
you want to mirror the traffic (determined in step 4).
Example:
nutanix@cvm$ acli net.create_span_session span1 description="span session 1"
source_list=\{uuid=492a2bda-ffc0-486a-8bc0-8ae929471714,type=kHostNic,identifier=br0-
up,direction=kBiDir} dest_list=\{uuid=85abfdd5-7419-4f7c-
bffa-8f961660e516,type=kVmNic,identifier=50:6b:8d:de:c6:44}
SpanCreate: complete
Example:
nutanix@cvm$ acli net.list_span_session
Name UUID State
span1 69252eb5-8047-4e3a-8adc-91664a7104af kActive
Replace span-session-name with the name of the SPAN session whose details you want to
view.
Example:
nutanix@cvm$ acli net.get_span_session span1
span1 {
config {
datapath_name: "s6925"
description: "span session 1"
destination_list {
nic_type: "kVmNic"
port_identifier: "50:6b:8d:de:c6:44"
uuid: "85abfdd5-7419-4f7c-bffa-8f961660e516"
}
name: "span1"
session_uuid: "69252eb5-8047-4e3a-8adc-91664a7104af"
source_list {
direction: "kBiDir"
nic_type: "kHostNic"
port_identifier: "br0-up"
uuid: "492a2bda-ffc0-486a-8bc0-8ae929471714"
}
}
stats {
name: "span1"
session_uuid: "69252eb5-8047-4e3a-8adc-91664a7104af"
state: "kActive"
stats_list {
tx_byte_cnt: 67498
tx_pkt_cnt: 436
}
}
}
Note the value of the datapath_name field in the SPAN session configuration, which is a
unique key that identifies the SPAN session. You might need the unique key to correctly
identify the SPAN session for troubleshooting reasons.
Procedure
The update command includes the same parameters as the create command. See
Configuring SPAN on an AHV Host on page 60 for more information.
Example:
nutanix@cvm$ acli net.update_span_session span1 name=span_br0_to_span_dest
description="span from br0-up to span-dest VM" source_list=\{uuid=492a2bda-
ffc0-486a-8bc0-8ae929471714,type=kHostNic,identifier=br0-up,direction=kBiDir} dest_list=
\{uuid=85abfdd5-7419-4f7c-bffa-8f961660e516,type=kVmNic,identifier=50:6b:8d:de:c6:44}
SpanUpdate: complete
In this example, only the name and description were updated. However, complete details of
the source and destination ports were included in the command again.
If you want to change the name of a SPAN session, specify the existing name first and then
include the new name by using the “name=” parameter as shown in this example.
Procedure
Replace span-session-name with the name of the SPAN session you want to delete.
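The delete call presumably mirrors the other net.*_span_session commands shown in this chapter; a hedged sketch:
nutanix@cvm$ acli net.delete_span_session span-session-name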
Note: You must shut down the guest VM to change the number of queues. Therefore, make this
change during a planned maintenance window. The VNIC status might change from Up->Down->Up,
or a restart of the guest OS might be required to finalize the settings, depending on the
guest OS implementation requirements.
Procedure
2. Determine the exact name of the guest VM for which you want to change the number of
VNIC queues.
nutanix@cvm$ acli vm.list
3. Determine the MAC address of the VNIC and confirm the current number of VNIC queues.
nutanix@cvm$ acli vm.nic_get VM-name
Note: AOS defines queues as the maximum number of Tx/Rx queue pairs (default is 1).
Replace VM-name with the name of the guest VM, vNIC-MAC-address with the MAC address of
the VNIC, and N with the number of queues.
Note: N must be less than or equal to the vCPUs assigned to the guest VM.
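The following is an illustrative sketch of the update command, assuming the acli vm.nic_update
syntax with a queues parameter; verify the exact syntax for your AOS release:
nutanix@cvm$ acli vm.nic_update VM-name vNIC-MAC-address queues=N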
8. Check the guest OS documentation to confirm whether any additional steps are required to
enable multi-queue in VirtIO-net.
Note: Multi-queue is enabled by default on CentOS VMs. You might have to enable it manually on RHEL VMs.
9. Monitor the VM performance to make sure that you observe the expected increase in network
performance and that the guest VM vCPU usage does not increase so dramatically that it
impacts the application on the guest VM.
For assistance with the steps described in this document, or if these steps do not resolve
your guest VM network performance issues, contact Nutanix Support.
CAUTION: All Controller VMs and hypervisor hosts must be on the same subnet.
Warning: Ensure that you perform the steps in the exact order as indicated in this document.
1. Edit the settings of port br0, which is the internal port on the default bridge br0.
f. Assign the host to a VLAN. For information about how to add a host to a VLAN, see
Assigning an AHV Host to a VLAN on page 55.
g. Verify network connectivity by pinging the gateway, other CVMs, and AHV hosts.
2. Log on to the Controller VM that is running on the AHV host whose IP address you changed
and restart genesis.
nutanix@cvm$ genesis restart
See Controller VM Access on page 11 for information about how to log on to a Controller VM.
Genesis takes a few minutes to restart.
3. Verify that the IP address of the hypervisor host has changed. Run the following nCLI command
from any CVM other than the one in the maintenance mode.
nutanix@cvm$ ncli host list
Note: You cannot manage your guest VMs after the Acropolis service is stopped.
b. Verify that the Acropolis service is DOWN on all the CVMs, except the one in the
maintenance mode.
nutanix@cvm$ cluster status | grep -v UP
2019-09-04 14:43:18 INFO cluster:2774 Executing action status on SVMs X.X.X.1, X.X.X.2,
X.X.X.3
6. Verify that all processes on all the CVMs, except the one in the maintenance mode, are in the
UP state.
nutanix@cvm$ cluster status | grep -v UP
7. Exit the AHV host and the CVM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 23 for more information.
• SCSI: 256
• PCI: 6
• IDE: 4
Creating a VM (AHV)
In AHV clusters, you can create a new virtual machine (VM) through the Prism Element web
console.
Note: This option does not appear in clusters that do not support this feature.
Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are
creating a Linux VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows
VM with the hardware clock pointing to the desired timezone.
d. Use this VM as an agent VM: Select this option to make this VM an agent VM.
You can use this option for the VMs that must be powered on before the rest of the VMs
(for example, to provide network functions before the rest of the VMs are powered on
on the host) and must be powered off after the rest of the VMs are powered off (for
example, during maintenance mode operations). Agent VMs are never migrated to any
other host in the cluster. If an HA event occurs or the host is put in maintenance mode,
agent VMs are powered off and are powered on on the same host once that host comes
back to a normal state.
If an agent VM is powered off, you can manually start that agent VM on another host
and the agent VM now permanently resides on the new host. The agent VM is never
migrated back to the original host. Note that you cannot migrate an agent VM to
another host while the agent VM is powered on.
e. vCPU(s): Enter the number of virtual CPUs to allocate to this VM.
f. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
g. Memory: Enter the amount of memory (in MiBs) to allocate to this VM.
See GPU and vGPU Support in AHV Administration Guide for more information.
a. To configure GPU pass-through, in GPU Mode, click Passthrough, select the GPU that
you want to allocate, and then click Add.
If you want to allocate additional GPUs to the VM, repeat the procedure as many times
as you need to. Make sure that all the allocated pass-through GPUs are on the same
host. If all specified GPUs of the type that you want to allocate are in use, you can
Note: This option is available only if you have installed the GRID host driver on the GPU
hosts in the cluster. See Installing NVIDIA GRID Virtual GPU Manager (Host Driver) in the
AHV Administration Guide.
You can assign multiple virtual GPUs to a VM. A vGPU is assigned to the VM only if a
vGPU is available when the VM is starting up.
Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and
Restrictions for Multiple vGPU Support in the AHV Admin Guide.
Note:
Multiple vGPUs are supported on the same VM only if you select the highest
vGPU profile type.
After you add the first vGPU, to add multiple vGPUs, see Adding Multiple vGPUs to the
Same VM in the AHV Admin Guide.
» Legacy BIOS: Select legacy BIOS to boot the VM with legacy BIOS firmware.
» UEFI: Select UEFI to boot the VM with UEFI firmware. UEFI firmware supports larger
hard drives, faster boot time, and provides more security features. For more information
about UEFI firmware, see UEFI Support for VM section in the AHV Administration Guide.
» Secure Boot is supported with AOS 5.16. The current support for Secure Boot is limited
to aCLI. For more information about Secure Boot, see the Secure Boot Support for VMs
section in the AHV Administration Guide. To enable Secure Boot, do the following:
• Select UEFI.
• Power off the VM.
• Log on to the aCLI and update the VM to enable Secure Boot. For more information, see
Updating a VM to Enable Secure Boot in the AHV Administration Guide.
a. Type: Select the type of storage device, DISK or CD-ROM, from the drop-down list.
The following fields and options vary depending on whether you choose DISK or CD-
ROM.
b. Operation: Specify the device contents from the drop-down list.
• Select Clone from ADSF file to copy any file from the cluster that can be used as an
image onto the disk.
• Select Empty CD-ROM to create a blank CD-ROM device. (This option appears only
when CD-ROM is selected in the previous field.) A CD-ROM device is needed when
you intend to provide a system image from CD-ROM.
• Select Allocate on Storage Container to allocate space without specifying an image.
(This option appears only when DISK is selected in the previous field.) Selecting this
option means you are allocating space only. You have to provide a system image
later from a CD-ROM or other source.
• Select Clone from Image Service to copy an image that you have imported by using
the image service feature onto the disk. For more information on the Image Service
a. Subnet Name: Select the target virtual LAN from the drop-down list.
The list includes all defined networks (see the Network Configuration For VM Interfaces
in the Prism Web Console Guide).
Note: Selecting an IPAM-enabled subnet from the drop-down list displays the Private IP
Assignment information, which shows the number of free IP addresses available in the subnet
and in the IP pool.
b. Network Connection State: Select the state in which you want the network to operate
after VM creation. The options are Connected or Disconnected.
c. Private IP Assignment: This is a read-only field and displays the following:
Note: The Acropolis leader generates the MAC address for the VM on AHV. The first 24 bits of the
MAC address are set to 50-6b-8d (0101 0000 0110 1011 1000 1101) and are reserved by
Nutanix, the 25th bit is set to 1 (reserved by the Acropolis leader), and bits 26 through 48 are
auto-generated random numbers.
a. Select the host or hosts on which you want to configure affinity for this VM.
b. Click Save.
The selected host or hosts are listed. This configuration is permanent. The VM is not
moved from this host or hosts even in the case of an HA event, and the configuration takes
effect once the VM starts.
9. To specify a user data file (Linux VMs) or answer file (Windows VMs) for unattended
provisioning, do one of the following:
» If you uploaded the file to a storage container on the cluster, click ADSF path, and then
enter the path to the file.
Enter the ADSF prefix (adsf://) followed by the absolute path to the file. For example, if
the user data is in /home/my_dir/cloud.cfg, enter adsf:///home/my_dir/cloud.cfg. Note
the use of three slashes.
» If the file is available on your local computer, click Upload a file, click Choose File, and
then upload the file.
» If you want to create or paste the contents of the file, click Type or paste script, and
then use the text box that is provided.
a. In Source File ADSF Path, enter the absolute path to the file.
b. In Destination Path in VM, enter the absolute path to the target directory and the file
name.
For example, if the source file entry is /home/my_dir/myfile.txt, then the entry for
the Destination Path in VM should be /<directory_name>/<copy_destination>, for example, /mnt/
myfile.txt.
c. To add another file or directory, click the button beside the destination path field. In the
new row that appears, specify the source and target details.
11. When all the field entries are correct, click the Save button to create the VM and close the
Create VM dialog box.
The new VM appears in the VM table view.
Managing a VM (AHV)
You can use the web console to manage virtual machines (VMs) in Acropolis managed clusters.
Note: Your available options depend on the VM status, type, and permissions. Unavailable
options are grayed out.
Procedure
a. Select Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
b. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
Ensure that the VM has at least one empty IDE CD-ROM slot to attach the ISO.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected
virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
c. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR)
check box.
The self-service restore feature is enabled on the VM. The guest VM administrator can
restore the desired file or files from the VM. For more information on self-service restore
d. After you select the Enable Nutanix Guest Tools check box, the VSS snapshot feature is
enabled by default.
After this feature is enabled, the Nutanix native in-guest VmQuiesced Snapshot Service
(VSS) agent takes snapshots of VMs that support VSS.
Note:
The AHV VM snapshots are not application consistent. The AHV snapshots are
taken from the VM entity menu by selecting a VM and clicking Take Snapshot.
The application consistent snapshots feature is available with Protection Domain
based snapshots and Recovery Points in Prism Central. For more information,
see Implementation Guidelines for Asynchronous Disaster Recovery section in
the guide titled Data Protection and Recovery with Prism Element.
e. Click Submit.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected
virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
Note:
• If you clone a VM, by default NGT is not enabled on the cloned VM. If the
cloned VM is powered off, enable NGT from the UI and power on the VM. If the
cloned VM is powered on, enable NGT from the UI and restart the Nutanix Guest
Agent service.
• If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT
and Mounting the NGT Installer Simultaneously on Multiple Cloned VMs in the
Prism Web Console Guide.
If you eject the CD, you can mount the CD back again by logging into the Controller VM
and running the following nCLI command.
nutanix@cvm$ ncli ngt mount vm-id=virtual_machine_id
• Clicking the Mount ISO button displays the following window that allows you to mount
an ISO image to the VM. To mount an image, select the desired image and CD-ROM
drive from the drop-down lists and then click the Mount button.
Note: See the Add New Disk section in Creating a VM (AHV) on page 72 for information
about how to select CD-ROM as the storage device when you intend to provide a system
image from CD-ROM.
• Clicking the C-A-D icon button sends a Ctrl+Alt+Del command to the VM.
• Clicking the camera icon button takes a screenshot of the console window.
• Clicking the power icon button allows you to power on/off the VM. These are the same
options that you can access from the Power On Actions or Power Off Actions action link
below the VM table (see next step).
5. To start or shut down the VM, click the Power on (or Power off) action link.
Power on begins immediately. If you want to power off the VMs, you are prompted to
select one of the following options:
• Power Off. Hypervisor performs a hard power off action on the VM.
• Power Cycle. Hypervisor performs a hard restart action on the VM.
• Reset. Hypervisor performs an ACPI reset action through the BIOS on the VM.
• Guest Shutdown. Operating system of the VM performs a graceful shutdown.
• Guest Reboot. Operating system of the VM performs a graceful restart.
Note: If you perform power operations such as Guest Reboot or Guest Shutdown by using
the Prism Element web console or API on Windows VMs, these operations might silently
fail without any error messages if at that time a screen saver is running in the Windows VM.
Perform the same power operations again immediately, so that they succeed.
6. To make a snapshot of the VM, click the Take Snapshot action link.
See Virtual Machine Snapshots for more information.
Note: Nutanix recommends live migrating VMs when they are under light load. If VMs are
migrated while heavily utilized, the migration might fail because of limited bandwidth.
Note:
• Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and
Restrictions for Multiple vGPU Support in the AHV Admin Guide.
• Multiple vGPUs are supported on the same VM only if you select the highest vGPU
profile type.
• For more information on vGPU profile selection, see:
• Virtual GPU Types for Supported GPUs in the NVIDIA Virtual GPU Software User
Guide in the NVIDIA's Virtual GPU Software Documentation webpage, and
• GPU and vGPU Support in the AHV Administration Guide.
• After you add the first vGPU, to add multiple vGPUs, see Adding Multiple vGPUs to the
Same VM in the AHV Admin Guide.
Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist,
space associated with that vDisk is not reclaimed unless you also delete the VM snapshots.
To increase the memory allocation and the number of vCPUs on your VMs while the VMs
are powered on (hot-pluggable), do the following:
a. In the vCPUs field, you can increase the number of vCPUs on your VMs while the VMs
are powered on.
b. In the Number of Cores Per vCPU field, you can change the number of cores per vCPU
only if the VMs are powered off.
a. In the Volume Groups section, click Add volume group, and then do one of the
following:
» From the Available Volume Groups list, select the volume group that you want to
attach to the VM.
» Click Create new volume group, and then, in the Create Volume Group dialog box,
create a volume group (see Creating a Volume Group in the Prism Web Console
Guide). After you create a volume group, select it from the Available Volume Groups
list.
Repeat these steps until you have added all the volume groups that you want to attach
to the VM.
b. Click Add.
» After you enable this feature on the VM, the status is updated in the VM table view. To
view the status of individual virtual disks (disks that are flashed to the SSD), click the
update disk icon in the Disks pane in the Update VM window.
» You can disable the flash mode feature for individual virtual disks. To update the flash
mode for individual virtual disks, click the update disk icon in the Disks pane and
deselect the Enable Flash Mode check box.
11. To delete the VM, click the Delete action link. A window prompt appears; click the OK
button to delete the VM.
The deleted VM disappears from the list of VMs in the table.
Windows VM Provisioning
Nutanix VirtIO for Windows
Nutanix VirtIO is a collection of drivers for paravirtual devices that enhance the stability and
performance of virtual machines on AHV.
Nutanix VirtIO is available in two formats:
VirtIO Requirements
Requirements for Nutanix VirtIO for Windows.
VirtIO supports the following operating systems:
Note: On Windows 7 and Windows Server 2008 R2, install Microsoft KB3033929 or update the
operating system with the latest Windows Update to enable support for SHA2 certificates.
Procedure
1. Go to the Nutanix Support portal and select Downloads > AHV > VirtIO.
» If you are creating a new Windows VM, download the ISO file. The installer is available on
the ISO if your VM does not have Internet access.
» If you are updating drivers in a Windows VM, download the MSI installer file.
» For the ISO: Upload the ISO to the cluster, as described in Prism Web Console Guide:
Configuring Images.
» For the MSI: open the downloaded file to run the MSI installer.
The Nutanix VirtIO setup wizard shows a status bar and completes installation.
Note: To automatically install Nutanix VirtIO, see Installing or Upgrading Nutanix VirtIO for
Windows on page 96.
If you have already installed Nutanix VirtIO, use the following procedure to upgrade VirtIO to the
latest version. If you have not yet installed Nutanix VirtIO, use the following procedure to install
Nutanix VirtIO.
Procedure
1. Go to the Nutanix Support portal and select Downloads > AHV > VirtIO.
» Download the VirtIO ISO directly to the VM where you want to install Nutanix VirtIO, for easier
installation.
If you choose this option, proceed directly to step 7.
» Download the VirtIO ISO for Windows to your local machine.
If you choose this option, proceed to step 3.
3. Upload the ISO to the cluster, as described in the Configuring Images topic of Prism Web
Console Guide.
4. Locate the VM where you want to install the Nutanix VirtIO ISO and update the VM.
• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: IDE
• IMAGE: Select the Nutanix VirtIO ISO
6. Click Add.
Open the device list and locate the devices that require Nutanix drivers. For each device,
right-click the device, select Update Driver Software, and browse to the drive containing the
VirtIO ISO. Follow the wizard instructions until you receive installation confirmation.
• Upload the Windows installer ISO to your cluster as described in the Web Console Guide:
Configuring Images.
• Upload the Nutanix VirtIO ISO to your cluster as described in the Web Console Guide:
Configuring Images.
Procedure
Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are
creating a Linux VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows
VM with the hardware clock pointing to the desired timezone.
d. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
e. MEMORY: Enter the amount of memory for the VM (in GiBs).
5. If you are creating a Windows VM, add a Windows CD-ROM to the VM.
a. Click the pencil icon next to the CD-ROM that is already present and fill out the indicated
fields.
• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: IDE
• IMAGE: Select the Nutanix VirtIO ISO.
b. Click Add.
• TYPE: DISK
• OPERATION: ALLOCATE ON STORAGE CONTAINER
• BUS TYPE: SCSI
• STORAGE CONTAINER: Select the appropriate storage container.
• SIZE: Enter the number for the size of the hard drive (in GiB).
b. Click Add to add the disk driver.
• TYPE: DISK
• OPERATION: CLONE FROM IMAGE
• BUS TYPE: SCSI
• CLONE FROM IMAGE SERVICE: Click the drop-down menu and choose the image
you created previously.
b. Click Add to add the disk driver.
9. Optionally, after you have migrated or created a VM, add a network interface card (NIC).
What to do next
Install Windows by following Installing Windows on a VM on page 104.
Installing Windows on a VM
Install a Windows virtual machine.
Procedure
6. Select the desired language, time and currency format, and keyboard information.
10. Click Next > Custom: Install Windows only (advanced) > Load Driver > OK > Browse.
The amd64 folder contains drivers for 64-bit operating systems. The x86 folder contains
drivers for 32-bit operating systems.
Note: From Nutanix VirtIO driver version 1.1.5, the driver package contains Windows
Hardware Quality Lab (WHQL) certified drivers for Windows.
13. Select the allocated disk space for the VM and click Next.
Windows shows the installation progress, which can take several minutes.
14. Enter your user name and password information and click Finish.
Installation can take several minutes.
Once you complete the logon information, Windows setup completes installation.
15. Follow the instructions in Installing or Upgrading Nutanix VirtIO for Windows on page 96
to install the other drivers that are part of the Nutanix VirtIO package.
Limitations
If you enable Windows Defender Credential Guard for your AHV guest VMs, the following
optional configurations are not supported:
CAUTION: Use of Windows Defender Credential Guard in your AHV clusters impacts VM
performance. If you enable Windows Defender Credential Guard on AHV guest VMs, VM density
drops by ~15–20%. This expected performance impact is due to nested virtualization overhead
added as a result of enabling credential guard.
Procedure
1. Enable Windows Defender Credential Guard when you are either creating a VM or updating
a VM. Do one of the following:
See UEFI Support for VM on page 114 and Secure Boot Support for VMs on page 120
for more information about these features.
e. Proceed to configure other attributes for your Windows VM.
See Creating a Windows VM on AHV with Nutanix VirtIO on page 101 for more
information.
f. Click Save.
g. Turn on the VM.
Note:
If the VM is configured to use BIOS, install the guest OS again.
If the VM is already configured to use UEFI, skip the step to select Secure Boot.
See UEFI Support for VM on page 114 and Secure Boot Support for VMs on page 120
for more information about these features.
d. Click Save.
e. Turn on the VM.
5. Open command prompt in the Windows VM and apply the Group Policy settings:
> gpupdate /force
If you have not enabled Windows Defender Credential Guard (step 4) and perform this step
(step 5), a warning similar to the following is displayed:
Updating policy...
For more detailed information, review the event log or run GPRESULT /H GPReport.html from the
command line to access information about Group Policy results.
Event Viewer displays a warning for the group policy with an error message that indicates
Secure Boot is not enabled on the VM.
To view the warning message in Event Viewer, do the following:
Note: Ensure that you follow the steps in the order that is stated in this document to
successfully enable Windows Defender Credential Guard.
a. In the Windows VM, open System Information by typing msinfo32 in the search field next
to the Start menu.
b. Verify if the values of the parameters are as indicated in the following screen shot:
Note:
• If you choose to apply the VM-host affinity policy, it limits Acropolis HA and
Acropolis Dynamic Scheduling (ADS) in such a way that a virtual machine cannot
be powered on or migrated to a host that does not conform to the requirements of the
affinity policy, because this policy is mandatorily enforced.
• The VM-host anti-affinity policy is not supported.
• VMs configured with host affinity settings retain these settings if they are migrated
to a new cluster. Remove the VM-host affinity policies applied to a VM that you want
to migrate to another cluster, because the VM retains the UUID of the host, which
prevents the VM from restarting on the destination cluster. Attempts to protect
such VMs succeed. However, some disaster recovery operations, such as migration,
fail, and attempts to power on these VMs also fail.
You can define the VM-host affinity policies by using Prism Element during the VM create or
update operation. For more information, see Creating a VM (AHV) in Prism Web Console Guide
or AHV Administration Guide.
Note:
• Currently, you can only define VM-VM anti-affinity policy by using aCLI. For more
information, see Configuring VM-VM Anti-Affinity Policy on page 113.
• The VM-VM affinity policy is not supported.
Note: If you clone a VM that has affinity policies configured, the policies are not
automatically applied to the cloned VM. However, if a VM is restored from a DR snapshot, the
policies are automatically applied to the VM.
Procedure
2. Create a group.
nutanix@cvm$ acli vm_group.create group_name
3. Add the VMs on which you want to define anti-affinity to the group.
nutanix@cvm$ acli vm_group.add_vms group_name vm_list=vm_name
Replace group_name with the name of the group. Replace vm_name with the name of the VMs
that you want to define anti-affinity on.
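To then enforce anti-affinity on the group, the following is a hedged example that assumes the
vm_group.antiaffinity_set command is available in your aCLI version:
nutanix@cvm$ acli vm_group.antiaffinity_set group_name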
Procedure
Note: You can also perform these power operations by using the V3 API calls. For more
information, see developer.nutanix.com.
Procedure
• Boot faster
Note:
• Nutanix supports the starting of VMs with UEFI firmware in an AHV cluster.
However, if a VM is added to a protection domain and later restored on a different
cluster, the VM loses boot configuration. To restore the lost boot configuration, see
Setting up Boot Device.
• Nutanix also provides limited support for VMs migrated from a Hyper-V cluster.
You can create or update VMs with UEFI firmware by using acli commands, Prism Element
web console, or Prism Central web console. For more information about creating a VM by using
the Prism Element web console or Prism Central web console, see Creating a VM (AHV) on
page 72. For information about creating a VM by using aCLI, see Creating UEFI VMs by Using
aCLI on page 115.
Note: If you are creating a VM by using aCLI commands, you can define the location of the
storage container for UEFI firmware and variables. Prism Element web console or Prism Central
web console does not provide the option to define the storage container to store UEFI firmware
and variables.
For more information about the supported OSes for the guest VMs, see the AHV Guest OS
section in the Compatibility Matrix document.
Procedure
A VM is created with UEFI firmware. Replace vm-name with a name of your choice for the VM.
By default, the UEFI firmware and variables are stored in an NVRAM container. If you want to
store them in a different storage container, specify that container when you create the VM, as
shown in the sketch that follows. Replace NutanixManagementShare with a storage container in
which you want to store the UEFI variables.
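The following is an illustrative sketch, assuming the uefi_boot and nvram_container parameters
of the acli vm.create command; confirm the parameter names for your AOS release:
nutanix@cvm$ acli vm.create vm-name uefi_boot=true nvram_container=NutanixManagementShare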
The UEFI variables are stored in a default NVRAM container. Nutanix recommends you
to choose a storage container with at least RF2 storage policy to ensure the VM high
availability for node failure scenarios. For more information about RF2 storage policy, see
Failure and Recovery Scenarios in the Prism Web Console Guide document.
Note: When you update the location of the storage container, clear the UEFI configuration
and update the location of nvram_container to a container of your choice.
What to do next
Go to the UEFI BIOS menu and configure the UEFI firmware settings. For more information
about accessing and setting the UEFI firmware, see Getting Familiar with UEFI Firmware Menu
on page 116.
Procedure
Tip: To enter UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog
box, and immediately press F2 when the VM starts to boot.
Important: Resetting the VM causes a downtime. We suggest that you reset the VM only
during off-production hours or during a maintenance period.
4. Use the up or down arrow key to go to Device Manager and press Enter.
The Device Manager page appears.
5. In the Device Manager screen, use the up or down arrow key to go to OVMF Platform
Configuration and press Enter.
8. Select Reset and click Submit in the Power off/Reset dialog box to restart the VM.
After you restart the VM, the OS displays the changed resolution.
Procedure
Tip: To enter UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog
box, and immediately press F2 when the VM starts to boot.
Important: Resetting the VM causes a downtime. We suggest that you reset the VM only
during off-production hours or during a maintenance period.
5. In the Boot Manager screen, use the up or down arrow key to select the boot device and
press Enter.
The boot device is saved. After you select and save the boot device, the VM boots up with
the new boot device.
Procedure
Tip: To enter UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog
box, and immediately press F2 when the VM starts to boot.
Important: Resetting the VM causes a downtime. We suggest that you reset the VM only
during off-production hours or during a maintenance period.
5. In the Boot Maintenance Manager screen, use the up or down arrow key to go to the Auto
Boot Time-out field.
The default boot-time value is 0 seconds.
6. In the Auto Boot Time-out field, enter the boot-time value and press Enter.
The boot-time value is changed. The VM starts after the defined boot-time value.
Limitations
Secure Boot for guest VMs has the following limitations:
• Nutanix does not support converting a VM that uses IDE disks or legacy BIOS to a VM that
uses Secure Boot.
• The minimum supported version of the Nutanix VirtIO package for Secure boot-enabled VMs
is 1.1.6.
Procedure
Note: Specifying the machine type is required to enable the secure boot feature. UEFI is
enabled by default when the Secure Boot feature is enabled.
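For example, the following is a hedged sketch that assumes the secure_boot and machine_type
parameters of the acli vm.update command:
nutanix@cvm$ acli vm.update vm-name secure_boot=true machine_type=q35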
Procedure
Note:
• If you disable the secure boot flag alone, the machine type remains q35, unless
you disable that flag explicitly.
• UEFI is enabled by default when the Secure Boot feature is enabled. Disabling
Secure Boot does not revert the UEFI flags.
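The following is an illustrative sketch of disabling Secure Boot, again assuming the secure_boot
parameter of acli vm.update; the machine type remains q35 unless you change it explicitly:
nutanix@cvm$ acli vm.update vm-name secure_boot=false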
Note: You cannot decrease the memory allocation and the number of CPUs on your VMs while
the VMs are powered on.
You can change the memory and CPU configuration of your VMs by using the Acropolis CLI
(aCLI), Prism Element (see Managing a VM (AHV) in the Prism Web Console Guide), or Prism
Central (see Managing a VM (AHV and Self Service) in the Prism Central Guide).
See the AHV Guest OS Compatibility Matrix for information about operating systems on which
you can hot plug memory and CPUs.
Memory OS Limitations
1. On Linux operating systems, the Linux kernel might not make the hot-plugged memory
online. If the memory is not online, you cannot use the new memory. Perform the following
procedure to make the memory online.
1. Identify the memory block that is offline.
Display the state of each memory block, replacing XXX with the memory block number.
$ cat /sys/devices/system/memory/memoryXXX/state
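To bring an offline memory block online (a minimal sketch assuming the standard Linux sysfs
interface; replace XXX with the number of the block reported as offline):
$ echo online > /sys/devices/system/memory/memoryXXX/state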
2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot plugging more
memory into that VM so that the final memory is greater than 3 GB results in a memory-
overflow condition. To resolve the issue, restart the guest OS (CentOS 7.2) with the following
setting:
swiotlb=force
CPU OS Limitation
On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo, you
might have to bring the CPUs online. For each hot-plugged CPU, run the following command to
bring the CPU online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online
Replace vm-name with the name of the VM and new_memory_size with the memory size.
Replace vm-name with the name of the VM and n with the number of CPUs.
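The following are illustrative sketches of the corresponding aCLI commands, assuming the memory
and num_vcpus parameters of acli vm.update; verify the syntax for your AOS release:
nutanix@cvm$ acli vm.update vm-name memory=new_memory_size
nutanix@cvm$ acli vm.update vm-name num_vcpus=n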
Procedure
2. Check how many NUMA nodes are available on each AHV host in the cluster.
nutanix@cvm$ hostssh "numactl --hardware"
The example output shows that each AHV host has two NUMA nodes.
Replace <vm_name> with the name of the VM on which you want to enable vNUMA or vUMA.
Replace x with the values for the following indicated parameters:
This command creates a VM with 2 vNUMA nodes, 10 vCPUs and 75 GB memory for each
vNUMA node.
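The following is a hedged example of such a command, assuming the num_vnuma_nodes parameter
of acli vm.create; the vCPU and memory totals shown are for the whole VM and are split evenly
across the vNUMA nodes:
nutanix@cvm$ acli vm.create vm_name num_vcpus=20 memory=150G num_vnuma_nodes=2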
Note: You can configure either pass-through or a vGPU for a guest VM but not both.
This guide describes the concepts related to the GPU and vGPU support in AHV. For the
configuration procedures, see the Prism Web Console Guide.
For driver installation instructions, see the NVIDIA Grid Host Driver for Nutanix AHV Installation
Guide.
Supported GPUs
The following GPUs are supported:
Note: These GPUs are supported only by the AHV version that is bundled with the AOS release.
Limitations
GPU pass-through support has the following limitations:
Note: NVIDIA does not support Windows Guest VMs on the C-series NVIDIA vGPU types. See
the NVIDIA documentation on Virtual GPU software for more information.
Note: If the specified license is not available on the licensing server, the VM starts up and
functions normally, but the vGPU runs with reduced capability.
You must determine the vGPU profile that the VM requires, install an appropriate license on the
licensing server, and configure the VM to use that license and vGPU type. For information about
licensing for different vGPU types, see the NVIDIA GRID licensing documentation.
Guest VMs check out a license over the network when starting up and return the license when
shutting down. As the VM is powering on, it checks out the license from the licensing server.
When a license is checked back in, the vGPU is returned to the vGPU resource pool.
When powered on, guest VMs use a vGPU in the same way that they use a physical GPU that is
passed through.
• Memory is not reserved for the VM on the failover host by the HA process. When the VM fails
over, if sufficient memory is not available, the VM cannot power on.
• vGPU resource is not reserved on the failover host. When the VM fails over, if the required
vGPU resources are not available on the failover host, the VM cannot power on.
Note: ADS support requires that live migration of VMs with vGPUs be operational in the cluster.
See Live Migration of VMs with vGPUs above for the minimum NVIDIA and AOS versions that
support live migration of VMs with vGPUs.
When a number of VMs with vGPUs are running on a host and you enable ADS support for
the cluster, the Lazan manager invokes VM migration tasks to resolve resource hotspots or
fragmentation in the cluster to power on incoming vGPU VMs. The Lazan manager can migrate
vGPU-enabled VMs to other hosts in the cluster only if:
• The other hosts support compatible or identical vGPU resources as the source host (hosting
the vGPU-enabled VMs).
• The host affinity is not set for the vGPU-enabled VM.
For more information about limitations, see Live Migration of VMs with Virtual GPUs on
page 137 and Limitations of Live Migration Support on page 137.
For more information about ADS, see Acropolis Dynamic Scheduling in AHV on page 6.
Note: Multiple vGPUs on the same VM are supported on NVIDIA Virtual GPU software version
10.1 (440.53) or later.
You can deploy virtual GPUs of different types. A single physical GPU can be divided into a
number of vGPUs, depending on the type of vGPU profile that is used on the physical GPU.
Each physical GPU on a GPU board supports more than one type of vGPU profile. For example,
a Tesla® M60 GPU device provides different types of vGPU profiles like M60-0Q, M60-1Q,
M60-2Q, M60-4Q, and M60-8Q.
You can only add multiple vGPUs of the same type of vGPU profile to a single VM. For example,
consider that you configure a VM on a node that has one NVIDIA Tesla® M60 GPU board. Tesla®
M60 provides two physical GPUs, each supporting one M60-8Q (profile) vGPU, thus supporting
a total of two M60-8Q vGPUs for the entire host.
For restrictions on configuring multiple vGPUs on the same VM, see Restrictions for Multiple
vGPU Support on page 130.
For steps to add multiple vGPUs to the same VM, see Creating a VM (AHV) and Adding Multiple
vGPUs to a VM in Prism Web Console Guide or Prism Central Guide.
• All the vGPUs that you assign to one VM must be of the same type. In the aforesaid example,
with the Tesla® M60 GPU device, you can assign multiple M60-8Q vGPU profiles. You cannot
assign one vGPU of the M60-1Q type and another vGPU of the M60-8Q type.
Note: You can configure any number of vGPUs of the same type on a VM. However, the
cluster calculates a maximum number of vGPUs of the same type per VM. This number is
defined as max_instances_per_vm. This number is variable and changes based on the GPU
resources available in the cluster and the number of VMs deployed. If the number of vGPUs
of a specific type that you configured on a VM exceeds the max_instances_per_vm number,
then the VM fails to power on and the following error message is displayed:
Operation failed: NoHostResources: No host has enough available GPU for VM <name of VM>(UUID
of VM).
You could try reducing the GPU allotment...
When you configure multiple vGPUs on a VM, after you select the appropriate vGPU type
for the first vGPU assignment, Prism (Prism Central and Prism Element Web Console)
Note:
• Configure multiple vGPUs only of the highest type using Prism. The highest type of vGPU
profile is based on the driver deployed in the cluster. In the aforesaid example, on a Tesla®
M60 device, you can only configure multiple vGPUs of the M60-8Q type. Prism prevents you
from configuring multiple vGPUs of any other type such as M60-2Q.
Note:
You can use CLI (acli) to configure multiple vGPUs of other available types.
See Acropolis Command-Line Interface (aCLI) for the aCLI information. Use the
vm.gpu_assign <vm.name> gpu=<gpu-type> command multiple times, once for each
vGPU, to configure multiple vGPUs of other available types.
See the GPU board and software documentation for more information.
• Configure either a passthrough GPU or vGPUs on the same VM. You cannot configure both
passthrough GPU and vGPUs. Prism automatically disallows such configurations after the
first GPU is configured.
• The VM powers on only if the requested type and number of vGPUs are available in the host.
In the aforesaid example, the VM, which is configured with two M60-8Q vGPUs, fails to
power on if another VM sharing the same GPU board is already using one M60-8Q vGPU.
This is because the Tesla® M60 GPU board allows only two M60-8Q vGPUs. Of these, one
is already used by another VM. Thus, the VM configured with two M60-8Q vGPUs fails to
power on due to unavailability of required vGPUs.
Important:
Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and
Restrictions for Multiple vGPU Support in the AHV Admin Guide.
After you add the first vGPU, do the following on the Create VM or Update VM dialog box (the
main dialog box) to add more vGPUs:
Procedure
4. Repeat the steps for each vGPU addition you want to make.
• For vGPUs created with NVIDIA Virtual GPU software version 10.1 (440.53).
• On AOS 5.18.1 or later.
Important: In an HA event involving any GPU node, the node locality of the affected vGPU VMs
is not restored after GPU node recovery. The affected vGPU VMs are not migrated back to their
original GPU host intentionally to avoid extended VM stun time expected while migrating vGPU
frame buffer. If vGPU VM node locality is required, migrate the affected vGPU VMs to the desired
host manually. For information about the steps to migrate a live VM with vGPUs, see Migrating
Live a VM with Virtual GPUs in the Prism Central Guide and the Prism Web Console Guide.
Note:
Important frame buffer and VM stun time considerations are:
• The GPU board (for example, NVIDIA Tesla M60) vendor provides the information
for maximum frame buffer size of vGPU types (for example, M60-8Q type) that can
be configured on VMs. However, the actual frame buffer usage may be lower than
the maximum sizes.
• The VM stun time depends on the number of vGPUs configured on the VM being
migrated. Stun time may be longer when multiple vGPUs are operating on the VM.
The stun time also depends on network factors such as the bandwidth available for use
during the migration.
For information about the limitations applicable to the live migration support, see Limitations
of Live Migration Support on page 137 and Restrictions for Multiple vGPU Support on
page 130.
For information about the steps to migrate live a VM with vGPUs, see Migrating Live a VM with
Virtual GPUs in the Prism Central Guide and the Prism Web Console Guide.
• Live migration is supported for VMs configured with single or multiple virtual GPUs. It is not
supported for VMs configured with passthrough GPUs.
• The target host for the migration must have adequate and available GPU resources, with the
same vGPU types as configured for the VMs to be migrated, to support the vGPUs on the
VMs that need to be migrated.
See Restrictions for Multiple vGPU Support on page 130 for more details.
• The VMs with vGPUs that need to be migrated live cannot be protected with high
availability.
• Ensure that the VM is not powered off.
Procedure
1. Run the following aCLI command to check if console support is enabled or disabled for the
VM with vGPUs.
acli> vm.get vm-name
Where vm-name is the name of the VM for which you want to check the console support
status.
The step result includes the following parameter for the specified VM:
gpu_console=False
Where False indicates that console support is not enabled for the VM. This parameter
is displayed as True when you enable console support for the VM. The default value for
gpu_console= is False since console support is disabled by default.
Note: The console may not display the gpu_console parameter in the output of the vm.get
command if the gpu_console parameter was not previously enabled.
2. Run the following aCLI command to enable or disable console support for the VM with
vGPU:
vm.update vm-name gpu_console=true | false
Where:
• true—indicates that you are enabling console support for the VM with vGPU.
• false—indicates that you are disabling console support for the VM with vGPU.
3. Run the vm.get command to check whether the gpu_console value is true (console support is
enabled) or false (console support is disabled), as you configured it.
If the value in the vm.get command output is not what you expect, perform a Guest Shutdown
of the VM with vGPU. Next, run the vm.on vm-name aCLI command to turn the VM on again.
Then run the vm.get command and check the gpu_console= value.
4. Click a VM name in the VM table view to open the VM details page. Click Launch Console.
The Console opens but only a black screen is displayed.
• An unmanaged network does not perform IPAM functions and gives VMs direct access to an
external Ethernet network. Therefore, the procedure for configuring the PXE environment
for AHV VMs is the same as for a physical machine or a VM that is running on any other
hypervisor. VMs obtain boot file information from the DHCP or PXE server on the external
network.
• A managed network intercepts DHCP requests from AHV VMs and performs IP address
management (IPAM) functions for the VMs. Therefore, you must add a TFTP server and the
required boot file information to the configuration of the managed network. VMs obtain boot
file information from this configuration.
A VM that is configured to use PXE boot boots over the network on subsequent restarts until
the boot order of the VM is changed.
Procedure
1. Log on to the Prism web console, click the gear icon, and then click Network Configuration
in the menu.
The Network Configuration dialog box is displayed.
a. Select the Configure Domain Settings check box and do the following in the fields shown
in the domain settings sections:
• In the TFTP Server Name field, specify the host name or IP address of the TFTP server.
If you specify a host name in this field, make sure to also specify DNS settings in the
Domain Name Servers (comma separated), Domain Search (comma separated), and
Domain Name fields.
• In the Boot File Name field, specify the boot file URL and boot file that the VMs
must use. For example, tftp://ip_address/boot_filename.bin, where ip_address is
the IP address (or host name, if you specify DNS settings) of the TFTP server and
boot_filename.bin is the PXE boot file.
b. Click Save.
4. Click Close.
Procedure
2. Create a VM.
Replace vm with a name for the VM, and replace num_vcpus and memory with the number of
vCPUs and amount of memory that you want to assign to the VM, respectively.
For example, create a VM named nw-boot-vm.
nutanix@cvm$ acli vm.create nw-boot-vm num_vcpus=1 memory=512
Replace vm with the name of the VM and replace network with the name of the network. If the
network is an unmanaged network, make sure that a DHCP server and the boot file that the
VM requires are available on the network. If the network is a managed network, configure the
5. Update the boot device setting so that the VM boots over the network.
nutanix@cvm$ acli vm.update_boot_device vm mac_addr=mac_addr
Replace vm with the name of the VM and mac_addr with the MAC address of the virtual
interface that the VM must use to boot over the network.
For example, update the boot device setting of the VM named nw-boot-vm so that the VM
uses the virtual interface with MAC address 00-00-5E-00-53-FF.
nutanix@cvm$ acli vm.update_boot_device nw-boot-vm mac_addr=00-00-5E-00-53-FF
Replace vm_list with the name of the VM. Replace host with the name of the host on which
you want to start the VM.
For example, start the VM named nw-boot-vm on a host named host-1.
nutanix@cvm$ acli vm.on nw-boot-vm host="host-1"
Procedure
1. Use WinSCP, with SFTP selected, to connect to Controller VM through port 2222 and start
browsing the DSF datastore.
Note: The root directory displays storage containers and you cannot change it. You can only
upload files to one of the storage containers and not directly to the root directory. To create
or delete storage containers, you can use the Prism user interface.
2. Authenticate by using Prism username and password or, for advanced users, use the public
key that is managed through the Prism cluster lockdown user interface.
Note:
• vDisk load balancing is disabled by default for volume groups that are directly
attached to VMs.
However, vDisk load balancing is enabled by default for volume groups that are
attached to VMs by using a data services IP address.
• You can attach a maximum of 10 load-balanced volume groups per guest
VM.
• For Linux VMs, ensure that the SCSI device timeout is 60 seconds. For
information about how to check and modify the SCSI device timeout, see the
Red Hat documentation at https://access.redhat.com/documentation/en-
us/red_hat_enterprise_linux/5/html/online_storage_reconfiguration_guide/
task_controlling-scsi-command-timer-onlining-devices.
Perform the following procedure to enable load balancing of vDisks by using aCLI.
Procedure
Note: To modify an existing volume group, you must first detach all the VMs that are
attached to that volume group before you enable vDisk load balancing.
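The following is an illustrative sketch, assuming the load_balance_vm_attachments parameter of
the acli vg.update command; replace vg-name with the name of your volume group:
nutanix@cvm$ acli vg.update vg-name load_balance_vm_attachments=true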
General Considerations
You cannot perform the following operations during an ongoing vDisk migration:
• Clone a VM
• Take a snapshot
• Resize, clone, or take a snapshot of the vDisks that are being migrated
• Migrate images
• Migrate volume groups
Note: During vDisk migration, the logical usage of a vDisk is more than the total capacity of the
vDisk. The issue occurs because the logical usage of the vDisk includes the space occupied in
both the source and destination containers. Once the migration is complete, the logical usage of
the vDisk returns to its normal value.
Migration of vDisks stalls if sufficient storage space is not available in the target storage
container. Ensure that the target container has sufficient storage space before you
begin migration.
Procedure
Replace vm-name with the name of the VM whose vDisks you want to migrate and target-
container with the name of the target container.
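For example, to migrate all vDisks of the VM, the following sketch is consistent with the
vDisk-level commands shown below:
nutanix@cvm$ acli vm.update_container vm-name container=target-container wait=false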
» Migrate specific vDisks by using either the UUID of the vDisk or address of the vDisk.
Migrate specific vDisks by using the UUID of the vDisk.
nutanix@cvm$ acli vm.update_container vm-name device_uuid_list=disk-UUID container=target-
container wait=false
Replace vm-name with the name of VM, disk-UUID with the UUID of the disk, and target-
container with the name of the target storage container.
Run nutanix@cvm$ acli vm.get <vm-name> to determine the UUID of the vDisk.
You can migrate multiple vDisks at a time by specifying a comma-separated list of vDisk
UUIDs.
Alternatively, you can migrate vDisks by using the address of the vDisk.
nutanix@cvm$ acli vm.update_container vm-name disk_addr_list=disk-address
container=target-container wait=false
Replace vm-name with the name of VM, disk-address with the address of the disk, and
target-container with the name of the target storage container.
Run nutanix@cvm$ acli vm.get <vm-name> to determine the address of the vDisk.
Following is the format of the vDisk address:
bus.index
Combine the values of bus and index as shown in the following example:
nutanix@cvm$ acli vm.update_container TestUVM_1 disk_addr_list=scsi.0 container=test-
container-17475
You can migrate multiple vDisks at a time by specifying a comma-separated list of vDisk
addresses.
3. Check the status of the migration in the Tasks menu of the Prism Element web console.
• If you cancel an ongoing migration, AOS retains the vDisks that have not yet been
migrated in the source container. AOS does not migrate vDisks that have already
been migrated to the target container back to the source container.
• If sufficient storage space is not available in the original storage container,
migration of vDisks back to the original container stalls. To resolve the issue,
ensure that the source container has sufficient storage space.
OVAs
An Open Virtual Appliance (OVA) file is a tar archive file created by converting a virtual
machine (VM) into an Open Virtualization Format (OVF) package for easy distribution and
deployment. OVAs help you quickly create, move, or deploy VMs on different hypervisors.
Prism Central helps you perform the following operations with OVAs:
• QCOW2: Default disk format auto-selected in the Export as OVA dialog box.
• VMDK: Deselect QCOW2 and select VMDK, if required, before you submit the VM export
request when you export a VM.
• When you export a VM or upload an OVA and the VM or OVA does not have any disks,
the disk format is irrelevant.
• Upload an OVA to multiple clusters using a URL as the source for the OVA. You can upload
an OVA only to a single cluster when you use the local OVA File source.
• Perform the OVA operations only with appropriate permissions. You can run the OVA
operations that you have permissions for, based on your assigned user role.
• The OVA that results from exporting a VM on AHV is compatible with any AHV version 5.18
or later.
License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.
Conventions
Convention            Description
root@host# command    The commands are executed as the root user in the vSphere or Acropolis host shell.
> command             The commands are executed in the Hyper-V host shell.
Version
Last modified: September 3, 2021 (2021-09-03T12:43:03+05:30)