VMware Cloud on AWS Operations Guide
25 October 2021
SDDC Version 1.16
VMware Cloud on AWS
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2017-2021 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
Add Hosts
Remove Hosts
About Elastic DRS
How the Elastic DRS Algorithm Works
Select Elastic DRS Policy
Introduction to Auto-Remediation
Auto-Remediation High-Level Architecture
Functions of Auto-Remediation
How Do You Remediate with Auto-Scaler
Events in Auto-Scaler
Using Policies and Profiles
Create or Delete a VM-Host Affinity Policy
Create or Delete a VM-Host Anti-Affinity Policy
Create or Delete a VM-VM Affinity Policy
Create or Delete a VM-VM Anti-Affinity Policy
Create or Delete a Disable DRS vMotion Policy
Microsoft Product Licenses in VMware Cloud on AWS
License Mobility
Select License Options from the VMC Console
Deploying Microsoft Server Instances
Activate or Reactivate a Windows Server VM
Converting Clusters from i3 to i3en Hosts
9 Troubleshooting
Get Support
Unable to Connect to VMware Cloud on AWS
Unable to Connect to vCenter Server
Unable to Select Subnet When Creating SDDC
Unable to Copy Changed Password Into vCenter Login Page
Compute Workloads Are Unable to Reach an On-Premises DNS Server Over a Policy-Based VPN
About VMware Cloud on AWS Operations
The VMware Cloud on AWS Operations Guide provides information about configuring advanced
SDDC features that support ongoing operation of your VMware Cloud on AWS SDDC, including
storage management, provisioning, and seamless interoperation with your on-premises data
center.
Intended Audience
This guide is primarily for VMware Cloud on AWS organization members who have the
CloudAdmin role or another role that includes administrative rights over objects owned by your
organization. It covers operational areas like provisioning your SDDC with content from your
on-premises datacenter, using AWS services like S3 and Direct Connect, and integrating VMware
Cloud on AWS with other VMware and Amazon tools.
We assume you already have experience using an SDDC with a management network as
described in the VMware Cloud on AWS Getting Started guide. Experience configuring and
managing vSphere in an on-premises environment and familiarity with virtualization concepts are
assumed. In-depth knowledge of Amazon Web Services is useful, but is not required.
1 About Software-Defined Data Centers
A VMware Cloud on AWS Software-Defined Data Center (SDDC) includes compute, storage, and
networking resources.
Each SDDC runs in an Amazon Virtual Private Cloud (VPC) and provides a full VMware stack,
including vCenter Server, NSX-T software-defined networking, vSAN software-defined storage,
and one or more ESXi hosts that provide compute and storage resources to your workloads.
- Creating and Managing SDDC Deployment Groups with VMware Transit Connect™
- Using VMware Tanzu™ Kubernetes Grid™ Service with VMware Cloud on AWS
Each version of the SDDC software has an expiration date. SDDCs whose software version is past
the expiration date are not guaranteed support from VMware.
SDDC Version    Expiration Date
1.15            TBD
1.14            TBD
All limits listed are hard limits unless otherwise indicated. A hard limit cannot be changed. Any
limit described as a soft limit may be increased upon request. Contact VMware Support to
request an increase to a soft limit.
1.16      7.0.3 (Build 18710037)    7.0.3 (Build 18725380)    3.1.3 (Build 18707895)    19 (version 17 is the default)
1.15v2    7.0.2 (Build 18186873)    7.0.2 (Build 18241532)    3.1.2 (Build 18196142)    17
1.15      7.0.2 (Build 18110030)    7.0.2 (Build 18137590)    3.1.2 (Build 18112709)    17
1.13      7.0.1 (Build 17230216)    7.0.1 (Build 17182107)    3.0.2 (Build 17148774)    17
1.12v5    7.0.1 (Build 17460241)    7.0.1 (Build 17589294)    3.0.2 (Build 17367165)    17
1.12v2    7.0.1 (Build 17053012)    7.0.1 (Build 17053011)    3.0.2 (Build 17101807)    17
- VMware HCX
Visit the VMware Cloud on AWS Dev Center to find out more about the available APIs, SDKs,
CLIs, and other resources. You can also use many of the native vSphere tools that apply to your
on-premises vSphere installation to automate vSphere operations in the SDDC. See Working with
the Developer Center in the vSphere Documentation.
You can deploy an SDDC with a single host or with multiple hosts on VMware Cloud on AWS.
If you are deploying a Single Host SDDC, you can delay linking your customer AWS account for
up to two weeks. You cannot scale up a Single Host SDDC to a multiple host SDDC until you link
an AWS account. If you are deploying a multiple host SDDC, you must link your customer AWS
account when you deploy the SDDC.
- The subnet must be in an AWS Availability Zone (AZ) where VMware Cloud on AWS is available. Start by creating a subnet in every AZ in the AWS Region where the SDDC will be created (see the sketch following this list). This helps you identify all AZs where an SDDC can be deployed and select the one that best meets your SDDC placement needs, whether you want to keep your VMC workloads close to or isolated from your AWS workloads running in a particular AZ. See Creating a Subnet in Your VPC in the AWS documentation for information about how to use the Amazon VPC console to create a subnet in your VPC.
- The subnet must exist in the connected AWS account. It cannot be one owned by and shared from another account.
- The AWS account being linked must have sufficient capacity to create a minimum of 17 ENIs per SDDC in each region where an SDDC is deployed. Although you cannot provision more than 16 hosts in a cluster, SDDC operations including planned maintenance and Elastic DRS can require us to temporarily add as many as 16 more hosts, so we recommend using an AWS account that has sufficient capacity for 32 ENIs per SDDC per region.
- We recommend dedicating a /26 CIDR block to each SDDC and not using that subnet for any other AWS services or EC2 instances. Because some of the IP addresses in this block are reserved for internal use, a /26 CIDR block is the smallest subnet that can accommodate SDDC IP address requirements.
- Any VPC subnets on which AWS services or instances communicate with the SDDC must be associated with the main route table of the connected VPC. Use of a custom route table or replacement of the main route table is not supported. By default, AWS limits the size of the main route table to 50 routes. Because the main route table must accommodate an entry for each routed SDDC network segment as well as the management network CIDR and any additional routes you create directly in your AWS account, the default limit might not be adequate for your SDDC networks, especially if you connect more than one SDDC to the VPC. You can request a route table size increase as described in Amazon VPC quotas.
- If necessary, you can link multiple SDDCs to a VPC if the VPC subnet used for ENI connectivity has a large enough CIDR block to accommodate them. Because all SDDCs in a VPC use the same main route table, make sure that network segments in those SDDCs do not overlap with each other or the VPC's primary CIDR block. Workload VMs on routed SDDC networks can communicate with all subnets in the VPC's primary CIDR block, but are unaware of other CIDR blocks that might exist in the VPC.
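If you want to script the subnet-per-AZ step, the following Python sketch (using the boto3 AWS SDK) creates one small probe subnet in each available AZ of a region. The VPC ID, region, and 10.0.x.0/26 numbering are hypothetical placeholders, not values prescribed by VMware Cloud on AWS.

# Sketch: create one probe subnet per Availability Zone in the connected VPC so
# you can see which AZs are candidates for SDDC placement. Placeholder values only.
import boto3

VPC_ID = "vpc-0123456789abcdef0"   # hypothetical VPC ID in the connected account
REGION = "us-west-2"               # region where the SDDC will be created

ec2 = boto3.client("ec2", region_name=REGION)
azs = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)["AvailabilityZones"]

for i, az in enumerate(azs):
    cidr = f"10.0.{i}.0/26"        # a dedicated /26 per SDDC is the recommended minimum
    subnet = ec2.create_subnet(
        VpcId=VPC_ID, CidrBlock=cidr, AvailabilityZone=az["ZoneName"]
    )["Subnet"]
    print(az["ZoneName"], subnet["SubnetId"])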
- Management Gateway (MGW) SNAT: Provides the SNAT address for traffic egressing the MGW to the Internet.
- Compute Gateway (CGW) SNAT: Provides the default SNAT address for traffic egressing the CGW to the Internet.
- vCenter Server Public IP: Provides the IP address used for vCenter Server when the vCenter FQDN is set to Public IP. See Set vCenter Server FQDN Resolution Address. This EIP is always consumed, even if you set the vCenter FQDN to Private IP.
- The linked VPC must have two subnets, one in each AZ occupied by the cluster.
- A given SDDC can contain either standard (single availability zone) clusters or stretched clusters, but not a mix of both.
- You cannot convert a stretched cluster to a standard cluster or convert a standard cluster to a stretched cluster.
- You need a minimum of two hosts (one in each AZ) to create a stretched cluster. Hosts must be added in pairs.
For limitations that affect all stretched clusters, see VMware Configuration Maximums. Additional restrictions apply to two-host stretched clusters:
- Large-sized SDDC appliances are not supported with two-host stretched clusters.
- If you scale up a two-host stretched cluster to a four-host or larger stretched cluster, you cannot later scale back down to a two-host stretched cluster.
To create an SDDC, pick an AWS region to host it, give the SDDC a name, and specify how many
ESXi hosts you want the SDDC to contain. If you don't already have an AWS account, you can
still create a starter configuration SDDC that contains a single ESXi host.
Procedure
See Available AWS Regions for a list of available regions and the features they support.
- Single Host: Select this option to create a Single Host Starter Configuration SDDC. Single Host Starter Configuration SDDCs expire after 60 days. For more information, see Deploying a Single Host SDDC Starter Configuration.
- Multi-Host: Select this option to create an SDDC with two or more hosts.
- Stretched Cluster: If you create a multiple-host SDDC, you also have the option to create a stretched cluster that spans two availability zones (AZs). This configuration provides data redundancy in the event that there is a problem with one of the AZs. The system deploys management VMs in the first AZ you select. Both AZs can be used by your workloads. Either can be used for failover. You need a minimum of two hosts (one in each AZ) to create a stretched cluster. Hosts must be added in pairs.
Currently available host types are i3 and i3en. For more information on host types, see
VMC on AWS Host Types.
You can change this name later if you want to. See Rename an SDDC in the VMware
Cloud on AWS Operations Guide.
e If you are creating a multiple host SDDC, specify the initial Number of Hosts you want in
the SDDC.
Note Storage capacity, performance, and redundancy are all affected by the number of
hosts in the SDDC. See Storage Capacity and Data Redundancy for more information.
Host Capacity and Total Capacity update to reflect the number of hosts you've specified.
By default, a new SDDC is created with medium-sized NSX Edge and vCenter Server
appliances. Large-sized appliances are recommended for deployments with more than 30
hosts or 3000 VMs or in any other situation where management cluster resources might
be oversubscribed. Large-sized appliances are also required if you want to Configure a Multi-
Edge SDDC With Traffic Groups.
To deploy the SDDC with large appliances, click Show Advanced Configuration and select
Large from the drop-down control.
Note Large-sized appliances are not supported for SDDCs with a two-host stretched cluster
as the primary cluster.
If you create the SDDC with a medium appliance configuration and find that you need
additional management cluster resources, you can upsize the configuration to large. See
Upsize SDDC Management Appliances.
See AWS VPC Configuration and Availability Requirements for important information about
requirements for the AWS account and subnets you create in it.
- Skip for now: If you don't have an AWS account or don't want to connect to one you have now, you can postpone this step for up to 14 days. This option is currently available for Single Host SDDCs only.
- Use an existing AWS account: From the Choose an AWS account drop-down, select an AWS account that was previously connected to another SDDC. If no accounts are listed in the drop-down, you must Connect to a new AWS account.
- Connect a new AWS account: From the Choose an AWS account drop-down, select Connect to a new AWS account and follow the instructions on the page. The VMC Console shows the progress of the connection.
6 Select a VPC and Subnet from the drop-down menu and click Next.
Enter an IP address range for the management subnet as a CIDR block or leave the text box blank to use the default, which is 10.2.0.0/16. You can't change these values after the SDDC has been created, so consider the following when you specify the Management Subnet address range:
- Choose a range of IP addresses that does not overlap with the AWS subnet you are connecting to. If you plan to connect your SDDC to an on-premises data center, the IP address range of the subnet must be unique within your enterprise network.
- If you are deploying a single-host SDDC, the IP address range 192.168.1.0/24 is reserved for the default compute network of the SDDC. If you specify a management network address range that overlaps that range, single-host SDDC creation fails. If you are deploying a multi-host SDDC, no compute gateway logical network is created during deployment, so you'll need to create one after the SDDC is deployed.
- CIDR blocks of size 16, 20, or 23 are supported, and must be in one of the "private address space" blocks defined by RFC 1918 (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16). The primary factor in choosing a Management CIDR block size is the anticipated scalability requirements of the SDDC. The management CIDR block cannot be changed after the SDDC has been deployed, so a /23 block is appropriate only for SDDCs that will not require much growth in capacity. (A small overlap-check sketch follows the note after the host-count table below.)
CIDR block size    Number of hosts (Single AZ)    Number of hosts (Multi AZ)
/23                27                             22
/20                251                            246
Note Because VMware Cloud on AWS reserves the capacity to add hosts (and their IP
addresses) to every SDDC to meet SLA requirements during maintenance operations or
in case of host failure, the number of usable hosts is reduced from what's shown here by
two per SDDC, plus one more per cluster. This means that, for example, an SDDC with
two clusters and a /23 management CIDR has enough IP addresses to deploy up to 23
hosts. The remaining addresses are reserved to be used when needed by hosts deployed
to meet SLA requirements.
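Before you deploy, you can sanity-check a proposed management CIDR with the Python standard library. This sketch verifies the block size, RFC 1918 membership, and overlap against the connected VPC subnet, an on-premises network, and the reserved single-host compute network; the example networks are hypothetical.

# Sketch: pre-flight check for a proposed SDDC management CIDR block.
import ipaddress

RFC1918 = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def check_management_cidr(cidr, existing_cidrs):
    net = ipaddress.ip_network(cidr)
    if net.prefixlen not in (16, 20, 23):
        raise ValueError("Management CIDR must be a /16, /20, or /23 block")
    if not any(net.subnet_of(block) for block in RFC1918):
        raise ValueError("Management CIDR must be RFC 1918 private address space")
    for other in existing_cidrs:
        if net.overlaps(ipaddress.ip_network(other)):
            raise ValueError(f"Management CIDR overlaps {other}")
    return net

# Hypothetical connected-VPC subnet, on-premises network, and the reserved
# single-host default compute network 192.168.1.0/24.
check_management_cidr("10.2.0.0/16", ["172.31.0.0/20", "10.100.0.0/16", "192.168.1.0/24"])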
8 Acknowledge that you understand and take responsibility for the costs you incur when you
deploy an SDDC, then click DEPLOY SDDC to create the SDDC.
Charges begin when you click DEPLOY SDDC. You cannot pause or cancel the deployment
process after it starts. You won't be able to use the SDDC until deployment is complete.
Deployment typically takes about two hours.
What to do next
- For full-scale SDDCs, you must configure a logical segment for workload VM networking. Single host SDDCs have a default logical segment. A banner is displayed on the SDDC card after creation is complete to indicate whether you need to create a logical segment. See Create a Network Segment.
- For single host SDDCs, a banner is displayed on the SDDC card to indicate that a default logical segment has been created for this SDDC. If this default segment causes a conflict, delete it and create a new segment. See Create a Network Segment.
Note You cannot currently purchase an SDDC in the Asia Pacific (Osaka) region through AWS.
SDDCs in the Osaka region cannot be members of an SDDC Group.
AWS Region Name       Stretched Cluster Support    Compliance Hardening Support    Enablement Required
Canada (Central)      Y                            N                               N
Europe (Paris)        Y                            N                               N
Europe (Stockholm)    Y                            N                               N
US East (Ohio)        Y                            N                               N
Permissions Statement
Initial permissions required to create the SDDC are shown in italics. These permissions are
removed from the role after the SDDC has been created. The others remain with this role in
your AWS account.
Important You must not change any of the remaining AWS roles and permissions. Doing so will
render your SDDC inoperable.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeRouteTables",
"ec2:CreateRoute",
"ec2:DeleteRoute",
"ec2:ReplaceRoute"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeNetworkInterfaces",
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:CreateNetworkInterfacePermission",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:DescribeNetworkInterfaceAttribute",
"ec2:DescribeVpcs",
"ec2:DescribeSubnets"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:AssignPrivateIpAddresses",
"ec2:UnassignPrivateIpAddresses"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"cloudformation:CreateStack",
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackEvents",
"cloudformation:DescribeStackResource",
"cloudformation:DescribeStackResources",
"cloudformation:GetTemplateSummary",
"cloudformation:ListStackResources",
"cloudformation:GetTemplate",
"cloudformation:ListChangeSets",
"cloudformation:GetStackPolicy"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"iam:CreateRole",
"iam:CreatePolicy",
"iam:AttachRolePolicy",
"iam:GetRole",
"iam:PassRole",
"iam:PutRolePolicy",
"lambda:CreateFunction",
"lambda:InvokeFunction",
"lambda:GetFunctionConfiguration",
"cloudformation:DescribeStackResource",
"cloudformation:DescribeStackResources"
],
"Resource": "*"
}
]
}
To see the associated Policy Permissions document, log into the AWS Console and open https://console.aws.amazon.com/iam/home?region=us-east-1#/policies/arn:aws:iam::aws:policy/AmazonVPCCrossAccountNetworkInterfaceOperations$jsonEditor.
Here's the summary description of that policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeRouteTables",
"ec2:CreateRoute",
"ec2:DeleteRoute",
"ec2:ReplaceRoute"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeNetworkInterfaces",
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:CreateNetworkInterfacePermission",
"ec2:DeleteNetworkInterfacePermission",
"ec2:DescribeNetworkInterfacePermissions",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:DescribeNetworkInterfaceAttribute",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeVpcs",
"ec2:DescribeSubnets"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:AssignPrivateIpAddresses",
"ec2:UnassignPrivateIpAddresses"
],
"Resource": [
"*"
]
}
]
}
Rename an SDDC
You can rename an existing SDDC.
SDDC names are limited to 128 characters. They are not required to be unique.
Procedure
2 On the card for the SDDC you want to rename, click Actions > Rename SDDC.
Delete an SDDC
Deleting an SDDC terminates all running workloads and destroys all SDDC data and configuration
settings including public IP addresses. Deletion of an SDDC cannot be undone.
Procedure
2 On the card for the SDDC you want to remove, click Actions > Delete SDDC.
- You will lose all data and configuration settings in this SDDC.
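If you automate SDDC lifecycle operations, deletion can also be requested through the VMware Cloud on AWS REST API. The sketch below is a minimal example using Python and a CSP API token; the organization ID, SDDC ID, and token are placeholders, and the call starts an asynchronous task just as DELETE SDDC in the VMC Console does.

# Sketch: delete an SDDC through the VMware Cloud on AWS REST API.
# ORG_ID, SDDC_ID, and CSP_TOKEN are placeholders; deletion cannot be undone.
import requests

CSP_TOKEN = "your-csp-api-token"
ORG_ID = "00000000-0000-0000-0000-000000000000"
SDDC_ID = "11111111-1111-1111-1111-111111111111"

# Exchange the long-lived CSP API token for a short-lived access token.
auth = requests.post(
    "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize",
    data={"refresh_token": CSP_TOKEN},
)
access_token = auth.json()["access_token"]

resp = requests.delete(
    f"https://vmc.vmware.com/vmc/api/orgs/{ORG_ID}/sddcs/{SDDC_ID}",
    headers={"csp-auth-token": access_token},
)
resp.raise_for_status()
print("Delete task started:", resp.json()["id"])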
An SDDC deployment group (SDDC Group) is a logical entity designed to simplify management
of your organization's VMware Cloud on AWS resources at scale. Collecting SDDCs into an SDDC
Group provides a number of benefits to an organization with multiple SDDCs whose workloads
need a high-bandwidth, low-latency connection to each other. All network traffic between group
members travels over a VMware Transit Connect network. Routing between compute networks
of all SDDCs in a group is managed automatically by VMware Transit Connect as subnets are
added and deleted. You control network traffic among group member workloads with compute
gateway firewall rules.
Any organization member who has a VMC service role of Administrator or Administrator (Delete
Restricted) can create or modify an SDDC Group.
Group Membership
SDDC groups are organization-level objects. An SDDC group cannot contain SDDCs from more than one organization. An SDDC group can include members from up to three AWS regions. An SDDC must meet several criteria to be eligible for group membership:
- It must be at SDDC version 1.11 or later. Members of a multi-region group must be at SDDC version 1.15 or later.
- Its management network CIDR block cannot overlap the management CIDR block of any other group member.
While you can create a group with a single member, most practical applications of SDDC Groups
require two or more members.
Note Hybrid Linked Mode over a VPN connection is incompatible with SDDC groups. If you
add an SDDC that you've configured to use Hybrid Linked Mode over a VPN connection, the
connection will fail and you won't be able to use Hybrid Linked Mode with that SDDC. Hybrid
Linked Mode over a DX connection is unaffected when an SDDC is added to a group.
Figure 1-1. VMware Transit Connect Connects SDDCs in the Group With Each Other
Members can be added to and removed from a group as needed. You cannot remove a group
until all members have been removed. Removing the group also destroys the group's VMware
Managed Transit Gateway.
Figure 1-2. Using VMware Transit Connect to Attach a VPC to an SDDC Group
Figure 1-3. An AWS Direct Connect Gateway Connects the SDDC Group to On-Premises SDDCs
To view the routes learned and advertised by all SDDCs in the group, click the Routing tab, then use the drop-down control to select either the External or the Members route table. External routes carry traffic originating from an external endpoint, like a VPC or DXGW, to an SDDC group member. Members routes carry traffic originating in a member SDDC; their destinations include other SDDC group members and external endpoints.
SDDCs in the group learn routes to the networks advertised by other SDDCs in the group and
those advertised over the group's DXGW. They also learn the CIDRs for any VPCs attached to
the group. Because AWS imposes a limit of 20 prefixes that can be advertised by a DXGW to an
external endpoint like an on-premises SDDC, the CIDR block prefixes of all SDDC group members
must fall within a range that can be summarized without exceeding that limit.
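You can estimate whether your group's compute CIDRs summarize cleanly with the Python ipaddress module; the member CIDRs in this sketch are hypothetical.

# Sketch: check whether SDDC group compute CIDRs collapse into 20 or fewer
# prefixes, the AWS limit on prefixes a DXGW advertises to an external endpoint.
import ipaddress

member_cidrs = ["10.10.0.0/16", "10.11.0.0/16", "10.12.0.0/16", "10.13.0.0/16"]
collapsed = list(ipaddress.collapse_addresses(ipaddress.ip_network(c) for c in member_cidrs))

print(collapsed)   # [IPv4Network('10.10.0.0/15'), IPv4Network('10.12.0.0/15')]
assert len(collapsed) <= 20, "Too many prefixes to advertise over the DXGW"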
- Traffic originating from member SDDCs can be routed to other member SDDCs as well as to VPCs and Direct Connect Gateways attached to the group in the same region as the originating SDDC.
- Traffic originating from VPCs or Direct Connect Gateways attached to the group can be routed only to SDDCs in the group that are in the same region as the originating SDDC.
- Traffic between VPCs or between a VPC and the Direct Connect Gateway is blocked.
Note When an SDDC becomes a member of an SDDC group, several aspects of existing SDDC networking change:
- Routes advertised by a route-based VPN are preferred over routes advertised by VMware Transit Connect or a DXGW. However, all outbound traffic from hosts to destinations outside the SDDC network is routed to the VTGW or private VIF regardless of other routing configurations in the SDDC. This includes vMotion and vSphere replication traffic. You must ensure that inbound traffic to ESXi hosts is also routed over the DXGW interface so that the inbound and outbound traffic paths are symmetrical.
- If the same route is advertised over the VTGW and DX, the VTGW path is preferred. This includes routes from a DXGW connected to the VTGW.
- The maximum MTU for intranet traffic among group members is limited to 8500 bytes. An MTU of up to 8900 bytes can still be used for traffic internal to the SDDC, or over DX. See Create a Private Virtual Interface for SDDC Management and Compute Network Traffic in the VMware Cloud on AWS Networking and Security guide.
Prerequisites
You must be logged into the VMC console as a user with a VMC service role of Administrator or
Administrator (Delete Restricted).
Procedure
3 On the SDDC Groups tab, click ACTIONS and select Create SDDC Group.
Give the group a Name and optional Description, then click NEXT. You can edit the group
later to change these values.
The grid displays a list of all SDDCs in your organization. To qualify for membership in the group, an SDDC must meet several criteria:
- It must be at SDDC version 1.11 or later. Members of a multi-region group must be at SDDC version 1.15 or later.
- Its management network CIDR block cannot overlap the management CIDR block of any other group member.
When you have finished selecting members, click NEXT. You can edit the group later to add
or remove members.
5 Acknowledge that you understand and take responsibility for the costs you incur when you
create an SDDC group, then click CREATE GROUP to create the SDDC Group and its VMware
Transit Connect network.
Charges begin when you click CREATE GROUP. You cannot pause or cancel the process
after it starts. Group members won't be able to use the group's VMware Transit Connect
network until deployment is complete. Deployment typically takes about fifteen minutes.
When deployment is complete, the group's Connectivity Status changes from PENDING to CONNECTED.
6 (Optional) To modify the group name and description or to add or remove group members,
click ACTIONS and select Edit Group.
You cannot edit the group while its Connectivity Status is PENDING.
What to do next
To view the routes learned and advertised by all SDDCs in the group, click the Routing tab and use the drop-down control to select either the External route table (traffic originating from external endpoints like VPCs or Direct Connect Gateways) or the Members route table (traffic originating in member SDDCs).
To enable network traffic between workloads in member SDDCs, you'll need to create a set of
compute gateway firewall rules in each member. See Add Compute Gateway Firewall Rules to
Enable SDDC Group Member Workload Connectivity for details. You'll need to do this for each
new member you add to the group.
Attaching a VPC to the SDDC group is a multi-step process that requires you to use both the
VMC Console and the AWS console. You use the VMC Console to make the VTGW (an AWS
resource managed by VMware) available for sharing. You then use the AWS console to accept
the shared resource and associate it with the VPCs you'd like to attach to the SDDC Group.
Procedure
1 On the Software-Defined Data Centers (SDDC) page of the VMC Console, click SDDC
Groups, then click the Name of the group to which you want to attach the VPC.
2 On the External VPC tab for the group, click ADD ACCOUNT and specify the AWS account
that owns the VPC you want to attach to the group.
This enables AWS resource sharing in that account for the VTGW.
3 In the AWS console, open Resource Access Manager > Shared with me to accept the shared
VTGW resource.
The resource Name has the form VMC-Group-UUID and a Status of Pending. Click the resource
name to open the resource Summary card, then click Accept resource share and confirm
acceptance.
4 In the VMC Console, return to the VPC Connectivity tab for the group and wait for the Status of the resource share you accepted in Step 3 to change from ASSOCIATING to ASSOCIATED.
VPC resource association can take up to ten minutes. Once the VPC association is complete,
you can attach the VTGW.
5 Return to the AWS console Resource Access Manager to find the resource ID of the shared
VTGW resource.
It will be listed under Shared with me: Shared resources with a Resource ID of the form
TGW-UUID and a Resource type of ec2:TransitGateway.
6 In the AWS console, navigate to Transit Gateway Attachments and click Create Transit Gateway Attachment. Select the Transit Gateway ID identified in Step 5 and specify an Attachment type of VPC, then select the VPC ID you would like to connect to the SDDC group. Select a Subnet ID in each Availability Zone (AZ) that requires connectivity to the group. You can select only one subnet per AZ, but SDDC group members can communicate with all VPC subnets in that AZ.
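Steps 3 through 6 can also be performed with the AWS SDK instead of the AWS console. The boto3 sketch below accepts the pending resource share and creates the VPC attachment to the shared VTGW; the region, VPC, subnet, and transit gateway IDs are placeholders you would look up as described above.

# Sketch: accept the shared VTGW and attach a VPC to it using boto3.
# All resource IDs are hypothetical placeholders.
import boto3

ram = boto3.client("ram", region_name="us-west-2")
ec2 = boto3.client("ec2", region_name="us-west-2")

# Accept the pending VMC-Group-... resource share invitation (Step 3).
for invite in ram.get_resource_share_invitations()["resourceShareInvitations"]:
    if invite["status"] == "PENDING":
        ram.accept_resource_share_invitation(
            resourceShareInvitationArn=invite["resourceShareInvitationArn"]
        )

# Create the Transit Gateway VPC attachment (Step 6), one subnet per AZ.
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",          # the shared VTGW from Step 5
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # one subnet per AZ that needs access
)["TransitGatewayVpcAttachment"]
print(attachment["TransitGatewayAttachmentId"], attachment["State"])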
7 In the VMC Console, return to the External VPC tab for the group and ACCEPT the shared
VPC attachment.
When the VPC status changes to PENDING_ACCEPTANCE, click ACCEPT to accept it. The
status changes to AVAILABLE after the acceptance process completes. Acceptance can take
up to ten minutes.
8 In the AWS console, identify the route tables associated with any subnets in the VPC that are connected to the shared VTGW and need to communicate with the SDDC Group. On the Routes tab of the route table, click Edit Routes and add the CIDRs in the SDDC group as destinations, with the target set to the VTGW ID you identified in Step 5. The list of CIDRs for the SDDC group can be found in the VMC Console for the SDDC group on the Routing tab, by selecting External in the Route Table drop-down.
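The equivalent route-table update can be scripted with boto3 as well; the route table ID, VTGW ID, and CIDRs below are placeholders.

# Sketch: add routes toward the SDDC group through the shared VTGW for each
# VPC route table that needs connectivity. IDs and CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

sddc_group_cidrs = ["10.2.0.0/16", "10.10.0.0/16"]     # from the group's Routing tab
route_table_ids = ["rtb-0123456789abcdef0"]            # route tables of the attached subnets

for rtb in route_table_ids:
    for cidr in sddc_group_cidrs:
        ec2.create_route(
            RouteTableId=rtb,
            DestinationCidrBlock=cidr,
            TransitGatewayId="tgw-0123456789abcdef0",  # the VTGW from Step 5
        )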
When you create an SDDC group, the system creates routes for the VPC's primary CIDR and
any secondary CIDRs. If you need to have destinations beyond the VPC routed through it
(something you might need for a Security VPC or Transit VPC), click ADD ROUTES on the
VPC row to open the Edit Routes page, where you can define CIDR blocks to route to the
attached VPC. See Manage Routing to an External VPC.
What to do next
- In the AWS console, create network ACLs to manage traffic between the VPCs you've added to the group and other group members. If you want to access an AWS service running in the VPC, you might need to modify the AWS security policy for the service. See Access an S3 Bucket Using an S3 Endpoint for an example of AWS security policy configuration for the S3 service.
Although VMware Transit Connect handles all compute and management network traffic among
SDDC group members, it does not currently provide a way for you to configure routes that
include an external VPC (one that you own and manage). Network topologies that require this
sort of connectivity include creation of a "security VPC" through which all traffic between the
SDDC group and the Internet is routed for inspection, and connectivity to SDDC Groups in other
organizations in the same region. To enable these kinds of connections, associate an external
VPC with the group's VTGW and modify that VPC's route table to establish the static routes you
need.
Procedure
1 On the Software-Defined Data Centers (SDDC) page of the VMC Console, click SDDC
Groups, then click the Name of the group that has the VPC attached.
2 On the External VPC tab for the group, select the AWS Account ID that owns the VPC and
expand the row.
If no routes have been specified for this VPC, click ADD ROUTES in the Routes column to open the Edit Routes page and add one or more routes that use this VPC as a Target. Otherwise, the Routes column shows the first route and the number of additional routes. Click the pencil icon to open the Edit Routes page so you can edit this list. Each prefix defines a static route from the group's VTGW to the attached VPC and appears as a Target on the group's Routing tab. You can specify up to 100 routes for each attached VPC.
Prerequisites
You must be logged into the VMC console as a user with a VMC service role of Administrator or
Administrator (Delete Restricted).
Procedure
3 Click VIEW DETAILS on the card for a group to open the group Summary screen.
4 Click the Support tab to view Support Information for the group.
Removing a member from a group disconnects it from the group's VTGW but makes no other
changes in group properties. Removing an SDDC group destroys the group's VMware Transit
Connect network and any routing information associated with it, along with its VTGW.
Prerequisites
You must be logged into the VMC console as a user with a VMC service role of Administrator or
Administrator (Delete Restricted).
Procedure
2 On the Software-Defined Data Centers (SDDC) page, click SDDC Groups and click the group
you want to remove.
3 Click the Name checkbox to select all SDDCs in the group, then click REMOVE SDDCS.
Confirm that you understand the implications of removing the SDDCs, then click CONTINUE
to proceed with the removal. Removal can take several minutes per SDDC.
4 After all the SDDCs have been removed, click ACTIONS > Delete Group to remove the group
and its associated AWS resources.
Confirm that you understand the effects of removing the group, then click DELETE GROUP to
proceed with the removal.
Because all members of an SDDC Group are owned by the same VMware Cloud on AWS
organization, network traffic among members of the group can be safely treated as East-West
traffic, rather than North-South traffic that might have an external source or destination. But since
an SDDC compute gateway's default firewall rules reject external traffic, you'll need to create
firewall rules allowing that traffic through the compute gateway of each SDDC in the Group.
(SDDC Groups do not currently need to route network traffic through members' management
gateways.)
VMware Cloud on AWS defines a set of inventory groups intended for use in Compute Gateway
firewall rules that provide high-level control over traffic among group members. These groups
contain the prefixes (CIDR blocks) for routes learned over VMware Transit Connect and any AWS
Transit Gateways owned by the SDDC's AWS account owner.
Prefixes in each of these groups are automatically added, removed, and updated as group
membership changes and new routes are learned.
For more information, see Add or Modify Compute Gateway Firewall Rules in the VMware Cloud
on AWS Networking and Security documentation.
Procedure
2 Define inventory groups as needed to provide sources and destinations for workload traffic.
The system-defined inventory groups are useful for creating high-level connectivity among group members and attached VPCs. If you need to create finer-grained firewall rules that apply to individual workload segments in member SDDCs, you'll need to create inventory groups that define those segments, as shown in the example below.
3 On the Gateway Firewall card, click Compute Gateway, then click ADD RULE.
The system-defined inventory groups, along with any compute groups you defined, are available as choices in the Sources and Destinations drop-downs. To enable unrestricted group connectivity, you could add a rule like this one, which allows inbound traffic to this SDDC from other group members.
If you have created inventory groups with the CIDR blocks of your local workload segments, you can use them to create rules at a higher precedence that apply finer-grained controls over this traffic.
On the Groups card, click COMPUTE GROUPS, then click ADD GROUP and create three groups. You can use any names you want for the groups. The ones we show here are just examples.
- A group named Local Workloads that includes segment prefixes for the SDDC's own workload segments.
- A group named Peer Workloads that includes segment prefixes for workload segments of other SDDCs in the group.
- A group named Peer SDDC vCenters that includes the private IP address of the vCenter in each SDDC in the group.
For each group, click Set Members to open the Select Members tool. In this tool, you can click ADD CRITERIA and enter the IP Addresses or MAC Addresses of group members. You can also click ACTIONS > Import to import these values from a file.
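If you prefer to create these inventory groups through the NSX-T Policy API rather than the VMC Console, the sketch below shows one possible approach. It assumes you can reach the local NSX Manager at its private IP and authenticate with the Default NSX Manager Credentials; the IP address, credentials, group ID, and CIDR are examples only.

# Sketch: create a compute inventory group via the NSX-T Policy API.
# NSX_MANAGER_IP, the credentials, and the CIDR below are placeholder assumptions.
import requests

NSX_MANAGER_IP = "10.2.192.10"
AUTH = ("admin", "nsx-admin-password")   # Default NSX Manager Credentials from the VMC Console

group = {
    "display_name": "Local Workloads",
    "expression": [
        {"resource_type": "IPAddressExpression", "ip_addresses": ["192.168.100.0/24"]}
    ],
}

resp = requests.patch(
    f"https://{NSX_MANAGER_IP}/policy/api/v1/infra/domains/cgw/groups/local-workloads",
    json=group,
    auth=AUTH,
    verify=False,   # the local NSX Manager may present a self-signed certificate
)
resp.raise_for_status()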
As shown in Step 3, open the Gateway Firewall card, click Compute Gateway, then click ADD
RULE to create new rules that use the inventory groups you created for their Sources and
Destinations. You can use any names you want for the rules. The ones we show here are just
examples.
- Local workload to peer workload: Sources Local Workloads, Destinations Peer Workloads. As needed for outbound traffic from local workloads to workloads in other group members.
- Peer workload to local workload: Sources Peer Workloads, Destinations Local Workloads. As needed for inbound traffic to local workloads from workloads in other group members.
All rules governing SDDC group member traffic through the compute gateway firewall should
be applied to All Uplinks and have an action of Allow.
VMware Transit Connect handles all compute and management network traffic among SDDC
group members. Many SDDC group members will also need to make network connections to
your on-premises data center. To enable these connections, associate an AWS Direct Connect
Gateway with the group's VMware Managed Transit Gateway.
Attaching a Direct Connect Gateway to the SDDC group is a multi-step process that requires
you to use both the VMC Console and the AWS console. You use the VMC Console to make the
VTGW (an AWS resource) available for sharing. You then use the AWS console to accept the
shared resource and associate it with the Direct Connect Gateway you'd like to attach to the
SDDC Group. You'll also use the AWS console if you need to modify the list of allowed prefixes
for an existing Direct Connect Gateway.
Prerequisites
You must create an AWS Direct Connect Gateway. See Creating a Direct Connect gateway in the
AWS documentation.
Procedure
1 On the Software-Defined Data Centers (SDDC) page of the VMC Console, click SDDC
Groups, then click the Name of the group to which you want to attach the Direct Connect
Gateway.
2 On the Direct Connect tab for the group, click ADD ACCOUNT and specify the AWS account
that owns the Direct Connect Gateway you want to add to the group.
On the Add Direct Connect Gateway page, fill in the following values:
- Direct Connect Gateway Attachment ID: The ID value, displayed on the AWS console Direct Connect Gateways page for the gateway object.
- Location: Specifies additional regional VTGW attachments for this gateway. A single Direct Connect gateway attachment in any region can handle traffic among all members of a multi-region group, but transitive routing is not supported. If a group has members in two different regions but only a single DXGW connection, only traffic from the SDDC in the region connected to the DXGW is routed to the on-premises data center. Use the VTGW Location control to associate the DXGW with a VTGW in another region.
- Allowed Prefixes: A comma-separated list of compute network CIDR blocks of SDDC group members for the specified VTGW Location.
3 In the AWS console, open the Direct Connect Gateways page for the gateway object and
accept the association proposal.
- In the AWS console, the gateway will have a State of associated on the AWS Direct Connect Gateways page for the gateway object.
- In the VMC Console, the gateway will have a State of Connected in the Direct Connect tab for the group.
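Acceptance of the association proposal in step 3 can also be scripted. This boto3 sketch assumes the proposal has already been created by VMware Transit Connect; the Direct Connect Gateway ID is a placeholder.

# Sketch: accept a pending VTGW association proposal on your Direct Connect Gateway.
# DXGW_ID is a hypothetical placeholder.
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")
DXGW_ID = "12345678-90ab-cdef-1234-567890abcdef"

proposals = dx.describe_direct_connect_gateway_association_proposals(
    directConnectGatewayId=DXGW_ID
)["directConnectGatewayAssociationProposals"]

for p in proposals:
    if p["proposalState"] == "requested":
        dx.accept_direct_connect_gateway_association_proposal(
            directConnectGatewayId=DXGW_ID,
            proposalId=p["proposalId"],
            associatedGatewayOwnerAccount=p["associatedGateway"]["ownerAccount"],
        )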
4 Attach an AWS Transit VIF between the Direct Connect Gateway and your Direct Connect
Location (Direct Connect provider).
See Transit gateway attachments to a Direct Connect gateway in the AWS VPC
documentation.
In a multi-region SDDC group, you can attach a group VTGW in any region to a Direct
Connect Gateway. On the Direct Connect Gateway tab for the group, click ADD LOCATION
to open the Add Direct Connect Gateway Location card, then specify an AWS region to
attach to the gateway and one or more Allowed Prefixes.
What to do next
Create any firewall rules needed to allow traffic between the Direct Connect Gateway and the
on-premises SDDC.
When you enable vCenter linking in an SDDC group, a cloud administrator can log in as
cloudadmin@vmc.local and use the vSphere Client to manage all the vCenter Server systems in
the group. If the cloudadmin@vmc.local account configures these systems to use single sign-on,
then users with accounts in that single sign-on domain can access all the linked systems in the
group.
After vCenter linking has been enabled in an SDDC group, the vCenter Server systems in SDDCs
added to the group are linked automatically, and vCenter Server systems in SDDCs that are
removed from the group are unlinked automatically.
Prerequisites
Networking
The required L3 networking for this feature is offered by VMware Transit Connect which is
already configured as part of the creation of the SDDC Group. Each linked vCenter Server
in the group must be able to reach the other linked vCenter Server instances at a private IP
address using a route that goes through the group's VMware Transit Connect. Other routing
configurations are not supported.
Migration with vMotion of a VM across the vCenter Server instances in a linked SDDC group
does not work because VMware Transit Connect only creates L3 connectivity between the
group members.
SDDC Version
Service Role
As noted in Creating and Managing SDDC Deployment Groups with VMware Transit
Connect™, use of Hybrid Linked Mode over a VPN connection is not supported when the
SDDC is a member of an SDDC group. You can configure Hybrid Linked Mode with the Cloud
Gateway Appliance over a Direct Connect Gateway (DXGW) connection to an SDDC group
member and use it to manage that SDDC's vCenter Server even if it is linked with other
vCenter Server systems in the group.
Procedure
This page lists all the SDDC groups in your organization. To create an SDDC group, see
Create or Modify an SDDC Group.
3 On the SDDC Groups page, choose an SDDC group card, click VIEW DETAILS, and open the
vCenter Linking tab.
This page presents a list of all the SDDCs in the group, their versions, and vCenter Server
linking status.
4 To link all the vCenter Server systems in the list, click LINK ALL VCENTERS.
This action links all the vCenter Server systems that have a status of Unlinked. Linking vCenter Server systems in an SDDC group is something you do only once. It establishes a group property ensuring that vCenter Server systems in the group are always linked, regardless of the set of member SDDCs, until you deliberately unlink them. After you LINK ALL VCENTERS in a group, vCenter linking is automatic whenever an SDDC is added to the group. Linked vCenter Server systems are unlinked automatically when their SDDC is removed from the group.
5 (Optional) Configure a shared identity source for the linked vCenter Server systems.
If you configure the linked vCenter Server systems to use the same identity source, user
accounts defined in that identity source can access all linked vCenter Server systems with the
privileges defined for their account in the identity source. See vSphere Authentication with
vCenter Single Sign-On in the VMware vSphere Documentation for configuration details. If
you don't take this step, cloudadmin@vmc.local can authenticate to all linked vCenter Server
systems using the credentials listed on the Settings tab of the VMC Console.
6 To unlink all the vCenter Server systems in the list, click UNLINK ALL VCENTERS.
This action unlinks all the vCenter Server systems that have a status of Linked. Like linking
vCenter Server systems in an SDDC group, unlinking is something you do only once. It
establishes a group property ensuring that vCenter Server systems in the group are not
linked until you deliberately link them. After you UNLINK ALL VCENTERS in a group, vCenter
Server systems remain unlinked when an SDDC is added to the group.
Compliance hardening of a VMware Cloud on AWS SDDC helps you provide a runtime
environment suitable for compliance-audited workloads. VMware Cloud on AWS SDDC
compliance hardening uses a shared accountability model that distributes security and
compliance responsibilities among AWS, VMware, and the customer. Read the Technical White
Paper Migrating PCI Workloads to VMware Cloud on AWS for supplemental guidance covering
the responsibilities and ownership of compliance hardening functions in VMware Cloud on AWS.
Note VMware Cloud on AWS does not enable compliance hardening by default. Contact your
account team for more information.
Compliance hardening can be configured in new SDDCs at version 1.14 and later created in an
AWS region that provides the appropriate support, as shown in Available AWS Regions.
Because certain SDDC features and add-on services are not compatible with compliance
hardening requirements, you must disable them before migrating PCI workloads to your SDDC.
Access controls on the VMC Console Networking & Security tab are not appropriate for a
compliance-hardened SDDC. Any access to an SDDC using the Networking & Security tab
renders the SDDC non-compliant. To maintain compliance, you must manage your SDDC
networks using only the local NSX Manager, which has an authentication framework that meets
compliance hardening requirements. Access to the Networking & Security tab must be disabled
before you begin a compliance audit, and must remain disabled for the duration of the audited
period.
Before you disable access to the Networking & Security tab, you'll use it to create a VPN
connection to your on-premises data center and a management gateway firewall rule that allows
you to access the local NSX Manager over that VPN. After you verify that you can access NSX
Manager, you can proceed to prepare the SDDC for compliance hardening by disabling access
to the Networking & Security tab. If you need to re-enable access to the Networking & Security
tab, contact VMware Support.
Prerequisites
- You must be logged into the VMC console as a user with a VMC service role of Administrator or Administrator (Delete Restricted).
- You must have a VPN connection to the SDDC. See Configure a VPN Connection Between Your SDDC and On-Premises Data Center in the VMware Cloud on AWS Networking and Security guide. After you have disabled Networking & Security tab access, a connection to the local NSX Manager over a VPN is the only way to manage your SDDC network. To ensure that you can reach the local NSX Manager in the event of a network failure, we recommend configuring a redundant connection, such as AWS Direct Connect with a route-based VPN as the backup, as described in Configure Direct Connect to a Private Virtual Interface for SDDC Management and Compute Network Traffic in the VMware Cloud on AWS Networking and Security guide.
- Compliance hardening must be enabled in the SDDC. VMware Cloud on AWS does not enable compliance hardening by default. Contact your account team for more information. Compliance hardening can be configured in SDDCs at version 1.14 and later created in an AWS region that provides the appropriate support, as shown in Available AWS Regions.
Procedure
2 Create a Management Gateway firewall rule that allows you to open an HTTPS connection to
the local NSX Manager for this SDDC.
See Add or Modify Management Gateway Firewall Rules in the VMware Cloud on AWS Networking and Security guide for more information about how to create a Management Gateway firewall rule. The rule must have the following parameters:
Action: Allow
You cannot gain access to the local NSX Manager until you have disabled access to the Networking & Security tab, so it's important to verify that your firewall rule works before you proceed with the next step. To test the rule, verify that you can view the local NSX Manager's index.html page. Use a Web browser to open a connection to https://NSX-Manager-IP/nsx/index.html, where NSX-Manager-IP is the Private IP shown under Access NSX Manager via internal network in NSX Manager Information on the Settings tab of your SDDC. If your firewall rule is correct, this request returns the local NSX Manager's index.html page, which displays several JSON key/value pairs, including error_code: 403. You cannot take any actions on this page.
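If you would rather script the check than use a browser, a minimal Python sketch (with a placeholder private IP) might look like the following; it only verifies reachability, since authentication is not expected to succeed at this point.

# Sketch: confirm the management gateway firewall rule allows HTTPS to the local
# NSX Manager. NSX_MANAGER_IP is a placeholder for your SDDC's private NSX IP.
import requests

NSX_MANAGER_IP = "10.2.192.10"
resp = requests.get(f"https://{NSX_MANAGER_IP}/nsx/index.html", verify=False, timeout=10)

# A reachable NSX Manager returns the index page; its JSON content includes
# error_code 403 because no credentials have been supplied.
print(resp.status_code)
print(resp.text[:200])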
4 After you have verified that your firewall rule is correct, you can proceed to disable access to
the Networking & Security tab.
b On the Compliance Hardening section of the Settings tab, expand the Networking &
Security tab access line to display the Disable Networking & Security tab access card.
After you have verified that you can access the local NSX Manager’s index.html page,
select the checkbox to confirm that you have created and tested the necessary firewall
rule and are ready to proceed. Select the checkbox to confirm that you understand that
you'll need to file a VMware support request if you want to re-enable access to the
Networking & Security tab for this SDDC.
Log in to the VMC Console and open the Networking & Security tab. Click the OPEN NSX
MANAGER button on this tab and log in with the Default NSX Manager Credentials. See NSX
Manager in the NSX-T Data Center Administration Guide for information about how to use
NSX Manager.
Note If you want to view (but not modify) the networking configuration for this SDDC, you
can log in with the credentials of the NSX Manager Audit User Account, which are available
under NSX Manager Information on the Settings tab.
What to do next
After you have disabled Networking & Security tab access, you must use the local NSX Manager
to manage your SDDC network. You can navigate the NSX Manager UI in much the same way
as you navigate the Networking & Security tab. See NSX Manager in the NSX-T Data Center
Administration Guide for information about how to use NSX Manager.
Important To conform with PCI compliance requirement 8.2.4 (Change user passwords/
passphrases at least once every 90 days), you must use the NSX manager REST API, as
documented in VMware Knowledge Base article 83551.
If you need to re-enable access to the Networking & Security tab, contact VMware Support.
Because certain SDDC add-on services are not compatible with compliance hardening, you must
disable them before migrating compliance-audited workloads to your SDDC. If you disable SDDC
add-on services, you'll need to contact VMware Support to have them re-enabled.
Note Add-on services are not enabled in a new SDDC. This procedure is only required when
reconfiguring an existing SDDC to disable its add-on services.
Prerequisites
You must be logged into the VMC console as a user with a VMC service role of Administrator or
Administrator (Delete Restricted).
Procedure
The Compliance Hardening area of this tab displays the status of the VMware HCX and
VMware Site Recovery add-ons.
3 Disable the VMware HCX Add-On. (You can install and enable HCX in a new SDDC and use it to migrate compliance-audited workloads, but after that migration is complete, you must disable it.)
On the Compliance Hardening section of the Settings tab, expand the HCX Add-on control
to display the Disable VMware HCX add-on card.
a Uninstall HCX from the SDDC vCenter.
If you have created any custom firewall rules that reference HCX inventory groups,
remove them before you begin to uninstall HCX, then follow the procedures documented
in Uninstalling HCX in VMware Cloud on AWS Deployments to clean up SDDC resources
created or used by VMware HCX. After HCX has been uninstalled, select the checkbox to
confirm that the clean-up is complete and you are ready to proceed.
- Select the checkbox to confirm that you have uninstalled HCX and cleaned up any remaining HCX resources (see 3.a).
- Select the checkbox to confirm that you understand that you'll need to contact VMware support if you want to re-enable the VMware HCX add-on for this SDDC.
On the Compliance Hardening section of the Settings tab, expand the SRM Add-on control
to display the Disable VMware Site Recovery add-on card.
a Uninstall Site Recovery Manager from the on-premises site.
See Uninstall Site Recovery Manager on the on-premises site in the VMware Site
Recovery Product Documentation.
- Select the checkbox to confirm that you have uninstalled Site Recovery Manager and cleaned up any remaining Site Recovery Manager resources (see 4.a).
- Select the checkbox to confirm that you understand that you'll need to contact VMware support if you want to re-enable the Site Recovery Manager add-on for this SDDC.
To activate Tanzu Kubernetes Grid, you need to provide a range of network addresses for
the Tanzu workload control plane. In multi-cluster SDDCs, activation (including workload control
plane configuration) is per-cluster.
Note VMware Cloud on AWS does not enable Tanzu Kubernetes Grid by default. Contact your
account team for more information.
For information about Tanzu administration in VMware Cloud on AWS, you can refer to the
VMware Tanzu Documentation, but you'll need to keep a few high-level differences in mind when
reading those topics:
- VMware Cloud on AWS users don't have physical access to ESXi host hardware and cannot log in to the ESXi host operating system. Procedures that require this kind of access are performed by VMware staff.
- Global Permissions are not replicated from your on-premises vCenter Server to the vCenter Server in your SDDC. Global permissions do not apply to objects that VMware manages for you, like SDDC hosts and datastores.
- In VMware Cloud on AWS, the Tanzu workload control plane can be activated only through the VMC Console.
In addition to the high-level differences we've noted, many topics in the VMware Tanzu
Documentation are written specifically for on-premises users, and don't include some of the
information you need when using Tanzu Kubernetes Grid in VMware Cloud on AWS.
Table 1-3. Topic Content Differences Between On-Premises and SDDC Tanzu
- Creating and Managing Content Libraries for Tanzu Kubernetes releases; Migrate Tanzu Kubernetes Clusters to a New Content Library: Tanzu Kubernetes Grid for VMware Cloud on AWS is pre-provisioned with a VMC-specific content library that you cannot modify.
- vSphere with Tanzu User Roles and Workflows: The vCenter Server in your SDDC includes a predefined CloudAdmin role that is not present in your on-premises vCenter. This role has privileges required to create and manage workloads on your SDDC, but does not allow access to SDDC management components that are supported and managed by VMware, such as hosts, clusters, and management virtual machines.
- Deploying Workloads to vSphere Pods: Tanzu Kubernetes Grid for VMware Cloud on AWS does not support vSphere Pods.
- Configure a vSphere Namespace for Tanzu Kubernetes releases; Create and Configure a vSphere Namespace: vSphere namespaces for Kubernetes releases are configured automatically during Tanzu Kubernetes Grid activation.
- Workflow for Provisioning Tanzu Kubernetes Clusters: Step 10 of this procedure, "Monitor the deployment of cluster nodes using the vSphere Client", does not apply to Tanzu Kubernetes Grid for VMware Cloud on AWS.
- Virtual Machine Classes for Tanzu Kubernetes Clusters: In Tanzu Kubernetes Grid for VMware Cloud on AWS, the VM Service allows probe definitions only for port 6443.
For more about SDDC network architecture, see NSX-T Networking Concepts in the VMware
Cloud on AWS Networking and Security guide, and watch the VMware Tanzu Kubernetes Grid
Service CIDR Descriptions video.
In an SDDC that uses Direct Connect, ingress and egress CIDRs are advertised to the DX
connection. In an SDDC that is a member of an SDDC group, these CIDRs are advertised to
the VTGW.
The activation workflow begins on the SDDC Summary tab for an SDDC that has only a single eligible cluster. In an SDDC that has more than one eligible cluster, you activate Tanzu Kubernetes Grid from each cluster card in the VMC Console.
Prerequisites
You must be logged into the VMC console as a user with a VMC service role of Administrator or
Administrator (Delete Restricted).
To qualify for Tanzu Kubernetes Grid activation, an SDDC must meet these criteria:
n It must have at least one conventional cluster with at least three hosts and no deployed
customer workloads. For a large SDDC, a minimum of four hosts is required. Stretched
clusters are not supported.
Procedure
2 On the Software-Defined Data Centers (SDDC) page, click SDDCs and select an SDDC where
you want to activate Tanzu Kubernetes Grid.
If the SDDC has more than one eligible cluster, you activate Tanzu Kubernetes Grid from any
of the cluster cards in the VMC Console. Otherwise, activation begins on the Capacity and
Usage view of the SDDC Summary.
3 On the cluster card or the Capacity and Usage view, click ACTIONS and select Activate
Tanzu Kubernetes Grid.
VMware Cloud on AWS checks the cluster to be sure that it meets the requirements for activating Tanzu Kubernetes Grid.
CIDR blocks of size 16, 20, 23, or 26 are supported, and must be in one of the "private
address space" blocks defined by RFC 1918 (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16).
For each CIDR block, choose a range of IP addresses that does not overlap the workload
network address range of other Tanzu clusters in this SDDC, the SDDC Management or
Compute networks, or your on-premises networks. For a complete list of IPv4 addresses
reserved by VMware Cloud on AWS, see Reserved Network Addresses in the VMware
Cloud on AWS Networking and Security guide. These CIDR blocks cannot be changed
after you activate Tanzu Kubernetes Grid.
Service CIDR
Addresses allocated to Tanzu supervisor services. Cannot overlap with the Namespace
Network CIDR, Ingress CIDR, or Egress CIDR.
Namespace Network CIDR
Addresses allocated to namespace segments. Cannot overlap with the SDDC Management or
Compute subnets. This CIDR should have a span of at least /23 to provide adequate capacity
for your Tanzu Kubernetes Grid workloads. Consider a span of /16 or /12.
Ingress CIDR
Inbound traffic through load-balancers to containers. Cannot overlap with the SDDC
Management or Compute subnets. The system creates a NAT address from this pool for
each Tanzu Kubernetes Grid namespace, so a span of /24 or /26 should be adequate.
Egress CIDR
SNATted outbound traffic from containers and guest clusters. Cannot overlap with the SDDC
Management or Compute subnets. The system allocates an SNAT IP address from this pool
for each Tanzu Kubernetes Grid namespace, so a span of /24 or /26 should be adequate.
Note You can't change these CIDR blocks after activation is complete for this cluster,
so the system must validate the ranges you specify before activation can proceed.
Validation can take up to 15 seconds.
b Click VALIDATE AND PROCEED to validate that the CIDR blocks you have specified are
valid for this SDDC.
Upon successful validation, the system displays the validated CIDR blocks and details of the
Tanzu cluster that will be created. Click Activate Tanzu Kubernetes Grid to create this cluster
and allocate the CIDR blocks. The SDDC Summary page shows that Tanzu Kubernetes Grid
is Activating. The system displays a status message showing the cluster name and the time
that activation started. When activation completes, the SDDC Summary page shows that
Tanzu Kubernetes Grid is Activated.
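If you want to sanity-check candidate CIDR blocks before you run validation in the VMC Console, the checks described above are easy to script locally. The following Python sketch is illustrative only: the check_cidr function and the example ranges are our own assumptions, not part of the service. It tests a block against the supported prefix lengths, the RFC 1918 private ranges, and overlap with ranges you already use.

import ipaddress

# Prefix lengths accepted for Tanzu activation CIDR blocks, per the guidance above.
ALLOWED_PREFIXES = {16, 20, 23, 26}

# RFC 1918 private address space.
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def check_cidr(candidate, in_use):
    """Return a list of problems found with a candidate CIDR block (empty list means none found)."""
    net = ipaddress.ip_network(candidate)
    problems = []
    if net.prefixlen not in ALLOWED_PREFIXES:
        problems.append("prefix /%d is not one of %s" % (net.prefixlen, sorted(ALLOWED_PREFIXES)))
    if not any(net.subnet_of(block) for block in PRIVATE_BLOCKS):
        problems.append("not within RFC 1918 private address space")
    for other in in_use:
        if net.overlaps(ipaddress.ip_network(other)):
            problems.append("overlaps " + other)
    return problems

# Example: management, compute, and on-premises ranges already in use (made-up values).
print(check_cidr("10.96.0.0/23", ["10.2.0.0/16", "192.168.1.0/24"]))   # [] -> no problems found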
What to do next
After activation completes, open the Workload Management page of the vSphere Web Client.
The new Tanzu Kubernetes Grid cluster is listed in the Clusters tab. The Namespaces tab lists the
next steps you can take.
When using Tanzu Kubernetes Grid in a VMware Cloud on AWS SDDC, procedures like the one
in SSH to Tanzu Kubernetes Cluster Nodes as the System User Using a Password require you to
add a VMware Cloud on AWS distributed firewall rule to allow access to a namespace network. In
this topic, we describe the firewall rule you'll need if you want to enable SSH access to a cluster
in your SDDC from a jump host in a different cluster.
Procedure
See Create a Linux Jump Host VM. The IP address is shown in the vCenter UI under the Summary tab for the jump host VM.
This is shown as Network under Related Objects on the Summary tab for any of the cluster
nodes.
3 Create a Distributed Firewall rule allowing access from the jump host to nodes in the
namespace network.
Follow the procedure in Add or Modify Distributed Firewall Rules in the VMware Cloud on
AWS Networking and Security guide. Create a new policy if you need one, then create a rule
that uses the following values:
Services: Any
Action: Allow
Prerequisites
You must be logged into the VMC console as a user with a VMC service role of Administrator or
Administrator (Delete Restricted).
Procedure
2 On the Software-Defined Data Centers (SDDC) page, click SDDCs and select the SDDC where you want to deactivate Tanzu Kubernetes Grid.
This includes the Distributed Firewall rule described in Enable Network access to a Tanzu Kubernetes Cluster.
n All persistent VMDKs associated with supervisor namespaces or TKG clusters are
destroyed.
Select all the checkboxes to confirm that you understand the consequences of this action,
then click Deactivate Tanzu Kubernetes Grid.
Upgrades to even-numbered releases of the SDDC software, such as VMC Version 1.10 or 1.12,
will be provided to all SDDCs and are mandatory. Odd-numbered releases, such as 1.11 or 1.13,
are available only for new SDDC deployments. These optional releases are not available for
upgrades.
When an SDDC update is upcoming, VMware sends a notification email to you. Typically, this
occurs 7 days before a regular update and 1-2 days before an emergency update. Delays to
upgrades could result in your SDDC running an unsupported software version. See Supported
SDDC Versions.
You also receive notifications by email when each phase of the update process starts, completes,
is rescheduled, or is canceled. To ensure that you receive these notifications, ensure that
donotreply@vmware.com is added to your email allow list.
The impact of the upgrade on different elements of the SDDC infrastructure is shown in the
figure below.
n Do not perform hot or cold workload migrations. Migrations fail if they are started or in
progress during maintenance.
n Do not perform workload provisioning (New/Clone VM). Provisioning operations fail if they
are started or in progress during maintenance.
n Do not make changes to Storage-based Policy Management settings for workload VMs.
n Ensure that there is enough storage capacity (> 30% slack space) in each cluster.
Phase 1: Control Plane Updates. These are the updates to vCenter Server and NSX Edge. A
backup of the management appliances is taken during this phase. If a problem occurs, there is
a restore point for the SDDC. A management gateway firewall rule is added during this phase.
There is an NSX Edge failover during this upgrade phase, resulting in a brief downtime. You do
not have access to NSX Manager and vCenter Server during this phase. During this time, your
workloads and other resources function as usual subject to the constraints outlined above.
Certificates for vCenter Server and NSX Edge are replaced during Phase 1 if the certificates were
last replaced more than 14 days ago. If you are using other software that relies on the vCenter
Server certificate, such as Horizon Enterprise, vRealize Operations, vRealize Automation, VMware
Site Recovery, and many third-party management applications, you must re-accept the vCenter
Server and NSX certificates in that software after Phase 1 of the upgrade.
Note VMware Site Recovery certificates in the SRM and VR appliances are also replaced if
the vCenter Server certificate was replaced. VMware HCX has its own certificates that are not
replaced as part of the SDDC Upgrade process.
More information on updating certificates for specific products can be found below:
n AppVolumes: https://kb.vmware.com/s/article/2150281
n Horizon Enterprise: https://kb.vmware.com/s/article/74599
When Phase 1 is complete, you receive a notification. After Phase 1 is complete, there is a waiting
period until Phase 2 starts. Phase 2 is initiated at a designated start time.
Phase 2: Host Updates. These are the updates to the ESXi hosts and host networking software
in the SDDC. An additional host is temporarily added to your SDDC to provide enough capacity
for the update. You are not billed for these host additions. vMotion and DRS activities occur
to facilitate the update. The upgrade process has been improved so that only one NSX Edge
migration occurs during the update. During this time, your workloads and other resources
function as usual subject to the constraints outlined above. When Phase 2 is complete, the hosts
that were temporarily added are removed from each cluster in the SDDC.
When Phase 2 is complete, you receive a notification. After Phase 2 is complete, there is a
waiting period until Phase 3 starts. Phase 3 is initiated at a designated start time.
Phase 3: These are the updates to the NSX-T appliances. A backup of the management
appliances is taken during this phase. If a problem occurs, there is a restore point for the SDDC.
A management gateway firewall rule is added during this phase. You do not have access to
NSX Manager and vCenter Server during this phase. During this time, your workloads and other
resources function as usual subject to the constraints outlined above.
For more information on estimating the duration of each phase, see Estimating the Duration of
SDDC Maintenance.
When an SDDC upgrade for your SDDC is scheduled, you can see information about upcoming
or ongoing maintenance in the Maintenance Tab of the VMC Console. For more information, see
View an SDDC Maintenance Schedule Reservation.
1 See Correlating VMware Cloud on AWS with Component Releases for the NSX-T version that
the upgrade will apply.
2 See the L2VPN Interoperability table in Install and Configure the On-Premises NSX Edge in the VMware Cloud on AWS Networking and Security guide for the set of supported L2VPN client and server versions.
3 If the version of NSX-T that the SDDC upgrade will apply is compatible with your existing
on-premises NSX Edge, no action is needed. Otherwise, follow the procedure in Install and
Configure the On-Premises NSX Edge to replace your existing on-premises NSX Edge with
a compatible version. If there is an NSX Edge version that is compatible with your SDDC
pre- and post-upgrade, upgrade the on-premises Edge before the SDDC upgrade begins. If
there is no NSX Edge version that meets this criterion, you must wait until the SDDC upgrade
is complete before you upgrade the on-premises Edge. This scenario will result in L2VPN
downtime for the duration of the on-premises upgrade.
n The VMware Hybrid Cloud Extension (HCX) for the SDDC managers will not be upgraded as
part of this release.
n Avoid starting HCX migrations that might overlap with the SDDC upgrade window. HCX bulk
migration processes might be halted, and HCX vMotion migrations might fail.
n For more details, see the VMware HCX User Guide at https://docs.vmware.com/en/VMware-
NSX-Hybrid-Connect/index.html.
n The VMware vCenter Cloud Gateway will be updated to the latest release.
n The user interface for the VMware vCenter Cloud Gateway might be inaccessible during the
upgrade of the appliance.
n For more information, see the documentation for the vCenter Cloud Gateway Appliance
at https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vsphere.vmc-
aws-manage-data-center.doc/GUID-58C1AC46-3F99-4F93-BB1F-FD1878B49374.html.
n You cannot open the Site Recovery UI for the SDDC under maintenance. From the remote
SDDC Site Recovery UI, this site will appear as disconnected.
n Recovery plan failover operations towards the SDDC under maintenance cannot be initiated.
Failover operations in progress might fail when maintenance starts.
n Incoming replications are interrupted. Depending on RPO settings and the maintenance
duration, RPO violation notifications for these replications might appear in the remote Site
Recovery UI. RPO violations should disappear automatically sometime after the maintenance
is completed, depending on when vSphere Replication manages to sync the accumulated
delta. Replications outgoing from the SDDC under maintenance are not affected.
n For more information, see the documentation for VMware Site Recovery at https://
docs.vmware.com/en/VMware-Site-Recovery/index.html.
VMware periodically schedules software maintenance for its services, including VMware Cloud on
AWS. During maintenance, your workload VMs will remain online, but you won't be able to view
or modify your vCenter Server and SDDC networking.
Prerequisites
Procedure
If maintenance is scheduled for this SDDC, you'll see an Upcoming maintenance card
showing a date and time range for the maintenance.
Procedure
If maintenance is ongoing, the maintenance summary shows information about the current
phase and step of maintenance.
You can see details of the operations performed in each step, as well as start and end times
for the steps.
4 (Optional) Click View Times in Local Time Zone to view the times in your local time zone
rather than UTC time.
Procedure
u Calculate your local time from a UTC time using one of the following methods.
Option: Use a time zone calculator
Description: Use the time zone calculator at https://www.timeanddate.com/worldclock/converter.html to convert from UTC time to your time.

Option: Compute local time using UTC offset
Description:
a Determine the time offset from UTC time for your local time zone. See https://en.wikipedia.org/wiki/List_of_UTC_time_offsets.
b Add the time offset to the UTC time (expressed in 24-hour time).
c If daylight saving time is in effect in your local time zone, adjust for daylight saving time.
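If you prefer to script the conversion rather than do the offset arithmetic by hand, the standard library of most languages can do it for you. A minimal Python sketch follows; the timestamp and time zone are example values only.

from datetime import datetime
from zoneinfo import ZoneInfo   # Python 3.9 or later

# Maintenance start time as published in the VMC Console, in UTC (example value).
utc_start = datetime(2021, 10, 25, 14, 0, tzinfo=ZoneInfo("UTC"))

# astimezone applies the UTC offset and any daylight saving adjustment for you.
local_start = utc_start.astimezone(ZoneInfo("America/Los_Angeles"))
print(local_start.isoformat())   # 2021-10-25T07:00:00-07:00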
The length of maintenance depends on many factors, including but not limited to:
n The amount of data in vCenter Server, ESXi hosts, and NSX-T databases
n Time required to add and remove hosts. SDDCs used for VDI require additional time to
update and remove hosts.
n Time to execute multiple service actions such as backup, pre-update, and post-update
actions. When upgrading from SDDC version 1.8 to version 1.9 or higher, additional time is
required for NSX appliance configuration changes.
The number of factors makes it difficult to provide a precise estimate of the maintenance time.
However, the numbers given below are based on historical data and should give you a good idea
of the duration of upcoming maintenance for your SDDC.
Update Host: 45-60 minutes per host. Clusters are updated in parallel; the duration for Phase 2 depends on the number of hosts in the largest cluster.
Clusters are upgraded in parallel up to ten clusters at a time. If you have ten or fewer clusters,
the overall upgrade time is determined by the largest cluster in the SDDC. If you have more than
ten clusters, each additional cluster begins upgrading as soon as one of the initial ten clusters
completes. In this case, the overall upgrade time is determined by the time required for the
largest cluster of the initial ten, plus any additional time required to complete subsequent clusters
that started after one of the first ten completed.
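As a rough illustration of how these figures combine for an SDDC with ten or fewer clusters, the sketch below multiplies the per-host range by the size of the largest cluster. It is a simplification for planning purposes, not an official estimator.

def phase2_estimate_minutes(cluster_host_counts, per_host_minutes=(45, 60)):
    """Rough Phase 2 window for an SDDC with ten or fewer clusters.

    Clusters update in parallel, so the largest cluster dominates the duration.
    """
    largest = max(cluster_host_counts)
    return largest * per_host_minutes[0], largest * per_host_minutes[1]

# Example: three clusters with 8, 6, and 4 hosts.
print(phase2_estimate_minutes([8, 6, 4]))   # (360, 480), roughly 6 to 8 hours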
VM Operations
Orphaned VM(s) Auto-Remediation
If you use "No data redundancy/VMs w/ FTT=0" as a storage policy, you might experience
data loss if there is a failure or if the VM becomes unresponsive. If a failure happens and a
VM or VMs become orphaned, VMware performs a cleanup action. You will receive an email
notification when this happens.
vCenter Operations
vCenter Sessions (Connections) Maxed Out
If many sessions are created and not cleared, vCenter Server might become inaccessible.
Typically this is caused by automation creating a large number of sessions. This generates an
automated alert and VMware will restart vCenter Server. You will receive an email notification
when this happens.
A number of different issues might require a reboot of vCenter Server. Some issues might
require an immediate reboot for remediation, while others might allow for continued usage
with a reboot required in the near future. In the latter case, you will receive an email
notification alerting you that a restart will occur in the next 24 hours. After a reboot, ongoing
tasks and application connections might need to restart.
NSX Operations
Management Plane (NSX Manager) Restart
A number of different issues might require a restart of NSX Manager. Some issues might
require an immediate reboot for remediation, while others might allow for continued usage
with a reboot required in the near future. For the short time while NSX Manager is in the
process of restarting, you will not be able to access the SDDC Networking and Security UI.
You will not receive an email notification for NSX Manager restart events.
SDDC Operations
Single Host SDDC Failure
The Single Host SDDC starter configuration has no SLA and is appropriate for proof-of-
concept or test and development use cases. VMware does not perform any remediation in
the event of a Single Host SDDC failure. You will receive an email notification if a Single Host
SDDC failure occurs.
Your billing cycle begins on the day of the month when the first service for your organization was
set up. For example, if you set up the first service in your organization on the 15th of the month,
your billing cycle runs from the 15th of the month through the 14th of the following month.
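The date arithmetic is straightforward. The following Python sketch (our own helper, not part of the service) returns the start and end dates of the billing cycle that begins in a given month.

from datetime import date, timedelta
import calendar

def billing_cycle(anchor_day, year, month):
    """Start and end dates of the billing cycle that begins in the given month.

    anchor_day is the day of the month on which the first service was set up.
    """
    start = date(year, month, anchor_day)
    next_year, next_month = (year + 1, 1) if month == 12 else (year, month + 1)
    # Clamp the anchor day for short months, then step back one day for the cycle end.
    last_day = calendar.monthrange(next_year, next_month)[1]
    end = date(next_year, next_month, min(anchor_day, last_day)) - timedelta(days=1)
    return start, end

print(billing_cycle(15, 2021, 10))   # (datetime.date(2021, 10, 15), datetime.date(2021, 11, 14))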
Host usage for VMware Cloud on AWS is tracked in alignment with your billing cycle. The host
usage shown on your bill is the entirety of your host usage during the billing period.
Other types of usage, including data transfer out and IP address usage and remaps, are received
on the 5th of each month and include usage up to the last day of the previous month. For these
types of usage, there is a time lag between when the usage occurs and when it shows up on
your bill. The amount of time lag depends on where the beginning of your billing cycle is in
relation to the 5th of the month.
For example, consider two users, Alice and Bob. Alice's billing cycle begins on the 3rd of the
month, while Bob's billing cycle begins on the 12th.
Note If you purchased through AWS, pricing and payment details are not shown in the
VMware Cloud Services Console. Pricing is determined by your agreement with AWS. For more
information, see Purchase Options for VMware Cloud on AWS.
Procedure
By default, a new SDDC is created with medium-sized NSX Edge and vCenter Server appliances.
Large-sized appliances are recommended for deployments with more than 30 hosts or 3000
VMs or in any other situation where management cluster resources might be oversubscribed.
Large-sized appliances are also required if you want to Configure a Multi-Edge SDDC With Traffic
Groups.
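The sizing guidance above reduces to a simple rule of thumb, sketched below. This is only a restatement of the recommendation, not an official sizing tool, and the function name is our own.

def recommended_appliance_size(host_count, vm_count, needs_multi_edge=False):
    """Rule of thumb from the guidance above: large appliances for big or multi-edge SDDCs."""
    if needs_multi_edge or host_count > 30 or vm_count > 3000:
        return "large"
    return "medium"

print(recommended_appliance_size(host_count=12, vm_count=3500))   # large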
You can use a control on the SDDC Settings tab to upsize a medium-sized SDDC to a large-sized
one. This change is permanent and cannot be undone. The operation incurs about an hour of
SDDC downtime, and requires a vCenter restart and an NSX failover. If there aren’t enough free
resources available, the operation adds a host to the SDDC.
Note This operation cannot be performed while SDDC maintenance, including the addition or
removal of hosts, is underway.
Prerequisites
You must be logged in to the VMC Console at https://vmc.vmware.com as a user with a VMC service role of Administrator or Administrator (Delete Restricted).
Procedure
2 On the card for your SDDC, click View Details and then click the Settings tab.
The Settings page displays information about SDDC settings, pre-defined user accounts, and
SDDC access via the API and PowerCLI.
3 Expand the Management Appliances item under the SDDC section of the Settings tab to
view or change the appliance size in this SDDC.
If the appliance size is NSX_MEDIUM, you can click UPSIZE to open the Upsize management
appliances dialog. To upsize the appliance to NSX_LARGE, select the checkboxes to confirm
that you understand the consequences of your action, then click UPSIZE.
The system displays a message to confirm that the requested operation has started and to let you track its progress.
SDDC Roles
CloudAdmin
The CloudAdmin role has the necessary privileges for you to create and manage workloads
on your SDDC. However, you cannot access or configure objects that are supported and
managed by VMware, such as hosts, clusters, and management virtual machines. For detailed
information about the privileges assigned to this role, see CloudAdmin Privileges.
CloudGlobalAdmin
The CloudGlobalAdmin role is associated with global privileges and allows you to create and
manage content library objects and perform some other global tasks.
Note The CloudGlobalAdmin role, which has a subset of the privileges granted to the
CloudAdmin role, is deprecated as of SDDC version 1.7.
The CloudAdmin is responsible for creating users, groups, and roles in the SDDC, typically by
using vCenter Single Sign-On and Hybrid Linked Mode. For the majority of use cases, rights and
roles in the SDDC vCenter can be configured the same way that they are in an on-premises
vCenter linked to the SDDC with Hybrid Linked Mode, so that your organization's workflows can
benefit from having the same access controls in both environments.
Because it is a service, VMware Cloud on AWS limits access by all tenants (organization
members) to vSphere resources that must remain under the control of the service provider
(VMware). It also places limitations on the rights you can associate with roles you create, and
prevents you from modifying the CloudAdmin role or any roles that have more rights than the
CloudAdmin role. The service provider is granted super-user rights over all users, groups, rights,
roles, and inventory objects in your organization.
See Understanding Authorization in vSphere in the VMware vSphere Documentation for more
information about roles and rights in the system.
AWS Roles
To create an SDDC, VMware must add several required AWS roles and permissions to your AWS
account. Most permissions are removed from these roles after the SDDC has been created. The
others remain with the roles in your AWS account.
Important You must not change any of the remaining AWS roles and permissions. Doing so will
render your SDDC inoperable.
Managing SDDC Hosts and Clusters 2
You can add and remove clusters and hosts from your cloud SDDC, as long as this would not
bring your SDDC below the minimum or above the maximum number of allowed clusters and
hosts.
The initial cluster created during SDDC creation is named Cluster-1. Additional clusters that you
create are numbered sequentially, Cluster-2, Cluster-3, and so on.
When you add hosts to an SDDC with multiple clusters, you can select the cluster to add them to.
n Add a Cluster
n Remove a Cluster
n Add Hosts
n Remove Hosts
n Introduction to Auto-Remediation
A given cluster in your SDDC must contain hosts of the same type.
Some host types might not be available within a particular region or availability zone.
I3
The I3 host type is the default host type. I3 hosts have 36 cores, 512 GiB RAM, and 10.37 TiB raw storage capacity per host.
I3en
The I3en host type is optimized for data-intensive workloads. I3en hosts have 96 logical
cores, 768GiB RAM, and approximately 45.84 TiB raw storage capacity per host. Single-host
SDDCs cannot contain the I3en host type.
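For capacity planning, the per-host figures above scale linearly with cluster size, before vSAN overhead and storage policy effects are taken into account. A small sketch:

# Per-host figures quoted above; i3en cores are logical (hyperthreaded) cores.
HOST_TYPES = {
    "i3":   {"cores": 36, "ram_gib": 512, "raw_storage_tib": 10.37},
    "i3en": {"cores": 96, "ram_gib": 768, "raw_storage_tib": 45.84},
}

def cluster_raw_capacity(host_type, host_count):
    """Raw (pre-vSAN) capacity of a cluster built from identical hosts."""
    return {key: value * host_count for key, value in HOST_TYPES[host_type].items()}

print(cluster_raw_capacity("i3", 4))   # 144 cores, 2048 GiB RAM, about 41.5 TiB raw storage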
Add a Cluster
You can add clusters to a cloud SDDC up to the maximum configured for your account.
Additional clusters are created in the same availability zone or availability zones as the initial
SDDC.
When you deploy an additional cluster, whether it is a single availability zone cluster or stretched
cluster, you do not have to select the same host type used in the initial cluster created for the
SDDC. However, all hosts in a given cluster must be of the same type. In addition, while all host
CPUs must be enabled in the initial cluster, when you create additional clusters, you have the
option to disable some of the host CPUs.
Logical networks you have created for your SDDC are automatically shared across all clusters.
Compute and storage resources are configured similarly for all clusters. For example:
n Each cluster contains a Compute-ResourcePool that has the same permissions as the one
created in the initial SDDC cluster.
n Each cluster contains a workloadDatastore that has the same permissions as the one created
in the initial SDDC cluster.
Note The initial cluster contains the Mgmt-ResourcePool and vsanDatastore, and all
management workloads run in this cluster. All clusters contain a workloadDatastore and run
virtual machine workloads.
Custom core counts can be selected only during cluster creation and only for secondary clusters. Custom core counts are not supported in the initial cluster (Cluster-1) because all cores are required for management VMs.
Procedure
2 On the card for the SDDC you want to add a cluster to, select Actions > Add Cluster.
Currently available host types are i3 and i3en. For more information on host types, see
VMware Cloud on AWS Host Types.
4 Specify the number of CPU cores to enable for each host in the cluster.
For i3 hosts, all CPU cores are enabled by default on each host in the cluster. If you'd like to
disable some of the cores to save on licensing costs for applications that are licensed on a
per-core basis, you can enable a subset of the available cores. This subset applies to all hosts
in the cluster. Other cores on each host are disabled and remain disabled for the lifetime of
the host.
For i3en hosts, all CPU cores are hyperthreaded and offered as logical cores by default on
each host in the cluster. If you'd like to disable some of the cores to save on licensing costs
for applications that are licensed on a per-core basis, you can enable a subset of the available
cores. This subset is offered as physical cores and applies to all hosts in the cluster. Other
cores on each host are disabled and remain disabled for the lifetime of the host.
Important Reducing core count affects the compute performance of all workloads on the
host and increases the likelihood of system performance degradation. For example, vCenter
Server and vSAN overhead can become more noticeable, and operations like adding clusters
and hosts can take longer to complete.
Results
Remove a Cluster
You can remove any cluster in an SDDC except for the initial cluster, Cluster-1.
When you delete a cluster, all workload VMs in the cluster are immediately terminated and all
data and configuration information is deleted. You lose API and UI access to the cluster. Public IP
addresses associated with VMs in the cluster are released.
Currently, deleting a cluster from an SDDC deployed with a multiple availability zone cluster is not supported.
Prerequisites
n Migrate any workload VMs that you want to keep to another cluster in the SDDC.
Procedure
3 On the card for the cluster you want to remove, click Delete Cluster.
Before you can delete the cluster, you must select all of the check boxes to confirm that
you understand the consequences of this action. When all the check boxes are selected, the
Delete Cluster button is enabled. Click it to delete the cluster.
Add Hosts
Add hosts to your SDDC to increase the amount of computing and storage capacity available in
your SDDC.
You can add hosts to your SDDC as long as you do not exceed the maximum number of hosts
allotted to your account.
Note If you add one or more hosts to a two-host single-AZ cluster, the minimum size of that
cluster will be permanently increased to three hosts. You will not be able to reduce the number
of hosts below three later. Similarly, if you scale up a two-host stretched cluster to four or more
hosts, you will not be able to reduce the size of the stretched cluster back to two hosts.
Procedure
n If the SDDC has only one cluster, select Actions > Add Hosts from the SDDC card.
n If the SDDC has more than one cluster, select Actions > Add Hosts from the card for the
cluster where you want to add the hosts.
The Add Hosts dialog is displayed.
If you are adding hosts to a stretched cluster, you must add them in multiples of two hosts at
a time.
Results
Remove Hosts
You can remove hosts from your SDDC as long as the number of hosts in your SDDC cluster
remains above the minimum.
The minimum number of hosts for a single availability zone cluster is 3. The minimum number for
a stretched cluster is 6.
Note Although you can create an SDDC containing two hosts, you cannot reduce the number
of hosts in an existing SDDC below 3 for a single availability zone cluster, or 6 for a stretched
cluster.
Whenever you reduce cluster size, storage latency increases due to process overhead
introduced by host removal. The duration of this overhead varies with the amount of data
involved. It can take as little as an hour, though an extreme case could require more than
24 hours. While cluster-size reduction (scale-in) is underway, workload VMs supported by the
affected clusters can experience significant increases in storage latency.
When you remove a host, VMs running on that host are evacuated to other hosts in the SDDC
cluster. The host is placed into maintenance mode and then removed.
Prerequisites
Ensure that you have sufficient capacity in your cluster to hold the workload VMs that will be
evacuated from the hosts that you remove.
Procedure
n If the SDDC has only one cluster, select Actions > Remove Hosts from the SDDC card.
n If the SDDC has more than one cluster, select Actions > Remove Hosts from the card for
the cluster from which you want to remove the hosts.
If you are removing hosts from a multiple availability zone cluster, you must remove them in
multiples of two.
Note All vSAN storage policies have requirements for a minimum number of hosts. If you
attempt to reduce the number of hosts below this minimum, the operation fails. See vSAN
Policies in Managing the VMware Cloud on AWS Data Center.
5 Select the I understand that this action cannot be undone check box.
6 Click Remove.
This action initiates the host removal process. If you are removing a single host, billing for that
host stops immediately. Multiple hosts are removed one at a time to avoid violating vSAN
storage policies. A host slated for removal transitions to the READY state while its data is
being moved. It can take up to several hours to move large volumes of data. When the data
move is complete, the host transitions out of the READY state and billing for that host stops.
Elastic DRS monitors the current demand on your SDDC and applies an algorithm to make
recommendations to either scale-in or scale-out the cluster. A decision engine responds to a
scale-out recommendation by provisioning a new host into the cluster. It responds to a scale-in
recommendation by removing the least-utilized host from the cluster.
Special considerations apply to two-host SDDCs. Only the Default Storage Scale-Out policy
is available for two-host SDDCs. EDRS cannot scale-in an SDDC to fewer than three hosts.
Therefore, when a two-host SDDC is scaled-out to three hosts, it cannot be scaled back to two
hosts.
When the Elastic DRS algorithm initiates a scale-out, all Organization users receive a notification
in the VMC Console and through email.
The algorithm runs every 5 minutes and uses the following parameters:
n Minimum and maximum number of hosts the algorithm should scale up or down to.
n Thresholds for CPU, memory and storage utilization such that host allocation is optimized for
cost or performance. These thresholds, which we list on the Select Elastic DRS Policy page,
are predefined for each DRS policy type and cannot be altered by the user.
Scale-out Recommendation
A scale-out recommendation is generated when any of CPU, memory, or storage utilization
remains consistently above thresholds. For example, if storage utilization goes above the high
threshold but memory and CPU utilization remain below their respective thresholds, a scale-out
recommendation is generated. A vCenter Server event is posted to indicate the start, completion,
or failure of scaling out on the cluster.
Scale-in Recommendation
A scale-in recommendation is generated when CPU, memory, and storage utilization all remain
consistently below thresholds. The scale-in recommendation is not acted upon if the number
of hosts in the cluster is at the minimum specified value. A vCenter Server event is posted to
indicate the start, completion, or failure of the scaling in operation on the cluster.
Note Whenever you reduce cluster size, storage latency increases due to process overhead
introduced by host removal. The duration of this overhead varies with the amount of data
involved. It can take as little as an hour, though an extreme case could require more than
24 hours. While cluster-size reduction (scale-in) is underway, workload VMs supported by the
affected clusters can experience significant increases in storage latency.
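The scale-out and scale-in rules described above amount to an any/all comparison against per-policy thresholds. The sketch below is a toy restatement of that logic: the threshold values are illustrative, not the actual policy values, and the real algorithm evaluates sustained utilization over time rather than a single sample.

def edrs_recommendation(utilization, high, low):
    """Toy EDRS decision: all three dicts map a resource name to a utilization fraction."""
    if any(utilization[r] > high[r] for r in utilization):
        return "scale-out"
    if all(utilization[r] < low[r] for r in utilization):
        return "scale-in"
    return "no action"

high = {"cpu": 0.90, "memory": 0.80, "storage": 0.75}   # illustrative thresholds only
low = {"cpu": 0.50, "memory": 0.50, "storage": 0.20}
usage = {"cpu": 0.40, "memory": 0.45, "storage": 0.80}
print(edrs_recommendation(usage, high, low))   # scale-out (storage is above its high threshold)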
If a host fails in a multiple availability zone cluster, Auto-Scaler attempts to replace it in its original
availability zone. If it is unable to do this because of a full or partial availability zone failure,
Elastic DRS scales out the cluster in the remaining availability zone. It adds non-billable hosts
in the remaining availability zone until the cluster reaches its original host count. This scale out
is dependent on available capacity and is not guaranteed. When the failed availability zone is
restored, Elastic DRS scales in the cluster to remove the extra hosts.
n A three-hour delay to process a scale-in event after scaling out the cluster.
Normally, you would not need to manually add or remove hosts from a cluster with Elastic
DRS enabled. You can still perform these operations, but an Elastic DRS recommendation
might revert them at some point.
If you start an add or remove host operation while an Elastic DRS recommendation is being
applied, the add or remove host operation fails with an error indicating a concurrent update
exception.
n Auto-remediation
During auto-remediation, a failed host is replaced by a new host, and its host tags are applied
to the replacement host. While auto-remediation is in progress, the current recommendations
by the Elastic DRS algorithm are ignored. After auto-remediation completes, the algorithm
runs again and fresh recommendations are applied. If an auto-remediation event is initiated
for a cluster while an Elastic DRS recommendation is being applied to that cluster, the
auto-remediation task is queued. After the Elastic DRS recommendation task completes, the
auto-remediation task starts.
In a new SDDC, Elastic DRS uses the Default Storage Scale-Out policy, adding hosts only
when storage utilization exceeds the threshold of 75%. You can select a different policy if it
provides better support for your workload VMs. For any policy, scale-out is triggered when a
cluster reaches the high threshold for any resource. Scale-in is triggered only after all of the low
thresholds have been reached. See How the Elastic DRS Algorithm Works for more information
about EDRS scale-out and scale-in logic.
Note For two-host SDDCs, only the Default Storage Scale-Out policy is available.
When scaling in, this policy removes hosts gradually in order to avoid performance
slowdowns as demand spikes. It has the following thresholds:
When scaling in, this policy removes hosts quickly in order to maintain baseline performance
while keeping host counts to a practical minimum. It has the following thresholds:
This policy adds multiple hosts at a time when needed for memory or CPU, and adds hosts
incrementally when needed for storage. By default, hosts are added two at a time, but
beginning with SDDC version 1.14 you can specify a larger increment if you need faster scaling
for disaster recovery and similar use cases. When using this policy, scale-out time increases
with the number of hosts added and, when the increment is large (12 hosts), can take up to
40 minutes in some configurations. You must manually remove these hosts when they are no
longer needed. This policy has the following thresholds:
Minimum cluster size
The minimum host count that permits EDRS scaling. Once minimum cluster size is reached,
EDRS cannot perform a scale-in operation, but you can still remove hosts manually until your
organization’s minimum host count is reached.
Maximum cluster size
The maximum host count that permits EDRS scaling. Once maximum cluster size is reached,
EDRS cannot perform a scale-out operation, but you can still add hosts manually until your
organization’s maximum host count is reached.
Procedure
3 On the card for the SDDC or cluster, click ACTIONS and choose Edit EDRS Settings.
The Default Storage Scale-Out policy has no parameters. For other policies, specify a
Minimum cluster size of 3 or more and a Maximum cluster size consistent with your
expected workload resource consumption. The Maximum cluster size applies to CPU and
Memory. To maintain storage capacity and ensure data durability, the service can add more
hosts than what you specified in Maximum cluster size.
5 Click Save.
Introduction to Auto-Remediation
The auto-scaler service within VMware Cloud on AWS monitors the health of your infrastructure.
This allows you to focus on your workloads, knowing that the service will handle any failures should they occur. With auto-scaler, you can build a resilient, highly available SDDC.
Although AWS infrastructure is reliable, failures are inevitable. Failures can be anything that can also occur in an on-premises data center, from individual disk or host failures, to network failures, to more widespread failures that affect groups of hosts. The AWS Well-Architected Framework reliability pillar discusses design principles for reliability in the cloud. You cannot assume that the cloud infrastructure is infallible; you must plan for failures and for automatic recovery. VMware Cloud on AWS provides a significant benefit by abstracting the underlying infrastructure and letting your applications run in the cloud without modifying their architecture. It monitors the infrastructure, detects failures, and automatically remediates them when they occur.
Most of the auto-remediation process happens in the background and is carried out without affecting existing workloads. Auto-remediation continuously monitors the health of the system, so it detects errors as they occur. The service can also quickly provision hardware into an SDDC. By combining these two capabilities, the service reacts quickly to a hardware failure, inserting a new host into your cluster when a fault is detected. In addition, VM data is protected by VMware vSAN, and vSphere HA automatically restarts any VMs that were running on a failed host.
Auto-remediation is part of the VMC Auto-scaler Service. One of the benefits of running your
workloads in VMware Cloud on AWS is that VMware manages the platform, including all the
infrastructure and management components. VMware also performs regular updates across the
SDDC fleet to deliver new features, bug fixes, and software upgrades.
AWS monitors the health of other components such as top-of-rack switches, power supplies,
and so on. Failure of these components triggers host failures and auto-remediation handles such
failures. The auto-remediation monitor checks:
n SDDC: Every SDDC runs a monitoring service that checks the host health.
(Figure: Auto-remediation high-level architecture, showing the VMC Autoscaler Service, AWS Lambda, and the SDDC components ESXi, vCenter, vSAN, and NSX.)
n A monitoring service at the SDDC level receives notifications from the underlying
components of the system.
n AWS sends VMware host-level information, most notably AWS Planned Maintenance events.
The auto-scaler service receives these notifications and automatically remediates any issues
within the SDDC.
Functions of Auto-Remediation
At a high level, auto-remediation monitors hardware and software faults and automatically attempts to remediate the problem by fixing the issue or replacing the hardware. When everything else fails, the auto-remediation process requests manual intervention from SRE.
1 Monitoring - The VMC platform continuously monitors the system health of every SDDC and
every host in the fleet. When a failure is detected, an event is sent to auto-remediation.
2 Wait for transient events - Some of the detected failures can be temporary. For example, the monitoring system might be unable to reach a host due to a temporary connectivity issue. Auto-remediation waits for 5 minutes to determine if the problem is temporary. If the problem clears, auto-remediation returns without taking any action.
3 Add a Host - If the error does not resolve after 5 minutes, auto-remediation begins adding a host to the SDDC, even though it is not yet known whether a replacement host will be required. Adding a host now ensures that it is available when required. Note that you are not billed for this host until it replaces a faulty host in your SDDC.
4 Determine a failure type and take action - Hosts can fail for different reasons and require different actions. For example, a vSAN disk failure on a host that is still connected to a vCenter Server can be remediated through a soft reboot, whereas a PSOD host requires a hard reboot. The auto-remediation logic for this is complex and constantly evolving; it reviews the error and takes the least intrusive action. Auto-remediation is an internal process, and customers have no access to the logic. If you encounter any issues, you can contact VMware support.
5 Check Host Health - The next step is to check whether the remediation action has fixed the host. If the failed host is healthy after a soft or hard reboot, auto-remediation avoids further disruption to the SDDC. It takes any other necessary actions and removes the new host that was added pre-emptively in Step 3.
6 Replace Host - If the failed host cannot be revived, auto-scaler removes the failed host and replaces it with the host that was added in Step 3. vSphere HA and vSAN are triggered, and compute policy tags are attached to the new host.
n Waits to check if a failure alert is transient - Some of the detected failures can be temporary.
Auto-Remediation waits for five minutes to determine if the problem is temporary.
n Add host pre-emptively - When the error does not resolve after 5 minutes, Auto-Remediation
begins adding a new host to the SDDC. You are not billed for this host until it replaces a faulty
host in your SDDC.
n Determine failure type and take action - Depending on the failure type, a host can be
rebooted or it can be replaced right away.
n Replace host - If remediation (a host reboot) does not fix the problem, auto-remediation replaces the failed host.
Events in Auto-Scaler
Once auto-remediation receives a failure event, it needs to determine the type of failure that has occurred so that it can decide on the appropriate action. The SDDC activity log includes any auto-scaler activities, but not the error that triggered them.
Types of Events
n VC events
n An event is triggered when the ESX host is disconnected or not responding and so on.
n DAS events:
n vSAN events:
n Upgrade: Disable EDRS. Maintenance activities frequently require an extra host; this host is added as part of the maintenance event. EDRS is disabled for the duration of any planned maintenance to prevent these activities from triggering scale-in/scale-out events.
You can create or delete, but not modify, a compute policy. If you delete a category tag used
in the definition of the policy, the policy is also deleted. The system does not check for policy
conflicts. If, for example, multiple VMs subject to the same VM-Host affinity policy are also
subject to a VM-VM anti-affinity policy, DRS will be unable to place the VMs in a way that
complies with both policies.
Note Affinity policies in your VMware Cloud on AWS SDDC are not the same as the vSphere
DRS affinity rules you can create on premises. They can be used in many of the same ways, but
have significant operational differences. A compute policy applies to all hosts in an SDDC, and
cannot typically be enforced in the same way that a DRS "must" policy is enforced. The policy
create/delete pages have more information about operational details for each policy type.
Monitoring Compliance
Open the VM Summary page in the vSphere client to view the compute policies that apply to a
VM and its compliance status with each policy.
VM-Host affinity policies can be useful when host-based licensing requires VMs that are running
certain applications to be placed on hosts that are licensed to run those applications. They can
also be useful when virtual machines with workload-specific configurations require placement on
hosts that have certain characteristics.
To prevent a VM-Host affinity policy from blocking the upgrade of a host or cluster, VM-Host
affinity policies are constrained in several ways.
n A policy cannot prevent a host configured for HA from executing a failover. VMs with an
affinity for the failed host can be migrated to any available host in the cluster.
Prerequisites
Procedure
1 Create a category and tag for VMs that you want to include in a VM-Host affinity policy.
Pick a category name that describes common characteristics, such as license requirements,
of VMs you plan to tag as members of that category.
2 Create a category and tag for hosts that you want to include in a VM-Host affinity policy.
You can use existing tags and categories or create new ones specific to your needs. See
vSphere Tags and Attributes for more about creating and using tags.
3 Tag the VMs and hosts that you want to include in a VM-Host affinity policy.
b From the vSphere Client Home, click Policies and Profiles > Compute Policies.
d Fill in the policy Name and choose VM-Host affinity from the Policy type drop-down
control.
e Provide a Description of the policy, then use the VM tag and Host Tag drop-down
controls to choose a Category and Tag to which the policy applies.
Unless you have multiple VM tags associated with a category, the wizard fills in the VM
tag after you select the tag Category.
5 (Optional) To delete a compute policy, open the vSphere Web Client, click Policies and
Profiles > Compute Policies to show each policy as a card. Click DELETE to delete a policy.
A VM-Host anti-affinity policy can be useful when you want to avoid placing virtual machines
that have specific host requirements such as a GPU or other devices, or capabilities such as
IOPS control, on hosts that can't support those requirements. After the policy is created, the
placement engine in your SDDC avoids deploying VMs covered by the policy on hosts covered
by the policy.
To prevent a VM-Host anti-affinity policy from blocking the upgrade of a host or cluster, these
policies are constrained in several ways.
n A policy cannot prevent a host configured for HA from executing a failover. VMs with an
anti-affinity for the failed host can be migrated to any available host in the cluster.
Prerequisites
Procedure
1 Create a category and tag for VMs that you want to include in a VM-Host anti-affinity policy.
Pick a category name that describes common characteristics of VMs you plan to tag as
members of that category.
2 Create a category and tag for hosts that you want to include in a VM-Host anti-affinity policy.
You can use existing tags and categories or create new ones specific to your needs. See
vSphere Tags and Attributes for more about creating and using tags.
3 Tag the VMs and hosts that you want to include in a VM-Host anti-affinity policy.
b From the vSphere Client Home, click Policies and Profiles > Compute Policies.
d Fill in the policy Name and choose VM-Host anti-affinity from the Policy type drop-down
control.
e Provide a Description of the policy, then use the VM tag and Host Tag drop-down
controls to choose a Category and Tag to which the policy applies.
Unless you have multiple tags associated with a VM or host in a given category, the
wizard fills in the VM tag and Host tag after you select the tag Category.
5 (Optional) To delete a compute policy, open the vSphere Web Client, click Policies and
Profiles > Compute Policies to show each policy as a card. Click DELETE to delete a policy.
VM-VM affinity policies can be useful when two or more VMs in a category can benefit from
locality of data reference or where placement on the same host can simplify auditing.
A VM-VM affinity policy establishes an affinity relationship between virtual machines in a given
category. After the policy is created, the placement engine in your SDDC attempts to deploy all
VMs in the category covered by the policy on the same host.
Prerequisites
Procedure
1 Create a category and tag for each group of VMs that you want to include in a VM-VM
affinity policy.
You can use existing tags and categories or create new ones specific to your needs. See
vSphere Tags and Attributes for more about creating and using tags.
b From the vSphere Client Home, click Policies and Profiles > Compute Policies.
d Fill in the policy Name and choose VM-VM affinity from the Policy type drop-down
control.
e Provide a Description of the policy, then use the VM tag drop-down control to choose
the Category and Tag to which the policy applies.
Unless you have multiple VM tags associated with a category, the wizard fills in the VM
tag after you select the tag Category.
4 (Optional) To delete a compute policy, open the vSphere Web Client, click Policies and
Profiles > Compute Policies to show each policy as a card. Click DELETE on the policy card
to delete the policy.
A VM-VM anti-affinity policy discourages placement of virtual machines in the same category on
the same host. This kind of policy can be useful when you want to place virtual machines running
critical workloads on separate hosts, so that the failure of one host does not affect other VMs in
the category. After the policy is created, the placement engine in your SDDC attempts to deploy
VMs in the category on separate hosts.
n If the policy applies to more VMs than there are hosts in the SDDC, or if it's not possible to
place a VM on a host that satisfies the policy, DRS attempts to place the VM on any suitable
host.
Prerequisites
Procedure
1 Create a category and tag for each group of VMs that you want to include in a VM-VM
anti-affinity policy.
You can use existing tags and categories or create new ones specific to your needs. See
vSphere Tags and Attributes for more about creating and using tags.
b From the vSphere Client Home, click Policies and Profiles > Compute Policies.
d Fill in the policy Name and choose VM-VM anti affinity from the Policy type drop-down
control.
e Provide a Description of the policy, then use the VM tag drop-down control to choose
the Category and Tag to which the policy applies.
Unless you have multiple VM tags associated with a category, the wizard fills in the VM
tag after you select the tag Category.
4 (Optional) To delete a compute policy, open the vSphere Web Client, click Policies and
Profiles > Compute Policies to show each policy as a card. Click DELETE to delete a policy.
This type of policy can be useful for a VM running an application that creates resources on the
local host and expects those resources to remain local. If DRS moves the VM to another host for
load-balancing or to meet reservation requirements, resources created by the application are left
behind and performance can be degraded when locality of reference is compromised.
A Disable DRS vMotion policy takes effect after a tagged VM is powered on, and is intended to
keep the VM on its current host as long as the host remains available. The policy does not affect
the choice of the host where a VM is powered on.
Prerequisites
Procedure
1 Create a category and tag for each group of VMs that you want to include in a
DisableDRSvMotion policy.
You can use existing tags and categories or create new ones specific to your needs. See
vSphere Tags and Attributes for more about creating and using tags.
b From the vSphere Client Home, click Policies and Profiles > Compute Policies.
d Fill in the policy Name and choose Disable DRS vMotion from the Policy type drop-down
control.
e Provide a Description of the policy, then use the VM tag drop-down control to choose
the VM category to which the policy applies.
Unless you have multiple VM tags associated with a category, the wizard fills in the VM
tag after you select the tag category.
4 (Optional) To delete a compute policy, open the vSphere Web Client, click Policies and
Profiles > Compute Policies to show each policy as a card. Click DELETE to delete a policy.
n Take advantage of the efficiencies of the cloud while using already-purchased perpetual
licenses.
The requirements for bringing your own licenses to VMware Cloud on AWS depend on whether
you have Microsoft Software Assurance and license mobility benefits associated with those
licenses and when those licenses were acquired.
n If you do not have Software Assurance: You may migrate licenses for products purchased
before October 1, 2019, or which were added as a true-up as part of an Enterprise Enrollment
that was effective before October 1, 2019. These licenses can only be upgraded to versions
that were available before October 1, 2019. See The Amazon Web Services and Microsoft
FAQ for more information.
Note This scenario particularly applies to Windows Server licenses. Windows Server is not eligible for license mobility benefits, and therefore you cannot migrate any Windows Server licenses purchased after October 1, 2019.
n If you have Software Assurance: Microsoft License Mobility through Software Assurance
allows many Microsoft licenses to be migrated to VMware Cloud on AWS. For more
information on License Mobility, see License Mobility.
n Offered per host for all hosts in the cluster. All hosts in a cluster must be licensed.
n Billed based on the maximum number of hosts that were deployed in that cluster during your
billing cycle. You are not billed for maintenance or remediation hosts.
Note Billing for Windows Server and SQL Server licenses begins in February 2021. You will
see charges for these licenses starting with your bill for the January-February billing period.
n Billing begins when you select the license. If you choose to remove the license, you are billed
for the entire billing period.
n After you have selected the licenses, you may deploy an unlimited number of the applicable
VMs on the licensed hosts.
License Mobility
Eligible Microsoft server applications, such as Microsoft SQL Server, may in certain cases be deployed on VMware Cloud on AWS using existing licenses.
This allows you to more easily move your workloads to a VMware Cloud on AWS SDDC, without
any additional Microsoft software licensing fees. Microsoft Volume Licensing customers with
eligible server applications covered by active Microsoft Software Assurance (SA) contracts may
migrate licenses acquired after October 1, 2019. Not only does License Mobility make the transition easier for existing SA customers, it also gives customers who prefer to purchase perpetual licenses the ability to continue doing so while still taking advantage of the efficiencies of the cloud.
Note Licenses acquired before October 1, 2019 can be migrated with or without Software Assurance (SA), and no additional steps are required.
How to Sign Up
All customers using License Mobility through Software Assurance must complete a license verification process with Microsoft, which confirms that you have eligible licenses with active Software Assurance. To start the verification process and review additional details, go to: https://www.microsoft.com/en-us/licensing/licensing-programs/software-assurance-license-mobility.aspx.
Within 10 days of deployment, complete the License Verification Form available on the Volume Licensing Document Search website and provide it to your Microsoft representative or preferred reseller, who can then submit your form to Microsoft. Once submitted, Microsoft
will confirm your eligibility and communicate your verification status to you and your chosen
Authorized Mobility Partner.
n All Microsoft server products migrated to VMware Cloud on AWS must be eligible via the
Microsoft License Mobility through Software Assurance program as set forth by Microsoft at
http://www.microsoft.com/licensing/about-licensing/product-licensing.aspx
n The server applications must be on the list of eligible products published by Microsoft at
http://www.microsoft.com/licensing/about-licensing/product-licensing.aspx. The list includes:
n Exchange Server
n SharePoint Server
n Dynamics AX Server
n Project Server
n BizTalk Server
Note The following products are not eligible for License Mobility through Software Assurance:
If you have purchased VMware Cloud on AWS services through a Managed Service Provider
(MSP), your MSP must have signed a Software Services Reseller Addendum in order to enable
these license options.
Procedure
n If your SDDC has a single cluster, click Edit Microsoft Server Licenses at the bottom of
the SDDC card.
n If your SDDC has multiple clusters, click Actions > Edit Microsoft Server Licenses at the
bottom of the cluster card.
4 If you are part of an academic institution recognized by Microsoft as eligible for associated
licensing rights and terms, select Academic institution recognized by Microsoft.
For more information on academic licensing for Microsoft products, see Programs for
Educational Institutions or speak with your Microsoft license representative. After you have
enabled academic licenses for a cluster, you cannot revert to regular licensing terms.
5 Select the checkboxes to confirm that you understand the license pricing and billing terms.
6 Click Save.
VMware provides a curated Microsoft software repository with the following OVF packages for
your use:
The following configuration was applied to the default SQL Server instance:
Note If you need to add features to the installed instance, use the SQL Server ISO image
provided in the same Content Library. See Create a VM Using a Microsoft ISO for more
information on using the ISO images.
n 4 CPUs
n 8 GB memory
Note Access to the VMware-curated Microsoft software repository should be used only for
deploying Microsoft binaries for use on VMware Cloud on AWS.
Procedure
3 Make any necessary modifications to the VM, such as adding vCPUs, adding disks, or
changing network settings.
4 Configure the compute gateway firewall using the VMC Console to allow outbound HTTP and
HTTPS access to the internet.
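If you script the post-deployment changes in step 3, a minimal PowerCLI sketch might look like the following; the VM name, CPU, memory, and disk sizes are hypothetical, and the firewall change in step 4 is still made in the VMC Console.

# The VM must be powered off (or have hot add enabled) to change CPU and memory.
$vm = Get-VM -Name "sqlserver-2019-std"
Set-VM -VM $vm -NumCpu 8 -MemoryGB 32 -Confirm:$false

# Add a 200 GB data disk on the workload datastore.
New-HardDisk -VM $vm -CapacityGB 200 -Datastore (Get-Datastore -Name "WorkloadDatastore")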
What to do next
n Update your VM with the latest patches and security updates. Although VMware supplies
current versions of Microsoft products, it is your responsibility to apply the latest
maintenance updates and security patches to ensure that the products run in a secure
environment. Follow your enterprise recommendations on updating any binaries that you
import into your SDDC.
n Update firewall rules. The activation and patch update process requires that the VM have
internet access. Follow your enterprise guidelines on whether these firewall rules should be
modified after activation.
VMware provides a curated Microsoft software repository with the following ISOs for your use:
n SQL Server 2019 Enterprise. Use this ISO if you need to add features to the SQL Server
instance.
Note Access to the VMware-curated Microsoft software repository should be used only for
deploying Microsoft binaries for use on VMware Cloud on AWS.
Procedure
3 Install the guest operating system and any other software from the ISO images.
An ISO image containing the activation script is located in the subscribed Content Library.
a In the vSphere Client, navigate to the subscribed Content Library and click Other Types.
b Attach the VM's CD/DVD drive to the Windows_Activation_Script ISO in the Content
Library.
For more information on connecting a Content Library ISO file to a VM, see
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/
GUID-BE1C18D2-8FF0-4F41-AA35-A4BA71D62EB4.html.
6 Make any necessary modifications to the VM, such as adding vCPUs, adding disks, or
changing network settings.
7 Configure the compute gateway firewall using the VMC Console to allow outbound HTTP and
HTTPS access to the internet.
a In the guest operating system, open a command window and change to the directory
where you copied the Activation.ps1 script.
What to do next
n Update your VM with the latest patches and security updates. Although VMware supplies
current versions of Microsoft products, it is your responsibility to apply the latest
maintenance updates and security patches to ensure that the products run in a secure
environment. Follow your enterprise recommendations on updating any binaries that you
import into your SDDC.
n Update firewall rules. The activation and patch update process requires that the VM have
internet access. Follow your enterprise guidelines on whether these firewall rules should be
modified after activation.
When you import virtual machines, you are responsible for ensuring that a license is available for
the Microsoft workload either using BYOL or using VMware-supplied licenses.
Procedure
You can use a variety of methods to do this including cold migration, migration with vMotion,
migration with HCX, or cloning from a Content Library template you have created.
2 Do one of the following in order to maintain the activation status of the Windows Server
instance.
Operations that change the BIOS UUID or Disk ID of the Windows Server VM will result in the VM
requiring activation. These operations include:
n Cloning a VM
n Converting a VM to a template
Procedure
An ISO image containing the activation script is located in the subscribed Content Library.
a In the vSphere Client, navigate to the subscribed Content Library and click Other Types.
b Attach the VM's CD/DVD drive to the Windows_Activation_Script ISO in the Content
Library.
For more information on connecting a Content Library ISO file to a VM, see
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/
GUID-BE1C18D2-8FF0-4F41-AA35-A4BA71D62EB4.html.
3 Configure the compute gateway firewall using the VMC Console to allow outbound HTTP and
HTTPS access to the internet.
a In the guest operating system, open a command window and change to the directory
where you copied the Activation.ps1 script.
What to do next
n Update your VM with the latest patches and security updates. Although VMware supplies
current versions of Microsoft products, it is your responsibility to apply the latest
maintenance updates and security patches to ensure that the products run in a secure
environment. Follow your enterprise recommendations on updating any binaries that you
import into your SDDC.
n Update firewall rules. The activation and patch update process requires that the VM have
internet access. Follow your enterprise guidelines on whether these firewall rules should be
modified after activation.
Cluster conversion is currently available in SDDCs running SDDC 1.10 bundle versions.
You receive notifications when a cluster conversion is scheduled, about to start, and completed.
To ensure that you receive these notifications, ensure that donotreply@vmware.com is added to
your email allow list.
During the conversion, EDRS scale-in is turned off so that it does not interfere with the process.
Two i3en hosts are added to the cluster. The NSX Edge VMs are migrated to these two new
hosts. Then workload VMs are migrated off of one of the i3 hosts and it is removed from the
cluster. The removal of i3 hosts and the addition of i3en hosts is repeated until all i3 hosts have
been removed and the cluster is fully converted. Then EDRS scale-in is re-enabled. Based on
cluster utilization, the number of i3en hosts might differ from the number of i3 hosts you started
with.
The estimated number of i3en hosts in the target cluster is based on the following factors:
n The available capacity in the capacity tier of storage for the target i3en instances.
n The estimated fault domains required in the target cluster. Currently, the estimate does not
take into account the VM storage policies and estimates the required fault domains based
only on the cluster size. This means that for all clusters containing 7 hosts or fewer, the target
cluster will contain the same number of hosts as the source cluster.
The estimate is a best guess only. The final result might differ based on the state of the cluster.
After conversion, EDRS might scale in or scale out the cluster based on the resource usage and
the EDRS policy applied to the cluster. You can also manually resize the cluster after conversion
is complete.
Cluster conversion might take hours to days to complete. VMware recommends taking a backup
before the cluster conversion takes place.
n Removing hosts
During cluster conversion, do not perform the following actions on the cluster that is being converted:
n Do not perform hot or cold workload migrations to or from the cluster being converted.
n Do not make changes to Storage-based Policy Management settings for workload VMs.
n Avoid starting HCX migrations that might overlap with the conversion window.
n Planned migration
n Real failover
n Reprotect
Compute policy tags are not copied over during cluster conversion. You will need to attach host
policy tags after conversion is complete.
Converting the cluster from i3 to i3en hosts does not convert any term commitments that you
have purchased. If you purchased an i3 term commitment, you can choose to purchase a new
i3en term commitment or have your i3en hosts billed at the on-demand rate. In either case, you
will continue to pay for your i3 term commitment.
Working With SDDC Add-On Services 3
When you log in to the VMC Console, you'll see cards for My Services and More Services. You
can add services from the More Services list to your My Services list to make them available in
your SDDC.
A trial version of the vRealize Log Insight Cloud add-on is enabled by default in a new SDDC.
The trial period begins when a user in your organization accesses the vRealize Log Insight Cloud
add-on and expires in thirty days. After the trial period, you can choose to subscribe to this
service or continue to use a subset of service features at no additional cost. For more information
about using vRealize Log Insight Cloud, see the vRealize Log Insight Cloud Documentation.
"text = (nsx AND audit AND true AND comp AND reqid)"
For information about how to use vRealize Automation with VMware Cloud on AWS, see Quick
cloud automation setup for VMware Cloud on AWS in the VMware vRealize Automation Cloud
Product Documentation.
VMware Carbon Black Workload™ delivers advanced protection purpose-built for securing
workloads running in VMware Cloud to reduce the attack surface and strengthen security
posture, while simplifying operations for IT and Security teams.
For more information, see VMware Carbon Black Workload for VMware Cloud on AWS.
NSX Advanced Firewall for VMware Cloud on AWS gives your SDDC access to advanced NSX
application security features, including:
To activate the NSX Advanced Firewall Add-On in your SDDC, open the Add-Ons tab and click
ACTIVATE on the NSX Advanced Firewall add-on card. After the add-on is activated, NSX-T
advanced security features become available on the Networking & Security tab of your SDDC.
See About NSX Advanced Firewall Features in the VMware Cloud on AWS Networking and
Security guide for details about how to use the new features, and for step-by-step instructions
for deactivating the add-on if you no longer need it.
Getting Templates, ISOs, and Other Content into Your SDDC 4
You might have a variety of .vmtx templates, OVF and OVA templates, ISO images, scripts, and
other content that you want to use in your SDDC.
OVA template: Deploy the OVA template directly from a client machine to your SDDC using the vSphere Web Client. Right-click the Compute-ResourcePool resource pool and select Deploy OVF template.
Scripts or text files:
n Import the file into an on-premises Content Library and subscribe to the Content Library from your SDDC.
n Create a local Content Library in your SDDC and upload the file to it.
n Use the Content Onboarding Assistant to transfer the file to your SDDC.
You have two options for how the Content Onboarding Assistant transfers .vmtx templates to your SDDC:
n Convert these templates to OVF templates in the SDDC Content Library. This option takes
less time.
n Transfer these templates as .vmtx templates in the vCenter Server inventory. In this case, the
templates undergo an intermediate conversion to OVF and then back to .vmtx templates.
Note The Content Onboarding Assistant adds scripts and ISO images to a Content Library that
is published from your on-premises data center and subscribed from your SDDC. It does not add
existing OVF or OVA templates to the Content Library. For other ways of transferring OVF or
OVA templates to your SDDC, see Getting Templates, ISOs, and Other Content into Your SDDC in
the VMware Cloud on AWS Operations Guide.
You can use the Content Onboarding Assistant on any MacOS, Linux, or Windows machine that
has network access to your on-premises data center and your SDDC.
If you use the Content Onboarding Assistant to transfer content to your SDDC, and then find that
there are additional items you want to transfer, you can run the Content Onboarding Assistant
again. The Content Onboarding Assistant recognizes which .vmtx templates have already been
transferred and does not allow you to select those to be transferred again. It also recognizes
ISO images and script files that have been transferred, and will only transfer new ISO images and
scripts.
Prerequisites
n Make sure that your on-premises data center is running vCenter Server 6.0 or later.
n Install the Java Runtime Environment (JRE) 1.8 or later. You can download the Java
Runtime installer from the Oracle website at http://www.oracle.com/technetwork/java/
javase/downloads/jre8-downloads-2133155.html.
n Set the $JAVA_HOME environment variable to the location where you installed the JRE.
n Set up a VPN connection between your on-premises data center and your SDDC. See
"Configuring VPNs and Gateways" in Getting Started With VMware Cloud on AWS.
Procedure
1 Prepare scripts and ISO images for addition to the Content Library by moving them into a
single folder in your on-premises data center.
3 In the terminal or command line, switch to the directory where you placed the Content-Onboarding-Assistant.jar file and enter the command java -jar jar_file_name --cfg full_path_to_config_file.
In the configuration file, specify each parameter on its own line, and follow it with a space and the value. For example:
onpremServer vcenter.onprem.example.com
onpremInfraServer psc.onprem.example.com
You can also specify many parameters on the command line as --parameter parameter_value. Type java -jar jar_file_name --help to see a full list of parameters, or consult the table below.
Parameter Description
onpremServer server: The host name of the vCenter Server for your on-premises data center.
cloudServer server: The host name of the cloud SDDC vCenter Server.
cloudInfraServer infra-server: The host name of the cloud SDDC vCenter Server. This is optional.
cloudFolderName foldername: The name of the vCenter Server folder on the cloud SDDC where .vmtx templates will be stored.
cloudRpName resource-pool-name: The resource pool on the cloud SDDC for the .vmtx templates.
cloudNetworkName network-name: The distributed virtual port group on the cloud SDDC for the .vmtx templates.
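For reference, a configuration file that sets these parameters might look like the following. All host names and object names shown here are hypothetical examples; replace them with values from your environment.

onpremServer vcenter.onprem.example.com
onpremInfraServer psc.onprem.example.com
cloudServer vcenter.sddc.example.com
cloudFolderName Migrated-Templates
cloudRpName Compute-ResourcePool
cloudNetworkName sddc-cgw-network-1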
4 Enter the passwords for the on-premises data center and the cloud SDDC when you are
prompted.
Content Onboarding Assistant tests the connections to the on-premises data center and
SDDC, and then displays a table showing all the .vmtx templates it has discovered.
You can enter single numbers separated by commas, or a range separated by a dash.
6 Confirm that the folder for ISO images and scripts is correct.
n Select option 1 to transfer the templates as OVF templates in the SDDC Content Library.
n Select option 2 to transfer the templates as .vmtx templates in the vCenter Server
inventory.
Results
The Content Onboarding Assistant does the following:
n Copies .vmtx templates from your on-premises data center to your SDDC, using the options
you specified.
n Creates a Content Library in your on-premises data center, adds the ISO images and scripts
to that Content Library, and publishes it.
n Creates a subscribed Content Library in your SDDC and synchronizes the ISO images and
scripts to the SDDC.
What to do next
You can now use the .vmtx templates and ISO images to create virtual machines in your SDDC.
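For example, a PowerCLI sketch that deploys a new VM from one of the transferred .vmtx templates might look like this; the template, VM, folder, and datastore names are hypothetical or assume the SDDC defaults.

# Deploy a VM from a transferred template into the compute resource pool and workload folder.
New-VM -Name "web-01" `
  -Template (Get-Template -Name "rhel8-template") `
  -ResourcePool (Get-ResourcePool -Name "Compute-ResourcePool") `
  -Datastore (Get-Datastore -Name "WorkloadDatastore") `
  -Location (Get-Folder -Name "Workloads")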
This method works for transferring OVF templates, ISO images, scripts, and other files.
Prerequisites
n You must have a Content Library in your on-premises data center. See Create a Library.
n Set up a VPN connection between your on-premises data center and your SDDC. See
"Configuring VPNs and Gateways" in Getting Started With VMware Cloud on AWS.
Procedure
1 Add your templates, ISO images, and scripts to the on-premises Content Library.
3 In your SDDC, create a Content Library that subscribes to the one you published in Step 2.
Content is synchronized from your on-premises data center to your SDDC in VMware Cloud
on AWS.
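If you prefer scripting, recent PowerCLI releases provide content library cmdlets. The sketch below creates a published library on premises and a subscribed library in the SDDC; it assumes New-ContentLibrary is available in your PowerCLI version, and all names, datastores, and the subscription URL are hypothetical placeholders.

# On the on-premises vCenter Server: create a published library (skip if yours is already published).
Connect-VIServer -Server vcenter.onprem.example.com
New-ContentLibrary -Name "onprem-content" -Datastore (Get-Datastore -Name "datastore1") -Published

# On the SDDC vCenter Server: create a library that subscribes to the published library's URL.
Connect-VIServer -Server vcenter.sddc.example.com -User cloudadmin@vmc.local
New-ContentLibrary -Name "onprem-content-sub" -Datastore (Get-Datastore -Name "WorkloadDatastore") `
  -SubscriptionUrl "https://vcenter.onprem.example.com:443/cls/vcsp/lib/<library-id>/lib.json"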
You can upload content to your SDDC's WorkloadDatastore. The vsanDatastore is managed by
VMware.
Prerequisites
Procedure
1 In the vSphere Client, select the Storage icon, select WorkloadDatastore, and click Files.
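You can also script the upload with PowerCLI's datastore provider; a minimal sketch, with a hypothetical local file path and target folder:

# Map WorkloadDatastore as a PowerShell drive and copy a local ISO into an ISO folder on it.
$ds = Get-Datastore -Name "WorkloadDatastore"
New-PSDrive -Name "wds" -PSProvider VimDatastore -Root "\" -Location $ds | Out-Null
Copy-DatastoreItem -Item "C:\isos\custom-tools.iso" -Destination "wds:\ISO\"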
Migrating Virtual Machines 5
VMware Cloud on AWS supports several ways to migrate your workload VMs from your on-
premises hosts to the ones in your SDDC and back again, as well as across hosts in your SDDC.
The method you choose should be based on your tolerance for workload VM downtime, the
number of VMs you need to move, and your on-premises networking configuration.
It's important to note that any constraints that apply to on-premises migrations are likely to apply
to hybrid migrations as well. For example, issues described in Enable Virtual CPU Performance
Counters can prevent migration of VMs that enable performance counters when the source or
destination cluster enables Enhanced vMotion Compatibility.
For a guided migration experience to help you use HCX to migrate VMs from your on-premises
data center to the cloud SDDC, you can use the VMware Cloud Migration solution, Integrated
Experiences for your Hybrid Cloud.
Hybrid Migration
Hybrid migration refers to migrating virtual machines between two different vSphere installations:
one that's in your on-premises data center and another that's in your VMware Cloud on AWS
SDDC. Because these two vSphere installations might have different versions, configurations,
or both, hybrid migration use cases typically carry additional prerequisites and configuration
that ensure both compatibility of the virtual machines and appropriate network bandwidth and
latency. VMware Cloud on AWS supports a variety of tools and methods for hybrid migration.
For more information about using HCX for hybrid migration, see the VMware HCX User Guide and
the VMware Cloud Migration solution at https://vmc.vmware.com/solutions.
Requirement Description
On-premises vSphere version:
n For vMotion: vSphere 6.0, 6.5, 6.7, 7.0
n For bulk migration: vSphere 6.0, 6.5, 6.7, 7.0
n For cold migration: vSphere 6.0, 6.5, 6.7, 7.0
Installation of VMware HCX Manager in the on-premises data center: See the VMware HCX Product Documentation. HCX Connector Environment Requirements are described in Software Version Requirements for the HCX Installations in the VMware HCX Product Documentation.
Establish the HCX Interconnect with your SDDC: Pair the VMware HCX Manager with your VMware Cloud on AWS SDDC as a remote site as described in Configuring and Managing the HCX Interconnect and Configuring and Managing the HCX Interconnect with the Multi-Site Service Mesh.
VMware Cloud on AWS firewall rules: Create firewall rules to open the ports used by HCX as described in Network Port and Protocol Requirements.
On-premises firewall rules: Create firewall rules to open the ports used by HCX as described in Network Port and Protocol Requirements.
Virtual machine hardware and settings: Ensure that these requirements are met for virtual machine hardware.
n Virtual machine hardware version and virtual disk size as specified in VMware Configuration Maximums.
n EVC is not supported in the VMware Cloud on AWS SDDC.
n VMs that are created in the cloud SDDC or that have been power-cycled after migration to the cloud SDDC can't be migrated back to the on-premises data center with vMotion unless the on-premises EVC baseline is Broadwell. You can relocate these VMs after powering them off, as long as their virtual machine hardware version is compatible with the on-premises data center.
The following virtual machines are not supported:
n VMs with shared .vmdk files.
n VMs with virtual media or ISOs attached.
To implement migration with vMotion, you can configure hybrid linked mode and use the vSphere
client. You can also use command-line (PowerShell) or API automation.
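As an illustration of the PowerCLI path, a hybrid vMotion can be scripted with Move-VM once both vCenter Server systems are connected in the same session. The sketch below is a minimal example; the server names, VM name, and network name are hypothetical, and the network lookup may need to change if the destination segment is exposed as an NSX opaque network.

# Connect to the on-premises and cloud vCenter Server systems in one PowerCLI session.
$onprem = Connect-VIServer -Server vcenter.onprem.example.com
$cloud  = Connect-VIServer -Server vcenter.sddc.example.com -User cloudadmin@vmc.local

# Live-migrate a powered-on VM to the SDDC compute resource pool and workload datastore.
$vm = Get-VM -Name "app-01" -Server $onprem
Move-VM -VM $vm `
  -Destination (Get-ResourcePool -Name "Compute-ResourcePool" -Server $cloud) `
  -Datastore (Get-Datastore -Name "WorkloadDatastore" -Server $cloud) `
  -NetworkAdapter (Get-NetworkAdapter -VM $vm) `
  -PortGroup (Get-VDPortgroup -Name "sddc-cgw-network-1" -Server $cloud)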
n VMs that use standard virtual switches for networking cannot be migrated back to an on-
premises data center after being migrated to the cloud SDDC.
n Any VM that has been power-cycled in the cloud SDDC can only be migrated back to an
on-premises host or cluster with the Broadwell chipset or EVC mode.
n If your on-premises hosts haven't been patched to address vulnerability to side channel
analysis due to speculative execution (also referred to as the Spectre Variant 2 vulnerability),
this may affect vMotion compatibility as shown in Table 5-1. vMotion Compatibility Effects
of Spectre patch. To find the correct patch for your on-premises hosts, see https://
kb.vmware.com/s/article/52245. All hosts in VMware Cloud on AWS SDDCs have been
patched.
Note You can find the Virtual Machine Hardware Version on the Summary tab for the
virtual machine. You can find the host processor type on the Summary tab for the host. For
a list of processor types in the Broadwell processor family, see https://ark.intel.com/products/
codename/38530/Broadwell.
Note HCX-based vMotion has a different set of requirements. See Hybrid Migration with
VMware HCX Checklist.
Requirement Description
Networking speed and latency: Migration with vMotion requires sustained minimum bandwidth of 250 Mbps between source and destination vMotion VMkernel interfaces, and a maximum latency of 100 ms round trip between source and destination.
On-premises vSphere version: Your on-premises vSphere installation must be one of the following:
n vSphere 6.7U2 or higher.
n vSphere 6.5P03 or higher.
See VMware Knowledge Base article 56991 for more information.
Hybrid Linked Mode: Hybrid Linked Mode is required to initiate migration from the vSphere Client. It is not required to initiate migration using the API or PowerCLI. See "Hybrid Linked Mode" in Managing the VMware Cloud on AWS Data Center.
VMware Cloud on AWS firewall rules: Ensure that you have created the necessary firewall rules as described in Required Firewall Rules for vMotion.
On-premises firewall rules: Ensure that you have created the necessary firewall rules as described in Required Firewall Rules for vMotion.
Virtual machine hardware and settings: Ensure that these requirements are met for virtual machine hardware.
n Virtual machine hardware version 9 or later is required for migration with vMotion from the on-premises data center to the cloud SDDC.
n EVC is not supported in the VMware Cloud on AWS SDDC.
n VMs that are created in the cloud SDDC or that have been power-cycled after migration to the cloud SDDC can't be migrated back to the on-premises data center with vMotion unless the on-premises EVC baseline is Broadwell. You can relocate these VMs after powering them off, as long as their virtual machine hardware version is compatible with the on-premises data center.
n Migration of VMs with DRS or HA VM overrides is not supported. For more information on VM overrides, see Customize an Individual Virtual Machine.
Important Source switch configurations (including NIOC, spoofguard, distributed firewall, and
Switch Security) and runtime state are not applied at the destination as part of migration in either
direction. Before you initiate vMotion, apply the source switch configuration to the destination
network.
Allow SSO vCenter Server remote (on-premises) Platform Services Controller IP address vCenter SSO (TCP 7444)
Rule | Action | Source | Destination | Service | Ports
Cloud SDDC to on-premises vCenter Server | Allow | CIDR block for cloud SDDC management network | On-premises vCenter Server, PSC, Active Directory subnet | HTTPS | 443
Cloud SDDC to Remote LDAP | Allow | CIDR block for cloud SDDC management network | Remote LDAP Server | TCP | 389, 636
Cloud SDDC to ESXi vMotion | Allow | CIDR block for cloud SDDC management network | Remote ESXi host subnet | TCP | 8000
To implement bulk migration, you can use command-line (PowerShell) or API automation,
described in the Multicloud Workload Migration whitepaper. For additional GUI and REST API
options, download the Cross vCenter Workload Migration Utility.
vSphere 5.0, 5.1, 5.5, 6.0, and 6.5 | Internet or AWS Direct Connect and L2 VPN created through HCX | Any VMware Distributed Switch, vSphere standard switch, or Cisco Nexus 1000v
To implement cold migration, you can configure hybrid linked mode and use the vSphere client.
You can also use command-line (PowerShell) or API automation.
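A scripted cold migration follows the same Move-VM pattern as the vMotion sketch earlier in this chapter, except that the VM is powered off first. The VM name below is hypothetical, and $onprem and $cloud are the connections from that earlier sketch.

# Shut down the guest OS and wait until the VM reports PoweredOff before relocating it.
$vm = Get-VM -Name "legacy-app-01" -Server $onprem
Shutdown-VMGuest -VM $vm -Confirm:$false
# A network mapping (-NetworkAdapter and -PortGroup) may also be required, as in the vMotion sketch.
Move-VM -VM $vm `
  -Destination (Get-ResourcePool -Name "Compute-ResourcePool" -Server $cloud) `
  -Datastore (Get-Datastore -Name "WorkloadDatastore" -Server $cloud)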
vSphere 6.0u3 | AWS Direct Connect or IPsec VPN | VMware Distributed Switch version 6.0
vSphere 6.5 patch d | AWS Direct Connect or IPsec VPN | VMware Distributed Switch version 6.0 or 6.5
vSphere 5.5, 6.0, and 6.5 | Internet or AWS Direct Connect and L2 VPN created through HCX | Any VMware Distributed Switch, vSphere standard switch, or Cisco Nexus 1000v
Requirement Description
On-premises virtual switch configuration: Standard switches, vSphere Distributed Switch 6.0, or vSphere Distributed Switch 6.5
Hybrid Linked Mode: Hybrid Linked Mode is required to initiate migration from the vSphere Client. It is not required to initiate migration using the API or PowerCLI. See "Hybrid Linked Mode" in Managing the VMware Cloud on AWS Data Center.
VMware Cloud on AWS and on-premises firewall rules: Ensure that you have created the necessary firewall rules as described in Required Firewall Rules for Cold Migration.
On-premises DNS configuration: Ensure that your on-premises DNS server can correctly resolve the address for the cloud vCenter Server.
Rule | Action | Source | Destination | Service | Ports
Cloud SDDC to on-premises vCenter Server | Allow | CIDR block for cloud SDDC management network | On-premises vCenter Server | HTTPS | 443
Cloud SDDC to ESXi Remote Console | Allow | CIDR block for cloud SDDC management network | VMware Cloud on AWS vCenter Server IP address | TCP | 902
Cloud SDDC to Remote LDAP (Required for HLM only) | Allow | CIDR block for cloud SDDC management network | Remote LDAP Server | TCP | 389, 636
Using this connection, you can enable access between VMs in your SDDC and services in your
AWS account, such as EC2 and S3.
The default AWS Security Group in the connected VPC controls traffic from EC2 instances in the
VPC to VMs in the SDDC. This traffic must also pass through the Compute Gateway firewall (and
the Distributed Firewall if you're using that). All of these controls must be configured to allow the
intended traffic or the connection can't be established.
When you deploy an EC2 instance, the EC2 Launch Wizard associates it with a new Security
Group unless you have specified another group. A new AWS Security Group allows all outbound
traffic from the instance and no inbound traffic to it. To allow a connection between an EC2
instance and a VM in your SDDC, you typically need only create inbound rules.
n To allow traffic to be initiated from the EC2 instance to a VM in the SDDC, create an inbound
rule on the default Security Group.
n To allow traffic to be initiated from the VM to the EC2 instance, create an inbound rule on the
Security Group applied to the EC2 instance.
VMware Knowledge Base article 76577 has additional information that applies to cases where
the default AWS Security Group has a missing or altered allow-all rule for outbound traffic.
Bear in mind that when you use the default AWS Security Group with the instance, its inbound
rules are applied to traffic both when it transits the EC2 instance, and when it transits the SDDC.
To allow traffic initiated by either the VM in the SDDC or the EC2 instance to reach the other, inbound rules must allow inbound traffic from both the EC2 instance and the VM.
Prerequisites
n The CIDR blocks of the network segments the VMs in your SDDC are connected to. Click
Segments on the Networking & Security tab to list all segments.
n The connected Amazon VPC and subnet. Click Connected VPC in the System category on
the Networking & Security tab to open the Connected Amazon VPC page, which provides
this information under VPC ID and VPC Subnet.
Procedure
n The EC2 instance must be in the VPC that you selected during deployment of your SDDC,
or a connection can't be established over a private IP address.
n The EC2 instance can be deployed in any subnet within the VPC, but you might incur
cross-AZ traffic charges if it is a different AZ than the one you selected during SDDC
deployment.
n If possible, select a Security Group for your EC2 instance that already has an inbound
traffic rule configured as described in Step 2.
n The VPC subnet(s) used for the SDDC, as well as any VPC subnets on which AWS
services or instances communicate with the SDDC must all be associated with the VPC's
main route table.
n Workload VMs in the SDDC can communicate over the ENI connection with all subnets in
the primary CIDR block of the connected VPC. VMC is unaware of other CIDR blocks in
the VPC.
2 Add inbound rules to the Security Group applied to the instance. Select the EC2 instance that
you deployed in Step 1 and configure its Security Group to allow inbound traffic from the
logical network or IP address associated with the VM in your SDDC.
b In the instance description, click the instance's Security Group and click the Inbound tab.
c Click Edit.
e In the Type dropdown menu, select the type of traffic that you want to allow.
f In the Source text box, select Custom and enter the IP addresses or CIDR block of VMs in
the SDDC that need to communicate with the instance.
g (Optional) Add rules as needed for additional CIDR blocks or traffic type you want to
connect to the instance from VMs in your SDDC.
h Click Save.
3 (Optional) If you need to allow traffic initiated by the instance that you deployed in Step 1 to
a VM in your SDDC, edit the default Security Group for the connected Amazon VPC to add
inbound rules that identify the instances by CIDR block or Security Group.
a In the AWS console, select the default Security Group for the Connected Amazon VPC
and click the Inbound tab.
b Click Edit.
d In the Type dropdown menu, select the type of traffic that you want to allow.
e In the Source text box, select Custom and enter the IP addresses or CIDR block of VMs in
the SDDC that need to communicate with the instance.
If all the VMs are associated with the same SDDC Inventory Group, you can specify that
Group as the Source rather than using an IP address or CIDR block.
f (Optional) Add rules as needed for additional CIDR blocks or traffic type you want to
connect to the instance from VMs in your SDDC.
g Click Save.
See Add or Modify Compute Gateway Firewall Rules in VMware Cloud on AWS Networking
and Security.
n To allow inbound traffic from the instances in the connected Amazon VPC, create a rule
where the Source is Connected VPC Prefixes and the Destination is an inventory group
containing the VMs that require inbound access from the instance.
n To allow outbound traffic to instances in the connected Amazon VPC, create a rule where
the Source is an inventory group containing the VMs that require outbound access to the
instance and the Destination is Connected VPC Prefixes.
Note In either case, you can limit traffic to or from a subset of EC2 instances by defining a
workload inventory group in your SDDC that includes only the IP addresses or CIDR blocks
for those instances.
If any of the VMs that communicate with the instance is protected by distributed firewall,
you might need to adjust the rules for that firewall to allow the expected traffic. See Add or
Modify Distributed Firewall Rules.
Procedure
1 Create an S3 endpoint.
See Gateway VPC Endpoints and Endpoints for Amazon S3 in the Amazon Virtual Private
Cloud User Guide.
a For Service category, select AWS services.
c In the VPC drop down, select the VPC that is connected to your SDDC.
d Under Configure route tables, select the Route Table ID where the value in the Main
column is Yes. This Route Table is used by the SDDC and should also be associated with
the VPC subnet the SDDC is connected to.
e Under Policy select the default Full Access policy or create a more restrictive one. See
Endpoints for Amazon S3 in the Amazon Virtual Private Cloud User Guide. Traffic to S3
from the SDDC will have its source IP NATted to an IP from the subnet selected at SDDC
deployment, so any policy must allow traffic from that subnet.
f Click Create Endpoint to create the endpoint and add routes for the S3 public IP ranges in
the region to the main route table.
2 (Optional) Configure the security group for your connected Amazon VPC to allow outbound
traffic to the network segment associated with the VM in your SDDC.
The default security group allows this traffic, so you won't need to take this step unless you
previously customized the default security group.
a In the AWS console, select the default Security Group for the Connected Amazon VPC
and click the Outbound tab.
b Click Edit.
e In the Destination text box, select the prefix list associated with the S3 endpoint.
You can find this prefix list in the VPC's Managed prefix lists card. If you see multiple
prefix lists here, choose one that is specific to the region that contains the S3 service
you're interested in.
f Click Save.
By default, S3 access through the elastic network interface in the connected Amazon VPC
is enabled. If you disabled this access to allow S3 access through the internet gateway, you
must re-enable it.
a Log in to the VMC Console at https://vmc.vmware.com.
4 From the VMC Console, create a compute gateway firewall rule to allow HTTPS access to the
connected Amazon VPC.
c Click ADD RULE and add a rule with the following parameters, where Workload-CIDR is the CIDR block for the segment that contains the workload VMs that need to access S3.
Results
Workload VMs in your SDDC can access files in the S3 bucket over an HTTPS connection.
Procedure
1 Ensure that the access permissions for the S3 bucket permit access from your cloud SDDC
from the internet.
See Managing Access Permissions to Your Amazon S3 Resources for more information.
By default, S3 access goes through the S3 endpoint of your connected Amazon VPC. You
must enable access to S3 over the internet before you can use it.
a Log in to the VMC Console at https://vmc.vmware.com.
b View Details
d Click System > Connected VPC to open the Connected Amazon VPC page, and then click
Disable next to the S3 entry under Service Access.
3 From the VMC Console, create a compute gateway firewall rule to allow HTTPS access to the internet.
c Click ADD RULE and add a rule with the following parameters, where Workload-CIDR is the CIDR block for the segment that contains the workload VMs that need to access S3.
Results
VMs in your SDDC can now access files on the S3 bucket using their HTTPS paths.
See the VMware Product Interoperability Matrices for the list of vRealize Automation versions
that you can use with VMware Cloud on AWS.
Procedure
1 Configure a VPN connection over the public internet or AWS Direct Connect.
See Configure VPN Connectivity to the On-Premises Data Center and Configure AWS Direct
Connect for VMware Cloud on AWS in VMware Cloud on AWS Networking and Security.
2 Verify that the vCenter Server FQDN is resolvable at a private IP address on the management
network.
See Set vCenter Server FQDN Resolution Address in VMware Cloud on AWS Networking and
Security.
Table 7-1. Management Gateway Firewall Rules Required by vRealize Automation (continued)
On Premises to ESXi Ping | CIDR block of on-premises data center | ESXi Management Only | ICMP (All ICMP)
On Premises to ESXi Remote Console and Provisioning | CIDR block of on-premises data center | ESXi Management Only | TCP 902
On-Premises to SDDC VM | CIDR block of on-premises data center | CIDR block of SDDC logical network | Any (All Traffic)
SDDC VM to On-Premises | CIDR block of SDDC logical network | CIDR block of on-premises data center | Any (All Traffic)
See Add or Modify Management Gateway Firewall Rules in VMware Cloud on AWS
Networking and Security.
Prerequisites
n Ensure that you have completed all the steps in Prepare Your SDDC to Work with vRealize
Products.
n Ensure that all vRealize Automation VMs are configured to use TLS 1.2.
Procedure
u If you are using vRealize Automation 8.x, follow the example workflows in Create a VMware
Cloud on AWS cloud account in vRealize Automation Cloud Assembly within a sample
workflow to configure your on-premises installation of vRealize Automation 8.x to work with
VMware Cloud on AWS.
If you are using an earlier version of vRealize Automation, see Connect vRealize Automation
7.x to Your SDDC.
Prerequisites
n See the VMware Product Interoperability Matrices for the list of vRealize Automation 7.x
releases that you can use with VMware Cloud on AWS.
n Ensure that you have completed all the steps in Prepare Your SDDC to Work with vRealize
Products.
n Ensure that all vRealize Automation VMs are configured to use TLS 1.2.
Procedure
7 Create reservations for the components that the cloud admin has access to.
Option Description
Datastore WorkloadDatastore
Network Use the logical network that you created as part of the prerequisites
Important Because VMware Cloud on AWS places VMs provisioned for vRealize Automation
Business Groups in a non-standard folder, you must set the vRealize Automation custom
property VMware.VirtualCenter.Folder to reference the workloads folder (VM & Template
Folder). See the vRealize Automation Custom Properties Reference.
8 Create a Network Profile for the logical network you created as part of the prerequisites.
For more information on creating a network profile, see Create a Network Profile.
9 Create a Blueprint.
The notification gateway provides a central integration point for all customer-facing notifications
from VMware Cloud on AWS. The notification gateway is designed to keep you up-to-date on
Day 2 operations events and service updates, including maintenance notifications, Elastic DRS
Add Host events, subscription expiration reminders, and VMware Site Recovery notifications. You
can find a list of all notifications in Notifications Available from VMware Cloud on AWS.
The notification channels that are available include email, VMC Console, vSphere Client, the
Activity Log UI, and VMware Log Insight Cloud.
Outages and other service-wide events are reported on the VMware Cloud Services status page.
See View and Subscribe to the Service Status Page for more information.
Notifications for events such as SDDC deployment, removal, upgrades, and maintenance are
included in the Activity Log. See View the Activity Log.
For events such as customer-specific outages, upgrades, and maintenance, VMware also sends
email notifications to all organization owners and organization members. To ensure that you
receive these email notifications, add donotreply@vmware.com to your email allow list.
Procedure
Entries are displayed in reverse chronological order, with the newest entries at the top.
3 (Optional) If an entry indicates that a task failed, click to expand the task to show the error
message.
Subscribe to the status page to get real-time email or SMS notifications on the service status.
Procedure
3 Select the notification methods you prefer to subscribe to for the service.
For more information on identifying the notification and activity events in vRealize Log Insight
Cloud, see Using the vRealize Log Insight Cloud Add-On.
Host Issue Detected | HostIssueDetected | Autoscaler has detected an issue with an ESXi host. | Contact support to opt-in.
Host Successfully Replaced | ReplaceHostSuccessful | An ESXi host that was experiencing an issue was replaced. | Contact support to opt-in.
Host Successfully Remediated | RemediateHostSuccessful | An ESXi host that was experiencing an issue was remediated. | Contact support to opt-in.
Autoscaler/EDRS Notifications
Display Name | Template ID | Description | Email Channel
Reminder for AWS account linking on day 4 | starshot-awsAccountLinkingDay4 | Reminder for Single Host SDDC user to complete account linking. | Yes
Reminder for AWS account linking on day 13 | starshot-awsAccountLinkingDay13 | Reminder for Single Host SDDC user to complete account linking. | Yes
Reminder for AWS account linking on day 28 | starshot-awsAccountLinkingDay28 | Reminder for Single Host SDDC user to complete account linking. | Yes
Single Host SDDC Failure | Single_Host_SDDC_Failure | Host failure notification for a single host SDDC. | Yes
Activity Events
Display Name | Template ID
Procedure
n Select the check box next to a notification category to receive email notifications for all
events in that category.
n Expand a category and select individual check boxes to receive notifications for individual
events.
This section also documents a number of known issues and workarounds that can help you
resolve problems.
n Get Support
n Compute Workloads Are Unable to Reach an On-Premises DNS Servers Over a Policy-Based
VPN
Get Support
VMware Cloud on AWS customers can get support by opening the VMware Cloud Services
console.
Procedure
1 Before you contact VMware for support, have the support information for your SDDC ready.
2 See How Do I Get Support for more information about using VMware Cloud Services in-
product support.
Problem
You might experience problems connecting to resources on VMware Cloud on AWS. For
example:
n You log in to the VMC Console and see only a blank screen.
n You try to log in to the vSphere Client or vSphere Web Client and see the error message,
User name and password are required.
Cause
Solution
u You can resolve this issue either by deleting the site cookies or opening an incognito or
private browsing window in your browser.
Option Description
Delete cookies Follow the instructions for your browser. If you want to delete only specific
cookies, delete ones with "vmware" and "vidm" in the name.
n Google Chrome: See https://support.google.com/chrome/answer/95647
n Mozilla Firefox: See https://support.mozilla.org/en-US/kb/delete-cookies-
remove-info-websites-stored
n Microsoft Internet Explorer: https://support.microsoft.com/en-us/help/
17442/windows-internet-explorer-delete-manage-cookies
n Microsoft Edge: https://support.microsoft.com/en-us/help/10607/
microsoft-edge-view-delete-browser-history
n Safari: https://support.apple.com/kb/PH21411?locale=en_US
Problem
When you click the link on the connection tab to open the vSphere Client interface to vCenter
Server, your browser reports that the site cannot be reached.
Cause
By default, the management gateway firewall is set to deny all traffic between the internet and
vCenter Server. Verify that the appropriate firewall rules are in place.
Solution
Problem
While deploying an SDDC, there is a step in which you select a VPC and subnet in your AWS
account to connect to your SDDC. You might be unable to select a subnet during this step. A
message in the UI indicates that there are no eligible subnets in the AWS availability zone (AZ)
and region where the SDDC will be created.
Cause
You must select a subnet created in the availability zone (AZ) where you plan to deploy your
SDDC. If you have created only a single subnet and it's not in an AZ that supports VMware Cloud
on AWS, you'll see this message.
Solution
2 Re-try the subnet selection step in Deploy an SDDC from the VMC Console.
Problem
You changed the cloudadmin@vmc.local password for a vCenter Server system from the vSphere Client.
Now you no longer remember the password, so you use the Copy icon on the Default vCenter
Credentials page and paste the password into the VMware vCenter Single Sign-On Login Screen.
The login process fails.
Cause
When you change the password for your SDDC from the vSphere Client, the new password is
not synchronized with the password that is displayed on the Default vCenter Credentials page.
That page shows only the Default credentials. If you change the credentials, you are responsible
for keeping track of the new password.
Solution
Contact Technical Support and request a password change. See Get Support.
Problem
You connect your VMware Cloud on AWS SDDC to your on-premises SDDC over a policy-based VPN and can ping IP addresses in the on-premises network from VMs in the SDDC network, but workload VMs cannot reach your on-premises DNS servers.
Cause
The problem occurs if the policy-based VPN connection to your on-premises SDDC has not been
configured to allow DNS requests.
Solution
1 If you can configure your on-premises connection over a route-based VPN or Direct Connect,
you can skip the rest of these steps.
2 If you must use a policy-based VPN as your on-premises connection, configure the SDDC
side of the VPN tunnel to allow DNS requests over the VPN.
c Click the vertical ellipsis icon for the VPN and select Edit VPN.
e Click SAVE.