VMware NSX Advanced Load Balancer Configuration Guide
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2022 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
1 Load Balancing
Cloud Connectors
Virtual Services
Wildcard VIP
Difference Between Virtual Service and Virtual IP
Create a Virtual Service
Disable a Virtual Service
Find Virtual Service UUID
Virtual Service Placement Settings
HTTP Policy Reuse
Block an IP Address from Access to a Virtual Service
Impact of Changes to Min-Max Scaleout Per Virtual Service
Enhanced Virtual Hosting
Custom Controller Utilization Alert Thresholds
Enabling Traffic on VIP
Wildcard SNI Matching for Virtual Hosting
Service Engine Group
Creating SE Group
Service Engine Datapath Isolation
Deactivating IPv6 Learning in Service Engines
Storing Inter-SE Distributed Object
Setting a Property for Newly Created Service Engine Group
Application Profile
Redirect HTTP to HTTPS
Overview of SSL/TLS Termination
TCP or UDP Profile
Data-Plane TCP Stack
TCP Fast Path
TCP Fast Path Configuration
TCP Proxy
UDP Fast Path
UDP Proxy
Internet Content Adaptation Protocol
ICAP Support for NSX Defender
Logs and Troubleshooting
ICAPs
4 DNS
NSX Advanced Load Balancer DNS Feature
DNS Load Balancing
Configuring DNS
DNS Policy
Matches
Rule Configuration through the NSX Advanced Load Balancer UI
Integration with External DNS Providers
DNS Configuration
Custom IPAM Profile on NSX Advanced Load Balancer
Support for Authoritative Domains, NXDOMAIN Responses, NS and SOA Records
Adding Custom A Records to an NSX Advanced Load Balancer DNS Virtual Service
Clickjacking Protection
DNS Queries Over TCP
Adding DNS Records Independent of Virtual Service State
DNS TXT and MX Record
Add Servers to Pool by DNS
5 Service Discovery using NSX Advanced Load Balancer as IPAM and DNS Provider
IPAM Configuration
DNS Configuration
Configuring the IPAM/DNS Profiles by Provider Type
7 Security
Overview of NSX Advanced Load Balancer Security
SSL Certificates
Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer System
Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer
OCSP Stapling in NSX Advanced Load Balancer
Client SSL Certificate Validation
HTTP Application Profile
PKI Profile
Certificate Authority
Physical Security for SSL Keys
Layer 4 SSL Support
EC versus RSA Certificate Priority
Client-IP-based SSL Profiles
Configuration Using the NSX Advanced Load Balancer CLI
SSL/TLS Profile
SSL Profile Templates
SSL Client Cipher in Application Logs on NSX Advanced Load Balancer
Configure Stronger SSL Cipher
Server Name Indication
Configuration
True Client IP in L7 Security Features
Configure True Client
App Transport Security
Exporting PFX Client Key to the Keychain of the Local Workstation
Creating PKI Application Profile
Configuring HTTP Profile
Configuring L4 SSL/TLS Profile
Associating Application Profile with Virtual Service
Full-chain CRL Checking for Client Certificate Validation
Updating SSL Key and Certificate
Customizing Notification of Certificate Expiration
About This Guide
The VMware NSX Advanced Load Balancer Configuration Guide provides information about
configuring the NSX Advanced Load Balancer, including creating virtual services, creating and
managing Service Engine groups, controlling the behavior of Service Engines using application
profiles, creating and configuring pools and pool groups, autoscaling Service Engines and virtual
services, configuring load balancing algorithms, configuring different types of persistence, health
monitoring of deployed servers, configurations for high availability, options for operating NSX
Advanced Load Balancer securely, certificate management including registration, renewal, and
Client Certificate Authentication, and more.
1 Load Balancing
NSX Advanced Load Balancer is a software load balancer that provides scalable application
delivery across any infrastructure. NSX Advanced Load Balancer provides 100% software load
balancing to ensure a fast, scalable, and secure application experience. It delivers elasticity and
intelligence across any environment, scales from 0 to 1 million SSL transactions per second in
minutes, and achieves 90% faster provisioning and 50% lower TCO than traditional appliance-based
approaches.
NSX Advanced Load Balancer is built on software-defined principles, enabling a next-generation
architecture that delivers the flexibility and simplicity expected by IT and lines of business. The
NSX Advanced Load Balancer has three components. To know more about the architecture of NSX
Advanced Load Balancer, see NSX Advanced Load Balancer Overview in the NSX Advanced Load
Balancer Installation Guide.
This chapter includes the following topics:
n Cloud Connectors
n Virtual Services
n Application Profile
n Server Pools
n Pool Groups
n Persistence
n Compression
n Caching
n Use Cases
n Health Monitoring
Cloud Connectors
Clouds are containers for the environment that NSX Advanced Load Balancer is installed or
operating within. During initial setup of NSX Advanced Load Balancer, a default cloud, named
Default-Cloud, is pre-configured. This is where the first Controller is deployed. Additional clouds
may be added, containing SEs and virtual services.
To view the clouds available, from the NSX Advanced Load Balancer UI, navigate to Infrastructure
> Clouds.
In this screen, you can find all the clouds that are created, with the Type of environment, such as
vCenter, OpenStack, or bare metal servers (no orchestrator), and the Status of the cloud, indicating
its readiness. Hovering the mouse over the status icon provides more information, such as ready
for use or incomplete configuration.
Additionally, from this screen, you can perform the following functions.
n Convert the cloud from read access mode or write access mode to no access mode.
When in no access mode, Avi Controllers do not have access to the cloud’s orchestrator, such
as vCenter. See the installation documentation for the orchestrator to see the full implications
of no access mode.
n Download the SE Image. When Avi Vantage is deployed in read access mode or no
access mode, SEs must be installed manually. Use this button to pull the SE image for the
appropriate image type (ova or qcow2). The SE image will have the Controller’s IP or cluster
IP address embedded within it, so an SE image may only be used for the Avi Vantage
deployment that created it.
n Generate Token. Authentication tokens are used for securing communication between
Controllers and SEs. If Avi Vantage is deployed in read access mode or no access mode, the
SE authentication tokens must be copied manually by the Avi Vantage user from the Controller
web interface to the cloud orchestrator.
n Click the plus icon or anywhere within the table row to expand the row and show more
information about the cloud. For instance, in AWS the Region, Availability Zone, and Networks
are shown.
n Select a cloud and click Delete to remove the cloud. However, a cloud cannot be deleted if it is
associated with a virtual service, or any other object such as a pool or Service Engine group.
Creating a Cloud
1 From the NSX Advanced Load Balancer UI, navigate to Infrastructure > Clouds.
2 Click Create and select the environment in which NSX Advanced Load Balancer has to be
installed.
3 Configure the settings based on the cloud selected. Click on an installation reference to view
the configuration options for each cloud/environment.
n OpenStack
n VMware NSX-T
n Microsoft Azure
n Cisco CSP
n Oracle Cloud
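The same cloud object can also be created programmatically against the Controller. The following Python sketch only builds the JSON body a client might POST; the endpoint (`/api/cloud`) and the `vtype` enumeration values are assumptions based on the NSX Advanced Load Balancer object model, not confirmed by this guide.

```python
# Hedged sketch: field names and vtype values are assumptions, not
# taken from this guide.

def make_cloud_payload(name: str, vtype: str) -> dict:
    """Build the JSON body for creating a cloud object on the Controller."""
    allowed = {
        "CLOUD_NONE",       # no orchestrator (bare metal)
        "CLOUD_VCENTER",    # VMware vCenter
        "CLOUD_NSXT",       # VMware NSX-T
        "CLOUD_OPENSTACK",  # OpenStack
        "CLOUD_AZURE",      # Microsoft Azure
    }
    if vtype not in allowed:
        raise ValueError(f"unsupported cloud type: {vtype}")
    return {"name": name, "vtype": vtype}

# A client would POST this body to https://<controller>/api/cloud (assumed path).
payload = make_cloud_payload("vcenter-cloud-01", "CLOUD_VCENTER")
```

The cloud name and type here are hypothetical; in practice the body also carries orchestrator credentials and settings specific to the selected environment.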
Virtual Services
Virtual services are the core of the load balancing and proxy functionality. A virtual service
advertises an IP address and ports to the external world and listens for client traffic. When a
virtual service receives traffic, it can be configured to:
n Perform security, acceleration, load balancing, traffic statistics gathering, and other tasks.
n Forward the client’s request data to the destination pool for load balancing.
A virtual service can be thought of as an IP address that NSX Advanced Load Balancer is listening
to, ready to receive requests. In a normal TCP/HTTP configuration, when a client connects to the
virtual service address, NSX Advanced Load Balancer will process the client connection or request
against a list of settings, policies and profiles, then send valid client traffic to a back-end server
that is listed as a member of the virtual service’s pool.
Typically, the connection between the client and NSX Advanced Load Balancer is terminated or
proxied at the SE, which opens a new TCP connection between itself and the server. The server
will respond directly to the NSX Advanced Load Balancer IP address, not to the original client
address. NSX Advanced Load Balancer forwards the response to the client via the TCP connection
between itself and the client.
Service Engine
A typical virtual service consists of a single IP address and service port that uses a single network
protocol. NSX Advanced Load Balancer allows a virtual service to listen to multiple service ports or
network protocols.
For instance, a virtual service could be created for both service port 80 (HTTP) and 443 SSL
(HTTPS). In this example, clients can connect to the site with a non-secure connection and
later be redirected to the encrypted version of the site. This allows administrators to manage a
single virtual service instead of two. Similarly, protocols such as DNS, RADIUS and Syslog can be
accessed via both UDP and TCP protocols.
It is possible to create two unique virtual services, where one is listening on port 80 and the other
is on port 443; however, they will have separate statistics, logs, and reporting. They will still be
owned by the same Service Engines (SEs) because they share the same underlying virtual service
IP address.
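The single multi-port virtual service described above can be sketched as one `services` list. The field names (`port`, `enable_ssl`) follow the NSX Advanced Load Balancer REST object model as an assumption; this guide configures the same thing through the UI.

```python
def make_vs_services(ports):
    """Build the 'services' list for a virtual service.

    ports: iterable of (port, enable_ssl) tuples, for example
    [(80, False), (443, True)] for a combined HTTP/HTTPS service.
    """
    return [{"port": port, "enable_ssl": ssl} for port, ssl in ports]

# One virtual service handling both HTTP (80) and HTTPS (443), so
# administrators manage a single object instead of two.
vs_payload = {
    "name": "vs-web",  # hypothetical name
    "services": make_vs_services([(80, False), (443, True)]),
}
```

A separate pair of virtual services on ports 80 and 443 would instead mean two `services` lists, two sets of statistics, and two sets of logs, as the text notes.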
To send traffic to destination servers, the virtual service internally passes the traffic to the
pool corresponding to that virtual service. A virtual service normally uses a single pool, though
an advanced configuration using policies or DataScripts can perform content switching across
multiple pools. A script also can be used instead of a pool, such as a virtual service that only
performs an HTTP redirect.
A pool can be associated with multiple virtual services if they have the same Layer 4 or 7
application profile.
When creating a virtual service, that virtual service listens to the client-facing network, which is
most likely the upstream network where the default gateway exists. The pool connects to the
server network.
Normally, the combined virtual service and pool are required before NSX Advanced Load
Balancer can place either object on an SE. When making an SE placement decision, NSX
Advanced Load Balancer must choose the SE that has the best reachability or network access
to both client and server networks. Alternatively, both the clients and servers may be on the same
IP network.
Field Description
Name Lists the name of each virtual service. Clicking the name of
a virtual service opens the Analytics tab of the respective
virtual service.
Services Lists the service ports configured for the virtual service.
Ports that are configured for terminating SSL/TLS
connections are denoted in parentheses.
A virtual service may have multiple ports configured. For
example:
n 80 (HTTP)
n 443 (SSL)
Service Engine Group Displays the group from which Service Engines may be
assigned to the virtual service.
Service Engines Lists the Service Engines to which the virtual service
is assigned. Clicking a Service Engine name opens the
Analytics tab of the respective Service Engine.
Total Service Engines Shows the number of SEs assigned to the virtual service as
a time series. This is useful to see if a virtual service scales
up or down the number of SEs.
Client RTT Displays the average TCP latency between clients of the
virtual service and the respective SEs.
DoS Attacks Displays the number of DoS attacks occurring per second.
To customize the columns in the table, click the settings icon. Add or remove columns by using the
arrows in the screen.
Virtual Services
The Virtual Services screen shows extensive information about the virtual service selected.
To view the details of a specific virtual service, navigate to Applications > Virtual Services. Click
the required virtual service.
Alternatively, you can also navigate to Applications > Dashboard. Click the required virtual service.
The Virtual Service screen has the following tabs for the virtual service selected.
The Virtual Service quick info popup has the following buttons:
n Scale-Out distributes connections for the virtual service to one additional SE per click, up to
the maximum number of SEs defined in the SE group properties.
n Scale In removes the VIP address from the selected Service Engine. If Primary is selected, one
of the existing Secondaries will become the new Primary.
n Migrate moves the virtual service from the SE it is currently on to a different SE within the
same SE group.
This popup also displays the following information (if applicable) for the virtual service:
Field Description
Uptime / Downtime The amount of time the virtual service has been in the
current up or down state.
Service Port Service port(s) on which the virtual service is listening for
client traffic.
Real-Time Metrics When this option is disabled, metrics are collected every
five minutes, regardless of whether the Display Time is set
to Real-Time. When the option is enabled, metrics are
collected every 15 seconds.
Client Log Filters Number of custom log filters applied to the virtual service.
Log filters can selectively generate non-significant logs.
Wildcard VIP
This section explains the configuration, common deployment, and use case scenarios of the
wildcard VIP.
In NSX Advanced Load Balancer, a virtual service is configured with an IP address as VIP and
ports as services to load balance the client traffic from the external world. NSX Advanced Load
Balancer processes the client connection or request against a list of settings, policies, and profiles,
then load balances valid client traffic to a back-end application server listed as a pool member of
the virtual service.
In addition to load balancing the client traffic to the application servers, NSX Advanced Load
Balancer also provides supportability, manageability, and scalability to the application servers.
You can upgrade application servers with zero downtime when deployed with the NSX Advanced
Load Balancer.
For more information, see NSX Advanced Load Balancer Platform Overview.
Wildcard VIP extends the capability of a virtual service to provide advanced load balancing
services to network elements such as firewalls.
Wildcard VIP allows the network match configuration in a virtual service. The application virtual
service accepts connections destined to a VIP, whereas the wildcard VIP accepts connections
destined to a subnet, configurable in CIDR notation. For example, the match could be a prefix of
10.0.0.0/8 (to accept anything between 10.0.0.0 - 10.255.255.255) or 0.0.0.0/0 (to accept every
incoming packet).
Features Supported
The following profiles support wildcard virtual services:
Supported Environments
The following environments support wildcard VIP:
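The CIDR match semantics can be illustrated with Python's standard `ipaddress` module. This is a behavioral sketch of the matching described above, not NSX Advanced Load Balancer code.

```python
import ipaddress

def matches_wildcard_vip(dest_ip: str, vip_prefix: str) -> bool:
    """Return True when a packet's destination IP falls inside the
    wildcard VIP's configured CIDR prefix."""
    return ipaddress.ip_address(dest_ip) in ipaddress.ip_network(vip_prefix)
```

For example, `matches_wildcard_vip("10.200.1.5", "10.0.0.0/8")` is True, while a destination of 11.0.0.1 falls outside that prefix; a 0.0.0.0/0 prefix matches any destination.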
[Figure: Internet clients (www) reach Avi SE 1, which load balances across firewalls FW 1, FW 2, and FW 3 toward the internal network.]
In this deployment mode, the wildcard virtual service is in the frontend, facing the client traffic.
Three firewalls: FW1, FW2, and FW3 are configured as pool members. The wildcard virtual service
load balances the traffic across the firewalls FW1, FW2, and FW3.
Firewalls are rarely the destination for the client traffic, and the traffic is expected to be
transparently forwarded to the firewall. Hence, the traffic from the client must be sent as is to
the pool member without any source address or destination address translation (SNAT or DNAT).
In such a deployment, the network address match is configured in the traffic selection criteria of
the VIP. The wildcard VIP of the virtual service will only load balance the traffic to these firewalls
without changing the client traffic.
IP Address Port
Note Currently, only TCP/UDP Fast Path network profiles support wildcard VIP.
In addition to the network port match, the network address match feature lets you configure all
the aforesaid variants of the wildcard virtual service. If you configure multiple virtual
services with varying combinations as specified in the table, the virtual service with the most
specific match is selected, in the order of preference listed in the table.
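Most-specific match selection behaves like a longest-prefix match. The following Python sketch models that order of preference; it illustrates the selection logic only and is not the Controller's implementation.

```python
import ipaddress

def most_specific_vs(dest_ip, wildcard_vips):
    """Pick the wildcard virtual service whose CIDR prefix most
    specifically matches dest_ip (longest prefix wins).

    wildcard_vips: mapping of virtual service name -> CIDR string.
    Returns the winning name, or None when nothing matches.
    """
    ip = ipaddress.ip_address(dest_ip)
    best_name, best_len = None, -1
    for name, prefix in wildcard_vips.items():
        net = ipaddress.ip_network(prefix)
        if ip in net and net.prefixlen > best_len:
            best_name, best_len = name, net.prefixlen
    return best_name
```

With virtual services on 0.0.0.0/0, 10.0.0.0/8, and 10.1.0.0/16, a packet to 10.1.2.3 would be handled by the /16 service, the most specific of the three.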
[Figure: client (www) traffic reaching the wildcard virtual service on NSX-ALB.]
The wildcard VIP is configured through the NSX Advanced Load Balancer Controller CLI as
follows:
2 Configure the placement VIP. To enable wildcard VIP, the placement subnet is mandatory for
the virtual service that refers to the inline virtual service VIP.
As firewalls expect the client traffic unchanged for validation, configure the application
profile of the wildcard virtual service with preserve_client_ip, preserve_client_port, and
preserve_dest_ip_port.
[admin:abc-ctrl-wildcard]: > show applicationprofile test1 | grep preserve
| preserve_client_ip    | True |
| preserve_client_port  | True |
| preserve_dest_ip_port | True |
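The same preserve settings can be expressed as an application profile body. The JSON field names mirror the CLI output above, while the profile type enumeration and REST endpoint (`/api/applicationprofile`) are assumptions.

```python
def make_l4_preserve_profile(name: str) -> dict:
    """Build an L4 application profile body that keeps client traffic
    unchanged (no SNAT/DNAT), as the firewall pool members require."""
    return {
        "name": name,
        "type": "APPLICATION_PROFILE_TYPE_L4",  # assumed enum value
        "preserve_client_ip": True,    # keep the client source IP
        "preserve_client_port": True,  # keep the client source port
        "preserve_dest_ip_port": True, # keep the original destination
    }
```

Attaching such a profile to the wildcard virtual service forwards flows to the firewalls exactly as the clients sent them.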
In No-Access and Linux Server Cloud scenarios, the Controller cannot configure the vNICs on
demand on the SE. In this case, the SE is configured with a specific number of vNICs which have
access to specific subnets. If a VIP is not present on any of the subnets accessible to the SE,
the Controller cannot place the virtual service on that SE.
A placement network can be configured from the subnets accessible to the SE. Once that is
configured, the Controller will forcefully place the VIP on the VNICs which have access to the
placement networks. The user can then configure static routes on the SE, or on the previous hop
router to ensure the traffic for the VIP is forwarded to the placement vNIC.
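A static route like the one described can be modeled as an entry in the SE's routing configuration. In the NSX Advanced Load Balancer object model, static routes typically live on a VRF context; the field layout below is an assumption for illustration, and the addresses are hypothetical.

```python
def make_static_route(route_id: str, subnet: str, mask: int, next_hop: str) -> dict:
    """One static_routes entry steering traffic for a server subnet
    toward the next-hop router reachable from the placement vNIC."""
    return {
        "route_id": route_id,
        "prefix": {
            "ip_addr": {"addr": subnet, "type": "V4"},
            "mask": mask,
        },
        "next_hop": {"addr": next_hop, "type": "V4"},
    }

# Hypothetical example: reach servers in 4.4.4.0/24 via router 2.2.2.1.
route = make_static_route("1", "4.4.4.0", 24, "2.2.2.1")
```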
n Router 2 provides connectivity between 4.4.4.0/24 and 5.5.5.0/24. In this case, all traffic
intended for 1.1.10.10 and 1.1.20.10 is matched by the VIP 0.0.0.0/0 and is routed to the SE
via the vNIC in the 2.2.2.0/24 network subnet, and then load balanced across the servers via
the vNIC in the 4.4.4.0/24 network subnet.
NSX Advanced Load Balancer checks all the matching networks on the SE and places the virtual
service on all the vNICs of the matching SEs.
NSX Advanced Load Balancer supports multiple placement networks and enables placement of a
virtual service on multiple vNICs.
The following scenarios describe how the virtual service is placed for different combinations of placement networks and SE vNICs:

Scenario: SE has access to the exact subnet of the VIP placement network
Placement networks: 2.2.2.0/24
SE vNICs: SE 1 eth1: 2.2.2.0/24; SE 2 eth1: 2.2.2.0/24
Result: The virtual service is placed on the matching vNICs of both SEs - eth1 on SE 1 and eth1 on SE 2.

Scenario: Both the SEs have access to the same network, which is one of the two placement networks
Placement networks: 2.2.2.0/24, 3.3.3.0/24
SE vNICs: SE 1 eth1: 2.2.2.0/24; SE 2 eth1: 2.2.2.0/24
Result: The virtual service is placed on the matching vNICs of both SEs - eth1 on SE 1 and eth1 on SE 2.

Scenario: There are two placement networks, and both SEs have access to separate placement networks
Placement networks: 2.2.2.0/24, 3.3.3.0/24
SE vNICs: SE 1 eth1: 2.2.2.0/24; SE 2 eth1: 3.3.3.0/24
Result: The virtual service is placed on the matching vNICs of both SEs - eth1 on SE 1 and eth1 on SE 2.

Scenario: SE has access to all the subnets of the placement networks
Placement networks: 2.2.2.0/24, 3.3.3.0/24
SE vNICs: SE 1 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24; SE 2 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24
Result: The virtual service is placed on all the matching vNICs of both SEs - eth1, eth2 on SE 1 and eth1, eth2 on SE 2.

Scenario: SE gets access to a new network after the virtual service is placed
Placement networks: 2.2.2.0/24, 3.3.3.0/24
SE vNICs before: SE 1 eth1: 2.2.2.0/24; SE 2 eth1: 2.2.2.0/24
SE vNICs after: SE 1 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24; SE 2 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24
Result: The virtual service is placed on the matching vNICs of both SEs - eth1, eth2 on SE 1 and eth1, eth2 on SE 2.
Caveats
Wildcard virtual service does not support the following:
n Flow monitoring
n Shared VIP
n Traffic cloning
n Virtual Service: A VIP plus one or more specific Layer 4 protocol ports used to proxy an
application. A single VIP can have multiple virtual services. As an example, all the following
virtual services can exist on a single VIP:
n 192.168.1.1:80,443 (HTTP/S)
n 192.168.1.1:20,21 (FTP)
n 192.168.1.1:53 (DNS)
The VIP in this example is 192.168.1.1. The services are HTTP/S, FTP, and DNS. Thus, the HTTP/S
virtual service is advertised with address 192.168.1.1:80,443, which is the VIP plus protocol
ports 80 and 443.
The VIP concept is essential in NSX Advanced Load Balancer because a given IP address can be
advertised (ARPed) from only a single SE. If the SE that owns a VIP is busy and needs to migrate a
virtual service’s traffic to a less active SE, then all the VSs are moved from the busy SE to the same
new (less busy) SE. If an SE fails, all of its virtual services would be moved to a single SE. This is
true even if multiple idle SEs are available in the SE group.
Procedure
3 If NSX Advanced Load Balancer is configured for multiple cloud environments, such as
VMware and Amazon Web Services (AWS), select the required cloud for the virtual service
deployment. If NSX Advanced Load Balancer exists in a single environment, skip this step.
6 Enter the VS VIP address. This is used during the creation of Shared virtual service.
Option Description
HTTP The virtual service will listen for non-secure Layer 7 HTTP. Selecting this
option auto-populates the Service port field to 80. Override the default with
any valid port number; however, clients will need to include the port number
when accessing this virtual service. Browsers automatically append the
standard port 80 to HTTP requests. Selecting HTTP enables an HTTP
application profile for the virtual service. This allows NSX Advanced Load
Balancer to proxy HTTP requests and responses for better visibility, security,
acceleration, and availability.
HTTPS The virtual service will listen for secure HTTPS. Selecting this option auto-
populates port 443 as the service port. Override this default with any valid
service port number. However, clients will need to include the port number
when accessing this virtual service as browsers automatically append the
standard port 443 to HTTPS requests. When selecting HTTPS, use the
Certificate pull-down menu to reference an existing certificate or create a new
self-signed certificate. A self-signed certificate will be created with the same
name as the virtual service and will be an RSA 2048 bit cert and key. The
certificate can be swapped out later if a valid certificate is not yet available at
time of virtual service creation.
L4 The virtual service will listen for layer 4 requests on the port you specify in the
Service port field. Select this option to use the virtual service for non-HTTP
applications, such as DNS, mail, or a database.
L4 SSL/TLS The virtual service will listen for secure layer 4 requests. Selecting this option
auto-populates port 443 in the Service port field. Override this default with
any valid service port number.
8 In the Service field, accept the default port displayed for the selected Application Type.
Alternatively, you can enter the service port manually, as required. To add multiple service
ports or ranges, edit the virtual service after creation.
9 The pool directs load balanced traffic to this list of destination servers. The servers can be
configured by IP address, name, network or via IP Address Group. Add one or more servers to
the new virtual service by using one of the options:
n Select IP Address, Range, or DNS Name and enter the Server IP Address required. Click
Add Server.
n Select IP Address, Range, or DNS Name and click Select Servers by Network to open a
list of reachable networks to add the server from. See Select Servers by Network for more
information.
n Click the option IP Group to select an IP Group from a list of servers from the IP Address
Group available.
10 Click Save.
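Steps 9 and 10 above can be sketched as two REST bodies: a pool holding the destination servers, and a virtual service referencing it. Endpoint paths and field names (`servers`, `pool_ref`, `vip`) are assumptions based on the NSX Advanced Load Balancer API object model, and all names and addresses are hypothetical.

```python
def make_pool_payload(name, server_ips, port=80):
    """Pool body: the list of destination servers traffic is balanced to."""
    return {
        "name": name,
        "default_server_port": port,
        "servers": [{"ip": {"addr": ip, "type": "V4"}} for ip in server_ips],
    }

def make_vs_payload(name, vip_addr, pool_ref):
    """Virtual service body pointing at the pool created above."""
    return {
        "name": name,
        "vip": [{"ip_address": {"addr": vip_addr, "type": "V4"}}],
        "services": [{"port": 80}],
        "pool_ref": pool_ref,  # e.g. "/api/pool?name=web-pool" (assumed form)
    }

pool = make_pool_payload("web-pool", ["10.0.0.10", "10.0.0.11"])
vs = make_vs_payload("vs-web", "10.0.0.100", "/api/pool?name=web-pool")
```

A client would create the pool first, then the virtual service, mirroring the order the UI wizard enforces.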
Results
The virtual service is assigned automatically to a Service Engine. If an available SE already exists,
the virtual service will be deployed and be ready to accept traffic. If a new SE must be created, it
may take a few minutes before it is ready.
In some environments, NSX Advanced Load Balancer may require additional networking
information, such as IP addresses or clarification of desired networks, subnets, or port groups
to use prior to a new Service Engine creation. The UI will prompt for additional information if this is
required.
Procedure
3 If NSX Advanced Load Balancer is configured for multiple cloud environments, such as
VMware and Amazon Web Services (AWS), select the required cloud for the virtual service
deployment. If NSX Advanced Load Balancer exists in a single environment, skip this step.
4 Step 1: Settings.
5 Step 2: Policies.
6 Step 3: Analytics.
7 Step 4: Advanced.
8 Click Save.
Step 1: Settings
Configure the basic settings for a virtual service, such as the VIP address, pool, profiles,
policies, and more.
Procedure
2 The Enabled? toggle icon is green by default. This implies that the virtual service will accept
and process traffic normally. To deactivate the virtual service, click the toggle button. The
existing concurrent connections will be terminated, and the virtual service will be unassociated
from all Service Engines. No health monitoring is performed for deactivated virtual services.
3 The Traffic Enabled? option is selected by default. Click the option to stop virtual
service traffic on its assigned Service Engines. This option is effective only when the virtual
service is enabled.
4 Select Virtual Hosting VS if this virtual service participates in virtual hosting via SSL’s Server
Name Indication (SNI). This allows a single SSL decrypting virtual service IP:port to forward
traffic to different internal virtual services based on the name of the site requested by the
client. The virtual hosting VS must be either a parent or a child.
Option Description
Parent The parent virtual service is external facing, and owns the listener IP address,
service port, network profile, and SSL profile. Specifying a pool for the parent
is optional, and will only be used if no child virtual service matches a client
request. The SSL certificate may be a wildcard certificate or a specific domain
name. The parent’s SSL certificate will only be used if the client’s request
does not match a child virtual service domain. The parent virtual service
will receive all new client TCP connections, which will be reflected in the
statistics. The connection is internally handed off to a child virtual service, so
subsequent metrics such as concurrent connections, throughput, requests,
logs and other stats will only be shown on the child virtual service.
Child The child virtual service does not have an IP address or service port. Instead,
it points to a parent virtual service, which must be created first. The domain
name is a fully qualified name requested by the SNI-enabled client within
the SSL handshake. The parent matches the client request with the child’s
domain name. It does not match against the configured SSL certificate. If no
child matches the client request, the parent’s SSL certificate and pool are
used.
6 Enter the VS VIP address. This is used during the creation of Shared virtual service.
a The TCP/UDP Profile to determine the network settings, such as the protocol (TCP or UDP)
and related options for the protocol.
b The Application Profile to enable application layer specific features for the virtual service.
d ICAP Profile to configure the ICAP server when checking the HTTP request.
e The Error Page Profile to be used for this virtual service. This profile is used to send the
custom error page to the client generated by the proxy.
8 Under the Service Port section, enter the Services, which are the service ports that the virtual
service will listen for incoming traffic. Click Add Port to add multiple ports.
d Enable Override TCP/UDP and select the profile required to override the virtual service's
default TCP/UDP profile on a per-service port basis.
e Click Add Port to add another range of service ports and configure the same.
9 Under the Pool section, either select a Pool or a Pool Group. Using the Pool drop-down
list, select the required pool that contains destination servers and related attributes such as
load-balancing and persistence.
10 Select Ignore network reachability constraints for the server pool, if required. If the pool
contains servers in networks unknown or inaccessible to NSX Advanced Load Balancer, the
Controller is unable to place the new virtual service on a SE, as it does not know which SE
has the best reachability. This requires you to manually choose the virtual service placement.
Selecting this option will allow the Controller to place the virtual service, even though some
or all servers in the pool may be inaccessible. For instance, you can select this option while
creating the virtual service, and later configure a static route to access the servers.
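The parent/child virtual hosting relationship from step 4 can be sketched as a REST body. The type enumeration (`VS_TYPE_VH_CHILD`) and field names (`vh_parent_vs_ref`, `vh_domain_name`) are assumptions based on the NSX Advanced Load Balancer object model, and the names are hypothetical.

```python
def make_child_vs(name: str, parent_ref: str, domain: str) -> dict:
    """Child virtual service body: no IP address or service port of its
    own; selected when the SNI domain in the client's TLS handshake
    matches an entry in vh_domain_name."""
    return {
        "name": name,
        "type": "VS_TYPE_VH_CHILD",
        "vh_parent_vs_ref": parent_ref,  # the parent VS must exist first
        "vh_domain_name": [domain],      # fully qualified SNI name
    }

child = make_child_vs(
    "vs-app1", "/api/virtualservice?name=vs-parent", "app1.example.com"
)
```

The parent owns the listener IP, port, and SSL profile; clients whose SNI matches no child fall back to the parent's certificate and pool, as the table above describes.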
Step 2: Policies
Use the Policies tab to define policies or DataScripts for the virtual service. DataScripts and
policies consist of one or more rules that control the flow of connections or requests through
the virtual service to control security, client request attributes, or server response attributes. Each
rule is a match/action pair that uses if/then logic: if something is true, then it matches the rule
and the corresponding actions are performed. Policies are simple GUI-based, wizard-driven logic,
whereas DataScripts allow more powerful manipulation using Avi Vantage's Lua-based scripting
language.
VMware, Inc. 30
VMware NSX Advanced Load Balancer Configuration Guide
Procedure
1 Configure Network Security to explicitly allow or block traffic based on network (TCP/UDP)
information.
d Select the Logging checkbox for NSX Advanced Load Balancer to log when an action has
been invoked.
e Under Matching Rules, select the network security match criteria from the Add New
Match drop-down list. For example, Service Port is 80.
f Under Actions, select a configurable action to be implemented when the match criteria are
met. For more information, see Network Security.
g In the Role-Based Access Control (RBAC) section, click Add and configure the Key and
the corresponding Value to provide granular access to control, manage, and monitor
applications. For more information, see Granular Role Based Access Controls per App.
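Each such rule is a match/action pair. As a hedged sketch, the example above (Service Port is 80, with logging and a deny action) might look like the following object fragment; the field names and the action enum string assume the Avi network security policy model and should be verified against your API version:

```python
def network_security_rule() -> dict:
    """One match/action pair in if/then form: if the service port is
    80, deny the connection and log the action."""
    return {
        "name": "block-port-80",
        "index": 1,
        "enable": True,
        "log": True,    # the Logging checkbox
        "match": {"vs_port": {"match_criteria": "IS_IN", "ports": [80]}},
        "action": "NETWORK_SECURITY_POLICY_ACTION_TYPE_DENY",
    }
```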
2 Similarly, configure HTTP Security, HTTP Request, and HTTP Response rules, as required.
3 Configure DataScripts, as required.
a Select the Script to Execute from the drop-down list or Create DataScript.
4 Author custom authentication policies and attach the policies to identity providers (IdP). Under
Access, select and configure one of the following:
Option Description
PingAccess Ping Identity’s PingAccess Agent can be used to control client access to a
virtual service. Create a PingAccess Agent profile, create an SSO Policy of
type PingAccess, and associate it with the virtual service.
JWT JWT validation is supported as one of the access policies for secure
communication through NSX Advanced Load Balancer and it is based on a
JWT issued by an authorization server. To know more, see Configuring NSX
Advanced Load Balancer for JSON Web Tokens (JWT) Validation.
LDAP LDAP is an extension of the basic authentication policy where the provided
username and password will be authenticated against the target LDAP
server. LDAP is a commonly used protocol for accessing a directory service.
A directory service is a hierarchical, object-oriented database view of
an authentication system. NSX Advanced Load Balancer supports LDAP
authentication for virtual services. To know more, see Configuring LDAP
Authentication.
5 Click Next.
Step 3: Analytics
The Analytics tab of the New Virtual Service wizard defines how the NSX Advanced Load
Balancer captures analytics for the virtual service. These settings control the thresholds for
defining client experience and the resulting impact on end-to-end timing and the health score,
the level of metrics collection, and the logging behavior.
Procedure
1 Select an Analytics Profile from the drop-down menu. This profile sets the thresholds
that define client experience. It also defines errors that can be tailored to ignore
certain behavior that might not be an error for a site, such as an HTTP 401 (authentication
required) response code. The NSX Advanced Load Balancer uses errors and client experience
thresholds to determine the health score of the virtual service and might generate significant
log entries for any issues that arise.
2 There are several metrics, such as End to End Timing, Throughput, Requests, and
more. The NSX Advanced Load Balancer updates these metrics periodically, either at the default
interval of five minutes or as defined in the Metric Update Frequency. Enable Real Time
Metrics to gather detailed metrics aggressively for a limited period, as required.
n Enter a value, for example, 30 min, to collect real-time metrics for the defined 30 minutes.
After this period elapses, metrics collection reverts to slower polling. Real-time
metrics are helpful when troubleshooting.
n Note Capturing real-time metrics can negatively impact system performance for busy
Controllers with large numbers of virtual services or minimal hardware
resources.
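The same real-time metrics window can be expressed as a virtual service API fragment. This is a hedged sketch assuming the analytics_policy.metrics_realtime_update fields of the Avi/NSX ALB object model; confirm the names against your API version:

```python
def realtime_metrics_fragment(minutes: int = 30) -> dict:
    """Enable real-time metrics for a limited window; after 'minutes'
    elapses, the collection reverts to the slower default polling.
    0 would keep real-time collection on indefinitely, which is not
    recommended on busy Controllers."""
    return {"analytics_policy": {
        "metrics_realtime_update": {"enabled": True, "duration": minutes}}}
```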
3 Data about connecting clients can be captured using Client Insights. Specific clients can be
included or excluded using the Include URL, Client IP address, and Exclude URL options. By
default, No Insights is selected.
Option Description
Active For HTTP virtual services, the active mode goes further by inserting
an industry-standard JavaScript query into a small number of server
responses to provide HTTP navigation and resource timing. Client browsers
transparently return additional information about their experience loading the
web page. NSX Advanced Load Balancer uses this information to populate
the Navigation Timing and Top URL Timing metrics. A maximum of one
HTTP web page per second will be selected for generating the sampled data.
Passive Record data passively flowing through the virtual service. This option enables
recording of the End-to-End Timing and the client’s location. For HTTP virtual
services, device, browser, operating system, and top URLs metrics are also
included. No agents or changes are made to client or server traffic.
4 Configure user-defined logging under Client Log Settings. Click Log all headers to include all
the headers.
5 Enter the number of significant logs to be generated per second for this virtual service on
each SE as Significant log throttle. The default value is 10 logs per second. Setting this value to 0
deactivates throttling for significant logs.
6 Enter User defined filters log throttle to limit the total number of UDF logs generated per
second for this virtual service on each SE.
7 Enable Non-significant logs to capture all client logs including connections and requests.
a Enter the number of non-significant logs to be generated per second for this virtual service
on each SE as Non-significant log throttle. The default value is 10 logs per second. Setting this
value to 0 deactivates throttling for non-significant logs.
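Steps 5 through 7 map to the virtual service's analytics policy. The following is a hedged sketch assuming the Avi/NSX ALB analytics_policy field names; verify them against your API version:

```python
def log_throttle_fragment() -> dict:
    """analytics_policy fragment mirroring steps 5-7: throttles are in
    logs per second per SE, and 0 deactivates throttling."""
    return {"analytics_policy": {
        "significant_log_throttle": 10,    # step 5 (default value)
        "udf_log_throttle": 10,            # step 6: user-defined filters
        "full_client_logs": {              # step 7: non-significant logs
            "enabled": True,
            "throttle": 10,
        }}}
```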
8 Click Add Client Log Filter. In the Add Client Log Filter section configure the following.
c Select the condition for the match under Matching Filter. For example, Client IP.
d Select the criteria to match the filter. For example, Is 1.1.1.1. The filter will take effect if the
client IP address is 1.1.1.1.
e Click Add Item to add another criterion for the same filter.
f Filter based on the request’s Path and select the required Match criteria and add a string
group or enter a custom string, as required.
Step 4: Advanced
When creating the virtual service, Step 4: Advanced provides advanced and optional
configuration for the virtual service.
Procedure
1 Under Performance Limit Settings, click Performance Limits and define the performance
limits for a virtual service. The limits applied are for this virtual service only, and are based on
an aggregate of all clients. See Rate Shaping and Throttling Options.
Configure limits per client using the application profile’s DDoS tab. Use policies or DataScripts
for more per-client limits.
2 To limit the incoming connections to this virtual service, use Rate Limit Number of New TCP
Connections and Rate Limit Number of New HTTP Requests. Configure the following fields.
Option Description
Time Period Enter the time (within the range 1-1000000000), in seconds, for which
the threshold is valid. Enter 0 to keep the threshold perpetually valid.
Action Select the Action. NSX Advanced Load Balancer performs this action upon
rate limiting.
3 Enter the maximum amount of bandwidth for the virtual service in Mbps for each SE using Max
Throughput.
4 Specify the maximum number of concurrent open connections using Max Concurrent
Connections. Connection attempts that exceed this number will be reset (TCP) or dropped
(UDP) until the total number of concurrent connections falls below the threshold.
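As an API fragment, these aggregate limits might look like the following. This is a hedged sketch assuming the virtual service's performance_limits object in the Avi/NSX ALB model, with illustrative values:

```python
def performance_limits_fragment() -> dict:
    """Aggregate per-virtual-service limits: maximum throughput in
    Mbps per SE (step 3) and maximum concurrent open connections
    (step 4); excess attempts are reset (TCP) or dropped (UDP)."""
    return {"performance_limits": {
        "max_throughput": 1000,                # Mbps, illustrative value
        "max_concurrent_connections": 100000,  # illustrative value
    }}
```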
Option Description
Weight Bandwidth congestion can occur in the packets per second through the SE’s
hypervisor, saturation of the physical interface of the host server, or similar
network constrictions. NSX Advanced Load Balancer allocates bandwidth to the
traffic that this virtual service transmits, depending on the weight you assign.
A higher weight prioritizes this virtual service's traffic in comparison to other
virtual services sharing the same Service Engines.
This setting is only applicable if there is network congestion, and only for
packets sent from the Service Engine.
Fairness Fairness determines the algorithm that the NSX Advanced Load Balancer
uses to ensure that each virtual service can send traffic when the Service
Engine experiences network congestion.
The Throughput Fairness algorithm takes the weight defined for the
virtual service into account to achieve this.
Throughput and Delay Fairness is a more thorough algorithm to accomplish
the same task. It consumes greater CPU on the Service Engine when there
are larger numbers of virtual services.
This option is only recommended for latency-sensitive protocols.
a Enable Auto Gateway to send response traffic to clients back to the source MAC
address of the connection, rather than statically to the default gateway of the NSX
Advanced Load Balancer. If the NSX Advanced Load Balancer has the wrong default
gateway, no configured gateway, or multiple gateways, client-initiated return traffic
still flows correctly. The NSX Advanced Load Balancer default gateway is still used for
management and outbound-initiated traffic.
b Enable Use VIP as SNAT to use the VIP, instead of the SE interface IP, for health
monitoring and for sending traffic to the back-end servers. On enabling this option,
the virtual service cannot be configured in an active-active HA mode. In environments
in which firewalls separate clients from services (for example, AWS), this feature
provides a consistent source IP for traffic to the origin server. During packet capture,
you can filter on the VIP and capture traffic on both sides of the NSX Advanced Load
Balancer, thus eliminating extraneous traffic.
c Select Advertise VIP via BGP to enable Route Health Injection using the BGP
configuration in the VRF context.
d Select Advertise SNAT via BGP to enable Route Health Injection for the source network
address translated (SNAT) floating IP address using the BGP configuration in the VRF
context.
e Enter the network address translated (NAT) floating source IP address(es) for upstream
connections to servers as the SNAT IP Address.
f Enter the Server network or list of servers for cloning traffic in the Traffic Clone Profile.
g Enter a Host Name Translation, if required. If the host header name in a client HTTP
request is not the same as this field, or if it is an IP address, NSX Advanced Load
Balancer translates the host header to this name prior to sending the request to a
server. If a server issues a redirect with the translated name, or with its own IP address,
the redirect’s location header is replaced with the client’s originally requested host
name. Host Name Translation does not rewrite cookie domains or absolute links that
might be embedded within the HTML page. This option is applicable to HTTP virtual
services only. This capability can also be created manually using HTTP request and
response policies.
h Select the required Service Engine Group. Placing a virtual service in a specific Service
Engine group is used to guarantee resource reservation and data plane isolation, such
as separating production from test environments. This field may be hidden based on
configured roles or tenant options.
i Enable Remove Listening Port when VS Down for the Service Engine to respond to
requests to the VIP and service port with a RST (TCP) or ICMP port unreachable (UDP),
when the virtual service is down. See Remove Listening Port when VS down.
j Enable Scale out ECMP if the network itself performs flow hashing with ECMP, in
environments such as GCP. This deactivates the redistribution of flows across Service
Engines for a virtual service.
7 Configure Role-Based Access Control (RBAC) for the virtual service using markers. See
Granular Role Based Access Controls.
a Click Add.
Disable a Virtual Service
While disabled, the virtual service is detached from the SEs hosting it. Additionally:
n The pool is placed in a grey (unused) state and is eligible for use by another virtual service.
n Health monitors are not sent to the pool’s servers while the virtual service is disabled.
If a virtual IP needs to be disabled, each virtual service using it must first be disabled. Once all
virtual services using the VIP have been disabled, NSX Advanced Load Balancer SEs no longer
respond to ARPs or network requests for the VIP.
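The same operation can be scripted against the REST API. The following is a hedged sketch; the endpoint and the PATCH convention assume the Avi / NSX ALB API (PATCH /api/virtualservice/&lt;uuid&gt;) and should be verified against your API version:

```python
# Hedged sketch: disabling a virtual service programmatically.

def disable_vs_patch_body() -> dict:
    """PATCH body that turns the virtual service's 'enabled' flag off;
    Avi-style PATCH bodies wrap the change in a 'replace' key."""
    return {"replace": {"enabled": False}}

# e.g. PATCH https://<controller>/api/virtualservice/<uuid>
# with the X-Avi-Tenant and X-Avi-Version headers set appropriately.
```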
Using UI
The following are the steps to disable a virtual service from the Controller’s web interface:
3 The button will change from green to red when the virtual service is disabled.
Using CLI
Execute the following command to disable a virtual service from the CLI:
Find Virtual Service UUID
For automated interaction with NSX Advanced Load Balancer, particularly through the API, it
is useful to know how to obtain the UUID of objects such as a virtual service. For the example
mentioned, it is recommended to set the Tenant Header (X-Avi-Tenant) in the API calls so that the
Controller can resolve the name to the correct tenant. The details for the header insertion are in
the SDK and API guide.
https://10.1.1.1/#/authenticated/applications/virtualservice/virtualservice-0523452d-
c301-4817-a5e0-ee66b95bd287/analytics?timeframe=6h
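Alternatively, the name-to-UUID lookup can be scripted against the API. This is a hedged sketch: the GET /api/virtualservice?name=&lt;name&gt; query (with X-Avi-Tenant set so the Controller resolves the name in the right tenant) and the list-response shape below are assumptions based on the Avi API:

```python
import json

def extract_uuid(api_response: str, name: str):
    """Pull the uuid of the named virtual service out of a list response."""
    for obj in json.loads(api_response).get("results", []):
        if obj.get("name") == name:
            return obj.get("uuid")
    return None

# A response shaped like the Avi list format (uuid taken from the
# URL example above):
sample = json.dumps({"count": 1, "results": [
    {"name": "vs1",
     "uuid": "virtualservice-0523452d-c301-4817-a5e0-ee66b95bd287"}]})
```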
| address | swapnil2 |
| ip_address | 10.130.129.14 |
Virtual Service Placement Settings
Due to the distributed nature of the NSX Advanced Load Balancer Service Engines, the Controller
decides how a Service Engine’s NICs are extended into the virtual service and pool member IP
networks. The NSX Advanced Load Balancer Controller enables the user to make this network
attachment decision manually by providing options on the Virtual Service Placement Settings
menu, in conjunction with static routes. For more details, see Configuration.
Network Scenarios
Consider a use case with the following networks, as discovered by the Controller. The NSX
Advanced Load Balancer Controller treats the discovered networks as directly connected
networks.
n 10.10.10.0/24
n 10.10.20.0/24
n 10.10.30.0/24
Note The first option on Virtual Service Placement Settings does not apply to the virtual IP unless
the second option is selected.
In this case, the Layer 3 switch must have a proper static route entry to reach the virtual service.
Note In this example, choosing the second option without choosing the first has no effect on the
virtual service IP network selection.
Configuration
The Virtual Service Placement Settings menu is displayed during the initial installation of the
Controller and can be changed after installation by navigating to Infrastructure > Cloud
and clicking the edit icon. The Static Routes menu is also available.
Select Prefer Static Routes vs Directly Connected Network and Use Static Routes for Network
Resolution of VIP check boxes to define placement settings.
admin@abc-controller:~$ shell
Login: admin
Password:
HTTP Policy Reuse
n Create a standalone HTTP policy set named httppolicyset_demo. Configure the required rules
under the policy set and save it. See the following output for more details on the configuration.
+------------------------+----------------------------------------------------+
| Field | Value |
+------------------------+----------------------------------------------------+
| uuid | httppolicyset-dd4e996a-15cc-456c-ad56-086bf21b6e75 |
| name | httppolicyset_demo |
| http_request_policy | |
| rules[1] | |
| name | Demo_Rule1 |
| index | 1 |
| enable | True |
| match | |
| path | |
| match_criteria | CONTAINS |
| match_case | INSENSITIVE |
| match_str[1] | index.html |
| switching_action | |
| action | HTTP_SWITCHING_SELECT_LOCAL |
| status_code | HTTP_LOCAL_RESPONSE_STATUS_CODE_429 |
| log | True |
| is_internal_policy | False |
| tenant_ref | admin |
+------------------------+----------------------------------------------------+
n Press the Tab key to display the list of the httppolicyset objects.
VS1-Default-Cloud-HTTP-Policy-Set-0 VS2-Default-Cloud-HTTP-Policy-Set-0.
*httppolicyset_demo*
n To reattach the HTTP policy to other virtual services, repeat the previous two steps for each
virtual service.
Block an IP Address from Access to a Virtual Service
A client’s IP address might need to be prevented from accessing an application for several
reasons. Blocking a client’s access can be accomplished in numerous ways. While this article
focuses on IP addresses, a client could also be identified by other identifiers, such as a
username, session cookie, or SSL client certificate.
Blocking a Client IP
Navigate to virtual service Edit > Rules tab > Network Security tab > New Rule.
A network security policy can be used to deny a single IP address or multiple addresses. For large
IP lists, consider creating a blocklist (Templates > Groups > IP Group). This object can contain
extensive lists of IP addresses or network ranges.
An IP group may also be leveraged across the network security policies of multiple virtual
services. This simplifies adding or removing IP addresses, which can be performed for many
applications by changing a single IP group.
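As a hedged sketch, a deny rule referencing such an IP group might look like the following fragment; the field names and the action enum string assume the Avi network security policy model, and the group reference is illustrative:

```python
def blocklist_rule() -> dict:
    """Deny clients whose source IP is in a reusable IP group; editing
    the group updates every policy that references it."""
    return {
        "name": "deny-blocklist",
        "index": 1,
        "enable": True,
        "log": True,
        "match": {"client_ip": {
            "match_criteria": "IS_IN",
            "group_refs": ["/api/ipaddrgroup?name=blocklist"]}},
        "action": "NETWORK_SECURITY_POLICY_ACTION_TYPE_DENY",
    }
```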
DataScript
For finer control, DataScripts may be used to evaluate additional criteria before discarding a client
connection.
Impact of Changes to Min-Max Scaleout Per Virtual Service
For more information on scale-out settings, see Service Engine Group.
1 The min-max settings of its SE group are dynamically changed. All virtual services placed on
the SE group are affected.
Impact of Changes
Changes to the min_scaleout_per_vs or max_scaleout_per_vs settings of an SE group result in
the same behavior as described above with the following exceptions:
Scenarios
The effect on virtual services when the minimum-maximum scaleout per VS settings of the SE
group are changed is illustrated in the following section.
To understand the examples, note that internally, the number of SEs requested for a VS is the
sum of the following two numbers:
1 The minimum scaleout per VS (min_scaleout_per_vs) setting of the SE group.
2 The user scaleout factor - an internal variable that starts at 0 for all virtual services. This
number increases by 1 when a user scales out and decreases by 1 when the user scales in.
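This arithmetic, together with the cap at the SE group's maximum, can be sketched as a simplified model for illustration; the names mirror the SE group fields:

```python
def ses_requested(min_scaleout_per_vs: int, user_scaleout: int,
                  max_scaleout_per_vs: int) -> int:
    """Number of SEs requested for a virtual service: the SE group's
    minimum scale plus the per-VS user scaleout factor, capped by the
    SE group's maximum scale per VS."""
    return min(min_scaleout_per_vs + user_scaleout, max_scaleout_per_vs)

# Examples mirroring this section: min=1 with user scaleout 2 and
# max=4 requests 3 SEs; reducing max to 1 caps the request at 1 SE.
```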
General Behavior
Following are the rules governing all changes of minimum and maximum scale per VS:
n Decreasing the minimum scale per VS has no effect on the scale of existing virtual services in
the group.
n For this case, the user scaleout is increased by the amount that the minimum scale per VS
is decreased.
n For an existing VS, if the user wishes it to be scaled at the minimum level of the SE group,
the user must explicitly scale in.
n Increasing the minimum scale per VS only increases the scale of existing VSs if the new
minimum is greater than the current scale of the VS.
n Increasing or decreasing the maximum scale per VS of an SE group has no effect on the scale
of existing VSs in the group.
n For VSs with more SEs than the new maximum scale, the user is still able to manually scale
in.
n A VS which is disabled and re-enabled preserves its existing scale, capped by the current
maximum scale per VS of the SE group.
n A VS which is moved to another SE group will be placed on the minimum scale per VS of the
new SE group.
For a VS without any user scaleout, increasing the minimum scale of the SE group increases the
number of SEs of the VS to the new minimum.
Action          Num of SEs Requested   User scaleout   Min scaleout per VS
initial state   1                      0               1
For a VS with user scaleout, increasing the minimum scale increases the number of SEs, only if the
new minimum is greater than the current scale of the VS.
Example 1
Action          Num of SEs Requested   User scaleout   Min scaleout per VS
initial state   1                      0               1
Example 2
Action          Num of SEs Requested   User scaleout   Min scaleout per VS
initial state   2                      1               1
Decreasing the minimum scale per VS of an SE group will have no effect on the scale of the
existing VS. To maintain the same number of SEs, the user scaleout is increased by the amount of
decrease in minimum scale.
Example
Action          Num of SEs Requested   User scaleout   Min scaleout per VS
initial state   2                      0               2
user scale in   1                      0               1
The purpose of this behavior is to preserve the current state of all VSs residing inside an SE group
when min scale per VS is reduced. By increasing the user scaleout by the amount of decrease in
min_scaleout_per_vs, we keep the number of SEs requested the same.
If the desired outcome in the above example is to scale every VS in the SE group down to 1 SE,
there are three options:
1 After changing the SE group settings, manually scale down every VS to reduce the user
scaleout to 0.
2 Set the maximum scale per VS of the SE group to 1. Disable and enable all VSs (maximum
scale can also be reduced after the disable).
3 Move all VSs in the SE group to another SE group where the min_scaleout_per_vs is 1.
Changing the maximum scale per VS has no effect on the other variables.
If the maximum scale per VS of an SE group is reduced, all VSs within the SE group retain the
same number of SEs. So the number of SEs requested for a VS in this situation can be greater
than the new maximum scale per VS. The user has the option of manually scaling in to reduce this
number to the new max.
Example
Action          Num of SEs Requested   User scaleout   Min scaleout per VS   Max scaleout per VS
initial state   3                      2               1                     4
user scale in   2                      1               1                     2
When a VS is disabled and then enabled, it is placed on (current min scale per VS + number of
user scaleouts), capped by the current max scale per VS of the SE group.
If a VS is disabled and then enabled without changing the scale settings of the SE group, the VS
remains at the same scale.
Example 1
Action          Num of SEs Requested   User scaleout   Min scaleout per VS   Max scaleout per VS
Initial state   3                      2               1                     4
VS disabled     0                      2               1                     4
VS enabled      3                      2               1                     4
Example 2
Action          Num of SEs Requested   User scaleout   Min scaleout per VS   Max scaleout per VS
Initial state   2                      0               2                     4
VS disabled     0                      0               2                     4
VS enabled      2                      1               1                     4
Example 3
Action          Num of SEs Requested   User scaleout   Min scaleout per VS   Max scaleout per VS
Initial state   4                      3               1                     4
VS disabled     0                      3               1                     1
VS enabled      1                      0               1                     1
Moving a VS to another SE group will always place the VS on the min_scaleout_per_vs of the
new SE group.
Example 1
Action                                      Num of SEs Requested   User scaleout   Min scaleout per VS   Max scaleout per VS
Initial state                               2                      0               2                     4
VS moved to new SE group (min: 1, max: 2)   1                      0               1                     2
Since the VS has been moved to a new SE group, the NSX Advanced Load Balancer does not
attempt to preserve its state and adheres to the settings of the new SE group.
Example 2
Action                                      Num of SEs Requested   User scaleout   Min scaleout per VS   Max scaleout per VS
Initial state                               1                      0               1                     4
VS moved to new SE group (min: 2, max: 2)   2                      0               2                     2
A legacy active-standby SE group effectively has a minimum scale per VS of 2 and a maximum
scale per VS of 2.
Summary
The following table summarizes expected changes in various scenarios:
Enhanced Virtual Hosting
Enabling the Virtual Hosting VS option for a virtual service indicates that the virtual service is a
parent or child of another service in a server name indication (SNI) deployment. Server Name
Indication (SNI) is a method of virtually hosting multiple domain names on an SSL-enabled virtual IP.
For more information on virtual hosting enabled virtual service, see Server Name Indication,
Wildcard SNI Matching for Virtual Hosting.
The virtual service placement for an EVH service follows the same conditions as an SNI
parent-child deployment. A parent can host either SNI children or EVH children, but not both at
the same time. Only children of the same virtual hosting type can be associated with a parent
virtual service: if the parent virtual service is of SNI type, the associated children must also be of
SNI type; similarly, if the parent virtual service is of enhanced virtual hosting type, its children
must be EVH. An EVH child cannot be associated with an SNI parent, and vice versa.
SNI: Multiple domains can be configured under a child virtual service and are owned by that
virtual service.
EVH: The same domain can be configured under multiple children, but with different path match
criteria.
SNI: SNI can only handle HTTPS traffic.
EVH: EVH children can handle both HTTP and HTTPS traffic.
SNI: The entire connection, including all its requests, is handled by one child virtual service of
the parent, selected during the TLS handshake.
EVH: The connection is always handled by the parent virtual service, and individual requests in
that connection are handled by the selected child virtual service, based on the matching host
header, URI path, and path match criteria configured under the child virtual service.
n A parent virtual service has the service ports configured on it and needs SSL enabled on it.
n In the child virtual service, the FQDN field is used to specify the domains for which the virtual
service should be selected. HOST+PATH+match_criteria defines which child virtual service
under a parent virtual service will process a given request.
NSX Advanced Load Balancer supports EVH switching of different requests (within one
connection) between the child virtual services of a single parent virtual service. Unlike SNI,
which switches only TLS connections based on a one-to-one mapping of children to FQDNs, EVH
maps one FQDN to many children based on the resource path requested.
n Equals
n Begins with
The above search order will be executed to find the matching child virtual service.
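The selection logic can be sketched as follows. This is a hypothetical illustration of the described behavior, not the product's actual implementation: "Equals" matches are tried before "Begins with" matches across the children's vh_matches entries.

```python
def select_child(children, host, path):
    """children: list of (name, host, path, criteria) tuples, where
    criteria is "EQUALS" or "BEGINS_WITH". Returns the name of the
    matching child, or None (the parent then processes the request)."""
    for wanted in ("EQUALS", "BEGINS_WITH"):
        for name, vh_host, vh_path, criteria in children:
            if criteria != wanted or vh_host != host:
                continue
            if wanted == "EQUALS" and path == vh_path:
                return name
            if wanted == "BEGINS_WITH" and path.startswith(vh_path):
                return name
    return None

# Illustrative children of one parent (names are made up):
children = [
    ("child-docs", "app.example.com", "/docs", "BEGINS_WITH"),
    ("child-login", "app.example.com", "/docs/login", "EQUALS"),
]
```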
Notes
When configuring EVH for a virtual service, note the following:
n The child virtual service cannot have certificates or an SSL profile attached to it.
n Multiple vh_matches configurations with the same host value are not allowed under a child
virtual service. A child virtual service can have multiple paths configured under a single
host.
n OCSP stapling will not work for certificates other than the first (default) certificate.
Procedure
Option Description
Parent The parent virtual service in EVH is configured without any vh_matches
configuration. The virtual service receives all traffic and performs TLS
termination, if necessary, before processing requests.
The parent virtual service allows multiple certificates to be configured in this
virtual hosting. For SSL connections, the parent virtual service picks the
matching server certificate based on the TLS server name requested by the
client and the cipher used. If no server name is requested or no match is found,
the first certificate configured on the virtual service is used. For TLS mutual
authentication, the PKI profile must be configured only on the parent virtual
service.
After the TLS handshake is complete, the parent receives all the requests,
matches them against the host names and paths configured on its children,
selects the matching child virtual service, and hands off the request to
that virtual service. If none of the child virtual services' configurations match
the request, the request is processed by the parent virtual service configuration.
Essentially, the connection stays with the parent but requests keep switching to
its children for processing.
Child The child virtual service in EVH is configured with host and path match
configuration. The parent virtual service does the TCP and SSL termination,
and request processing is handed to this virtual service if the request host
and URL match the vh_matches configuration in the child virtual service.
Multiple hosts, each with multiple path matches, can be configured under
a child virtual service. Multiple child virtual services with non-conflicting
vh_matches configurations can be associated with a parent virtual service. The
child virtual service cannot do TLS termination and does not accept SSL
configuration such as an SSL profile, SSL key and certificate, or PKI profile.
All request- or response-specific configuration settings from the application
profile, policies, DataScripts, caching and compression, and the WAF profile
configured on the child virtual service apply to the requests processed
by that child virtual service.
5 Select Enhanced Virtual hosting as the Virtual Hosting Type. Ensure that both parent and its
child virtual service have the same Virtual Hosting Type.
a In the case of a child virtual service, under Virtual Hosting Match Criteria, enter the virtual
service acting as the Virtual Hosting Parent.
c Select the match Criteria and one or more string groups to match the host or domain name
specified.
The TLS server name is looked up against the configured certificates, and the matching
certificate is served on the TLS connection. If no TLS server name is present, or the TLS server
name does not match the common name/SAN/DNS information in any of the configured
certificates, the first certificate in the list of configured certificates (the default certificate) is served
for that connection.
Each child virtual service can have its own application profiles, WAF profiles, and so on.
Application Metrics
With EVH, the connection will technically be received by the parent virtual service and each
individual request will be processed by the matching child virtual service.
Each request maps to the metrics data of the matching child virtual service, and request-level
metrics are collected on that child. Connection-level metrics, including TCP and SSL, are
collected on the parent virtual service.
Note Features under Virtual Services > Security, for example, SSL Certificate, SSL/TLS version,
and SSL Score, are not applicable for a child virtual service.
Custom Controller Utilization Alert Thresholds
By default, this threshold is preconfigured to 85% for CPU, disk, and memory. In some
deployments, this predefined threshold might not be conservative enough, and a lower value
is desired. The following example shows how to modify these thresholds to meet your
deployment’s requirements.
n CONTROLLER_CPU_THRESHOLD
n CONTROLLER_MEM_THRESHOLD
n CONTROLLER_DISK_THRESHOLD
When defining the configuration, there are two threshold options to be aware of:
n watermark_thresholds: Threshold values at which the event is raised. Multiple
thresholds can be defined. The health score degrades when the target is higher than this
threshold.
+----------------------------------+------------------------------------+
| Field | Value |
+----------------------------------+------------------------------------+
| controller_analytics_policy | |
| metrics_event_thresholds[1] | |
| reset_threshold | 60.0 |
| watermark_thresholds[1] | 75 |
| metrics_event_threshold_type | CONTROLLER_CPU_THRESHOLD |
| metrics_event_thresholds[2] | |
| reset_threshold | 60.0 |
| watermark_thresholds[1] | 75 |
| metrics_event_threshold_type | CONTROLLER_MEM_THRESHOLD |
| metrics_event_thresholds[3] | |
| reset_threshold | 60.0 |
| watermark_thresholds[1] | 75 |
| metrics_event_threshold_type | CONTROLLER_DISK_THRESHOLD |
+----------------------------------+------------------------------------+
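The output above can be reproduced with a configuration fragment like the following. This is a hedged sketch of the controller_analytics_policy object; the reset_threshold is presumably the level at which the event clears, so verify the semantics against your version:

```python
def controller_threshold_fragment(watermark: int = 75,
                                  reset: float = 60.0) -> dict:
    """One metrics_event_threshold entry per Controller resource,
    mirroring the CLI output: an event is raised when utilization
    crosses the watermark."""
    kinds = ["CONTROLLER_CPU_THRESHOLD", "CONTROLLER_MEM_THRESHOLD",
             "CONTROLLER_DISK_THRESHOLD"]
    return {"controller_analytics_policy": {"metrics_event_thresholds": [
        {"metrics_event_threshold_type": kind,
         "watermark_thresholds": [watermark],
         "reset_threshold": reset}
        for kind in kinds]}}
```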
Enabling Traffic on VIP
A virtual service advertises itself by responding to ARP requests in order to receive traffic.
However, this can be disabled by using the no traffic_enabled command. On configuring this
command, the specific virtual service IP address stops responding to ARP requests.
This command applies only to VMware, VMware NSX, and Linux server cloud environments.
The following are the CLI commands to enable and disable this feature for a virtual service vs1:
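For example, a sketch of the Avi CLI flow using the no traffic_enabled command described above; the first block disables ARP responses for the VIP of vs1, and the second re-enables them:

```
configure virtualservice vs1
no traffic_enabled
save

configure virtualservice vs1
traffic_enabled
save
```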
Alternatively, in the virtual service editor, choose the required cloud option and select the
checkbox for Traffic Enabled.
During the SSL handshake between a client and a parent virtual service, the parent virtual service
checks the domain names of its children virtual services for a match with the domain name in
the client’s handshake. If there is a match, the parent virtual service passes the client request to
the child virtual service with the matching domain name. Wildcards can be used to match the
beginning or end of the domain name.
Wildcards
Within a child virtual service’s configuration, a wildcard character can be used at the beginning or
end of the domain name:
n *.example.com - Matches on any labels at the beginning of the domain name if the rest
of the domain name matches. This example matches mail.example.com, app1.example.com,
app1.test.example.com, app1.test.b.example.com, any.set.of.labels.in.front.of.example.com,
and so on.
n .example.com - Matches any set of leading labels, or none. This example matches not
only any domain name matched by *.example.com but also “example.com” itself (with no
other label in front).
A domain name can contain any of these wildcard characters, in the positions shown. The use of
wildcards in other label positions within a domain name is not supported. Likewise, using multiple
wildcard characters within the same domain name is not supported.
For example, suppose a parent virtual service has the following child virtual services:
If the server certificate contains a domain name that ends with “.test.example.com,” the certificate
matches on VS2 but not on VS1.
Procedure
1 Access the Advanced Setup popup for the child virtual service:
2 On the Settings tab, select Virtual Hosting VS, then select Child. This displays the Domain
Name field.
3 Enter the domain name to use for matching. For wildcard matching, enter the wildcard
character.
4 To save the virtual service configuration, click Next until the Review tab appears. If creating a
new pool, specify a name before saving the pool.
Each cloud has at least one SE group. The options within an SE group might vary based
on the type of cloud it exists in and the cloud's settings, such as no access versus write
access mode. An SE can exist in only one group. Each group acts as an isolation domain.
SE resources within an SE group can be moved around to accommodate virtual services, but SE
resources are never shared between SE groups.
n Requires existing SEs to first be disabled before the changes take effect.
Multiple SE groups can exist within a cloud. A newly created virtual service will be placed on the
default SE group. This can be changed through the Applications > Virtual Services page while
creating a virtual service through the Advanced Setup wizard.
To move an existing virtual service from one SE group to another, the virtual service must be
disabled, moved, and then re-enabled. SE groups provide data plane isolation. Therefore, moving
a virtual service from one SE group to another is disruptive to existing connections through the
virtual service.
Note SE group properties are cloud-specific. Based on the cloud configuration, some of the
properties discussed in this section may not be available.
To configure the range of port numbers used to open back-end server connections, run the
commands below.
configure serviceenginegroup [name] ephemeral_portrange_start 4096
configure serviceenginegroup [name] ephemeral_portrange_end 61440
Note By default, the range starts with 4096 and ends with 61440.
Creating SE Group
This section shows how to create an SE group in the NSX Advanced Load Balancer.
Procedure
1 From the NSX Advanced Load Balancer Controller, navigate to Infrastructure > Cloud
Resources > Service Engine Group.
2 Click Create.
3 Click Save.
Procedure
1 Under the Basic Settings tab, enter the Service Engine Group Name.
2 There are several metrics, such as End to End Timing, Throughput, Requests, and
more. The NSX Advanced Load Balancer Controller updates these metrics periodically, either
at a default interval of five minutes, or as defined in the Metric Update Frequency. Enable
Real Time Metrics to gather detailed metrics aggressively for a limited period, as required.
n Enter a value, for example, 30 min, to collect real-time metrics for the defined 30 minutes.
After this period elapses, metrics collection reverts to slower polling. Real-time
metrics are helpful when troubleshooting.
3 Under High Availability & Placement Settings configure the behavior of the SE group in the
event of an SE failure. You can also define how the load is scaled across SEs. Select one of the
modes, as required.
Legacy HA (Active/Standby)
Select this mode to mimic a legacy appliance load balancer for easy migration
to Avi Vantage. Only two Service Engines may be created. For every virtual
service active on one, there is a standby on the other, configured and ready
to take over in the event of a failure of the active SE. There is no Service
Engine scale out in this HA mode.
n Select Health Monitoring on Standby Service Engine(s) to enable active health
monitoring from the standby SE for all placed virtual services.
n Select Distribute Load to use both the active and standby Service Engines for
virtual service placement in the legacy active/standby HA mode.
n Select Auto-redistribute Load to make failback automatic, so that virtual services
are migrated back to the SE that replaces the failed SE.
Elastic HA (Active/Active)
Select this mode to permit up to N active SEs to deliver virtual services,
with the capacity equivalent of M SEs within the group ready to absorb SE
failures.
In the case of Elastic HA (Active/Active), under VS Placement across Service
Engines, select the mode, as required:
n Compact, for NSX Advanced Load Balancer to spin up and fill up the
minimum number of SEs. It tries to place virtual services on SEs which are
already running.
n Distributed (default), for NSX Advanced Load Balancer to maximize
virtual service performance by avoiding placements on existing SEs.
Instead, it places virtual services on newly spun-up SEs, up to the
maximum number of Service Engines.
Elastic HA (N+M Buffer)
Select this mode to distribute virtual services across a minimum of two SEs.
In the case of Elastic HA (N+M Buffer), under VS Placement across Service
Engines, select the mode, as required:
n Compact (default), for NSX Advanced Load Balancer to spin up and fill up
the minimum number of SEs. It tries to place virtual services on SEs which
are already running.
n Distributed, for NSX Advanced Load Balancer to maximize
virtual service performance by avoiding placements on existing SEs.
Instead, it places virtual services on newly spun-up SEs, up to the
maximum number of Service Engines.
4 In the field Virtual Services per Service Engine enter the maximum number of virtual services
(from 1 to 1000), that the Controller cluster can place on a single Service Engine in the group.
5 Select Service Engine Self-Election to enable SEs to elect a primary amongst themselves in
the absence of connectivity to the Controller. This ensures Service Engine high availability in
handling client traffic even in headless mode.
6 Under Service Engine Capacity and Limit Settings, enter the Max Number of Service Engines
to define the maximum number of service engines that can be created within an SE group.
This number, combined with the virtual services per SE setting, dictates the maximum number
of virtual services that can be created within an SE group. If this limit is reached, new virtual
services may not be deployed; their status will be shown in grey to indicate the un-deployed state. This
setting can be useful to prevent NSX Advanced Load Balancer from consuming too many
virtual machines.
7 Configure memory settings, as required:
a Enable Host Geolocation Profile to provide extra configuration memory to support a large
geo DB configuration.
b Enter the value of total SE memory reserved for application caching (as a percentage).
Restart the SE for this change to take effect. Available Memory for Connections and
Buffers is the memory available besides caching. This field is automatically updated
depending on the percentage entered as Memory for Caching.
c Use the Connections and Buffers Memory Distribution slider to define the percentage of
memory reserved to maintain connection state. This is allocated at the expense of
memory used for the HTTP in-memory cache.
8 Under the License section, NSX Advanced Load Balancer maps the license type based on the
type of cloud. For example, a Linux server cloud is licensed by sockets.
9 Select Enable Per-app Service Engine Mode to deploy dedicated load balancers per
application, that is, per virtual service. In this mode, each SE is limited to a maximum of two
virtual services. vCPUs in per-app SEs count towards licensing at a 25% rate.
10 Select the Service Engine Bandwidth Type for the license. This option is deactivated when
Enable Per-app Service Engine Mode is enabled.
11 Enter the Number of Service Engine Data Paths to configure the maximum number of se_dp
processes that handle traffic. If this field is not configured, NSX Advanced Load Balancer
defaults to the number of CPUs on the SE.
12 Select Use Hyperthreading to enable the use of hyper-threaded cores for se_dp processes.
Restart the SE for this change to take effect.
13 Click Save to complete the configuration. Optionally, you can click the Advanced tab to
continue configuring advanced options for the SE group.
Procedure
2 Select a management network to use for the Service Engines as the Override Management
Network. If the SEs require a different network for management than the Controller,
select the network here. The SEs will use their management route to establish
communications with the Controllers. This option is only available if the SE group’s overridden
management network is DHCP-defined. An administrator’s attempt to override a statically-
defined management network (Infrastructure > Cloud > Network) will not work, because a
default gateway is not allowed in the statically-defined subnet.
4 In the field Scale per Virtual Service, enter the maximum number of active Service Engines
for the virtual service. A pair of integers determines the minimum and maximum number of
active SEs onto which a single virtual service may be placed. With native SE scaling, the
greatest value one can enter as a maximum is 4; with BGP-based SE scaling, the limit is much
higher, governed by the ECMP support on the upstream router.
5 Select CPU socket Affinity for NSX Advanced Load Balancer to allocate all cores for SE VMs
on the same socket of a multi-socket CPU. Appropriate physical resources need to be present
in the ESX Host. If not, then SE creation will fail and manual intervention will be required.
6 Select Dedicated dispatcher CPU to dedicate the core that handles packet receive or transmit
from the network to just the dispatching function. This option is particularly helpful in case of a
group whose SEs have three or more vCPUs.
7 Select the HSM Group under the section Security. Hardware security module (HSM) is an
external security appliance used for secure storage of SSL certificates and keys. The HSM
group dictates how Service Engines can reach and authenticate with the HSM. To know how
to configure HSM in NSX Advanced Load Balancer, see Chapter 9 Hardware Security Module
(HSM).
a Enter Significant Log Throttle to define the number of significant log entries generated
per second per core on an SE. Set this parameter to zero to disable throttling of the
significant log.
b Enter UDF Log Throttle to define the number of user-defined (UDF) log entries generated
per second per core on an SE. UDF log entries are generated due to the configured
client log filters or the rules with logging enabled. The default value is 100 log entries per
second. Set this parameter to zero to disable throttling of the UDF log.
c Enter Non-Significant Log Throttle to define the number of non-significant log entries
generated per second per core on an SE.
d Enter the Number of Streaming Threads (1 to 100) to use for log streaming.
e Click Save.
n Any: The default setting allows SEs to be deployed to any host that best fits the deployment
criteria.
n Cluster: Excludes SEs from deploying within specified clusters of hosts. Checking the Include
checkbox reverses the logic, ensuring SEs only deploy within specified clusters.
n Host: Excludes SEs from deploying on specified hosts. The Include checkbox reverses the
logic, ensuring SEs deploy only on specified hosts.
Data Store Scope for Service Engine Virtual Machine: Sets the storage location for SEs to store
the OVA (vmdk) file for VMware deployments.
n Any: NSX Advanced Load Balancer will determine the best option for data storage.
n Local: The SE will only use storage on the physical host.
n Shared: NSX Advanced Load Balancer will prefer using the shared storage location. When this
option is selected, specific data stores may be identified for exclusion or inclusion.
Hyper-Threading Modes
Hyper-threading works by duplicating certain sections of the processor that store the architectural
state. However, the logical processors in a hyper-threaded core share the execution resources,
including the execution engine, caches, and system bus interface. This allows a logical processor to
borrow resources from a stalled logical core (assuming both logical cores are associated with the
same physical core). A processor stalls when it must wait for data, owing to a cache miss, branch
misprediction, or data dependency, before it can finish processing the current thread.
NSX Advanced Load Balancer has two knobs to control the use of hyper-threaded cores and the
distribution (placement) of se_dps on the hyper-threaded CPUs. Both knobs are part of the
SE group:
n use_hyperthreaded_cores: You can use this knob to enable or disable the use of hyper-threaded
cores for se_dps. This knob can be configured using the CLI or UI.
n se_hyperthreaded_mode: You can use this knob to control the placement of se_dps on the
hyper-threaded CPUs. The possible values are SE_CPU_HT_AUTO (default),
SE_CPU_HT_SPARSE_DISPATCHER_PRIORITY, SE_CPU_HT_SPARSE_PROXY_PRIORITY, and
SE_CPU_HT_PACKED_CORES.
se_hyperthreaded_mode: You can use this knob to influence the distribution of se_dps on the
hyper-threaded CPUs when the number of datapath processes is less than the number of hyper-
threaded CPUs online. The knob can be configured only through the CLI.
Note You should set use_hyperthreaded_cores to True for the mode configured using
se_hyperthreaded_mode to take effect.
SE_CPU_HT_AUTO — This is the default mode. The SE automatically determines the best placement.
This mode preserves the existing behavior following CPU hyper-threading topology. If the
number of data path processes is less than the number of CPUs, this is equivalent to
SE_CPU_HT_SPARSE_PROXY_PRIORITY mode.
SE_CPU_HT_PACKED_CORES — This mode places the data path processes on the same physical core.
Each core can have two dispatchers or two non-dispatcher (proxy) instances being adjacent to
each other. This mode is useful when the number of data path processes is less than the number
of CPUs. This mode exhausts the hyper-threads serially on each core before moving on to the next
physical core.
For instance, if num_non_dp_cpus is 5, 2 cores are reserved for non-datapath exclusivity. To use HT,
(with or without DP isolation), the following config knobs are provided in SEgroup:
Example Configuration
| use_hyperthreaded_cores | True                                  |
| se_hyperthreaded_mode   | SE_CPU_HT_SPARSE_DISPATCHER_PRIORITY  |
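These values can be set on the SE group from the CLI. The following is a sketch, assuming the Avi shell submode flow and a group named Default-Group (se_hyperthreaded_mode is configurable only through the CLI, as noted above):

```
configure serviceenginegroup Default-Group
use_hyperthreaded_cores
se_hyperthreaded_mode se_cpu_ht_sparse_dispatcher_priority
save
```

Restart the SEs in the group for the hyper-threading change to take effect.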
The feature creates two independent CPU sets for datapath and control plane SE functions. The
creation of these two independent and exclusive CPU sets will reduce the number of se_dp
instances. The number of se_dps deployed depends either on the number of available host CPUs
in auto mode or the configured number of non_dp CPUs in custom mode.
This feature is supported only on host CPU instances with >= 8 CPUs.
Note This mode of operation may be enabled for latency and jitter sensitive applications.
For Linux server clouds alone, the following prerequisite must be met to use this feature:
1 The cpuset package cpuset-py3 must be installed on the host and be present in the
/usr/bin/cset location (a softlink may need to be created).
For full-access environments, the requisite packages will be installed as part of the Service Engine
installation.
Host CPUs    Non-datapath CPUs (auto mode)
1-7          0
8-15         1
16-23        2
24-31        3
32-1024      4
Examples:
1 Isolation mode in an instance with 16 host CPUs in auto mode will result in 14 CPUs for
datapath instances and 2 CPUs for control plane applications.
n dp_hb_frequency
n dp_hb_timeout_count
n dp_aggressive_hb_frequency
n dp_aggressive_hb_timeout_count
n se_ip_encap_ipc
n se_l3_encap_ipc
License
n License Tier — Specifies the license tier to be used by new SE groups. By default, this field
inherits the value from the system configuration.
n License Type — If no license type is specified, Avi applies default license enforcement for the
cloud type. The default mappings are max SEs for a container cloud, cores for OpenStack and
VMware, and sockets for Linux.
n Instance Flavor — Instance type is an AWS term. In a cloud deployment, this parameter
identifies one of a set of AWS EC2 instance types. Flavor is the analogous OpenStack term.
Other clouds (especially public clouds) may have their own terminology for essentially the
same thing.
Service Engine Name Prefix: Enter the prefix to use when naming the SEs within the SE group.
This name will be seen both within NSX Advanced Load Balancer, and as the name of the virtual
machine within the virtualization orchestrator.
Service Engine Folder — SE virtual machines for this SE group will be grouped under this folder
name within the virtualization orchestrator.
Delete Unused Service Engines After — Enter the number of minutes to wait before the
Controller deletes an unused SE. Traffic patterns can change quickly, and a virtual service may
therefore need to scale across additional SEs with little notice. Setting this field to a high value
ensures that the NSX Advanced Load Balancer keeps unused SEs around in the event of a sudden
spike in traffic. A shorter value means the Controller will need to create a new SE to handle a
burst of traffic, which might take a couple of minutes.
n Any: The default setting allows SEs to be deployed to any host that best fits the
deployment criteria.
n Cluster: Excludes SEs from deploying within specified clusters of hosts. Checking the
Include checkbox reverses the logic, ensuring SEs only deploy within specified clusters.
n Host: Excludes SEs from deploying on specified hosts. The Include checkbox reverses the
logic, ensuring SEs deploy only on specified hosts.
n Data Store Scope for Service Engine Virtual Machine: Sets the storage location for SEs to
store the OVA (vmdk) file for VMware deployments.
n Any: NSX Advanced Load Balancer will determine the best option for data storage.
n Shared: NSX Advanced Load Balancer will prefer using the shared storage location. When
this option is selected, specific data stores may be identified for exclusion or inclusion.
n Scale Per Virtual Service: A pair of integers determines the minimum and maximum number
of active SEs onto which a single virtual service may be placed. With native SE scaling, the
greatest value one can enter as a maximum is 4; with BGP-based SE scaling, the limit is much
higher, governed by the ECMP support on the upstream router.
n Service Engine Failure Detection: This option sets how quickly NSX Advanced Load Balancer
concludes that an SE takeover should take place. Standard detection takes approximately
9 seconds; aggressive detection, 1.5 seconds.
n Auto-Rebalance: If this option is selected, virtual services are automatically migrated (scaled
in or out) when CPU loads on SEs fall below the minimum threshold or exceed the maximum
threshold. If this option is off, the result is limited to an alert. The frequency with which NSX
Advanced Load Balancer evaluates the need to rebalance can be set to some number of
seconds.
n Affinity: Selecting this option causes NSX Advanced Load Balancer to allocate all cores for
SE VMs on the same socket of a multi-socket CPU. The option is applicable only in vCenter
environments. Appropriate physical resources need to be present in the ESX Host. If not, then
SE creation will fail and manual intervention will be required.
Note The vCenter drop-down list populates the datastores only if the datastores are shared.
Non-shared datastores (where each ESX Host has its own local datastore) are filtered
out from the list because, by default, when an ESX Host is chosen for SE VM creation, the local
datastore of that ESX Host will be picked.
n Dedicated dispatcher CPU: Selecting this option dedicates the core that handles packet
receive/transmit from/to the data network to just the dispatching function. This option makes
most sense in a group whose SEs have three or more vCPUs.
n Override Management Network: If the SEs require a different network for management than
the Controller, that network is specified here. The SEs will use their management route to
establish communications with the Controllers.
For more information, see Deploy SEs in Different Datacenter from Controllers.
Note This option is only available if the SE group’s overridden management network
is DHCP-defined. An administrator’s attempt to override a statically-defined management
network (Infrastructure > Cloud > Network) will not work, because a default gateway is not
allowed in the statically-defined subnet.
Security
HSM Group: Hardware security modules may be configured within the Templates > Security >
HSM Groups. An HSM is an external security appliance used for secure storage of SSL certificates
and keys. An HSM group dictates how SEs can reach and authenticate with the HSM.
n UDF Log Throttle: This limits the number of user-defined (UDF) log entries generated per
second per core on an SE. UDF log entries are generated due to the configured client
log filters or the rules with logging enabled. Default is 100 log entries per second. Set this
parameter to zero to disable throttling of the UDF log.
n Non-Significant Log Throttle: This limits the number of non-significant log entries generated
per second per core on an SE. Default is 100 log entries per second. Set this parameter to zero
to disable throttling of the non-significant log.
n Number of Streaming Threads: Number of threads to use for log streaming, ranging from 1 to
100.
Other Settings
By default, the NSX Advanced Load Balancer Controller creates and manages a single security
group (SG) for an SE. This SG manages the ingress/egress rules for the SE’s control-plane and
data-plane traffic. In certain customer environments, custom SGs may also need to be
associated with the SE management and/or data-plane vNICs.
n For more information about SGs in OpenStack and AWS clouds, see:
n Security Group Options for AWS Deployment with NSX Advanced Load Balancer.
n NSX Advanced Load Balancer Managed Security Group: Supported only for AWS clouds.
When this option is enabled, NSX Advanced Load Balancer will create and manage security
groups along with the custom security groups provided by the user. If disabled, it will only
make use of the custom SGs provided by the user.
n Management vNIC Custom Security Groups: Custom security groups to be associated with
management vNICs for SE instances in OpenStack and AWS clouds.
n Data vNIC Custom Security Groups: Custom security groups to be associated with data vNICs
for SE instances in OpenStack and AWS clouds.
n Add Custom Tag: Custom tags are supported for Azure and AWS clouds and are useful in
grouping and managing resources. Click the Add Custom Tag hyperlink to configure this
option.
n Azure tags enable key:value pairs to be created and assigned to resources in Azure. For
more information on Azure tags, refer to Azure Tags.
n AWS tags help manage instances, images, and other Amazon EC2 resources; you can
optionally assign your own metadata to each resource in the form of tags. For more
information on AWS tags, see AWS Tags and Configuring a Tag for Auto-created SEs in
AWS.
VIP Autoscale
n Display FIP Subnets Only: Only display FIP subnets in the drop-down menu.
n VIP Autoscale Subnet: UUID of the subnet for the new IP address allocation.
Log in to the NSX Advanced Load Balancer CLI and use the deactivate_ipv6_discovery command
under the configure serviceenginegroup <se-group name> mode to disable IPv6 learning for the
selected Service Engine group.
For deactivate_ipv6_discovery to take effect, reboot all the Service Engines present in the specific
Service Engine group.
Use the show serviceenginegroup <se_group_name> command to check if the knob is enabled or
not.
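Putting the commands above together, a sketch of the flow for a group named Default-Group (the submode prompt details may vary by version):

```
configure serviceenginegroup Default-Group
deactivate_ipv6_discovery
save

show serviceenginegroup Default-Group
```

After saving, reboot the Service Engines in the group for the knob to take effect.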
The current system utilises the Controller to distribute this information across the participating
SEs. Each SE has a local REDIS instance that connects to the Controller, and the objects are
allocated and synchronised across the SEs through the Controller. This scheme has limitations
on scale, convergence time, and so on.
With the new distributed architecture, the SEs perform this distribution and synchronisation
without the involvement of the Controller. The SE-SE persistence sync takes place over this
architecture on port 9001, which must be open between SEs. The VMware and LSC platforms
are supported.
For more information on port details, see Protocol Ports Used by NSX Advanced Load Balancer
for Management Communication.
Command: show pool <pool_name> objsync filter vs_ref <vs_name>
Description: Summary of the Objsync view of Pool Persistence objects
Note
n For any changes to port 9001 through the objsync_port knob, update the security
group, ACL, and so on, accordingly. See Protocol Ports Used by NSX Advanced Load Balancer
for Management Communication.
n In Azure, for SE object sync, you need to configure a port that is less than 4096.
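If a different port is needed, the objsync_port knob mentioned above can be changed; a sketch, assuming it is a Service Engine group property as the note implies:

```
configure serviceenginegroup Default-Group
objsync_port 4001
save
```

Remember to update security groups and ACLs for the new port; 4001 also satisfies the Azure requirement of a port below 4096.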
A new SE group is created using the Template Service Engine Group option available on NSX
Advanced Load Balancer UI. The new SE group's properties will be the same as the Template
Service Engine Group. Using this option, you can change any property for the default SE group or
any other SE group.
Procedure
2 Select the appropriate cloud, and use the Template Service Engine Group option. Use this
option to customize settings for the SE group as per the requirement.
Application Profile
Application profiles determine the behavior of virtual services, based on application type.
The application profile types and their options are described in the following sections.
n HTTP Profile
n DNS Profile
n L4 Profile
n SSL Profile
n Syslog Profile
n SIP Profile
NSX Advanced Load Balancer displays all the application profiles created and their type.
n Click the search icon and start typing the name of the application profile to find it.
n Click the edit icon against an application profile to modify the configuration.
Note If the profile is still associated with any virtual services, the profile cannot be removed. In
this case, an error message lists the virtual services that are still referencing the application
profile. Nor can any of the system-standard profiles be deleted.
DNS
HTTP
L4
Catch-all for any virtual service that is not using an application-specific profile.
L4 SSL/TLS
Catch-all for any virtual service that is SSL-encrypted and not using an application-specific
profile.
SIP
Syslog
The New Application Profile and the Edit Application Profile screens share the same interface
regardless of the application profile chosen.
HTTP Profile
The HTTP application profile allows NSX Advanced Load Balancer to be a proxy for any
HTTP traffic. HTTP-specific functionality such as redirects, content switching, or rewriting server
responses to client requests may be applied to a virtual service. The settings apply to all HTTP
services that are associated with the HTTP profile. HTTP-specific policies or DataScripts also may
be attached directly to a virtual service.
General Configuration
In the General tab, configure the basic HTTP settings.
Connection Multiplex
This option controls the behavior of HTTP 1.0 and 1.1 request switching and server TCP
connection reuse. This allows NSX Advanced Load Balancer to reduce the number of open
connections maintained by servers and better distribute requests across idle servers, thus
reducing server overloading and improving performance for end-users. The exact reduction
of connections to servers will depend on how long-lived the client connections are, the HTTP
version, and how frequently requests/responses are utilizing the connection. It is important to
understand that “connection” refers to a TCP connection, whereas “request” refers to an HTTP
request and subsequent response. HTTP 1.0 and 1.1 allow only a single request/response to go
over an open TCP connection at a time. Many browsers attempt to mitigate this bottleneck by
opening around six concurrent TCP connections to the destination website.
X-Forwarded-For
With this option, NSX Advanced Load Balancer will insert an X-Forwarded-For (XFF) header
into the HTTP request headers when the request is passed to the server. The XFF header
value contains the original client source IP address. Web servers can use this header for
logging client interaction instead of using the layer 3 IP address, which will incorrectly reflect
the Service Engine’s source NAT address. When enabling this option, the XFF Alternate
Name field appears, which allows the XFF header insertion to use a custom HTTP header
name. If the XFF header or the custom name supplied already exists in the client request,
all instances of that header will first be removed. To add the header without removing pre-
existing instances of it, use an HTTP request policy.
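As a sketch of how a backend application might consume this header, the snippet below recovers the original client IP from X-Forwarded-For. The helper name and the left-most-entry convention are illustrative assumptions, not part of the product:

```python
# Hypothetical backend-side helper: recover the original client IP from the
# X-Forwarded-For header inserted by the load balancer.
def client_ip(headers, xff_name="X-Forwarded-For"):
    """Return the original client IP, falling back to the peer address."""
    xff = headers.get(xff_name)
    if xff:
        # The left-most entry is the original client; later entries are proxies.
        return xff.split(",")[0].strip()
    return headers.get("Remote-Addr")

print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.5"}))  # 203.0.113.7
```

Without this header, the backend's logs would show only the Service Engine's SNAT address.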
WebSockets Proxy
Enabling WebSockets allows the virtual service to accept a client’s upgrade header request.
If the server is listening for WebSockets, the connection between the client and server will
be upgraded. WebSocket is a full-duplex TCP protocol. The connection will initially start over
HTTP, but once successfully upgraded, all HTTP parsing by NSX Advanced Load Balancer will
cease and the connection will be treated as a normal TCP connection.
Note NSX Advanced Load Balancer supports HTTP/2 WebSocket. However, it supports only
WebSocket clients with the same HTTP version as the server.
Preserve Client IP Address

Enabling this option causes the NSX Advanced Load Balancer SE to use the client IP, rather
than its own, as the source IP for load-balanced connections from the SE to back-end
application servers. Enabling IP routing in the SE group is a prerequisite for this
option. Preserve Client IP Address is mutually exclusive with SNAT for the virtual service;
Connection Multiplexing in the HTTP(S) application profile cannot be used with Preserve Client
IP.
Save
Select another tab from the top menu to continue editing or Save to return to the Application
Profiles tab. See also the Preserve Client IP section.
The behavior for each combination of the Connection Multiplex and persistence settings is as follows:

Multiplex enabled, persistence disabled: Client connections and their requests are decoupled from the server side of the Service Engine. Requests are load balanced across the servers in the pool using either new or pre-existing connections to those servers. The connections to the servers may be shared by requests from any clients.

Multiplex enabled, persistence enabled: Client connections and their requests are sent to a single server. These requests may share connections with other clients who are persisted to the same server. HTTP requests are not load balanced.

Multiplex disabled, persistence enabled: NSX Advanced Load Balancer opens a new TCP connection to the server for each connection received from the client. Connections are not shared with other clients. All requests received through all connections from the same client are sent to one server. HTTP client browsers may open many concurrent connections, and the number of client connections will be the same as the number of server connections.

Multiplex disabled, persistence disabled: Connections between the client and server are one-to-one. Requests remain on the same connection they began on. Multiple connections from the same client may be distributed among the available servers.
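The connection-reuse idea behind multiplexing can be sketched as follows. This is a toy model to illustrate the counting only, not product code:

```python
# A minimal sketch of connection multiplexing: requests from many client
# connections are served over a small shared pool of backend TCP connections
# instead of one backend connection per client.
from collections import deque

class BackendPool:
    def __init__(self):
        self.idle = deque()          # reusable server connections
        self.opened = 0

    def acquire(self):
        if self.idle:
            return self.idle.popleft()   # reuse an idle connection
        self.opened += 1                 # otherwise open a new one
        return f"conn-{self.opened}"

    def release(self, conn):
        self.idle.append(conn)           # return it for the next request

pool = BackendPool()
for request in range(100):               # 100 sequential client requests
    conn = pool.acquire()
    pool.release(conn)                   # request/response done, connection idles

print(pool.opened)  # 1 -- all 100 requests shared one backend connection
```

Without multiplexing, each of those 100 client connections would hold its own server-side connection for its lifetime.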
Security Configuration
The Security tab of the HTTP application profile controls the security settings for HTTP
applications that are associated with the profile.
Security Information
The HTTP security settings affect how a virtual service should handle HTTPS. If a virtual service is
configured only for HTTP, the HTTPS settings discussed in this section will not apply. Only if the
virtual service is configured for HTTPS, or HTTP and HTTPS, will the settings take effect.
Field Description
SSL Everywhere This option enables all of the following options, which together provide
the recommended security for HTTPS traffic.
HTTP to HTTPS Redirect For a single virtual service configured with both an HTTP service port
(SSL disabled) and an HTTPS service port (SSL enabled), this feature
will automatically redirect clients from the insecure to the secure port.
For instance, clients who type www.avinetworks.com into their browser
will automatically be redirected to https://www.avinetworks.com. If the
virtual service does not have both an HTTP and HTTPS service port
configured, this feature will not activate. For two virtual services (one
with HTTP and another on the same IP address listening to HTTPS), an
HTTP request policy must be created to manually redirect the protocol
and port.
Secure Cookies When NSX Advanced Load Balancer is serving as an SSL proxy for
the backend servers in the virtual service’s pool, NSX Advanced Load
Balancer communicates with the client over SSL. However, if NSX
Advanced Load Balancer communicates with the backend servers over
HTTP (not over SSL), the servers will incorrectly return responses as
HTTP. As a result, cookies that should be marked as secure will not be
so marked. Enabling secure cookies will mark any server cookies with
the Secure flag, which tells clients to send only this cookie to the virtual
service over HTTPS. This feature will only activate when applied to a
virtual service with SSL/TLS termination enabled.
HTTP Strict Transport Security (HSTS) Strict Transport Security uses a header to inform client browsers that
this site should be accessed only over SSL/TLS. The HSTS header is sent
in all HTTP responses, including error responses. This feature mitigates
man-in-the-middle attacks that can force a client’s secure SSL/TLS
session to connect through insecure HTTP. HSTS has a duration setting
that tells clients the SSL/TLS preference should remain in effect for the
specified number of days.
Insert the includeSubDomains directive in the HTTP Strict-Transport-Security header, if required. Doing so signals the user agent that the
HSTS policy applies to this HSTS host as well as any subdomains of the
host’s domain name. This setting will activate only on a virtual service
that is configured to terminate SSL/TLS.
HTTP-only Cookies NSX Advanced Load Balancer supports setting an HTTP-Only flag for the
cookie generated by the Controller. Setting this attribute prevents third-
party scripts from accessing this cookie, if supported by the browser.
This feature activates for any HTTP or terminated HTTPS virtual service.
When a cookie has an HTTP-Only flag, it informs the browser that this
special cookie must only be accessed by the server. Any attempt to
access the cookie from the client-side script is strictly forbidden.
For the CLI command to enable the HTTP-Only attribute, see CLI
Command to Enable HTTP-Only Flag.
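Together, the Secure Cookies and HTTP-only Cookies options amount to adding the standard Secure and HttpOnly attributes to server-issued cookies. A minimal sketch of that rewrite (the function is illustrative, not the product's implementation):

```python
# Sketch of what the Secure Cookies and HTTP-only Cookies options do to a
# server's Set-Cookie header before it reaches the client. Secure and
# HttpOnly are the standard cookie attributes; the helper itself is ours.
def harden_cookie(set_cookie, secure=True, http_only=True):
    attrs = [p.strip() for p in set_cookie.split(";")]
    lower = {a.lower() for a in attrs}
    if secure and "secure" not in lower:
        attrs.append("Secure")       # only send this cookie over HTTPS
    if http_only and "httponly" not in lower:
        attrs.append("HttpOnly")     # hide the cookie from client-side scripts
    return "; ".join(attrs)

print(harden_cookie("session=abc123; Path=/"))
# session=abc123; Path=/; Secure; HttpOnly
```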
Rewrite Server Redirects to HTTPS When a virtual service terminates client SSL/TLS and then passes
requests to the server as HTTP, many servers assume that the
connection to the client is HTTP. Absolute redirects generated
by the server may therefore include the protocol, such as http://
www.avinetworks.com. If the server returns a redirect with HTTP in
the location header, this feature will rewrite it to HTTPS. Also, if the
server returns a redirect for its IP address, this will be rewritten to
the hostname requested by the client. If the server returns redirects
for hostnames other than what the client requested, they will not be
altered.
X-Forwarded-Proto Enabling this option makes NSX Advanced Load Balancer insert the
X-Forwarded-Proto header into HTTP requests sent to the server, which
informs that server whether the client connected to NSX Advanced Load
Balancer over HTTP or HTTPS. This feature activates for any HTTP or
HTTPS virtual service.
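For reference, the HSTS duration-in-days setting maps to the standard max-age directive, which is expressed in seconds. A hypothetical helper showing the header value a client would receive:

```python
# Sketch: build the Strict-Transport-Security header value from an HSTS
# duration given in days. The function name is ours, not part of the product;
# the directive names are the standard ones.
def hsts_header(days, include_subdomains=False):
    value = f"max-age={days * 86400}"        # 86400 seconds per day
    if include_subdomains:
        value += "; includeSubDomains"       # extend the policy to subdomains
    return value

print(hsts_header(365, include_subdomains=True))
# max-age=31536000; includeSubDomains
```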
+-------------------------------+---------------------------------------+
|Field |Description |
+-------------------------------+---------------------------------------+
|uuid |applicationpersistenceprofile-04ca34e1 |
|name |System-Persistence-Http-Cookie |
|persistence_type |PERSISTENCE_TYPE_HTTP_COOKIE |
|server_hm_down_recovery |HM_DOWN_PICK_NEW_SERVER |
|http_cookie_persistence_profile| |
| cookie_name |VAJOSFML |
| key[1] | |
| name |40015eba-ee51-40c6-8f8d-06e2ec0516e9 |
| aes_key |b'WX9pow2nYKYTfENMZSdwODZQu8e37Zdraoovt|
| always_send_cookie |False |
| http_only |True |
| is_federated |False |
| tenant_ref |admin |
+-------------------------------+---------------------------------------+
1 Navigate to Applications > Virtual Services, select the desired virtual service, and click
the edit icon on the right side.
5 Click Save.
The System-Secure-HTTP profile is similar to the System-HTTP profile except that under SSL
Everywhere the HTTP to HTTPS Redirect option is enabled by default.
Note
n Relative redirects are not altered, only absolute ones. Therefore, it is encouraged to have both
options enabled.
n This profile setting has no impact on virtual services that do not have HTTPS
configured.
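The rewrite behavior for absolute server redirects described above can be sketched as follows. The function and its parameters are illustrative assumptions, not the product's implementation:

```python
# Sketch of the redirect-rewrite rules: an http:// Location header is
# rewritten to https://, and a redirect to the server's bare IP is rewritten
# to the hostname the client originally requested. Redirects to other
# hostnames keep their hostname.
from urllib.parse import urlsplit, urlunsplit

def rewrite_location(location, requested_host, server_ip):
    parts = urlsplit(location)
    scheme = "https" if parts.scheme == "http" else parts.scheme
    host = requested_host if parts.hostname == server_ip else parts.netloc
    return urlunsplit((scheme, host, parts.path, parts.query, parts.fragment))

print(rewrite_location("http://10.0.0.5/login",
                       "www.avinetworks.com", "10.0.0.5"))
# https://www.avinetworks.com/login
```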
1 Navigate to Applications > Virtual Services, select the desired virtual service, and click
the edit icon on the right side.
4 Select Service Port from the drop-down list for Matching Rules, select is and enter 80 in the
Ports field.
5 Save the rule. Optionally, the required criteria can be added to determine when to perform the
redirect.
6 In the Action section, select Redirect from the drop-down menu. Then set the protocol to
HTTPS. This will set the redirect port to 443 and the redirect response code to 302 (temporary
redirect).
HTTP Request Policies are quick and easy to set up, and impact only a single virtual service at a
time.
Adding a Query
Use add_string for adding a redirect action in the HTTP Request policy.
When enabled, the keep_query field carries the incoming request's query parameters over to the
final redirect URI.
The add_string field appends the specified query string to the redirect URI.
To understand how keep_query and add_string work, consider the incoming request http://
test.example.com/images?name=animals, which is to be redirected to http://google.com.
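Under that reading of keep_query and add_string, the final redirect URI combines as sketched below. The joining behavior shown is an assumption for illustration, not a documented guarantee:

```python
# Illustrative model: keep_query carries over the incoming request's query
# string, and add_string appends an extra string to the redirect URI's query.
def redirect_uri(target, incoming_query, keep_query, add_string=None):
    parts = []
    if keep_query and incoming_query:
        parts.append(incoming_query)     # preserved from the original request
    if add_string:
        parts.append(add_string)         # appended by the redirect action
    return target + ("?" + "&".join(parts) if parts else "")

# Incoming request: http://test.example.com/images?name=animals
print(redirect_uri("http://google.com", "name=animals", True, "images=cat"))
# http://google.com?name=animals&images=cat
```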
[admin:abc-controller]: httppolicyset:http_request_policy:rules> redirect_action
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action>
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> add_string
images=cat keep_query
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> status_code
http_redirect_status_code_302
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> port 80
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> host
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host> type
uri_param_type_tokenized
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host> tokens
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host:tokens>
type uri_token_type_string str_value www.google.com
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host> save
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> save
[admin:abc-controller]: httppolicyset:http_request_policy:rules> save
[admin:abc-controller]: httppolicyset:http_request_policy> save
[admin:abc-controller]: httppolicyset> save
+------------------------+----------------------------------------------+
| Field | Value |
+------------------------+----------------------------------------------+
| uuid | httppolicyset-2ee5<truncated>
| |
| |
| name | vs1-Default-Cloud-HTTP-Policy-Set-0 |
| http_request_policy | |
| rules[1] | |
| name | Rule 1 |
| index | 1 |
| enable | True |
| match | |
| method | |
| match_criteria | IS_IN |
| methods[1] | HTTP_METHOD_GET |
| redirect_action | |
| protocol | HTTP |
| host | |
| type | URI_PARAM_TYPE_TOKENIZED |
| tokens[1] | |
| type | URI_TOKEN_TYPE_STRING |
| str_value | www.vmware.com |
| tokens[2] | |
| type | URI_TOKEN_TYPE_STRING |
| str_value | www.google.com |
| port | 80 |
| keep_query | True |
| status_code | HTTP_REDIRECT_STATUS_CODE_302 |
| add_string | images=cat |
| is_internal_policy | False |
| tenant_ref | admin |
+------------------------+----------------------------------------------+
To add a DataScript:
1 Navigate to Applications > Virtual Service, select the desired virtual service, and click the
edit option.
7 Enter the following script in the HTTP Request Event Script text box and save.
Field Description
Validation Type Enables client validation based on their SSL certificates. Select one of the
following:
n None — Disables validation of client certificates.
n Request — This setting expects clients to present a client certificate.
If a client does not present a certificate, or if the certificate fails the
CRL check, the client connection and requests are still forwarded to
the destination server. This allows NSX Advanced Load Balancer to
forward the client’s certificate to the server in an HTTP header, so
that the server may make the final determination to allow or deny the
client.
n Require — NSX Advanced Load Balancer requires a certificate to be
presented by the client, and the certificate must pass the CRL check.
The client certificate, or relevant fields, may still be passed to the
server through an HTTP header.
PKI Profile The Public Key Infrastructure (PKI) profile contains configured certificate
authority (CA) and the CRL. A PKI profile is not necessary if validation is
set to Request, but is mandatory if validation is set to Require.
HTTP Header Name Optionally, NSX Advanced Load Balancer may insert the client’s
certificate, or parts of it, into a new HTTP header to be sent to the server.
To insert a header, this field is used to determine the name of the header.
HTTP Header Value Used with the HTTP Header Name field, the Value field is used to
determine the portion of the client certificate to insert into the HTTP
header sent to the server. Using the plus icon, additional headers may
be inserted. This action may be in addition to any performed by HTTP
policies or DataScripts, which could also be used to insert headers in
requests sent to the destination servers.
Compression
Compression is an HTTP 1.1 standard for reducing the size of text-based data using the Gzip
algorithm. The typical compression ratio for HTML, Javascript, CSS and similar text content types
is about 75%, meaning that a 20-KB file may be compressed to 5 KB before being sent across the
Internet, thus reducing the transmission time by a similar percentage.
Compression enables HTTP Gzip compression for responses from NSX Advanced Load Balancer
to the client.
Use the Compression tab to view or edit the application profile’s HTTP compression settings.
The compression percentage achieved can be viewed using the Client Logs tab of the virtual
service. This may require enabling full client logs on the virtual service’s Analytics tab to log some
or all client requests. The logs will include a field showing the compression percentage with each
HTTP response.
Field Description
Enable Compression Select the checkbox to enable compression. Enabling this option displays
the other settings for compression.
Compressible Content Types This field determines which HTTP content types are eligible to be
compressed. Select a string group that contains the compressible type list
from the drop-down list.
Remove Accept Encoding Header This field removes the Accept Encoding header, which is sent by HTTP
1.1 clients to indicate they are able to accept compressed content.
Removing the header from the request prior to sending the request to the
server allows NSX Advanced Load Balancer to ensure the server will not
compress the responses. Only NSX Advanced Load Balancer will perform
compression.
Number of Buffers Specify the number of buffers to use for compression output.
Buffer Size Specify the size of each buffer used for compression output; ideally, this
should be a multiple of the page size.
Normal Level Specify the level of compression to apply on content selected for normal
compression.
Aggressive Level Specify the level of compression to apply on content selected for aggressive
compression.
Window Size Specify the window size used by compression, rounded to the last power of
2.
Hash Size Specify the hash size used by compression, rounded to the last power of 2.
Response Content Length Specify the minimum response content length to enable compression.
Max Low RTT If client RTT is higher than this threshold, enable normal compression on the
response.
Min High RTT If client RTT is higher than this threshold, enable aggressive compression on
the response.
Mobile Browser Identifier Select the values that identify mobile browsers in order to enable
aggressive compression.
Custom Compression
To create a custom compression filter:
3 The Action section determines what will happen to clients or requests that meet the match
criteria, specifically the level of HTTP compression that will be used.
Field Description
Aggressive compression It uses Gzip level 6, which will compress text content by about 80% while
requiring more CPU resources from both NSX Advanced Load Balancer
and the client.
Normal compression It uses Gzip level 1, which will compress text content by about 75%, which
provides a good mix between compression ratio and the CPU resources
consumed by both NSX Advanced Load Balancer and the client.
No Compression It disables compression. For clients coming from very fast, high bandwidth
and low latency connections, such as within the same data center,
compression may actually slow down the transmission time and consume
unnecessary CPU resources.
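The normal-versus-aggressive trade-off can be demonstrated with Python's zlib (DEFLATE, the algorithm underlying Gzip). Exact ratios depend on the content; the approximate 75% and 80% figures above are typical for HTML-like text, not guarantees:

```python
# Compare "normal" (level 1) and "aggressive" (level 6) DEFLATE compression
# on a repetitive HTML-like body. Higher levels compress further at a higher
# CPU cost.
import zlib

text = b"<div class='item'>hello world</div>\n" * 500  # repetitive text body

normal = zlib.compress(text, 1)      # fast, lighter compression
aggressive = zlib.compress(text, 6)  # slower, tighter compression

# Aggressive output is at most as large as normal, and both beat the original.
assert len(aggressive) <= len(normal) < len(text)
print(len(text), len(normal), len(aggressive))
```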
HTTP Caching
NSX Advanced Load Balancer can cache HTTP content, thereby enabling faster page load times
for clients and reduced workloads for both servers and NSX Advanced Load Balancer. When a
server sends a response, such as logo.jpg, NSX Advanced Load Balancer can add the object to its
cache and serve it to subsequent clients that request the same object. This can reduce the number
of connections and requests sent to the server.
Enabling caching and compression allows NSX Advanced Load Balancer to compress text-based
objects and store both the compressed and original uncompressed versions in the cache.
Subsequent requests from clients that support compression will be served from the cache,
meaning that NSX Advanced Load Balancer does not need to compress every object every time, which
greatly reduces the compression workload.
Note Regardless of the configured caching policy, an object can be cached only if it is eligible for
caching. Some objects may not be eligible for caching.
Field Description
X-Cache NSX Advanced Load Balancer will add an HTTP header labeled X-Cache for
any response sent to the client that was served from the cache. This header is
informational only and will indicate the object was served from an intermediary
cache.
Age Header NSX Advanced Load Balancer will add a header to the content served from the
cache that indicates to the client the number of seconds that the object has been
in an intermediate cache. For example, if the originating server declared that the
object should expire after 10 minutes and it has been in the NSX Advanced Load
Balancer cache for 5 minutes, then the client will know that it should only cache
the object locally for 5 more minutes.
Date Header If a date header was not added by the server, then NSX Advanced Load Balancer
will add a date header to the object served from its HTTP cache. This header
indicates to the client when the object was originally sent by the server to the
HTTP cache in NSX Advanced Load Balancer.
Cacheable Object Size The minimum and maximum size of an object (image, script, and so on) that can
be stored in the NSX Advanced Load Balancer HTTP cache, in bytes. Most objects
smaller than 100 bytes are web beacons and should not be cached despite being
image objects.
Cache Expire Time An intermediate cache must be able to guarantee that it is not serving stale
content. If the server sends headers indicating how long the content can be
cached (such as cache control), then NSX Advanced Load Balancer will use
those values. If the server does not send expiration timeouts and NSX Advanced
Load Balancer is unable to make a strong determination of freshness, then NSX
Advanced Load Balancer will store the object for no longer than the duration of
time specified by the Cache Expire Time.
Heuristic Expire If a response object from the server does not include the Cache-Control header
but does include an If-Modified-Since header, then NSX Advanced Load Balancer
will use this time to calculate the cache-control expiration, which will supersede
the Cache Expire Time setting for this object.
Cache URL with Query Arguments This option allows caching of objects whose URI includes a query argument.
Disabling this option prevents caching these objects. When enabled, the request
must match the URI query to be considered a hit. Below are two examples of URIs
that include queries. The first example may be a legitimate use case for caching
a generic search, while the second may be a unique request posing a security
liability to the cache.
n www.search.com/search.asp?search=caching
n www.foo.com/index.html?loginID=User
Cacheable MIME Types Statically defines a list of cacheable objects. This may be a string group, such as
System-Cacheable-Resource-Types, or a custom comma-separated list of MIME
types that NSX Advanced Load Balancer should cache. If no MIME types are listed
in this field, then NSX Advanced Load Balancer will by default assume that any
object is eligible for caching.
Non-Cacheable MIME Types Statically define a list of objects that are not cacheable. This creates a blacklist that
is the opposite of the cacheable list.
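As a worked example of the Age header arithmetic described above (origin max-age of 600 seconds, 300 seconds already spent in the intermediate cache), the function name is illustrative:

```python
# Age header math: the client may keep a cached object for the origin's
# max-age minus the time it has already spent in the intermediate cache.
def remaining_freshness(max_age, age):
    return max(max_age - age, 0)   # never negative; 0 means already stale

# Origin allowed 10 minutes; object sat in the cache for 5 minutes.
print(remaining_freshness(600, 300))  # 300 seconds of freshness left
```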
HTTP DDoS
The Distributed Denial of Service (DDoS) section allows the configuration of mitigation controls for
HTTP and the underlying TCP protocols. By default, NSX Advanced Load Balancer is configured
to protect itself from a number of types of attacks. For instance, if a virtual service is targeted by a
SYN flood attack, NSX Advanced Load Balancer will activate SYN cookies to validate clients before
opening connections. Many of the options listed below are not quite as straightforward, as bursts
of data may be normal for the application. NSX Advanced Load Balancer provides a number of
knobs to modify the default behavior to ensure optimal protection.
In addition to the DDoS settings described below, NSX Advanced Load Balancer also can
implement connection limits to a virtual service and a pool, configured through the Advanced
properties page. Virtual services also may be configured with connection rate limits and burst
limits in the Network Security Policies section. Because these settings apply to individual virtual
services and pools, they are not configured within the profile.
HTTP Limits
The first step in mitigating HTTP-based denial of service attacks is to set parameters for the
transfer of headers and requests from clients. Many of these settings protect against variations of
HTTP SlowLoris and SlowPOST attacks, in which a client opens a valid connection then very slowly
streams the request headers or POSTs a file. This type of attack is intended to overwhelm the
server (in this case the SE) by tying up buffers and connections.
Clients that exceed the limits defined below will have that TCP connection reset and a log
generated. This does not prevent the client from initiating a new connection and does not
interrupt other connections the same client might have open.
Field Description
Client Header Timeout Set the maximum length of time the client is allowed for successfully
transmitting the complete headers of a request. The default is 10 seconds.
HTTP Keep-alive Timeout Set the maximum length of time an HTTP 1.0 or 1.1 connection may be idle.
This affects only client-to-NSX Advanced Load Balancer interaction. The
NSX Advanced Load Balancer-to-server keep-alive is governed through the
Connection Multiplex feature.
Client Body Timeout Set the maximum length of time for the client to send a message body. This
usually affects only clients that are POSTing (uploading) objects. The default
value of 0 disables this timeout.
Post Accept Timeout Once a TCP three-way handshake has been completed, the client has this
much time to send the first byte of the request header. Once the first byte
has been received, this timer is satisfied and the client header timeout
(described above) kicks in.
Send Keep-Alive header Check this to send the HTTP keep-alive header to the client.
Use App Keep-Alive Timeout When the above parameter is checked so that keep-alive headers are
sent to the client, a timeout value must be specified. If this box
is unchecked, NSX Advanced Load Balancer uses the value specified in
the HTTP Keep-Alive Timeout field. If it is checked, the timeout sent by the
application is honored.
Client Post Body Size Set the maximum size of the body of a client request. This generally limits
the size of a client POST. Setting this value to 0 disables this size limit.
Client Request Size Set the maximum combined size of all the headers in a client request.
Client Header Size Set the maximum size of a single header in a client request.
Rate Limits
This section controls the rate at which clients may interact with the site. Each enabled rate limit has
three settings:
Field Description
Threshold The client has violated the rate limit when the defined threshold of connections,
packets, or HTTP requests has occurred within the specified time period.
Time Period The period of time over which the threshold count is measured.
Action Select the action to perform when a client has exceeded the rate limit. The options
will depend on whether the limit is a TCP limit or an HTTP limit.
n Report Only— A log is generated on the virtual server log page. By default, no
action is taken. However, this option may be used with an alert to generate an
alert action to send a notice to a remote destination or to take action through a
ControlScript.
n Drop SYN Packets — For TCP-based limits, silently discard TCP SYNs from the
client. NSX Advanced Load Balancer also will generate a log. However, during
high volumes of DoS traffic, repetitive logs may be skipped.
n Send TCP RST — Reset client TCP connection attempts. While more graceful
than the Drop SYN Packet option, sending a TCP reset does generate extra
packets for the reset, versus the Drop SYN Packet option which does not
send a client response. NSX Advanced Load Balancer also will generate a log.
However, during high volumes of DoS traffic, repetitive logs may be skipped.
n Close TCP Connection — Resets a client TCP connection for an HTTP rate limit
violation.
n Send HTTP Local Response — The Service Engine will send an HTTP response
directly to the client without forwarding the request to the server. Select the
HTTP status code of the response, and optionally a response page.
n Send HTTP Redirect — Redirect the client to another location.
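The threshold and time-period semantics amount to counting events per client within a sliding window. A minimal sketch of that counting model follows; it illustrates the semantics only, not the product's internal implementation:

```python
# A client violates the limit when more than `threshold` events arrive
# within any window of `period` seconds. allow() returning False means the
# configured Action would be applied.
from collections import deque

class RateLimit:
    def __init__(self, threshold, period):
        self.threshold, self.period = threshold, period
        self.events = {}                       # client -> deque of timestamps

    def allow(self, client, now):
        q = self.events.setdefault(client, deque())
        while q and now - q[0] >= self.period:
            q.popleft()                        # drop events outside the window
        q.append(now)
        return len(q) <= self.threshold

rl = RateLimit(threshold=3, period=10)
results = [rl.allow("203.0.113.7", t) for t in (0, 1, 2, 3, 12)]
print(results)  # [True, True, True, False, True]
```

The fourth request lands inside a window that already holds three events, so it violates the limit; by t=12 the early events have aged out and the client is allowed again.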
Rate Limit Connections from a Client Rate limit all connections made from any single client IP
address to the virtual service.
Rate Limit Requests from a Client to all URLs Rate limit all HTTP requests from any single client IP
address to all URLs of the virtual service.
Rate Limit Requests from all Clients to a URL Rate limit all HTTP requests from all client IP addresses to
any single URL.
Rate Limit Requests from a Client to a URL Rate limit all HTTP requests from any single client IP
address to any single URL.
Rate Limit Failed Requests from a Client to all URLs Rate limit all requests from a client for a specified period
of time once the count of failed requests from that
client crosses a threshold for that period. Clients are
tracked based on their IP address. Requests are deemed
failed based on client- or server-side error status codes,
consistent with how NSX Advanced Load Balancer logs
and metrics subsystems mark failed requests.
Rate Limit Failed Requests from all Clients to a URL Rate limit all requests to a URI for a specified period
of time once the count of failed requests to that URI
crosses a threshold for that period. Requests are deemed
failed based on client- or server-side error status codes,
consistent with how NSX Advanced Load Balancer logs
and metrics subsystems mark failed requests.
Rate Limit Failed Requests from a Client to a URL Rate limit all requests from a client to a URI for a
specified period of time once the count of failed requests
from that client to the URI crosses a threshold for that
period. Requests are deemed failed based on client- or
server-side error status codes, consistent with how NSX
Advanced Load Balancer logs and metrics subsystems
mark failed requests.
Rate Limit Scans from a Client to all URLs Automatically track clients and classify them into three
groups: Good, Bad, and Unknown. Clients are tracked
based on their IP address. Clients are added to the
Good group when the NSX Advanced Load Balancer scan
detection system builds a history of requests from the
clients that complete successfully. Clients are added to the
Unknown group when there is insufficient history about
them. Clients with a history of failed requests are added
to the Bad group and their requests are rate limited
with stricter thresholds than those for the Unknown group.
The NSX Advanced Load Balancer scan detection system
automatically tunes itself so that the Good, Bad, and
Unknown client-IP group members change dynamically
with changes in traffic patterns through NSX Advanced
Load Balancer. In other words, if a change to the
website causes mass failures (such as 404 errors) for most
customers, NSX Advanced Load Balancer adapts and does
not mark all clients as attempting to scan the site.
Rate Limit Scans from all Clients to all URLs Similar to the previous limit, but restricts the scanning from
all clients as a single entity rather than individually. Once
a limit is collectively reached by all clients, any client that
sends the next failed request will be reset.
Note You can upload any type of file as a local response. It is recommended to configure the local
file using the UI. To update the local file using the API, encode the file in base64 out of band and
use the encoded format in the API.
DNS Profile
A DNS application profile specifies settings dictating the request-response handling by NSX
Advanced Load Balancer.
By default, this profile will set the virtual service’s port number to 53, and the network protocol to
UDP with per-packet parsing.
Field Description
Number of IPs returned by DNS server Specifies the number of IP addresses returned by the DNS
service. Default is 1. Enter 0 to return all IP addresses. Otherwise, the valid range is 1 to 20.
TTL The time in seconds (default = 30) a served DNS response is to be considered valid
by requestors of the DNS service. The valid range is 1 to 86400 seconds.
Subnet prefix length This length is used in concert with the EDNS Client Subnet (ECS) option. When the
incoming request does not have any ECS and the prefix length is specified, NSX
Advanced Load Balancer inserts an ECS option in the request to upstream servers.
Valid lengths range from 1 to 32.
Process EDNS Extensions This option makes the DNS service aware of the Extension Mechanisms for DNS
(EDNS). EDNS extensions are parsed and shown in logs. For GSLB services, the
EDNS client subnet option can be used to influence load balancing.
Negative TTL Specifies the TTL value (in seconds) for the minimum TTL of the SOA (Start of
Authority) record (corresponding to an authoritative domain owned by this DNS
Virtual Service) served by the DNS Virtual Service. Negative TTL is a value in the range 0-86400.
Invalid DNS Query Processing Specifies whether the DNS service should drop or respond to a client when
processing its request results in an error. By default, such a request is dropped
without any response, or passed through to a passthrough pool, if configured.
When set to respond, an appropriate response is sent to the client, for example, an
NXDOMAIN response for non-existent records or an empty NOERROR response for
unsupported queries.
Respond to AAAA queries with empty response Enable this option to have the DNS service respond to AAAA queries with an
empty response when there are only IPv4 records.
Rate Limit Connections from a Client Limits connections made from any single client IP address to the DNS virtual
service to which this profile applies. The default (0) is interpreted as no rate
limiting.
Threshold Specifies the maximum number of connections, requests, or packets processed
within the time specified in the Time Period field (valid values range from 10 to
2500). Exceeding this number results in rate limiting. Specifying a
number higher than 0 makes the Time Period field mandatory.
Time Period The span of time, in seconds, during which NSX Advanced Load Balancer monitors
for an exceeded threshold. The allowed range is 1 to 300. NSX Advanced
Load Balancer takes the specified action if the inbound request rate, that is,
the ratio of the threshold to this time span, is exceeded.
Action Choose one of three actions from the drop-down menu to be performed when rate limiting
is required: Report Only, Drop SYN Packets, or Send TCP RST.
Preserve Client IP Address Enable this option to have the client IP address pass through to the back end. Be
sure you understand what the back-end DNS servers expect and what they do
when offered the client IP address. This option is not compatible with connection
multiplexing.
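The Threshold and Time Period fields together define a fixed-window rate limit (rate = threshold / period). A minimal sketch of the idea, with made-up threshold and period values:

```python
import time

class FixedWindowRateLimiter:
    """Allow at most `threshold` events per `period` seconds per client IP."""

    def __init__(self, threshold=100, period=10):
        self.threshold = threshold
        self.period = period
        self.windows = {}  # client_ip -> (window_start, count)

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.windows.get(client_ip, (now, 0))
        if now - start >= self.period:          # window expired: start fresh
            start, count = now, 0
        count += 1
        self.windows[client_ip] = (start, count)
        return count <= self.threshold          # False -> apply the Action

rl = FixedWindowRateLimiter(threshold=3, period=10)
results = [rl.allow("192.0.2.7", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```

When `allow` returns False, the configured Action (Report Only, Drop SYN Packets, or Send TCP RST) would apply.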
Authoritative Domain Names A comma-delimited set of domain names for which the GSLB DNS’ SEs can provide
authoritative translation of FQDNs to IP addresses. Queries for FQDNs that are
subdomains of these domains and do not have any DNS record in NSX Advanced
Load Balancer are either dropped or an NXDOMAIN response is sent (depending
on the option set for invalid DNS queries, described above). Authoritative domain
names are configured with ends-with semantics.
Note
n All labels in a subdomain and authoritative domain names must be complete. To illustrate
by example, suppose alpha.beta.com, delta.beta.com, delta.eta.com, and gamma.eta.com are
valid FQDNs. If the GSLB DNS is intended to return authoritative responses to queries for each
of the four FQDNs, two authoritative domains must be identified: beta.com and eta.com. It
is not sufficient to stipulate eta.com alone: although alpha.beta.com and delta.beta.com end
with the string eta.com, “eta” does not match the complete label “beta”, so those FQDNs are
not treated as subdomains of eta.com.
n EDNS option is enabled by default for the System-DNS profile. If NSX Advanced Load
Balancer is upgraded from an older version to a newer version, EDNS is not enabled by
default in the existing DNS profile. However, if a new DNS profile is created on the same NSX
Advanced Load Balancer Controller, EDNS is enabled by default.
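The label-aware ends-with semantics described in the note above can be sketched as follows (an illustration of the matching rule, not the product's implementation):

```python
def is_authoritative(fqdn, authoritative_domains):
    """Label-aware ends-with match: the FQDN must equal the domain or be a
    subdomain of it; a partial label (e.g. 'eta' inside 'beta') never matches."""
    labels = fqdn.lower().rstrip(".").split(".")
    for domain in authoritative_domains:
        dlabels = domain.lower().rstrip(".").split(".")
        if len(labels) >= len(dlabels) and labels[-len(dlabels):] == dlabels:
            return True
    return False

print(is_authoritative("alpha.beta.com", ["eta.com"]))   # False: 'eta' != 'beta'
print(is_authoritative("gamma.eta.com", ["eta.com"]))    # True
print(is_authoritative("alpha.beta.com", ["beta.com"]))  # True
```

Comparing whole labels, rather than raw string suffixes, is what prevents eta.com from accidentally matching alpha.beta.com.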
L4 Profile
The L4 Profile is used for any virtual service that does not require application-layer proxying.
Note Using an L4 profile is equivalent to setting the virtual service’s application profile to ‘none’.
Rate limits can be placed on the number of TCP connections or UDP packets made to
the virtual service from a single client IP address.
Field Description
Threshold The maximum number of connections (TCP) or packets (UDP) allowed from a
client. The client has violated the rate limit when this threshold is reached within
the specified time period.
Time Period The span of time, in seconds, over which the threshold is measured.
Action Select the action to perform when a client has exceeded the rate limit.
n Report Only — A log is generated in the virtual service logs page. By default,
no action is taken. However, this option may be used with an alert to generate
an alert action to send a notice to a remote destination or to take action using a
ControlScript.
n Drop SYN Packets — For TCP-based limits, silently discard TCP SYNs from the
client. NSX Advanced Load Balancer also will generate a log. However, during
high volumes of DoS traffic, repetitive logs may be skipped.
n Send TCP RST — Reset client TCP connection attempts. While more graceful
than the Drop SYN Packet option, sending a TCP reset does generate extra
packets for the reset, versus the Drop SYN Packet option which does not
send a client response. NSX Advanced Load Balancer also will generate a log.
However, during high volumes of DoS traffic, repetitive logs may be skipped.
Syslog Profile
The Syslog application profile allows NSX Advanced Load Balancer to decode the Syslog protocol.
This profile sets the virtual service to parse Syslog, and the network profile to UDP
with per-stream parsing.
SIP Profile
SIP profile allows NSX Advanced Load Balancer to process traffic for SIP applications. This profile
defines the transaction timeout allowed for SIP traffic through NSX Advanced Load Balancer.
Configure the timeout within the range of 16 to 512 seconds.
Redirect HTTP to HTTPS
If the virtual service is configured for both HTTP (usually port 80) and HTTPS (usually SSL on port
443), enable HTTP-to-HTTPS redirect through the attached HTTP application profile.
Use the following steps to configure HTTPS redirect through the application profile.
n Navigate to Applications > Virtual Services, select the desired virtual service, click the edit
icon on the top right corner, and navigate to the Profiles section under Settings tab.
n Select the edit option for the attached Application Profile (System-HTTP profile), and
navigate to the Security tab. Under the SSL Everywhere section of this tab, select the HTTP-
to-HTTPS-Redirect check box.
The NSX Advanced Load Balancer also offers the System-Secure-HTTP profile in the
drop-down menu for the Application Profile. This profile is identical to the System-HTTP profile
except that the SSL Everywhere options, which include the HTTP to HTTPS Redirect
option, are already enabled.
n Option 2
Rewrite Server Redirects to HTTPS option is available under the Security tab in Edit Application
Profile screen. This option changes the Location header of redirects from HTTP to HTTPS, and
also removes any hard-coded ports. The following example shows a Location header sent from a
server.
http://www.test.com:5000/index.htm
The NSX Advanced Load Balancer rewrites the Location header, sending the following request to
the client.
https://www.test.com/index.htm
Note
n Absolute redirects are altered, while relative redirects are not. Therefore, it is suggested to
enable both check boxes.
n This profile setting does not have any impact on virtual services that do not have HTTPS
configured.
Enter the desired name for the new rule, select Service Port from the drop-down menu under
Matching Rules, and provide 80 as the value for Ports option.
Optionally, the required criteria can be added to determine when to perform the redirect by
choosing between Is and Is not options and specifying one or more ports.
Note When redirecting to the same virtual service, you must specify a match criteria to prevent a
redirect loop.
Under the Action section, select Redirect from the drop-down menu. Set the Protocol to HTTPS.
This sets the redirect Port to 443 and the redirect response code (Status Code) to 302 (temporary
redirect).
HTTP Request Policies are quick and easy to set up, and impact only a single virtual service at a
time. For more information on the usage of HTTP request policy, see HTTP Request Policy.
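The port-80 match plus 302 redirect described above behaves like the following minimal sketch (an illustration of the policy logic, not the product's implementation):

```python
def apply_redirect_policy(request):
    """Match service port 80 and answer with a 302 redirect to HTTPS."""
    if request["port"] == 80:                   # Matching Rules: Service Port is 80
        location = "https://" + request["host"] + request["path"]
        return {"status": 302, "headers": {"Location": location}}
    return None                                 # no match: process normally

resp = apply_redirect_policy({"port": 80, "host": "www.test.com", "path": "/index.htm"})
print(resp["status"], resp["headers"]["Location"])
# 302 https://www.test.com/index.htm
```

Requests that already arrive on the HTTPS port fall through the policy untouched, which is why the match criteria prevent a redirect loop.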
Adding a Query
The field add_string is used for redirect action in the HTTP Request policy. When the field
keep_query is enabled, the query parameters of the incoming request are used in the final
redirect URI. The field add_string appends the query string to the Redirect URI. To understand
how keep_query and add_string work, consider the following example.
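A sketch of the combined behavior, with illustrative values (the composition order of the carried-over query and the appended string is an assumption):

```python
def build_redirect_uri(incoming_query, redirect_path, keep_query, add_string):
    """Compose the final redirect URI from keep_query and add_string."""
    parts = []
    if keep_query and incoming_query:
        parts.append(incoming_query)        # carry over the client's query params
    if add_string:
        parts.append(add_string)            # append the configured query string
    query = "&".join(parts)
    return redirect_path + ("?" + query if query else "")

# Incoming request: /search?user=alice ; redirect target: https://example.com/search
print(build_redirect_uri("user=alice", "https://example.com/search",
                         keep_query=True, add_string="src=legacy"))
# https://example.com/search?user=alice&src=legacy
```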
Using DataScript
For maximum granularity and reusability, use a DataScript to specify the redirect behavior.
DataScripts can be used for both basic and complex requirements. Use the following steps to
configure HTTPS redirect using a DataScript.
n Navigate to Applications > Virtual Service and click the edit icon for the desired virtual
service.
n Provide a name for the script. Under Events tab, click Add and choose HTTP Request from
the drop-down menu.
n Paste the following text in the space provided and click Save.
For more information on using DataScript for redirecting HTTP to HTTPS, see DataScript for HTTP
Redirect.
Using NSX Advanced Load Balancer as the endpoint for SSL enables it to maintain full visibility
into the traffic and also to apply advanced traffic steering, security, and acceleration features. The
following deployment architectures are supported for SSL:
n None: SSL traffic is handled as pass-through (layer 4), flowing through NSX Advanced Load
Balancer without terminating the encrypted traffic.
n Client-side: Traffic from the client to NSX Advanced Load Balancer is encrypted, with
unencrypted HTTP to the back-end servers.
n Server-side: Traffic from the client to NSX Advanced Load Balancer is unencrypted HTTP, with
encrypted HTTPS to the back-end servers.
n Both: Traffic from the client to NSX Advanced Load Balancer is encrypted and terminated at
NSX Advanced Load Balancer, which then re-encrypts traffic to the back-end server.
n Intercept: Terminate client SSL traffic, send it unencrypted over the wire for taps to intercept,
then encrypt to the destination server.
SSL Profile
The profile contains the settings for the SSL-terminated connections. This includes the list of
supported ciphers and their priority, the supported versions of SSL/TLS, and a few other options.
n SSL Profile
SSL Certificate
An SSL certificate is presented to a client to authenticate the application. A virtual service may be
configured with two certificates at the same time, one each of RSA and elliptic curve cryptography
(ECC). A certificate may also be used for authenticating NSX Advanced Load Balancer to back-end
servers.
n SSL Certificates
SSL Performance
SSL-terminated traffic performance depends on the underlying hardware allocated to the NSX
Advanced Load Balancer SE, the number of SEs available to handle the virtual service, and the
certificate and cipher settings negotiated. Generally, each vCPU core can handle about 1000 RSA
2K transactions per second (TPS) or 2500 ECC SSL TPS. A vCPU core can push about 1 Gb/s
SSL throughput. SSL-terminated concurrent connections are more expensive than straight HTTP
or layer 4 connections and may necessitate additional memory to sustain high concurrency.
n SSL Performance
n SE Memory Consumption
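The per-core figures above give a rough capacity estimate. A sketch of the arithmetic (the per-core numbers come from the text; the workload targets are made-up examples, and real sizing should follow the SSL Performance guide):

```python
import math

RSA2K_TPS_PER_CORE = 1000   # from the rule of thumb above
ECC_TPS_PER_CORE = 2500
GBPS_PER_CORE = 1.0

def vcpus_needed(target_tps, target_gbps, ecc=False):
    """Estimate vCPU cores for an SSL-terminated workload."""
    tps_per_core = ECC_TPS_PER_CORE if ecc else RSA2K_TPS_PER_CORE
    by_tps = target_tps / tps_per_core
    by_throughput = target_gbps / GBPS_PER_CORE
    return math.ceil(max(by_tps, by_throughput))

# Example: 4000 new handshakes/s and 2 Gb/s of SSL throughput.
print(vcpus_needed(4000, 2.0))            # 4 (handshake-bound with RSA 2K)
print(vcpus_needed(4000, 2.0, ecc=True))  # 2 (throughput-bound with ECC)
```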
Additional Topics
SSL is a complicated subject, occasionally requiring redirects, rewrites, and other manipulation
of HTTP to ensure proper traffic flow. NSX Advanced Load Balancer includes a number of useful
tools for troubleshooting and correcting SSL-related issues. They are described in the articles
below:
n SSL Everywhere
NSX Advanced Load Balancer rewrites the client IP address before sending any TCP connection
to the server, regardless of which type of TCP profile is used by a virtual service. Similarly, the
destination address is rewritten from the virtual service IP address to the IP address of the server.
The server always sees the source IP address of the Service Engine. UDP profiles have an option
to disable SE source NAT.
For the UDP and TCP fast path modes, connections occur directly between the client and the
server, even though the IP address field of the packet has been altered. For HTTP applications,
NSX Advanced Load Balancer can insert the client’s original IP address using X-Forwarded-For
(XFF) into an HTTP header sent to the server. For more information, see X-Forwarded-For Header
Insertion.
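Because the server sees the SE's source IP, the client's original address for HTTP applications travels in the XFF header. The header handling can be sketched as follows (an illustration of the convention, not the SE's actual code):

```python
def insert_xff(headers, client_ip):
    """Append the client IP to X-Forwarded-For before proxying to the server."""
    existing = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = (existing + ", " + client_ip) if existing else client_ip
    return headers

print(insert_xff({}, "203.0.113.9"))
# {'X-Forwarded-For': '203.0.113.9'}
print(insert_xff({"X-Forwarded-For": "198.51.100.2"}, "203.0.113.9"))
# {'X-Forwarded-For': '198.51.100.2, 203.0.113.9'}
```

Appending rather than overwriting preserves any addresses added by upstream proxies, so the server can still recover the original client.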
The following profiles are explained in detail with information on how to create them:
n TCP Proxy
n UDP Proxy
TCP Settings
Note This section focuses on the data-plane NICs of the NSX Advanced Load Balancer SEs.
The TCP stack outlined here excludes the SE management NIC and the NSX Advanced Load Balancer
Controller, which rely on a different TCP stack.
[Figure: SE data-plane TCP stack, shown running in the Avi SE on top of Avi Linux and the hypervisor.]
n TCP Proxy
By default, most new virtual services use the System-TCP-Proxy profile, which is configured for
TCP proxy. This is the recommended setting for protocols such as HTTP. Some protocols, such as
DNS, automatically select a different TCP/UDP profile, such as UDP.
On receiving a TCP SYN from the client, the NSX Advanced Load Balancer makes a load-
balancing decision and forwards the SYN and all subsequent packets directly to the server.
The client-to-server communication occurs over a single TCP connection, using the parameters,
sequence numbers, and TCP options negotiated between the client and the server.
n Enable SYN Protection — When disabled, the NSX Advanced Load Balancer performs load
balancing based on the initial client SYN packet. The SYN is forwarded to the server. The
NSX Advanced Load Balancer merely forwards the packets between the client and the
server, leaving servers vulnerable to SYN flood attacks from spoofed IP addresses. With
SYN protection enabled, the NSX Advanced Load Balancer proxies the initial TCP three-way
handshake with the client to validate that the client is not a spoofed source IP address. Once
the three-way handshake has been established, the NSX Advanced Load Balancer replays the
handshake on the server side. After the client and server are connected, it drops back to the
pass through (fastpath) mode. This process is also called delayed binding.
Note Consider using TCP Proxy mode for maximum TCP security.
n Session Idle Timeout — Idle flows terminate (time out) after the specified period. The NSX
Advanced Load Balancer issues a TCP reset to both the client and the server.
Procedure
2 Click Create and select TCP Fast Path from the drop-down list.
NSX Advanced Load Balancer will complete the three-way handshake with the client
before forwarding any packets to the server. It will protect the server from SYN flood
and half open SYN connections.
This is the time for which a connection needs to be idle before it is eligible to be deleted.
6 Click Save.
Disabled by default, the SYN Protection parameter modifies the connection setup
behavior slightly. The client's initial three-way handshake is first proxied by the NSX
Advanced Load Balancer SE. On completion of the three-way handshake, the SE replays this
process on the server side, including passing through the client's supported TCP options.
This enables the NSX Advanced Load Balancer to provide TCP DoS mitigation and validation
of the connection before handing the connection off to the server.
TCP Proxy
The TCP proxy terminates client connections to the virtual service, processes the payload, and
then opens a new TCP connection to the destination server. Any application data from the
client that is destined for a server is forwarded to that server over the new server-side TCP
connection. Separating (or proxying) the client-to-server connections enables the NSX Advanced
Load Balancer to provide enhanced security, such as TCP protocol sanitization and denial of
service (DoS) mitigation.
[Figure: TCP proxy connection flow. The Service Engine completes a three-way handshake (SYN,
SYN+ACK, ACK) with the client, opens a separate three-way handshake with the server, and then
relays the request and response across the two connections.]
The TCP proxy mode also provides better client and server performance, such as maximizing
client and server TCP maximum segment size (MSS) or window sizes independently, and buffering
server responses.
Each connection negotiates the optimal TCP settings for the connecting device. For example,
consider a client connecting to the NSX Advanced Load Balancer with a 1400-byte MTU, while the
server is connected to it with a 1500-byte MTU. In this case, the NSX Advanced Load Balancer
buffers the 1500-byte server responses and sends them back to the client separately as 1400-byte
responses.
If the client connection drops a packet, the NSX Advanced Load Balancer handles re-transmission,
as the server might have already finished the transmission and moved on to handling the next
client request. This optimization is particularly useful in environments with high-bandwidth, low-
latency connectivity to the servers and low-bandwidth, high-latency connectivity to the clients (as
is typical of Internet traffic).
Use a TCP/UDP profile with the type set to Proxy for application profiles such as HTTP.
1 In the New TCP/UDP Profile screen, enter the Name of the network profile.
3 Under TCP Proxy, select the mode (Auto Learn or Custom) to set the configurations for this
profile.
4 Click Save.
TCP Parameters
The NSX Advanced Load Balancer exposes only the configurable parameters of the TCP protocol
that might have tangible benefits on application performance. Additional configuration options are
available through the NSX Advanced Load Balancer CLI or REST API.
Auto Learn
Auto Learn mode sets all parameters to default values and dynamically changes the buffer size.
In practice, many NSX Advanced Load Balancer administrators have found that manual TCP
tweaking is rarely needed. The default TCP profile in NSX Advanced Load Balancer is set to Auto
Learn, and the majority of customers might never have to deviate from this top-level setting. This
approach reduces the complexity involved in managing application delivery platforms and
simplifies service consumption by application owners.
With the TCP Proxy profile, enabling Auto Learn makes the NSX Advanced Load Balancer set
the configuration parameters. The NSX Advanced Load Balancer can make changes to the TCP
settings at any point in time. For example, if an SE is running low on memory, it might reduce
buffers or window sizes to ensure application availability.
On selecting the auto learn mode, the default values configured in each field are as shown in the
following table:
Max Retransmissions 8
Buffer Management The receive window advertised to the client and to the
server changes dynamically. It starts small (2 KB) and
can grow when needed, up to 64 MB for a single TCP
connection. The algorithm also takes into account the
amount of memory available in the system and the number
of open TCP connections.
Custom Mode
The custom mode is used to configure the TCP Proxy Settings manually. When the TCP proxy
profile is set to custom, administrators can use the NSX Advanced Load Balancer UI, CLI or REST
API to alter the TCP proxy profile default parameters described in the following section.
Timeout Parameters
Idle Connections - After the period specified by the Idle Duration parameter, the NSX Advanced Load Balancer
terminates the connection. Any packet sent or received over the connection by the SE, client, or
server resets the Idle Duration timer.
n Select either TCP keepalive or Age Out Idle Connections to control the behavior of the idle
connections.
a TCP keepalive: Periodically send a keepalive packet to the client that will reset the idle
duration timer on successful client acknowledgment. The keepalive packet sent from the
SE does not reset the timer.
b Age Out Idle Connections: Terminates the idle connections that have no keep-alive signal
from the client, as specified by the Duration field. The NSX Advanced Load Balancer does
not send out keepalives, though it still honors keepalive packets received from clients or
servers.
n Enter the Idle Duration in seconds (between 5-14400 seconds, or a 0 for an infinite timeout).
This is the time before the TCP connection is eligible to be proactively closed by NSX
Advanced Load Balancer. The timer resets when any packet is sent or received by the client,
server or SE.
Note
n Setting this value higher can be appropriate for long-lived connections that do not use
keepalive packets. Higher settings can also increase the vulnerability of NSX Advanced
Load Balancer to denial of service attacks, as the system will not proactively close out idle
connections.
n The default value for Idle Duration is 600 seconds. The range is 5 - 3600 seconds (0
seconds for an infinite timeout, disabling proactive closing of idle connections).
n When a connection between an SE and a client, or between the SE and a server, is closed, the unique
client or server IP:port + Service Engine IP:port combination (called a 4-tuple) is placed in a TIME_WAIT
state for some time. This 4-tuple cannot be reused until it is determined that there are no more
delayed packets on the network that are still in flight or that are yet to be delivered. The
Time Wait value defines the timeout period before this 4-tuple can be reused. Enter a value
between 500 - 2000 ms, or enable the Ignore Time Wait option to allow NSX Advanced Load
Balancer to immediately reopen the 4-tuple connection if it receives a SYN packet from the
remote IP that matches the same 4-tuple. The default value is 2000 ms.
2 Max SYN Retransmissions - Enter a value between 3 and 8. It is the maximum number of
attempts at retransmitting a SYN packet before giving up. The default value is 8.
Max Segment Size (MSS) is calculated by using the Maximum Transmission Unit (MTU) length for
a network interface. The MSS determines the largest size of data that can be safely inserted into a
TCP packet.
In some environments, the MSS must be smaller than the MTU. For example, traffic between the
NSX Advanced Load Balancer and a client that is traversing a site-to-site VPN might require some
space reserved for padding with encryption data. Click Use Network Interface MTU for Size to
set the MSS based on the MTU size of the network interface. The MSS is set to MTU - 40 bytes to
account for the IP and TCP headers. For an MTU of 1500 bytes, the MSS is set to 1460.
Alternatively, you can enter a custom value in the range 512-9000 bytes.
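The MSS derivation described above is simple arithmetic: 40 bytes are reserved for the IPv4 and TCP headers (without options):

```python
IP_HEADER = 20   # bytes, IPv4 header without options
TCP_HEADER = 20  # bytes, TCP header without options

def mss_from_mtu(mtu):
    """Largest TCP payload that fits in one packet on this interface."""
    return mtu - IP_HEADER - TCP_HEADER

print(mss_from_mtu(1500))  # 1460
print(mss_from_mtu(9000))  # 8960 (jumbo frames)
```

Environments that add tunnel or VPN overhead would subtract that overhead from the MTU first, which is why a smaller custom MSS is sometimes required.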
The following parameters can be configured through the NSX Advanced Load Balancer CLI
and REST API.
a Enabled — 10x.
b Disabled — 1x the size of the MSS.
c Default value is Disabled.
2 CC Algo - The congestion control algorithm governs the behavior for identifying and
responding to detected network congestion. The following are the possible values for the field:
a New Reno — A versatile TCP congestion control algorithm for most networks.
b Cubic — Designed for long fat networks (LFN), with high throughput and high latency.
c HTCP — Recommended only for high-throughput and high-latency networks.
d Default value is New Reno.
3 Congestion Recovery Scaling Factor - Defines the congestion window scaling factor
after recovery, and is used in conjunction with aggressive congestion avoidance. It can be in the
range 0 to 8 and defaults to 2.
4 Min Rexmt Timeout - TCP has built-in logic for ensuring that packets are received by the
remote device, failing which the sender re-transmits the packets. This parameter sets the
minimum time to wait before re-transmitting a packet. The value can be in between 50 and
5000 ms.
5 Reassembly Queue Size - Defines the size of the buffer used to reassemble TCP segments
that have arrived out of order, that is, the maximum number of TCP segments
that can be queued for reassembly. Lower values might lead to issues in downloading large
content or handling bulk traffic. The value can be between 0 and 5000. The default value is 0
(unlimited queue size).
6 Reorder Threshold - Controls the number of duplicate ACKs required to trigger a
retransmission. A higher value means fewer retransmissions caused by packet
reordering. If out-of-order packets are common in the environment, use a
higher number. The value can be between 1 and 100. The default value is 8 for public clouds
(for example, AWS, Azure, GCP) and 3 for others.
7 Slow Start Scaling Factor - Congestion window scaling factor during slow start. It is
different from the window scaling factor. This parameter is in effect only when aggressive
congestion avoidance is enabled. The field value can be between 0 and 8. The default value is 1.
8 Time Wait Delay - The time to wait before closing a connection in the TIME_WAIT state.
The field can take the following values:
c Default — 0.
There are a few more optimization parameters that are enabled by default in the NSX Advanced
Load Balancer TCP stack that cannot be changed by users. These parameters are described in the
following section.
Unalterable Parameters
1 Window Scaling Factor - Window scaling determines the amount of TCP data the receiver
(that is, the SE) can buffer for a connection. The default initial window is 65535 bytes. For modern
TCP clients supporting this TCP extension, the window scaling factor increases this number
significantly by doubling the window size x times (where x is the scale factor).
This is helpful for networks with high latency and high throughput, which describes most
broadband Internet connections. The NSX Advanced Load Balancer window scale factor is 10,
which means it can buffer up to 67,107,840 bytes for a connection when the receive window is
set to 65535.
2 Selective ACK - With selective acknowledgments, the data receiver can inform the sender
about all segments that have arrived successfully. So the sender needs to only re-transmit the
segments that have actually been lost. Consider the scenario where the first five packets are
successfully received, the sixth packet is lost and is not yet received and the packets seven
to ten are successfully received. In this case, without SACK, the sender would re-transmit all
packets starting from packet six, since it cannot figure out which packets were actually lost.
This would lead to unnecessary re-transmits, further consuming bandwidth and impacting TCP
performance. The value for this field is Enabled.
3 Limited Transmit Recovery - This parameter is used to more effectively recover lost
segments when the congestion window of a connection is small, or when a large number
of segments are lost in a single transmission window. The limited transmit algorithm allows
sending a new data segment in response to each of the first two duplicate acknowledgments
that arrive at the sender. Transmitting these segments increases the probability that TCP can
recover from a single lost segment using the fast re-transmit algorithm, instead of using a
costly re-transmission timeout. The value for this field is Enabled.
4 Delayed ACK - Instead of sending one ACK segment per data segment received, the NSX
Advanced Load Balancer can improve efficiency by sending delayed ACKs. This is part of TCP
congestion control. As per the RFC, the delay before sending an ACK is less than 0.5 seconds, and
in a stream of full-sized segments, an ACK is sent for at least every second segment.
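The window scaling arithmetic from item 1 above is a left shift of the base window by the scale factor (as defined in RFC 7323):

```python
def scaled_window(base_window=65535, scale_factor=10):
    """Effective receive window in bytes with TCP window scaling."""
    return base_window << scale_factor   # base * 2**scale_factor

print(scaled_window())          # 67107840
print(scaled_window(65535, 0))  # 65535 (scaling disabled)
```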
n If the installation is in an environment using VXLAN or some other type of overlay network
(for example, OpenStack), the MTU must be reduced to accommodate the additional tunnel
headers.
n If the DHCP option sets the MTU to 9000 (jumbo) but the entire infrastructure (switches and
routers) does not support jumbo MTU, which can happen in AWS environments.
configure serviceengineproperties
se_runtime_properties
global_mtu 1500
Overwriting the previously entered value for global_mtu
save
save
Note NSX Advanced Load Balancer SEs support a maximum MTU of 1500 bytes.
SYN flood A form of denial-of-service attack in which an attacker sends
a succession of SYN requests to a target system without
acknowledging the SYN-ACKs. This is done in an attempt
to consume enough server resources to make the system
unresponsive to legitimate traffic. The NSX Advanced Load Balancer starts sending SYN
cookies by default if the TCP table has half-open
connections. There is currently no configuration to exempt
specific clients from this behavior. In a TCP fastpath
profile, where there is no TCP proxying, SYN protection
can be enabled, causing the NSX Advanced Load
Balancer to delay establishing a TCP session with the
server until a complete three-way handshake with the
client has taken place. This protects the server from SYN
flood or half-open states.
LAND attacks This acts like a SYN flood attack. The difference is that the source
and destination IP addresses are identical, which makes the IP
stack process the same packet over and over again, potentially
leading to a crash of the victimized system. When this attack is detected, the NSX Advanced Load
Balancer drops the packets at the dispatcher layer.
Port scan An attacker launches a port scan by sending TCP packets on
various ports to find listening ports for the next level of attacks.
Most of these ports are non-listening ports. When this attack is detected, the NSX Advanced Load
Balancer drops the packets at the dispatcher layer.
Procedure
1 In the New TCP/UDP Profile: screen, enter the Name of the network profile.
a Enabling NAT Client IP Address (SNAT) performs source NAT for all client UDP packets.
NAT Client IP Address (SNAT): By default, NSX Advanced Load Balancer translates the
source IP address of the client to an IP address of the Avi SE. This can be disabled for
connectionless protocols which do not require server response traffic to traverse back
through the same SE. For example, a syslog server will silently accept packets without
responding. Therefore, there is no need to ensure response packets route through the
same SE. When SNAT is disabled, it is recommended to ensure the session idle timeout is
kept to a lower value.
b Enable Per-Packet Load Balancing to consider every UDP packet as a new transaction.
When disabled, packets from the same client source IP and port are sent to the same
server.
Per-Packet Load Balancing: By default, NSX Advanced Load Balancer treats a stream of
UDP packets from the same client IP:Port as a session, making a single load balancing
decision and sending subsequent packets to the same destination server. For some
application protocols, each packet should be treated as a separate session that can be
uniquely load balanced to a different server. DNS is one example where enabling per-
packet load balancing causes NSX Advanced Load Balancer to treat each packet as an
individual session or request.
c Enter the Session Idle Timeout (between 2-3600 seconds). It is the amount of time a flow
needs to be idle before it is deleted.
Session Idle Timeout: Idle UDP flows terminate (time out) after a specified time period.
Subsequent UDP packets could be load balanced to a new server unless a persistence
profile is applied.
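The SNAT, per-packet load balancing, and idle-timeout behaviors described above can be sketched as a small UDP flow table. This is purely an illustration of the described behavior, with hypothetical names, not NSX Advanced Load Balancer code:

```python
import time

class UdpFlowTable:
    """Illustrative sketch of per-packet vs. per-session UDP load balancing."""

    def __init__(self, servers, per_packet_lb=False, idle_timeout=10):
        self.servers = servers            # back-end server list
        self.per_packet_lb = per_packet_lb
        self.idle_timeout = idle_timeout  # seconds (2-3600 in the UI)
        self.flows = {}                   # (client_ip, client_port) -> (server, last_seen)
        self.rr = 0                       # round-robin cursor

    def _pick_server(self):
        server = self.servers[self.rr % len(self.servers)]
        self.rr += 1
        return server

    def dispatch(self, client_ip, client_port, now=None):
        now = now if now is not None else time.monotonic()
        if self.per_packet_lb:
            # Each packet is an independent load-balancing decision (e.g., DNS).
            return self._pick_server()
        key = (client_ip, client_port)
        entry = self.flows.get(key)
        if entry and now - entry[1] < self.idle_timeout:
            server = entry[0]             # existing flow: stick to the same server
        else:
            # New flow, or the old flow idled out and was deleted.
            server = self._pick_server()
        self.flows[key] = (server, now)
        return server
```

With per-packet load balancing disabled, packets from the same client IP:port keep hitting the same server until the flow idles out; with it enabled, every packet is a fresh load-balancing decision, which is the behavior wanted for DNS.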
2 Click Save.
UDP Proxy
The UDP proxy profile is currently supported only for SIP applications. This profile maintains separate flows for the front-end and back-end transmissions.
Procedure
1 In the New TCP/UDP Profile screen, enter the Name of the network profile.
2 Enter the Session Idle Timeout (between 2-3600 seconds). This is the amount of time a flow must be idle before it is deleted.
3 Click Save.
For more information, see Configuring VMware NSX Advanced Load Balancer for SIP
Application.
ICAP is supported for HTTP request processing through NSX Advanced Load Balancer. With
the implementation of the ICAP client functionality within the NSX Advanced Load Balancer, the
following use-cases are supported:
n Other request modification options using ICAP services, for example, URL filtering
n Preview functionality
n Streaming of payload
n Content rewrite
The following are the main configuration components for enabling ICAP for a virtual service on NSX Advanced Load Balancer:
n Configuring an HTTP Policy for the virtual service with the action set as Enable ICAP
Navigate to Application > Pool Group and create a pool group. The Fail Action field under the Pool Group Failure Settings must be left empty.
Create an ICAP pool. Configure the default port as 1344. Multiple servers can be added as pool members.
Refer to the following table for the various attributes used in the ICAP profile configuration:
Field Description
Request Buffer Size Maximum buffer size for the request body. Default: 51200 (50 MB)
Enable ICAP Preview Enables the ICAP preview functionality, where the ICAP server can make decisions by examining the preview-size payload. Default: Enabled (Boolean)
Preview Size Payload size for the ICAP preview. Default: 5000 (5 MB)
Response Timeout When this threshold is hit, the request is handled as an error and the failure action is executed. Default: 60000 (60 seconds)
Slow Response Warning Threshold When this threshold is hit, the request causes a significant log entry, but is still served. Default: 10000 (10 seconds)
Actions
Failure Action Handling of an error with the ICAP server. If Fail Closed, a 503 is sent when an error occurs. Fail Closed / Fail Open
Large Upload Failure Action Handling of a size-exceeded error. If Fail Closed, a 413 is sent. Fail Closed / Fail Open
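The interplay of Response Timeout, Slow Response Warning Threshold, and Failure Action can be sketched as follows. The function and names are hypothetical; the thresholds mirror the defaults in the table above:

```python
def handle_icap_latency(latency_ms, server_error=False, fail_open=False,
                        response_timeout_ms=60000, slow_warn_ms=10000):
    """Sketch of Response Timeout, Slow Response Warning, and Failure Action.

    Returns (disposition, http_status) describing how the original client
    request is handled. Not product code; names are illustrative.
    """
    if server_error or latency_ms >= response_timeout_ms:
        # The ICAP exchange failed: the configured failure action decides
        # what happens to the client request.
        if fail_open:
            return ("served_without_icap", None)   # Fail Open
        return ("rejected", 503)                   # Fail Closed -> 503 to client
    if latency_ms >= slow_warn_ms:
        # Request is still served, but the log entry is marked significant.
        return ("served_with_warning_log", None)
    return ("served", None)
```

The same fail-open/fail-closed split applies to the Large Upload Failure Action, except that a 413 is returned in the fail-closed case.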
Navigate to Virtual Service > Edit > ICAP Profile or Templates > Profiles > ICAP Profile to create an ICAP profile.
Navigate to Application > Virtual Service, select the required virtual service, and select the ICAP profile created in the previous step.
Create a security policy to define the rules based on which the ICAP scanning should be
performed. Navigate to Application > Virtual Service, select the desired virtual service, and
click Edit. Select Policies > HTTP Security, and create a new rule with the following options:
Note The rule name configured in this step will appear in the logs, so it is recommended to
make it self-explanatory for ease of troubleshooting.
With these steps, the ICAP configuration for the virtual service is complete. Incoming requests
on the virtual service that match the rule or the match criteria of the HTTP security policy will
use ICAP.
NSX Advanced Load Balancer supports the following ICAP servers (third-party AV/malware and CDR vendors):
n OPSWAT
Limitations
The following are the limitations for ICAP support on NSX Advanced Load Balancer:
n ICAP Server
n INLINE ANALYSIS
n X-LASTLINE HEADER
n LASTLINE LOGO
The following are the blocking types available on NSX Defender. For more information, see the NSX Defender documentation.
n PASSIVE - No blocking is attempted on this type of file, but any relevant content will be
analyzed.
n SENSOR-KNOWN - Block all artifacts known to be malicious by the Sensor (listed in its local
cache). This method offers the lowest levels of protection but ensures minimal lag.
n MANAGER-KNOWN - Block all artifacts known to be malicious by the NSX Defender Manager.
These data are listed in the Manager cache and shared across all managed appliances.
n FULL - This mode allows the proxy to stall an ICAP request for as long as necessary to
provide a verdict on the file, within the limits set by the ICAP timeout. Depending on the client
implementation, this can cause the transaction to appear as unresponsive for a long time (in
the order of minutes in some cases).
This blocking mode is particularly suitable for the integration with third-party proxies that
implement mechanisms to improve the user experience. Such mechanisms can include data
trickling or “patience pages”, providing feedback to the user.
n FULL WITH FEEDBACK - This mode will generate “patience pages” that provide feedback to
the user on the analysis progress. These mechanisms have been tested exclusively with the
squid proxy. They can lead to unwanted side-effects when using third-party proxies, which
can implement caching mechanisms that disrupt the NSX Defender operation. Such third party
proxies often implement their own mechanisms to improve user experience and therefore can
perform better with the Full blocking mode.
n Service URI - This needs to be set to /lastline to use the NSX Defender service.
n ICAP Pool - ICAP pool needs to point to NSX Defender ICAP server:port.
n Status URL - Only applicable to NSX Defender and has a default value of https://
user.lastline.com/portal#/analyst/task/$uuid/overview.
All the remaining configuration options are generic and not tied to any particular ICAP server.
n X-Lastline-Status - Provides information on the state of the object at the time of analysis. The
following values are possible:
n new - The specific file hash has not been recently analyzed by NSX Defender and a score is
not currently available.
n known - The specific file is known, and a score is associated with it.
n timeout - The process reached its timeout while waiting for the analysis of the file.
n X-Lastline-Score — The score currently associated with the file, if known, is expressed as a
value between 0 and 100.
n X-Lastline-Task — The NSX Defender task UUID associated with the analysis of the file. It is
possible to use this UUID to access the analysis details from the NSX Defender Portal/Manager
Web UI. The following is the REST API to access information about any upload using UUID:
https://user.lastline.com/portal#/analyst/task//overview
n X-Infection-Found: Type=0;Resolution=1;Threat=LastlineArtifact(score=XX;md5=;uuid=)
n X-Virus-ID: LastlineArtifact(score=100;md5=;task_uuid=)
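A client consuming these headers can parse them with a few lines of code. The header block below is a constructed example for illustration, not captured NSX Defender output:

```python
def parse_icap_headers(raw):
    """Parse a CRLF-separated ICAP/HTTP header block into a dict."""
    headers = {}
    for line in raw.split("\r\n"):
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip()] = value.strip()
    return headers

# Constructed example of the X-Lastline-* headers described above:
raw = "X-Lastline-Status: known\r\nX-Lastline-Score: 70\r\nX-Lastline-Task: 1234"
h = parse_icap_headers(raw)
if h.get("X-Lastline-Status") == "known" and int(h.get("X-Lastline-Score", 0)) >= 70:
    print("high score:", h["X-Lastline-Score"], "task:", h["X-Lastline-Task"])
```

Any score threshold used for blocking decisions is a policy choice on the ICAP server side; the value 70 above is only an example.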
Logs for requests handled by the ICAP server have an icap_log section populated.
If the ICAP server blocks or modifies a request, the resulting log entry is marked significant. The
following example shows details of the available logs on NSX Advanced Load Balancer. As shown
under the Response Information, the overall request is blocked, and a 403 response code is sent
back to the client.
n The following log shows an ICAP scan detecting an infection (JSON log file):
"icap_log": {
"action": "ICAP_BLOCKED",
"request_logs": [
{
"icap_response_code": 200,
"icap_method": "ICAP_METHOD_REQMOD",
"http_response_code": 403,
"http_method": "HTTP_METHOD_POST",
"icap_absolute_uri": "icap://100.64.3.15:1344/OMSScanReq-AV ",
"complete_body_sent": true,
"pool_name": {
"val": "ICAP-POOL-GROUP",
"crc32": 1799851903
},
"pool_uuid": "poolgroup-c7dd3b93-60c1-4190-b6d6-26c22d55dc30",
"latency": "1275",
"icap_headers_sent_to_server": "Host: 100.64.3.15:1344\r\nConnection:
close\r\nPreview: 653\r\nAllow: 204\r\nEncapsulated: req-hdr=0, req-body=661\r\n",
"icap_headers_received_from_server": "Date: Thu, 19 Nov 2020 13:55:00
GMT\r\nServer: Metadefender Core V4\r\nISTag: \"001605794100\"\r\nX-ICAP-Profile:
File process\r\nX-Response-Info: Blocked\r\nX-Response-Desc: Infected\r\nX-Blocked-Reason:
Infected\r\nX-Infection-Found: Type=0",
"action": "ICAP_BLOCKED",
"reason": "Infected",
"threat_id": "EICAR-Test-File (not a virus)"
}]
},
n The following is the log entry when the ICAP server modifies the ICAP request:
n The following log shows that the ICAP scan is performed successfully. The action field for the
icap_log exhibits the value as ICAP_PASSED.
{"icap_log":
{"action": "ICAP_PASSED", "request_logs":
[{
"icap_response_code": 204,
"icap_method": "ICAP_METHOD_REQMOD",
"http_method": "HTTP_METHOD_POST",
"icap_absolute_uri":
"icap://100.64.3.15:1344/OMSScanReq-AV ",
"complete_body_sent": true,
"pool_name": {"val": "ICAP-POOL-GROUP", "crc32": 1799851903},
"pool_uuid": "poolgroup-c7dd3b93-60c1-4190-b6d6-26c22d55dc30",
"latency": "456",
"icap_headers_sent_to_server": "Host: 100.64.3.15:1344\r\nConnection:
close\r\nPreview: 0\r\nAllow: 204\r\nEncapsulated: req-hdr=0, null-body=661\r\n",
"icap_headers_received_from_server": "Date: Wed, 18 Nov 2020 12:54:06
GMT\r\nServer: Metadefender Core V4\r\nISTag: \"000000000096\"\r\nX-Response-Info:
Allowed\r\nEncapsulated: null-body=0\r\n", "action": "ICAP_PASSED"}]}
n The log entries will show the action for icap_log as ICAP_DISABLED if the ICAP feature is not
enabled.
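When processing these log entries programmatically, the icap_log action field is the first thing to check. A minimal sketch, with field names taken from the examples above:

```python
import json

def summarize_icap_log(entry_json):
    """Summarize an icap_log entry: overall action plus per-request details."""
    icap_log = json.loads(entry_json)["icap_log"]
    summary = {"action": icap_log["action"], "requests": []}
    for req in icap_log.get("request_logs", []):
        summary["requests"].append({
            "icap_response_code": req.get("icap_response_code"),
            "latency_ms": int(req.get("latency", 0)),  # latency is logged as a string
            "reason": req.get("reason"),               # present when blocked
        })
    return summary

# Trimmed-down entry in the shape of the examples above:
entry = ('{"icap_log": {"action": "ICAP_BLOCKED", "request_logs": '
         '[{"icap_response_code": 200, "latency": "17", "reason": "Infected"}]}}')
print(summarize_icap_log(entry))
```

An action of ICAP_PASSED, ICAP_BLOCKED, or ICAP_DISABLED then distinguishes clean, blocked, and unscanned traffic.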
Log Analytics
When ICAP is enabled, the log analytics on NSX Advanced Load Balancer provides an additional
overview. All data items are clickable and allow the quick addition of filters for a detailed log view.
Troubleshooting
ICAP Server Connection Failed: The following example shows a log error message for a failed
ICAP server connection. The ICAP Error is logged against the Significance field. To solve this
issue, check the direct connectivity from the SEs to the ICAP servers.
ICAP Server Error: The following example shows an ICAP request that is blocked. A misconfiguration of the ICAP server will show the action for the ICAP log as ICAP_BLOCKED. The reason for the action is No security rule matched, as available in the ICAP header.
"icap_log":
{"action": "ICAP_BLOCKED",
"request_logs":
[{
"icap_response_code": 200,
"icap_method": "ICAP_METHOD_REQMOD",
"http_response_code": 403,
"http_method": "HTTP_METHOD_POST",
"icap_absolute_uri": "icap://100.64.3.15:1344/OMSScanReq-AV ",
"complete_body_sent": true, "pool_name": {"val": "ICAP-POOL-GROUP", "crc32":
1799851903}, "pool_uuid": "poolgroup-c7dd3b93-60c1-4190-b6d6-26c22d55dc30", "latency": "17",
"icap_headers_sent_to_server": "Host: 100.64.3.15:1344\r\nConnection:
close\r\nPreview: 0\r\nAllow: 204\r\nEncapsulated: req-hdr=0, null-body=661\r\n",
"icap_headers_received_from_server": "Date: Thu, 19 Nov 2020 13:25:15 GMT\r\nServer:
Metadefender Core V4\r\nISTag: \"001605792300\"\r\nX-Response-Info: Blocked\r\nX-Response-
Desc: No security rule matched\r\nEncapsulated: res-hdr=0, res-body=91\r\n", "action":
"ICAP_BLOCKED"}]}
To solve this issue, check the configuration of the ICAP server used for the deployment.
ICAPs
Starting with NSX Advanced Load Balancer 21.1.3, ICAPS (secure ICAP) is supported. ICAP traffic can now be encrypted using SSL.
n To configure ICAPs on NSX Defender, enable Secure ICAP in Proxy configurations as shown
below:
n In NSX Advanced Load Balancer, when configuring a pool for ICAPS, ensure SSL is enabled in the pool that is referred to in the ICAP profile (which has the IPs of the ICAP servers), and configure the default port as 11344.
Starting with NSX Advanced Load Balancer version 21.1.3, ICAP supports HTTP/2 traffic to the virtual service. If the virtual service has HTTP/2 enabled for any port and ICAP is configured, the HTTP/2 traffic will be subjected to the ICAP server.
Server Pools
This section covers the following topics:
n Pools Page
Pools maintain the list of servers assigned to them and perform health monitoring, load balancing,
persistence, and functions that involve NSX Advanced Load Balancer-to-server interaction. A
typical virtual service will point to one pool; however, more advanced configurations may have a
virtual service content switching across multiple pools via HTTP Request Policies or DataScripts. A pool may be referenced by only one virtual service at a time.
Service Engine
Creating a virtual service using the basic method automatically creates a new pool for that virtual
service, using the name of the virtual service with a -pool appended. When creating a virtual
service via the advanced mode, an existing, unused pool may be specified, or a new pool may be
created.
Pools Page
Navigate to Applications > Pools to open the pools page. This page displays a high-level overview
of configured pools.
You can create a new pool by clicking CREATE POOL, or edit the pool by clicking the pencil icon.
The following information is displayed for each pool. The columns shown may be modified using the sprocket icon in the top right of the table:
Field Description
Name Lists the name of each pool. Clicking the name opens the
Analytics tab of the Pool Details page.
Virtual Service The VS the pool is assigned to. Clicking a name in this
column opens the VS Analytics tab of the Virtual Service
Details page. If no virtual service is listed, this pool is
considered unused.
n Analytics
n Logs
n Health
n Servers
n Events
n Alerts
n End-to-End Timing
n Metric Tiles
n Chart Pane
n Overlays Pane
n Anomalies
n Alerts
n Config Events
n System Events
It may be helpful to compare the end-to-end time against other metrics, such as throughput,
to see how traffic increases impact the ability of the application to respond. For instance, if
new connections double but the end-to-end time quadruples, you may need to consider adding
additional servers.
From left to right, this pane displays the following timing information:
Field Description
App Response The time the servers take to respond. This includes the
time the server took to generate content, potentially
fetch back-end database queries or remote calls to other
applications, and begin transferring the response back to
NSX Advanced Load Balancer. This time is calculated by
subtracting the Server RTT from the time of the first byte
of a response from the server. If the application consists of
multiple tiers (such as web, applications, and database),
then the App Response represents the combined time
before the server in the pool began responding. This
metric is only available for a layer 7 virtual service.
Data Transfer Represents the average time required for the server to
transmit the requested file. This is calculated by measuring
from the time the Service Engine received the first byte of
the server response until the client has received the last
byte, which is measured as the time the last byte was sent from the Service Engine plus one half of a client round-trip time. This number may vary greatly depending on the
size of objects requested and the latency of the server
network. The larger the file, the more TCP round trip times
are required due to ACKs, which are directly impacted by
the client RTT and server RTT. This metric is only used for
a Layer 7 virtual service.
Total Time Total time from when a client sent a request until they
receive the response. This is the most important end-to-
end timing number to watch, because it is the sum of the
other four metrics. As long as it is consistently low, the
application is probably successfully serving traffic.
Pool Metrics
The sidebar metrics tiles contain the following metrics for the pool. Clicking any metric tile will
change the main chart pane to show the chosen metric.
Field Description
End to End Timing Shows the total time from the pool’s End to End Timing
graph. To see the complete end-to-end timing, including
the client latency, refer to Analytics tab of the Virtual
Service Details page, which includes the client to Service
Engine metric.
n Hovering the mouse over any point in the chart will display the results for that selected time in
a popup window.
n Clicking within the chart will freeze the popup at that point in time. This may be useful when
the chart is scrolling as the display updates over time.
Many charts contain radio buttons in the top right that allow customization of data that should be
included or excluded from the chart. For instance, if the End to End Timing chart is heavily skewed
by one very large metric, then deselecting that metric by clearing the appropriate radio button
will re-factor the chart based on the remaining metrics shown. This may change the value of the
vertical Y-axis.
Some charts also contain overlay items, which will appear as color-coded icons along the bottom
of the chart.
n Each overlay type displays the number of entries for the selected time period.
n Clicking an overlay button toggles that overlay’s icons in the chart pane. The button lists the
number of instances (if any) of that event type within the selected time period.
n Selecting an overlay button displays the icon for the selected event type along the bottom of
the chart pane. Multiple overlay icon types may overlap. Clicking the overlay type’s icon in the
chart pane will bring up additional data below the overlay Items bar. The following overlay
types are available:
n Anomalies — Display anomalous traffic events, such as a spike in server response time,
along with corresponding metrics collected during that time period.
n Alerts — Display alerts, which are filtered system-level events that have been deemed
important enough to notify an administrator.
n Config Events — Display configuration events, which track configuration changes made to
NSX Advanced Load Balancer by either an administrator or an automated process.
n System Events — Display system events, which are raw data points or metrics of interest.
System Events can be noisy, and are best used as the basis of alerts which filter and
classify raw events by severity.
Clicking the Anomalies overlay button displays yellow anomaly icons in the chart pane. Selecting one of these icons within the chart pane brings up additional information in a
table at the bottom of the page. During times of anomalous traffic, NSX Advanced Load Balancer
records any metrics that have deviated from the norm, which may provide hints as to the root
cause of the anomaly.
An anomaly is defined as a metric that has a deviation of 4 sigma or greater across the moving
average of the chart.
Anomalies are not recorded or displayed while viewing with the real-time display period.
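The 4-sigma rule can be illustrated with a rolling mean and standard deviation. This is a conceptual sketch of the definition above, not the product's actual detector:

```python
from statistics import mean, stdev

def find_anomalies(samples, window=10, threshold_sigma=4.0):
    """Flag sample indices deviating more than threshold_sigma from the
    moving average of the preceding window. Illustrative only."""
    anomalies = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # A flat series (sigma == 0) cannot produce a sigma-based anomaly.
        if sigma > 0 and abs(samples[i] - mu) > threshold_sigma * sigma:
            anomalies.append(i)
    return anomalies
```

A sudden spike in, say, server response time stands far outside four standard deviations of the recent history and gets flagged, while normal jitter does not.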
Field Description
Timestamp Date and time when the anomaly was detected. This may
either span the full duration of the anomaly, or merely be
near the same time window.
Type The specific metric deviating from the norm during the
anomaly period. To be included, the metric deviation must
be greater than 4 sigma. Numerous types of metrics, such
as CPU utilization, bandwidth, or disk I/O may trigger
anomalous events.
Entity Type Type of entity that caused the anomaly. This may be one of
the following:
n Virtual Machine (server): these metrics require NSX
Advanced Load Balancer to be configured for either
read or write access to the virtualization orchestrator
such as vCenter or OpenStack. In the example shown
above, CPU utilization of the two servers was learned
by querying vCenter.
n Virtual service
n Service Engine
Alerts may be transitory, meaning that they may expire after a defined period of time. For
instance, NSX Advanced Load Balancer may generate an alert if a server is down and then allow
that alert to expire after a specified time period once the server comes back online. The original
event remains available for later troubleshooting purposes.
Clicking the Alerts icon in the overlay items bar displays red alert icons in the chart pane. Selecting one of these chart alerts will bring up additional information below the
overlay Items bar, which will show the following information:
Field Description
Level Severity of the alert. You can use the priority level to
determine whether additional notifications should occur,
such as sending an email to administrators or sending
a log to Syslog servers. The level may be one of the
following:
n High — Red
n Medium — Yellow
n Low — Blue
Clicking the Config Events icon in the Overlay Items bar displays any blue config
event icons in the chart pane. Selecting one of these chart alerts will bring up additional
information below the Overlay Items bar, which will show the following information:
Field Description
n CONFIG_UPDATE
n CONFIG_DELETE
Expand/Contract Clicking the plus (+) or minus sign (-) for a configuration
event either expands or contracts a sub-table showing
more detail about the event. When expanded, this shows a
difference comparison of the previous configuration versus
the new configuration, as follows:
n Additions to the configuration, such as adding a health
monitor, will be highlighted in green in the new
configuration.
n Removing a setting will be highlighted in red in the
previous configuration.
n Changing an existing setting will be highlighted in
yellow in both the previous and new configurations.
Clicking the system events icon in the overlay items bar displays any purple
system event icons in the Chart Pane. Select a system event icon in the chart pane to bring up
more information below the overlay items bar.
Field Description
Expand/Contract Clicking the plus (+) or minus sign (-) for a system event
expands or contracts that system event to show more
information.
For complete descriptions of logs, refer to the VS Logs page.
Field Description
Performance Score Performance score (1-100) for the selected item. A score of
100 is ideal, meaning clients are not receiving errors and
connections or requests are quickly returned.
Health Score The final health score for the selected item equals the performance score minus the resource and anomaly penalty scores.
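The relationship between these scores can be expressed in a few lines. This sketch follows the formula as described, with hypothetical penalty inputs:

```python
def health_score(performance, resources_penalty=0, anomaly_penalty=0):
    """Health score = performance score minus resource and anomaly penalties.

    All inputs are on a 0-100 scale; the result is clamped to the same range.
    Illustrative only, not the product's exact scoring code.
    """
    score = performance - resources_penalty - anomaly_penalty
    return max(0, min(100, score))
```

A pool with an ideal performance score can still report a low health score when resource or anomaly penalties are high, which is why the first three sidebar tiles are worth inspecting individually.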
The sidebar tiles show the scores of each of the three subcomponents of the health score, plus the
total score. To determine why a pool may have a low health score, select one of the first three tiles
that are showing a sub-par score.
This will bring up additional sub-metrics which feed into the top-level metric/tile selected. Hover
the mouse over a time period in the main chart to see the description of the score degradation.
Some tiles may have additional information shown in the main chart section that requires scrolling
down to view.
Server Page
The Server Page may be accessed by clicking on the server’s name from either the Pool > Servers
page or the Pool > Analytics Servers tile. When viewing the Server Details page, the server
shown is within the context of the pool from which it was selected. In other words, if the server (IP:port) is a member of two or more pools, the stats and health monitors shown are only for the server within
the context of the viewed pool.
Not all metrics within the Server Page are available in all environments. For instance, servers
that are not virtualized or hooked into a hypervisor are not able to have their physical resources
displayed.
The statistics can be viewed as Average Values, Peak Values, or Current Values. To see the highest CPU usage over the past day, change the time to 24 hours and the Value to Peak. This will show the highest stats recorded during the past day.
Field Description
CPU Stats The CPU Stats box shows the CPU usage for this server,
the average during this time period across all servers in
the pool, and the hypervisor host.
Memory Stats The memory Stats box shows the Memory usage for this
server, the average during this time period across all
servers in the pool, and the hypervisor host.
Health Monitor This table shows the name of any health monitors
configured for the pool. The Status column shows the
most current up or down health of the server. The Success
column shows the percentage of health monitors that
passed or failed during the display time frame. Clicking the
plus will expand the table to show more info for a down
server. Refer to Why a Server Can Be Marked Down for
more details.
Main Panel The large panel shows the highlighted metric, similar
to the Virtual Service Details and Pool Details pages.
Overlay Items shows anomalies, alerts, configuration
events, and system events that are related to this server
within the pool.
Field Description
Pool Tile Bar The pool in the top right bar shows the health of the pool.
This can also be used to jump back up to the Pool Page.
Under the pool name is a pull-down menu that enables
quick access to jump to the other servers within the pool.
Metrics Tile Bar The metrics options will vary depending on the hypervisor
NSX Advanced Load Balancer is plugged into. For
non-virtualized servers, the metrics are limited to non-
resource metrics, such as end-to-end timing, throughput,
open connections, new connections, and requests. Other
metrics that may be shown include CPU, memory, and
virtual disk throughput.
Field Description
Search The search field enables you to filter the events using
whole words contained within the individual events.
Field Description
Clear Selected If filters have been added to the Search field, clicking the
Clear Selected (X) icon on the right side of the search bar
will remove those filters. Each active search filter will also
contain an X that you can click to remove the specific filter.
The table at the bottom of the Events tab displays the events that matched the current time
window and any potential filters. The following information appears for each event:
Field Description
Resource Name Name of the object related to the event, such as the pool,
virtual service, Service Engine, or Controller.
Expand/Contract Clicking the plus (+) or minus sign (-) for an event log
either expands or contracts that event log. Clicking the +
and – icons in the table header expands and collapses all
entries in this tab.
For configuration events, expanding the event displays a difference comparison between the previous and new configurations.
Field Description
Search The search field enables you to filter the alerts using whole
words contained within the individual alerts.
Dismiss Select one or more alerts from the table below then click
dismiss to remove the alert from the list.
Alerts are transitory, which means they will eventually expire automatically. They are intended to notify an administrator of an issue, rather than being the definitive record for issues. Alerts are based on events, and the parent event will still be in the Events record.
The table at the bottom of the Alerts tab displays the following alert details:
Field Description
Timestamp Date and time when the alert was triggered. Changing
the time interval using the display pull-down menu may
potentially show more alerts.
Resource Name Name of the object that is the subject of the alert, such as
a Server or virtual service.
Field Description
Expand/Contract Clicking the plus (+) or minus sign (-) for an event log
either expands or contracts that event log to display more
information. Clicking the + and – icon in the table header
expands and collapses all entries in this tab.
Create Pool
The Create Pool popup and the Edit Pool popup share the same interface that consists of the
following tabs:
n Settings
n Servers
n Advanced
n Review
Step 1: Settings
The Create/Edit Pool > Settings tab contains the basic settings for the pool. The exact options
shown may vary depending on the types of clouds configured in NSX Advanced Load Balancer.
For instance, servers in VMware may show an option to “Select Servers by Network”.
Field Description
Default Server Port New connections to servers will use this destination
service port. The default port is 80, unless it is either
inherited from the virtual service (if the pool was created
during the same workflow), or the port was manually
assigned. The default server port setting may be changed
on a per-server basis by editing the Service Port field for
individual servers in the Step2: Servers tab.
Graceful Disable Timeout A time value ranging from 1 to 7,200 minutes used to gracefully disable a back-end server. The virtual service will wait for the specified time before terminating existing connections to disabled servers. Two values are special: 0 causes immediate termination, and -1 (negative one, standing for "infinite") never terminates.
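The special values can be sketched as follows (illustrative only, with a hypothetical function name):

```python
def graceful_disable_deadline(timeout_minutes, now=0.0):
    """Return when existing connections to a disabled server are terminated.

    0  -> immediate termination
    -1 -> never terminate (infinite)
    1..7200 -> now plus the timeout, in seconds
    Illustrative sketch, not product code.
    """
    if timeout_minutes == 0:
        return now                      # terminate existing connections immediately
    if timeout_minutes == -1:
        return float("inf")             # never terminate existing connections
    if not 1 <= timeout_minutes <= 7200:
        raise ValueError("timeout must be -1, 0, or 1-7200 minutes")
    return now + timeout_minutes * 60   # deadline in seconds
```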
AutoScale Policy
Field Description
AutoScale Launch Config If configured, then NSX Advanced Load Balancer will
trigger orchestration of pool-server creation and deletion.
This option is only supported for public cloud autoscale
groups and OpenStack.
Rewrite Host Header to Server Name Rewrite the incoming host header to the name of the
server to which the request is proxied. Enabling this
feature rewrites the host header of requests sent to all
servers in the pool.
Field Description
SSL to Backend Servers Enables SSL encryption between the NSX Advanced Load
Balancer Service Engine and the back-end servers. This
is independent from the SSL option in the virtual service,
which enables SSL encryption from the client to the NSX
Advanced Load Balancer Service Engine.
n SSL Profile: Determines which SSL versions and
ciphers NSX Advanced Load Balancer will support
when negotiating SSL with the server.
n Server SSL Certificate Validation PKI Profile: This
option validates the certificate presented by the
server. When not enabled, the Service Engine
automatically accepts the certificate presented by the
server when sending health checks. Refer to the
PKI Profile section for additional help on certificate
validation.
n Service Engine Client Certificate: When establishing
an SSL connection with a server, either for normal
client-to-server communications or when executing a
health monitor, the Service Engine will present this
certificate to the server.
Enable Real Time Metrics Checking this option enables real-time metrics for server and pool metrics. The default is OFF.
Step 2: Servers
The Servers tab supports the addition/removal/disablement/enablement of servers and displays
the results of those actions.
Add Servers
Servers can be added to a pool using the following methods:
1 IP address, range, or DNS-resolvable name
2 IP group
3 Auto-scaling groups defined by public cloud ecosystems such as Amazon Web Services (AWS) and Microsoft Azure.
Field Description
IP Address, Range, or DNS Name Add one or more servers to the pool using one or more
of the listed methods. The example below shows servers
created using multiple methods.
n Add by IP Address - Into the Server IP Address field
enter the IP address of a server you want to add.
The Add Server button will change from light grey to
green. You may also enter a range of IP addresses via
a dash, such as 10.0.0.1-10.0.0.20.
n Add by DNS Resolvable Name - Into the Server
IP Address field enter the FQDN of the server you
want to add. If the server successfully resolves, the
IP address will appear and the Add Server button will
change to green. Click the Add Server button to add
it to the pool server list. See Add Servers by DNS for
more information.
n Select Servers by Network - Clicking Select Servers by Network opens a list of reachable networks from which servers can be selected.
Field Description
Auto Scaling Groups External environments such as AWS and Azure define and manage autoscaling groups of their own. Clicking this option reveals the available autoscaling groups.
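A range entry such as 10.0.0.1-10.0.0.20 expands into individual pool servers. The expansion can be sketched with Python's standard ipaddress module; this is an illustration of the dash syntax described above, not product code:

```python
import ipaddress

def expand_server_range(entry):
    """Expand 'a.b.c.d' or 'a.b.c.d-a.b.c.e' into a list of server IPs."""
    if "-" not in entry:
        return [str(ipaddress.ip_address(entry))]
    start_s, end_s = entry.split("-", 1)
    start = ipaddress.ip_address(start_s.strip())
    end = ipaddress.ip_address(end_s.strip())
    if int(end) < int(start):
        raise ValueError("range end precedes range start")
    # ip_address() accepts an integer, so iterating over the numeric
    # values of the range yields each address in order.
    return [str(ipaddress.ip_address(i)) for i in range(int(start), int(end) + 1)]
```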
Servers
Field Description
Changing Server Status Adding servers to the pool populates the table within
the Servers tab. Use it to remove, enable, disable, or
gracefully disable servers. Changes to server status take
effect immediately when changes are saved. The table
below shows two servers have been enabled.
Step 3: Advanced
The Advanced tab of the Pool Create/Edit popup specifies optional settings for the pool.
Field Description
Pool Full Settings This section configures HTTP request queuing, which
causes NSX Advanced Load Balancer to queue
requests that are received after a back-end server has
reached its maximum allowed number of concurrent
connections. Queuing HTTP requests provides time for
new connections to become available on the server, thus
avoiding the configured pool-down action. For complete
details, refer to the HTTP Request Queueing article.
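The queuing behavior described above can be sketched as follows. This is a minimal illustration in Python, not the product's implementation; the class and method names are hypothetical:

```python
from collections import deque

class RequestQueue:
    """Minimal sketch of HTTP request queuing for a full pool."""

    def __init__(self, max_length: int):
        self.max_length = max_length
        self.pending = deque()

    def enqueue(self, request) -> bool:
        """Queue a request while all servers are at their connection limit.
        Returns False when the queue itself is full, in which case the
        configured pool-down action applies instead."""
        if len(self.pending) >= self.max_length:
            return False
        self.pending.append(request)
        return True

    def dispatch(self):
        """Hand the oldest queued request to a freed-up server connection."""
        return self.pending.popleft() if self.pending else None

q = RequestQueue(max_length=2)
q.enqueue("r1")
q.enqueue("r2")
q.enqueue("r3")  # rejected: queue full, pool-down action applies
q.dispatch()     # "r1" goes to the first connection that frees up
```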
Field Description
Pool Failure Settings Fail Action — Three fail actions are defined.
n Close Connection — If all servers in a pool are down,
the default behavior of the virtual service is to close
new client connection attempts by issuing TCP resets
or dropping UDP packets. Existing connections are not
terminated, even though their server is marked down.
The assumption is the server may be slow but may
still be able to continue processing the existing client
connection.
n HTTP Local Response — Returns a simple web page.
Specify a status code of 200 or 503. If a custom HTML
file has not been uploaded to NSX Advanced Load
Balancer, it will return a basic page with the error
code.
n HTTP Redirect - Returns a redirect HTTP response code, including a specified URL.
Field Description
Max Connections per Server Specify the maximum number of concurrent connections
allowed for a server. If all servers in the pool reach this
maximum the virtual service will send a reset for TCP
connections or silently discard new UDP streams unless
otherwise specified in the Pool Down Action, described
above. As soon as an existing connection to the server
is closed, that server is eligible to receive the next client
connection. A value of 0 disables the connection limit.
HTTP Server Reselect This option retries an HTTP request that fails or returns
one of a set of user-specified error codes from the
backend server. Normally, NSX Advanced Load Balancer
forwards these error messages back to the client. For
more information, see HTTP Server Reselect.
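The per-server connection limit described above can be expressed as a one-line check. This is an illustrative sketch, not product code:

```python
def can_accept(active: int, max_connections: int) -> bool:
    """Per-server connection-limit check; a value of 0 disables the limit."""
    return max_connections == 0 or active < max_connections

# A server capped at 100 concurrent connections:
can_accept(99, 100)   # True: one slot left
can_accept(100, 100)  # False: new TCP connections are reset
can_accept(5000, 0)   # True: 0 disables the limit
```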
Step 4: Review
The Review tab displays a summary of the information entered in the previous pool creation tabs.
Review this information and then click Save to finish creating the pool. If needed, you may return
to any previous step by clicking the appropriate tab at the top of the window.
Note The Review tab only displays when creating a new pool; it does not display when editing an
existing pool.
Note This knob can currently be configured only using the CLI/API.
Note This knob can only be enabled if use_service_port (Disable port translation) is set to true,
in which case NSX Advanced Load Balancer preserves the client’s destination port when connecting
to the back-end server. Configure the ssl_profile on the pool’s side to use the option
use_service_ssl_mode.
These four parameters are currently accessible via the REST API and NSX Advanced Load Balancer
CLI. It is not required to restart the SE for the changes to take effect.
Procedure
1 Navigate to Applications > Pools. If no pools exist, create one. Otherwise, click the pencil icon
in the row of the pool you wish to edit.
2 Once into the Pool Editor, click on the Advanced tab of the pool-creation wizard, as shown
below.
Option Description
Request Queuing Enable or Disable Request Queuing when the pool is full by selecting
appropriate radio buttons.
Queue Length Specify the minimum number of requests to be queued when the pool is full.
Option Description
Fail Action Select an action when a pool failure happens from the drop-down menu. The
menu displays the following values:
n Close Connection
n HTTP Local Response
n HTTP Redirect
By default, the connection is closed if a pool experiences a failure.
Option Description
Connection Idle Timeout Specify the idle connection timeout. The timer starts when the
connection was last used; the connection is closed once this time elapses.
Connection Life Timeout Specify the connection life timeout. The timer starts when the
connection is created; the connection is closed once this time elapses.
Connection Max Used Times Specify the maximum number of times the connection is used.
Max Cache Connections Per Server Specify the maximum cache connections per server.
Option Description
Connection Ramp Specify the duration for which new connections will be gradually ramped up
to a server recently brought online.
Default Server Timeout Specify a value between 0 milliseconds and 21600000 milliseconds (6 hours).
Server timeout value specifies the time within which a server connection
needs to be established and a request response exchange completes
between NSX Advanced Load Balancer and the server.
Note If the Server Timeout value is not entered, by default the value will be
set to 3600000 milliseconds (1 hour).
Max Connections per Server Specify the maximum number of concurrent connections allowed to each
server within the pool.
HTTP Server Reselect Select the HTTP Server Reselect box to retry requests when a server
responds with specific response codes.
7 Click Save.
Configurable Options
HTTP server reselect is disabled by default. The feature can be configured within individual pools.
One can optionally select the error codes that trigger the feature. Once enabled, the feature works
in all connection or SSL failure scenarios.
Error Codes
The pool configuration specifies the HTTP error response codes that must result in server
reselection. The error codes can be specified in any of the following ways.
n Range of codes
Enter a range between 400 and 499 or 500 and 599 (for example, 501-503).
Maximum Retries
The default maximum retry setting is 4. Following the first error response, the NSX Advanced Load
Balancer resends the request to the pool up to 4 more times, for a total of 5 attempts. Each retry is
sent to a different server, and each server can receive only one attempt.
If the setting for maximum retries is higher than the number of enabled and running servers within
the pool, each of those servers still receives only one attempt. For example, if maximum retries
is set to 4 but the pool has only 3 servers, the maximum number of retries is only 2. The initial
attempt that fails goes to one of the servers, leaving 2 more servers to try. If the second server
also sends a 4xx or 5xx error code in response to the request, the request is sent to the last server
in the pool. If the last server also sends a 4xx or 5xx, the response from the server is sent back to
the client.
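The retry-cap rule above can be summarized in a short function. This is an illustrative sketch (the helper name is hypothetical): each retry must go to a different server, so the effective number of retries is bounded by the servers remaining after the initial failed attempt.

```python
def effective_retries(num_retries: int, num_servers: int) -> int:
    """Return how many retries can actually be attempted."""
    if num_servers <= 1:
        return 0  # no alternative server to retry against
    return min(num_retries, num_servers - 1)

# Example from the text: maximum retries is 4, but the pool has only 3 servers.
print(effective_retries(4, 3))   # 2
print(effective_retries(4, 10))  # 4
```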
a If you are enabling the feature in an existing pool, click the edit icon for the pool.
b For creating a new pool, click Create Pool, select the cloud name and click Next. Enter a
name for the pool on the Settings tab, select the servers on the Servers tab.
a If creating a new pool, click Next to review the settings and click Save.
The following example enables HTTP server reselection for all 4xx error codes.
Based on this configuration, if a server in this pool responds to a client request with a 4xx error
code, the NSX Advanced Load Balancer retries the request by sending it to another server in the
pool. The retry process can happen up to 4 times (to 4 different servers).
CLI Example
Note Only significant lines of interest from the CLI output are included in the following example.
| name | vs-test-pool |
. .
. .
. .
| server_reselect | |
| enabled | False |
| num_retries | 4 |
| retry_nonidempotent | False |
| srv_retry_timeout | 0 milliseconds |
. .
. .
. .
+---------------------------------------+------------------------------------------------+
[admin:10-10-27-18]: pool> server_reselect enabled
[admin:10-10-27-18]: pool:server_reselect> srv_retry_timeout 5000
Overwriting the previously entered value for srv_retry_timeout
[admin:10-10-27-18]: pool:server_reselect> save
[admin:10-10-27-18]: pool> exit
+---------------------------------------+------------------------------------------------+
| Field | Value |
+---------------------------------------+------------------------------------------------+
| uuid | pool-8e91b1a6-17bf-490e-b59a-05efd942a3f6 |
| name | vs-test-pool |
. .
. .
. .
| server_reselect | |
| enabled | True |
| num_retries | 4 |
| retry_nonidempotent | False |
| srv_retry_timeout | 5000 milliseconds |
. .
. .
. .
+---------------------------------------+------------------------------------------------+
| name | vs-test-pool |
. .
. .
. .
| server_reselect | |
| enabled | False |
| num_retries | 4 |
| retry_nonidempotent | False |
| srv_retry_timeout | 0 milliseconds |
. .
. .
. .
+---------------------------------------+------------------------------------------------+
[admin:10-10-27-18]: >
The Graceful Disable Timeout parameter set for a pool governs how servers within the pool
are disabled as follows:
n Disable with immediate effect: All client sessions are immediately terminated. The
pool’s Graceful Disable Timeout parameter must be set to 0.
n Gracefully disable with a finite timeout: No new sessions are sent to the server.
Existing sessions are allowed to terminate on their own, up to the specified timeout. Once the
timeout is reached, any remaining sessions are immediately terminated. The pool’s Graceful
Disable Timeout parameter must range from 1 to 7200 minutes.
n Gracefully disable with infinite timeout: No new sessions are sent to the server.
All existing sessions are allowed to terminate on their own. The pool’s Graceful Disable
Timeout parameter must be set to -1.
When servers are gracefully deactivated until all flows drain, the idle connections, if any, will be
deleted immediately.
In-flight connections are closed at the end of a request for a request-switched virtual service and
the end of a client connection for a connection-switched virtual service.
When servers are gracefully deactivated with a timeout value, the idle connections are closed
immediately, while any busy or bound connection will be closed at the end of the timeout. Any
existing request will be processed until the end of the timeout. Any new requests, whether from an
existing connection or a new connection, will be sent to a new server.
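The three Graceful Disable Timeout modes above can be summarized as a small classification function. This is an illustrative Python sketch; the function name and mode labels are not product identifiers:

```python
def disable_mode(graceful_disable_timeout: int) -> str:
    """Classify server-disable behavior from the pool's timeout (minutes)."""
    if graceful_disable_timeout == 0:
        return "immediate"          # terminate all client sessions now
    if graceful_disable_timeout == -1:
        return "graceful-infinite"  # drain until every session ends on its own
    if 1 <= graceful_disable_timeout <= 7200:
        return "graceful-timed"     # drain, then terminate at the deadline
    raise ValueError("timeout must be 0, -1, or 1-7200 minutes")

disable_mode(0)    # "immediate"
disable_mode(-1)   # "graceful-infinite"
disable_mode(60)   # "graceful-timed"
```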
2 Identify the pool containing the servers whose timeout parameter is to be set, and click on the
pencil icon at the right end of that pool.
3 In the Edit Pool window, set the Graceful Disable Timeout field to 0, -1, or within the range of
1 to 7200 minutes.
4 Select the checkbox next to the name of each server that you wish to disable.
Note NSX Advanced Load Balancer can be configured to use information in the health-check
responses from servers to detect when a server is in maintenance mode. For information, see
Detecting Server Maintenance Mode with a Health Monitor.
You can configure how the pool server should behave when it is disabled through the CLI as
follows:
save
save
When disabled, the node or pool member continues to process persistent and active connections.
New connections are accepted only if they match an existing persistence entry, and such matches
continue until the persistence times out.
When proxying a request to a back-end server through NSX Advanced Load Balancer, an SE
can rewrite the host header to the server name of the back-end server to which the request is
forwarded. This functionality can be turned on for selected servers or for all servers in the pool.
1 Under the Settings tab, select the Rewrite Host Header to Server Name checkbox.
2 Under the Servers tab, select the Rewrite Host Header checkbox corresponding to the
individual server for which this behavior is intended.
The pool-level checkbox (option 1) takes precedence over option 2. If the pool-level option is
selected, the behavior is ON for all servers, no matter what selections have been made on a
per-server basis.
If the rewrite host header to SNI option is turned ON along with this feature, the SNI rewrite takes
precedence over the “to server name” feature.
n For SSL back-end servers with the TLS SNI Enabled flag set as OFF: The
rewrite_host_header_to_sni has no effect. The Host header is set according to the
rewrite_host_header_to_server_name flag.
n For SSL back-end servers with the TLS SNI Enabled flag set as ON – Incoming Host Header =
Abc.com.
Note The following combination of the configuration options is not supported because the SNI
name used in the SSL handshake, and the host header used in the request do not match.
n The TLS SNI Enabled flag is set as ON.
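The precedence rules above can be condensed into a small decision function. This is an illustrative sketch; the flag names are simplified stand-ins for the actual pool fields:

```python
def backend_host_header(incoming_host: str, server_name: str, sni_name: str,
                        pool_rewrite: bool, server_rewrite: bool,
                        rewrite_to_sni: bool, sni_enabled: bool) -> str:
    """Pick the Host header an SE sends to the back-end server."""
    # Rewrite-to-SNI wins when TLS SNI is enabled on the back end.
    if rewrite_to_sni and sni_enabled:
        return sni_name
    # The pool-level flag forces the rewrite for every server;
    # otherwise the per-server flag decides.
    if pool_rewrite or server_rewrite:
        return server_name
    return incoming_host

backend_host_header("abc.com", "srv1", "sni.example",
                    False, False, True, True)   # "sni.example"
backend_host_header("abc.com", "srv1", "sni.example",
                    True, False, False, False)  # "srv1"
```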
To update the port to the hostname in the host header, the following options are available under
the pool configuration:
n Append port if not default port for protocol (80 and 443)
The following screenshot shows the Append Port option available under the Pool > Settings on
the NSX Advanced Load Balancer UI.
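The append-port behavior can be sketched as follows. This is an illustrative helper, not product code:

```python
def host_with_port(hostname: str, port: int, scheme: str) -> str:
    """Append ':port' unless it is the protocol's default (80 or 443)."""
    default = 80 if scheme == "http" else 443
    return hostname if port == default else f"{hostname}:{port}"

host_with_port("example.com", 8080, "http")  # "example.com:8080"
host_with_port("example.com", 443, "https")  # "example.com"
```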
Object Names
Object names within the NSX Advanced Load Balancer, such as the names of virtual services and
pools, have the following limitations:
n Uniqueness within tenants: an object name must be unique within a given tenant. Different
tenants can use the same name.
Object names can be changed without impact to linked objects. For instance, each virtual service
is associated with a pool. The name of a virtual service can be changed without requiring a change
to the configuration of the pool that the virtual service is associated with.
Note
n User accounts created through Keystone or LDAP / AD have the same limitations as other user
accounts in those authentication systems.
n The NSX Advanced Load Balancer user names that include any of the supported special
characters ( . @ + - _ ) can access the Controller through the web interface, API, or CLI.
However, these accounts cannot access the Controller’s Linux shell. For example:
shell
Shell access not allowed for this user
Pool Groups
A pool group is a list of server pools, accompanied by logic to select a server pool from the list.
Wherever a virtual service can refer to a server pool (directly, or via rules, DataScripts, or service
port pool selector), the virtual service could instead refer to a pool group.
The pool group is a powerful construct that can be used to implement the following:
n Priority Pools/Servers
n Backup Pools
n A/B Pools
n Blue/Green Deployment
n Canary Upgrades
When a Service Engine responsible for a virtual service needs to identify a server to which to
direct a particular client request, these are the steps.
n Step 1: Identify the best pools within the group. This is governed by pool priority. This group
of nine members defines three priorities— high_pri, med_pri, and low_pri — but pool1, pool2,
and pool3 are the preferred (best) ones because they’ve all been assigned the highest priority.
NSX Advanced Load Balancer will do all it can to pick one of them.
n Step 2: Identify one of the highest-priority pools. This choice will be governed by the weights
assigned to the three pool members, weight_1, weight_2, and weight_3. The ratio implied by
those weights governs the percentage of traffic directed to each of them.
n Step 3: Identify one server within the chosen pool. Each of the 9 members can be configured
with a different load-balancing algorithm. The algorithm associated with the chosen pool will
govern which of its servers is selected.
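Steps 1 and 2 can be sketched as follows: the highest-priority pools that are up form the candidate tier, and their weights imply traffic fractions. Step 3 is then left to the chosen pool's own load-balancing algorithm. This is an illustrative Python sketch, not the SE's real scheduler:

```python
def eligible_fractions(members):
    """members: list of dicts with 'name', 'priority', 'weight', 'up'.
    Returns the traffic fraction per pool at the winning priority."""
    candidates = [m for m in members if m["up"]]
    if not candidates:
        return {}
    best = max(m["priority"] for m in candidates)       # Step 1: best tier
    tier = [m for m in candidates if m["priority"] == best]
    total = sum(m["weight"] for m in tier)
    return {m["name"]: m["weight"] / total for m in tier}  # Step 2: weights

group = [
    {"name": "pool1", "priority": 30, "weight": 2, "up": True},
    {"name": "pool2", "priority": 30, "weight": 1, "up": True},
    {"name": "pool3", "priority": 30, "weight": 1, "up": True},
    {"name": "pool4", "priority": 20, "weight": 1, "up": True},
]
eligible_fractions(group)  # pool1 gets 1/2 of the traffic, pool2 and pool3 1/4 each
```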
To enable persistence in a pool, navigate to Applications > Pools > Edit Pool > Settings and select
a persistence profile from the Persistence drop-down menu.
On the other hand, if the functionality of a pool group is not anticipated, use a pool. A simple pool
that does the job is more efficient than a pool group. It consumes less SE and Controller memory
by avoiding the configuration of an additional full-fledged uuid object.
Note The list of pools eligible to be members of a pool group will exclude those associated with
other virtual services.
Configuration
Considering a pool group consisting of two pools, following are the steps to configure the feature:
Create Pool
Create individual pools that will be attached to the pool group by navigating to Applications >
Pools > CREATE POOL. The pools pool-1, pool-2, and cart2 have been created here.
2 In the Pool Group Members section, add the previously created pools as member pools or
create new member pools. Note that each pool has been assigned a priority here.
d Select a Deployment State from the drop-down menu. The deployment state options are:
1 Evaluation Failed
2 Evaluation In Progress
3 In Service
4 Out Of Service
4 In the Pool Servers section, specify the optional settings for the pool group:
a Enable HTTP2 - Select to enable HTTP/2 for traffic from virtual service to all the backend
servers in all the pools configured under this pool group.
b Minimum number of servers - The minimum number of servers required to distribute traffic.
You can enter a value from 1-65535.
5 In the Pool Group Failure Settings section, specify the action to be executed when the pool
group experiences failure. There are three options available as fail actions:
a Close Connection- If all servers in a pool are down, the default behavior of the virtual
service is to close new client connection attempts by issuing TCP resets or dropping UDP
packets. Existing connections are not terminated, even though their server is marked
down. The assumption is the server may be slow but may still be able to continue
processing the existing client connection.
b HTTP Local Response - Returns a simple web page. Specify a status code of 200 or 503. If
a custom HTML file has not been uploaded to NSX Advanced Load Balancer, it will return a
basic page with the error code.
c HTTP Redirect - Returns a redirect HTTP response code, including a specified URL.
1 Status Code - Choose 301, 302, or 307 from the drop-down menu.
2 HTTP/HTTPS - By default NSX Advanced Load Balancer will redirect using HTTPS
unless HTTP is clicked instead.
6 Select or create a Pool Group Deployment Policy. Autoscale manager automatically promotes
new pools into production when deployment goals are met as defined in the Pool Group
Deployment Policy.
To know more about configuring labels, see Granular Role Based Access Controls per App.
2 Navigate to Applications > CREATE VIRTUAL SERVICE > Advanced Setup > New Virtual
Service.
a Under Step 1: Settings tab, Select Pool Group radio button to attach the previously
created pool group to the virtual service.
b The pool group is attached and the virtual service is active, as shown below:
3 To view the overall setup of the virtual service and pool groups, navigate to Applications >
Dashboard and select VS Tree from the View VS List drop-down menu.
Use Cases
Priority Pools/Servers
Consider a case where a pool has different kinds of servers — newer, very powerful ones, older
slow ones, and very old, very slow ones. In the diagram, imagine the blue pools are composed
of the new, powerful servers, the green pools have the older slow ones, and the pink pool the
very oldest. Further note they’ve been assigned priorities from high_pri down to low_pri. This
arrangement causes NSX Advanced Load Balancer to pick the newer servers in the 3 blue pools
as much as possible, potentially always. Only if no server in any of the highest-priority pools can
be found will NSX Advanced Load Balancer send some traffic to the slower members as well,
ranked by priority.
One or a combination of circumstances trigger such an alternate selection (of a lower priority
pool):
2 Similar to #1, no server at the given priority level will accept an additional connection. All
candidates are saturated.
3 No pool at the given priority level is running the minimum server count configured for it.
Operational Notes
n It is recommended to keep the priorities spaced, and leave gaps. This makes the addition of
intermediate priorities easier at a later point.
n For the pure priority use case, the ratio of the pool group is optional.
n Setting the ratio to 0 for a pool results in sending no traffic to this pool.
n For each of the pools, normal load balancing is performed. After NSX Advanced Load
Balancer selects a pool for a new session, the load balancing method configured for that pool
is used to select a server.
With only three pools in play, each at a different priority, the values in the Ratio column don’t
enter into pool selection. The cart2 pool will always be chosen, barring any of the three
circumstances described above.
Backup Pools
The pre-existing implementation of backup pools is explained in the Pool Groups section. The
existing option of specifying a backup pool as a pool-down/fail action is deprecated. Instead,
configure a pool group with two or more pools, with varying priorities. The highest priority pool
will be chosen as long as a server is available within it (in alignment with the three previously
mentioned circumstances).
Operational Notes
n A pool with a higher value of priority is deemed better, and traffic is sent to the pool with the
highest priority, as long as this pool is up, and the minimum number of servers is met.
n It is recommended to keep the priorities spaced, and leave gaps. This makes the addition of
intermediate priorities easier at a later point.
n For each of the group’s pool members, normal load balancing is performed. After NSX
Advanced Load Balancer selects a pool for a new session, the load balancing method
configured for that pool is used to select a server.
n The addition or removal of backup pools does not affect existing sessions on other pools in the
pool group.
1 Create a pool group ‘backup’, which has two member pools — primary-pool with a priority of
10, and backup-pool which has a priority of 3.
Object details:
{
url: "https://10.10.25.20/api/poolgroup/poolgroup-f51f8a6b-6567-409d-9556-835b962c8092",
uuid: "poolgroup-f51f8a6b-6567-409d-9556-835b962c8092",
name: "backup",
tenant_ref: "https://10.10.25.20/api/tenant/admin",
cloud_ref: "https://10.10.25.20/api/cloud/cloud-3957c1e2-7168-4214-bbc4-dd7c1652d04b",
_last_modified: "1478327684238067",
min_servers: 0,
members:
[
{
ratio: 1,
pool_ref: "https://10.10.25.20/api/pool/pool-4fc19448-90a2-4d58-bb8f-
d54bdf4c3b0a",
priority_label: "10"
},
{
ratio: 1,
pool_ref: "https://10.10.25.20/api/pool/pool-
b77ba6e9-45a3-4e2b-96e7-6f43aafb4226",
priority_label: "3"
}
],
fail_action:
{
type: "FAIL_ACTION_CLOSE_CONN"
}
}
A/B Pools
NSX Advanced Load Balancer supports the specification of a set of pools that could be deemed
equivalent pools, with traffic sent to these pools in a defined ratio.
For example, a virtual service can be configured with a single priority group having two pools, A
and B. Further, the user could specify that the ratio of traffic to be sent to A is 4, and the ratio of
traffic for B is 1.
The A/B pool feature, sometimes referred to as blue/green testing, provides a simple way to
gradually transition a virtual service’s traffic from one set of servers to another. For example,
to test a major OS or application upgrade in a virtual service’s primary pool (A), a second
pool (B) running the upgraded version can be added alongside it. Then, based on the
configuration, a ratio (0-100) of the client-to-server traffic is sent to the B pool instead of the A
pool.
To continue this example, if the upgrade is performing well, the NSX Advanced Load Balancer
user can increase the ratio of traffic sent to the B pool. Likewise, if the upgrade is unsuccessful or
sub-optimal, the ratio to the B pool easily can be reduced again to test an alternative upgrade.
To finish transitioning to the new pool following a successful upgrade, the ratio can be adjusted
to send all traffic to the B pool, which then becomes the production pool.
To perform the next upgrade, the process can be reversed. After upgrading pool A, the ratio of
traffic sent to pool B can be reduced to test pool A. To complete the upgrade, the ratio of traffic to
pool B can be reduced back to 0.
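The ratio arithmetic above is simple to make concrete. An illustrative sketch (hypothetical helper name): each pool receives its ratio divided by the sum of all ratios.

```python
def traffic_share(ratios: dict) -> dict:
    """Fraction of new sessions per pool implied by A/B ratios."""
    total = sum(ratios.values())
    return {pool: r / total for pool, r in ratios.items()}

traffic_share({"A": 4, "B": 1})  # A: 0.8, B: 0.2
traffic_share({"A": 0, "B": 1})  # full cut-over: B takes everything
```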
Operational Notes
n Setting the ratio to 0 for a pool results in sending no traffic to this pool.
n For each of the pools, normal load balancing is performed. After NSX Advanced Load
Balancer selects a pool for a new session, the load balancing method configured for that pool
is used to select a server.
n The A/B setting does not affect existing sessions. For example, setting the ratio sent to B to
1 and A to 0 does not cause existing sessions on pool A to move to B. Likewise, A/B pool
settings do not affect persistence configurations.
n If one of the pools that has a non-zero ratio goes down, new traffic is equally distributed to the
rest of the pools.
n For pure A/B use cases, the priority of the pool group is optional.
n Pool groups can be applied as default on the virtual service, or attached to rules, DataScripts
and Service port pool selector as well.
1 Create a pool group ‘ab’, with two pools in it — a-pool and b-pool — without specifying any
priority:
In this example, 10% of the traffic is sent to b-pool, by setting the ratios of a-pool and b-pool to
10 and 1 respectively.
2 Apply this pool group to the VS, where you would like to have A/B functionality:
Object details:
{
url: "https:///api/poolgroup/poolgroup-7517fbb0-6903-403e-844f-6f9e56a22633",
uuid: "poolgroup-7517fbb0-6903-403e-844f-6f9e56a22633",
name: "ab",
tenant_ref: "https://...",
members: [ { pool_ref: "https:///api/pool/pool-23853ea8-aad8-4a7a-8e9b-99d5b749e75a" } ],
}
This is a release technique that reduces downtime and risk by running two identical production
environments, only one of which (e.g., blue) is live at any moment, and serving all production
traffic. In preparation for a new release, deployment and final-stage testing takes place in an
environment that is not live (e.g., green). Once confident in green, all incoming requests go to
green instead of blue. Green is now live, and blue is idle. Downtime due to application deployment
is eliminated. In addition, if something unexpected happens with the new release on the green, roll
back to the last version is immediate; just switch back to blue.
Canary Upgrades
This upgrade technique is so called because of its similarity to the miner’s canary, which would
detect toxic gases before any humans were affected. The idea is that when performing system
updates or changes, a group of representative servers is updated first and monitored/tested for
a period of time, and only thereafter are rolling changes made across the remaining servers.
The Process
n A pool group is configured with members (each with different priorities).
n By default, the pool configured with the highest priority acts as the primary pool and receives
all the connections or requests.
n When the highest priority pool goes down, the next available priority pool takes over the
current primary role and receives all connections and requests.
n When the previous primary pool comes back online, it does not resume the current primary
role automatically. Once the primary pool goes down, it is not eligible to take over, until the
administrator manually makes it the primary pool.
n When the admin configures one of the members to primary, it clears off all the connections to
the old primary and makes the requested one, the new primary.
Use the enable_primary_pool option to make the highest priority pool primary:
A pool group is a list of member (server) pools, combined with logic to select a member from the
list. Like a pool, a pool group can be shared by virtual services of the same Layer 7 type. This
article explains the feature’s capabilities, the related CLI commands, and the present limitations.
n Through policy-based content-switching, a virtual service might choose one of its pool groups
n Via DataScript, a virtual service might programmatically choose one of its pool groups
A pool group can be referenced by multiple virtual services. In accessing the shared pool group,
each virtual service can independently use any one of the multiple techniques listed above. As
before, a virtual service may access multiple pools, some of them shared and others not. Virtual
services sharing a pool group need not be placed on the same SE group.
Note This feature is supported for combinations of IPv4, IPv6, and IPv4v6 addresses.
Restrictions
These are some restrictions when sharing a pool group:
3 A pool can be part of multiple pool groups either through the same virtual service or different
virtual services.
6 A pool directly linked to a virtual service should not be part of a pool group.
While workflows for pools and pool groups remain the same, with pool group sharing:
n There is an increased number of pool group choices when configuring a virtual service.
n There are more ways to extract pool-related information when querying for statistics.
Procedure
The Edit Virtual Service screen appears as shown in the following figure:
Note You can create a pool group by clicking Create Pool Group or navigating to
Applications > Pool Groups > Create Pool Group. Refer to Configuration section in the Pool
Groups for more details.
5 Click Save.
Results
The selected pool groups are now assigned to the required virtual services. With pool group
sharing, you can see there is a broader set of pool groups available.
Reporting
This section covers the steps to view the overall setup of the pool groups.
Procedure
Pool group sharing set up with the virtual service is represented as shown in the following
image:
The following image shows the Virtual Service screen for a selected virtual service, with the
pool groups shared:
You can see the ability for a single virtual service to be associated with multiple pool groups.
n Consistent Hash
n Core Affinity
n Fastest Response
n Fewest Servers
n Least Connections
n Least Load
n Round Robin
n Fewest Tasks
The load balancing algorithm is changed using NSX Advanced Load Balancer UI and NSX
Advanced Load Balancer CLI. Select a local server load-balancing algorithm using the Algorithm
field within the Applications > Pool > Settings page. Changing a pool’s LB algorithm will only
affect new connections or requests, and will have no impact on existing connections. The available
options in alphabetic order are:
Consistent Hash
New connections are distributed across the servers using a hash that is based on a key specified in
the field that appears below the LB Algorithm field, or on a custom string provided by the user via
the avi.pool.chash DataScript function.
This algorithm inherently combines load balancing and persistence, which minimizes the need
to add a persistence method. This algorithm is best for load balancing large numbers of cache
servers with dynamic content. It is ‘consistent’ because adding or removing a server does not
cause a complete recalculation of the hash table. For the example of cache servers, it will not
force all caches to re-cache all content. If a pool has nine servers, adding a tenth server will
cause each pre-existing server to send approximately one-tenth of its hits to the newly added
server, based on the outcome of the hash; the rest of its connections are not disrupted. Hence,
persistence may still be valuable. The available hash keys are:
Field Description
Custom Header Specify the HTTP header to use in the Custom Header
field, such as Referer. This field is case-sensitive. If
the field is blank or if the header does not exist, the
connection or request is considered a miss and will hash
to a server.
Call-ID Specifies the Call ID field in the SIP header. With this
option, SIP transactions with new call IDs are load
balanced using consistent hash, while existing call IDs are
retained on the previously chosen servers. The state of
existing call IDs is maintained for an idle timeout period
defined by the ‘Transaction timeout’ parameter in the
Application Profile. The state of existing call IDs is relevant
for as long as the underlying TCP/UDP transport state for
the SIP transaction remains the same.
Source IP Address and Port Source IP Address and Port of the client.
HTTP URI It includes the host header and the path. For instance,
www.avinetworks.com/index.htm.
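The "minimal disruption" property of consistent hashing can be demonstrated with a toy ring. This is an illustrative sketch (MD5 and 100 virtual nodes are arbitrary choices here, not the product's internals): growing the pool from nine to ten servers remaps only about a tenth of the keys.

```python
import hashlib
from bisect import bisect

def _h(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    """Toy consistent-hash ring with virtual nodes."""

    def __init__(self, servers, vnodes=100):
        self.ring = sorted((_h(f"{s}#{i}"), s)
                           for s in servers for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def server_for(self, key: str) -> str:
        # A key maps to the first ring point at or after its hash.
        idx = bisect(self.points, _h(key)) % len(self.ring)
        return self.ring[idx][1]

old = HashRing([f"s{i}" for i in range(9)])
new = HashRing([f"s{i}" for i in range(10)])
keys = [f"client-{n}" for n in range(2000)]
moved = sum(old.server_for(k) != new.server_for(k) for k in keys)
# moved / len(keys) lands near 0.10 rather than a full reshuffle
```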
Core Affinity
Each CPU core uses a subset of servers, and each server is used by a subset of cores. Essentially
it provides a many-to-many mapping between servers and cores. The sizes of these subsets
are parameterized by the variable lb_algorithm_core_nonaffinity in the pool object. When
increased, the mapping increases up to the point where all servers are used on all cores.
If all servers that map to a core are unavailable, the core uses servers that map to the next (with
wraparound) core.
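One plausible way to derive such a many-to-many mapping is sketched below. This is purely illustrative; the SE's actual core-to-server assignment is internal, and the windowing scheme here is an assumption:

```python
def core_server_map(num_cores: int, num_servers: int, nonaffinity: int):
    """Each core gets a window of `nonaffinity` servers (with wraparound);
    widening the window converges on every core using every server."""
    width = min(nonaffinity, num_servers)
    return {core: [((core * width) + j) % num_servers for j in range(width)]
            for core in range(num_cores)}

core_server_map(2, 6, 3)  # core 0 -> servers [0, 1, 2], core 1 -> [3, 4, 5]
core_server_map(2, 3, 9)  # every core sees every server
```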
Fastest Response
New connections are sent to the server that is currently providing the fastest response to new
connections or requests. This is measured as time to the first byte. In the End-to-End Timing chart,
this is reflected as Server RTT plus App Response time. This option is best when the pool’s servers
contain varying capabilities or they are processing short-lived connections. A server that is having
issues, such as a lost connection to the data store containing images, will generally respond very
quickly with HTTP 404 errors. It is best practice when using the fastest response algorithm to also
enable the Passive Health Monitor, which recognizes and adjusts for scenarios like this by taking
into account the quality of server response, not just speed of response.
Fewest Servers
Instead of attempting to distribute all connections or requests across all servers, NSX Advanced
Load Balancer will determine the fewest number of servers required to satisfy the current client
load. Excess servers will no longer receive traffic and may be either de-provisioned or temporarily
powered down. This algorithm monitors server capacity by adjusting the load and monitoring the
server’s corresponding changes in response latency. Connections are sent to the first server in the
pool until it is deemed at capacity, with the next new connections sent to the next available server
down the line. This algorithm is best for hosted environments where virtual machines incur a cost.
Least Connections
New connections are sent to the server that currently has the least number of outstanding
concurrent connections. This is the default algorithm when creating a new pool and is best
for general-purpose servers and protocols. New servers with zero connections are introduced
gracefully over a short period of time via the Connection Ramp setting in the Pool > Advanced
page. This feature slowly brings the new server up to the connection levels of other servers within
the pool.
NSX Advanced Load Balancer uses Least Connections as the default algorithm because it
generally provides an equal distribution when all servers are healthy, yet adapts to slower or
unhealthy servers. It works well for both long-lived and quick connections.
Note A server that is having issues, such as rejecting all new connections, may have a concurrent
connection count of zero and be the most eligible to receive all new connections. NSX Advanced
Load Balancer recommends using the Least Connections algorithm in conjunction with the Passive
Health Monitor which recognizes and adjusts for scenarios like this. A passive monitor will reduce
the percent of new connections sent to a server based on the responses it returns to clients.
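The interaction described in the note can be sketched as a least-connections picker that scales each server's outstanding-connection count by a passive-health factor. This is a simplified illustration, not the product's actual selection logic; the health-ratio values are hypothetical.

```python
def least_connections(active, health_ratio):
    # Pick the server with the fewest outstanding connections, scaled by a
    # passive-health factor (1.0 = healthy, near 0 = failing) so that a
    # failing server with zero connections does not absorb all new traffic.
    def effective_load(server):
        return (active[server] + 1) / max(health_ratio.get(server, 1.0), 1e-6)
    return min(active, key=effective_load)

active = {"s1": 10, "s2": 0}
print(least_connections(active, {"s1": 1.0, "s2": 1.0}))   # s2: fewest connections
print(least_connections(active, {"s1": 1.0, "s2": 0.01}))  # s1: s2 penalized by the passive monitor
```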
Least Load
New connections are sent to the server with the lightest load, regardless of the number of
connections that the server has. For example, if an HTTP request requiring a 200-kB response
is sent to one server and a second request that will generate a 1-kB response is sent to another,
this algorithm estimates, based on previous requests, that the server sending the 1-kB response
is more available than the one still streaming 200 kB. The idea is to ensure that a small and fast
request does not get queued behind a very long request. This algorithm is HTTP-specific. For
non-HTTP traffic, it defaults to the least connections algorithm.
Round Robin
New connections are sent to the next eligible server in the pool in sequential order. This static
algorithm is best for basic load testing but is not ideal for production traffic because it does not
take the varying speeds or periodic hiccups of individual servers into account. A slow server will
still receive as many connections as a better-performing server.
In the example illustration, a server was causing significant app response time in the end-to-end
timing graph as seen by the orange in the graph. By switching from the static round-robin
algorithm to a dynamic LB algorithm (the blue config event icon at the bottom), NSX Advanced
Load Balancer successfully directed connections to servers that were responding to clients faster,
virtually eliminating the app response latency.
Fewest Tasks
Load is adaptively balanced, based on server feedback. This algorithm is facilitated by an external
health monitor. It is configurable via the NSX Advanced Load Balancer CLI and REST API but
is not visible in the NSX Advanced Load Balancer UI. For details, refer to the Fewest Tasks
Load-Balancing Algorithm.
An external health monitor can feed a number (for example, 1-100) back to the algorithm by
writing data into the <hm_name>.<pool_name>.<ip>.<port>.tasks file. Each value written to this
file is fed back to the algorithm. The range of numbers provided as feedback and the send
interval of the health monitor may be adjusted to tune the load balancing behavior to the
specific environment.
For example, consider a pool p1 with 2 back-end servers, s1 and s2. Suppose the health monitor
ticks every 10 seconds (send-interval), and sends back feedback of 100 (high load) and 10 (low
load). At time t1, s1 and s2 are set with 100 tasks and 10 tasks respectively. Now, if you send 200
requests, the first 90 would go to s2, since it had “90” more units available. The next 110 would
be sent equally to s1 and s2. At time t2 = t1 + 10 sec, s1 and s2 get replenished to the new data
provided by the external health monitor.
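Under the assumption that each new request goes to the server with the fewest outstanding tasks and increments its count, the numeric example above can be simulated as follows. This is an illustrative sketch, not the Service Engine's actual implementation.

```python
def fewest_tasks(load, requests):
    # Send each request to the server with the fewest outstanding tasks,
    # incrementing that server's count. Returns how many requests each
    # server received before the next health-monitor update replenishes
    # the counts.
    sent = {s: 0 for s in load}
    current = dict(load)
    for _ in range(requests):
        target = min(current, key=current.get)
        current[target] += 1
        sent[target] += 1
    return sent

# t1: the monitor reported 100 tasks on s1 (high load) and 10 on s2 (low load)
print(fewest_tasks({"s1": 100, "s2": 10}, 200))  # s2 takes the first 90, then the rest split evenly
```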
#!/usr/bin/python
import sys
import httplib
import os
conn = httplib.HTTPConnection(sys.argv[1] + ':' + sys.argv[2])
conn.request("GET", "/")
r1 = conn.getresponse()
if r1.status == 200:
    print r1.status, r1.reason  # Any output on the screen indicates SUCCESS for the health monitor
    try:
        fname = sys.argv[0] + '.' + os.environ['POOL'] + '.' + sys.argv[1] + '.' + sys.argv[2] + '.tasks'
        f = open(fname, "w")
        try:
            f.write('230')  # Write the task count; replace 230 with a value parsed from the server response
        finally:
            f.close()
    except IOError:
        pass
You can use the show pool <foo> detail and show pool <foo> server detail commands to see
detailed information about the number of connections being sent to the servers in the pool.
Weighted Ratio
NSX Advanced Load Balancer does not include a dedicated weighted ratio algorithm. Instead,
weight may be achieved via the ratio, which may be applied to any server within a pool. The ratio
may also be used in conjunction with any load balancing algorithm. With the ratio setting, each
server receives statically adjusted ratios of traffic. If one server has a ratio of 1 (the default) and
another server has a ratio of 4, the server set to 4 will receive 4 times the amount of connections
it otherwise would. For instance, using the least connections, one server may have 100 concurrent
connections while the second server has 400.
Persistence
A persistence profile governs the settings that force a client to stay connected to the same server
for a specified duration of time. This is sometimes referred to as sticky connections.
By default, load balancing can send a client to a different server, every time the client connects
with a virtual service or even distribute every HTTP request to a different server, when connection
multiplex is enabled. Server persistence guarantees the client will reconnect to the same server
every time they connect to a virtual service, as long as the persistence is still in effect. Enabling a
persistence profile ensures that the client will reconnect to the same server every time, or at least
for a desired duration of time. Persistent connections are critical for most servers that maintain
client session information locally.
All persistence methods are based on the same principle, which is to find a unique identifier of a
client and remember it for the desired length of time. The persistence information can be stored
locally on NSX Advanced Load Balancer SEs or can be sent to a client through a cookie or TLS
ticket. The client will then present that identifier to the SE, which directs the SE to send the client
to the correct server.
Persistence is an optional profile configured within Templates > Profiles > Persistence Profile.
Once the profile is created, it may be attached to one or more pools.
Types of Persistence
NSX Advanced Load Balancer can be configured with a number of persistence templates:
n HTTP Cookie Persistence: NSX Advanced Load Balancer inserts a cookie into HTTP responses.
n App Cookie Persistence: NSX Advanced Load Balancer reads existing server cookies or URI
embedded data such as JSessionID.
n HTTP Custom Header Persistence: Administrators may create custom, static mappings of
header values to specific servers.
n Client IP Persistence: The client's IP address is used as the identifier and mapped to the
server.
n GSLB Site Cookie Persistence: GSLB applications can be configured to persist to the sites in
which the transactions are initiated.
Outside of the persistence profiles, two other types of persistence are available:
n DataScript: Custom persistence may be built using DataScripts for unique persistence
identifiers.
n Consistent Hash: This is a combined load balancing algorithm and persistence method, which
can be based on a number of different identifiers as the key.
Persistence Mirroring
Persistence data is either stored locally on NSX Advanced Load Balancer Service Engines or is
sent to and stored by clients.
Client-stored persistence, which includes HTTP cookie, HTTP header mapping, and consistent
hash, is not kept locally on Service Engines. When the data, such as a cookie presented by the
client, is received, it contains the IP address and port of the persisted server for the client. No local
storage or memory is consumed to mirror the persistence. Persistence tables may be infinite in
size, as no table is locally maintained.
For locally stored persistence methods, which include app cookies, TLS, client IP addresses,
and DataScripts, NSX Advanced Load Balancer SEs maintain the persistence mappings in a local
table. This table is automatically mirrored to all other Service Engines supporting the virtual
service, as well as to the Controllers. An SE failover will not result in a loss of persistence
mappings. To support larger persistence tables, allocate more memory to Service Engines and
increase the SE Group > Connection table setting.
n Delete: A profile may only be deleted if it is not currently assigned to a virtual service. An error
message will indicate the virtual service referencing the profile. The default system profiles can
be edited, but not deleted.
The table on this tab provides the following information for each persistence profile:
Field Description
Select New Server When Persistent Server Down Determine how this profile will handle a condition when
the Health Monitor marks a server as down while NSX
Advanced Load Balancer is persisting clients to it.
n Immediate: NSX Advanced Load Balancer will
immediately select a new server to replace the one
that has gone down and switch the persistence entry
to the new server.
n Never: No replacement server will be selected.
Persistent entries will be required to expire normally
based upon the persistence type.
To use HTTP cookie persistence, no configuration changes are required on the back-end servers.
HTTP persistence cookies created by the NSX Advanced Load Balancer have no impact on
existing server cookies or behavior.
Note The NSX Advanced Load Balancer also supports an app cookie persistence mode that
relies on cookies. The app cookie method enables persistence based on information in existing
server cookies, rather than inserting a new NSX Advanced Load Balancer-created cookie.
To validate if HTTP cookie persistence is working, enable all headers for the virtual service
analytics and view logs to see the cookies sent by a client.
See Overview of Server Persistence for descriptions of other persistence methods and options.
Cookie Format
The following is an example of an HTTP session-persistence cookie created by NSX Advanced
Load Balancer.
Set-Cookie: JKQBPMSG=026cc2fffb-b95b-41-dxgObfTEe_IrnYmysot-VOVY1_EEW55HqmENnvC;
path=/
The cookie payload contains the back-end server IP address and port.
The payload is encrypted with AES-256. When a client makes a subsequent HTTP request, it
includes the cookie, which the SE uses to ensure that the client’s request is directed to the same
server.
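The shape of such a cookie can be sketched as an opaque value that carries the backend endpoint. NSX Advanced Load Balancer encrypts the real payload with AES-256; this simplified, stdlib-only illustration substitutes an HMAC signature so the example stays self-contained. The cookie name and secret are hypothetical.

```python
import base64, hmac, hashlib

SECRET = b"shared-se-secret"  # hypothetical key known to all Service Engines

def encode_persistence_cookie(server_ip, port):
    # Sketch only: the real cookie payload is AES-256 encrypted; an HMAC tag
    # stands in here. The value is opaque to the client but lets any SE
    # recover (and verify) the backend endpoint on a subsequent request.
    payload = ("%s:%d" % (server_ip, port)).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()[:12]
    return base64.urlsafe_b64encode(payload + tag).decode().rstrip("=")

def decode_persistence_cookie(value):
    raw = base64.urlsafe_b64decode(value + "=" * (-len(value) % 4))
    payload, tag = raw[:-12], raw[-12:]
    if not hmac.compare_digest(tag, hmac.new(SECRET, payload, hashlib.sha256).digest()[:12]):
        raise ValueError("tampered persistence cookie")
    ip, _, port = payload.decode().rpartition(":")
    return ip, int(port)

cookie = encode_persistence_cookie("10.0.0.12", 8080)
print("Set-Cookie: JKQBPMSG=%s; path=/" % cookie)
print(decode_persistence_cookie(cookie))
```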
The persistence timeout applies to persistence cookies that are created by NSX Advanced Load
Balancer for individual client sessions with virtual services that use the persistence profile.
Generally, the client or browser has the responsibility to clear a persistent session cookie, after
the session associated with the cookie is terminated, or when the browser is closed. Setting a
persistence timeout takes care of cases where the client or browser does not clear the session
cookies.
If the persistence timeout is set, the maximum lifetime of any session cookie that is created based
on the profile is set to the timeout. In this case, the cookie is valid for a maximum of the configured
timeout, beginning when the NSX Advanced Load Balancer creates the cookie.
For example, if the persistence timeout is set to 720 minutes, a cookie created based on the profile
is valid for a maximum of 12 hours, from the cookie creation time. After the persistence timeout
expires, the cookie expires and is no longer valid.
By default there is no timeout. The cookie sent is a session cookie, which is cleared by the client
after the session ends.
Note
n If the flag is_persistent_cookie is disabled , the timeout behavior remains unchanged (the
cookie expires according to the non-zero value of the timeout).
n If the flag is enabled and the value of timeout is zero, the cookie expires immediately, as the
max-age is set to zero.
Example:
Set-Cookie: JKQBPMSG=026cc2fffb-b95b-41-dxgObfTEe_IrnYmysot-VOVY1_EEW55HqmENnvC;
path=/; Max-Age=3600
Persistence Mirroring
Since clients maintain the cookie and present it when visiting the site, the NSX Advanced Load
Balancer does not need to store the persistence information or mirror the persistence mappings
to other SEs. This allows for greater scale with minimal effort.
Persistence Duration
HTTP cookie persistence leverages a session-based cookie, which is valid as long as the client
maintains an HTTP session with the NSX Advanced Load Balancer. If the client closes a browser,
the cookie is deleted and the persistence is terminated.
The following table describes the fields needed to configure a persistence profile in the
persistence profile editor:
Select New Server When Persistent Server Down Action to be taken when a server is marked down, such as
by a health monitor or when it has reached a connection
limit. Indicates whether existing persisted users continue
to be sent to the server, or load balanced to a new server.
Immediate: The NSX Advanced Load Balancer
immediately selects a new server to replace the one
marked down and switches the persistence entry to the
new server.
Never: No replacement server will be selected. Persistent
entries will be required to expire normally based upon the
persistence type.
Type HTTP Cookie. Changing the type will change the profile to
another persistence method.
HTTP Cookie Name This field is blank by default. If you populate this
optional field, the cookie is inserted with the chosen
custom name. If it is not populated, the NSX Advanced
Load Balancer auto-generates a random eight-character
alphabetic name.
Persistence Timeout The maximum lifetime of any session cookie. The allowed
range is 1-14400 minutes. No value or zero indicates no
timeout
Note Starting with version 21.1.1, the NSX Advanced Load Balancer supports setting an HTTP-
Only flag for the cookie set by it. Setting this attribute helps to prevent the third-party scripts
from accessing this cookie if supported by the browser. This feature will activate for any HTTP or
terminated HTTPS virtual service.
When you set a cookie with the HTTP-Only flag, it informs the browser that this special cookie
should only be accessed by the server. Any attempt to access the cookie from a client-side script
is strictly forbidden.
When using session IDs, servers have no control over whether a client will accept a cookie. For
this reason, servers can choose to embed the session ID in both a cookie and the URI. Older
browsers, or clients that decline cookies (for example, under European privacy rules), can skip
the cookie and still include the session ID within the query of their requests. NSX Advanced
Load Balancer therefore automatically checks both locations. Once an identifier has been located
in a server response and a client's request, NSX Advanced Load Balancer creates an entry in a
local persistence table for future persistence. For example:
www.avinetworks.com/index.html?jsessionid=a1b2c3d4e5
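The dual lookup (cookie first, then URI query) can be sketched as follows. The parsing here is a simplified illustration, not the Service Engine's actual logic; the session ID name is taken from the example above.

```python
from urllib.parse import urlparse, parse_qs

def extract_session_id(uri, cookie_header, name="jsessionid"):
    # Check the Cookie header first, then fall back to the URI query string,
    # mirroring the dual lookup described above (case-insensitive name match).
    for part in cookie_header.split(";"):
        key, _, value = part.strip().partition("=")
        if key.lower() == name:
            return value
    for key, values in parse_qs(urlparse(uri).query).items():
        if key.lower() == name:
            return values[0]
    return None

print(extract_session_id("/index.html?jsessionid=a1b2c3d4e5", ""))          # a1b2c3d4e5
print(extract_session_id("/index.html", "JSESSIONID=a1b2c3d4e5; theme=x"))  # a1b2c3d4e5
```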
Note This method involves using an existing server cookie. For the NSX Advanced Load
Balancer to use its own cookie for persistence, use the HTTP Cookie persistence mode, which
is straightforward and more scalable.
See also Overview of Server Persistence for descriptions of other persistence methods and
options.
Persistence Table
Since app cookie persistence is stored locally on each SE, larger tables consume more memory.
For very large persist tables, consider adding additional memory to the SEs through the SE Group
properties for SE memory and the SE Group > Connection table setting. See also SE Memory
Consumption.
The app cookie persistence table is automatically mirrored to all SEs supporting the virtual service
using a pool configured with this persistence type.
Configuration Options
For details on fields for configuring App Cookie persistence profile, see NSX Advanced Load
Balancer UI Configuration Options under HTTP Cookie Persistence. Ensure that App Cookie is
selected from the Type drop-down menu.
The SE inspects the value of the defined HTTP header and matches the value against a statically
assigned header field for each server. If there is a match, the client is persisted to the server. The
server’s header field is configured in the Application > Pool > edit server page using the Header
Value field within the server table.
In the example below, when a client sends an HTTP request, the controller checks if a header
exists, based on the name configured in the customer HTTP header persistence profile. If the
header exists in the client’s request, the value is mapped against the servers as shown. If the value
was server2, the controller sends the client to apache2. If the header does not exist, or the value
does not match, the client is free to be load balanced to any server:
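A minimal sketch of the header-to-server mapping described above, with a hypothetical header name and server names. In the product, the mapping is configured per server through the pool's Header Value field.

```python
HEADER_NAME = "X-Persist"  # hypothetical custom header configured in the profile
SERVER_MAP = {             # per-server Header Value settings from the pool editor
    "server1": "apache1",
    "server2": "apache2",
}

def pick_server(request_headers, load_balance):
    value = request_headers.get(HEADER_NAME)
    if value in SERVER_MAP:
        return SERVER_MAP[value]  # persisted via the static header mapping
    return load_balance()         # header absent or unmatched: normal load balancing

print(pick_server({"X-Persist": "server2"}, lambda: "apache1"))  # apache2
print(pick_server({}, lambda: "apache1"))                        # apache1 (load balanced)
```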
Persist Table
This method is a static mapping of header values to servers, which avoids the need to maintain
and mirror a persistence table on each SE. All SEs supporting a virtual service whose pool is
configured with this persistence type automatically direct or persist users correctly to the same
servers.
Configuration Options
For details for configuring Custom HTTP Header persistence profile, see NSX Advanced Load
Balancer UI Configuration Options under HTTP Cookie Persistence. Ensure that Custom HTTP
Header is selected from the Type drop-down menu.
Client IP Persistence
This section describes client IP persistence and its configuration.
The client IP address mode of persistence can be applied to any virtual service, regardless of TCP
or UDP. With this persistence method, NSX Advanced Load Balancer SEs will stick the client to the
same server for the configurable duration of time and store the mapping in a local database.
See also Persistence for descriptions of other persistence methods and options.
Persist Table
Since client IP persistence is stored locally on each SE, larger tables will consume more memory.
For extensive persist tables, consider adding additional memory to the SEs through the SE Group
Properties for SE memory and through the Infrastructure > Service Engine Group > Edit >
Memory Allocation.
The client IP persistence table is automatically mirrored to all Service Engines supporting the
virtual service and pool configured with this persistence type. To validate if a client IP address is
currently persisted, from the CLI use the following command to view entries in the table.
The following example searches the persistence table for the test-pool, searching for client 10.1.1.1.
Configuration Options
n Name: A unique name for the persistence profile.
n Type: Client IP. Changing the type will change the profile to another persistence method.
n Select New Server When Persistent Server Down: If a server is marked DOWN, such as by
a health monitor or when it has reached a connection limit, should existing persisted users
continue to be sent to the server or be load balanced to a new server?
n Immediate: NSX Advanced Load Balancer immediately selects a new server to replace the
one marked DOWN and switches the persistence entry to the new server.
n Never: No replacement server will be selected. Persistent entries will be required to expire
normally based upon the persistence type.
n Persistence Timeout: NSX Advanced Load Balancer keeps the persistence value for the
configured time once a client has closed any open connections to the virtual service. Once
the time has expired without the client reconnecting, the entry is expired from the persist
table. If the client reconnects before the timeout has expired, they are persisted to the same
server, and the timeout is canceled. The default timeout value is 5 minutes.
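A sketch of a client IP persistence table with the idle-timeout behavior described above (entries refresh on reconnect and expire after the timeout). The data structures and timeout units are assumptions for illustration only.

```python
class PersistTable:
    def __init__(self, timeout_s=300):  # default 5-minute timeout, as above
        self.timeout_s = timeout_s
        self.entries = {}  # client_ip -> (server, last_seen)

    def lookup(self, client_ip, now):
        entry = self.entries.get(client_ip)
        if entry and now - entry[1] <= self.timeout_s:
            # Reconnect within the timeout: persist and refresh the timer.
            self.entries[client_ip] = (entry[0], now)
            return entry[0]
        self.entries.pop(client_ip, None)  # expired (or never persisted)
        return None

    def persist(self, client_ip, server, now):
        self.entries[client_ip] = (server, now)

table = PersistTable()
table.persist("10.1.1.1", "s1", now=0)
print(table.lookup("10.1.1.1", now=100))   # s1: within the timeout
print(table.lookup("10.1.1.1", now=1000))  # None: entry expired
```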
TLS Persistence
This section describes TLS persistence and its configuration.
The TLS mode of persistence can be applied to any virtual service configured to terminate HTTPS.
With this persistence method, the NSX Advanced Load Balancer embeds the client-to-server
mapping in the TLS ticket ID sent to the client. It is similar to how HTTP cookies behave. The data
is embedded in an encrypted format that a SE can read should a client reconnect to a different SE.
Note This persistence method is often confused with an older, broken method of persistence
called SSL Session ID. While both are used for secure connections, the two methods are unrelated.
See also Persistence for descriptions of other persistence methods and options.
Persist Table
The TLS ticket ID is automatically mirrored to all Service Engines supporting the virtual service,
regardless of this persistence mode. If this persistence is enabled, it adds no additional overhead
to the SEs or the automated TLS ticket mirroring.
As with any SSL/TLS deployment, additional memory increases the maximum number of
concurrent connections and, therefore, TLS persistence mappings.
Configuration Options
n Name: A unique name for the persistence profile.
n Type: TLS. Changing the type will change the profile to another persistence method.
n Select New Server When Persistent Server Down: If a server is marked DOWN, such as by
a health monitor or when it has reached a connection limit, should existing persisted users
continue to be sent to the server or load balanced to a new server?
n Immediate: NSX Advanced Load Balancer will immediately select a new server to replace
the one marked DOWN and switch the persistence entry to the new server.
n Never: No replacement server will be selected. Persistent entries will be required to expire
normally based upon the persistence type.
Compression
The compression option on NSX Advanced Load Balancer enables HTTP Gzip compression for
responses from NSX Advanced Load Balancer to the client.
Compression is an HTTP 1.1 standard for reducing the size of text-based data using the Gzip
algorithm. The typical compression ratio for HTML, Javascript, CSS and similar text content types
is about 75%, meaning that a 20-KB file may be compressed to 5 KB before being sent across the
internet, thus reducing the transmission time by a similar percentage.
Configuring Compression
The Compression tab lets you view or edit the application profile's compression settings.
To configure compression:
Procedure
2 Click Create to create a new profile or use the existing application profile as required.
The Auto and Custom modes are described later in this section. The compression percentage
achieved can be viewed on the Client Logs tab of the virtual service. This may require
enabling full client logs on the virtual service's Analytics tab to log some or all client requests.
The logs include a field showing the compression percentage with each HTTP response.
n Check the Compression checkbox to enable compression. You may only change
compression settings after enabling this feature.
n Select either Auto or Custom, which enables different levels of compression for different
clients. For instance, filters can be created to provide aggressive compression levels for
slow mobile clients while disabling compression for fast clients from the local intranet. Auto
is recommended, to dynamically tune the settings based on clients and available Service
Engine CPU resources.
n Auto mode enables NSX Advanced Load Balancer to determine the optimal settings.
Note By default, the Compression Mode is Auto. The content compression depends on
the client’s RTT, as mentioned below:
n Custom mode allows the creation of custom filters that provide more granular control
over who should receive what level of compression.
n Remove Accept Encoding Header removes the Accept-Encoding header, which is sent
by HTTP 1.1 clients to indicate they can accept compressed content. Removing the header
from the request prior to sending the request to the server allows NSX Advanced Load
Balancer to ensure the server will not compress the responses. Only NSX Advanced Load
Balancer will perform compression.
Custom Compression
This section covers the steps to create a custom compression filter.
Procedure
n Matching Rules: Determine whether the client (by Client IP or User-Agent string) is eligible
for compression via the associated Action. If both Client IP and User Agent rules are
populated, both must be true for the compression action to fire.
n Client IP Address allows you to use an IP Group to specify eligible client IP addresses.
For example, an IP Group called Intranet that contains a list of all internal IP address
ranges. Clearing the Is In button reverses this logic, meaning that any client that is not
coming from an internal IP network will match the filter.
n User-Agent matches the client’s User-Agent string against an eligible list contained
within a String Group. The User-Agent is a header presented by clients indicating
the type of browser or device they may be using. The System-Devices-Mobile Group
contains a list of HTTP User-Agent strings for common mobile browsers.
3 The Action section determines what will happen to clients or requests that meet the Match
criteria, specifically the level of HTTP compression that will be used.
n Aggressive compression uses Gzip level 6, which will compress text content by about
80% while requiring more CPU resources from both NSX Advanced Load Balancer and the
client.
n Normal compression uses Gzip level 1, which will compress text content by about 75%,
which provides a good mix between compression ratio and the CPU resources consumed
by both NSX Advanced Load Balancer and the client.
n No Compression disables compression. For clients coming from very fast, high bandwidth,
and low latency connections, such as within the same data center, compression may
actually slow down the transmission time and consume unnecessary CPU resources.
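The two Gzip levels can be compared directly with Python's stdlib gzip module. Note that the highly repetitive sample body below compresses far better than the roughly 75-80% typical for real pages.

```python
import gzip

# A sample HTML body; repetitive text compresses far better than typical pages.
html = b"<html><body>" + b"<p>lorem ipsum dolor sit amet</p>" * 600 + b"</body></html>"
normal = gzip.compress(html, compresslevel=1)      # "Normal" compression (Gzip level 1)
aggressive = gzip.compress(html, compresslevel=6)  # "Aggressive" compression (Gzip level 6)
print(len(html), len(normal), len(aggressive))
```

Level 6 trades extra CPU on both sides of the connection for a smaller payload, which is the trade-off the Aggressive setting makes.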
Caching
NSX Advanced Load Balancer caches HTTP content, thereby enabling faster page load times for
clients and reduced workloads for both servers and NSX Advanced Load Balancer.
When a server sends a response (for example logo.png), NSX Advanced Load Balancer adds the
object to its HTTP cache and serves the cached object to subsequent clients that request the same
object. Caching thus reduces the number of connections and requests sent to the server.
Enabling caching and compression allows NSX Advanced Load Balancer to compress text-based
objects and store both the compressed and original uncompressed versions in the cache.
Subsequent requests from clients that support compression will be served from the cache. NSX
Advanced Load Balancer does not need to compress every object every time, greatly reducing the
compression workload.
n HTTP/HTTPS
NSX Advanced Load Balancer also supports caching objects from servers in HTTPS pools.
n Request Headers:
n Cache-Control: no-store
n Authorization
n Response Headers:
n Cache-Control: no-cache
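The cacheability rules above can be sketched as a simple predicate. The header handling here is deliberately simplified; real Cache-Control parsing is more involved.

```python
def is_cacheable(request_headers, response_headers):
    # Mirrors the header rules above: any of these makes the exchange uncacheable.
    if "no-store" in request_headers.get("Cache-Control", ""):
        return False
    if "Authorization" in request_headers:
        return False
    if "no-cache" in response_headers.get("Cache-Control", ""):
        return False
    return True

print(is_cacheable({"Cache-Control": "no-store"}, {}))     # False
print(is_cacheable({}, {"Cache-Control": "no-cache"}))     # False
print(is_cacheable({}, {"Cache-Control": "max-age=600"}))  # True
```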
Note Caching might not work when policies or DataScripts are configured on the virtual
service. Consider disabling caching in the application profile if policies and DataScripts need to
be applied to the virtual service.
Cache Size
The size of a cache is indirectly determined based on the memory allocation for a Service
Engine handling a virtual service that has caching enabled. This is determined within the SE
Group properties via the connection memory slider. Memory allocated to buffers is used for TCP
buffering (and hence accelerating), HTTP request and response buffering, and also for HTTP
cache.
X-Cache - NSX Advanced Load Balancer adds an HTTP header labeled X-Cache for any response
sent to the client that was served from the cache. This header is informational in nature, and
indicates that the object was served from an intermediary cache.
Age Header - NSX Advanced Load Balancer adds a header to the content served from cache that
indicates to the client the number of seconds that the object has been in an intermediate cache.
For example, if the originating server declared that the object must expire after 10 minutes and it
has been in the NSX Advanced Load Balancer cache for 5 minutes, the client knows that it must
only cache the object locally for 5 more minutes.
Date Header - If a date header was not added by the server, then NSX Advanced Load Balancer
adds a date header to the object served from its HTTP cache. This header indicates to the client
when the object was originally sent by the server to the HTTP cache in NSX Advanced Load
Balancer.
Cacheable Object Size - The minimum and maximum size of an object to be cached, in bytes.
Most objects smaller than 100 bytes are web beacons and must not be cached despite being
image objects. Large objects, such as streamed videos can be cached, though it might not be
appropriate and might saturate the cache size quickly.
Cache Expire Time - An intermediate cache must be able to guarantee that it is not serving
stale content. If the server sends headers indicating how long the content can be cached (such
as cache control), the NSX Advanced Load Balancer uses those values. If the server does not
send expiration timeouts and the NSX Advanced Load Balancer is unable to make a strong
determination of freshness, it stores the object for no longer than the duration of time specified by
the Cache Expire Time.
Heuristic Expire - If a response object from the server does not include the Cache-Control header
but includes a Last-Modified header, the NSX Advanced Load Balancer uses this time to
calculate the cache-control expiration, which supersedes the Cache Expire Time setting for this
object.
Cache URI with Query Arguments - This option allows caching of objects whose URI includes a
query argument. Disabling this option prevents caching these objects. When enabled, the request
must match the URI query to be considered a hit. Following are two examples of URIs that include
queries. The first example might be a legitimate use case for caching a generic search, while the
second, a unique request posing a security liability to the cache.
n www.search.com/search.asp?search=caching
n www.foo.com/index.html?loginID=User
Cacheable Mime Types - Statically defines a list of cacheable object types. This can be a String
Group, such as System-Cacheable-Resource-Types, or a custom comma-separated list of Mime
types that the NSX Advanced Load Balancer must cache. If no Mime Types are listed in this field,
the NSX Advanced Load Balancer by default assumes that any object is eligible for caching.
Non-Cacheable Mime Types - Statically defines a list of object types that are not cacheable. This creates an exclusion list that is the opposite of the cacheable list. An object listed in both lists is not cached.
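The precedence between the two lists can be sketched as below; the MIME-type lists are illustrative examples, not system defaults:

```shell
# A type on the non-cacheable list is excluded even if it also appears
# on the cacheable list; otherwise it must be on the cacheable list.
is_cacheable() {
  cacheable="text/css,text/javascript,image/png"
  non_cacheable="image/png"
  case ",$non_cacheable," in *",$1,"*) return 1 ;; esac  # exclusion list wins
  case ",$cacheable," in *",$1,"*) return 0 ;; esac
  return 1
}
is_cacheable text/css && echo "text/css is cacheable"
```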
The following commands show how to perform this action from the CLI.
Procedure
1 Check to see if the desired object exists within the cache. The truncated example below
returns the stats from the object found in the cache.
--------------------------------------------------------------------------------
URI: /path1/analytics.js
ctype: text/javascript
raw_key: pool-0-4]avinetworks.com:/path1/analytics.js
key: e6ce7ac2ab8668a8acc9f2d505281412
key_extn:
data_size: 146398 meta_size: 172 hdr_size: 414
body_size: 145984
date_time: 1449185388 last_mod_time: -1 etag:
"-725089702"
(Thu Dec 3 23:29:48 2015) (Wed Dec 31 23:59:59
1969)
in_time: 1449187395 exp_age: 120 init_age: 2007
last_used:
(Fri Dec 4 00:03:15 2015) (Fri Dec 4 00:05:15
2015)
--------------------------------------------------------------------------------
Use Cases
This section covers the following topics:
n Setting up Microsoft Exchange Server 2016 with NSX Advanced Load Balancer
n Internal AMQP
n Mobile App Microservice
Often, API response time directly impacts the end-user experience; therefore, it is critical to also
have a monitoring tool that can provide complete API transaction logs.
n Score API quality based on response time, response code error ratio, and resource
utilization
n Pinpoint API bottlenecks: are they in the client-facing network? Datacenter network? API
gateway itself?
n Full API transaction logs per client IP, device type, and so on
n DDoS attack mitigation with detailed attack information (example: Top-N attackers)
Horizontal scale: You do not have to be caught off guard by a sudden traffic surge. NSX Advanced Load Balancer can dynamically adjust the capacity of the load balancer infrastructure by scaling out and scaling in its data plane engines, called Service Engines (SEs).
Analytics and visibility: Analytics and visibility play a key role in troubleshooting issues and
evaluating risks that can affect end-user experience. Unlike other ADC vendors, NSX Advanced
Load Balancer provides an end-to-end timing chart, pinpointing latency distribution across
segments of a client, the ADC, and servers. NSX Advanced Load Balancer understands the
resource utilization of servers, combines it with observed performance, and presents the result
as a health score. By looking at the health score, you can judge the current end-user experience
and risk coming from resource utilization.
SSL offload and management with ease of use: Simply select NSX Advanced Load Balancer's SSL Everywhere and import a certificate; NSX Advanced Load Balancer takes care of the rest. You do not have to convert a certificate and configure multiple settings to make Exchange secure. Other significant advantages include SSL compute offload and HTTP visibility. In particular, SSL compute offload reduces the number of CAS units and the related license cost. By terminating SSL on NSX Advanced Load Balancer, you can take full advantage of its analytics and visibility engine.
Cloud-optimized deployment and high availability: The NSX Advanced Load Balancer Controller automatically discovers available resources, such as networks and servers, in the virtual infrastructure. This makes IT admins less vulnerable to human error. In addition, when an SE or its hypervisor has a problem, the Controller detects it, automatically looks for the best available hypervisor, and launches an SE to recover. Unlike other ADC solutions, this approach does not require a redundant device.
Deployment Architecture
The deployment architecture diagram shows a load balancer distributing HTTP, POP, IMAP, SMTP, and SIP+RTP traffic to the Client Access (CAS) tier (IIS/HTTP proxy, POP, IMAP, SMTP, UM) and onward to the Mailbox (MBX) tier (IIS, POP, IMAP, Transport, UM, RpcProxy, RPS, OWA, EAS, EWS, ECP, OAB, RPC CA, MDB, MailQ).
Exchange Server 2016 has two server roles, the Client Access server (CAS) and the Mailbox server, which comprise a CAS array and a DAG (Database Availability Group) respectively, for high availability and increased performance. The CAS provides client protocols, SMTP, and a Unified Messaging Call Router. The client protocols include HTTP/HTTPS and POP3/IMAP4. The UM Call Router redirects SIP traffic to a Mailbox server.
Note An external load balancer is required to build a CAS array. Unlike the CAS array, the DAG does NOT require an external load balancer. A server can take both the Client Access and Mailbox roles.
Outlook Web Access - Enables any Web browser to connect to the Exchange server, offering an Outlook-client-like experience in the browser.
n In this case, a Windows Server 2012 instance (using a 2012 ISO) was brought up on a VM with an 8-core CPU, 8 GB of RAM, and 100 GB of disk capacity. (Ideally, the disk should be partitioned into four drives for the OS, Logs, the Exchange install directory, and Databases.)
n Exchange Server 2016 then needs to be installed on the Windows Server 2012 VM. An Exchange Server license can be obtained free of cost for 180 days using personal Outlook credentials, from the Microsoft Exchange Server 2016 product page or the Microsoft Exchange Server 2016 download page.
n Exchange 2016 requires the server to have a static IP address.
n Before Exchange 2016 can be installed, its prerequisites must be installed; otherwise, the 2016 setup.exe fails with multiple errors. The prerequisites can be installed using Windows PowerShell on the 2012 server VM that was created. Once they are installed, the server needs to be rebooted. The prerequisites are:
n .NET 4.5 support (ideally 4.5.2, but it is upgraded to 4.5.2 automatically once setup.exe runs)
n Desktop Experience
n Internet Information Services (IIS)
n Windows Failover Clustering
n After the reboot, install Unified Communications Managed API (UCMA) 4.0 Runtime: download
page
n In case the server chosen is 2012 RTM, Windows Management Framework 4.0 needs to be
installed as well: download page
n Install the Active Directory Remote Server Administration Tools plugin on the Exchange server
using PowerShell.
n Install Active Directory per the steps outlined here: Setting up an Active Directory Lab (Part 1).
n An important step to note is that the DNS Resolver under System Settings in NSX Advanced Load Balancer should point to the local DNS server set up during the Active Directory install. In this case, AD, Exchange 2016, DNS, and IIS were installed on a single server.
n From the link above, ensure that there is a client machine that can be joined to the domain created (avitest.com in this case) and that the user created in Active Directory can log in to it. For test purposes, a Windows 7 test machine (a VM spawned from a Windows 7 ISO) was chosen as the client machine, joined to the avitest.com domain, and used to log in with the test user's AD credentials.
n Once the client machine is part of the domain, switch to the PowerShell prompt on the 2012 server where the 2016 setup file resides and configure Active Directory to receive Exchange 2016. The Exchange Schema version should be 15317. Verify this using ADSI Edit.
n The setup.exe for 2016 can now be executed; set it up for the Mailbox role.
n Once set up, ECP can be browsed using https://servername/ecp (in our case the server name
is lab-dc01).
n Since this is a lab-only environment, the split-DNS namespace configuration for external and internal access can be skipped. In this case, the internal and external hostnames were kept the same, lab-dc01.avitest.com, for all the Exchange services. (This needs to be done from the ECP login as above.)
n MAPI and auto-discover services cannot be configured through ECP in the browser and need
to be configured via Exchange Management Shell.
n Log in to the Exchange Admin Center and create a self-signed certificate for the server. Export it to the desktop, as it will be used for import into the virtual service that we create.
n Create two mailbox users using EAC so that emails can be sent from two accounts.
n An Exchange client could be on Outlook 2016 or Outlook 2013. For tests, we used the OWA
access through a normal Chrome/Firefox browser.
n To enable SSL offload on Exchange 2016, make changes to each Exchange service as described in the Configuring SSL offloading in Exchange 2013 Microsoft TechNet article.
n To set up a secondary Exchange Server, follow the steps above. An AD installation is not needed, but ensure that the secondary Exchange Server joins the existing domain; do NOT create a new forest domain.
Load-Balancing Policies
The diagram shows an Avi SE hosting three virtual services, VS-POP3, VS-IMAP4, and VS-SMTP, each mapped to its pool (Pool-POP3, Pool-IMAP4, Pool-SMTP) in front of the CAS and MBX servers.
NSX Advanced Load Balancer supports the deployment of an Exchange solution in three different
ways:
1 One virtual service (VS) and one pool: This is the quickest way to deploy the Exchange service
and requires only one virtual IP address. However, individual health monitoring for different
services is not possible. If you deploy Exchange 2016, you have to choose one persistence
method across all services; this may result in suboptimal operational results because different
Exchange 2016 services require different persistence methods for the best result. The statistics
and analytics information from the NSX Advanced Load Balancer system will be an aggregate
of all services.
2 One virtual service and multiple pools: This requires configuring a Layer 7 policy on NSX Advanced Load Balancer to forward an HTTP message to a corresponding pool based on the host header. This deployment requires only one virtual IP address and enables individual health monitoring for different services. In addition, for Exchange 2016, NSX Advanced Load Balancer supports a different persistence method per pool. This deployment enables NSX Advanced Load Balancer to provide statistics and analytics information on a per-pool basis.
3 Multiple virtual services and one pool per virtual service: This requires as many IP addresses as there are Exchange services to load balance. Each virtual service will have one pool. This deployment enables NSX Advanced Load Balancer to provide statistics and analytics information on a per-VS basis.
In this section, we are going to use the second deployment model. We will create a single virtual service for all services, with multiple pools. Each pool corresponds to an Exchange service. The table below lists all the Exchange services, the ports to load balance, and the health check methods. Exchange 2016 provides pre-defined HTML pages for health monitoring by a load balancer.
All the HTTP-based services use service port 443/HTTPS, health check port 80/HTTP, and host lab-dc01.avitest.com; they differ only in the health check URL:
n Outlook Anywhere - /rpc/healthcheck.htm
n Outlook Web Access - /OWA/healthcheck.htm
n Exchange Administration Center - /ECP/healthcheck.htm
n Exchange Management Shell - /PowerShell/healthcheck.htm
n AutoDiscover - /Autodiscover/healthcheck.htm
n ActiveSync - /Microsoft-Server-ActiveSync/healthcheck.htm
n Offline Address Book - /OAB/healthcheck.htm
n Messaging Application Programming Interface - /MAPI/healthcheck.htm
The remaining services are health checked on their own service ports on lab-dc01.avitest.com:
n POP3 - service port 995/POP3 with SSL - health check on TCP port 995
n IMAP4 - service port 993/IMAP4 with SSL - health check on TCP port 993
n SMTP - service port 465/SMTP with SSL - health check on TCP port 465
Health Monitor
1 Navigate to Templates > Profile > Monitor.
2 Create an HTTP health monitor for each of the eight Exchange services, using the URLs listed in table 1. Client Request Data needs to be set to GET /<service path>/healthcheck.htm HTTP/1.1. For example, for OWA it is set to GET /OWA/healthcheck.htm HTTP/1.1.
3 Create a TCP health monitor for each of POP3, IMAP4, and SMTP on the specific port numbers shown in table 1.
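The pattern for the Client Request Data line can be sketched as below (monitor_request is an illustrative helper, not part of the product):

```shell
# Build the Client Request Data line for an Exchange service health monitor
monitor_request() {
  # $1 = virtual directory of the service, e.g. OWA, ecp, rpc
  printf 'GET /%s/healthcheck.htm HTTP/1.1' "$1"
}
monitor_request OWA   # the OWA monitor's request line from step 2
```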
SSL Certificate
1 Navigate to Template > Profile > Certificate.
2 Click Create > Application Certificate. Import the self-signed certificate that was exported when the CSR was created on the Exchange Server. The certificate exported from the Exchange Server is in PFX format and needs to be converted to PEM format before it can be imported into the NSX Advanced Load Balancer UI, for example with "openssl pkcs12 -in cert.PFX -out cert.pem -nodes".
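The conversion step can be exercised end to end as below; the key/certificate pair, file names, and the "secret" export password are stand-ins for the actual Exchange export:

```shell
# Create a throwaway self-signed pair standing in for the Exchange certificate
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out crt.pem \
  -days 1 -nodes -subj "/CN=lab-dc01.avitest.com" 2>/dev/null
# Bundle it into PFX, the format in which Exchange exports certificates
openssl pkcs12 -export -out cert.pfx -inkey key.pem -in crt.pem -passout pass:secret
# The conversion from the guide: PFX to PEM for import into the NSX ALB UI
openssl pkcs12 -in cert.pfx -out cert.pem -nodes -passin pass:secret
```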
Virtual Service
1 Navigate to Application > Virtual Services. Create an L7 Virtual Service for Exchange service
and associate it with other objects, such as an application profile, health monitor, SSL, etc.
2 For HTTPS, use System-Secure-HTTP as the Application Profile and System-TCP-Proxy as the TCP/UDP Profile. Note: When HTTPS or the System-Secure-HTTP profile is used, disable the "Secure Cookies" and "HTTP-only Cookies" checkboxes in the Security tab for that HTTP profile.
3 Create three L4 virtual services, one each for POP3, IMAP4, and SMTP. Use System-L4-Application and System-TCP-Proxy, optionally with the same IP address as the L7 VS, but with different service port numbers than the L7 VS.
Pool
n This can be accessed separately or from the Virtual Services configuration wizard. The pool
is a construct that includes servers, load balancing method, persistence method, and health
monitor. Add servers across which load is to be balanced and choose Least-Connections for
the load balancing method. Below is an example of a pool created for the Outlook Web Access
(OWA) service.
n For the active health monitor, choose the one created above; in this case, the OWA health monitor.
n The server IP address is the IP of the Exchange server which resolves to lab-dc01.avitest.com.
HTTP Policy
1 This can be added after creating a virtual service or from the Virtual Service configuration
wizard.
2 Create an HTTP policy that includes eight HTTP request rules, each corresponding to an Exchange service.
4 Navigate to Application > Virtual Services. Click the virtual service's edit icon. This opens the Edit Virtual Service menu.
8 Select Path and Begins With for Matching Rules. Then, enter /rpc.
9 Select Content Switch and Pool for Action. Then, select a corresponding pool, e.g., pool-oa.
Below we can see an example of creating the same for an L7 virtual service for OWA.
Below we see all HTTP-based policies created for the L7 virtual service.
n Repeat the steps for each Exchange pool. Refer to table 2 for URLs and pools.
Service - Pool - Port - Path
n Exchange Administration Center - pool-eac - 80/HTTP - /ecp/
n Exchange Management Shell - pool-ems - 80/HTTP - /powershell/
n ActiveSync - pool-as - 80/HTTP - /microsoft-server-activesync/
n Messaging Application Programming Interface - pool-mapi - 80/HTTP - /mapi/
Load Balancing
The diagram shows an external client resolving the service through an external DNS provider (GoDaddy, etc.) to 71.72.221.140 and establishing an HTTPS connection. The external address 71.72.221.140 is translated to the internal address 10.15.1.7, behind which the Exchange server Ex16-01 (10.15.1.13) resides.
n To support load balancing across Exchange Servers on a single VIP, choose the “Round
Robin” load balance option under all pools that have been configured. Below we show this
being done for the owa-pool.
n Add the secondary exchange server IP under all pools. This is seen below for the owa-pool.
With non-significant logs enabled, a total of 43 log entries are observed, including the successful ones (return code = 200). The most recent log entry is shown expanded. The other 42, collapsed into single-line rows, are not shown in the screenshot. The L7 virtual service successfully content-switched requests to the pool-owa pool as a result of the rule-pool-owa request policy rule.
The NSX Advanced Load Balancer solution provides additional information about the client from
which the request originated, including the client’s operating system (Android), device type (Moto
G Play), browser (Chrome Mobile), SSL version (TLSv1.2), certificate type (RSA), and so on.
Passive FTP
NSX Advanced Load Balancer supports passive FTP using the following configuration:
Exactly one SE in an SE group may deliver the FTP service at any given time. Virtual service
scale-out to two or more SEs is not supported with NSX Advanced Load Balancer FTP. Therefore,
legacy active/standby and 1+M elastic HA are supported. Active/active elastic HA is not.
Application Profile: L4
Service Ports: Set to Advanced via the NSX Advanced Load Balancer UI
Port: 21 to 21
Pool Settings:
Persistence: Client IP
Active FTP
Active FTP is not supported. NSX Advanced Load Balancer recommends the use of passive FTP as
a workaround.
In passive FTP, the client sends a PASV command to the server on port 21. The server responds with the server IP address and a data port (greater than 1023) for the client to connect to. When a virtual IP on the load balancer is used for passive FTP, the server IP in the response has to be changed to the virtual IP so that the client connects to the load balancer instead of connecting to the server directly. A DataScript is used to rewrite the server IP to the virtual IP in the FTP payload of the server response.
4 Configuring Layer 4 virtual service with port configuration for the data channel
n Paste the below bash script for the FTP health monitor in the Script Code section.
#!/bin/bash
curl -s "ftp://$IP/$path" --ftp-pasv -u "$user:$pass"
n Enter the Username, Password, and the Filepath in the Script Variables section.
The file path is the absolute path of the file to be checked by the health monitor. curl opens an FTP connection to the servers in the pool using the provided username and password and requests a directory listing of the path. curl runs in silent mode (the -s option) and returns a directory listing output only if a file exists at the file path, in which case the health monitor passes. If no file exists at the file path, the health monitor fails. The path is optional; if not specified, curl retrieves the root directory listing.
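How the script's variables expand at run time can be illustrated as follows (all values are examples; the real ones come from the Script Variables section of the UI):

```shell
IP=10.15.1.13; user=ftpuser; pass=secret; path=healthcheck.txt
# The command line the monitor effectively runs against each pool server
cmd="curl -s ftp://$IP/$path --ftp-pasv -u $user:$pass"
echo "$cmd"
```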
Configuring Pool
To configure the pool with required FTP servers, on NSX Advanced Load Balancer UI navigate to
Applications > Pools and click Create Pool.
Consistent Hash with Source IP Address is chosen as the load balancing algorithm to avoid a
different server being selected by each SE if a virtual server is scaled out to multiple Service
Engines.
n Click +Add Active Monitor and from the dropdown list select the health monitor configured in
the previous step - FTP.
n Under Other Settings, click the checkbox for Disable Port Translation to enable the option.
The FTP data channel will be established on an ephemeral port and this port has to be used to
send the traffic to the server without any modification. Hence, Disable Port Translation has to be
enabled.
Configuring DataScript
To configure the Layer 4 response DataScript, on NSX Advanced Load Balancer UI navigate to
Templates > Scripts > DataScripts and click Create.
Add the below DataScript to the VS Datascript Evt L4 Response Event Script section and click
Save.
1 Navigate to Applications > Virtual Service, click on Create Virtual Service, and select
Advanced Setup.
2 Under Profiles,
Note From a security perspective, it is recommended to identify the specific passive port
range configured on the FTP servers and to configure this port range under the Virtual Service
rather than the full range of high ports.
5 Under Pool, click the dropdown and select the pool configured - FTP.
6 Click Next.
7 In the Policies tab, under DataScripts, click + Add DataScript. From the dropdown, select the
DataScript configured in the previous section - FTP-DataScript.
9 Click Next to navigate to the next two tabs and Save the configuration.
Additional Configuration
The FTP servers could enforce that the control and data connections are sourced from the same IP. Hence, the same Service Engine should load balance both the control and the data traffic. This can be achieved by deploying the Service Engines in an active/standby high availability mode.
For deployment in active/active mode with native Layer 2 scaleout, to ensure the same Service
Engine load balances the traffic to the FTP servers, configure the following on the virtual service
using the CLI:
Note When BGP/ECMP scale-out is used, as in FTP load-balancing deployments in Azure or GCP, flows reach the Service Engines based on the routing hash performed on the upstream device. Therefore, the above CLI configuration is not applicable to BGP/ECMP scale-out.
The virtual service is now ready for load balancing FTP. The FTP server IP for clients would be the
VIP configured on the FTP virtual service.
The support for load balancing Active FTP is available starting with NSX Advanced Load Balancer
release 20.1.6. NSX Advanced Load Balancer uses the Layer 4 application virtual service that
listens on the FTP port and the preserve_client_ip option to achieve the Active FTP load
balancing.
Prerequisites
n Knowledge of Active FTP and its configuration.
n Preserve client IP - See Preserve Client IP for the deployment requirements and
configuration options.
n NAT Policy - See Configuring NAT on NSX Advanced Load Balancer Service Engine for
the deployment requirements and configuration options.
The IP routing feature is required for NAT functionality; hence, the SE HA mode of legacy active/standby is mandatory.
Topology
NSX Advanced Load Balancer is logically inline between the user’s network and the FTP Server
Network. All traffic to FTP Servers and the return traffic from FTP Servers to users flow to the NSX
Advanced Load Balancer (Service Engines).
In active mode FTP, the client connects from a random port (N > 1023) to the FTP server's command port, port 21. The client then starts listening on port N+1 and sends the FTP PORT command, specifying port N+1, to the FTP server.
The server will then connect back to the client’s specified data port from its local data port, which
is port 20.
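The port-encoding convention of the PORT command can be sketched as below (ftp_port_arg is an illustrative helper; the IP address and port are examples):

```shell
# Active FTP announces the data port as "PORT h1,h2,h3,h4,p1,p2",
# where the listening port is p1*256 + p2.
ftp_port_arg() {
  ip_commas=$(printf '%s' "$1" | tr '.' ',')
  printf 'PORT %s,%d,%d' "$ip_commas" "$(( $2 / 256 ))" "$(( $2 % 256 ))"
}
ftp_port_arg 10.0.0.5 1051   # client listening on data port N+1 = 1051
```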
To support the active mode FTP, the following communication channels need to be opened at the
server-side firewall:
n FTP server’s port 21 to ports > 1023 (Server responds to client’s control port)
n FTP server’s port 20 to ports > 1023 (Server initiates data connection to client’s data port)
n FTP server’s port 20 from ports > 1023 (Client sends ACKs to server’s data port)
n For FTP load balancing, the SE exists between the client and server. FTP virtual service
(Listening on port 21) is configured on the SE, and the FTP servers are configured as the pool
members. Also, the Preserve Client IP Address is enabled on the virtual service application
profile.
n A floating interface IP is configured that acts as the default gateway for the back-end server network.
n If the deployment network has a firewall, configure NAT for the server's connections with the FTP virtual service IP address.
n In the absence of a firewall in the deployment network, a random NAT IP address can be configured, and active FTP still works as expected.
Configuration
Follow the steps mentioned below to configure NSX Advanced Load Balancer for FTP load
balancing:
1 Create FTP virtual service using System L4 Application with FTP port (21) as listening service.
3 Configure the floating interface IP address under the Network Service, which acts as the
default gateway for the back-end server network.
a Match Criteria: Server subnet as the source IP address match and source port 20 (for active FTP).
b Action: The NAT IP should be the same as the virtual service IP address in step 1. (This is to prevent firewall problems in front-end deployments.)
5 Attach the above NAT Profile to the Network Service to ensure that server-originated FTP requests are NAT'ed properly.
Note The rule includes the server network and source port 20 in the match criteria. The source port match is necessary to match only FTP traffic; otherwise, SSH connections from the client to the server will fail.
Supportability
The following tech-support commands and packet captures are available to debug problems with Active FTP.
FTP VS:
n show serviceengine <se> vshash # listening service on VNIC with FTP command port 21.
Packet Captures:
Prerequisites
n Knowledge of Cisco ISE and its configuration is required before configuring NSX Advanced
Load Balancer to load balance RADIUS traffic to Cisco ISE.
Topology
The topology diagram shows end users connecting through a network access device (NAD) and router to the Avi load balancer (Service Engine), which fronts three ISE Policy Service nodes (PSNs): ISE-PSN-1, ISE-PSN-2, and ISE-PSN-3.
As shown in the topology, NSX Advanced Load Balancer is logically inline between the user's network and the ISE Policy Service nodes (PSNs). All traffic to the ISE PSNs, as well as return traffic from the ISE PSNs to users, flows through the NSX Advanced Load Balancer load balancers (Service Engines).
Scenario
An NSX Advanced Load Balancer VIP is configured as a RADIUS server on the network access
device (NAD). Once NSX Advanced Load Balancer receives the RADIUS authentication traffic from
the users, it is load balanced to one of the ISE PSNs using configured load balancing algorithms.
A persistence entry is created using a DataScript that parses the RADIUS requests and creates an entry based on the configured RADIUS attributes. Any subsequent RADIUS authentication traffic or DHCP profiling traffic from the same client is sent to the same server using the persistence entry.
The Cisco ISE sends a Change of Authorization (CoA) request directly to the NAD. The NAD expects the source IP of the CoA to be that of the configured RADIUS server; in this case, the NSX Advanced Load Balancer VIP.
The NAT policy has been configured on NSX Advanced Load Balancer to NAT the source IP of the
server to the VIP if the destination port of the packet is UDP 1700.
Configuration
Follow the below-mentioned steps to configure NSX Advanced Load Balancer for RADIUS load
balancing:
1 Configure DataScript to parse RADIUS and DHCP packets and persistence using required
fields.
2 Configure the health monitor for RADIUS. The SE IP needs to be configured as a NAD on the ISE, with the same credentials configured on both the ISE and NSX Advanced Load Balancer.
5 Configure NAT for CoA and attach to required Service Engine group.
DataScript
DataScript Logic
DHCP packets are parsed and the client-identifier populated by the host, if any, is noted. The client-identifier is expected to be the host MAC address. If the client-identifier is populated, it matches the persistence entry created for RADIUS using the calling-station-id, and the DHCP packet is sent to the same PSN as the RADIUS traffic. If the client-identifier is not present in the DHCP packet, the packet is forwarded to one of the three ISE PSNs using the configured load balancing algorithm.
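The decision can be summarized in this simplified sketch (plain shell for illustration, not Avi DataScript; the "LB" fallback marker is illustrative):

```shell
persistence_key() {
  # $1 = DHCP client-identifier (the host MAC address), possibly empty
  if [ -n "$1" ]; then
    printf '%s' "$1"   # matches the RADIUS entry keyed on calling-station-id
  else
    printf 'LB'        # no identifier: use the configured load balancing algorithm
  fi
}
persistence_key aa:bb:cc:dd:ee:ff
```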
Field Description
Description Specify a description for the health monitor.
Configuring Pool
1 A single pool needs to be configured for all protocols. The pool members are the ISE PSNs. The default server port should be 1812.
4 Click Save.
Note
a The application profile selected should be System-L4-Application with the Preserve Client
IP option enabled.
2 Configure all required ports for RADIUS and DHCP. For DHCP, use System-UDP-Per-Pkt by overriding the TCP/UDP profile; the per-packet UDP profile is used because the ISE does not respond to the DHCP packets. If HTTPS is configured, it should be overridden to use the System-TCP-Proxy profile.
4 The script parses requests from the client towards the server; hence, it is a request event
script.
6 In the Pools section, select the pool configured for RADIUS and DHCP.
8 Select required protocol parsers. Select Default-DHCP and Default-Radius in this DataScript.
9 Attach the DataScript to the VS. Navigate to Edit Virtual Service > Policies > DataScripts >
Add DataScript and select the configured DataScript. Click Save DataScript.
Configuring NAT
NAT rules are configured as a policy called nat policy via the NSX Advanced Load Balancer CLI
and are attached to the Service Engine group. NAT rules are per-VRF. NAT rules match criteria
can be from source/dest IP/ranges or source/dest port/ranges.
The action for NAT in the ISE use case is to set the source IP to the virtual service VIP for CoA packets. The ISE sends the CoA packets to UDP port 1700 (by default), which provides the match criteria. The nat_ip is the IP to which the source IP of the matched traffic is translated; in this case, it is the NSX Advanced Load Balancer VIP of the RADIUS virtual service.
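The rule's match/action logic can be summarized in this simplified sketch (plain shell for illustration, not the actual nat-policy CLI syntax; the addresses are examples):

```shell
nat_source_ip() {
  # $1 = packet source IP (an ISE PSN), $2 = destination UDP port, $3 = nat_ip (the VIP)
  if [ "$2" -eq 1700 ]; then
    printf '%s' "$3"   # CoA packet: translate the source to the RADIUS VS VIP
  else
    printf '%s' "$1"   # other traffic is left untranslated
  fi
}
nat_source_ip 10.1.1.10 1700 10.1.1.100
```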
Refer to Configuring NAT on NSX Advanced Load Balancer Service Engine for more details on
NAT configuration. It is recommended to use a separate Service Engine group for RADIUS load
balancing.
Note
1 NAT will work only if IP routing is enabled on the SE group, hence all the limitations that are
applicable to enable IP routing will apply here. SEs must be in legacy active/standby. Refer to
Default Gateway (IP Routing on NSX Advanced Load Balancer SE) for more details.
2 For RADIUS load balancing with ISE, it is recommended to preserve the client IP, since the
ISE sends CoA to the NAD IP which is obtained from the IP header and not the IP from the
RADIUS header. If the client IP is not preserved, the ISE will see SE as NAD and CoA will fail.
Refer to Preserve Client IP for more details.
3 NAT will work only for UDP traffic as of release 18.2.5. It will not work for any other traffic
(ICMP/TCP).
Since NSX Advanced Load Balancer SEs are configured with IP routing enabled, any traffic that
does not require load balancing and is destined directly to/from the ISE PSN IPs will be routed by
the SE from/to network hosts.
Health Monitoring
This section describes the details of the health monitors used by NSX Advanced Load Balancer. Before load balancing a client to a server, NSX Advanced Load Balancer ensures that the server can accommodate the additional workload and is performing correctly.
Health monitors perform this function either by actively sending a synthetic transaction to a server
or by passively monitoring client experience with the server. NSX Advanced Load Balancer sends
active health monitors periodically that originate from Service Engines hosting the virtual service.
n A pool that is not attached to a virtual service does not send health monitors and is considered an inactive configuration.
n A pool can have multiple concurrent active health monitors, such as ping, TCP, and HTTP, as well as a passive monitor.
n All active health monitors must be successful for the server to be marked up.
Active health monitors originate from the Service Engines hosting the virtual service. Each SE
must be able to send monitors to the servers, which ensures there are no routing or intermediate
networking issues that might prevent access to a server from all the active Service Engines. If one
SE marks a server up and another SE marks a server down, each SE will include or exclude the
server from load balancing according to their local monitor results.
n DNS Monitor
n External Monitor
n GSLB Monitor
n HTTP Monitor
n HTTPS Monitor
n Ping Monitor
n RADIUS Monitor
n TCP Monitor
n UDP Monitor
n SIP Monitor
With active health monitors, NSX Advanced Load Balancer marks a server down after the specified number of consecutive failures and no longer sends new connections or requests to it until the server again passes the periodic active health monitors.
With passive health monitors, server failures will not cause NSX Advanced Load Balancer to mark
that server as down. Rather, the passive health monitor will reduce the number of connections or
requests sent to the server relative to the other servers in the pool by about 75%. Further failures
may increase this percentage.
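The roughly 75% reduction can be pictured as a weight adjustment in the load-balancing decision. The sketch below is not NSX Advanced Load Balancer's actual algorithm, only a minimal illustration of how cutting a server's effective weight to about a quarter shifts most new connections to its healthy pool peers (server names are placeholders):

```python
import random

# Hypothetical illustration only: a passive monitor that observes failures
# reduces a server's effective weight to ~25% of normal, so the healthy
# server receives most of the new connections.
servers = ["server-a", "server-b"]
weights = {"server-a": 1.0, "server-b": 1.0}

weights["server-b"] *= 0.25  # passive monitor saw failures on server-b

random.seed(42)  # deterministic for the example
picks = random.choices(servers, weights=[weights[s] for s in servers], k=10_000)
share_b = picks.count("server-b") / len(picks)
print(f"server-b share: {share_b:.0%}")  # roughly 20% instead of 50%
```

With weights 1.0 and 0.25, server-b's expected share is 0.25 / 1.25 = 20% of new connections rather than half.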
Note Best practice is to attach both a passive and an active health monitor to each pool.
2 Click the edit icon at the top right to edit health monitors.
5 Click Save.
n Search: Click the search icon to search across the list of objects.
n Create: Click the Create button to open the New Health Monitor window.
n Edit: Click the edit icon to open the Edit Health Monitor window.
n Delete: You can delete a profile if it is not currently assigned to a virtual service. An error
message will indicate the VS referencing the profile. You can edit the default system profiles,
but cannot delete them.
The table on this tab provides the following information for each health monitor profile:
Field Description
Send Interval The system displays the frequency at which the health
monitor initiates a server check, in seconds.
Receive Timeout The system displays the maximum amount of time before
the server must return a valid response to the health
monitor, in seconds.
Note The New Health Monitor and Edit Health Monitor windows share the same interface.
To create or edit a health monitor, specify the following details (applicable to active health
monitors of every type):
Field Description
Receive Timeout Specify the maximum amount of time before the server
must return a valid response to the health monitor,
in seconds. The minimum value is 1 second, and the
maximum is the shorter of either 2400 seconds or the
Send Interval value minus 1 second. If the status of
a server continually flips between up and down, this
may indicate that the value for Receive Timeout is too
aggressive for the server.
Type Select the type of health monitor from the drop-down list.
The following are the options available in the drop-down
list:
n DNS Monitor
n External Monitor
n HTTP Monitor
n HTTPS Monitor
n Ping Monitor
n Radius Monitor
n TCP Monitor
n UDP Monitor
n SIP Monitor
Successful Checks Specify the number of consecutive health checks that must
succeed before NSX Advanced Load Balancer marks a
down server as up. The minimum is 1, and the maximum
is 50.
Failed Checks Specify the number of consecutive health checks that must
fail before NSX Advanced Load Balancer marks an up
server as down. The minimum is 1, and the maximum is
50.
Is Federated Check this box to replicate the health monitor across the
federation of Controller clusters. If you uncheck this
box, the health monitor will be visible only within the
Controller cluster and its associated SEs.
Note In NSX Advanced Load Balancer, once the Type field is set and the monitor profile is
created, you cannot amend this field.
For more details on generic field explanations, refer to the Health Monitoring section.
To edit a health monitor, select the relevant check box and click the edit icon.
To create a new DNS health monitor, click the Create button. Select the DNS option from the
drop-down list of the Type field. The following screen is displayed:
You can specify the following details related to DNS request and response settings:
n Request Name — Specify the request name. The DNS monitor will query the DNS server for
the fully qualified name in this field. For instance, www.avinetworks.com.
n Response Matches — Select one of the appropriate response matches. The following are the
options:
n Anything — Any DNS answer from the server will be successful, even an empty answer.
n Any Type — The DNS response must contain at least one non-empty answer.
n Query Type — The response must have at least one answer of which the resource record
type matches the query type.
n Response Code — Select one of the appropriate response codes. The following are the options:
n Anything — The monitor ignores the DNS server’s response code, so errors in it will not
result in a health check failure.
n No Error — The monitor marks the DNS query as failed if any error code is returned by the
server.
n Response String — Specify the IP address. The DNS response must contain this IP address to
be considered successful.
n Record Type — Select the record types used in the health monitor DNS query. The following
are the options:
n A
n AAAA
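As an illustration of the Response String match, where the health check passes only if the expected IP appears in the DNS answer, the following sketch resolves a name and checks the answers. This is not how the Service Engine performs the check (the real monitor queries the pool's DNS server directly); the hostname and IP here are placeholders, with localhost chosen so no external resolver is needed:

```python
import socket

def dns_answer_contains(hostname, expected_ip):
    """Return True if resolving hostname yields the expected IP.

    Illustrative only: mimics the monitor's Response String check,
    but uses the OS resolver rather than querying a pool member.
    """
    try:
        answers = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except socket.gaierror:
        return False  # no answer at all -> health check fails
    return expected_ip in answers

# "localhost" resolves locally, without any external DNS server.
print(dns_answer_contains("localhost", "127.0.0.1"))
```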
The external monitor type allows you to write scripts to provide highly customized and granular
health checks. The scripts can be Linux shell, Python, or Perl, which can be used to execute
wget, netcat, curl, snmpget, mysql-client, or dig. External monitors have constrained access to
resources, such as CPU and memory to ensure the normal functioning of NSX Advanced Load
Balancer Service Engines. As with any custom scripting, thoroughly validate the long-term stability
of the implemented script before pointing it at production servers.
You can view the errors generated from the script in the output by navigating to Operations >
Events log.
NSX Advanced Load Balancer includes three sample scripts via the System-Xternal Perl, Python,
and Shell monitors.
Note NSX Advanced Load Balancer supports IPv6 external health monitors.
n System-Xternal-Perl
n System-Xternal-Python
n System-Xternal-Shell
To create a new External health monitor, click the Create button. Select the External option from
the drop-down list of the Type field. The following screen is displayed:
Field Description
Script Code Specify the script code. You can either upload the script by
clicking on the Upload File option or paste the script code
by clicking the Paste Text option.
Script Parameters Specify the optional arguments to feed into the script.
These strings are passed in as arguments to the script,
such as $1 = server IP, $2 = server port.
Field Description
Health Monitor Port Specify the port to use for the health check instead of
the port defined for the server in the pool. If the monitor
succeeds to this port, the load-balanced traffic will still be
sent to the port of the server defined within the pool.
Script Variables Specify the environment variables to be fed into the script.
For instance, a script that authenticates to the server may
have a variable set to USER=test.
n Best Practice: For busy Service Engines, keep the monitoring interval longer and the receive
timeout larger, since external checks tend to use more system resources than the system
default monitors.
n Receive Timeout: Maximum time before the server must return a valid response to the health
monitor in seconds.
n Successful Checks: Number of consecutive health checks that must succeed before NSX
Advanced Load Balancer marks a down server as being back up.
n Failed Checks: Number of consecutive health checks that must fail before NSX Advanced
Load Balancer marks an up server as being down.
While building an external monitor, you need to manually test the successful execution of the
commands. To test a command from an SE, it may be necessary to switch to the proper
namespace or tenant. The production external monitor will correctly use the proper tenant. To
manually switch tenants when testing a command from the SE CLI, follow the commands in the
following article: Manually Validate Server Health.
n Script Code: Upload the script via copy/paste or uploading the file.
n Script Parameters: Enter any optional arguments to apply. These strings are passed in as
arguments to the script, such as $1 = server IP, $2 = server port.
n Script Variables: Custom environment variables may be fed into the script to allow simplified
re-usability. For instance, a script that authenticates to the server may have a variable set to
USER=test.
n Script Success: If a script exits with any output, the check is considered a success and the
server is marked up. If there is no output from the script, the monitor will mark the server down.
In the SharePoint monitor example below, the script includes a grep "200 OK". If this is found, this
data is returned and the monitor exits as success. If the grep does not find this string, no data is
returned and the monitor marks the server down.
#!/bin/bash
#mysql --host=$IP --user=root --password=s3cret! -e "select 1"
#!/bin/bash
#curl http://$IP:$PORT/Shared%20Documents/10m.dat -I -L --ntlm -u $USER:$PASS > /run/hmuser/$HM_NAME.out 2>/dev/null
curl http://$IP:$PORT/Shared%20Documents/10m.dat -I -L --ntlm -u $USER:$PASS | grep "200 OK"
Example 1:
In this example, the script makes the NSX Advanced Load Balancer SE query the database. On a
successful response, the SE marks the server UP; otherwise, it marks the server DOWN.
#!/bin/bash
#exporting username's password
export PGPASSWORD='password123'
psql -U aviuser -h $IP -p $PORT -d aviuser -c "SELECT * FROM employees"
Example 2:
In this example, the script makes the NSX Advanced Load Balancer SE query the database and
parse the response for the cell at the provided row and column, matching it against the provided
string. If it matches, the server will be marked UP; otherwise, the server will be marked DOWN.
#!/bin/bash
#example script for
#string match to cell present at row,column of query response
row=2
column=2
match_string="bob"
#exporting username's password
export PGPASSWORD='password123'
response="$(psql --field-separator=' ' -t --no-align -U aviuser -h $IP -p $PORT -d aviuser -c
"SELECT * FROM employees")"
str="$(awk -v r="$row" -v c="$column" 'FNR == r {print $c}' <<< "$response")"
if [ "$str" = "$match_string" ]; then
echo "Matched"
fi
The following example performs an Access-Request using PAP authentication against the RADIUS
pool member and checks for an Access-Accept response.
#!/usr/bin/python3
import os
import radius

try:
    r = radius.Radius(os.environ['RAD_SECRET'],
                      os.environ['IP'],
                      port=int(os.environ['PORT']),
                      timeout=int(os.environ['RAD_TIMEOUT']))
    if r.authenticate(os.environ['RAD_USERNAME'], os.environ['RAD_PASSWORD']):
        print('Access Accepted')
except:
    pass
RAD_SECRET, RAD_TIMEOUT, RAD_USERNAME and RAD_PASSWORD can be passed in the health monitor
script variables, for example:
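A hedged illustration of the Script Variables field, following the KEY=VALUE form shown earlier (such as USER=test); the values below are placeholders, not defaults:

```
RAD_SECRET=sharedsecret
RAD_USERNAME=monitor-user
RAD_PASSWORD=monitor-pass
RAD_TIMEOUT=2
```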
Applications like curl have different syntax for v4 and v6 addresses. External health monitor
scripts must handle both syntaxes. The following are examples:
Starting with NSX Advanced Load Balancer 21.1.3, to resolve domain names, DNS Resolution on
Service Engine should be configured.
EXT_HM=exthm.example.com
curl http://$EXT_HM:8123/path/to/resource | grep "200 OK"
#!/bin/bash
#curl -v $IP:$PORT >/run/hmuser/$HM_NAME.$IP.$PORT.out
if [[ $IP =~ : ]];
then curl -v [$IP]:$PORT;
else curl -v $IP:$PORT;
fi
#!/usr/bin/perl -w
my $ip= $ARGV[0];
my $port = $ARGV[1];
my $curl_out;
if ($ip =~ /:/) {
$curl_out = `curl -v "[$ip]":"$port" 2>&1`;
} else {
$curl_out = `curl -v "$ip":"$port" 2>&1`;
}
if (index($curl_out, "200 OK") != -1) {
print "Server is up";
}
List of SE Packages
Scripting Languages
n Perl
n Python
n curl
n snmp
n dnsutils
n libpython2.7
n python-dev
n mysql-client
n nmap
n freetds-dev
n freetds-bin
n ldapsearch
n postgresql-client
n pymssql
n py-radius
#!/usr/bin/perl
# ntpdate.pl
# this code will query a ntp server for the local time and display
# it. it is intended to show how to use a NTP server as a time
# source for a simple network connected device.
#
# For better clock management see the official NTP info at:
# http://www.eecis.udel.edu/~ntp/
#
$HOSTNAME=shift;
$HOSTNAME="192.168.1.254" unless $HOSTNAME ; # our NTP server
$PORTNO=123; # NTP is port 123
$MAXLEN=1024; # check our buffers
use Socket;
# build a message. Our message is all zeros except for a one in the protocol version field
# $msg in binary is 00 001 000 00000000 .... or in C msg[]={010,0,0,0,0,0,0,0,0,...}
#it should be a total of 48 bytes long
$MSG="\01
Note The ntpdate or ntpq programs are not packaged in the Service Engine, and hence cannot
be used at this point in time.
The external Python health monitors should be converted to Python 3.0 syntax as part of the
upgrade procedure.
Before initiating the upgrade to NSX Advanced Load Balancer release 20.1.1, execute the following
steps:
2 Remove the health monitors, or replace them with a non-Python health monitor.
3 Ensure that the health monitor script is modified to Python 3.0 syntax.
1 Replace the existing (Python 2.7) health monitor script with the Python 3 script.
2 Re-apply the health monitor to the required pools, and remove the temporary non-Python
health monitor (if configured).
The following are the two categories of GSLB service health monitoring:
n Control-plane
n Data-plane
Note The health monitor is applicable for GSLB only if the is_federated option is checked in the
health monitor configuration.
SSL Attributes Required for the IMAPS (secure IMAP) monitor. Mandatory for IMAPS (SSL Profile
Attribute).
Note Currently, the IMAP Monitor can be configured only using the CLI.
Example:
The following are the SSL configurations used for the IMAP health monitor:
SSL Profile: Select an existing SSL profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the backend servers.
PKI Profile: Select an existing PKI profile or create a new one, as required. This is used to validate
the SSL certificate presented by the server.
SSL key and certificate: Select an existing SSL Key and Certificate or create a new one, as
required.
The SMTP monitor marks the server up on a successful transfer and down in case of failure. A basic
SMTP health monitor checks whether the server is up or down by sending the EHLO, NOOP, and QUIT commands.
SSL Attributes Required for the SMTPS (secure SMTP) monitor. Mandatory for SMTPS (SSL Profile
Attribute).
Note Currently the SMTP monitor can be configured only using the CLI.
The following are the SSL configurations used for SMTPS health monitor:
n SSL Profile: Select an existing SSL Profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the backend servers.
n PKI profile : Select an existing PKI profile or create a new one, as required. This is used to
validate the SSL certificate presented by the server.
n SSL key and certificate : Select an existing SSL Key and Certificate or create a new one, as
required.
The HTTP health monitor may only be applied to a pool whose virtual service has an HTTP
application profile attached.
To create a new HTTP health monitor, click the Create button. Select the HTTP option from the
drop-down list of the Type field. The following screen is displayed:
n Health Monitor Port — Specify a port to use for the health check instead of the port defined
for the server in the pool. If the monitor succeeds to this port, the load-balanced traffic will
still be sent to the port of the server defined within the pool.
n Client Request Data — Specify the client request data in the USER INPUT field to send an
HTTP request to the server. The converted data will be displayed in the CONVERTED VALUE
PREVIEW field.
The default GET / HTTP/1.0 may be extended with additional headers or information. For
instance, GET /index.htm HTTP/1.1 Host: www.site.com Connection: Close.
n Use Exact Request — Specify the exact http_request string without any automatic insertion of
headers such as the Host header.
The system automatically adds three default headers in addition to any user-specified headers
as follows where hostname is automatically derived from each pool member's configuration:
Header Values
User-Agent avi/1.0\r\n
Host <hostname>\r\n
Accept */*;\r\n\r\n
In some situations, it may be necessary to override these default headers, for instance, to
configure a specific Host header value for all servers.
To allow full control over the exact request that is sent, the exact_http_request (CLI) or Use
Exact Request (GUI) option should be enabled. This option prevents the addition of these default
headers. Ensure that all mandatory and required headers are explicitly configured.
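For instance, with Use Exact Request enabled, the Client Request Data value might look like the following (the path and host name are placeholders); because the default headers are suppressed, headers such as Host must be written out explicitly:

```
GET /healthcheck HTTP/1.1
Host: app.example.com
Accept: */*
Connection: close
```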
n Server Response Data — Specify the snippet of content in the USER INPUT field from the
server’s HTTP response by copying and pasting text from either the source HTML or the web
page of the server. NSX Advanced Load Balancer inspects raw HTML data and not rendered
web pages. For instance, NSX Advanced Load Balancer does not follow HTTP redirects and
will compare the redirect response with the defined Server Response string, while a browser
will show the redirected page. The Server Response content is matched against the first
2KB of data returned from the server, including both headers and content/body. The Server
Response data can also be used to search for a specific response code, such as 200 OK.
When both Response Code and Server Response Data are populated, both must be true for
the health check to pass.
n Response Code — Select the HTTP response codes to match as successful from the drop-
down list. The list displays the following values:
n 1XX
n 2XX
n 3XX
n 4XX
n 5XX
n ANY
A successful HTTP monitor requires either the Response Code, the Server Response Data, or
both fields to be populated. The Response Code expects the server to return a response code
within the specified range. For a GET request, a server should usually return a 200, 301, or 302.
For a HEAD request, the server will typically return a 304. A response code by itself does not
validate the server’s response content, just the status.
You can use a custom server response to mark a server as disabled. During this time, health
checks will continue, and servers will operate the same as if they were manually disabled: existing
client flows are allowed to continue, but new flows are sent to other available servers.
Once a server stops responding with the maintenance string, it will be brought online, being
marked up or down as it normally would be based on the server response data.
This feature allows an application owner to gracefully bleed connections from a server prior to
taking the server offline without the requirement to log into NSX Advanced Load Balancer to first
place the server in the disabled state.
n Maintenance Response Code — Specify the maintenance response code. If the defined HTTP
response code is seen in the server response, place the server in maintenance mode. Multiple
response codes may be used via comma separation. A successful match results in the server
being marked down.
n Maintenance Server Response Data — Specify the maintenance server response data. If
the defined string is seen in the server response, place the server in maintenance mode. A
successful match results in the server being marked down.
Example
The following is a sample server response to an HTTP health check:
HTTP/1.0 200 OK
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: text/plain
Content-Length: 15
Date: Fri, 20 May 2016 18:23:05 GMT
Connection: close
Health Check Ok
Notice that NSX Advanced Load Balancer automatically includes additional headers in the send
string, including User-Agent, Host, and Accept to ensure that the server receives a fully formed
request.
By default, NSX Advanced Load Balancer appends additional HTTP headers (Host, User-Agent
and Accept) into HTTP health monitor requests.
Header Values
User-Agent avi/1.0\r\n
Host <hostname>\r\n
Accept */*;\r\n\r\n
For instance, if an NSX Advanced Load Balancer admin (user) adds a Host header in the HTTP
client request data field of a health monitor, NSX Advanced Load Balancer will send this additional
Host header together with the existing Host header (the Host header inserted by NSX Advanced
Load Balancer).
You can prevent this additional Host header from being added. The Use Exact Request option
in the NSX Advanced Load Balancer UI, or the use_exact_request flag in the NSX Advanced
Load Balancer CLI, instructs NSX Advanced Load Balancer to pass the exact HTTP request string
as specified by the NSX Advanced Load Balancer admin (user), without any automatic insertion
of the additional HTTP headers. The user is then responsible for adding the appropriate headers
to the HTTP client request field.
Configuration from NSX Advanced Load Balancer CLI
Log in to the NSX Advanced Load Balancer CLI, and use the configure healthmonitor System-HTTP
command to change the value of the exact-http-request flag.
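A sketch of the CLI flow under the standard NSX Advanced Load Balancer shell conventions; the sub-object and flag names below should be verified against your release before use:

```
configure healthmonitor System-HTTP
  http_monitor
    exact_http_request
  save
save
```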
2 Choose the desired HTTP health monitor and click edit icon.
4 Click Save.
The HTTPS monitor type can be used to validate the health of HTTPS encrypted web servers. Use
this monitor when NSX Advanced Load Balancer is either passing SSL encrypted traffic directly
from clients to servers, or NSX Advanced Load Balancer is providing SSL encryption between itself
and the servers.
To create a new HTTPS health monitor, click the Create button. Select the HTTPS option from the
drop-down list of the Type field. The following screen is displayed:
n SSL Attributes — Check this box to specify SSL attributes for HTTPS health monitor. The
system allows SSL encrypted traffic to pass to servers without decrypting in the load balancer
(the SE).
n TLS SNI Server Name — Specify a fully qualified DNS hostname that will be used in the TLS
SNI extension in server connections indicating that SNI is enabled. If you do not specify any
value, the system will inherit the value from the pool.
n SSL Profile — Select the SSL profile from the drop-down list. SSL profile defines ciphers and
SSL versions to be used for health monitor traffic to the back-end servers. The following are
the options in the drop-down list:
n System Standard
n PKI Profile — Select the PKI profile from the drop-down list. PKI profile is used to validate the
SSL certificate presented by a server.
n SSL Key and Certificate — Select SSL key and certification options from the drop-down list.
Service engines will present this SSL certificate to the server. The following are the options in
the drop-down list:
For more details on other fields in the HTTPS section, refer to Configuring HTTP Health Monitor
section in this guide.
To create a new ping health monitor, click the Create button. Select the Ping option from the
drop-down list of the Type field. The following screen is displayed:
NSX Advanced Load Balancer Service Engines will send an ICMP ping to the server. This monitor
type is generally very fast and lightweight for both Service Engines and the server. However, it
is not uncommon for ping to drop a packet and fail, so ensure that the Failed Checks field is set
to 2. This monitor type does not test the health of the application, so it generally works best when
applied in conjunction with an application-specific monitor for the pool.
Note ICMP rate limiters can prevent Service Engines from aggressively health checking a server
via ping. This may be caused by an intermediate network firewall or rate limits set up on the server
itself.
For Remote Authentication Dial-In User Service (RADIUS) applications, you can monitor the server
health using the RADIUS request and response. You can generate the RADIUS requests using the
password, username, and secret. The server status will be marked Up only if the RADIUS response
is Access-Accept or Access-Challenge. Otherwise, the server will be marked Down.
To create a new RADIUS health monitor, click the Create button. Select the RADIUS option from
the drop-down list of the Type field. The following screen is displayed:
n Username — Specify the user name. RADIUS monitor will query the RADIUS server with this
username.
n Password — Specify the password. RADIUS monitor will query the RADIUS server with this
password.
n Shared Secret — Specify the shared secret. RADIUS monitor will query the RADIUS server
with this shared secret.
For SIP applications, the server health is monitored using the SIP request code and response.
Currently, only the SIP OPTIONS request is supported as the request code. The monitor greps
for the configured response string in the response payload. If a valid response is not received
from the server within the configured timeout, the server status is marked down.
To create a new SIP health monitor, click the Create button. Select the SIP option from the
drop-down list of the Type field. The following screen is displayed:
n SIP Request Code — Select the SIP request code to be sent to the server from the drop-down
list. By default, a SIP options request will be sent.
n SIP Monitor Transport — Select the SIP monitor transport protocol from the drop-down list, to
be used for the SIP health monitor. The following are the options in the drop-down list:
n UDP
n TCP
n SIP Response — Match for a keyword in the first 2KB of the server header and body response.
By default, it matches SIP/2.0.
For any TCP application, this monitor will wait for the TCP connection establishment, send the
request string, and then wait for the server to respond with the expected content. If no client
request and server response are configured, the health check will pass once a TCP connection is
successfully established.
To create a new TCP health monitor, click the Create button. Select the TCP option from the
drop-down list of the Type field. The following screen is displayed:
n Health Monitor Port — Specify a port that should be used for the health check. If the monitor
succeeds to this port, the load-balanced traffic will still be sent to the port of the server
defined within the pool. If you do not specify any value, then the system uses the default port
configured for the server.
n Client Request Data — Specify the request data to send after completing the TCP handshake
in the USER INPUT field. The converted data will be displayed in the CONVERTED VALUE
PREVIEW field.
n Half-Open (Close connection before completion) — If you check this box, the monitor sends
a SYN. Upon receipt of an ACK, the server is marked up and the Service Engine responds
with a RST. Since the TCP handshake is never fully completed, the system does not validate
application health. This monitor option is intended for applications that do not gracefully
handle quick termination. Because the handshake is not completed, the application is not
touched, no application logs are generated, and no application resources are wasted setting
up the connection from the health monitor.
n Configure TCP health monitor to use half-open TCP connections to monitor the health of
backend servers thereby avoiding consumption of a full-fledged server-side connection and
the overhead and logs associated with it. This method is lightweight as it makes use of a
listener in the server's kernel layer to measure the health and a child socket or user thread is
not created on the server-side.
n Server Response Data — Specify the expected response from the server in the USER INPUT
field. NSX Advanced Load Balancer checks to see if the Server Response data is contained
within the first 2KB of data returned from the server. The converted data will be displayed in
the CONVERTED VALUE PREVIEW field.
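The TCP monitor's behavior described above can be sketched as follows. This is an illustration, not Service Engine code: it completes the handshake, optionally sends the request data, and searches the first 2KB of the reply for the expected response string. The tiny local server exists only to make the example self-contained; host, port, and payloads are placeholders:

```python
import socket
import threading

def tcp_health_check(host, port, request=b"", expected=b"", timeout=4):
    """Mimic the TCP monitor: connect, send optional request data,
    and look for the expected string in the first 2KB of the reply."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            if not request and not expected:
                return True  # a completed handshake alone passes the check
            if request:
                sock.sendall(request)
            data = sock.recv(2048)  # only the first 2KB is inspected
            return expected in data
    except OSError:
        return False  # refused, timed out, or reset -> check fails

# Stand-in backend server so the example runs without real infrastructure.
def fake_server(srv):
    conn, _ = srv.accept()
    conn.recv(1024)
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nHealth Check Ok")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=fake_server, args=(srv,), daemon=True).start()

result = tcp_health_check("127.0.0.1", port,
                          request=b"GET / HTTP/1.0\r\n\r\n",
                          expected=b"200 OK")
print(result)
```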
You can send a UDP datagram to the server, and then match the server’s response against the
expected response data.
The default System-UDP health monitor detects a failure only when ICMP unreachable is received.
It will keep the server UP until it receives ICMP unreachable for the defined UDP port. Hence, it
does not detect the failure:
n If the UDP health monitor request gets dropped or blackholed before reaching the server.
To create a new UDP health monitor, click the Create button. Select the UDP option from the
drop-down list of the Type field. The following screen is displayed:
For field explanation in the UDP section, refer to Configuring TCP Health Monitor section in this
guide.
The POP3 (Post Office Protocol version 3) health monitor is used to monitor POP services. It
issues the LIST command to get the messages present in the mailbox, after executing CAPA
(capabilities) and verifying the user using the username and password. On successful completion
of these commands, the POP3 monitor will mark the server UP; otherwise, it will mark the server DOWN.
SSL Attributes Required for the POP3S (secure POP3) monitor. Mandatory for POP3S.
n SSL Profile: Select an existing SSL Profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the backend servers.
n PKI Profile: Select an existing PKI profile or create a new one, as required. This will be used
to validate the SSL certificate presented by the server.
n SSL Key and Certificate: Select an existing SSL Key and Certificate or create a new one, as
required.
The FTP/FTPS health monitor checks the health of the FTP servers configured as pool members.
A file will be downloaded from the server. On successful download, the server is marked as UP. If
the file transfer fails, the server is marked DOWN.
Field Description
Password Enter the password for the user account if the FTP server
requires authentication.
Note Currently FTP/FTPS health monitor can be configured only using the CLI.
Note Ensure that an SSL Profile exists before configuring the FTPS health monitor.
The following are the SSL configurations that can be used for FTPS health monitor:
n SSL Profile: Select an existing SSL Profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the backend servers.
n PKI Profile: Select an existing PKI profile or create a new one, as required. This will be used
to validate the SSL certificate presented by the server.
n SSL Key and Certificate: Select an existing SSL Key and Certificate or create a new one, as
required.
SSL Attributes Enter SSL attributes in the case of the LDAPS health monitor. Mandatory for the
LDAPS health monitor.
Note Currently, LDAP/LDAPS health monitor can be configured only using the CLI.
The following are the SSL configurations that can be used for the LDAPS health monitor:
n SSL Profile - Select an existing SSL Profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the back end servers.
n PKI profile - Select an existing PKI profile or create a new one, as required. This will be used to
validate the SSL certificate presented by the server.
n SSL key and certificate - Select an existing SSL Key and Certificate or create a new one, as
required.
Note
n When attributes are configured, the SE will match the configured attributes in the server
response data. When a match is not found, it marks the server down.
n To reduce resource consumption, configure a specific base_dn that has a small number of
entries, with base scope, so that the server response data is not large.
n Health Monitoring
n Multi-Pool — A server that exists in multiple pools will receive health checks for each pool it
has membership within. If the pools are on the same Service Engine and configured with the
same health monitor, then the system will not perform redundant monitoring.
n Disabled — Health checks are not performed for disabled servers, servers within a pool that
are not assigned to a VS, or attached to a disabled virtual service.
n Scaled SEs — When scaling out a virtual service across multiple Service Engines, the servers
will receive active health checks from each SE for the virtual service. If one SE marks a server
as up, it will be included in the load balancing. If a second SE is unable to access the server, it
will mark it down and not send traffic to that server. From the Controller UI, the server health
icon can flip intermittently between red and green (or other colors). The status flipping is due
to the frequency at which SEs report their status to the Controller.
n SNAT IP — If a SNAT IP is configured for a virtual service (see virtual service), the active SE will
send monitors from the SNAT IP address. If a SNAT IP is not configured, the active SE initiates
monitors from its interface IP. The standby SE will always send monitors from its interface IP.
n Standby SE — By default, the standby SE will send health checks. This behavior can be
changed from the CLI for the Service Engine Group of the SE.
n Send Interval — By default, NSX Advanced Load Balancer sends checks based on the frequency defined by a monitor's Send Interval timer. However, if you add a new health monitor or a new server to a pool, or if a positive monitor response is received after a server has been marked down for a long time, NSX Advanced Load Balancer quickly sends additional checks. For instance, if a new server is added to a pool with a monitor set to query every 20 seconds and to require 3 consecutive positive responses, the server would not be marked up for nearly one minute. Instead, when the new server is added to the pool, NSX Advanced Load Balancer sends the first 3 checks to the server immediately. If the server responds, it can be marked up within one or two seconds. The system performs the subsequent checks at the interval specified by the Send Interval setting of the health monitor.
n If active monitoring is needed, but the ports to be monitored are not explicitly defined, NSX Advanced Load Balancer infers them from the defined server ports (on a per-server basis).
n To monitor additional or different ports explicitly, add a health monitor for each port on the servers that need to be monitored.
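As a sketch, a monitor pinned to a specific port could be configured from the CLI as follows; the monitor name and port number are illustrative:

```
> configure healthmonitor hm-tcp-8443
> type health_monitor_tcp
> monitor_port 8443
> save
```

When monitor_port is set, the check targets that port regardless of the server's configured port.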
Using GUI
From the GUI, the following are the ways to check the status of a server:
n Navigate to the Pool > Server page and click the Failed Monitor entry in the health monitor table to expand the results.
n Check the events of the virtual service and the pool for record status changes and their reasons.
For more details, refer to the Reasons Servers Can Be Marked Down section.
CLI                Description
debug_vs_hm_none   (Default) Omits health monitor packets from the capture
n The system inspects the content returned from servers and compares it to the monitor's Server Response Data. The comparison is case sensitive.
n Most monitors inspect only the first 2 KB of the server response, which includes both headers and content. If the expected string appears beyond that limit, the server is marked down.
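As an illustration of that matching behavior, the check is equivalent to a case-sensitive search within the first 2048 bytes of the response; the response body and the "server is good" string below are examples, not real monitor defaults:

```shell
# Hypothetical server response; a real monitor reads this off the wire.
resp='HTTP/1.1 200 OK
Content-Type: text/html

server is good'

# The monitor inspects only the first 2 KB, case-sensitively.
head_resp=$(printf '%s' "$resp" | head -c 2048)
case "$head_resp" in
  *'server is good'*) status=UP ;;   # match found: server stays up
  *) status=DOWN ;;                  # no match: server marked down
esac
echo "$status"
```

Note that a differently cased string such as "Server Is Good" would not match.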
n Duplicate IP is one of the most common issues causing intermittent failures of health checks.
Passive
The system triggers the passive monitor in the event of a significant error, which automatically generates logs for the virtual service. When drilling into a server page, the passive monitor can show less than 100%. You can view the virtual service logs by filtering for the server in question, and then click the Significance tile in the Log Analytics sidebar.
You can check if failures are occurring and increasing over time using the following CLI:
Ping
Some devices, including servers and firewalls, restrict the frequency of ICMP messages and can silently discard them. In such cases, lower the health-check frequency by increasing the monitor's Send Interval.
HTTP
You need to send the exact request headers in the send string to the servers. For instance, a space in the Host header value (such as Host: Avi Server) can cause issues for IIS. The HTTP monitor adds a few headers to emulate a valid request. To omit these extra headers, you can use a TCP monitor, which sends exactly the string defined in the Client Request Data field. If you are using a TCP monitor, ensure that you add \r\n characters for the carriage return line feed.
NSX Advanced Load Balancer includes \r\n at the end of each line of the request. HTTP 1.0 requires a second \r\n to be sent after the last line.
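The carriage-return requirement can be seen by building such a request with printf; the path and host below are examples. Each line ends with \r\n, and the request ends with an extra \r\n:

```shell
# Build the monitor request with explicit CRLF line endings.
printf 'GET /index.html HTTP/1.0\r\nHost: app.example.com\r\n\r\n' > /tmp/hm_request
# In a real check, this payload would be sent over TCP, for example:
#   nc server.example.com 80 < /tmp/hm_request
# od -c renders each CRLF pair as \r \n for inspection.
od -c /tmp/hm_request
```

Forgetting the final blank line is a common reason HTTP/1.0 servers never answer a TCP-monitor request.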
For HTTP/S, NSX Advanced Load Balancer does not render the results but inspects them literally. For instance, a server can send a 302 redirect back to NSX Advanced Load Balancer that does not include the expected string server is good. A browser would follow the redirect and display the page with the correct content, but the health check fails. The URI encoding of content can also cause an HTTP/S response check to fail.
External
External health monitors run as the lower-privileged hmuser user. To verify, attach to a Service Engine, log in as root, and switch to hmuser:
root@test-se2:~# su - hmuser
hmuser@10-10-25-28:~$ pwd
/run/hmuser
To overcome this situation and mark the server or the virtual service down, you can tune the ICMP rate limit configuration.
If ICMP unreachable messages are dropped in high-scale cases due to the ICMP unreachable rate limiter, you can confirm the occurrence of this issue using the following command:
| icmp_rx_rl_cfg_pps | 100 |
| icmp_rx_rl_confirming | 30 |
| icmp_rx_rl_drops | 0 |
Pool Configurations
n Use min_servers_up to specify the minimum number of servers required to be UP for the
pool’s health to be marked as available. If this parameter is not defined, the pool is marked as
available as long as at least one server is UP.
For example, suppose min_servers_up is set to 3 and two of the pool's servers are marked DOWN, leaving fewer than three servers UP. The minimum threshold is not met, so the pool is marked DOWN and is not available to any virtual service referencing it.
Note If the minimum threshold parameters are not defined, NSX Advanced Load Balancer retains
the default behavior.
In some scenarios, multiple services on a back-end server are monitored using separate monitors, for example, a GET for /foo.html and a GET for /bar.html, and the server should be marked UP if either monitor succeeds.
In such cases, define the parameter min_health_monitors_up to specify the minimum number of health monitors that must succeed for the corresponding server to be marked UP. If this parameter is not defined, the server is marked UP only if all the health monitors are successful.
Minimum Servers
NSX Advanced Load Balancer marks a pool as UP when at least one of the servers in that pool is UP. If, for instance, at least two servers must be UP before the pool is considered UP, use the option min_servers_up to specify the number of servers that must be UP for the pool to be marked UP. If this parameter is not defined, the pool is marked as available as long as at least one server is UP.
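As a sketch, both thresholds described above can be set on the pool from the CLI; the pool name and values are illustrative:

```
> configure pool p1
> min_servers_up 3
> min_health_monitors_up 1
> save
```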
The reason a server is marked down can be accessed in the following three different ways:
n Down Health Score Icon — Hover the mouse over a server's red status icon in the UI.
n Down Event — Navigate to the events for the server, the pool, and the virtual service. Expand
the event to see the full details. This information can be used to automatically generate an alert
and potentially make further system changes. Refer to alerts overview for more information.
n Server Page — Navigate to Applications > Pools > pool-name > Servers > server-name. This
displays the analytics page for the server.
Note The Passive monitor is a special type. A passive monitor will not mark a server down.
Instead, if a passive monitor detects bad server-to-client responses, the monitor reduces the
percentage of traffic load balanced to that server. Click the plus sign next to the health monitor
to show additional information regarding the server's health status.
n ARP Unresolved — The Service Engine is unable to resolve the MAC address of the server's IP address (when in the same layer 2 domain) or is unable to initiate a TCP connection (when the server is a layer 3 hop away).
n Payload Mismatch — The health monitor expects specific content to be returned in the body of the response (HTTP or TCP), and the server's response does not contain it; the event shows an excerpt of the server's response. Often this type of error occurs when a server's first response is to send a redirect: the expected content appears in the client browser, but from NSX Advanced Load Balancer's perspective, the client received only a redirect.
n Response Code Mismatch — HTTP health checks can be configured to expect a specific
response code, such as 2xx. Meanwhile, the server can be sending back a different code, such
as 404.
n Response Timeout with a Threshold Violation — Health monitors wait a timeout period for a response, and each health monitor can be assigned its own threshold and timeout period. If a valid response is not received within the timeout period for N consecutive checks, where N equals the threshold, the server is marked down.
While NSX Advanced Load Balancer is engineered for easy troubleshooting, more advanced tools are sometimes required. You can capture a trace of the conversation between the SE and the server by navigating to Operations > Traffic Capture.
You can use tools such as ping and curl from a client machine to the server. However, these tools are not reliable when executed by administrators from the SEs themselves, due to the dual network stacks used for the data plane and management. For instance, a tool such as ping executed from Linux uses the SE management IP and network, so the results can differ from those of the health checks the SE sends through its data NICs and networks. Use ping -I <interface> to verify the interface used.
External health monitor on NSX Advanced Load Balancer uses scripts to provide highly
customized and granular health checks. The scripts may be Linux shell, Python, or Perl, which
can be used to execute wget, netcat, curl, snmpget, etc.
Troubleshooting Steps
The directory structure of NSX Advanced Load Balancer is not exposed in the NSX Advanced
Load Balancer UI. This is available only through the admin shell/console access. External health
monitor scripts have limited access, so as to not affect the normal functioning of the NSX
Advanced Load Balancer system. CPU, memory, disk, and other resources are limited for the
external health monitor scripts. Hence, it is recommended to have relaxed timeouts for external
health monitors.
To attach to an NSX Advanced Load Balancer SE using NSX Advanced Load Balancer CLI, refer to
SSH Access for Super User.
For more information on the script parameters, refer to External Health Monitors.
If the external health monitor script writes any output to stdout, the health check is treated as successful. If the script does not produce any output, the check is treated as a failure.
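As an illustration, a minimal external monitor script could look like the following; the /healthz path, the expected OK body, and the timeout are assumptions about the application, while IP and PORT are among the variables the SE passes to the script:

```shell
#!/bin/bash
# Minimal external health monitor sketch. The SE exports variables
# such as IP and PORT for the server being checked.
body=$(curl -s --max-time 4 "http://${IP}:${PORT}/healthz" 2>/dev/null)
if [ "$body" = "OK" ]; then
    # Any output on stdout causes the SE to treat the check as successful.
    echo "server ${IP}:${PORT} is up"
fi
# No stdout output: the SE treats the check as a failure.
```

When attached to a pool, the SE invokes the script at each send interval and applies the stdout rule described above.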
Troubleshooting Examples:
The netcat command writes its status output to stderr, while the grep command operates on stdout. Redirect stderr to stdout (2>&1) if the script needs to match netcat's output.
n EINTR, ETIMEDOUT: Connection Timeout. (Generated by NSX Advanced Load Balancer infra
upon script timeout)
Note
n The script can write an error to $HM_NAME.$IP.$PORT.out, and this output will be available in
the above command’s output, to aid debugging. This works only when the external health
monitor debugging is enabled.
n To troubleshoot the script, a superuser can log in to the Service Engine console with root privileges, switch to the health-monitor user with su - hmuser, and run the script stored in the /run/hmuser directory.
n Although you can modify the script on the Service Engine for troubleshooting, this change
is temporary. Once the Service Engine restarts or you modify the pool/health monitor, the
changes will be lost. The correct way to modify the health monitor configuration is from the
NSX Advanced Load Balancer UI/CLI/API.
Packet Capture
External health monitor packets are not captured using the option available under Operations > Packet Capture. Use the tcpdump command with filter options from the shell prompt of the Service Engine:
tcpdump -i <avi_ethX>
The output of the above command shows the external health monitor traffic.
For more details on SSH Key-based Login to NSX Advanced Load Balancer Controller, refer to
SSH Key-based Login to NSX Advanced Load Balancer Controller.
To validate if a server is flapping, you need to check the specific server's analytics page within the
pool. You can enable the Alerts and System Events Overlay icons for the main chart. This will
show server up and down events over the time period selected. The page also displays the list of
failed health monitors.
Compare the response times from the server to the health monitor's configured receive timeout window. If the failures can be attributed to these timers, the following steps can help rectify the issue:
n Add additional servers — This will not help if the slowdown is due to a backend database, but
for servers that are simply busy or overloaded, this can be a quick and permanent fix.
n Increase the health monitor's receive timeout window — The timeout value can be 1-300
seconds. The timeout value must always be shorter than the send interval for the health
monitor.
n Raise the number of successful checks required, and decrease the number of failed checks
allowed — This will ensure the server is not brought back into the rotation as quickly,
potentially giving it more time to handle the processes that are causing the slow response.
n Change the connection ramp-up (if using the least connections load-balancing algorithm) — Servers can receive too many connections too fast when first brought up. For instance, if one server has 1 connection and the rest have 100 connections, then as per the least connections algorithm, the new server would receive the next 99 connections. This can easily overwhelm the server, leaving a flash crowd of connections that must be dealt with by the remaining servers and causing a domino effect. You can configure the connection ramp-up feature on the Advanced tab of the pool's configuration. The feature slowly ramps up the percentage of new connections sent to a new server. Increasing the ramp-up time can be beneficial if you are seeing a cascading failure of servers.
n Set the maximum number of connections per server — This option, configurable on the
Advanced tab of the pool configuration, ensures that servers are not overloaded and can
handle connections at optimal speed.
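Several of the adjustments above map to health monitor and pool fields that can be set from the CLI. The following is a sketch; object names and values are illustrative, and connection_ramp_duration is expressed in minutes:

```
> configure healthmonitor http-hm
> receive_timeout 8
> send_interval 20
> successful_checks 3
> failed_checks 2
> save

> configure pool p1
> connection_ramp_duration 10
> max_concurrent_connections_per_server 1000
> save
```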
SEs have multiple network stacks, one for the control plane which uses Linux, and a second for the
data plane. Simply logging into an SE and pinging a server will go out the management port and
IP address, which can route through a different infrastructure than the SE data plane.
Prerequisites
The following are the prerequisites to validate server health.
1 Determine the IP address of the Service Engine hosting the virtual service.
2 If there are multiple SEs, find the vrf-id on the specific SE:
admin@10-10-25-28:~$ ip netns
In this case, the vrf_id is 2 and the namespace is avi_ns2. This information can also be obtained using the following CLI command:
Curl — To validate server health using curl:
root@test-se2:~# sudo ip netns exec avi_ns1 curl 10.90.15.62:8000
Welcome - Served from port 80!
Note This step is not necessary when the SE is deployed in a Docker or bare-metal setup, where the Docker container itself runs in a namespace.
Administrators and application developers can use information in the health-check responses from
servers to detect if a server is in maintenance mode.
The information can be a specific response code, for instance, HTTP code 503, or a specific
response message string, for instance, "Server is under maintenance". Such an event is
operationally different from a case where the server process is down due to a software issue.
During the time a server is under maintenance, you should not send new connections to the server
and should drain the existing connections.
n Response code — You can configure HTTP and HTTPS health monitors to filter for a specific
HTTP response code (101-599). If the code is detected in a server's response to a health check
based on the HTTP or HTTPS monitor, NSX Advanced Load Balancer changes the server's
status to down for maintenance.
n Response data — You can configure TCP, UDP, HTTP, and HTTPS health monitors to filter for specific data (a response string). If the string is detected in a server's response to a health check, NSX Advanced Load Balancer changes the server's status to down for maintenance. The string must appear within the first 2000 bytes of the response.
An HTTP or HTTPS health monitor can filter for up to 4 maintenance response codes.
The HTTP and HTTPS health monitors can contain any of the following combinations of filters for detecting a maintenance mode:
n Response code
n Response string
n Response code and response string
TCP and UDP health monitors can contain a filter for maintenance mode based on the following:
n Response string
When a server is marked down for maintenance, the existing connections to the server are left untouched and are allowed to close on their own. NSX Advanced Load Balancer continues to send health checks to the server. When the server stops responding with the maintenance string or code, NSX Advanced Load Balancer concludes that the maintenance mode has ended and changes the server's health status to up.
Similarly, the server's change into and back out of maintenance mode is indicated in the event log.
The following are the steps to configure the web interface to detect server maintenance mode:
1 Navigate to the health monitor list, and either click the edit icon next to the name of an existing health monitor or click the Create button to create a new one. Specify a name and select the monitor type, such as TCP or UDP for layer 4, or HTTP or HTTPS for layer 7.
2 In the Server Maintenance Mode section, specify the response code(s) or data to use as the indicator that a server is in maintenance mode.
3 Click Save.
4 In the pool configuration, select the monitor by clicking the Add Active Monitor button. The drop-down list displays the list of health monitors.
CLI
The following commands configure an HTTP health monitor to filter health-check responses from servers for the string under construction, and to filter for response codes 500 and 501. The monitor's configuration can later be edited to remove the response-string filter.
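A sketch of what those commands could look like; the monitor name is a placeholder:

```
> configure healthmonitor http-hm
> http_monitor
> maintenance_response "under construction"
> maintenance_code 500
> maintenance_code 501
> save
> save
```

The response-string filter can later be removed with no maintenance_response in the same http_monitor submode.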
The following is sample show output for an HTTPS health monitor configured with NTLM authentication:

+-------------------------+----------------------------------------------------------------+
| Field                   | Value                                                          |
+-------------------------+----------------------------------------------------------------+
| uuid                    | healthmonitor-b8b7cd94-7076-4a55-a90a-77d6e768f4b1             |
| name                    | NTLM                                                           |
| send_interval           | 10 sec                                                         |
| receive_timeout         | 4 sec                                                          |
| successful_checks       | 2                                                              |
| failed_checks           | 2                                                              |
| type                    | HEALTH_MONITOR_HTTPS                                           |
| https_monitor           |                                                                |
|   http_request          | POST /EWS/Exchange.asmx HTTP/1.1                               |
|                         | Content-Type: text/xml; charset=utf-8                          |
|   http_response_code[1] | HTTP_2XX                                                       |
|   http_response         | GetFolderResponseMessage ResponseClass="Success"               |
|   ssl_attributes        |                                                                |
|     ssl_profile_ref     | System-Standard                                                |
|   exact_http_request    | False                                                          |
|   auth_type             | AUTH_NTLM                                                      |
|   http_request_body     | <?xml version="1.0" encoding="UTF-8"?>                         |
|                         | <soap:Envelope                                                 |
|                         | xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"         |
|                         | xmlns:t="http://schemas.microsoft.com/exchange/services/2006/  |
|                         | types"                                                         |
|                         | xmlns:m="http://schemas.microsoft.com/exchange/services/2006/  |
|                         | messages">                                                     |
|                         | <soap:Header></soap:Header><soap:Body>                         |
|                         | <GetFolder xmlns="http://schemas.microsoft.com/exchange/       |
|                         | services/2006/messages">                                       |
|                         | <FolderShape><t:BaseShape>IdOnly</t:BaseShape>                 |
|                         | </FolderShape><FolderIds>                                      |
|                         | <t:DistinguishedFolderId Id="inbox"> <t:Mailbox>               |
|                         | <t:EmailAddress></t:EmailAddress> </t:Mailbox>                 |
|                         | </t:DistinguishedFolderId></FolderIds>                         |
|                         | </GetFolder></soap:Body></soap:Envelope>                       |
|   authentication        |                                                                |
|     username            | <sensitive>                                                    |
|     password            | <sensitive>                                                    |
| is_federated            | False                                                          |
| tenant_ref              | admin                                                          |
+-------------------------+----------------------------------------------------------------+
Note You can configure NTLM authentication for the GET method, and for an HTTP health monitor, in a similar way.
The following is a configuration example of basic authentication for the GET method on an HTTP health monitor:
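A sketch of what such a configuration could look like; the monitor name and credentials are placeholders:

```
> configure healthmonitor basic-auth-hm
> type health_monitor_http
> authentication
> username monuser
> password <password>
> save
> http_monitor
> auth_type auth_basic
> save
> save
```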
You can configure basic authentication for the POST method, or enable basic authentication in an HTTPS health monitor, in a similar way.
Note
n Enabling authentication is available for HTTP and HTTPS monitors only.
n You cannot configure exact_http_request for HTTP(S) health monitors using NTLM
authentication.
n VRFs
n Routing
n BGP
n Per-App SE Mode
n SE Memory Consumption
n Preserve Client IP
VRFs
This section covers the following topics:
n Change VRF Context Setting for NSX Advanced Load Balancer SE's Management Network
The following packet-processing functions are part of the SE data path:
n Terminate SSL
[Figure: SE system logical architecture — the Controller cluster (SE Mgr, VS Mgr, Metrics Mgr, Log Mgr, and datastore) manages the Service Engine, whose data path consists of dispatcher, flow table, and (v)NIC components.]
The following are the functions of each component in the SE system logical architecture:
Work Processes
The SE runs three types of worker processes: SE-DP, SE-Agent, and SE-Log-Agent. An SE-DP instance runs in one of the following roles:
n Proxy-alone — Performs full TCP/IP and L4/L7 processing, applying the policies defined for each application/virtual service.
n Dispatcher-alone — Processes Rx of the (v)NIC and distributes flows across the proxy services through per-proxy lock-less RxQs, based on the current load of each proxy service. The dispatcher also manages the transmission of packets through the NIC.
SE-Agent — Acts as the configuration and metrics agent for the Controller. It can run on any available core.
SE-Log-Agent — Maintains a queue for logs. It batches the logs from all SE processes and sends them to the log manager in the Controller.
Flow-Table
This table stores relevant information about flows and maintains the flow-to-proxy-service mapping.
Based on the resources available, the service engine configures an optimum number of
dispatchers. You can override this by using Service Engine group properties. There are multiple
dispatching schemes supported based on the ownership and usage of Network Interface Cards
(NICs):
n Multi-queue configuration where all dispatcher cores poll one or more NIC queue pairs, but
with mutually exclusive se_dp to queue pair mapping.
The remaining instances act as proxies. The combination of NICs and dispatchers determines the packets per second (PPS) that an SE can handle. The CPU speed determines the maximum data plane performance (CPS/RPS/TPS/throughput) of a single core, and performance scales linearly with the number of cores in an SE. You can dynamically increase the SE's proxy capacity without a reboot. A subset of the se_dp processes actively handles traffic flows; the remaining se_dp processes are not selected to handle new flows. All the dispatcher cores are also selected from this active subset of processes.
The active number of se_dp processes can be specified using SE group property max_num_se_dps.
As a run-time property, it can be increased without a reboot. However, if the number is
decreased, it will not take effect until after the SE is rebooted.
[admin:ctr2]: serviceenginegroup> max_num_se_dps
INTEGER 1-128 Configures the maximum number of se_dp processes that handles traffic. If not configured, defaults to the number of CPUs on the SE.
[admin:ctr2]: serviceenginegroup> max_num_se_dps 2
[admin:ctr2]: serviceenginegroup> where | grep max_num
| max_num_se_dps | 2 |
[admin:ctr2]: serviceenginegroup>
n Proxy
n SSL Termination
n HTTP Policies
n WAF
n Dispatcher
n High PPS
n High Throughput
Single Root I/O Virtualization (SR-IOV) assigns a part of the physical port (PF, Physical Function) resources to the guest operating system. A Virtual Function (VF) is directly mapped as the vNIC of the guest VM, and the guest VM must implement the specific VF's driver. For more information on SR-IOV, see SR-IOV with VLAN and NSX Advanced Load Balancer (OpenStack No-Access) Integration in DPDK.
Virtual Switch
The virtual switch within the hypervisor implements L2 switching functionality and forwards traffic to each guest VM's vNIC. The virtual switch maps a VLAN to a vNIC, or terminates overlay networks and maps the overlay segment ID to a vNIC.
Note AWS/Azure clouds have implemented the full virtual switch and overlay termination
within the physical NIC and network packets bypass the hypervisor.
In these cases, as VF is directly mapped to the vNIC of the guest VM, the guest VM needs to
implement a specific VF’s driver.
A VLAN interface is a logical interface that can be configured with an IP address. VLAN interfaces act as child interfaces of the parent vNIC interface and can also be created on port channels/bonds.
VRF Context
A VRF identifies a virtual routing and forwarding domain. Every VRF has its routing table
within the SE. Similar to a physical interface, a VLAN interface can be moved into a VRF. The
IP subnet of the VLAN interface is part of the VRF and its routing table. The packet with a
VLAN tag is processed within the VRF context. Interfaces in two different VRF contexts can
have overlapping IP addresses.
Health Monitor
Health monitors run in the data path within the proxy, as synchronous operations along with packet processing. Health monitors are shared across all the proxy cores; hence, health monitoring scales linearly with the number of cores in the SE.
For instance, 10 virtual services with 5 servers in a pool per virtual service and one health monitor per server amount to 50 health monitors across all the virtual services. A 6-core SE with a dedicated dispatcher has 5 proxies, so each proxy runs 10 health monitors, and all the health monitor status is maintained in shared memory across the proxies.
A custom external health monitor runs as a separate process within the SE, and the script provides the health monitor status to the proxy.
You can enable DHCP from the Controller using the following command: configure serviceengine <serviceengine-name>
You can check the desired data_vnics index (i) using the following command:
To disable DHCP on a particular data_vnic, replace dhcp_enabled with no dhcp_enabled in the above command sequence.
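Put together, the sequence could look like the following sketch; the SE name and vNIC index are examples:

```
> configure serviceengine se-1
> data_vnics index 1
> dhcp_enabled
> save
> save
```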
Note If DHCP is turned on for unmanaged or unconnected interfaces, it can slow down the SE stop sequence, and the SE can be restarted by the Controller.
Change VRF Context Setting for NSX Advanced Load Balancer SE's
Management Network
The option to change the VRF context setting is available on the NSX Advanced Load Balancer UI.
The VRF setting can be changed by navigating to Infrastructure > Service Engine and clicking the
edit option.
Configuring static routes available under the Management Network does not affect the SE. As
part of SE boot-up process, the NSX Advanced Load Balancer Controller only picks the default
gateway that is applicable to the specific SE based on the SE’s management network.
The VRF edit option on the NSX Advanced Load Balancer UI is only applicable to the data NICs. The VRF context setting for the management network can be changed using the NSX Advanced Load Balancer CLI.
This section also explains how to configure NSX Advanced Load Balancer when there are multiple
management networks used across Service Engines Groups.
Instructions
Log in to the NSX Advanced Load Balancer CLI, and execute the following commands:
n static_routes route_id <ID> prefix 0/0 next_hop <IP address of the next
hop>
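The full sequence could look like the following sketch; the route ID and next-hop address are examples, and the built-in management VRF context is assumed:

```
> configure vrfcontext management
> static_routes route_id 1 prefix 0.0.0.0/0 next_hop 10.10.30.1
> save
```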
After the changes, the show output will exhibit the following information. The VRF for the
management network has the routing entries for the two different subnets.
Virtual Routing and Forwarding (VRF) is a method of isolating traffic within a system. It is also referred to as a "route domain" within the load balancer community.
n No Access Cloud
Note Multiple VRFs are only supported in Linux Server Clouds for SEs with DPDK enabled.
n Physical interfaces
n Port-channel interfaces
n VLAN interfaces
The following types of data interfaces do not support modification of the VRF property. Any
attempt to modify them will result in an error.
n Management interface
n If in-band management is enabled on an SE, that SE will not support multiple VRFs.
n To enable multiple VRFs on an SE, it must be deployed with in-band management disabled. The caveat with disabling in-band management is that the management interface will not be used for data plane traffic; hence, no virtual service will be placed on this interface, and this interface will not be used to communicate with back-end servers.
Procedure
Note If the VMware vCenter cloud is the only one configured, or was the first one configured,
the cloud name is “Default-Cloud”.
Prerequisites
The steps to create a virtual service in a VRF can be performed from the admin tenant or another
tenant.
Procedure
5 Click Next.
6 Select the VRF context from the list and click Next.
7 Enter a name for the virtual service, virtual IP address (VIP) and other properties of the virtual
service.
8 Click Save.
Routing
This section covers the following topics:
A static route is required on the next-hop router from the NSX Advanced Load Balancer SE to the
pool, in the following case:
n The virtual service’s VIP address or SNAT addresses are not in any of the SE interface subnets.
To make the static route work, the following are the prerequisites:
This section shows sample topologies that use static routing for server response traffic.
Here is a sample topology without HA. The virtual service’s VIP and SNAT IP addresses are not in
any of the SE interface subnets. As a result, a static route from the back end server to the SE is
required on the next-hop router.
Static routes can be provisioned on the next-hop router to point to the interface IP of the Avi SE.
However, it is recommended to configure a floating interface IP for the SE group and to have the
static route use the floating interface as the adjacency. This will allow the smooth addition of a
second Avi SE in the future if required, for HA purposes (using legacy HA mode).
[Figure: Non-HA topology — a client on 192.168.1.0/24 reaches the active SE (interface 10.10.1.1/24) in SE group 1. The legend distinguishes the management network from the datapath network.]
Similarly, static routes or a default gateway will also need to be provisioned on the SE group, to
enable reachability to servers and clients, which might not be Layer-2 adjacent. For information
on provisioning a default gateway and static routes on an SE, see NSX Advanced Load Balancer
Infrastructure.
The active SE responds to Address Resolution Protocols (ARPs) for the VIP and SNAT IP
addresses that are in the same subnet as the SE. The active SE also carries traffic corresponding to
all the virtual services. The standby SE remains idle unless the active SE becomes unavailable. In
this case, the standby SE takes over the active role and assumes ownership of the virtual service’s
IP addresses.
Note The use of static routes for VIP and SNAT IP reachability in cluster HA configurations is not
supported.
[Figure: Legacy active/standby HA topology — a client on 192.168.1.0/24 reaches SE group 1, with an active SE at 10.10.1.1/24 and a standby SE at 10.10.1.2/24. HA mode = Legacy Active/Standby.]
In this example, neither the VIP nor the SNAT IP is part of the SE interface’s subnet. For this
reason, a floating interface IP (10.10.1.100) is configured. The floating interface IP must be in the
same subnet as the attached interface subnet through which the VIP or SNAT-IP is reachable
(10.10.1.0/24 subnet in the above topology).
A separate floating interface IP is required for each of the attached interface subnets through
which VIP or SNAT IP traffic flows. On the next-hop router used by the server pool for return
traffic back to the SE, static routes to the VIP and SNAT IP addresses are configured, with the
next-hop set to the floating interface IP.
Following failover, ownership of the VIP, SNAT IPs, and floating interface IP are taken over by the
new active SE, as shown here:
[Figure: topology after failover. Client network 192.168.1.0/24; failed SE 10.10.1.1/24, new active SE 10.10.1.2/24 in SE group 1; HA mode = Legacy Active/Standby. Legend: management network, datapath network.]
The connecting router thus does not see any change, except for the gratuitous ARP update for the floating interface's IP address, which is now mapped to the interface MAC address of the new active SE.
Configuration
On the NSX Advanced Load Balancer Controller, the VIP and SNAT IP addresses are part of the
individual virtual service’s configuration.
The HA mode and floating IP address are configured within the SE group.
Note The SE group for the non-HA topology contains a single SE. The SE group for the legacy
HA topology contains two SEs.
VIP Address
The VIP address is the IP address that DNS will return in response to queries for the load-
balanced application’s domain name. This is the destination IP address of requests sent from the
client browser to the application.
SNAT IP Address
When the SE forwards a request to a back end server, the SE uses the SNAT IP address as the
source address of the client request. In deployments that handle VIP traffic differently depending
on the application, the source NAT IP address provides a way to direct the traffic. The SNAT IP
address also ensures that response traffic from the back end servers goes back through the SE
that forwarded the request.
Within the SE group configuration, legacy HA mode is selected and the floating IP address is
specified.
In the absence of a router in the server networks, the NSX Advanced Load Balancer SE can route the traffic of the server networks by using the IP routing feature of Service Engines. NAT functionality in the SE is also needed to use the SE as a NAT gateway for the entire private network of servers.
NAT functions in the post-routing phase of the packet path in the SE. It is recommended to first review the SE default gateway (IP routing on Service Engine) feature. For more information, see Default Gateway (IP Routing on NSX Advanced Load Balancer SE).
Enabling IP routing on the Service Engine and using the SE as the gateway is a necessary prerequisite for the outbound NAT feature. Hence, all requirements for enabling IP routing on the Service Engine also apply to the outbound NAT feature.
NAT Guidelines
NAT is VRF-aware and must be programmed per SE group using a network service of Routing
Service type. For more information, see Network Service.
NAT/IP routing is supported on two-armed, no-access configurations of Linux server clouds and
VMware clouds.
NSX Advanced Load Balancer supports NAT for VMware cloud deployments in write access
mode. For this feature to work on VMware write access clouds, at least one virtual service must be
configured with the following configurations:
n One arm (in the two-arm mode deployment) must be placed in the back end network. For this
network, the SE acts as the default gateway.
n NAT functions are done by Service Engine IP stack, so the routing_by_linux_ipstack attribute
of Routing Service should be set to False.
n On VMware write access clouds, a virtual service must already have been created. Creating this virtual service brings up the required Service Engines.
n The NAT IP of a NAT rule cannot be the same as any interface IP present in the VRF. Such a NAT IP will be ignored.
n The NAT IP is configured on an interface as a secondary IP. Hence, different Service Engine groups cannot share a NAT IP in a given VRF.
NAT Service
The diagrammatic representation of NAT service traffic initiated from inside to outside is as
follows:
[Figure: NAT service traffic flow, steps 1 through 8. Front end network (FE-NW) 10.100.0.0/24 with SE floating IP 10.100.0.2/24.]
The flow shown in the diagram proceeds from step 1 to step 8. The details of the flow are as follows:
Note
n The router reaches the SE group through its front end floating IP.
n The SE back end network is not routable on the front end.
On the NSX Advanced Load Balancer Controller, enable IP Routing in the Service Engine group (legacy HA only) on the Advanced tab.
On the front end router, configure static routes to the back end server networks with the next-hop
as floating IP in the front end network.
On the back end router, configure the SE’s floating IP in the back end server network as the
default gateway.
Step 1: Assume 10.100.0.78 is the destination IP that the server is trying to reach, and 10.100.0.26 is the NAT IP. This IP is owned by the Service Engine. Note that the NAT IP must be configured as a static route on the front end router, with the next hop set to the SE's front end floating interface IP (10.100.0.2).
Assume that the Service Engine Group name is set to DefaultGroup with SE-interfaces present in
VRF global.
The following debugging commands are available to display information about NAT flows and statistics:
Match Criteria
The following match criteria options are supported:
For every option, an "is not" variant is available. This variant can be used to exclude packets having certain parameters from matching the rule.
Match Operations
1 If two or more of the same parameters are used as match criteria, then OR operation is used
for matching.
Example:
match
addrs 192.168.100.21
addrs 192.168.100.2-192.168.100.10
This will match if the source IP is 192.168.100.21 or if the source IP falls in the range of
192.168.100.2 - 192.168.100.10.
2 If two different parameters are used in the match criteria, then AND operation is used for
matching.
Example:
match
addrs 192.168.100.21
addrs 192.168.100.2-192.168.100.10
ports 80
This will match if the source IP is 192.168.100.21 or falls in the range of
192.168.100.2 - 192.168.100.10, and if the destination port is 80.
3 If there are multiple rules configured, the rules are evaluated in the ascending order as
indexed. The evaluation stops on the first match. No subsequent rules are checked if a packet
already matches a rule.
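The match semantics above can be sketched in Python. This is an illustrative model only, not the product implementation; the rule and packet structures below are hypothetical.

```python
# Sketch of the documented match semantics: values of the same parameter are
# OR-ed, different parameters are AND-ed, and rules are evaluated in ascending
# index order with first-match-wins.
import ipaddress

def value_matches(packet_value, criteria):
    """OR across the values of one parameter; supports single IPs/ports and IP ranges."""
    for crit in criteria:
        if isinstance(crit, tuple):  # (low, high) IP range
            low, high = (int(ipaddress.ip_address(x)) for x in crit)
            if low <= int(ipaddress.ip_address(packet_value)) <= high:
                return True
        elif packet_value == crit:
            return True
    return False

def rule_matches(packet, rule):
    """AND across the different parameters of a rule."""
    return all(value_matches(packet[param], criteria)
               for param, criteria in rule["match"].items())

def evaluate(packet, rules):
    """Rules are evaluated in ascending index order; evaluation stops at the first match."""
    for rule in sorted(rules, key=lambda r: r["index"]):
        if rule_matches(packet, rule):
            return rule["index"]
    return None

rules = [
    {"index": 1, "match": {"addrs": ["192.168.100.21",
                                     ("192.168.100.2", "192.168.100.10")],
                           "ports": [80]}},
    {"index": 2, "match": {"ports": [443]}},
]
# 192.168.100.5 is inside the range AND the destination port is 80 -> rule 1
print(evaluate({"addrs": "192.168.100.5", "ports": 80}, rules))  # 1
```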
Action Options
n NAT IP - can be NSX Advanced Load Balancer VIP, floating interface IP, or IP address in the
subnet of SE interface. NAT IP cannot be SE interface IP.
n NAT IP range.
The SNAT IP address can be specified as part of the virtual service configuration.
In the following example, SNAT is used to identify the application type for a VIP’s traffic. Traffic
destined for email servers must pass through a SPAM filter and anti-virus checks, while traffic
destined for DocShare servers needs to undergo anti-virus and malware filter checks.
[Figure: SNAT-based traffic steering through a firewall. Firewall rules: if source IP == 1.1.1.1, run through SPAM filter and anti-virus (EmailApp servers); if source IP == 1.1.1.2, run through anti-virus and malware filters (DocShare servers). Traffic flows from the Avi SE through the firewall to the servers.]
(The topology representation is logical rather than physical. For instance, the email and DocShare servers can both be running on the same host and be in the same pool, the set of email or DocShare servers does not need to be physically connected to the rest of the network through a single segment, and so on.)
Note Unlike some other load balancing systems, NSX Advanced Load Balancer does not require an entire pool of SNAT IP addresses per virtual service, even for a single load balancing appliance.
NSX Advanced Load Balancer does not have the limitation of 64k port numbers for a single
device. NSX Advanced Load Balancer is designed to allow a single source IP to have more than
64k connections across an application’s back end servers. Up to 48k open connections can be
established to each back end server.
Configuring SE SNAT
To enable source NAT for a virtual service:
Procedure
a If you are creating a new virtual service, click Create > Advanced Setup.
b If you are adding SNAT to an existing virtual service, click the edit icon in the row where
the virtual service is listed.
2 On the Advanced tab, select the SNAT IP in the SNAT IP Address field.
If the SE group allows scaling out to more than one SE, add a unique SNAT IP for each SE. Use
a comma between each IP as a delimiter.
3 Click Save.
Results
The following configuration changes are disruptive, that is, the virtual service will be removed from the existing Service Engines and added back again:
n In Layer 3 HA, the upstream router is used to provide equal-cost multipath (ECMP) load
balancing across the virtual service’s SEs.
n For Layer 3 HA, the configuration might be required on the router between the SEs and the
back end servers to enable return traffic from the server to reach the SEs.
Virtual services can have SNAT enabled when associated with a Service Engine Group and VRF
that have a network service with IP routing enabled. However, on any given virtual service
preserve_client_ip will take precedence over SNAT IP.
If the default Layer 2 forwarding option is used, connections from clients always go to the primary SE and then get distributed using Layer 2 forwarding. Here is an example of a typical
Layer 2 cluster HA topology.
[Figure: Layer 2 cluster HA topology. Client network 10.10.1.0/24; active SEs 10.10.1.1/24 and 10.10.1.2/24 in SE group 1; HA mode = Cluster Active-Active. Legend: management network, datapath network.]
In this topology, two virtual services are configured. Each of the virtual services is provisioned with
a distinct SNAT IP. Since cluster HA is selected, each virtual service will need to be provisioned
with as many SNAT IPs as the number of SEs in the SE group. The Avi Controller will automatically
distribute the SNAT IPs to the individual SEs on which the virtual services are enabled.
Here is the SNAT configuration in the web interface for Virtual Service 1 with IPv4 addresses in the
example topology.
Here is the SNAT configuration in the web interface for Virtual Service 1 with IPv6 addresses in the
example topology.
[Figure: legacy HA topology. Client network 10.10.1.0/24; active SE 10.10.1.1/24 and standby SE 10.10.1.2/24 in SE group 1; HA mode = Legacy HA. Legend: management network, datapath network.]
In case of a failover, the newly active SE will take over the traffic and ownership of the SNAT IP
from the failed SE. Health monitoring is performed only by the active SE.
Here is the SNAT configuration in the web interface for Virtual Service 1 in the example topology.
When SNAT is enabled, NSX Advanced Load Balancer Controller users will need to provide as
many SNAT IPs as the width of the scale-out desired. For instance, to support a maximum of four
SEs, four unique SNAT IPs are required in the virtual service configuration. If fewer SNAT IPs are
configured than the maximum scale-out size, scale-out is limited to one SE per configured SNAT IP.
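The sizing rule above can be expressed as a small sketch. This is illustrative only; the function name is hypothetical.

```python
# Hedged sketch of the documented sizing rule: scale-out width is capped by the
# number of SNAT IPs configured on the virtual service, because each SE needs
# its own unique SNAT IP.
def effective_scale_out(max_scaleout_ses, snat_ips):
    """Width = the smaller of the desired scale-out and the configured SNAT IP count."""
    return min(max_scaleout_ses, len(snat_ips))

# Four SEs desired, but only two SNAT IPs configured -> scale-out limited to 2.
print(effective_scale_out(4, ["10.10.1.101", "10.10.1.102"]))  # 2
```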
Here is an example topology with SNAT enabled in scale-out HA mode with BGP enabled.
[Figure: scale-out HA topology with BGP enabled. Client network 192.168.1.0/24; active SEs 10.10.1.1/24 and 10.10.1.2/24, each peering over BGP, in SE group 1; HA mode = Clustered Active-Active. Legend: management network, datapath network.]
Here is the SNAT configuration in the web interface for Virtual Service 1 in the example topology.
For more information on enabling BGP to advertise SNAT IP addresses, see BGP Support for
Scaling Virtual Services.
A floating interface IP needs to be provisioned to provide adjacency to the upstream router for the
SNAT IP.
For more information, see Legacy HA for NSX Advanced Load Balancer Service Engines.
: snat_ip 10.200.1.1
: snat_ip 2001::10
: save
A connection is considered unique if the combination of the client source IP (for SNATed connections, the SE IP) and protocol port plus the server destination IP and port is unique.
For typical application traffic, the source port from an Avi SE is unique for each SNATed TCP
connection. When SNAT is used, an SE can open up to 64k connections to each destination
server. Every new server added to a pool adds 64k potential concurrent connections. If a
virtual service is scaled across multiple SEs, each SE can maintain a maximum of 64k SNATed
connections to each server.
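As a rough worked example of the capacity statement above (illustrative arithmetic only, assuming the full 64k source-port space is usable per SE/server pair):

```python
# Illustrative capacity arithmetic: with SNAT, each SE can open up to 64k
# connections per back end server, so capacity grows with both the number of
# servers in the pool and the number of scaled-out SEs.
PORTS_PER_SERVER = 64 * 1024  # one source-port space per (SE, server) pair

def max_snat_connections(num_ses, num_servers):
    """Upper bound on concurrent SNATed connections for a scaled virtual service."""
    return num_ses * num_servers * PORTS_PER_SERVER

# One SE and three back end servers -> 196,608 potential concurrent connections.
print(max_snat_connections(1, 3))  # 196608
```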
By default, the NSX Advanced Load Balancer SE source-NATs (SNATs) the traffic destined to servers.
Due to SNAT, application server logs will show the L3 IP address of the SE rather than the original
client’s IP address. Protocol extensions such as the “X-Forwarded-For” header for HTTP require
knowledge of the underlying protocol (such as HTTP). For L4 applications, NSX Advanced Load
Balancer supports version 1 (human-readable format) and version 2 (binary format) of the PROXY
protocol (PROXY protocol spec), which conveys the original connection parameters, such as the
client IP address, to the back-end servers. For L4 SSL applications, version 1 is supported. The
NSX Advanced Load Balancer SE requires no knowledge of the encapsulated protocol, and the
impact on performance caused by the processing of transported information is minimal.
Note For applications served over SSL, the server should be configured to accept proxy protocol,
otherwise the SSL handshake may fail.
PROXY TCP4 (real source address) (proxy address) (TCP source port) (TCP destination port) (CRLF sequence)
Application Support
Applications must be configured to capture the IP address embedded within the proxy header, which is prepended to the connection's data stream. For more information, see PROXY protocol spec.
The following is the equivalent header format with IPv6 addresses as the real source IPv6 address and the proxy IPv6 address:
PROXY TCP6 (real source IPv6 address) (proxy IPv6 address) (TCP source port) (TCP destination port) (CRLF sequence)
All features that are applicable to IPv4 addresses remain applicable with these changes.
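A minimal sketch of building and parsing the human-readable version 1 header, following the field order shown above (the helper functions are illustrative, not part of any product API):

```python
# PROXY protocol version 1 header, per the public PROXY protocol spec:
# PROXY <TCP4|TCP6> <src-addr> <dst-addr> <src-port> <dst-port>\r\n
def build_proxy_v1(src, dst, sport, dport, family="TCP4"):
    """Return the text header line sent at the start of the connection."""
    return f"PROXY {family} {src} {dst} {sport} {dport}\r\n"

def parse_proxy_v1(line):
    """Return (family, src, dst, sport, dport) parsed from a v1 header line."""
    parts = line.rstrip("\r\n").split(" ")
    if parts[0] != "PROXY" or len(parts) != 6:
        raise ValueError("not a PROXY v1 header")
    return parts[1], parts[2], parts[3], int(parts[4]), int(parts[5])

hdr = build_proxy_v1("192.168.0.1", "10.10.1.100", 56324, 443)
print(parse_proxy_v1(hdr))  # ('TCP4', '192.168.0.1', '10.10.1.100', 56324, 443)
```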
The NSX Advanced Load Balancer Controller can migrate a virtual service to an unused SE, or
scale out the virtual service across multiple SEs for even greater capacity. This allows multiple
active SEs to concurrently share the workload of a single virtual service.
In vertical scaling, the resources allocated to the virtual machine running the SE are increased manually, and the VM must reboot. The physical limitations of a single virtual machine restrict this scaling.
For instance, a SE is not allowed to consume more resources than the physical host allows.
In horizontal scaling, a virtual service is placed on additional Service Engines. The first SE on which
the virtual service is placed is called the primary SE and all the additional SEs are called secondary
SEs for the virtual service.
With native scaling, the primary SE receives all connections for the virtual service and distributes
them across all secondary SEs. As a result, all virtual service traffic is routed through the primary
SE. At some point, the primary SE's packet processing capacity will reach a limit. Although
secondary SEs might have the capacity, the primary SE cannot forward enough traffic to utilize
that capacity. Thus, the packet-processing capacity of the primary SE decides the effectiveness of
native scaling.
For instance, when a virtual service is scaled out to four SEs, that is, one primary SE and three secondary SEs, the primary SE's packet processing capacity will reach its limit, and scaling the virtual service out to a fifth Service Engine yields only marginal benefit.
To scale beyond the native scaling's limit of four Service Engines, NSX Advanced Load Balancer
supports BGP-based horizontal scaling. This method relies on RHI and ECMP and requires manual
intervention to scale the load balancing infrastructure. For more information, see BGP Support for
Scaling Virtual Services.
Both horizontal methods can be used in combination. Native scaling requires no changes to the
first SE but instead relies on distributing load to the additional SEs. The scaling capacity requires
no changes within the network or applications.
During a normal steady-state, all traffic can be handled by a single SE. The MAC address of this SE
will respond to any Address Resolution Protocol (ARP) requests.
[Figure: a virtual service scaled across three SEs: secondary 1, primary, and secondary 2.]
n As traffic increases beyond the capacity of a single SE, the NSX Advanced Load Balancer
Controller can add one or more new SEs to the virtual service. These new SEs can process
other virtual service traffic, or they can be newly created for this task. Existing SEs can be
added within a couple of seconds, whereas instantiating a new SE VM may take up to several
minutes, depending on the time necessary to copy the SE image to the VM’s host.
n Once the new SEs are configured (both for networking and configuration sync), the first SE,
known as the primary, will begin forwarding a percentage of inbound client traffic to the
new SE. Packets will flow from the client to the MAC address of the primary SE, then be
forwarded (at layer 2) to the MAC address of the new SE. This secondary SE will terminate the
Transmission Control Protocol (TCP) connection, process the connection and/or request, then
load balance the connection/request to the chosen destination server.
n The secondary SE will source NAT the traffic from its IP address when load balancing the flow
to the chosen server. Servers will respond to the source IP of the connection (the SE), ensuring
a symmetrical return path from the server to the SE that owns the connection.
n For OpenStack with standard Neutron, such behavior presents a security violation. To avoid
this, it is recommended to use port security. For more information, see Neutron ML2 Plugin.
n If you want to take direct control of how an SE routes responses to clients, you can use the CLI (or REST API) to control the se_tunnel_mode setting, as shown:
The tunnel mode setting will not take effect until the SE is rebooted. This is a global change.
>reboot serviceengine
Scaling In
In effect, NSX Advanced Load Balancer load balances the load balancers, which provides a native ability to grow or shrink capacity on the fly.
To scale traffic in, NSX Advanced Load Balancer reverses the process, allowing secondary SEs
30 seconds to timeout active connections by default. At the end of this period, the secondary
terminates the remaining connections. Subsequent packets for these connections will now be
handled by the primary SE, or if the virtual service was distributed across three or more SEs,
the connection could hash to any of the remaining SEs. This timeout can be changed using the
following CLI command: vs_scalein_timeout seconds
Distribution
When scaled across multiple Service Engines, the percentage of load may not be entirely equal.
For instance, the primary SE must make a load balancing decision to determine which SE should
handle a new connection, then forward the ingress packets. For this reason, it will have a higher
workload than the secondary SEs and may therefore own a smaller percentage of connections
than secondary SEs. The primary will automatically adjust the percentage of traffic across the
eligible SEs based on available CPU.
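One way to picture the CPU-based adjustment described above is a headroom-proportional share calculation. This is purely illustrative; the actual algorithm used by the primary SE is internal to the product.

```python
# Illustrative only: assign each eligible SE a traffic share proportional to
# its available CPU headroom, so a busier SE (such as the primary, which also
# forwards ingress packets) owns a smaller percentage of connections.
def traffic_shares(cpu_usage):
    """cpu_usage: {se_name: percent CPU used}. Returns {se_name: traffic share}."""
    headroom = {se: max(0.0, 100.0 - used) for se, used in cpu_usage.items()}
    total = sum(headroom.values()) or 1.0
    return {se: h / total for se, h in headroom.items()}

# The primary is busier, so it receives the smallest share.
print(traffic_shares({"primary": 70, "secondary-1": 40, "secondary-2": 40}))
```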
Scaling out works well for the following use cases:
n Traffic that involves minimal ingress and greater egress traffic, such as client/server apps,
HTTP or video streaming protocols. For instance, SEs may exist on hosts with single 10-Gbps
NICs. While scaled out, the virtual service can still deliver 30 Gbps of traffic to clients.
n Protocols or virtual service features that consume significant CPU resources, such as
compression or Secure Sockets Layer (SSL)/ Transport Layer Security (TLS).
Scaling does not work well for the following use case:
n Traffic that involves significant client uploads beyond the network or packet-per-second capacity of a single SE (or specifically the underlying virtual machine). Since all ingress packets traverse the primary SE, scaling may not be of much benefit. For packet-per-second limitations, see the documentation for the desired platform or hypervisor.
Secondary SE Failure
If a secondary SE fails, the primary will detect the failure quickly and forward subsequent packets
to the remaining SEs handling the virtual service. Depending on the high availability mode
selected, a new SE may also be automatically added to the group to fill the gap in capacity. Aside
from the potential increase in connections, traffic to other SEs is not affected.
Primary SE Failure
If the primary SE fails, a new primary will be automatically chosen among the secondary SEs.
Similar to a non-scaled failover event, the new primary will advertise a gratuitous ARP for
the virtual service IP address. If the virtual service was using source IP persistence, the newly
promoted primary will have a mirrored copy of the persistence table. Other persistence methods
such as cookies and secure HTTPS are maintained by the client; therefore no mirroring is
necessary. For TCP and UDP connections that were previously delegated to the newly promoted
primary SE, the connections continue as normal, although now there is no need for these packets
to incur the extra hop from the primary to the secondary.
For connections that were owned by the failed primary or by other secondary SEs, the new
primary will need to rebuild their mapping in its connection table. As a new, non-SYN packet is
received by the new primary, it will query the remaining SEs to see if they had been processing
the connection. If they had, the connection flow will be reestablished to the same SE. If no SE
announces it had been handling the flow, it is assumed the flow was owned by the failed primary.
The connection will be reset for TCP, or load balanced to a remaining SE for UDP.
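The flow-recovery logic described above can be sketched as follows. This is illustrative only; the data structures and function are hypothetical.

```python
# Sketch of the documented recovery behavior: on receiving a non-SYN packet
# with no local flow entry, the new primary asks the surviving SEs whether they
# own the flow. If none claims it, the flow belonged to the failed primary:
# reset for TCP, load balance to a remaining SE for UDP.
def recover_flow(flow_key, surviving_ses, proto):
    for se in surviving_ses:
        if flow_key in se["flows"]:          # a surviving SE claims the connection
            return ("forward", se["name"])   # re-establish the mapping to that SE
    # No owner found -> the flow was on the failed primary.
    return ("reset", None) if proto == "TCP" else ("load_balance", None)

ses = [{"name": "se-2", "flows": {("1.1.1.1", 55000, "10.0.0.5", 443)}},
       {"name": "se-3", "flows": set()}]
print(recover_flow(("1.1.1.1", 55000, "10.0.0.5", 443), ses, "TCP"))
print(recover_flow(("2.2.2.2", 40000, "10.0.0.5", 443), ses, "UDP"))
```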
Relation to HA modes
Scaling is different from high availability; however, the two are heavily intertwined. A scaled-out
virtual service will experience no more than a performance degradation if a single SE in the
group fails. Legacy HA active/standby mode - a two-SE configuration - does not support scaling.
Instead, service continuity depends on the existence of initialized standby virtual services on the
surviving SE. These are capable of taking over with a single command.
NSX Advanced Load Balancer’s default HA mode is elastic HA N+M mode, which starts each
virtual service for the SE group in a non-scaled mode on a single SE. In such a configuration,
failure of an SE running non-scaled virtual services causes a brief service outage (of those
virtual services only), during which the Controller places the affected virtual services on spare
SE capacity. In contrast, a virtual service that has scaled to two or more SEs in an N+M group
suffers no outage, but instead a potential performance reduction.
Migrate
In addition to scaling, a virtual service can also be migrated to a different SE. For instance,
multiple underutilized SEs can be consolidated into a single SE. Or, for a single SE with two busy virtual services, one virtual service can be migrated to its own SE. If further capacity is required, the virtual service can still be scaled out to additional SEs. The migration process behaves similarly to scaling. A new SE is added to an existing virtual service as a secondary. Shortly after, the NSX Advanced Load Balancer Controller will promote the secondary to become primary. The new SE will now
handle all new connections, forwarding any older connections to the now secondary SE. After 30
seconds, the old SE will terminate the remaining connections and be removed from the virtual
service configuration.
Manual Scaling
Manual scaling is the default mode. Scale-out is initiated from the Analytics page for the virtual
service. Point to the Quick Info popup (the virtual service name in the top left corner) to show
options for Scale-Out, Scale In, and Migrate. Select the desired option to scale or migrate. If NSX
Advanced Load Balancer is configured in full access mode, then scale out will begin. This can take
a couple of seconds if an existing SE has available resource capacity and can be added to the
VS, or up to a couple of minutes if a new SE must be instantiated. For read or no access modes,
the NSX Advanced Load Balancer Controller cannot install new SEs or change the networking
settings of existing SEs. Therefore, the administrator may be required to manually create new
SEs and properly configure their network settings before initiating a scale-out command. If an
eligible SE is not available when attempting to scale out, an error message will provide further
info. Consider scaling out when the SE CPU exceeds 80% for any sustained amount of time, the SE memory exceeds 90%, or the packets per second reach the limit of the hypervisor for a virtual machine.
Automated Scaling
The default for scaling is manual. This may be changed on a per-SE-group basis to automatic
scaling (auto-rebalance), which allows the Avi Controller to determine when to scale or migrate
a virtual service. By default, NSX Advanced Load Balancer Controller may scale-out or migrate
a virtual service when the SE CPU exceeds an 80% average. It will migrate or scale in a virtual
service if the SE CPU is under 30%. The Controller inspects SE groups at a five-minute interval.
If the last 30 seconds of that 5-minute interval is above the maximum or below the minimum
settings, the Controller may take an action to rebalance the virtual services across SEs. The
Controller will only initiate or allow one pending change per five-minute period. This could be
a scale in, scale-out, or virtual service migration.
[Flowchart: at each 5-minute interval, check whether the SE CPU is above 80%. If yes, check whether one virtual service consumes more than 70% of the SE's PPS. If yes, scale out that virtual service.]
n If a single virtual service exists on an SE and that SE is above the 80% threshold, the virtual
service will be scaled out.
n The ratio of consumption of SEs by virtual services is determined by comparing the PPS
(packets per second) during the 5-minute interval. If the SE is above the 80% CPU threshold,
and one virtual service is generating more than 70% of the PPS for the SE, this virtual service
will be scaled out. However, if the SE CPU is above the 80% mark, and no single virtual service
is consuming more than 70% of the SE’s PPS, the Controller will elect to migrate a virtual
service to another SE. The virtual service that is consuming the most resources has a higher
probability of being chosen to migrate.
n If two virtual services exist on an SE, and each is consuming 45% of the SE's CPU, in other words neither is violating the 70% PPS rule, one virtual service will be migrated to a new SE.
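The decision flow described above can be sketched as follows. This is an illustrative model of the documented thresholds, not the Controller's actual code.

```python
# Sketch of the documented auto-rebalance decision, evaluated once per
# 5-minute interval: above 80% SE CPU, scale out the dominant virtual service
# (a lone VS, or one generating more than 70% of the SE's PPS); otherwise
# migrate the busiest VS. Below 30% CPU, scale in or migrate.
CPU_HIGH, CPU_LOW, PPS_DOMINANT = 80.0, 30.0, 0.70

def rebalance_action(se_cpu, vs_pps):
    """vs_pps: {vs_name: packets-per-second contribution on this SE}."""
    total = sum(vs_pps.values())
    if se_cpu > CPU_HIGH:
        if len(vs_pps) == 1:
            return ("scale_out", next(iter(vs_pps)))
        name, pps = max(vs_pps.items(), key=lambda kv: kv[1])
        if pps > PPS_DOMINANT * total:
            return ("scale_out", name)   # one VS dominates the SE's PPS
        return ("migrate", name)         # busiest VS is most likely to be moved
    if se_cpu < CPU_LOW:
        return ("scale_in_or_migrate", None)
    return (None, None)                  # within thresholds: no action

print(rebalance_action(85, {"vs1": 9000, "vs2": 1000}))  # ('scale_out', 'vs1')
print(rebalance_action(85, {"vs1": 4500, "vs2": 4500}))  # ('migrate', 'vs1')
```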
For more information, see How to Configure Auto-rebalance Using NSX Advanced Load Balancer
CLI.
Configuring Auto-Rebalance
The auto-rebalance feature automatically migrates or scales virtual services when the load on the Service Engines rises above or falls below the configured thresholds.
For more information, see How to Configure Auto-rebalance Using NSX Advanced Load Balancer CLI.
Typically, a virtual service is placed on one or more NICs, as determined by a placement list computed by the NSX Advanced Load Balancer Controller. However, the list may not include all SE interfaces. This feature enables placing the VIP on all NICs of the SEs in the SE group, which is useful when using the default gateway feature. Otherwise, the back-end servers might never be able to reach a VIP placed on an interface other than the one set as their default gateway.
BGP
This section covers the following topics:
n BGP/BFD Visibility
n Multihop BGP
n How to Access and Use Quagga Shell using NSX Advanced Load Balancer CLI
n BGP Support in NSX Advanced Load Balancer for OpenShift and Kubernetes
n Advertising the NSX Advanced Load Balancer Service Engine as the default route to a set of peers
Note
n This feature is not supported for IPv6.
Learning Back End Routes and Advertising Them to the Front End
The following is the diagrammatic representation of learning back end routes and advertising the
same to the front end:
Learning the Default Route from the Front End and Advertising Itself as the Default
Route to the Back End
The following is the diagrammatic representation of learning default route from the front end and
advertising itself as the default route to the back end:
[Figure: an Avi SE with interface 10.10.118.18/24 learning the default route from the front end and advertising itself as the default route to the back end.]
Key Considerations
The following are the constraints with learning and advertising NSX Advanced Load Balancer BGP:
n The advertisement option is supported only when routing is enabled (see Default Gateway (IP Routing on NSX Advanced Load Balancer SE)). Routing is supported only with legacy HA mode. Only the active SE advertises the routes.
n Configurable route attributes, such as AS path prepend, IP communities, local preference, will
not be applied on learned routes.
n Filters on learning routes and on advertising learned routes are not supported.
n The peers are grouped to exchange routes based on the associated label.
n From a peer, you can either learn routes or learn the default route, but not both.
n The assumption, for instance, is that when you learn routes from back end peers, there will be no default route.
n You will not be advertising NSX Advanced Load Balancer Service Engine as the default route
to any peer belonging to a group from which you are learning the default route.
n You will not be advertising the default route to any peer in the group to which you are
advertising the learned routes.
Note The routes learned through BGP will not be used for placement decisions. The Controller
will not use the routes learned by Service Engines through BGP to evaluate reachability to the pool
servers.
| routing_options[1] | |
| label | backend |
| learn_routes | True |
| advertise_default_route | True |
| max_learn_limit | 100 |
| routing_options[2] | |
| label | frontend |
| learn_only_default_route | True |
| learn_routes | False |
| advertise_learned_route | True |
| max_learn_limit | 50 |
| shutdown | False |
| system_default | True |
| lldp_enable | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+
The example shows a configuration that learns the default route from the front end, advertises the default route to the back end, learns routes from the back end, and advertises the learned routes to the front end.
The following Service Engine route output illustrates the learning and advertisement feature:
[admin:amit-ctrl-bgp]: >
[admin:amit-ctrl-bgp]: > show serviceengine Avi-se-mrcps route
+-----------------+-------------+-----------+---------------+---------------------------+
| IP Destination | Gateway | Interface | Interface IP | Route Flags |
+-----------------+-------------+-----------+---------------+---------------------------+
+-----------------+-------------+-----------+---------------+---------------------------+
VRF 0
+-----------------+-------------+-----------+---------------+---------------------------+
| 4.4.4.0/24      | 100.64.1.64 | eth3      | 100.64.1.24   | Up, Learned, Gateway, GWUp |
| 5.5.5.1/32      | 0.0.0.0     | eth3      | 5.5.5.1       | Up, GWUp                   |
| 6.6.6.0/24      | 100.64.2.65 | eth2      | 100.64.2.56   | Up, Learned, Gateway, GWUp |
| 7.7.7.1/32      | 0.0.0.0     | eth3      | 7.7.7.1       | Up, GWUp                   |
| 100.64.1.0/24   | 0.0.0.0     | eth3      | 100.64.1.24   | Up, GWUp                   |
| 100.64.1.104/32 | 0.0.0.0     | eth3      | 100.64.1.104  | Up, GWUp                   |
| 100.64.1.105/32 | 0.0.0.0     | eth3      | 100.64.1.105  | Up, GWUp                   |
| 100.64.1.106/32 | 0.0.0.0     | eth3      | 100.64.2.106  | Up, GWUp                   |
| 100.64.1.108/32 | 0.0.0.0     | eth3      | 100.64.1.108  | Up, GWUp                   |
| 100.64.2.0/24   | 0.0.0.0     | eth2      | 100.64.2.56   | Up, GWUp                   |
+-----------------+-------------+-----------+---------------+---------------------------+
[admin:admin-ctrl-bgp]: >
Note
n The AS path prepend and local preference features work with the same prerequisites and ecosystem support listed in BGP Support for Scaling Virtual Services.
Prepending AS Path
When multiple paths to an IP address or prefix are available through BGP, a router prefers the path with the fewest AS identifiers in the AS path.
BGP can signal lower priority for a route by prepending an arbitrary number of AS identifiers to it. Such a route is picked only when the route with fewer AS identifiers goes down.
This feature allows you to prepend AS identifiers in the path. This is applicable only to routes advertised over eBGP connections.
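The effect of AS path prepending on eBGP best-path selection can be sketched as follows, using a simplified model that considers only AS-path length and route availability:

```python
# Simplified eBGP best-path sketch: among available routes to the same prefix,
# the shortest AS path wins, so prepending extra AS numbers demotes a path to
# backup until the primary path is withdrawn.
def best_path(routes):
    """routes: list of dicts with 'name', 'as_path' (list of ASNs), and 'up' flag."""
    alive = [r for r in routes if r["up"]]
    return min(alive, key=lambda r: len(r["as_path"]))["name"] if alive else None

primary = {"name": "dc1", "as_path": [65000], "up": True}
backup  = {"name": "dc2", "as_path": [65000, 65000, 65000], "up": True}  # prepended

print(best_path([primary, backup]))                   # 'dc1' (shortest AS path)
print(best_path([{**primary, "up": False}, backup]))  # 'dc2' once dc1 is withdrawn
```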
For local preference, a higher value means higher preference. Local preference is applicable only over iBGP connections.
[Figure: an eBGP peer with local AS 66000 (an AS different from the SE's) learning the VIP routes 10.10.116.19/24 and 20.20.116.19/24.]
You can deploy the same service in two different data centers involving two different NSX
Advanced Load Balancer clusters, both using the same VIP.
The upstream router to which both SEs connect picks the path with the shortest AS path.
If the service advertising the shorter AS path is disrupted, the router falls back to the one
with the longer AS path. This is a method for deploying active/standby across data centers or
geographies.
[Figure: A BGP peer with local AS 65000 (the same local AS as the SEs) learns the VIP routes 10.10.116.19/24 and 20.20.116.19/24. AVI-SE-DC-1 in DataCenter-1 (10.10.116.18/24) advertises VS1 with VIP 1.1.1.1 without any explicit local preference (the default is 100). AVI-SE-DC-2 in DataCenter-2 (20.20.116.18/24) advertises the same VS as VS2 with VIP 1.1.1.1 and local preference set to 200.]
You can deploy the same service in two different data centers involving two different NSX
Advanced Load Balancer clusters, both using the same VIP.
The upstream router to which both SEs connect picks the path with the higher local preference.
If the service advertising the higher local preference is disrupted, the router falls back to the
path with the lower local preference. This is a method for deploying active/standby across
data centers or geographies.
The community feature allows you to configure a default community string, plus separate
community strings for specific address ranges.
AS path prepend and local preference are route qualifiers like the community string, so the same
process can be followed for them.
The configuration supports setting a local preference value for all the VIP and SNAT routes
advertised, as well as the number of times the local AS is prepended to those routes. Both are
fields in the BGP profile, which is part of the VRF context.
Navigate to Infrastructure > Routing > BGP Peering and provide the value for the AS-Path
Prepend as shown below:
| shutdown | False |
| system_default | True |
| lldp_enable | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+-----------------------+-------------------------------------------------+
In this use case, the upstream router sees an AS path prepended to length N+1, where N is the
AS-path prepend count configured in the BGP profile (the extra entry is the SE's own AS).
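The same setting can also be applied from the Controller CLI. The sketch below is illustrative only; the field name num_as_path_prepend is an assumption, so verify it in your release with a trailing "?" in the shell.

```shell
# Hedged sketch: configure the AS-path prepend count in the BGP profile
# of the global VRF context. Field name num_as_path_prepend is assumed.
configure vrfcontext global
bgp_profile
num_as_path_prepend 3
save
save
```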
Navigate to Infrastructure > Routing > BGP Peering and provide the value for the Local
Preference as shown below:
Note Any configuration change in the AS path prepend or local preference parameters can cause
the BGP connections to the peers to flap.
| hold_time | 180 |
| send_community | True |
| local_preference | 500 |
| shutdown | False |
| system_default | False |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------+-------------------------------------------------+
In this use case, the upstream router shows the local preference updated to the configured
value.
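The local preference can likewise be set from the Controller CLI; the sketch below uses the local_preference field shown in the profile output above, with the value 500 from that same output:

```shell
# Hedged sketch: set the local preference applied to all advertised VIP
# and SNAT routes in the BGP profile of the global VRF context.
configure vrfcontext global
bgp_profile
local_preference 500
save
save
```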
When a VRF and its BGP profile are deployed in an SE, if there are peer configurations with
ibgp_local_as_override set and the peer subnet applies to the SE, the profile-level local_as is
overridden with the peer-level remote_as.
n If there are multiple peers with subnets to the same TOR in the SE and
ibgp_local_as_override is enabled, all the peers must have the same remote_as value.
Example config
+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-553674bd-44b9-4a22-b4d6-8bf804e0f046 |
| name | global |
| bgp_profile | |
| local_as | 100 |
| ibgp | True |
| peers[1] | |
| remote_as | 200 |
| peer_ip | 100.64.3.10 |
| subnet | 100.64.3.0/24 |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| ibgp_local_as_override | True |
| peers[2] | |
| remote_as | 200 |
| peer_ip | 100.64.4.10 |
| subnet | 100.64.4.0/24 |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| ibgp_local_as_override | True |
| peers[3] | |
| remote_as | 300 |
| peer_ip | 100.64.5.10 |
| subnet | 100.64.5.0/24 |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| ibgp_local_as_override | True |
| peers[4] | |
| remote_as | 100 |
| peer_ip | 100.64.6.10 |
| subnet | 100.64.6.0/24 |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| keepalive_interval | 60 |
| hold_time | 180 |
| send_community | True |
| shutdown | False |
| system_default | True |
| lldp_enable | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+
With the above config, the following are the only valid SE peerings:
Note Any other combination of peering is invalid and results in all the BGP virtual services
deployed in the SE with this VRF going to OPER_DOWN state.
For instance, capacity can be added for a virtual service when needed by scaling out the virtual
service to additional SEs within the SE group, then removing (scaling in) the additional SEs when
no longer needed. In this case, the primary SE for the virtual service coordinates the distribution of
the virtual service traffic among the other SEs, while also continuing to process some of the virtual
service’s traffic.
An alternative method for scaling a virtual service is to use a Border Gateway Protocol (BGP)
feature, route health injection (RHI), with a layer 3 routing feature, equal-cost multi-path (ECMP).
Using Route Health Injection (RHI) with ECMP for virtual service scaling avoids the management
overhead placed on the primary SE to coordinate the scaled-out traffic among the SEs.
BGP is supported in legacy (active/standby) and elastic (active/active and N+M) high availability
modes.
If a virtual service is marked down by its health monitor or for any other reason, the NSX
Advanced Load Balancer SE withdraws the route advertisement to its virtual IP (VIP) and restores
the same only when the virtual service is marked up again.
Notes on Limits
Service Engine Count
By default, NSX Advanced Load Balancer supports a maximum of four SEs per virtual service, and
this can be increased to a maximum of 64 SEs. Each SE uses RHI to advertise a /32 host route
to the virtual service’s VIP address and can accept the traffic. The upstream router uses ECMP to
select a path to one of the SEs.
The limit on SE count is imposed by the ECMP support on the upstream router. If the router
supports up to 64 equal-cost routes, a virtual service enabled for RHI can be scaled out to as
many as 64 SEs. If the router supports fewer paths, the maximum SE count for an RHI-enabled
virtual service is correspondingly lower.
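On the router side, the ECMP path limit is usually an explicit setting. As an illustrative sketch only (not a vendor recommendation; the AS number 100 matches the FRR example later in this guide, and the compile-time multipath maximum varies by build), an FRR-based peer could raise the limit as follows:

```shell
# Hedged sketch (FRR peer): raise the ECMP limit so an RHI-enabled VIP
# can be reached through up to 64 SE next hops.
vtysh \
  -c 'configure terminal' \
  -c 'router bgp 100' \
  -c 'address-family ipv4 unicast' \
  -c 'maximum-paths 64'
```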
NSX Advanced Load Balancer supports 4 distinct subnets with any number of peers in those 4
subnets. Consequently, a VIP can be advertised on more than 4 peers as long as those peers
belong to 4 or fewer subnets. To illustrate:
n A VIP can be advertised to 4 pairs of peers (that is, 8 peers in total), with each pair belonging
to a separate subnet.
Supported Ecosystem
BGP-based scaling is supported in the following:
n VMware
Note Peering with OpenStack routers is not supported. However, peering with an external router
is possible.
BGP-based Scaling
NSX Advanced Load Balancer supports the use of the following routing features to dynamically
perform virtual service load balancing and scaling:
n Route health injection (RHI): RHI allows traffic to reach a VIP that is not in the same subnet as
its SE. The NSX Advanced Load Balancer Service Engine (SE) where a virtual service is located
advertises a host route to the VIP for that virtual service, with the SE’s IP address as the
next-hop router address. Based on this update, the BGP peer connected to the NSX Advanced
Load Balancer SE updates its route table to use the NSX Advanced Load Balancer SE as the
next hop for reaching the VIP. The peer BGP router also advertises itself to its upstream BGP
peers as a next hop for reaching the VIP.
n Equal cost multi-path (ECMP): Higher bandwidth for the VIP is provided by load sharing its
traffic across multiple physical links to the SE(s). If an NSX Advanced Load Balancer SE has
multiple links to the BGP peer, the NSX Advanced Load Balancer SE advertises the VIP host
route on each of those links. The BGP peer router sees multiple next-hop paths to the virtual
service’s VIP and uses ECMP to balance traffic across the paths. If the virtual service is scaled
out to multiple NSX Advanced Load Balancer SEs, each SE advertises the VIP, on each of its
links to the peer BGP router.
When a virtual service enabled for BGP is placed on its NSX Advanced Load Balancer SE, that SE
establishes a BGP peer session with each of its next-hop BGP peer routers. The NSX Advanced
Load Balancer SE then performs RHI for the virtual service’s VIP by advertising a host route (/32
network mask) to the VIP. The NSX Advanced Load Balancer SE sends the advertisement as a
BGP route update to each of its BGP peers. When a BGP peer receives this update from the NSX
Advanced Load Balancer SE, the peer updates its route table with a route to the VIP that uses the
SE as the next hop. Typically, the BGP peer also advertises the VIP route to its other BGP peers.
The BGP peer IP addresses, the local Autonomous System (AS) number, and a few other
settings are specified in a BGP profile on the NSX Advanced Load Balancer Controller. RHI
support is disabled (default) or enabled within the individual virtual service’s configuration. If an
NSX Advanced Load Balancer SE has more than one link to the same BGP peer, this also enables
ECMP support for the VIP. The NSX Advanced Load Balancer SE advertises a separate host route
to the VIP on each of the NSX Advanced Load Balancer SE interfaces with the BGP peer.
If the NSX Advanced Load Balancer SE fails, the BGP peers withdraw the routes that were
advertised to them by the NSX Advanced Load Balancer SE.
n If a new peer is added to the BGP profile, the virtual service IP is advertised to the new BGP
peer router without needing to disable/enable the virtual service.
n If a BGP peer is deleted from the BGP profile, any virtual service IPs that had been advertised
to the BGP peer will be withdrawn.
Note The ECMP route groups or ECMP next-hop groups on the router can be exhausted if many
unique sets of SE BGP next hops are advertised for different virtual service VIPs. When such
exhaustion happens, the router can fall back to a single SE next hop, causing traffic issues.
Example:
The following is the sample config on a Dell S4048 switch for adding 5k network entries and 20k
paths:
w1g27-avi-s4048-1#show ip protocol-queue-mapping
Protocol Src-Port Dst-Port TcpFlag Queue EgPort Rate (kbps)
-------- -------- -------- ------- ----- ------ -----------
TCP (BGP) any/179 179/any _ Q9 _ 10000
UDP (DHCP) 67/68 68/67 _ Q10 _ _
UDP (DHCP-R) 67 67 _ Q10 _ _
TCP (FTP) any 21 _ Q6 _ _
ICMP any any _ Q6 _ _
IGMP any any _ Q11 _ _
TCP (MSDP) any/639 639/any _ Q11 _ _
UDP (NTP) any 123 _ Q6 _ _
OSPF any any _ Q9 _ _
PIM any any _ Q11 _ _
UDP (RIP) any 520 _ Q9 _ _
TCP (SSH) any 22 _ Q6 _ _
TCP (TELNET) any 23 _ Q6 _ _
[Figure: Supported SE-to-switch/router BGP peering topologies: (1) a /31 or /30 subnet with BGP peering between the interface IPs; (2) a /24 subnet with an SVI in the router and BGP peering between the host and the SVI IP; (3) a Layer 2 port-channel; (4) separate L3 interfaces in different subnets (/31, or /24 with SVI), peering with the router's IP on each subnet; (5) separate L3 interfaces in the same subnet, which is not supported. Example addressing: 10.10.10.3/31, 10.10.10.3/24, 10.10.20.3/24, 10.10.20.4/24.]
BGP is supported over the following types of links between the BGP peer and the NSX Advanced
Load Balancer SEs:
n Host route (/30 or /31 mask length) to the VIP, with the NSX Advanced Load Balancer SE as
the next hop.
n Network route (/24 mask length) subnet with Switched Virtual Interface (SVI) configured in the
router.
n Layer 2 port-channel (separate physical links configured as a single logical link on the next-hop
switch or router).
n Multiple layer 3 interfaces, in separate subnets (/31 or /24 with SVI). A separate BGP peer
session is set up between each NSX Advanced Load Balancer SE layer 3 interface and the BGP
peer.
Each SE can have multiple BGP peers. For example, an SE with interfaces in separate layer 3
subnets can have a peer session with a different BGP peer on each interface. The connection
between the NSX Advanced Load Balancer SE and the BGP peer on separate Layer 3 interfaces
that are in the same subnet and same VLAN is not supported. Using multiple links to the BGP
peer provides higher throughput for the VIP. The virtual service also can be scaled out for higher
throughput. In either case, a separate host route to the VIP is advertised over each link to the BGP
peer, with the NSX Advanced Load Balancer SE as the next-hop address.
To make debugging easier, some BGP commands can be viewed from the NSX Advanced Load
Balancer Controller shell. For more information, see BGP Visibility.
n Field: VirtualService.advertise_down_vs
n Configuration:
Note
n If the virtual service is already down, the configuration changes do not affect it; they are
applied if the virtual service goes down in the future. In such cases, disable and then
re-enable the virtual service to apply the configuration. The
remove_listening_port_on_vs_down feature does not work if advertise_down_vs is False.
n For custom actions, such as HTTP redirects or error pages, to handle a down virtual
service, VirtualService.remove_listening_port_on_vs_down must be False.
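A hedged CLI sketch of this combination follows; the virtual service name vs-1 is illustrative, and the boolean toggle syntax should be verified in your release:

```shell
# Hedged sketch: keep advertising the VIP route while the virtual service
# is down, and keep the listening port open so custom error pages or
# redirects can still be served. "vs-1" is an illustrative name.
configure virtualservice vs-1
advertise_down_vs
no remove_listening_port_on_vs_down
save
```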
Use Case for Adding the Same BGP Peer to Different VRFs
A block prevents you from:
n Adding a BGP peer that belongs to a network with a different VRF than the VRF to which you
are adding the peer
n Changing a network's VRF if the network is being used in the BGP profile
| vnic[3]                    |                                                           |
| if_name                    | avi_eth5                                                  |
| linux_name                 | eth3                                                      |
| mac_address                | 00:50:56:86:0f:c8                                         |
| pci_id                     | 0000:0b:00.0                                              |
| mtu                        | 1500                                                      |
| dhcp_enabled               | True                                                      |
| enabled                    | True                                                      |
| connected                  | True                                                      |
| network_uuid               | dvportgroup-2404-cloud-d992824d-d055-4051-94f8-5abe4a323231 |
| nw[1]                      |                                                           |
| ip                         | fe80::250:56ff:fe86:fc8/64                                |
| mode                       | DHCP                                                      |
| nw[2]                      |                                                           |
| ip                         | 10.160.4.16/24                                            |
| mode                       | DHCP                                                      |
| is_mgmt                    | False                                                     |
| is_complete                | True                                                      |
| avi_internal_network       | False                                                     |
| enabled_flag               | False                                                     |
| running_flag               | True                                                      |
| pushed_to_dataplane        | True                                                      |
| consumed_by_dataplane      | True                                                      |
| pushed_to_controller       | True                                                      |
| can_se_dp_takeover         | True                                                      |
| vrf_ref                    | T-0-default                                               |
| vrf_id                     | 2                                                         |
| ip6_autocfg_enabled        | False                                                     |
| vnic[7]                    |                                                           |
| if_name                    | avi_eth6                                                  |
| linux_name                 | eth4                                                      |
| mac_address                | 00:50:56:86:12:0e                                         |
| pci_id                     | 0000:0c:00.0                                              |
| mtu                        | 1500                                                      |
| dhcp_enabled               | True                                                      |
| enabled                    | True                                                      |
| connected                  | True                                                      |
| network_uuid               | dvportgroup-69-cloud-d992824d-d055-4051-94f8-5abe4a323231 |
| nw[1]                      |                                                           |
| ip                         | 10.160.4.21/24                                            |
| mode                       | DHCP                                                      |
| nw[2]                      |                                                           |
| ip                         | 172.16.1.90/32                                            |
| mode                       | VIP                                                       |
| ref_cnt                    | 1                                                         |
| nw[3]                      |                                                           |
| ip                         | fe80::250:56ff:fe86:120e/64                               |
| mode                       | DHCP                                                      |
| is_mgmt                    | False                                                     |
| is_complete                | True                                                      |
| avi_internal_network       | False                                                     |
| enabled_flag               | False                                                     |
| running_flag               | True                                                      |
| pushed_to_dataplane        | True                                                      |
| consumed_by_dataplane      | True                                                      |
| pushed_to_controller       | True                                                      |
| can_se_dp_takeover         | True                                                      |
| vrf_ref                    | T-0-default                                               |
| vrf_id                     | 2                                                         |
| ip6_autocfg_enabled        | False                                                     |
| T-1-default | vrfcontext-9bea0022-0c15-44ea-8813-cfd93f559261 |
| T-1-VRF | vrfcontext-18821ea1-e1c7-4333-a72b-598c54c584d5 |
+-------------+-------------------------------------------------+
| cloud_ref | backend_vcenter |
+----------------------------+-------------------------------------------------+
Note
n The tenant (tenant VRF enabled) specific SE is configured with a PG-4 interface in VRF context
(T-0-default) which belongs to the tenant and not the actual VRF context (global) in which the
PG-4 is configured.
n From a placement perspective, if you initiate an add vNIC for a Service Engine for a virtual
service, the vNIC’s VRF will always be the VRF of the virtual service. This change will block
you from adding a BGP peer to a vrfcontext if the BGP peer belongs to a network that has
a different vrfcontext. The change is necessary as this configuration can cause traffic to be
dropped.
n Because there is no particular use case for having a VRF-A with BGP peers that belong to
networks in VRF-B, such configuration changes are not allowed.
n Additionally, if you try to change an existing network's VRF while there are BGP peers in that
network's VRF that belong to this network, the change is blocked.
For example, if an NSX Advanced Load Balancer SE fails, BFD on the BGP peer router can quickly
detect and correct the link failure.
Note With NSX Advanced Load Balancer release 21.1.2, the BFD feature supports BGP multi-hop
implementation.
Scaling
Scaling out/in virtual services is supported. In this example, a virtual service is placed on the NSX
Advanced Load Balancer SE on the 10.10.10.x network is scaled out to 3 additional NSX Advanced
Load Balancer SEs.
[Figures 1-3: a VIP served by SEs A through D behind the router, then scaled out to add SE-E.]
Figure 1 shows the virtual service placed on four SEs, with a flow ongoing between a client and
SE-A. In figure 2, there is a scale-out to SE-E. This changes the hash on the router. Existing flows
get rehashed to other SEs. In this particular example, suppose it is SE-C.
[Figures 4-6: after the rehash, the SE that now receives the flow sends flow probes to the other SEs, and an IPIP tunnel is set up to the original owner.]
In the NSX Advanced Load Balancer implementation, SE-C sends a flow probe to all other SEs
(figure 4). Figure 5 shows SE-A responding to claim ownership of the depicted flow. In figure 6,
SE-C uses IPIP tunnelling to send all packets of this flow to SE-A.
[Figure 7: SE-C tunnels the flow's packets to SE-A over IPIP.]
In figure 7, SE-A continues to process the flow and sends its response directly to the client.
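The rehashing behavior walked through in figures 1 through 7 can be illustrated with a toy model of hash-based ECMP next-hop selection. This is a sketch only; real routers use their own hash inputs and bucket layouts, and the SE names and flow tuple below are illustrative.

```python
# Toy model of hash-based ECMP next-hop selection. It only illustrates why
# adding an SE (a new equal-cost path) can move existing flows to a
# different SE, which is what the flow probes and IPIP tunnel recover from.
import hashlib

def ecmp_next_hop(flow, paths):
    """Pick a next hop by hashing the flow 5-tuple over the path list."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

flow = ("10.0.0.5", 40312, "1.1.1.1", 443, "tcp")
before = ecmp_next_hop(flow, ["SE-A", "SE-B", "SE-C", "SE-D"])
after = ecmp_next_hop(flow, ["SE-A", "SE-B", "SE-C", "SE-D", "SE-E"])
# If before != after, the SE that now receives the flow sends a flow probe,
# the original owner claims it, and the packets are tunnelled back to it.
```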
In such a setup, when one of the links goes down, BGP withdraws the routes from that
particular NIC, causing the flow to be rehashed to another interface on the same SE or to another
SE. The new SE that receives the flow tries to recover it with a flow probe, which fails because
the interface is down.
The problem is seen with both the front end and the back end flows.
For the front end flows to be recovered, the flows must belong to a BGP virtual service that is
placed on more than one NIC on a Service Engine.
For the back end flows to be recovered, the virtual service must be configured with SNAT IPs and
must be advertised through BGP to multiple peers in the back end.
Mandatory Requirements:
If the interface goes down, the flow table (FT) entries are not deleted. If the flow lands on
another interface, a flow probe is triggered, which migrates the flow from the old flow table to
the new interface where the flow landed.
The interface down event is reported to the Controller, and the Controller removes the VIP
placement from the interface. This causes the primary virtual service entry to be reset. If the
same flow now lands on a new interface, it triggers a flow probe and flow migration, provided
the virtual service was initially placed on more than one interface.
If the flow lands on a new SE, remote flow probes are triggered. A flag called relay is added
to the flow-probe message. This flag indicates that all the receiving interfaces need to relay the
flow probes to the other flow tables where the flow might reside. The flag is set by the sender of
the flow probe when the virtual service is detected as a BGP scaled-out virtual service.
On the receiving SE, the messages are relayed to the other flow tables, resulting in a flow
migration. A subsequent flow probe from the new SE then receives a response, because the
flow now resides on an interface that is up and running.
If there is more than one interface on the flow-probe receiving SE, they all trigger a flow
migration.
As a result, the flow is recovered when an interface fails and the flow lands on another
interface that has a flow table entry.
Mesos Support
BGP is supported for north-south interfaces in Mesos deployments. The SE container that is
handling the virtual service will establish a BGP peer session with the BGP router configured in
the BGP peering profile for the cloud. The SE then injects a /64 route (host route) to the VIP, by
advertising the /64 to the BGP peer.
n The BGP peer must allow the SE’s IP interfaces and subnets in its BGP neighbor configuration.
The SE will initiate the peer connection with the BGP router.
n For eBGP, the peer router detects the decremented time-to-live (TTL) value for the BGP
session, which can prevent the session from coming up. This can be avoided by setting the
eBGP multi-hop TTL. For example, on Juniper routers, the eBGP multi-hop TTL must be set
to 64.
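As an illustrative sketch only (verify the syntax for your Junos release; the group name avi-se is hypothetical), the TTL could be set on the Juniper side like this:

```shell
# Hedged sketch (Junos): set the eBGP multihop TTL to 64 so the session
# comes up despite the decremented TTL. Group name "avi-se" is illustrative.
set protocols bgp group avi-se multihop ttl 64
commit
```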
To enable MD5 authentication, set md5_secret in the respective BGP peer configuration. MD5
support is extended to the OpenShift cloud, where the Service Engine runs as a Docker
container but peers with other routers while masquerading as the host.
n Configure a BGP profile. The BGP profile specifies the local Autonomous System (AS) ID that
the NSX Advanced Load Balancer SE and each of the peer BGP routers are in, and the IP
address of each peer BGP router.
n Enable the Advertise VIP using the BGP option on the Advanced tab of the virtual service’s
configuration. This option advertises a host route to the VIP address, with the NSX Advanced
Load Balancer SE as the next hop.
Note When BGP is configured on the global VRF on LSC in-band, the BGP configuration is
applied on the SE only when a virtual service is configured on the SE. Until then, peering
between the SE and the peer router does not occur.
Procedure
3 Click the BGP Peering tab, and click the Edit icon to reveal more fields.
5 Click Add New Peer to reveal a set of fields appropriate to iBGP or eBGP.
Note Remote AS is an additional field in eBGP. BGP peering (as eBGP) is explained as
follows:
n SE placement network
n BFD option (on by default; enables very fast link failure detection through BFD; only async
mode is supported)
Navigate to Infrastructure > Routing, and select BGP Peering. Enter the desired values for the
timers as shown:
BGP configuration is tenant-specific, and so is the profile. Accordingly, its sub-options appear
under the appropriate tenant vrfcontext.
This profile enables iBGP with peer BGP router 10.115.0.1/16 in local AS 100. The BGP connection
is secured using MD5 with shared secret “abcd.”
The following commands enable RHI for a source-NAT’ed floating IP address for a virtual service
(vs-1):
The following command can be used to view the virtual service’s configuration:
Two configuration knobs have been added to configure the per-peer “advertisement-interval” and
“connect” timer in Quagga BGP:
Configuration knobs have been added to configure the keepalive interval and hold timer on a
global and per-peer basis:
The above commands configure the keepalive/hold timers on a global basis, but those values
can be overridden for a given peer using the following per-peer commands. Both the global and
per-peer knobs have default values of 60 seconds for the keepalive timer and 180 seconds for the
hold timer.
Example
The following is an example of router configuration when the BGP peer is FRR:
You need to find the interface information of the SE, which is peering with the router.
Here 10.115.10.45 matches the subnet in the peer configuration in vrfcontext->bgp_profile object.
# vtysh
Hello, this is FRRouting (version 7.2.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
frr1# configure t
frr1(config)# router bgp 100
frr1(config-router)# neighbor 10.115.10.45 remote-as 100
frr1(config-router)# neighbor 10.115.10.45 password abcd
frr1(config-router)# end
frr1#
You need to perform this for all the SEs that will be peering.
Note If no VRF is provided in the filters, the command output shows routes from the global
VRF, which is present in the system by default.
With NSX Advanced Load Balancer release 20.1.1, the BFD parameters are user-configurable
using the CLI. For more information, see Configuring High Frequency BFD.
You can select the VIPs to be advertised using labels. When configuring the VSVIP, you can
specify that all peers with a specific label receive a specific VIP advertisement. Each peer
on the front end receives the VIP route advertisement only from the virtual services whose
label matches that of the peer.
n FE-Router-1, FE-Router-2, and FE-Router-3 have labels Peer1, Peer2, and Peer3 respectively.
n There are three virtual services in the Global VRF: VS1, VS2, and VS3.
n VS1 (1.1.1.1) is configured with the label Peer1. This implies that the virtual service will be
advertised to Peer1.
n Similarly, VS2 will be advertised to Peer2 and VS3 to Peer3, as defined by the labels.
Whenever BGP is enabled for a virtual service, the VIP will be advertised to all the front end
routers. However, in this case, the VIP will be advertised to the selected peer only.
To implement this, the labels list bgp_peer_labels is introduced in the VSVIP object configuration.
Each label string can be a maximum of 128 characters and can consist of uppercase and
lowercase letters, numbers, underscores, and hyphens.
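The label rules above can be expressed as a simple validator. This is an illustrative sketch, not the product's own validation code:

```python
# Illustrative validator for BGP peer label rules: 1 to 128 characters
# drawn from upper/lowercase letters, digits, underscores, and hyphens.
import re

_LABEL_RE = re.compile(r"^[A-Za-z0-9_-]{1,128}$")

def is_valid_bgp_peer_label(label: str) -> bool:
    """Return True if the string satisfies the documented label rules."""
    return bool(_LABEL_RE.match(label))
```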
Note
n If the VSVIP does not have any label, it will be advertised to all BGP peers with advertise_vip
set to True.
n If the VSVIP has bgp_peer_labels, only a peer that has advertise_vip set to True and a label
matching bgp_peer_labels receives the VIP advertisement. If the BGP peer configuration
either has no label or the label does not match, the peer does not receive the VIP
advertisement.
To enable selective VIP advertisement, add label Peer1 for the Peer, and add the Peer 1 label in
VsVip.bgp_peer_labels.
The following are the steps to configure BGP peer labels, from the NSX Advanced Load Balancer
UI:
8 Click Save.
Alternatively, BGP peer can be configured using the CLI as shown below:
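Below is a hedged sketch of the peer-side CLI; the peer label field name label is an assumption (verify it with a trailing "?" in the shell), and the peer index 1 is illustrative:

```shell
# Hedged sketch: attach the label "Peer1" to an existing BGP peer in the
# global VRF context. Field name "label" is assumed; index 1 illustrative.
configure vrfcontext global
bgp_profile
peers index 1
label Peer1
save
save
save
```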
Configuring VSVIP 1
From the VS VIP creation screen of the NSX Advanced Load Balancer, add BGP peer labels, as
required:
| enabled | True |
| auto_allocate_ip | False |
| auto_allocate_floating_ip | False |
| avi_allocated_vip | False |
| avi_allocated_fip | False |
| auto_allocate_ip_type | V4_ONLY |
| prefix_length | 32 |
| vrf_context_ref | global |
| east_west_placement | False |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+-----------------------------+-----------------------------------------+
[admin:]: vsvip> bgp_peer_labels Peer1
[admin:]: vsvip> save
Caveats
n This feature is applicable only to BGP virtual services. For virtual services that do not use
BGP, the field bgp_peer_labels cannot be set.
Starting with NSX Advanced Load Balancer version 21.1.4, the virtual service placement is done
based on the BGP peer label configuration.
Use Case 1
A virtual service VIP with a label X, for instance, can be placed only on SEs having BGP peering
with peers containing label X.
In this case, the virtual service is placed on Network 1, Network 2, Network 3, and Network 4.
Use Case 2
A virtual service VIP with no labels can be placed on SEs having BGP peering with peers in any
subnet. That is, if the BGP peer has labels but the BGP virtual service VIP does not have a label,
the virtual service VIP is advertised to be placed on all peer NICs (a maximum of four distinct
peer networks).
[Figure: five peer networks, Network 1 with Label 1 through Network 5 with Label 5.]
In this case, the virtual service is randomly placed on any one of the four networks.
Use Case 3
If the virtual service VIP is later updated to associate a label, the SE receives a virtual service
SE_List update; the VIP is withdrawn from all the other peers and is placed only on the NIC
pertaining to the peer with the matching label (a disruptive update).
Note Updates done on the VS VIP to associate the labels will lead to disruptive update of the
virtual service.
Use Case 4
If the label is removed from the virtual service and the virtual service VIP is left with no label,
the virtual service VIP is placed on all the peer NICs (a maximum of four distinct peer networks).
Use Case 5
If the virtual service VIP is created with labels for which there is no matching peer, VS VIP creation
is blocked due to invalid configuration, whether it is at the time of creating the virtual service or if
the virtual service VIP label is updated later.
[Figure: peer networks Network 2 with Label 2 through Network 4 with Label 4; the VS VIP's label matches none of them.]
In this case, since no BGP peer has a label matching the VS VIP, VS VIP creation is blocked with
the error No BGP Peer exists with matching labels.
The BFD parameters are user-configurable, for faster failure detection. This gives you the flexibility
to choose the frequency of failure detection, as required.
1 Log in to the NSX Advanced Load Balancer shell with your credentials.
| shutdown | False |
| peers[2] | |
| remote_as | 65000 |
| peer_ip | 100.64.50.3 |
| subnet | 100.64.50.0/24 |
| md5_secret | |
| bfd | True |
| advertise_vip | False |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| keepalive_interval | 60 |
| hold_time | 180 |
| send_community | True |
| shutdown | False |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------+
3 A new parameter called bfd_profile is introduced within which the BFD parameters are
configured. In the bfd_profile, enter the values for mintx, minrx, and multi.
[admin:abc-ctrl]: vrfcontext>
[admin:abc-ctrl]: vrfcontext> bfd_profile
[admin:abc-ctrl]: vrfcontext:bfd_profile> mintx 1500
[admin:abc-ctrl]: vrfcontext:bfd_profile> minrx 1500
[admin:abc-ctrl]: vrfcontext:bfd_profile> multi 3
[admin:abc-ctrl]: vrfcontext:bfd_profile> save
[admin:abc-ctrl]: vrfcontext> save
The BFD parameters are now configured as per the values entered.
In prior releases, the minimum timeout duration was 3 seconds. A minimum timeout duration as
low as 1.5 seconds is now supported.
BGP/BFD Visibility
NSX Advanced Load Balancer uses Quagga for BGP based scaling of virtual services. Therefore,
debugging or checking the BGP configuration or the status of the BGP peer was possible only by
logging into the Quagga instance of the Service Engine.
For more information, see How to Access and Use Quagga Shell using NSX Advanced Load
Balancer CLI.
To make debugging easier, with NSX Advanced Load Balancer release 20.1.1, these commands
can be viewed from the NSX Advanced Load Balancer Controller shell.
n Advertised Routes
n Peer Status
n Peer Info
n Running Configuration
Advertised Routes
Command: /serviceengine/<se_uuid>/bgp/advertised_routes
Filters applicable: vrf_ref, peer_ip
Use the command bgp advertised_routes to view the BGP routes advertised to configured
peers:
+----------------------+------------------------------+
| Field | Value |
+----------------------+------------------------------+
| vrf | seagent-default |
| namespace | none |
| advertised_routes[1] | |
| ipv4_routes | show ip bgp |
| | No BGP process is configured |
| | 10-79-168-63# |
| | |
| ipv6_routes | show ipv6 bgp |
| | No BGP process is configured |
| | 10-79-168-63# |
| | |
+----------------------+------------------------------+
This is the generic advertised routes output. To view the advertised routes for a specific VRF,
use the vrf_ref filter as shown below:
|                      |                                                       |
| ipv6_routes          | show bgp neighbors 100.64.50.3 advertised-routes      |
|                      | % No such neighbor or address family                  |
|                      | 10-79-168-63#                                         |
|                      |                                                       |
+----------------------+-------------------------------------------------------+
Note Use the peer filter to view the advertised routes for a specific peer using show
serviceengine <se_name> bgp advertised_routes filter vrf_ref <vrf_name> peer_ipv4
<peer_IP>.
Peer Status
Command: /serviceengine/<se_uuid>/bgp/peers_status
Filters applicable: vrf_ref
When advertising BGP routes to peers, use the BGP peer status to check whether the advertising
was successful.
Starting with NSX Advanced Load Balancer 21.1.3, the current state of the BGP peers on
a Service Engine can be viewed using show serviceengine <Service Engine name> bgp
peer_state. This shows the peer status of all the VRFs configured on the Service Engine.
| upOrDownTime | |
| peers_state[3] | |
| peer_ip | 100.64.19.12 |
| state | BGP_PEER_IDLE |
| upOrDownTime | |
| peers_state[4] | |
| peer_ip | 100.64.19.11 |
| state | BGP_PEER_NOT_ESTABLISHED |
| upOrDownTime | 00:13:22 |
+----------------+------------------------------------+
Use show serviceengine <Service Engine name> bgp peer_state filter vrf_ref global to
filter by VRF.
The following information can be viewed using show serviceengine <Service Engine name> bgp
peer_state:
n BGP Peer IP
n The State
n Up or Down Time
State: BGP_PEER_PREFIX_EXCEEDED
Description: Prefixes learnt from the peer exceeded the maximum limit. Check the VRF's
configuration.
State: BGP_PEER_NOT_APPLICABLE_TO_THIS_SE
Description: On the SE, for the VRF, no interface is configured with the peer's reachability
network.
Note This feature provides peer states cached on SE. To update the interval, use
serviceenginegroup->bgp_state_update_interval.
Peer Information
Command: /serviceengine/<se_uuid>/bgp/peers
Filters applicable:
n vrf_ref
n peer_ipv4
n peer_ipv6
|           | Foreign host: 100.64.50.21, Foreign port: 179                               |
|           | Nexthop: 100.64.50.14                                                       |
|           | Nexthop global: fe80::250:56ff:fe91:feb0                                    |
|           | Nexthop local: ::                                                           |
|           | BGP connection: non shared network                                          |
|           |                                                                             |
|           | Read thread: on Write thread: off                                           |
|           |                                                                             |
|           | 10-79-168-63#                                                               |
|           |                                                                             |
+-----------+-----------------------------------------------------------------------------+
+-----------+------------------------+
| Field | Value |
+-----------+------------------------+
| vrf | seagent-default |
| namespace | none |
| peer_info | show ip bgp neighbors |
| | 10-79-168-63# |
| | |
+-----------+------------------------+
Running Configuration
Command: /serviceengine/<se_uuid>/bgp/running_config
Filters applicable: vrf_ref
|           | Keepalives: 0 0                                                             |
|           | Route Refresh: 0 0                                                          |
|           | Capability: 0 0                                                             |
|           | Total: 0 0                                                                  |
|           | Minimum time between advertisement runs is 5 seconds                        |
|           |                                                                             |
|           | For address family: IPv4 Unicast                                            |
|           | Community attribute sent to this neighbor(both)                             |
|           | Inbound path policy configured                                              |
|           | Outbound path policy configured                                             |
|           | Route map for incoming advertisements is PEER_RM_IN_100.64.50.3             |
|           | Route map for outgoing advertisements is *PEER_RM_OUT_100.64.50.3           |
|           | 0 accepted prefixes                                                         |
|           |                                                                             |
|           | Connections established 0; dropped 0                                        |
|           | Last reset never                                                            |
|           | Next connect timer due in 4 seconds                                         |
|           | Read thread: off Write thread: off                                          |
|           |                                                                             |
|           | BGP neighbor is 100.64.50.21, remote AS 65000, local AS 65000, internal link|
|           | BGP version 4, remote router ID 2.226.39.17                                 |
|           | BGP state = Established, up for 04:52:38                                    |
|           | Last read 00:00:37, hold time is 180, keepalive interval is 60 seconds      |
|           |                                                                             |
|           | Neighbor capabilities:                                                      |
|           | 4 Byte AS: advertised and received                                          |
|           | Route refresh: advertised and received(old & new)                           |
|           | Address family IPv4 Unicast: advertised and received                        |
|           | Graceful Restart Capabilty: advertised and received                         |
|           | Remote Restart timer is 120 seconds                                         |
|           | Address families by peer: none                                              |
|           | Graceful restart informations:                                              |
|           | End-of-RIB send: IPv4 Unicast                                               |
|           | End-of-RIB received: IPv4 Unicast                                           |
|           | Message statistics:                                                         |
|           | Inq depth is 0                                                              |
|           | Outq depth is 0                                                             |
|           | Sent Rcvd                                                                   |
|           | Opens: 1 1                                                                  |
|           | Notifications: 0 0                                                          |
|           | Updates: 2 1                                                                |
|           | Keepalives: 294 293                                                         |
|           | Route Refresh: 0 0                                                          |
|           | Capability: 0 0                                                             |
|           | Total: 297 295                                                              |
|           | Minimum time between advertisement runs is 5 seconds                        |
|           |                                                                             |
|           | For address family: IPv4 Unicast                                            |
|           | Community attribute sent to this neighbor(both)                             |
|           | Inbound path policy configured                                              |
|           | Outbound path policy configured                                             |
|           | Route map for incoming advertisements is PEER_RM_IN_100.64.50.21            |
|           | Route map for outgoing advertisements is *PEER_RM_OUT_100.64.50.21          |
|           | 0 accepted prefixes                                                         |
|           |                                                                             |
|           | Connections established 1; dropped 0                                        |
|           | Last reset never                                                            |
|           | Local host: 100.64.50.14, Local port: 45618                                 |
|           | Foreign host: 100.64.50.21, Foreign port: 179                               |
|           | Nexthop: 100.64.50.14                                                       |
|           | Nexthop global: fe80::250:56ff:fe91:feb0                                    |
|           | Nexthop local: ::                                                           |
|           | BGP connection: non shared network                                          |
|           |                                                                             |
|           | Read thread: on Write thread: off                                           |
|           |                                                                             |
|           | 10-79-168-63#                                                               |
|           |                                                                             |
+-----------+-----------------------------------------------------------------------------+
You can view the current BGP configuration for all VRFs.
BFD Session Status
Command: /serviceengine/<se_uuid>/bfd/session_status
Filters applicable: vrf_ref
Use the show serviceengine <Service Engine IP address> bfd session_status command to
check the details of the BFD packets and the BGP session.
The following is the output of the BFD session status on NSX Advanced Load Balancer releases
prior to 21.1.2.
|           | Time=Down(05:300:19.723)          |
|           | CurrentTxInterval=1,000,000 us    |
|           | CurrentRxTimeout=0 us             |
|           | LocalDetectMulti=3                |
|           | LocalDesiredMinTx=1,000,000 us    |
|           | LocalRequiredMinRx=1,000,000 us   |
|           | RemoteDetectMulti=0               |
|           | RemoteDesiredMinTx=0 us           |
|           | RemoteRequiredMinRx=1 us          |
|           |                                   |
+-----------+-----------------------------------+
+-----------+-----------------------+
| Field | Value |
+-----------+-----------------------+
| vrf | seagent-default |
| namespace | none |
| status | There are 0 sessions: |
| | |
+-----------+-----------------------+
Note
n The peer_ipv4/ peer_ipv6 filters should always be used with the vrf_ref filter.
n When an invalid vrf_ref is provided, it defaults to the management vrf and when an invalid
peer filter is provided, an empty output is returned.
n With NSX Advanced Load Balancer 21.1.2, the status_level filter for the show serviceengine
<Service Engine name> bfd session_status command is not supported.
The community value is a 32-bit field that is divided into two sub-fields. The first 2 bytes encode
the AS number of the network that originated the community and the last 2 bytes carry a unique
number assigned by the AS. Communities add power to BGP, changing it from a routing protocol
to a tool for signalling and policy enforcement.
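The 32-bit layout described above can be illustrated with a short sketch (per RFC 1997; the helper functions are illustrative, not part of the product):

```python
# Sketch of the 32-bit standard BGP community layout: the high-order
# 16 bits carry the AS number, the low-order 16 bits a value assigned
# by that AS.

def encode_community(asn, value):
    assert 0 <= asn <= 0xFFFF and 0 <= value <= 0xFFFF
    return (asn << 16) | value

def decode_community(community):
    return community >> 16, community & 0xFFFF

c = encode_community(65000, 20)   # the community written "65000:20"
print(hex(c))                     # 0xfde80014
print(decode_community(c))        # (65000, 20)
```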
Use Cases
n BGP community is useful when there are common requirements for a range of IP addresses or
a network.
n It provides a better understanding of the network topology and routing policy requirements.
n It makes scalability, operation, and troubleshooting of a network easier. For more information
on the BGP community, see An Application of the BGP Community Attribute.
Working Principle
NSX Advanced Load Balancer supports the new ip_community option in the BGP configuration.
You can conveniently tag a virtual IP address (VIP) or a backend server IP address advertised from
an NSX Advanced Load Balancer Service Engine with appropriate communities. Tagging allows
BGP peers to handle BGP routes with discretion.
Configuration
Log in to the NSX Advanced Load Balancer Controller command line interface (CLI) and follow
these steps to configure the BGP community for all routes advertised to a BGP peer:
+----------------------------+-------------------------------------------------+
| Field                      | Value                                           |
+----------------------------+-------------------------------------------------+
| uuid                       | vrfcontext-ded10944-53da-4542-bbf1-1cd4f300fb29 |
| name                       | global                                          |
| bgp_profile                |                                                 |
| local_as                   | 65000                                           |
| ibgp                       | True                                            |
| keepalive_interval         | 60                                              |
| hold_time                  | 180                                             |
| send_community             | True                                            |
| community[1]               | internet                                        |
| community[2]               | 10:10                                           |
| community[3]               | 65000:20                                        |
| system_default             | True                                            |
| tenant_ref                 | admin                                           |
| cloud_ref                  | Default-Cloud                                   |
+----------------------------+-------------------------------------------------+
+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-ded10944-53da-4542-bbf1-1cd4f300fb29 |
| name | global |
| bgp_profile | |
| local_as | 65000 |
| ibgp | True |
| peers[1] | |
| remote_as | 1 |
| | |
| send_community | True |
| community[1] | internet |
| community[2] | 65000:20 |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+
The following example shows how to tag routes with a specific community that applies only to a
specific IP range. This IP-specific community overrides the default community in bgp_profile,
which applies to all routes.
Follow these steps to configure a BGP community for a single IP address (for example,
a VIP address) that is advertised to a BGP peer. When configuring a community for a single IP
address, ip_end is optional. You can, however, set both ip_begin and ip_end to the same IP
address without any issue.
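The range-matching behavior of ip_communities can be sketched as follows. This is a hypothetical helper for illustration only; the sample ranges mirror the example configuration shown in this section, and the resolution logic is an assumption about how a first-matching-range lookup would work:

```python
# Sketch: resolving which communities apply to an advertised address.
# A range with ip_end omitted collapses to the single ip_begin address;
# addresses outside every range fall back to the peer-level defaults.
import ipaddress

ip_communities = [
    {"ip_begin": "10.70.163.100", "ip_end": "10.70.163.200",
     "community": ["200:200", "100:100"]},
    {"ip_begin": "10.70.164.150", "community": ["150:150"]},
]
default_communities = ["internet", "65000:20"]

def communities_for(ip, ip_communities, default):
    addr = ipaddress.ip_address(ip)
    for entry in ip_communities:
        begin = ipaddress.ip_address(entry["ip_begin"])
        end = ipaddress.ip_address(entry.get("ip_end", entry["ip_begin"]))
        if begin <= addr <= end:
            return entry["community"]
    return default

print(communities_for("10.70.163.150", ip_communities, default_communities))
# ['200:200', '100:100']
print(communities_for("10.70.165.1", ip_communities, default_communities))
# ['internet', '65000:20']
```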
Use the following CLI commands to stop tagging BGP-advertised routes with the community.
This stops tagging routes while preserving the configuration.
| peers[1] | |
| remote_as | 1 |
| | |
| hold_time | 180 |
| send_community | False |
| community[1] | internet |
| community[2] | 65000:20 |
| ip_communities[1] | |
| ip_begin | 10.70.163.100 |
| ip_end | 10.70.163.200 |
| community[1] | 200:200 |
| community[2] | 100:100 |
| ip_communities[2] | |
| ip_begin | 10.70.164.150 |
| community[1] | 150:150 |
+--------------------------+----------------+
[admin:controller]: vrfcontext:bgp_profile> save
Use the following NSX Advanced Load Balancer CLI commands to delete the configured
ip_communities:
| send_community | False |
| community[1] | local-AS |
| community[2] | no-export |
| ip_communities[1] | |
| ip_begin | 10.70.163.100 |
| ip_end | 10.70.163.200 |
| community[1] | 200:200 |
| community[2] | 100:100 |
| ip_communities[2] | |
| ip_begin | 10.70.164.150 |
| community[1] | 150:150 |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+
| ip_communities[1] | |
| ip_begin | 10.70.164.150 |
| community[1] | 150:150 |
+--------------------------+----------------+
Follow these steps to enable community tags for BGP-advertised routes:
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+
It is possible to tag routes advertised to a BGP peer with a standard community. NSX Advanced
Load Balancer supports tagging of routes in BGP submode only; tagging communities on a
per-route basis is not supported.
+----------------------------+-------------------------------------------------+
| Field                      | Value                                           |
+----------------------------+-------------------------------------------------+
| uuid                       | vrfcontext-3cc726d3-d94a-4eb0-9c70-f70d7e1b185e |
| name                       | global                                          |
| bgp_profile                |                                                 |
| local_as                   | 65000                                           |
| ibgp                       | True                                            |
| keepalive_interval         | 60                                              |
| hold_time                  | 180                                             |
| send_community             | True                                            |
| community[1]               | internet                                        |
| community[2]               | 10:10                                           |
| community[3]               | 65000:20                                        |
| system_default             | True                                            |
| tenant_ref                 | admin                                           |
| cloud_ref                  | Default-Cloud                                   |
+----------------------------+-------------------------------------------------+
Multihop BGP
NSX Advanced Load Balancer supports multihop BGP. Peer configuration is supported in all its
variations, including iBGP multihop.
n eBGP multihop: BGP peers are more than one hop away and in a different autonomous
system. BGP peers are not directly connected.
n iBGP multihop: BGP peers are in the same autonomous system but more than one hop away.
Configuring eBGP
To configure eBGP multihop, the per-peer configuration parameter ebgp_multihop specifies
the number of next hops. The two main configuration sections are:
n The eBGP multihop peer: the multihop peer must be configured with the same subnet as
that of the interface network.
n Configuring the BGP peer and intermediate routers: static or default route configuration on
the NSX Advanced Load Balancer Controller, the intermediate router, and the BGP peer.
Provide the following values for BGP AS, IPv4 Prefix, IPv4 Peer, Remote AS, and Multihop:
n AS – 65000
n Type – eBGP
n Remote AS – 1
n BFD – Yes
For more information on configuring BGP on NSX Advanced Load Balancer, see BGP Support
for Scaling Virtual Services.
2 The following diagram explains the required configuration for multihop eBGP peers:
[Diagram: multihop eBGP topology. The SE interface (10.10.116.17/24) reaches the BGP peer
network (10.10.3.0/24, router R2 interface 10.10.3.16/24) through intermediate router R1
(10.10.116.13/24).]
n For a VIP configured in a random subnet, the intermediate router(s) need a static route (or
some default route) to it, for example: 10.10.226.0/24 next hop 10.10.116.17.
n On the NSX Advanced Load Balancer side, configure a static or default route to reach the
peer network (10.10.3.0/24) using router R1: 10.10.3.0/24 next hop 10.10.116.12.
n On the peer router, configure 10.10.116.0/24 next hop 10.10.3.16. If no static route is
specified, there needs to be some default route through which to reach the SE interface
network.
n On the peer router, add the following neighbor configuration to peer with the Avi SE, which
is two hops away: neighbor 10.10.116.17 ebgp-multihop 2. VIP routes on router R2 are then
learned over this multihop session.
Configuring iBGP
A multihop iBGP configuration is similar to that of a normal iBGP peer. Once the proper peer
placement subnet, peer IP and other details are provided, the Service Engine will initiate peering
with the router.
Provide the following values to BGP AS, IPv4 Prefix, and IPv4 Peer, and select iBGP:
md5_secret abcd
: vrfcontext:bgp_profile:peers > save
: vrfcontext:bgp_profile > save
: vrfcontext > save
If graceful restart is configured and the SE interfaces used for BGP do not have floating
interface IPs, the virtual service is marked down. It recovers when the floating interface IPs
are added.
The graceful restart feature also advertises the BGP graceful restart option to the BGP peer.
The peer preserves the routes from the SE for 120 seconds even when the connection is lost.
Note
n The graceful restart timer should be less than the hold timer.
n The graceful restart will be allowed only if the linked SE group is legacy HA and
distribute_load_active_standby is not enabled.
n If you move an SE group from legacy HA mode to any other mode, and a network service
with graceful restart exists that refers to this SE group, the graceful restart will fail.
Restrictions
The following are the restrictions of BGP graceful restart:
n You can set the BGP graceful restart feature only on Legacy HA, with
distribute_load_active_standby disabled. This ensures that the routes are advertised from
only one SE. The floating interface IP is constant and always available on the SE advertising
the routes (VIPs).
n Requires a floating interface IP for the interface from where the peering happens.
Configuration
The graceful restart configuration is as follows:
networkservice:routing_service> graceful_restart
networkservice:routing_service>
NSX Advanced Load Balancer relies on a variety of methods to detect Service Engine failures, as
listed:
When vSphere High Availability is enabled, if the Controller detects that a vSphere host failure
has occurred, the SEs transition to OPER_PARTITIONED or OPER_DOWN before six consecutive
heartbeats are missed.
n SEs (on the failed host) which have operational virtual services transition to OPER_PARTITIONED
state.
n SEs (on the failed host) which do not have any operational virtual services transition to
OPER_DOWN state.
To ensure holistic failure detection, the Service Engine datapath heartbeat mechanism was
devised, in which the Service Engines send periodic heartbeat messages over the data interfaces.
By default, this communication is set to standard mode. It can also be configured for the
aggressive mode, as discussed in the Enabling Aggressive Mode using the CLI section.
1 Custom EtherTypes
This is the default mode applicable when the Service Engines are in the same subnet. The
EtherTypes used are:
n ETHERTYPE_AVI_IPC 0XA1C0
n ETHERTYPE_AVI_MACINMAC 0XA1C1
n ETHERTYPE_AVI_MACINMAC_TXONLY 0XA1C2
2 IP Encapsulation
This mode is applicable when the infrastructure does not permit EtherTypes through. Even
in this mode, it is assumed that the Service Engines are in the same subnet. This mode is
applicable for AWS by default.
#shell
Login: admin
Password:
[GB-slough-cam:cd-avi-cntrl1]: > configure serviceengineproperties
[GB-slough-cam:cd-avi-cntrl1]: seproperties> se_bootup_properties
[GB-slough-cam:cd-avi-cntrl1]: seproperties:se_bootup_properties> se_ip_encap_ipc 1
Note For changes to the se_ip_encap_ipc command to be effective, reboot all Service
Engines in the Service Engine group.
n IPPROTO_AVI_IPC 73
n IPPROTO_AVI_MACINMAC 97
n IPPROTO_AVI_MACINMAC_TX 63
3 IP packets
This mode is applicable when the Service Engines are in different subnets. The IP packet
destined to the destination Service Engine’s interface IP is sent to the next-hop router. The IP
protocols used in this mode are:
n IPPROTO_AVI_IPC_L3 75
n IPPROTO_AVI_MACINMAC 97
n Bidirectional Forwarding Detection (BFD) detects SE failures and prompts the router not to
use the route to the failed SE for flow load balancing.
1 Virtual service’s primary SE sends periodic heartbeat messages to all virtual services’
secondary SEs.
2 If an SE fails to respond repeatedly, the primary SE suspects that the said SE may be down.
3 The primary SE reports the suspected SE to the NSX Advanced Load Balancer Controller.
4 The NSX Advanced Load Balancer Controller sends a sequence of echo messages to confirm
whether the suspected Service Engine is indeed down.
Based on the time frame and frequency of heartbeat messages sent across the Service Engines,
the modes of operation are standard and aggressive. The algorithm for both modes is the same,
with a difference in frequency and time frame, as explained below:
1 The primary SE sends heartbeat messages to the secondary SEs at a configured interval, for
example, 100 milliseconds. A string of consecutive failures to respond indicates that the given
SE could be down. According to the settings shown in the second column, the primary SE
suspects a secondary SE to be down if:
n 10 consecutive heartbeat messages fail within 1 second (aggressive). This can be tuned to
be more aggressive with the configuration parameters below.
n As soon as the primary SE suspects that the secondary is down, it apprises the NSX
Advanced Load Balancer Controller, which then sends echo messages to the suspect.
According to the settings shown in the third column, the Controller declares the suspect
down after the configured number of echo misses.
By summing the values in the second and third columns, the Controller reaches a failure
conclusion within 9 seconds under standard settings, but within just 5 seconds under aggressive
settings.
The time taken to detect Service Engine failure based on SE-DP heartbeat failure is as follows:
Failure detection as aggressive as 2 seconds can be achieved with the following configuration.
However, this is recommended only in bare-metal environments; in virtualized environments, it
can lead to false positives.
configure serviceengineproperties
se_runtime_properties
| dp_aggressive_hb_frequency | 100 milliseconds |
| dp_aggressive_hb_timeout_count | 5 |
se_agent_properties
| controller_echo_rpc_aggressive_timeout | 500 milliseconds |
| controller_echo_miss_aggressive_limit | 3 |
Log in to the shell prompt of the NSX Advanced Load Balancer Controller and enter the
following commands under the chosen Service Engine group:
| aggressive_failure_detection | True
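The 2-second figure quoted above follows directly from these parameters. A short arithmetic sketch (the function is illustrative; the parameter names mirror the configuration above):

```python
# Sketch: deriving the aggressive failure-detection time from the
# heartbeat and Controller-echo parameters. All values in milliseconds.

def detection_time_ms(hb_frequency, hb_timeout_count,
                      echo_timeout, echo_miss_limit):
    suspect_time = hb_frequency * hb_timeout_count   # SE-side suspicion phase
    confirm_time = echo_timeout * echo_miss_limit    # Controller echo phase
    return suspect_time + confirm_time

# dp_aggressive_hb_frequency=100 ms, dp_aggressive_hb_timeout_count=5,
# controller_echo_rpc_aggressive_timeout=500 ms,
# controller_echo_miss_aggressive_limit=3:
print(detection_time_ms(100, 5, 500, 3))  # 2000 ms, the 2 s figure above
```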
1 Double-check the configuration on the router and NSX Advanced Load Balancer. Make sure
that the peer IPs, subnets, and AS numbers are correct.
2 Verify the MD5 passwords are the same on the router and NSX Advanced Load Balancer.
3 Run “show serviceengine bgp” to determine the state of the BGP session initiated by the NSX
Advanced Load Balancer SE.
4 Verify there are no ACLs/route maps on the router preventing the sessions/advertisements.
5 Additionally, if needed, capture packets using tcpdump (tcpdump -M) on the router and
check the BGP negotiations.
How to Access and Use Quagga Shell using NSX Advanced Load
Balancer CLI
Quagga is a network routing software suite providing implementations of various routing
protocols. NSX Advanced Load Balancer uses Quagga for BGP-based scaling of virtual services.
For more information on BGP scaling, see BGP Support for Scaling Virtual Services.
Instructions
The Quagga shell is used to check the BGP configuration and the status of BGP peers.
Note In this example, all the commands are executed from the default namespace on an NSX
Advanced Load Balancer SE hosting a virtual service enabled for BGP. To list the namespaces
available, use the command ip netns. To switch to the desired datapath namespace, use the
following command.
Use the netcat localhost bgpd command instead of the telnet localhost bgpd command to get
access to the Quagga shell.
Quagga-bgp>
Quagga-bgp> en
Quagga-bgp# show run
Current configuration:
!
password <password>
log file /var/lib/avi/log/bgp/0_bgpd.log
!
router bgp 65000
bgp router-id 1.2.87.205
network 10.140.99.153/32
neighbor 10.140.60.155 remote-as 3
neighbor 10.140.60.155 password <password>
neighbor 10.140.60.155 advertisement-interval 5
neighbor 10.140.60.155 timers 60 180
neighbor 10.140.60.155 timers connect 10
neighbor 10.140.60.155 distribute-list 2 out
neighbor 10.140.99.157 remote-as 2
neighbor 10.140.99.157 password <password>
neighbor 10.140.99.157 advertisement-interval 5
neighbor 10.140.99.157 timers 60 180
neighbor 10.140.99.157 timers connect 10
neighbor 10.140.99.157 distribute-list 1 out
!
access-list 1 permit 10.140.99.153
!
line vty
!
end
Use the command show bgp neighbors to check the BGP peering status:
Message statistics:
Inq depth is 0
Outq depth is 0
Sent Rcvd
Opens: 6 3
Notifications: 3 0
Updates: 4 1
Keepalives: 39103 39102
Route Refresh: 0 0
Capability: 0 0
Total: 39116 39106
Minimum time between advertisement runs is 5 seconds
Not all peers might be applicable on a particular SE. Only those peers with a subnet matching
any of the interfaces on the SE are applicable on the SE.
Note Peers in this section refer only to those BGP peers that have matching interfaces on the SE.
If peers with advertise_vip set are present, at least one such peer should be in the UP state. If
peers with advertise_snat_ip set are present, at least one such peer must be in the UP state. For
the peer monitor to mark the status as UP, both the conditions mentioned above have to be met.
The peer monitor marks the status as DOWN if either condition fails.
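The peer-monitor rule described above can be sketched as a small predicate. This is a hypothetical model for illustration, assuming each peer is represented as a dict with advertise_vip / advertise_snat_ip flags and an up/down state:

```python
# Sketch of the peer-monitor status rule: each advertise category that
# has peers must have at least one such peer UP; an absent category is
# vacuously satisfied.

def peer_monitor_status(peers):
    vip_peers = [p for p in peers if p.get("advertise_vip")]
    snat_peers = [p for p in peers if p.get("advertise_snat_ip")]
    vip_ok = not vip_peers or any(p["up"] for p in vip_peers)
    snat_ok = not snat_peers or any(p["up"] for p in snat_peers)
    return "UP" if (vip_ok and snat_ok) else "DOWN"

peers = [
    {"advertise_vip": True, "up": True},
    {"advertise_vip": True, "up": False},
    {"advertise_snat_ip": True, "up": False},
]
print(peer_monitor_status(peers))  # DOWN: no advertise_snat_ip peer is up
```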
Note Similar to an IPv4 BGP peer, the IPv6 peer must be in the Service Engine’s directly-
connected network. If it is an eBGP multihop peer, then you need to configure the IPv6 subnet of
the Service Engine’s interface as subnet6, through which the multihop peer is reachable.
Using UI
To configure BGP IPv6 peer on the NSX Advanced Load Balancer UI:
Procedure
1 Navigate to Infrastructure > Routing and select the required cloud from the drop-down menu.
2 Click the BGP Peering tab and click the edit icon to create a new peer.
3 Enter the desired BGP autonomous system value in the BGP AS field.
4 Enter the IPv6 Prefix and IPv6 Peer details along with the MD5 Secret value. In the case of
eBGP, enter relevant information in the fields for Remote AS and Multihop.
Note You can save the configuration by entering just the IPv6 prefix and peer details.
Corresponding IPv4 details are optional. However, for either IPv4 or IPv6, both prefix and
peer details are required.
Using CLI
To configure an IPv6 BGP peer, log in to the Controller shell and execute the following commands:
Syntax
The following is an example of configuring an IPv6 BGP peer, with an IP address of 2006::54, and
a subnet of 2006::/64.
Procedure
1 Navigate to Infrastructure > Routing and select the required cloud from the drop-down menu.
2 Click the BGP Peering tab and click the edit icon to create a new peer.
3 Enter the IPv4 peer details under the IPv4 Prefix and IPv4 Peer fields.
4 Enter the IPv6 peer details under IPv6 Prefix and IPv6 Peer fields.
Results
You can configure the peer details using CLI as explained below:
Note Similar to dual-stack virtual service, the dual-stack peer considered for BGP virtual service
placement must have both its IPv4 (peer_ip/subnet) and IPv6 (peer_ip6/subnet6) located on the
same interface. The IPv6 routes will be advertised over the IPv6 peering and the IPv4 routes over
the IPv4 peering.
Procedure
6 Under Step 4: Advanced, click the Advertise VIP via BGP option to enable BGP advertising
for the configured virtual service.
Verifying Configuration
Use the show serviceengine service_engine_IP_address bgp command to verify the
configuration.
n To support elastic scaling using ECMP as described in BGP Support for Scaling Virtual
Services.
n To allow north-south VIPs to be allocated from a subnet other than that in which the cluster
nodes’ external interface resides.
Note NSX Advanced Load Balancer Controller must be outside the OpenShift/K8S cluster and
cannot run as a container alongside the NSX Advanced Load Balancer SE container.
Enabling BGP Features in NSX Advanced Load Balancer for Kubernetes and
OpenShift
Configuring BGP features in NSX Advanced Load Balancer is accomplished by configuring a BGP
profile and through an annotation in the Kubernetes/OpenShift service or route/ingress definition.
The BGP profile specifies the local Autonomous System (AS) ID that the NSX Advanced Load
Balancer Service Engine and each of the peer BGP routers are in and the IP address of each peer
BGP router.
Procedure
If the cloud is set up during the initial installation of the NSX Advanced Load Balancer
Controller using the setup wizard, the cloud name is “Default-Cloud,” as shown in the image.
3 Click the BGP Peering tab and click the edit icon to reveal more fields.
5 Click Add New Peer to reveal a set of fields appropriate to iBGP or eBGP.
f Set Multihop to 0.
n BFD (by default, enables very fast link failure detection using BFD).
n Advertise VIP.
h Advertise SNAT can be turned off as advertisement of SNAT is not relevant for
Kubernetes/OpenShift environments.
Results
The Edit BGP Peering screen for eBGP type is as shown in the image:
Select one: 0
Updating an existing object. Currently, the object is:
+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-f834cafa-b572-4ec3-9559-db0573f26d2f |
| name | global |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | OpenShift-Cloud |
+----------------------------+-------------------------------------------------+
: vrfcontext > bgp_profile
: vrfcontext:bgp_profile > local_as 65536
: vrfcontext:bgp_profile > ebgp
: vrfcontext:bgp_profile > peers peer_ip 10.115.0.1 subnet 10.115.0.0/16 md5_secret abcd
remote_as 65537
: vrfcontext:bgp_profile:peers > save
: vrfcontext:bgp_profile > save
: vrfcontext > save
: >
For example, to enable BGP RHI for a north-south service, use the following Kubernetes/
OpenShift service definition:
apiVersion: v1
kind: Service
metadata:
  name: avisvc
  labels:
    svc: avisvc
  annotations:
    avi_proxy: '{"virtualservice":{"enable_rhi": true, "east_west_placement": false}}'
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
  selector:
    name: avitest
In some instances, it can be desirable to specify that the VIP be allocated from a specifically
named subnet. This can be achieved by defining the network in NSX Advanced Load Balancer and
then referencing the network by name in the service annotation as follows:
apiVersion: v1
kind: Service
metadata:
  name: avisvc
  labels:
    svc: avisvc
  annotations:
    avi_proxy: >-
      {"virtualservice":{"enable_rhi": true, "east_west_placement": false,
      "auto_allocate_ip": true,
      "ipam_network_subnet": {"network_ref": "/api/network/?name=ns-cluster-network-bgp"}}}
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
  selector:
    name: avitest
When explicitly referencing a network in this way, it is not necessary to include that network in the
Usable Networks list in the north-south IPAM object.
n Networks created in the NSX Advanced Load Balancer “admin” tenant can be referenced in
any Kubernetes namespace/OpenShift project.
n Networks created in a specific NSX Advanced Load Balancer tenant can be referenced only in
the corresponding namespace/project.
n Networks with the same name, defining different subnets, can be created in different tenants.
Combining these capabilities allows for great flexibility in the allocation of VIPs in different
subnets, for example:
n Add network(s) defined in the “admin” tenant to the north-south IPAM configuration.
n Add network(s) defined in the non-admin tenants only to the north-south IPAM
configuration.
n Allow application owners to place services in the specific subnet(s) through annotations
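The avi_proxy annotation value is a plain JSON string, so it can also be generated programmatically rather than hand-written. A sketch (the helper name and the network name are illustrative; the field names follow the service definitions above):

```python
# Sketch: building the avi_proxy annotation value for a north-south
# service, optionally pinning the VIP to a named network.
import json

def avi_proxy_annotation(network_name=None):
    vs = {"enable_rhi": True, "east_west_placement": False}
    if network_name:
        vs["auto_allocate_ip"] = True
        vs["ipam_network_subnet"] = {
            "network_ref": f"/api/network/?name={network_name}"
        }
    return json.dumps({"virtualservice": vs})

print(avi_proxy_annotation())
print(avi_proxy_annotation("ns-cluster-network-bgp"))
```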
The following is the packet flow when Direct Server Return (DSR) is enabled:
n The load balancer does not perform any address translation for the incoming requests.
n The traffic is passed to the pool members without any changes in the source and the
destination address.
n The packet arrives at the server with the virtual IP address as the destination address.
n The server responds with the virtual IP address as the source address. The return path to the
client does not flow back through the load balancer and thus the term, Direct Server Return.
Use Case
DSR is often applicable to audio and video applications as these applications are sensitive to
latency.
Supported Modes
The supported modes for DSR are as follows:
+--------------------------+------------------------------------------------------------+
| Feature                  | Support                                                    |
+--------------------------+------------------------------------------------------------+
| Dataplane drivers        | DPDK and PCAP support for Linux server cloud               |
| Load balancing algorithm | Only Consistent Hash is supported for L2 and L3 DSR        |
| TCP and UDP              | Support for both TCP Fast Path and UDP Fast Path in L2     |
|                          | and L3 DSR                                                 |
+--------------------------+------------------------------------------------------------+
Layer 2 DSR
n Destination MAC address for the incoming packets is changed to server MAC address.
The following diagram exhibits a packet flow diagram for Layer 2 DSR:
[Diagram: Layer 2 DSR packet flow. (1) The client sends a request to the VIP on the load
balancer (data vNIC IP 10.140.116.58). (2) The load balancer changes the destination MAC to
the server MAC and forwards the packet to Server-1 (00:50:56:bd:95:85) or Server-2
(00:50:56:bd:95:86), each configured with loopback IP 10.140.116.210. (3) The server responds
directly to the client.]
Packet Flow
n Clients send requests to a Virtual IP (VIP) served by the Load Balancer (Step 1)
Layer 2 - DSR
Log in to the NSX Advanced Load Balancer CLI and use the configure networkprofile
<profile name> command to enter the TCP fast path profile mode. For Layer 2 DSR, enter
the value for the DSR type as dsr_type_l2.
Once the network profile is created, create an L4 application virtual service with the DSR network
profile created above and attach DSR-capable servers to the pool associated with the virtual
service.
Configuring Server
ifconfig lo:0 <VIP ip> netmask 255.255.255.255 -arp up
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/<interface of pool server IP configured>/rp_filter
sysctl -w net.ipv4.ip_forward=1
Configuring Network Profile for DSR over TCP and UDP using UI
Network profiles for DSR over TCP and UDP can be created using NSX Advanced Load Balancer
UI. Log in to the UI and follow the steps mentioned below.
Procedure
1 Navigate to Templates > Profiles > TCP/UDP. Click Create to create a new TCP profile or
select the existing one to modify.
2 Provide the desired name and select TCP Fast Path as Type. Select the following options:
b Use the drop-down menu for DSR Type and select L2 or L3 as per the requirement.
3 For UDP fast path profile, select UDP Fast Path as Type. Select the following options:
b Use the drop-down menu for DSR Type and select L2 or L3 as per the requirement.
For Layer 3 DSR over TCP, enter the value for the DSR type as dsr_type_l3 and the
encapsulation type as encap_ipinip or encap_gre. This creates the DSR profile over TCP
(default L3, with IPinIP or GRE encapsulation respectively).
Similarly, for Layer 3 DSR over UDP, enter dsr_type_l3 with encap_ipinip or encap_gre in a
UDP fast path profile. This creates the DSR profile over UDP (default L3, with IPinIP or GRE
encapsulation respectively).
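As an illustrative sketch of Layer 3 profile creation over TCP: the profile name is hypothetical, and the dsr_encap_type field name is an assumption inferred from the encap_ipinip and encap_gre values above; the exact prompts can vary by version:

```
[admin:controller]: > configure networkprofile dsr-l3-profile
[admin:controller]: networkprofile> profile
[admin:controller]: networkprofile:profile> type protocol_type_tcp_fast_path
[admin:controller]: networkprofile:profile> tcp_fast_path_profile
[admin:controller]: networkprofile:profile:tcp_fast_path_profile> dsr_profile
[admin:controller]: networkprofile:profile:tcp_fast_path_profile:dsr_profile> dsr_type dsr_type_l3
[admin:controller]: networkprofile:profile:tcp_fast_path_profile:dsr_profile> dsr_encap_type encap_ipinip
[admin:controller]: networkprofile:profile:tcp_fast_path_profile:dsr_profile> save
[admin:controller]: networkprofile:profile:tcp_fast_path_profile> save
[admin:controller]: networkprofile:profile> save
[admin:controller]: networkprofile> save
```

For GRE encapsulation, substitute encap_gre; for UDP, use a UDP fast path profile instead.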
Layer 3 DSR
This section discusses Layer 3 DSR in detail.
n Tier 1: L3 DSR
n Deployment mode: Auto gateway and traffic enabling should be disabled for the deployment
mode when Layer 7 virtual service is configured (in the deployment mode Tier-2 as shown
below).
n If the SEs are scaled out in the Tier-2 deployment mode, pool members are added manually
once new SEs are added.
Note
n An IP-in-IP tunnel is created from the load balancer to the pool members, which can be one
or more router hops away.
n The incoming packets from clients are encapsulated in IP-in-IP with source as the Service
Engine’s interface IP address and destination as the back-end server IP address.
n In the case of the Generic Routing Encapsulation (GRE) tunnel, the incoming packets from
clients are encapsulated in a GRE header, followed by the outer IP header (delivery header).
Deployment Modes
Tier-1
n Layer 4 virtual service is connected to application servers which terminate the connections.
Pool members are the application servers.
n Servers handle the IPinIP packets. The loopback interface is configured with the
corresponding virtual service IP address. The service listening on this interface receives
packets and responds to the client directly in the return path.
Tier-2
n Layer 4 virtual service is connected to the corresponding Layer 7 virtual service (which has the
same virtual service IP address as Layer 4 virtual service), which terminates the tunnel.
n Layer 4 virtual service’s pool members will be Service Engines of the corresponding Layer 7
virtual services.
n For the Layer 7 virtual service, traffic is disabled so that it does not perform ARP.
Packet Flow
n IPinIP packets reach one of the Service Engines of the Layer 7 virtual service. That SE
decapsulates the IPinIP packet and hands it to the corresponding Layer 7 virtual service,
which sends it to the back end servers.
n Return packets from the backend servers are received at the virtual service, and the virtual
service forwards the packet directly to the client.
n The following diagram exhibits packet flow for the tier-2 deployment in the Layer 3 mode:
[Diagram: the client sends a packet (S: C-IP, C-Port; D: VIP, VIP-Port) to SE-1 hosting the
Layer 4 virtual service. SE-1 encapsulates it in IP-in-IP (S: SE-b-IP; D: SE3-f-IP) and forwards
it to SE-3 hosting the Layer 7 virtual service. SE-3 load balances it to a server (S: SE3-b-IP,
SE3-b-Port; D: App-IP, Pool-port). The server replies to SE-3 (S: App-IP, Pool-port; D: SE3-b-IP,
SE3-b-port), and SE-3 responds directly to the client (S: VIP, VIP-port; D: C-IP, C-port). SE-2
and SE-4 are additional Service Engines in the deployment.]
Creating Virtual Service and Associating it with the Network Profile (for Tier-2 deployment)
Navigate to Application > Virtual Services and click Create to add a new virtual service. Provide
the following information:
n Provide the desired name for the virtual service and IP address.
n Select the network profile created in the previous step for Tier-2 deployment from the
TCP/UDP Profile drop-down menu.
Note The Traffic Enabled check box must not be selected for Tier-2 deployment.
Configuring Server
On Linux servers:
modprobe ipip
sysctl -w net.ipv4.ip_forward=1
On Windows servers:
netsh interface ipv4 set interface "Ethernet0" forwarding=enabled
netsh interface ipv4 set interface "Ethernet1" forwarding=enabled
netsh interface ipv4 set interface "Ethernet1" weakhostreceive=enabled
netsh interface ipv4 set interface "Loopback" weakhostreceive=enabled
netsh interface ipv4 set interface "Loopback" weakhostsend=enabled
When new application servers are deployed, the servers need external connectivity for
manageability. In the absence of a router in the server networks, the NSX Advanced Load
Balancer SE can be used for routing the traffic of server networks.
Another use case is when virtual services use an application profile with the Preserve Client IP
option enabled. In this case, back end servers receive traffic with the source IP set to the IP of
the originating clients, and the NSX Advanced Load Balancer SE's IP must be configured as the
default gateway for servers to route all traffic back through the SEs to the clients.
Scope
The following features are supported:
n VMware write access clouds are also supported when configured using the CLI.
n NSX Advanced Load Balancer supports IP routing for VMware cloud deployments in write
access mode. For this feature to work on VMware write access clouds, at least one virtual
service must be configured with the following configurations:
n One arm (in the two-arm mode deployment) must be placed in the back end network. For
this network, the SE acts as the default gateway.
n The HA mode must be legacy HA (active/standby) only for SE groups, and routing must be
enabled in the corresponding Network Service.
n IP routing cannot be enabled in conjunction with the distribute load option set in the SE group
configuration.
n In VMware write access mode, this feature requires a virtual service to have already been
created. Creating the virtual service spawns the required Service Engines before MAC
masquerading is tested.
Use Case
[Diagram: two SEs in legacy HA (active/standby) in a Linux server cloud or a VMware no-access
cloud (no auto-creation of SEs). The front end network FE-NW 10.10.40.0/24 has SE interface IPs
10.10.40.1/24 and 10.10.40.2/24 and floating IP 10.10.40.11/24. The back end networks are
BE-NW-1 10.10.10.0/24 (floating IP 10.10.10.11) and BE-NW-3 10.10.30.0/24.]
Briefly, enabling IP routing requires the following configurations to be done at various points in the
network:
n On the NSX Advanced Load Balancer Controller, enable IP routing for the SE group. This has
to be configured through Network Service of routing_service type.
n On the front end router, configure static routes to the back end server networks with the next
hop as floating IP in the front end network.
n If BGP is enabled in the network and BGP peers are configured on the SEs, then enable
Advertise back end subnets using BGP for the SE group in the above routing enabled Network
Service.
n On the back end servers, configure the SE’s floating IP in the back end server network as the
default gateway.
Steps to configure the IP routing (default gateway) feature are listed below. UI and CLI in each
step are just two different ways of configuring the same step.
Procedure
c Configure Floating IP Addresses (for instance, 10.10.10.11), one on each back end network.
These IP addresses will get configured on the active SE and will be taken over by the
standby SE (new-active) upon failover.
d If there are no BGP peers configured, then configure a Floating IP address for front end
networks (for instance, 10.10.40.11) using the above Network Service configuration.
Enable IP routing on all SEs in the SE group using Network Service configuration. For more
details, see Network Service.
3 The above steps complete the configuration of routing for Service Engine Group via Network
Service. However, the network is incomplete without the front end routers and back end
servers being configured accordingly.
4 Front end router configuration (if no BGP peers are configured on SE). Configure the front end
router with a static route to the back end server network (with next-hop pointing to floating
interface IP of SE in front end network).
a Configure the default gateway of back end server(s) to point to floating interface IP of SE
(the one in server network).
This ensures that all the traffic from the back end network, including return (VIP) traffic,
uses the SE for all northbound traffic.
6 Configure the default gateway of SE to the front end as needed. Navigate to Infrastructure >
Routing > Static Route > Create.
n If the front end supports BGP peering, then there is no necessity to configure floating IPs on
the front end interface (skip step 1.d above).
n Also, you do not have to configure static routes in the front-end router (skip step 3 above).
Procedure
On the NSX Advanced Load Balancer Controller, configure BGP Peers network and IP
Address.
2 Navigate to Infrastructure > Service Engine Group > Edit > Advanced. Enable Advertise
back-end subnets via BGP. This UI knob will appear only if Enable IP Routing option is
selected.
Configure Advertise Back end Networks of the Service Engine Group through its
corresponding Network Service. For more information, see Network Service.
3 Configure the application profile to preserve client IPs for associated virtual service(s). This
step is to be performed before any virtual service using the given application profile is
enabled.
This configuration will not succeed if enable_routing is not yet configured. This configuration
is mutually exclusive with the Connection Multiplexing option for L7 application profiles.
4 Create a virtual service with an application profile for which preserve client IP is enabled.
On enabling the knob, flow-based routing is enabled for all the incoming traffic on all the
interfaces in a VRF. The Service Engine caches the source MAC address of incoming traffic and
forwards return packets to the same next hop from which it received the traffic.
Note For more information on NSX Advanced Load Balancer Routing GRO and TSO subject
to environment capabilities, see TSO, GRO, RSS, and Blocklist Feature on NSX Advanced Load
Balancer.
You can configure the routing function on a per-VRF basis. The existing routing
functions and their associated configuration, such as enable_routing, floating_interface_ip,
enable_vip_on_all_interfaces, and MAC masquerade under the SE group, are grouped under the
routing_service service type.
Note Network Service can be configured only using CLI. The Network Service will be in effect on
Active SE only if an interface of the corresponding VRF is present on the Service Engine.
enable_routing
floating_intf_ip 10.10.10.11
floating_intf_ip 10.10.40.11
advertise_backend_networks
enable_vip_on_all_interfaces
floating_intf_ip_se_2 10.10.20.11
floating_intf_ip_se_2 10.10.30.11
nat_policy_ref nat-policy
save
save
Supported Environments
The routing auto gateway functions are supported in the following environments:
Configure a network service corresponding to the SE group and set enable_auto_gateway to
True for the corresponding network service catering to routing.
Log in to the NSX Advanced Load Balancer Controller CLI and execute the following commands:
| Field | Value |
+--------------------------------+-----------------------------------------------------+
| uuid | networkservice-1bcd0e3a-4c3d-4e3e-8d1a-619120f9d68f |
| name                           | NS-Default-Group-Global                             |
| se_group_ref | Default-Group |
| vrf_ref | global |
| service_type | ROUTING_SERVICE |
| routing_service | |
| enable_routing | True |
| enable_auto_gateway | True |
| nat_policy_ref | nat-policy |
| | |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+--------------------------------+-----------------------------------------------------+
Network objects in the NSX Advanced Load Balancer govern IP address allocation for the
following:
Note
n NSX Advanced Load Balancer SE will acquire IP addresses for its management network (used
to communicate with the NSX Advanced Load Balancer Controller) and its data networks (used
for data plane/ load balancing) from these Network objects.
n Each Cloud configured on the NSX Advanced Load Balancer can have multiple Network
objects defined, each for the various networks used to provide load balancing.
n Network objects are not used for native public-cloud integrations such as AWS, Azure, and
GCP. These are handled from within the Cloud object on the NSX Advanced Load Balancer.
IP address allocation in the Network object can be either through DHCP or Static IP Pools.
If IP address allocation is through static pools, you need to configure these static IP pools.
1 Either a single pool or a dedicated pool of IPs can be used for both VIPs and NSX
Advanced Load Balancer SEs.
2 To use a single pool of IPs, select the Use Static IP Address for VIPs and SE
check box and specify the range of IPs from the configured IP subnet.
3 To use a dedicated pool of IPs for VIPs, deselect the Use Static IP Address for
VIPs and SE check box.
n Select Use for VIPs to specify the range of IPs from the configured IP Subnet used
for VIPs.
n Select Use for Service Engines to specify the range of IPs from the configured IP
subnet used for the NSX Advanced Load Balancer SEs.
Procedure
1 Log in to UI and navigate to Infrastructure > Service Engines. Select the desired SEs and click
the edit option.
In the above example, VLAN trunking is enabled on the Ethernet interface 1 with VLAN 137.
You can now place the virtual service on SE using the usual way.
To create virtual service on NSX Advanced Load Balancer, see Virtual Services.
For more information on Virtual Guest Tagging (VGT) mode, see VLAN Configuration.
If the memory runs low when you add a VLAN interface, the configuration is accepted but the
interface is put into a fault state. You can confirm this by using the show serviceengine < >
vnicdb command and checking if there is a fault entry for the concerned interface.
Table 2-1.
Field Value
vnic[2]
if_name avi_eth2.999
linux_name eth2.999
mac_address 00:50:56:81:2f:ec
pci_id PCI-eth2.999
mtu 1496
dhcp_enabled TRUE
enabled TRUE
connected TRUE
network_uuid Unknown
nw[1]
ip 100.3.231.0/24
mode STATIC
nw[2]
ip fe80::250:56ff:fe81:2fec/64
mode DHCP
is_mgmt FALSE
is_complete TRUE
avi_internal_network FALSE
enabled_flag TRUE
running_flag TRUE
pushed_to_dataplane FALSE
consumed_by_dataplane FALSE
pushed_to_controller FALSE
can_se_dp_takeover TRUE
vrf_ref global
vrf_id 1
ip6_autocfg_enabled TRUE
fault
uuid 00:50:56:81:2f:ec-eth2.999
Note 550MB memory is required to configure 1000 VLAN interfaces. If there are configurations
such as virtual services on those interfaces, more memory is required.
The SEs can be configured with 1 vCPU core and 2 GB RAM, up to 64 vCPU cores and 256 GB
RAM.
In write access mode, you can configure SE resources for newly created SEs within the SE Group
properties.
For the SE in read or no orchestrator modes, the SE resources are manually allocated to the SE
virtual machine when it is being deployed.
n Connections, Requests and SSL Transactions per second (CPS/ RPS/ TPS) - Primarily gated
by the available CPU.
This section illustrates the expected real-world performance and discusses SE internals on
computing and memory usage.
CPU
NSX Advanced Load Balancer supports x86 based processors, including those from AMD and
Intel. Leveraging AMD’s and Intel’s processors with AES-NI and similar enhancements steadily
enhances the performance of the NSX Advanced Load Balancer with each successive generation
of the processor.
CPU is a primary factor in SSL handshakes (TPS), throughput, compression, and WAF inspection.
[Diagram: a Service Engine with vCPU 0 acting as the dispatcher, distributing flows to the
remaining cores.]
Performance increases linearly with CPU if CPU usage limit or environment limits are not hit. CPU
is the primary constraint for both transactions per second and bulk throughput.
Within an SE, one or more CPU cores are given a dispatcher role. The dispatcher interfaces with
NICs and distributes network flows across the other cores within the system, effectively load
balancing the traffic to other CPU cores. Each core is responsible for terminating TCP, SSL, and
other processing determined by the virtual service configuration. The vCPU 0 shown in the
diagram acts as the dispatcher and can also handle some percentage of SSL traffic if it has the
available capacity. By internally load balancing across CPU cores, NSX Advanced Load Balancer
can scale linearly across ever-increasing capacity.
Memory
Memory allocated to the SE primarily impacts concurrent connections and HTTP caching.
Doubling the memory will double the ability of the SE to perform these tasks. The default is 2
GB memory, reserved within the hypervisor for VMware clouds. See SE Memory Consumption for
a verbose description of expected concurrent connections. Generally, SSL connections consume
about twice as much memory as HTTP layer 7 connections and four times as much memory as
layer 4 with TCP proxy.
NIC
Throughput through an SE can be a gating factor for the bulk throughput and sometimes for
SSL TPS. The throughput for an SE is highly dependent upon the platform.
Disk
The SEs can store logs locally before they are sent to the Controllers for indexing. Increasing the
disk will increase the log retention on the SE. SSD is preferred over hard drives, as they can write
the log data faster.
The recommended minimum storage size is ((2 * RAM) + 5 GB) or 15 GB, whichever is greater.
15 GB is the default for SEs deployed in VMware clouds.
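The sizing rule above can be expressed as a quick calculation; the 8 GB RAM value is illustrative:

```shell
# Minimum SE storage: ((2 * RAM) + 5 GB) or 15 GB, whichever is greater.
ram_gb=8                             # illustrative SE memory
min_disk_gb=$(( 2 * ram_gb + 5 ))    # (2 * RAM) + 5 GB
if [ "$min_disk_gb" -lt 15 ]; then   # enforce the 15 GB floor
  min_disk_gb=15
fi
echo "${min_disk_gb} GB"             # → 21 GB
```

For a small SE (for example, 2 GB RAM), the formula yields 9 GB, so the 15 GB floor applies.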
n Maximum storage on the disk not allocated for logs on the SE (configurable through SE
runtime properties)
You can calculate the capacity reserved for debug logs and client logs as follows:
n Debug Logs capacity = (SE Total Disk * Maximum Storage not allocated for logs on SE)/ 100
Adjustments to these values are done based on configured value for minimum storage allocated
for logs and RAM of SE, and so on.
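As a hedged arithmetic sketch of the debug-log formula above; the 15 GB disk and 90% value are illustrative inputs, not defaults from this guide:

```shell
# Debug Logs capacity = (SE Total Disk * Maximum Storage not allocated
# for logs on SE) / 100. Integer GB arithmetic for illustration.
se_total_disk_gb=15
max_storage_not_for_logs_pct=90
debug_logs_gb=$(( se_total_disk_gb * max_storage_not_for_logs_pct / 100 ))
echo "${debug_logs_gb} GB"   # → 13 GB (integer-truncated from 13.5)
```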
PPS
PPS is generally limited by hypervisors. Limitations are different for each hypervisor and version.
PPS limits on Bare metal (no hypervisor) depend on the type of NIC used and how Receive Side
Scaling (RSS) is leveraged.
n NSX Advanced Load Balancer supports RSA and Elliptic Curve (EC) certificates. The type of
certificate used along with the cipher selected during negotiation, determines the CPU cost of
establishing the session.
n RSA 2K keys are computationally more expensive than EC keys. NSX Advanced Load
Balancer recommends using EC with PFS, which provides the best performance and the best
possible security.
n RSA certificates can still be used as a backup for clients that do not support current industry
standards. As NSX Advanced Load Balancer supports both an EC certificate and an RSA
certificate on the same virtual service, you can gradually migrate to using EC certificates with
minimal user experience impact. For more information, see RSA versus EC certificate priority.
n Default SSL profiles of NSX Advanced Load Balancer prioritize EC over RSA and PFS over
non-PFS.
n EC using perfect forward secrecy (ECDHE) is about 15% more expensive than EC without PFS
(ECDH).
n SSL session reuse gives better SSL Performance for real-world workloads.
Bulk Throughput
The maximum throughput for a virtual service depends on the CPU and the NIC or hypervisor.
Using multiple NICs for client and server traffic can reduce the possibility of congestion or NIC
saturation. The maximum packets per second for virtualized environments vary dramatically and
will be the same limit regardless of the traffic being SSL or unencrypted HTTP.
See Performance Datasheet numbers for throughput numbers. SSL throughput numbers are
generated with standard ciphers mentioned in the datasheet. Using more esoteric or expensive
ciphers can harm throughput. Similarly, using less secure ciphers, such as RC4-MD5, will provide
better performance but are also not recommended by security experts.
Generally, the TPS impact is negligible on the CPU of the SE if SSL re-encryption is required, since
most of the CPU cost for establishing a new SSL session is on the server, not the client. For the
bulk throughput, the impact on the CPU of the SE will be double for this metric.
Theoretical concurrent-connection figures are many orders of magnitude greater than what can
be achieved in practice. Consider 40 KB of memory per SSL-terminated connection in the real
world as a preliminary but conservative estimate. The amount of HTTP header buffering, caching,
compression, and other features play a role in the final number. For more information, see SE
Memory Consumption, including the methods for optimizing for greater concurrency.
Native auto-scale feature of NSX Advanced Load Balancer (L2 Scaleout) allows a virtual service to
be scaled out across four SEs. Using ECMP scale-out (L3 Scale-out), a virtual service can be scaled
out to multiple SEs with a linear scale for workloads.
[Diagram: a virtual service scaled out across three SEs: one primary and two secondary.]
For more information on the scale-out feature, see Autoscale Service Engines.
The following are the points to be considered while sizing for different environments:
n A single SE can scale up to 36 cores for Linux server cloud (LSC) deployment for Baremetal
and CSP
n PPS Limits on different clouds depend on either hypervisor or NIC used and how
dispatcher cores and RSS are configured. For more information on recommended
configuration and feature support, see TSO GRO RSS Features
n SR-IOV and PCIe Passthrough are used in some environments to bypass the PPS limitations
and provide line-rate speeds. For more information on support for SR-IOV, see Ecosystem
Support
n Sizing for public clouds should be decided based on the cloud limits and SE performance on
different clouds for different VM sizes
Per-App SE Mode
This section describes per-app Service Engine (SE) mode.
When an SE group is configured in per-app SE mode, a vCPU counts at a 25% rate for licensing
usage. For example, each 2-vCPU SE in a per-app SE group utilizes half a vCPU license (2 * 0.25).
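The licensing arithmetic above can be sketched as follows; the vCPU count is the example value from the text:

```shell
# Per-app SE licensing: each vCPU counts at a 25% rate.
vcpus=2                     # a 2-vCPU SE in a per-app SE group
# awk handles the fractional multiplication
awk -v v="$vcpus" 'BEGIN { printf "%.2f\n", v * 0.25 }'   # → 0.50
```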
Per-app SE mode is limited to a maximum of 2 virtual services per SE so that customers can also
enable HA. All HA modes are supported. Per-VS license mode is not supported for DNS virtual
services.
2 Select the SE group to be edited and click the pencil (edit) icon.
3 Per-app SE mode is available under High Availability & Placement Setting, as shown in figure
below. By default, per-app SE mode is disabled for any SE group.
4 Click on the check box to enable Per-app SE mode and then click on Save.
Note that the displayed value of Virtual Services per Service Engine automatically changes to 2.
Restriction
Per-app SE mode can be set only when an SE group is first defined. It cannot be toggled on a
pre-existing SE group. Any attempt to toggle the option will be ignored, and an error message
will be displayed.
The process of connecting starts with the first communication that a freshly instantiated SE sends
to its parent Controller. Classic examples of deployments in which this communication must cross
a NAT boundary are:
1 The Controller cluster is protected behind a firewall, while its SEs are on the public Internet.
2 In a public-private cloud deployment, Controllers reside in the public cloud (e.g., AWS), while
SEs reside in the customer’s private cloud.
Implementation
In addition to the management node addresses that Controllers in the cluster can mutually see, for
each Controller, a second management IP address or a DNS-resolvable FQDN that is addressable
by SEs connected to an isolated network, can be specified. It is this second IP address or FQDN
that is incorporated by the Controller into the SE image used to spawn SEs. The NSX Advanced
Load Balancer has added the public-ip-or-name parameter to support this capability.
Setting the Parameter through the NSX Advanced Load Balancer CLI
In the initial release, the parameter is accessible only through the REST API and NSX Advanced
Load Balancer CLI. In the following CLI example a single-node cluster is employed.
| name | 10.10.30.102 |
| ip | 10.10.30.102 |
| vm_uuid | 005056b02776 |
| vm_mor | vm-222393 |
| vm_hostname | node1.controller.local |
+---------------+----------------------------------------------+
[admin:my-controller-aws]: cluster> nodes index 1
[admin:my-controller-aws]: cluster:nodes> public_ip_or_name 1.1.1.1
Explanation
n The SEs cannot address (route to) the Controller by using the address 10.10.30.102 from their
network.
n Administrative staff are aware that a NAT-enabled firewall is in place and programmed to
translate 1.1.1.1 to 10.10.30.102.
n The string parameter public_ip_or_name in the object definition of the first (and only) node
of the cluster is set to 1.1.1.1. So, Controller “cluster-0-1” knows that it must embed 1.1.1.1 (not
10.10.30.102 ) into the SE image it creates for spawning SEs.
n When an SE comes alive for the first time, it therefore addresses its parent Controller at IP
address 1.1.1.1.
n Completely transparently to that SE, the firewall's NAT capability passes the initial
communication on to IP address 10.10.30.102.
Important Notes
n The public_ip_or_name field needs to be configured either for all the nodes in the cluster or
none of the nodes. A subset of nodes in the cluster cannot be configured.
n When this configuration is enabled, SEs from all clouds will always use the
public_ip_or_name to attempt to talk to the Controller. It is not currently possible to have
SEs from one cloud use the private network while SEs from another cloud use the NATed
network.
n It is recommended to enable this feature while configuring the cluster before SEs are created
and not modify this setting while SEs exist.
SE Memory Consumption
This topic discusses calculation of memory utilization within a Service Engine (SE) to estimate the
number of concurrent connections or the amount of memory that can be allocated to features
such as HTTP caching.
SEs support 1-256 GB memory. The minimum recommendation for NSX Advanced Load Balancer
is 2 GB. Providing more memory drastically increases the scale of capacity. Adjusting the priorities
for memory between concurrent connections and optimized performance buffers also improves
the scale of capacity significantly.
Memory allocation for NSX Advanced Load Balancer SE deployments in write access mode is
configured through Infrastructure > Cloud > SE Group Properties. Changes to the Memory
per Service Engine property only impact newly created SEs. For read or no access modes, the
memory is configured on the remote orchestrator such as vCenter. Changes to existing SEs need
the SE to be powered down prior to the change.
Memory Allocation
The following table details the memory allocation for SE:
<Insert Image>
The shared memory pool is divided between two components, namely, Connections and Buffers.
A minimum of 10% must be allocated to each. Changing the Connection Memory Percentage slider
only impacts the newly created SEs and not the existing SEs.
Connections consist of the TCP, HTTP, and SSL connection tables. Memory allocated to
connections directly impacts the total concurrent connections that an SE can maintain.
<Insert Image>
Buffers consist of application layer packet buffers. These buffers are used to queue packets
from Layer 4 to Layer 7 for providing improved network performance. For example, if a client
is connected to the NSX Advanced Load Balancer SE at 1Mbps with large latency and the server
is connected to the SE at no latency and 10Gbps throughput, the server can respond to client
queries by transmitting the entire response and proceeding to service the next client request. The
SE buffers the response and transmits it to the client at a much reduced speed, handling any
retransmissions without needing to interrupt the server. This memory allocation also includes
application-centric features such as HTTP caching and improved compression.
You can maximize the number of concurrent connections by changing the priority of the shared
memory towards connections. The calculations for NSX Advanced Load Balancer are based on
the default setting, which allocates 50% of the shared memory for connections.
Concurrent Connections
Most Application Delivery Controller (ADC) benchmark numbers are based on an equivalent TCP
Fast Path, which uses a simple memory table with client IP:port mapped to server IP:port. Though
TCP Fast Path uses very little memory, enabling extremely large concurrent connection numbers,
it is not relevant to the vast majority of real world deployments, which rely on TCP and application
layer proxying. The NSX Advanced Load Balancer benchmark numbers are based on full TCP
proxy (L4), TCP plus HTTP proxy with buffering and basic caching with DataScript (L7), and
the same scenario with Transport Layer Security Protocol (TLS) 1.2 between client and the NSX
Advanced Load Balancer.
The memory consumption numbers per connection listed below can be higher or lower. For
example, typical buffered HTTP request headers consume 2 KB of memory, but they can be as
high as 48 KB. The numbers below are intended to provide real world sizing guidelines.
n 10 KB L4
n 20 KB L7
To calculate the potential concurrent connections for an SE, use the following formula:
concurrent connections = ((SE memory in MB - 500 - (100 * number of vCPUs)) * connection
memory percentage) / memory per connection in MB.
To calculate layer 4 sessions (memory per connection = 10 KB = 0.01 MB) for an SE with 8 vCPU
cores and 8 GB RAM, using a Connection Percentage of 50%, the calculation is:
((8000 - 500 - (100 * 8)) * 0.50) / 0.01 = 335,000 (roughly 335k).
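The sizing arithmetic above can be sketched in shell; the values are from the worked example, and the variable names are illustrative. Because each L4 connection costs 10 KB (0.01 MB), the connection count is simply the connection-table size in MB times 100, so integer arithmetic suffices:

```shell
# Concurrent L4 connections for an 8-vCPU, 8 GB SE at 50% connection memory.
ram_mb=8000        # total SE memory in MB
base_mb=500        # base memory overhead
per_vcpu_mb=100    # per-vCPU overhead
vcpus=8
conn_pct=50        # Connection Memory Percentage

conn_table_mb=$(( (ram_mb - base_mb - per_vcpu_mb * vcpus) * conn_pct / 100 ))
echo $(( conn_table_mb * 100 ))   # 0.01 MB per connection → 335000
```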
The table above shows the number of concurrent connections for L4 (TCP Proxy mode)
optimized for connections. The calculations in the table use a 90% connection percentage.
This command shows a truncated breakdown of memory distribution for the SE. The SE has one
vCPU core with 141 MB allocated for the shared memory’s connection table. The huge_pages value
of 91 means that there are 91 pages of 2 MB each. This indicates that 182 MB has been allocated for
the shared memory’s HTTP cache table.
Code Description
hypervisor_type refers to the following list of hypervisor types and the respective values
associated with it:
SE_HYPERVISOR_TYPE_UNKNOWN 0
SE_HYPERVISOR_TYPE_VMWARE 1
SE_HYPERVISOR_TYPE_KVM 2
SE_HYPERVISOR_TYPE_DOCKER_BRIDGE 3
SE_HYPERVISOR_TYPE_DOCKER_HOST 4
SE_HYPERVISOR_TYPE_XEN 5
SE_HYPERVISOR_TYPE_DOCKER_HOST_DPDK 6
SE_HYPERVISOR_TYPE_MICROSOFT 7
"statistics": {
"max": 141,
}
"statistics": {
"min": 5,
"max": 5,
"mean": 5
},
If virtual service application profile caching is enabled, on upgrading the NSX Advanced Load
Balancer from an earlier version, this field is automatically set to 15 and so 15% of SE memory will
be reserved for caching. This value is a percentage configuration and not an absolute memory
size.
Note Total memory allocated for caching must leave intact the minimum allocation of 1 GB per core. If the app_cache_percent value would violate this condition, the allocated memory will be less than the configured percentage of the total system memory.
For a 10 GB, 9-core SE, an app_cache_percent of 15% yields 1 GB instead of 1.5 GB.
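One way to read the note above: the cache allocation is the configured percentage of total memory, capped so that the 1 GB-per-core minimum remains available. A minimal sketch of that interpretation (the cap formula is inferred from the example, not an official specification):

```python
def effective_app_cache_gb(total_gb, cores, app_cache_percent):
    """Configured percentage of total memory, capped so that the
    1 GB-per-core minimum allocation stays intact (inferred rule)."""
    requested = total_gb * app_cache_percent / 100.0
    available = total_gb - cores * 1.0  # 1 GB minimum per core
    return min(requested, max(available, 0.0))

# Example from the text: 10 GB, 9-core SE at 15% -> 1 GB, not 1.5 GB.
print(effective_app_cache_gb(10, 9, 15))  # 1.0
```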
Configuring using UI
You can enable this feature using the NSX Advanced Load Balancer UI. Navigate to Infrastructure > Service Engine Group and click the edit icon of the desired SE group. In the Basic Settings tab, under the Memory Allocation section, enter the percentage of memory to reserve for Layer 7 caching in the Memory for Caching field.
By default, these options are set to false. Use the following commands to enable the core_shm_app_learning and core_shm_app_cache options.
The connection refusals on a particular virtual service can be due to the high consumption of
packet buffers by that virtual service.
When the packet buffer usage of a virtual service is greater than 70% of the total packet buffers,
the connection refusals start. This might mean that there is a slow client that is causing a packet
buffer build up on the virtual service.
This issue can be alleviated by increasing the memory allocated per SE or by identifying and
limiting the number of requests by slow clients using a network security policy.
Per virtual service level admission control is disabled by default. To enable this setting, set the
Service Engine Group option per_vs_admission_control to True.
The connection refusals stop when the packet buffer consumption on the Virtual Service drops to
50%. The sample logs generated show admission control:
The connection refusals and packet throttles due to admission control can be monitored using the
se_stats metrics API:
https://<Controller-IP>/api/analytics/metrics/serviceengine/se-<SE-UUID>?metric_id=se_stats.sum_connection_dropped_packet_buffer_stressed,se_stats.sum_packet_dropped_packet_buffer_stressed
To know how to resolve intermittent connection refusals on NSX Advanced Load Balancer SEs
correlating to higher traffic volume, see Connection Refusals.
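For scripted monitoring, the metrics URL above can be assembled as follows (a minimal sketch; the metric IDs come from the URL above, while the helper name and the sample Controller IP and SE UUID are illustrative, and an authenticated session is still required to actually fetch the results):

```python
def se_admission_metrics_url(controller_ip, se_uuid):
    """Build the se_stats metrics URL for the admission-control
    drop counters shown above."""
    metric_ids = ",".join([
        "se_stats.sum_connection_dropped_packet_buffer_stressed",
        "se_stats.sum_packet_dropped_packet_buffer_stressed",
    ])
    return (f"https://{controller_ip}/api/analytics/metrics/"
            f"serviceengine/se-{se_uuid}?metric_id={metric_ids}")

# Illustrative values; substitute your Controller IP and SE UUID.
print(se_admission_metrics_url("10.10.30.40", "abcd1234"))
```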
For HTTP traffic, NSX Advanced Load Balancer can be configured to insert an X-Forwarded-For (XFF) header into client-server requests so that the original client IP address is available in request logs. This feature is also supported for IPv6 in NSX Advanced Load Balancer.
To include the client’s original IP address in HTTP traffic logs, enable NSX Advanced Load
Balancer to insert an XFF header into the client traffic destined for the server. XFF insertion can be
enabled in the HTTP application profile attached to the virtual service.
2 Click the edit icon near an HTTP Application Profile to open the following editor:
Note Optionally, the header can be renamed using the XFF Alternate Name field.
4 Click Save.
The profile change affects any virtual services that use the same HTTP application profile.
When XFF header insertion is enabled, the SE checks the headers of client-server packets for
existing XFF headers. If XFF headers already exist, the SE first removes any pre-existing XFFs,
then inserts its own XFF header. This is done to prevent clients from spoofing their IP addresses.
Example:
avi.http.add_header("XFF", avi.vs.client_ip())
The tx_ring method is the default transmission mechanism in non-DPDK environments for NSX Advanced Load Balancer SEs. The PCAP tx_ring method consumes more memory than the PCAP socket mechanism. Due to the higher memory consumption, other processes might run into memory allocation failures on SEs with limited resources.
Due to resource constraints on the system, the tx_ring mode can cause packet drop issues
in the transmission path in the non-DPDK deployment. Whenever the issue occurs with the
default tx_ring method, an alternative raw socket approach is used to transfer the packets in
the transmission path.
Disabling PCAP_TX_Ring
Log in to the NSX Advanced Load Balancer shell prompt and use the configure serviceenginegroup mode to disable the enable_pcap_tx_ring transmission mode, as shown below:
Re-enabling PCAP_TX_Ring
To switch the transmission mode back to the tx_ring method, log into the NSX Advanced Load
Balancer CLI and re-enable the method as shown below:
Preserve Client IP
This section discusses the configuration and scope of preserve client IP address.
By default, NSX Advanced Load Balancer Service Engines (SEs) do source NAT-ing (SNAT) of
traffic destined to back-end servers. Due to SNAT, the application servers see the IP address
of the SE interfaces and are unaware of the original client’s IP address. Preserving a client’s IP
address is a desirable feature in many cases, for example, when servers have to apply security and
access-control policies. Two ways to solve this problem in NSX Advanced Load Balancer are:
Both of the above require the back-end servers to be capable of supporting the respective
capability.
A third and more generic approach is for the SE to use the client IP address as the source IP address for load-balanced connections from the SE to back-end servers. This capability, called preserve client IP, is one component of the NSX Advanced Load Balancer default gateway feature and is a property that can be turned on or off in application profiles.
Note Enable IP Routing with Service Engine option is not mandatory to select Preserve Client IP
Address.
However, you must either use Legacy HA, configure a floating interface IP, and set it as the default gateway on the server to attract return traffic, or set up routing in the back end to ensure that return traffic for client-IP-preserved requests sent to the back-end server comes back to the SE as needed.
n NSX Advanced Load Balancer will always NAT the back-end connection in these cases:
n When the back-end servers are not on networks directly attached to the SE, that is, they are a hop or more away.
Example Use-Case
[Diagram: SEs deployed in a Linux server cloud or a VMware no-access cloud (no auto-creation of SEs) as a Legacy HA active/standby pair, with addresses 10.10.40.1/24, 10.10.40.2/24, and 10.10.40.3/24 on front-end network FE-NW 10.10.40.0/24 and floating IP 10.10.40.11/24, serving back-end networks BE-NW-1 10.10.10.0/24 and BE-NW-3 10.10.30.0/24.]
Enable IP routing on the SE group before enabling preserve client IP on an application profile
used to create virtual services on that SE group.
In addition,
n Configure static routes to the back-end server networks on the front-end servers, with the next hop set to the front-end floating IP
Create a virtual service using the advanced-mode wizard. Configure its application profile
to preserve client IPs as follows: Applications > Create Virtual Service > Advanced > Edit
Application Profile.
Note that this configuration needs to be done before enabling any virtual service in the chosen
application profile. Once an application profile is configured to preserve client IP, it preserves the
client IP for all virtual services using this application profile.
For deploying preserve client IP in NSX-T overlay cloud, see Preserve Client IP for NSX-T Overlay.
n NSX Advanced Load Balancer Controller HA: This provides node-level redundancy for the Controllers. A single Controller is deployed as the leader node, while two additional Controllers are added as follower nodes.
n NSX Advanced Load Balancer Service Engine HA: This provides SE-level redundancy within an SE group. If an SE within the group fails, HA heals the failure and compensates for the reduced site capacity by spinning up a new SE in place of the one that has failed.
Note To ensure the highest level of uptime for a site, including during NSX Advanced Load Balancer software upgrades, you must ensure the availability of both the NSX Advanced Load Balancer Controllers and the NSX Advanced Load Balancer Service Engines.
HA for the Controllers and Service Engines are separate features which are configured separately.
HA for the Controllers is a system administration setting.
Note To ensure application availability in the presence of whole-site failures, NSX Advanced Load Balancer recommends the use of NSX Advanced Load Balancer GSLB. For an overview and more details, see the GSLB guide.
n Throughput
n Auto Scaling
You can create a three-node cluster by adding two additional nodes. A three-node cluster provides node-level redundancy for the Controller and maximizes performance for CPU-intensive analytics functions. In a single-node deployment, the single Controller performs all administrative functions and collects and processes all the analytics data; in a three-node cluster, these tasks are distributed.
In a three-node Controller cluster, one node is the primary (leader) node and performs the
administrative functions. The other two nodes are followers (secondaries), and perform data
collection for analytics, in addition to standing by as backups for the leader.
Quorum
The Controller-level HA requires a quorum of NSX Advanced Load Balancer Controller nodes to be up. In a three-node Controller cluster, quorum is maintained if at least two of the three Controller nodes are up. If one of the Controllers fails, the remaining two nodes continue service and NSX Advanced Load Balancer continues to operate. However, if two of the three nodes go down, the entire cluster goes down and NSX Advanced Load Balancer stops working.
Failover
Each Controller node in a cluster periodically sends heartbeat messages to the other Controller
nodes in a cluster, through an encrypted SSH tunnel using TCP port 22 (port 5098 if running as
Docker containers).
[Diagram: the primary (leader) node and the two follower nodes exchange heartbeat messages.]
The heartbeat interval is ten seconds. The maximum number of consecutive heartbeat messages
that can be missed is four. If one of the Controllers does not hear from another Controller for 40
seconds (four missed heartbeats), then the other Controller is assumed to be down.
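The failure-detection time follows directly from these two values:

```python
# Heartbeat parameters described above.
HEARTBEAT_INTERVAL_S = 10    # heartbeat sent every ten seconds
MAX_MISSED_HEARTBEATS = 4    # peer declared down after four misses

# A Controller is assumed down after interval x misses = 40 seconds.
detection_time_s = HEARTBEAT_INTERVAL_S * MAX_MISSED_HEARTBEATS
print(detection_time_s)  # 40
```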
If only one node is down, then the quorum is still maintained and the cluster can continue to
operate.
If a follower node goes down but the primary (leader) node remains up, then the access to virtual
services continues without any interruption.
[Diagram: a follower node is down; the primary (leader) node continues to operate.]
n If the primary (leader) node goes down, the member nodes form a new quorum and elect a
cluster leader. The election process takes about 50-60 seconds and during this period, there
is no impact on the data plane. The SEs will continue to operate in the Headless mode, but
the control plane service will not be available. During this period, you cannot create a VIP
through LBaaS or use the NSX Advanced Load Balancer user interface, API, or CLI.
[Diagram: the primary (leader) node is down and does not reply to heartbeats; the system runs in headless mode during the election of a new primary (leader).]
In this procedure, the NSX Advanced Load Balancer Controller node that is already deployed in the single-node deployment is referred to as the incumbent NSX Advanced Load Balancer Controller.
The following are the steps to convert a single-node NSX Advanced Load Balancer Controller
deployment into a three-node deployment:
1 Install two new Controller nodes. During installation, configure only the following settings for each node:
n Gateway address
2 Connect the management interface of each new Controller node to the same network as the
incumbent Controller. After the incumbent Controller detects the two new Controller nodes,
the incumbent Controller becomes the primary (leader) Controller node for the three-node
cluster.
3 Use a web browser to navigate to the management IP address of the primary (leader)
Controller node.
4 Navigate to Administrator > Controller > Nodes and click Edit. The Edit Cluster window appears.
5 In the Controller Cluster IP field, specify the shared IP address for the Controller cluster.
6 In the Public IP or Host Name field, specify the management IP address of each new Controller node.
Note To configure cluster in AWS Cloud, each node of the cluster requires an admin account
password.
After execution of the above steps, the incumbent Controller becomes the primary (leader) for the cluster and invites the other Controllers to the cluster as members.
NSX Advanced Load Balancer then performs a warm reboot of the cluster. This process can take two to three minutes. After the reboot, once the cluster comes online, the configuration of the primary (leader) Controller is synchronized to the new member nodes.
[Diagram: a Controller cluster with Cluster IP 10.30.163.63; the primary (leader) node 10.30.163.68 and member nodes 10.30.163.64 and 10.30.163.65.]
n Controller Cluster IP
NSX Advanced Load Balancer Service Engine groups support the following HA modes:
n Elastic HA: This provides fast recovery for individual virtual services following failure of the
Service Engine. Depending on the mode, the virtual service is either already running on
multiple SEs or is quickly placed on another SE. The following modes of cluster HA are
supported:
n Active/Active
n N+M
n Legacy HA for the Service Engines: This emulates the operation of a two-device hardware active/standby HA configuration. The active SE carries all the traffic for a virtual service placed on it. The other SE in the pair is the standby for the virtual service and does not carry traffic for it while the active SE is healthy.
n Service Engine Elastic HA mode: This combines scale-out performance and high availability.
n Active/Active
n Legacy HA mode: This enables a smooth migration from legacy appliance-based load balancers.
The 'N' in N+M is the minimum number of SEs required to place the virtual services in the SE group. This calculation is performed by the NSX Advanced Load Balancer Controller based on the Virtual Services per Service Engine parameter. 'N' varies over time as virtual services are placed on or removed from the group. The maximum number of Service Engines is labeled 'E'.
The 'M' in N+M is the number of additional SEs that the NSX Advanced Load Balancer Controller spins up in order to handle 'M' SE failures without reducing the capacity of the SE group. The 'M' appears in the Buffer Service Engines field.
The minimum scale per virtual service is labeled as 'B' and the maximum scale per virtual service is
labeled as 'C'.
Note The buffer SE count in N+M mode is the number of SE failures that the system can tolerate while keeping the virtual services up and operational (placed on at least one SE), though not at the same capacity.
In the SE group, if a minimum scale per virtual service is set and an additional SE is required, increase the buffer SE count according to the calculations.
You can select N+M mode parameters by navigating to Infrastructure > Cloud Resources > Service Engine Group. You can either create a new SE group or edit an existing one. Select the N+M (buffer) option under Elastic HA in the High Availability Mode section of the Basic Settings tab. Here, the two parameters are set.
The Advanced HA & Placement section in Advanced tab shows that the three parameters are set.
With virtual services per SE set to 8, N is 3 (20/8 = 2.5, which rounds up to 3).
Note that no single SE in the group is completely idle. The Controller places virtual services on all
available SEs. In N+M mode, NSX Advanced Load Balancer ensures enough buffer capacity exists
in aggregate to handle one (M=1) SE failure. In this example, each of the four SEs has five virtual
services placed. A total of 12 spare slots are still available for additional virtual service placements,
which is sufficient to handle one SE failure.
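The N+M arithmetic in this example can be sketched as follows (the numbers — 20 virtual services, 8 per SE, M=1 — are taken from the example above; the ceiling matches the "2.5 rounds up to 3" note):

```python
import math

def nm_group_size(num_vs, vs_per_se, buffer_m):
    """N = minimum SEs needed to place all virtual services;
    the group runs N + M SEs in total."""
    n = math.ceil(num_vs / vs_per_se)
    return n, n + buffer_m

n, total_ses = nm_group_size(num_vs=20, vs_per_se=8, buffer_m=1)
spare_slots = total_ses * 8 - 20  # slots left after all placements
print(n, total_ses, spare_slots)  # 3 4 12
```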
The right side of the image below shows the SE group just after SE2 has failed. The five virtual services on SE2 have been placed onto spare slots found on the surviving SEs, namely SE1, SE3, and SE4.
The imbalance in loading disappears over time if one or both of the following happen:
n New virtual services are placed on the group. As many as four virtual services can be placed
without compromising the M=1 condition. They will be placed on SE5 because NSX Advanced
Load Balancer chooses the least-loaded SE first.
With 'M' set to 1, the SE group is single-SE fault tolerant. Customers desiring multiple-SE
fault tolerance can set 'M' higher. NSX Advanced Load Balancer permits 'M' to be dynamically
increased by the administrator without interrupting any services. Consequently, you can start with
M=1 (typical of most N+M deployments), and increase it if the conditions warrant.
If an N+M group is scaled out to the maximum number of Service Engines and 'N' times the virtual-services-per-SE count has been placed, NSX Advanced Load Balancer will still permit additional virtual service placements (into the spare capacity represented by 'M'), but an HA_COMPROMISED event will be logged.
For a Write Access cloud, the Controller will attempt to recover the failed SE after five minutes
by rebooting the virtual machine. After a further five minutes, the Controller will attempt to delete
the failed SE virtual machine after which a new SE will be spun up to restore the configured buffer
capacity.
Back to M = 1 state
As shown in the image above, with only four slots remaining just after the five re-placements, if NSX Advanced Load Balancer's orchestrator mode is set to write access, NSX Advanced Load Balancer spins up SE5 to meet the M=1 condition, which in this case requires at least eight slots available for re-placements.
Note To provide time to identify the cause of a failure, the first SE that fails in an SE group is not automatically deleted, even after five minutes. You can then troubleshoot the failed SE and delete the virtual machine manually if restoration is not possible. The Controller will delete the SE virtual machine after three days if you have not deleted it manually.
Elastic HA Active/Active
In active/active mode, NSX Advanced Load Balancer places each virtual service on more than one SE, as specified by the Minimum Scale per Virtual Service parameter (the default minimum is two). If an SE in the group fails:
n Virtual services that had been running are not interrupted. They continue to run on other SEs
with degraded capacity until they can be placed once again.
n If NSX Advanced Load Balancer’s orchestrator mode is set to write access, a new SE is
automatically deployed to bring the SE group back to its previous capacity. After waiting for
the new SE to spin up, the Controller places on it the virtual services that had been running on
the failed SE.
Over time, five virtual services (VS1-VS5) are placed. VS3 is scaled out from its initial two placements to a third placement, illustrating NSX Advanced Load Balancer's support for 'N-way active' virtual services. The image below depicts five virtual services placed on an active/active SE group.
The image below shows that SE3 has failed. As a result, one of the two VS2 instances and one of the three VS3 instances have failed. The other three virtual services (VS1, VS4, VS5) are unaffected. Neither VS2 nor VS3 is interrupted, because instances of these were previously placed on SE4, SE5, and SE6 and continue to operate with degraded performance. The image also shows a single SE failure in an active/active SE group.
The NSX Advanced Load Balancer Controller deploys SE7 as a replacement for SE3 and places VS2 and VS3 on it, bringing both virtual services back to their prior level of performance. The image below shows the recovery of a single SE in an active/active SE group.
Compact Placement
When Compact placement is ON, NSX Advanced Load Balancer uses the minimum number of SEs
required. When Distributed placement is ON, NSX Advanced Load Balancer uses as many SEs
as required within a limit allowed by maximum number of Service Engines. By default, Compact
placement is ON for Elastic HA, N+M (buffer) mode. And by default, Distributed placement is ON
for Elastic HA, Active/Active mode.
n After VS1 is placed, SE2 is deployed because M=1 (to handle one SE failure).
n When VS2 requires placement, NSX Advanced Load Balancer assigns it to the idle SE2 to make the best use of all running SEs.
n Compact Placement ON: Subsequent placements of VS3 through VS8 do not require additional SEs to maintain HA (M=1, that is, one SE failure). With Compact placement ON, NSX Advanced Load Balancer prefers to place virtual services on existing SEs.
n Distributed Placement ON: Subsequent placements of VS3 and VS4 result in scaling the SE group out to its maximum of four SEs, illustrating NSX Advanced Load Balancer's preference for performance at the expense of resources. After reaching four deployed SEs, the maximum for this group, NSX Advanced Load Balancer places virtual services VS5 through VS8 on pre-existing, least-loaded SEs. The image below shows an Elastic HA N+1 SE group with Compact placement ON and OFF, with eight successive virtual service placements.
[Image: Elastic HA N+1 SE group with Compact placement ON (VS1-VS8 packed onto SE1 and SE2) and Distributed placement ON (VS1-VS8 spread across SE1 through SE4).]
n Elastic HA N+M mode: Since compact placement is ON by default in N+M mode, the NSX Advanced Load Balancer Controller prefers to pack virtual services densely onto existing SEs and to defer the deployment of spare capacity.
Configuring Auto-Rebalance
The Auto-Rebalance option applies only to the Elastic HA modes and is OFF by default. If Auto-Rebalance is left in its default OFF state, an event is logged instead of migrations being performed automatically. To enable Auto-Rebalance, see How to Configure Auto-rebalance using NSX Advanced Load Balancer CLI.
The following are the trigger types that aggregate at the NSX Advanced Load Balancer Service Engine level:
n Throughput in Mbps
n Open connections
n CPU
The minimum and maximum thresholds are configured along with one of these options as the trigger type. By default, auto-rebalance uses the CPU trigger type.
Instructions
The following are the steps to configure the auto-rebalance feature on an NSX Advanced Load Balancer Service Engine:
1 Log in to the NSX Advanced Load Balancer Controller CLI and enter the shell mode by using the shell command.
3 Optionally, use the switch command to switch to the respective tenant or cloud for which
auto_rebalance can be configured.
4 The auto_rebalance option is set to False by default. To enable it, log in to the Controller, bring up the shell, and set auto_rebalance to True.
The auto_rebalance_interval value sets the interval at which auto-rebalance is triggered once the configured threshold is reached. The value is in seconds, and the recommended value is 300 seconds. For instance, auto_rebalance_interval 300.
n se_auto_rebalance_cpu
n se_auto_rebalance_mbps
n se_auto_rebalance_open_conns
n se_auto_rebalance_pps
Note The object used to configure thresholds is the same for all the trigger types (max_cpu_usage,
min_cpu_usage) and is a part of the SE group configuration.
The max_cpu_usage value defines the maximum threshold value for CPU. The value is in
percentage. For instance, max_cpu_usage 70.
The min_cpu_usage value defines the minimum threshold value for CPU. The value is in
percentage. For instance, min_cpu_usage 30.
Scenario 1: Auto-rebalance is based on the PPS trigger, with a scale-out threshold of 70% (of 200,000 PPS, that is, when it exceeds 140,000 PPS) and a scale-in threshold of 30% (of 200,000 PPS, that is, when it drops below 60,000 PPS).
Scenario 2: Auto-rebalance is based on the open-connections trigger, with a scale-out threshold of 60% (of 5,000 open connections, that is, when it exceeds 3,000 open connections) and a scale-in threshold of 20% (of 5,000 open connections, that is, when it drops below 1,000 open connections).
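Both scenarios apply the same percentage arithmetic to the configured trigger maximum; a minimal sketch (the function name is illustrative):

```python
def rebalance_thresholds(trigger_max, scale_out_pct, scale_in_pct):
    """Absolute scale-out / scale-in thresholds from the configured
    trigger maximum and the threshold percentages."""
    return (trigger_max * scale_out_pct // 100,
            trigger_max * scale_in_pct // 100)

# Scenario 1: PPS trigger, maximum 200,000 PPS.
print(rebalance_thresholds(200_000, 70, 30))  # (140000, 60000)

# Scenario 2: open-connections trigger, maximum 5,000 connections.
print(rebalance_thresholds(5_000, 60, 20))    # (3000, 1000)
```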
Configuring Elastic HA
This section explains the steps to configure elastic HA.
4 Click the edit icon next to the SE group name, or click Create to create a new one. Fill out the requisite fields.
5 Click Save.
Elastic HA N+M mode (the default) is suited to applications with the following characteristics:
1 The SE performance required by any application can be delivered by a fraction of one SE's capacity. Hence, each virtual service is placed on a single SE.
2 Applications can tolerate brief outages, though not longer than it takes to place a virtual service on an existing SE and plumb its network connections. This should take only a few seconds.
The pre-existence of buffer SE capacity, coupled with the default setting of compact placement
ON, speeds up the replacement of virtual services that are affected by a failure. The NSX
Advanced Load Balancer does not wait for a substitute SE to spin up; it immediately places
affected virtual services on spare capacity.
Most applications' HA requirements are satisfied by M=1. However, in development or test environments, 'M' can be set to 0, as developers or test engineers can wait for a new SE to spin up before the virtual service is back online.
Elastic HA active/active mode is applied to mission-critical applications where the virtual services
must continue without interruption during the recovery period.
Additional Information
Difference between HA_MODE_SHARED and HA_MODE_SHARED_PAIR options available on
Avi CLI
NSX Advanced Load Balancer also provides elastic HA, including active/active and N+M modes. In
legacy HA mode, only two NSX Advanced Load Balancer SEs are configured. By default, active
virtual services are compacted onto one SE. In this mode, one SE carries all the traffic for a virtual
service placed on it and is thus the active SE for that virtual service. The other SE in the pair is the standby for that virtual service; it does not carry traffic for it while the active SE is healthy.
Upon failure of an SE, the surviving SE takes over traffic for all virtual services that were previously active on the failed SE, in addition to continuing to handle traffic for virtual services already assigned to it. As part of the takeover, the survivor also takes ownership of all floating IP addresses, such as VIPs and SNAT IPs. The compacted and distributed options determine whether all active virtual service placements are concentrated onto one SE in a healthy pair or not.
NSX Advanced Load Balancer supports rolling upgrades by the NSX Advanced Load Balancer
Controller of SEs in a legacy HA configuration. Virtual services running on a legacy HA SE group
are not disrupted during a rolling upgrade. The image below depicts Legacy HA active/standby, displaying the compacted and distributed load options.
Health Monitoring
By default, health checks are sent by both SEs to the back-end servers. You can also disable
health monitoring by an SE for virtual services for which it is standing by. However, you can enable
health checks for each SE’s next-hop gateways.
Floating IP Address
You can assign one or more floating IP addresses to an SE group configured for legacy HA. The floating IP address is applicable when the SE interfaces are not in the same subnet as the VIP or source NAT (SNAT) IP addresses that use the SE group. One floating interface IP is required per attached subnet per SE group when configuring Legacy HA mode.
The network service is used to configure floating IP. For more details on this, see Network Service
Configuration guide.
Disabling a Legacy-Mode SE
A combination of factors causes the disabling of a legacy-mode SE, and this differs from SEs running in either active/active or N+M mode. For more information, see Deactivating SE Members of a Legacy HA SE Group.
Configuring Legacy HA
This section explains how to configure legacy HA.
The following are the steps to configure a pair of SEs for legacy HA.
1 Create an SE group for the pair of SEs. Legacy HA requires each pair of active/standby SEs to
be in its own SE group.
1 Navigate to Infrastructure > Cloud Resources > Service Engine Group. Click CREATE.
4 Specify the floating IP address (optional). Configuration of the floating IP address is not supported through the UI in the current release. You must configure it using the CLI through the Network Service of the corresponding SE group. For more details, see the Network Service Configuration page.
5 By default, NSX Advanced Load Balancer compacts all virtual services into one SE within
the active/standby pair. To distribute active virtual services across the pair, within the Virtual
Service Placement Policy section of the SE group editor, select Distribute Load option.
Note You can specify the second floating IP address. Assign virtual services on an individual
basis to one or the other SE in the legacy pair by navigating to the Advanced tab in the virtual
service editor.
You can configure the second floating IP address using the CLI through the Network Service of the corresponding SE group. For more details, see the Network Service Configuration page.
6 By default, virtual services that fail over are not migrated back to the SE that replaces the failed SE. Instead, the load remains compacted on the failover SE. Choose the Auto-redistribute Load option to make failback automatic.
7 The Virtual Services per Service Engine field sets the maximum number of virtual services that may be placed. Legacy HA is non-elastic: for any given virtual service, exactly one placement (onto the virtual service's active SE) is performed.
8 Finally, uncheck the Health Monitoring on Standby Service Engine(s) option so that health monitoring is performed only by active SEs.
9 Click Save.
Note
n If NSX Advanced Load Balancer is deployed in full access mode, the other SE is added to the same group automatically.
n If NSX Advanced Load Balancer is installed in no access mode, select the second SE to add it to the group.
a If you are creating a new virtual service, select CREATE VIRTUAL SERVICE > Advanced Setup. Specify a name and the VIP address, and then click the Advanced tab.
b If you are editing an existing virtual service, click the edit icon in the row for the virtual service, and then click the Advanced tab.
2 In the Other Settings section, select the SE group from the Service Engine Group drop-down list.
3 Click Save.
The following commands create a new SE group for the pair of SEs:
Note
n If NSX Advanced Load Balancer is deployed in full access mode, these commands add both SEs to the group.
n If NSX Advanced Load Balancer is installed in no access mode, additional commands are needed to add the second SE to the group.
The following commands configure a virtual service vs1 with VIP 10.10.1.99 on the SE group:
Additional Information
n Default Gateway (IP Routing on NSX Advanced Load Balancer SE)
n MAC Masquerade
Procedure
Note Switchover functionality is currently unavailable in the NSX Advanced Load Balancer UI.
2 The virtual service switchovers occur asynchronously, so wait until all of them have completed. Poll the SE event log to verify that all virtual services are in standby mode.
Note The standby virtual services should not be in the SE_STATE_DISABLED state.
3 The only way to prevent the standby SE from taking on any active virtual services is to remove
it from the legacy SE group. This can be done by moving it into a “maintenance” SE group
created for the purpose.
4 At this point, the legacy HA SE group comprises just one active SE. To return to a state of high
availability, there are two options:
n Option 1: If the Controller has write access to the cloud, it will automatically spin up a
replacement SE.
n Option 2: Otherwise, the user must manually add one to the group.
Note
n To speed up Option 2, the user can spin up the replacement SE in the maintenance group
before removing the standby SE from the legacy HA SE group.
n The SE moves mentioned above can be accomplished through the REST API, CLI, or UI.
n Scaling out a virtual service to an additional NSX Advanced Load Balancer Service Engine.
NSX Advanced Load Balancer supports scaling virtual services, which distributes the virtual service workload across multiple SEs to provide increased capacity on demand. This extends the throughput capacity of the virtual service and increases the level of high availability.
n Scaling out a virtual service distributes that virtual service to an additional SE. By default, NSX
Advanced Load Balancer supports a maximum of four SEs per virtual service when native load
balancing of SEs is in play. In BGP environments, the maximum can be increased to 64.
n Scaling in a virtual service reduces the number of SEs over which its load is distributed. A virtual service always requires at least one SE.
n The virtual service IP is GARPed by the primary SE. All inbound traffic from clients will arrive at
this SE.
n At Layer 2, excess traffic is forwarded to the MAC address of the additional secondary Service
Engine(s).
n The scaled-out traffic to the secondary SEs is processed as normal. The SEs will change the
source IP address of the connection to their own IP address within the server network.
n The servers will respond to the source IP address of the traffic, which can be the primary or
one of the secondary SEs.
n Secondary SEs will forward the response traffic back to the client, bypassing the primary SE.
NSX Advanced Load Balancer issues an alert if the average CPU utilization of an SE exceeds the designated limit during a five-minute polling period. Alerts for additional thresholds can be configured for a virtual service. The process of scaling in or scaling out must be initiated by an administrator. The CPU Threshold field on the SE Group > High Availability tab defines the minimum and maximum CPU percentages.
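The alert check described above can be sketched as follows; the five-minute averaging window and the minimum/maximum CPU comparison come from the text, while the function names and default threshold values are illustrative assumptions:

```python
def average_cpu(samples):
    """Average CPU utilization (percent) over one polling window of samples."""
    return sum(samples) / len(samples)

def check_thresholds(samples, min_cpu=30.0, max_cpu=80.0):
    """Compare a five-minute window of SE CPU samples against the SE group's
    configured minimum/maximum CPU percentages. Returns a suggested action:
    scaling itself is initiated by an administrator, not automatically."""
    avg = average_cpu(samples)
    if avg > max_cpu:
        return "alert: consider scale-out"
    if avg < min_cpu:
        return "consider scale-in"
    return "ok"
```

The defaults of 30/80 percent are placeholders; the real values come from the SE Group > High Availability configuration.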
n The virtual service IP resides on the Azure Load Balancer. All inbound traffic from clients will
arrive at the Azure LB.
n The Azure load balancer has a backend pool consisting of the NSX Advanced Load Balancer
Service Engines.
n The Azure load balancer balances the traffic to one of the NSX Advanced Load Balancer
Service Engines associated with the virtual service IP.
n The traffic to the SEs is processed. The SEs will change the source IP address of the
connection to their own IP address within the server network.
n The servers will respond to the source IP address of the traffic, which can be the primary or
one of the secondary SEs.
n The SEs forward their response traffic directly back to the origin client, bypassing the Azure
load balancer.
Scaling Process
The process used to scale out depends on the level of access ('write' access versus 'read access/no access') that NSX Advanced Load Balancer has to the hypervisor orchestrator. The scaling process is as follows:
n If NSX Advanced Load Balancer is in 'write' access mode with write privileges to the virtualization orchestrator, it automatically creates additional Service Engines when required to share the load. If the Controller runs into an issue while creating a new Service Engine, it waits for a few minutes and then retries on a different host. With native load balancing of SEs in play, the original Service Engine (primary SE) ARPs for the virtual service IP address and processes as much traffic as possible. Some percentage of the traffic arriving here is forwarded via Layer 2 to the additional (secondary) Service Engines. When traffic decreases, the virtual service automatically scales back in to the original primary Service Engine.
n If NSX Advanced Load Balancer is in 'read access or no access' mode, an administrator must
manually create and configure new Service Engines in the virtualization orchestrator. The
virtual service can be scaled out only when the Service Engine is both configured for the
network and connected to the NSX Advanced Load Balancer Controller.
Note Existing Service Engines with spare capacity and appropriate network settings may be used
for the scale out. Otherwise, scaling out may require either modifying existing Service Engines or
creating new Service Engines.
Virtual services inherit from their SE group the minimum and maximum number of SEs on which they can be instantiated. Between the virtual service minimum and maximum values, you can manually scale the virtual service out or in from the UI, CLI, or REST API. Within the same SE group, a virtual service's instantiations can also be migrated from its current SEs to other SEs.
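As a rough sketch of driving a manual scale-out through the REST API, the code below only builds a hypothetical request without sending it; the endpoint path, authentication scheme, and payload fields are assumptions that should be verified against the API reference for your version:

```python
import json

def build_scaleout_request(controller, vs_uuid, token):
    """Construct (but do not send) a scale-out API call for a virtual
    service. The URL path, header names, and body fields below are
    illustrative assumptions, not a confirmed API contract."""
    url = f"https://{controller}/api/virtualservice/{vs_uuid}/scaleout"
    headers = {
        "Content-Type": "application/json",
        # session/token auth details vary by deployment; this is a placeholder
        "Authorization": f"Bearer {token}",
    }
    body = json.dumps({"uuid": vs_uuid})
    return url, headers, body

url, headers, body = build_scaleout_request("ctrl.example.com", "vs-1234", "t0k3n")
```

The same request shape would apply to a hypothetical scale-in action with a different path segment.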
Note
n A virtual service’s maximum instantiation count can be below the maximum number of SEs in
its group.
The NSX Advanced Load Balancer supports the automatic rebalancing of virtual services across
the SE group based on the load levels experienced by each SE. Auto-rebalance can migrate or
scale in/out virtual services to rebalance the load and, in a write-access cloud, this can also result
in SEs being provisioned or de-provisioned if required.
Note Auto-rebalancing applies only if elastic HA has been selected for the SE group.
To configure auto-rebalancing for an SE group, see How to Configure Auto-rebalance using NSX Advanced Load Balancer CLI.
Scaling Out
The following are the steps to manually scale a virtual service out when NSX Advanced Load
Balancer is operating in 'write access' mode:
1 Open the Virtual Service window for the virtual service that you want to scale.
2 Hover the cursor over the name of the virtual service to open the Virtual Service Quick Info
popup message.
3 Click the Scale Out button to scale the virtual service out to one additional SE per click, up to four SEs.
4 If available, NSX Advanced Load Balancer will attempt to use an existing Service Engine. If
none is available or matches reachability criteria, it may create a new SE.
5 In some environments, NSX Advanced Load Balancer may prompt for additional information to
create a new Service Engine, such as additional IP addresses.
The prompt Currently Scaling Out displays the progress while the operation is taking place.
Note
n If a virtual service scales out across multiple SEs, each SE independently performs server health monitoring of the pool’s servers.
Scaling out a virtual service can take from a few seconds to a few minutes. The timing depends on whether an additional SE already exists or a new SE with the required network and disk resources must be created.
Scaling In
The following are the steps to manually scale in a virtual service when NSX Advanced Load Balancer is operating in 'write access' mode:
1 Open the Virtual Service Details page for the virtual service that you want to scale.
2 Hover the cursor over the name of the virtual service to open the Virtual Service Quick Info
popup message.
4 Select Service Engine to scale in. In other words, select the Service Engine that should be
removed from supporting this virtual service.
5 Scale the virtual service by one Service Engine per SE selection, to a minimum of one Service
Engine.
The prompt Currently scaling in displays the progress while the operation is taking place.
Note While scaling in, existing connections are given thirty seconds to complete. Remaining connections to the SE are then closed and must be restarted.
Migrating
The Migrate option allows smooth migration of a virtual service from one Service Engine to another. During this process, the primary SE scales out to the new SE and begins to send it new connections. After thirty seconds, the old SE is deprovisioned from supporting the virtual service.
Note Existing connections to the migration’s source Service Engine are given thirty seconds to complete before that SE is deprovisioned for the virtual service. Remaining connections to the SE are closed and must be restarted.
Additional Information
This section provides additional information for specific infrastructures.
In Direct Secondary Return mode, the return traffic uses the VIP as the source IP and the secondary SE’s MAC as the source MAC. ARP Inspection must be disabled in the network; that is, the network layer must not inspect, block, or learn the MAC of the VIP from these packets. Otherwise, the MAC-IP mapping will flap. This is the case in a few environments, such as OpenStack and Cisco ACI, where tunnel mode is required.
In the L3 scale-out with BGP, this is not applicable since the ARP is done for the next hop, which
is the upstream router, which in turn does the ECMP to individual SEs. The return traffic uses
respective SE’s MAC as source MAC and VIP as source IP. The router handles this as expected.
Throughput
The term throughput appears throughout the NSX Advanced Load Balancer web interface and
documentation. Every vendor has a slightly different definition of throughput, which may even
change depending on context.
In NSX Advanced Load Balancer, throughput is defined based on the traffic paths through virtual services, pools, and SEs:
(Diagram: Client <-> Service Engine <-> Server, with the four traffic paths labeled A through D as listed below.)
n A - Client request to SE
n B - SE request to server
n C - SE response to client
n D - Server response to SE
Throughput Calculations
NSX Advanced Load Balancer calculates throughput as follows:
n The SE throughput number includes all virtual services and pools hosted by the SE.
Throughput numbers may differ between a virtual service and its pool due to network or
application headers, SSL offload, compression, caching, multiplexing, or many other features.
Policies are composed of one or more rules, which are match-action pairs. A rule can contain many matches or many actions. Multiple policies can be configured for a virtual service. Policies can alter the default behavior of the virtual service or, if the matching criteria are not met, remain benign for a particular connection, request, or response.
Policies are not shared. They are defined on a per-virtual-service basis and intended to be simple
point-and-click functionality.
Policies are configured within the Policies tab of the virtual service editor.
Prioritizing Policies
Policies can be used to recreate similar functionality found elsewhere within the NSX Advanced
Load Balancer. For instance, a policy can be configured to generate an HTTP redirect from HTTP
to HTTPS. The same functionality can be configured within the Secure-HTTP application profile.
Since a policy is more specific than a general purpose profile, the policy takes precedence.
If the profile is set to redirect HTTP to HTTPS via port 443, and the policy is set to redirect HTTP to
HTTPS on port 8443, the client will be sent to port 8443. (See Execution Priority for more on this
topic.)
A virtual service can have multiple policies defined, one for each type. Once defined, policies are implemented in the following order of priority:
1 Network security policy
2 HTTP security policy
3 HTTP request policy
4 HTTP response policy
5 DataScripts policy
6 Access policy
For instance, a network policy that is set to discard traffic takes precedence over an HTTP request
policy set to modify a header. Since the connection is discarded, the HTTP request policy will not
execute. Each policy type can contain multiple rules, which in turn can be prioritized to process in
a specified order. This is done by moving the policies up or down in the ordered list within the NSX
Advanced Load Balancer UI.
Match - Action
All policies are made up of match and action rules, which are similar in concept to if-then logic. Administrators set match criteria for connections, requests, or responses to the virtual service. NSX Advanced Load Balancer executes the configured actions for all traffic that meets the match criteria.
A single match with multiple entries is treated as an “or” operation. For instance, if a single match
type has the criteria “marketing”, “sales”, and “engineering” set, then the match is true if the path
contains “marketing”, or “sales”, or “engineering”.
If a rule has multiple matches configured, all match types must be true for the action to be executed. For instance, if a rule configures both a path match and an HTTP method match, both must be true; within each of the two match types, only one entry must be true for that match type to be true (for the HTTP method, the client request must be GET or HEAD). Multiple rules can be
configured for each policy and they can be configured to occur in a specific order. If no match is
applied, the condition is automatically met and the actions will be executed for each connection as
per the policy type.
Matches against HTTP content are case-insensitive. This is true for header names and values,
cookies, host names, paths, and queries. For HTTP policies, the NSX Advanced Load Balancer
compares Uniform Resource Identifier (URI) matches against the decoded HTTP URI. Many
browsers and web servers encode human-readable format content differently. For instance, a
browser’s URI encoding might translate the dollar character “$” to “%24”. The Service Engine (SE)
translates the “%24” back to “$” before evaluating it against the match criteria.
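The evaluation rules above (decoded, case-insensitive comparison; “or” within a match type; “and” across match types) can be sketched in Python; the function names and the dictionary representation of a request are illustrative:

```python
from urllib.parse import unquote

def match_type_true(entries, value):
    """A single match type with multiple entries is an "or": true if any
    entry appears in the decoded, lowercased value."""
    decoded = unquote(value).lower()
    return any(entry.lower() in decoded for entry in entries)

def rule_matches(rule, request):
    """A rule with multiple match types is an "and": every configured match
    type must be true. A rule with no matches is always considered true."""
    return all(
        match_type_true(entries, request.get(field, ""))
        for field, entries in rule.items()
    )

rule = {"path": ["marketing", "sales", "engineering"], "method": ["GET", "HEAD"]}
req = {"path": "/Sales%20Team/index.html", "method": "GET"}
```

Here the percent-encoded, mixed-case path still matches the "sales" entry, mirroring the SE's decode-then-compare behavior.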
Create a Policy
The virtual service editor defines policies consisting of one or more rules that control the flow of
requests through the virtual service.
1 Policy Type: First select the policy type to add by selecting one of the following categories:
a HTTP Security: HTTP security policies perform defined actions such as allow/deny, redirect to HTTPS, or respond with a static page.
b Network Security: Network security policies act on layer 4 traffic, for instance allowing, denying, or rate limiting connections.
c HTTP Request: HTTP request policies allow manipulation of HTTP requests, content switching, and customized actions based on client HTTP requests.
d HTTP Response: HTTP response policies evaluate responses from the server, and can be
used to modify the server’s response headers. HTTP response policies are most often used
in conjunction with HTTP request policies to provide an Apache Mod_ProxyPass capability
for rewriting a website’s name between the client and the server.
e DataScripts: DataScripts execute when various events are triggered by data plane traffic.
2 Create Rule: Create a new rule by clicking the 'plus' icon and specify the following information
for the new rule:
a Enable or Disable: By default, the new rule is enabled. Toggle the green slider icon to gray to disable the rule so that it has no effect on traffic.
b Rule Name: Specify a unique name for the rule in the Rule Name field, or leave the default
system generated name in place.
c Logging: Select the Logging checkbox to enable logging for this rule. When
enabled, a log is generated for any connection or request that matches the rule’s match
criteria. If a virtual service is already set to log all connections or requests, this checkbox
will not create a duplicate log. Client logs are flagged with an entry for the policy type and
rule name that matched. When viewing the policy’s logs within the logs tab of the virtual
service, the logs will be part of the significant logs option unless the specific connection or
request is an error, in which case it can be displayed under the default non-significant logs
filter.
d Match: Add one or more matches using the Add New Match drop-down menu. The match
options vary depending on the context defined by the policy type to be created. If a rule is
not given a match, all connections or requests are considered true or matched.
e Action: Add one or more actions from the drop-down list to be taken when the matches
are true. The available options vary depending on the type of rule to be created.
f Save Rule: Click the Save Rule button to save the new rule.
3 Ordering: Rules are enforced in the order in which they appear in the list. For instance, if
you add a rule to close a connection based on a client IP address, followed by a rule that
redirects an HTTP request from that IP address to a secure HTTP (HTTPS) connection, the
NSX Advanced Load Balancer closes the connection without forwarding the request. Alter the
order in which rules are applied by clicking the up and down arrow icons until the rules are in
the desired order.
Network Security
The following table lists both the available network security match criteria and the configurable
actions that can occur when a match is made.
Note This feature is supported for IPv6 in NSX Advanced Load Balancer.
Actions
Logging: Selecting the checkbox causes the NSX Advanced Load Balancer to log when an action has been invoked.
Allow or Deny: Explicitly allow or deny any matched traffic. Denied traffic is issued a reset
(RST), unless the system is under a volumetric or denial of service attack, in which case
the connection can be silently discarded.
Rate Limit: Restrict clients from opening greater than the specified number of
connections per second in the Maximum Rate field. Clients that exceed this number
will have their excessive connection attempts silently discarded. If burst size is enabled,
clients can burst above the maximum rate, if they have not recently been opening
connections. This feature can be applied to TCP or UDP. All clients that match the match
criteria will be treated as one bucket. For instance, if no match is defined, any and all
IP addresses will increment the maximum rate counter. Throttling occurs for all newly connecting clients. To enable per-client throttling, see the Advanced tab of the virtual service. The documentation for that page also contains a more robust description of connection throttling.
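The rate-limit behavior described, a maximum rate with an optional burst for clients that have not recently opened connections, resembles a token bucket. A minimal sketch with illustrative parameter names (the actual implementation is not documented here):

```python
class TokenBucket:
    """All matching clients share one bucket: `rate` new connections per
    second refill the bucket, up to `burst` tokens. A connection attempt
    that finds no token is silently discarded."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # excess connection attempt is dropped
```

A client that has been idle accumulates tokens and can briefly burst above the configured rate, matching the behavior described above.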
HTTP Method: The method used by the client request. The match is true if any one
of the methods that an administrator specifies is true.
The options available are GET, HEAD, POST, PUT, DELETE, OPTIONS, TRACE, CONNECT, PATCH, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, and UNLOCK.
Path: The path or a group of paths. Paths do not need to begin with a forward slash
( / ). For comparison purposes, the NSX Advanced Load Balancer automatically
omits any initial slash specified in the match field.
Example: https://www.avinetworks.com/marketing/index.html?a=1&b=2
Query: A query or a group of queries. Do not add the leading ‘?’ or ‘&’ characters
to a match.
Example: https://www.avinetworks.com/marketing/index.html?a=1&b=2
Bot Management: Select the option to configure the bot classification result.
Actions
Logging: Selecting the checkbox causes the NSX Advanced Load Balancer to log when an action has been invoked.
Close Connection: Matched requests cause the NSX Advanced Load Balancer to close the TCP connection that received the request by sending a FIN. Many browsers open multiple connections, which are not closed unless requests sent over those connections also trigger a close connection action.
Send Response: The NSX Advanced Load Balancer can serve an HTTP response using HTTP status code 200 (success), 403 (forbidden), or 404 (file not found).
A default page is rendered by the browser for each of these status codes. Instead,
you can also upload a custom HTML file. This file can have links to images or other
files, but only the initial HTML file is stored and served through the Send Response.
Note You can upload any type of file as a local response. It is recommended to configure the local file using the UI. To update the local file using the API, base64-encode the file out of band and use the encoded format in the API call.
Rate limit: Specify the maximum number of new connections, HTTP requests,
bandwidth in Mbps, and/or concurrent open connections from/for/by clients.
Note The HTTP security policy option is available on the NSX Advanced Load Balancer UI. To create or edit an existing HTTP security policy, navigate to Applications > Virtual Services, select the desired virtual service, and select the HTTP Security option.
HTTP Request
HTTP request policies allow manipulation of HTTP requests. These requests can be modified
before they are forwarded to the server, used as a basis for content switching, or discarded.
HTTP Response
HTTP response policies evaluate responses from the server, and can be used to modify the
response headers of the server. These policies are generally used to rewrite redirects or in
conjunction with HTTP request policies, to provide a client to server name-rewriting capability
similar to Apache’s ProxyPass.
DataScripts
DataScripts are executed when various events are triggered by data plane traffic. A single rule can
execute different code during different events.
Access
Access policy can be provided for SAML, PingAccess, JWT or LDAP access.
SAML
n SSO Policy: Specify the SSO policy attached to the virtual service.
n SSO URL: Specify the SAML single sign-on URL to be programmed on the IDP.
n Session Cookie Name: Specify the HTTP cookie name for the authenticated session.
n SSL Key: Select the SSL key from the drop-down list.
PingAccess
n SSO Policy: Specify the SSO policy attached to the virtual service.
Create an SSO policy by clicking the Create SSO Policy button. Specify the following details:
n Type — Select the SSO policy type from the drop-down list. The following are the options
available in the drop-down list.
n JWT
n LDAP
n OAUTH/OIDC
n PingAccess
n SAML
n Default Auth Profile: Specify the auth profile to use for validating users.
JWT
n SSO Policy: Select the SSO Policy attached to the virtual service.
n Token Location: Select the token location as Authorization Header or URL Query.
LDAP
n SSO Policy: Specify the SSO policy attached to the virtual service.
n Basic Realm: When a request to authenticate is presented to a client, the basic realm indicates
to the client which realm they are accessing.
n Connections Per Server: Specify the number of concurrent connections to the LDAP server by a single basic auth LDAP process.
n Cache Size: Specify the size of the LDAP basic auth credentials cache used on the data plane.
n Bind Timeout: Specify the LDAP basic auth default bind timeout enforced on connections to the LDAP server.
n Request Timeout: Specify the LDAP basic auth default login or group search request timeout enforced on connections to the LDAP server.
n Connect Timeout: Specify the LDAP basic auth default connection timeout enforced on connections to the LDAP server.
n Reconnect Timeout: Specify the LDAP basic auth default reconnect timeout enforced on connections to the LDAP server.
n Servers Failover Only: Check this box to indicate that LDAP basic auth uses multiple LDAP
servers in the event of a fail-over only.
Policy Tokens
In more complex scenarios, an administrator can capture data from one location and apply it to
another location. The NSX Advanced Load Balancer supports the use of variables and tokens,
which can be used for this purpose.
Variables can be used to insert dynamic data into the modify header actions of HTTP request
and HTTP response policies. Two variables namely, $client_ip and $vs_port are supported. For
instance, a new header called origin_ip can be added to an HTTP request, with its value set to
$client_ip, to insert the source address of the client as the value of the header.
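The variable substitution described above can be sketched as follows; the helper function is illustrative, and only the $client_ip and $vs_port variable names come from the text:

```python
def expand_variables(template, client_ip, vs_port):
    """Substitute the two supported variables into a header-value template
    for a modify-header action. This is a sketch, not the product's code."""
    return (template
            .replace("$client_ip", client_ip)
            .replace("$vs_port", str(vs_port)))

# e.g. an origin_ip header carrying the client's source address
value = expand_variables("$client_ip", "203.0.113.7", 443)
```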
Tokens can be used to find and reorder specific parts of the HTTP hostname or path. For
instance, it is possible to rewrite the original request http://support.avinetworks.com/docs/
index.htm to http://www.avinetworks.com/support/docs/index.htm. Tokens can be used
for HTTP host and HTTP path. The tokens are derived from the original URL. Token delimiter in
host header is “.” and in the URL path it is “/”.
Example 1
Original request URL: http://support.avinetworks.com/docs/index.htm
Host tokens: host[0] = support, host[1] = avinetworks, host[2] = com
Path tokens: path[0] = docs, path[1] = index.htm
In the example above, the client request is broken down into HTTP host and HTTP path. Each
section of the host and path are further broken down according to the “.” and “/” delimiters
for host and path. A host or path token can be used in an action to rewrite a header, a
host, or a path. In the example, a redirect of http://www.avinetworks.com/support/docs/
index.htm would send requests to docs.avinetworks.com/support/docs/index.htm
In addition to using the host[0], host[1], host[2] convention, a colon can be used to denote
whether the system must continue till the end of the host or path. For instance, host[1:] implies to
use avinetworks, followed by any further host fields. The result will be avinetworks.com. This is
especially useful in a path, which may contain many levels. Tokens can also specify a range, such
as path[2:5]. Host and path tokens can also be abbreviated as 'h' and 'p', such as h[1:] and p[2].
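The host and path tokenization and slicing described above can be sketched as follows; the function name is illustrative:

```python
from urllib.parse import urlsplit

def host_path_tokens(url):
    """Split the host on "." and the path on "/" to produce token lists,
    mirroring the host[i]/path[i] convention described above."""
    parts = urlsplit(url)
    host = parts.hostname.split(".")
    path = [p for p in parts.path.split("/") if p]
    return host, path

host, path = host_path_tokens("http://support.avinetworks.com/docs/index.htm")
# host[1:] keeps everything from "avinetworks" onward, as in the text;
# here a constant label is combined with sliced tokens to build a new host
rebuilt = ".".join(["docs"] + host[1:]) + "/" + "/".join(path)
```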
In the rewrite URL, redirect, and rewrite location header actions, the host component of the URL
can be specified in terms of tokens, the tokens can be constants strings or tokens from existing
host and path component of the URL.
Example 2
New URL: region.avinetworks.com/france/paris/index.htm
Example 3
Request URL: http://www1.avinetworks.com/sales/foo/index.htm?auth=true
Host tokens: host[0] = www1, host[1] = avinetworks, host[2] = com
Path tokens: path[0] = sales, path[1] = foo, path[2] = index.htm
Query: auth=true
n If the host header contains an IPv4 address and not a FQDN, and the rewrite URL or redirect
action refers to a host token, for instance, host[0], host[1,2], and so on, the rule action is
skipped and the next rule is evaluated.
n If the host header or path contains fewer tokens than are referenced in the action, the rule action is skipped. For instance, if the host name in the host header has only three tokens (host name www.avinetworks.com, where host[0] = www, host[1] = avinetworks, host[2] = com) and the action refers to host[4], the rule action is skipped.
n If the location header in the HTTP response contains an IPv4 address and the response policy
action is rewrite location header which refers to host tokens, the rule action is skipped.
n The rule engine does not recognize octal or hexadecimal IPv4 addresses in the host address. That is, the rule action is not skipped if the host header has an octal or hexadecimal IPv4 address and the action references a host token such as host[1].
n If an HTTP request arrives with multiple host headers, the first host header will be used.
n Per the RFC, HTTP 1.1 requests must have a non-empty host header. If an empty host header is encountered, the NSX Advanced Load Balancer returns a 400 ‘Bad Request’ HTTP response.
The following are the steps to configure regex matching and tokens:
n Create a string group object with the list of regex patterns you want to use for URI matching.
Note the use of regex captures (the string pattern within the parentheses) which are needed to
generate the regex tokens.
n Navigate to Templates > Groups > String Groups. Click CREATE. Specify the string name and
type.
n Under Policies, create a matching rule with the Criteria field selected as Regex pattern
matches and attach the necessary string group(s).
n You can now use regex captures as tokens in the corresponding action rule. On the GUI, you
can use SG_RE[] to access these tokens. These tokens are obtained from the first string in the
string group list that matched with the request Path.
Example
Regex string: ^/hello/(.*)/world/(.*)$
For a request path such as /hello/foo/world/bar, the two regex captures yield the tokens foo and bar.
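Assuming regex captures map positionally to the SG_RE[] tokens, the matching step can be sketched as follows; the function name and example path are illustrative:

```python
import re

def regex_tokens(patterns, request_path):
    """Return the capture groups from the first pattern in the string group
    that matches the request path; these become the SG_RE[] tokens."""
    for pattern in patterns:
        m = re.match(pattern, request_path)
        if m:
            return list(m.groups())
    return []

tokens = regex_tokens([r"^/hello/(.*)/world/(.*)$"], "/hello/foo/world/bar")
```

Only the first matching string in the group contributes tokens, as noted above.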
The NSX Advanced Load Balancer Controller has a single interface used for various control plane
related tasks such as:
n Communication between the Controller and third-party entities for automation, observability,
etc.
n Communication between the Controller and third-party Hardware Security Modules (HSMs).
Starting with NSX Advanced Load Balancer version 21.1.3, an additional interface is available on the Controller to isolate the communication for some of the above entities.
Additionally, any static routes to be added to the Controller interfaces should now leverage the
cluster configuration instead of /etc/network/interfaces subsystem. These configurations will
be persisted across the Controller reboot and upgrade.
Note This feature is supported only on the Controllers deployed in vCenter and enables the use
of the additional interface only for HSMs.
Classification
The following labels are available for classification:
n MGMT — This signifies general management communication for the Controller access, as well as
the Controller initiating communication, for instance, logging, third party API calls, and so on.
n HSM — This is used to classify communication between the Controller and an HSM device.
With this classification, the traffic can be moved from the default, main interface to the additional
interface, if configured.
Note
n MGMT and SE_SECURE_CHANNEL can only be performed by the primary (eth0) interface.
Operating Model
By default (prior to 21.1.3), the Controller is provisioned with one interface when being deployed in
vCenter (during installation).
1 Shut down the Controller virtual machine and add the interface through vCenter UI.
2 On powering ON the Controller virtual machine, NSX Advanced Load Balancer will recognize
the additional interface, and additional configuration through the NSX Advanced Load
Balancer CLI can be performed.
Note Hotplug of interfaces (addition to the virtual machine without powering off the virtual
machine) is not supported.
For the interface to be recognized within the NSX Advanced Load Balancer Controller software, and for further classification through labels to be performed, the NSX Advanced Load Balancer ‘cluster’ configuration model must be used.
1 Shut down the Controller and add the new interface via the vCenter.
2 Power on the Controller. The new interface will be visible as eth1, while the primary interface
will always be visible as eth0 in the Cluster configuration:
| mac_address | 00:50:56:81:cb:45 |
| mode | STATIC |
| ip | 10.102.64.201/22 |
| gateway | 10.102.67.254 |
| labels[1] | MGMT |
| labels[2] | SE_SECURE_CHANNEL |
| labels[3] | HSM |
| interfaces[2] | |
| if_name | eth1 |
| mac_address | 00:50:56:81:c0:89 |
+-----------------+----------------------------------------------+
In the output above:
n For the second interface (index 2), the IP and label has been added.
n The label HSM has been removed from the primary interface (index 1).
Note Nodes that are already configured with additional interfaces and routes can be added to a cluster.
For more information on configuring cluster, see API - Configuring the NSX Advanced Load
Balancer Controller Cluster.
1 Remove the configuration (mode, IP, labels) from the second interface (eth1).
Starting with NSX Advanced Load Balancer version 21.1.3, you should not edit the /etc/network/interfaces file. All configuration (IP, static routes) should be done through the cluster configuration.
| if_name | eth1 |
| route_id | 1 |
+--------------------+----------------------------------------------+
[admin:controller]: cluster> save
n For the discovery of the secondary interface, the Controller nodes need to be stand-alone, i.e.,
not part of a cluster. This is a one-time operation for NSX Advanced Load Balancer to discover
the additional interface.
n Once the secondary interfaces have been discovered, the Leader node can be used to form
the cluster, as detailed in Deploying an NSX Advanced Load Balancer Controller Cluster.
n After the cluster is fully formed, the secondary interface configuration for all the nodes can be
performed.
Note
n There is no requirement to log in to the node for the interface discovery to succeed. The only
requirement is for the interface to be in a connected state in the virtual machine and for the
Controller to have been powered on.
n The cluster formation and the secondary interface configuration should be performed as
separate steps.
The cluster configuration and runtime configuration contain the IP information for the cluster. If the IP address of a leader or follower node changes for any reason (for instance, due to DHCP), the following script must be run to update the cluster configuration; the cluster will not function properly until this is done. This applies to single-node deployments as well as cluster deployments.
To repair the cluster configuration after IP address of a Controller node has changed, run the
change_ip.py script.
Note
n The change IP script only changes the NSX Advanced Load Balancer cluster configuration.
It does not change the IP address of the host or the virtual machine on which Controller
services are running. For instance, it does not update the /etc/network/interfaces file in
a VMware-hosted Controller. You should change the IP address for the virtual machine in the
vApp properties in VMware.
Script Options
Caution Before running the script, check to make sure new IPs are working on all nodes
and are reachable across nodes. If one or more IPs are not accessible, the script makes a best-
effort update, though there is no guarantee that the cluster will be back in sync upon restoring
connectivity.
The script can be run on the Controller node whose management IP address changed, or on
another Controller node within the same cluster.
The script must be run on one of the nodes that is in the cluster. If the script is run on a node that
is not in the cluster, the script fails.
-i ipaddr: Specifies the new IP address of the node on which the script is run.
-m subnet-mask: If the subnet also changed, use this option to specify the new subnet mask.
Specify the mask in the 255.255.255.0 format.
-g gateway-ipaddr: If the default gateway also changed, use this option to specify the new
gateway.
change_ip.py -i ipaddr
This command is run on node 10.10.25.81. Since no other nodes are specified, this is assumed to
be a single-node cluster (just this Controller).
In the following example, the node’s default gateway also has changed.
To update Controller IP information for a cluster, use a command string such as:
Example:
This command is run on node 10.10.25.81, which is a member of a 3-node cluster that also contains
nodes 10.10.25.82 and 10.10.25.83.
The script can be run on any of the nodes in the cluster. The following example is run on node
10.10.25.82:
Note If change_ip.py fails, use recover.py to convert the nodes to single nodes and re-create the
3-node cluster. For more information, see Recover a Non-Operational Controller Cluster.
To verify that the system is functioning properly, go to the Controller Nodes page and ensure that
all nodes are in the CLUSTER_ACTIVE state.
1 Change the IP address of each Controller node within the cluster to the new IP by manually
editing the network scripts on the host and changing the interface configuration.
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address <ipv4 address>
netmask 24
gateway <ipv4 gw>
3 Ensure that the new Controller IP addresses are reachable in the network from the other
Controller nodes.
For a 3-node cluster deployment, you need to change the IPs on all the Controllers, and then run
the command as shown below from any Controller node to update the Controller IP information
for the cluster.
where,
n -i ipaddr: Specifies the new IP address of the node on which the script is run.
n -m subnet-mask: If the subnet is also changed, use this option to specify the new subnet.
Specify the mask in the following format: 255.255.255.0
n -g gateway-ipaddr: If the default gateway is also changed, use this option to specify the new
gateway.
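Putting the documented options together, an invocation might look like the following. The addresses here are placeholders for illustration; run the script from a node that is still a member of the cluster:

```shell
# Hypothetical addressing for illustration only.
NEW_IP="10.10.25.84"       # -i: new IP of the node the script runs on
NEW_MASK="255.255.255.0"   # -m: only needed if the subnet mask also changed
NEW_GW="10.10.25.1"        # -g: only needed if the default gateway also changed
CMD="change_ip.py -i $NEW_IP -m $NEW_MASK -g $NEW_GW"
echo "$CMD"
```

The -m and -g options can be omitted when only the IP address itself changed.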
Note The Controller cluster should come back up with the new IPs.
Considerations
The following considerations should be noted:
n The interface names, eth0, eth1, and so on, and discovered MAC addresses are static, and
cannot be modified.
n The primary (eth0) interface cannot be modified, apart from the labels.
n All labels need to be part of some interface, and a label cannot be repeated in more than one
interface.
n For the additional interface, only Static IP mode is supported. DHCP is not supported.
n The Access Controls are applied only to the primary interface. It is recommended to continue
to use external firewall settings to restrict access, for instance, inbound SSH to the additional
interface.
n You should not edit the /etc/network/interfaces file. All configuration, such as IP addresses
and static routes, should be done via the cluster configuration.
n The secondary interfaces should remain in a connected state within the virtual machine.
Disconnecting them may lead to the interface being removed if the virtual machine is rebooted.
Auto Scaling
This topic explains the various autoscaling capabilities provided by NSX Advanced Load Balancer
and their integration with public cloud ecosystems, such as Amazon Web Services (AWS) and
Microsoft Azure.
For public cloud ecosystems that provide elastic autoscaling capabilities for workloads, NSX
Advanced Load Balancer uses these capabilities and can manage their behavior based on the
metrics it collects.
n Scaling the virtual service to more (or fewer) SEs, so that traffic can be serviced by more (or
fewer) load-balancing instances as the NSX Advanced Load Balancer SEs reach (or fall below)
capacity.
n Scaling the application server pool to more (or fewer) application instances, so that traffic can
be serviced by a right-sized back end pool.
Both types of scaling can be performed automatically through pre-set NSX Advanced Load
Balancer policies, based on load and capacity measurements done using NSX Advanced Load
Balancer.
Ecosystem Integration
NSX Advanced Load Balancer supports the above-mentioned autoscaling features in all
ecosystems. This section discusses integration considerations for the following public clouds:
n Microsoft Azure
In the default configuration, a virtual service is placed on a single SE. However, if the SE is not
sufficient to handle traffic for the virtual service, the virtual service can be scaled out to additional
SEs. In this case, more than one SE handles traffic for the virtual service.
In the case of automated scaling of virtual service placements, one of the following SE parameters
can be used to configure thresholds beyond which a virtual service should be scaled out to a new
SE, or scaled back into fewer SEs:
For more information on virtual service scaling, see Virtual Service Scaling.
As public cloud infrastructure is charged based on usage or uptime, it is important to have enough
capacity based on usage, along with the ability to scale resources on-demand.
Public clouds provide autoscaling features. The templates for autoscaling servers can be used to
spawn virtual machines and configure them. The scale-out or scale-in can either be done manually
or based on certain load conditions.
In this manner, NSX Advanced Load Balancer distributes traffic requests to the requisite virtual
machine instances.
The scaling in or scaling out of the pool is controlled based on policies associated with the
autoscale group, and the Controller does not influence this operation.
An autoscale policy is created on the Controller and is associated with the pool. This autoscale
policy contains parameters and thresholds for triggering the scale-out and scale-in event, based
on a wide range of metrics and alerts that NSX Advanced Load Balancer supports.
When the threshold is crossed, the Controller communicates with the public cloud to initiate a
scale-out or a scale-in operation and also manages the pool membership.
A key advantage of this method is the ability to use a much richer set of metrics for performing
scaling decisions, as compared to the metrics available with the public cloud.
When NSX Advanced Load Balancer decides to scale in, servers that are already down are
selected first for scale-in.
Also, the down servers can be garbage collected by NSX Advanced Load
Balancer, after a configured delay. To configure the delay parameter, use the
delay_for_server_garbage_collection under the autoscale_policy options.
AZ Aware Autoscaling
While scaling in, NSX Advanced Load Balancer autoscale will ensure the balance of servers across
different AWS availability zones. For instance, if there are four servers in a pool (two servers each
in AZ1 and AZ2), and scale-in happens for two servers, you will be left with two servers, one in
each AZ.
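The zone-balancing selection described above can be sketched as follows. The greedy drain-the-largest-zone heuristic and the data shapes are illustrative assumptions, not the Controller's actual algorithm:

```python
from collections import defaultdict

def pick_scale_in_candidates(servers, count):
    """Choose `count` servers to remove, always draining the zone that
    currently holds the most servers, so the remaining servers stay
    balanced. `servers` is a list of (name, availability_zone) tuples."""
    by_az = defaultdict(list)
    for name, az in servers:
        by_az[az].append(name)
    removed = []
    for _ in range(count):
        # Always remove from the zone with the most remaining servers.
        biggest = max(by_az, key=lambda z: len(by_az[z]))
        removed.append(by_az[biggest].pop())
    return removed

# Four servers, two per AZ; scaling in two leaves one per AZ.
servers = [("s1", "AZ1"), ("s2", "AZ1"), ("s3", "AZ2"), ("s4", "AZ2")]
removed = pick_scale_in_candidates(servers, 2)
remaining_azs = sorted(az for name, az in servers if name not in removed)
```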
n Amazon Web Services (autoscaling managed by public cloud): NSX Advanced Load Balancer
Integration with AWS Auto Scaling Groups
n Amazon Web Services (autoscale managed by NSX Advanced Load Balancer): Configuration
and Metrics Collection on NSX Advanced Load Balancer for AWS Server Autoscaling
n Microsoft Azure: Virtual Machine Scale Set integration with NSX Advanced Load Balancer
Auto scaling groups in AWS are referred to as external autoscaling groups in NSX Advanced
Load Balancer because they are an entity external to NSX Advanced Load Balancer. With this
feature, more fine-grained scaling policies can be applied, based on NSX Advanced Load Balancer
Controller collected metrics and AWS CloudWatch metrics.
n Layer 4 metrics
n Layer 7 metrics
n Insight metrics
For the complete list of metrics collected by NSX Advanced Load Balancer, see Metric List.
Infrastructure-related metrics for server instances such as CPU usage, network usage, and so on,
are fetched from CloudWatch by NSX Advanced Load Balancer. The metrics collected from AWS
are as follows:
n vm_stats.avg_cpu_usage
n vm_stats.avg_disk_read
n vm_stats.avg_disk_write
n vm_stats.avg_disk_io
n vm_stats.avg_net_usage
Autoscale Policy
An autoscale policy is a set of rules to configure and trigger an alert using the above-mentioned
metrics. To create or choose an existing autoscale policy, navigate to Applications > Pools and
click the edit icon for the desired pool. Select the AutoScale Policy option in the Settings tab to
add a new autoscale policy or to use an existing one.
Setting the value for use-external-asg to true instructs the Autoscale Manager to start
orchestrating scale-in or scale-out activities for the associated pool. The value for the
use-external-asg flag is set to true for the default autoscale launch configuration
(default-autoscalelaunchconfig).
To enable the checkbox for Use External ASG, navigate to Applications > Pools, and click the
edit icon for the desired pool. Select the desired name from the drop-down list of the Autoscale
Launch Config field in the Settings tab.
1 Navigate to Applications > Pools. Click Create Pool and select the required cloud.
2 Select the Create Autoscale Policy option from the Autoscale Policy drop-down list.
3 On the New AutoScale Policy page, provide the desired name, and the minimum and maximum
instances for the pool. You can also provide Server Garbage Collection Delay details.
4 Select the required alerts for server autoscaling from the Alerts drop-down list in the Scale-
Out section. Also specify the Cooldown Period, Adjustment Step, and Intelligent Margin.
5 Select the required alerts for server autoscaling from the Alerts drop-down list in the Scale-In
section. Also specify the Cooldown Period, Adjustment Step, and Intelligent Margin.
a Cooldown Period: During this period no new scale-out event is triggered to allow the
previous scale-out to complete.
c Intelligent Margin: Minimum extra capacity as a percentage of load used by the intelligent
scheme. Scale-out is triggered when the available capacity is less than this margin,
whereas scale-in is triggered when the available capacity is more than this margin.
7 Navigate to Applications > Pools and select the drop-down menu for AutoScale Launch
Config to create a new autoscale launch configuration. Specify the name for the autoscale
launch configuration.
8 Click Save.
9 Create a virtual service for the configured pool with an autoscaling group.
Navigate to Templates > Events to check alerts generated for scale-out or scale-in events.
3 CONFIG_UPDATE: The pool was updated and the scaled-in pool member is deleted.
Note Burstable Performance Instance types are not supported for CPU-utilization-based
autoscaling. Burstable Performance Instance types are AWS instance types with names starting
with T2, such as T2.micro and T2.large.
An NSX Advanced Load Balancer pool is a group of back end servers having similar
characteristics, or serving or hosting similar applications. In the NSX Advanced Load Balancer-
AWS integration, a pool is scaled in or out to reflect actions taken by AWS on the corresponding
AWS auto scaling group. These actions are governed by AWS preconfigured policies and criteria.
Scaling out is adding one or more instances to the auto scaling group and scaling in is removing
one or more instances from the auto scaling group.
For more information about auto scaling groups on AWS, see Auto Scaling groups.
Background
NSX Advanced Load Balancer supports AWS auto scaling groups for configuring pools for a
virtual service.
The NSX Advanced Load Balancer AWS cloud connector periodically polls AWS auto scaling
group membership information and updates the corresponding pool server membership if changes
are required.
For instance, if a new server (instance) is added to an AWS auto scaling group being used as an
NSX Advanced Load Balancer pool, NSX Advanced Load Balancer will automatically update the
pool membership to include the newly provisioned server. Conversely, upon deletion of a server
(instance) from the AWS auto scaling group, NSX Advanced Load Balancer will delete this server
from its pool membership. This enables seamless, elastic and automated management of back end
server resources without any operator intervention or configuration updates.
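The poll-and-reconcile step can be sketched as a simple set difference. The instance IDs and function shape below are illustrative, not the cloud connector's actual implementation:

```python
def reconcile_pool(pool_members, asg_instances):
    """Compute the membership changes needed so the pool mirrors the
    AWS auto scaling group. Both arguments are sets of instance IDs."""
    to_add = asg_instances - pool_members       # newly provisioned servers
    to_remove = pool_members - asg_instances    # servers AWS terminated
    return to_add, to_remove

# Hypothetical poll result: i-0aaa was terminated, i-0ccc was launched.
pool = {"i-0aaa", "i-0bbb"}
asg = {"i-0bbb", "i-0ccc"}
to_add, to_remove = reconcile_pool(pool, asg)
```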
Note
n NSX Advanced Load Balancer supports SNS and SQS features for auto scaling groups. If SNS
and SQS are not in use, the default polling method is used. For more information, see Using
the SNS-SQS feature for Auto Scaling Groups.
Prerequisites
n The AWS user or IAM role needs read access to auto scaling groups and the instances therein.
For more information, see IAM Role Setup for Installation into AWS.
1 Log in to the UI. Navigate to Applications > Pools. Click Create Pool. Select the cloud and
specify the pool name and accept the defaults for the remaining field options.
2 Click Next to view server options. Select the Auto Scaling Groups option from Select Servers.
3 Select auto scaling group instances already configured on AWS for that specific cloud from the
Auto Scaling Group drop-down list.
4 After selecting an instance or server from the list, NSX Advanced Load Balancer will fetch the
instance or server information from AWS.
5 Click Save. The UI returns to the Pools page to display the Auto Scaling group members.
By default, the flag for using the SNS/SQS option is set to false on the NSX Advanced Load
Balancer Controller. In the default polling method, the Controller polls every ten minutes to
synchronize information regarding ASG membership changes. If SNS and SQS features are not
enabled, set the polling interval to one minute. This value can be configured between 60 seconds
(1 minute) to 1800 seconds (30 minutes). When using the SNS-SQS feature, increase the polling
interval value from 1 minute to 10 minutes (recommended), as the cloud connector notifies the
Controller instantly when ASG membership changes.
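The documented interval bounds and recommendation can be summarized in a small helper. This is an illustrative sketch only; the 600-second value is the recommended 10 minutes when SNS/SQS notifications make aggressive polling unnecessary:

```python
def effective_poll_interval(requested_seconds, sns_sqs_enabled):
    """Clamp the ASG poll interval to the documented 60-1800 second range.
    With SNS/SQS enabled, membership changes are pushed to the Controller
    instantly, so the recommended 10-minute interval is returned."""
    if sns_sqs_enabled:
        return 600
    return max(60, min(1800, requested_seconds))
```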
Log in to the Controller’s shell prompt and follow the steps as shown below.
Set use_sns_sqs to false and change asg_poll_interval to 60 seconds when SNS/SQS is not
in use.
Configuring on AWS
AWS users should have all the required privileges to perform various actions required to enable
and use SNS-SQS services. For the list of privileges provided, check the following JSON files:
n avicontroller-sns-policy.json
n avicontroller-sqs-policy.json
n avicontroller-asg-notification-policy.json
Follow the steps mentioned in IAM Role Setup for Installation into AWS to associate these policies
with AWS users.
Alerts
NSX Advanced Load Balancer synchronizes information about the Auto Scaling groups configured
on AWS. If any of the Auto Scaling groups are deleted on the integrated AWS, a corresponding
alert and event are generated on NSX Advanced Load Balancer. For more information, see
Alerts when an Auto Scaling Group is deleted on AWS.
Alerts can be checked on the NSX Advanced Load Balancer user interface under the Pools tab. To
check the alert, navigate to Applications > Pools, and select the desired pool.
Navigate to the Alerts tab to check the alerts for the Auto Scaling group deletion.
n Deleted ASG
Note
n If multiple auto scaling groups are deleted on AWS, there will be only one alert for the specific
pool (of which the ASG is a part).
n If the deleted auto scaling group is part of multiple pools, NSX Advanced Load Balancer
generates alerts for each pool.
n While reconfiguring the same auto scaling group on NSX Advanced Load Balancer, the
information regarding associated members is available to reuse.
Legacy high availability (HA) includes support for gateway monitoring. NSX Advanced Load
Balancer SEs taking on either active or standby roles for a virtual service in a legacy HA
deployment can perform gateway monitoring. By default, gateway monitoring is off until an IP
address to monitor is furnished for the cloud. When an IP address is furnished, all legacy HA SE
groups within the cloud perform gateway monitoring.
Note
n If the external gateway monitor fails, the SEs are removed from placement.
n This applies even if a single monitor in one VRF fails while other monitors are succeeding.
Issue: The gateway is not reachable from the active NSX Advanced Load Balancer SE but is
reachable from the standby SE.
Description: If only the standby SE for a virtual service can reach the gateway, the active SE
becomes standby, and the standby SE becomes active. When gateway reachability is restored on
the (now) standby SE, it stays in the standby state.

Issue: The gateway is not reachable from the standby NSX Advanced Load Balancer SE but is
reachable from the active SE.
Description: The active SE for the virtual service remains active, and the standby SE remains in
the standby state. When gateway reachability is restored on the standby SE, the SE stays in the
standby state.

Issue: The active NSX Advanced Load Balancer SE loses gateway connectivity after the standby
SE has lost gateway connectivity.
Description: The active SE for the virtual service remains active, and the standby SE remains in
the standby state.

Issue: Both the active NSX Advanced Load Balancer SE and the standby SE simultaneously lose
gateway reachability.
Description: The active SE for the virtual service remains active, and the standby SE remains in
the standby state.

Issue: With multiple gateway monitors, at least one gateway is not reachable from the active NSX
Advanced Load Balancer SE, but all gateways are reachable from the standby SE.
Description: All the virtual services on the current active SE are switched to the standby SE.
The NSX Advanced Load Balancer DNS virtual service primarily implements the following
functionality:
A DNS service (represented in green in the figure below) is hosted on the leftmost Service Engine.
If a matching entry is found, the DNS virtual service responds to the DNS query. If a matching
entry is not found and pool members are configured, the DNS virtual service forwards the request
to the back-end DNS pool servers (represented in blue).
The DNS virtual service supports active/active, active/standby, and N+M modes, with health
monitoring support added for DNS virtual services configured in active/standby mode.
NSX Advanced Load Balancer can be configured with more than one DNS virtual service.
(Figure: an SE-local DNS service, with back-end DNS servers and application virtual services
App-1 and App-2.)
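The lookup flow above can be sketched as follows. The tuple-based return values and the behavior when no pool is configured are illustrative assumptions, not the product implementation:

```python
def handle_dns_query(fqdn, local_records, pool_configured):
    """Answer from the local/GSLB table when a matching entry exists;
    otherwise forward to the back-end DNS pool if one is configured;
    otherwise fail (return a DNS error code or drop, per configuration)."""
    if fqdn in local_records:
        return ("respond", local_records[fqdn])
    if pool_configured:
        return ("forward_to_pool", None)
    return ("fail", None)
```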
An NSX Advanced Load Balancer DNS virtual service acts as an authoritative DNS server for one
or more subdomains (zones), and all analytics and client logs are supported.
In this scenario, the corporate name server delegates one or more subdomains to the NSX
Advanced Load Balancer DNS service, which in turn acts as an authoritative DNS server for
them. In the example shown below, avi.acme.com and gslb.acme.com are the subdomains.
Typically, the corporate name server will have a NS record pointing to the NSX Advanced Load
Balancer DNS service (10.100.10.50). Client queries for these subdomains are sent directly to NSX
Advanced Load Balancer, whereas all DNS requests outside of acme.com are instead sent to the
external “.com” name server.
(Figure: The corporate DNS server keeps local records for acme.com, with NS records delegating
avi.acme.com and gslb.acme.com to the NSX Advanced Load Balancer DNS service at
10.100.10.50. All DNS requests outside of acme.com are sent to the external ".com" name server.)
In this scenario, NSX Advanced Load Balancer DNS is the primary name server for a domain, with
pass-through to the corporate name server. It responds for any zone it has been configured to
support. DNS queries that do not match NSX Advanced Load Balancer DNS records pass through
(proxy) to the corporate DNS servers via a virtual service pool created for that purpose. If
members of that pool receive DNS requests outside the corporate domain (acme.com in this case),
they send them to their external ".com" name server.
n DNS Policy
n Adding Custom A Records to an NSX Advanced Load Balancer DNS Virtual Service
n Clickjacking Protection
n The Logs tab provides detailed information about DNS queries from clients, including FQDN,
query-type, significant errors, responses such as IP addresses, CNAME, SRV, etc.
n Non-significant logs should be enabled with caution, since a large number of DNS queries
typically hit a DNS service, and this would result in too many log entries.
n Categorization of non-significant logs is also very important. If certain errors are typical in a
certain deployment, these errors should be excluded from significant logs.
n DNS health monitors in Health tab can be configured to monitor the health of DNS servers that
are configured as DNS service pool members. For complete information, refer to DNS Health
Monitor section.
Note
n Detailed analytics is not available for TCP.
n NO-DATA may occasionally appear when a metric tile is selected. This typically implies “Not
Applicable”. For instance, a GSLB service name may not be applicable for the DNS proxy or a
static entry.
Additional Features
The following are the additional features:
n Domain filtering drops requests for any domains that are not explicitly configured on the DNS
service (the default setting is to allow all domains).
n With full TCP proxy, client spoofing is prevented for TCP DNS queries. SYN flood attacks are
mitigated.
n You can respond to failed DNS requests by returning a DNS error code or dropping the
packets.
NSX Advanced Load Balancer supports text (TXT) records and mail exchanger (MX) records.
n TXT record: This is used to store text-based information for the configured domain.
n Under the DNS Virtual Services section, click the drop-down list to either choose a pre-
defined DNS virtual service or create a virtual service.
For more information on the configuration steps for DNS virtual services, see Configuring a Local
DNS Virtual Service on All Active Sites that host DNS.
n Specify relevant information for all fields in the editor. Enable the checkbox for the Active
Member option and click Save and Set DNS Virtual Services.
n Select from one or more DNS virtual services in the drop-down list and click Save to enable it
for the GSLB configuration.
The screenshot below illustrates the case where there are no DNS virtual services to choose from.
An active GSLB site does not require a DNS, though it may be preferred, as described in the next
section.
1 You must have at least two geographically separated active GSLB sites. For each site,
configure DNS to a scalable SE group.
2 If only one active site is defined, ensure there is at least one geographically remote cloud. On
that remote cloud, configure DNS for GSLB on a scalable SE group. Also, define all virtual
services to support the mission-critical applications running on the original location.
Configuring DNS
This section explains how to configure DNS on NSX Advanced Load Balancer.
Settings Tab
1 Under the Profiles section, select System DNS profile option in the Application Profile drop-
down list.
2 Choose a suitable profile for the network settings under TCP/UDP Profile, such as System-
UDP-Per-Pkt.
3 Under the Service Port section, enter 53 for the Services field.
4 Under the Pool section, choose a relevant IPv4, IPv6, or IPv4 + IPv6 pool from the drop-down
list or click Create Pool to configure a new pool. On creating a new pool, navigate to the
Servers tab to enter the IPv4, IPv6, or IPv4v6 member information.
1 Click Create DNS Record to create a new DNS record. You can create the DNS record for both
IPv4 and IPv6 traffic.
2 Specify a qualified domain name under FQDN. For Type, choose the record type from the
drop-down list.
3 Under the A and AAAA Record section, enter the IP for the A record in the IPv4 Address field
and the IP for the AAAA record in the IPv6 Address field. You can enter either one of them or
both. Multiple IP addresses (both IPv4 and IPv6) can be configured as well.
Note
n FQDN resolution of pool member objects is supported only through SE.
To enable DNS resolution on the SE, dns_resolution_on_se must be set in the cloud configuration.
The Service Engine needs a DNS resolver configuration for resolving FQDNs from the Service
Engine. For this, a DNSResolver object must be configured in the cloud configuration. Only one
DNSResolver object is supported per cloud.
The following is the CLI command for configuring the DNS resolver in cloud:
n fixed_ttl: If configured, this value is used for refreshing the DNS entries. It overrides both
received_ttl and min_ttl; the entries are refreshed on fixed_ttl even when received_ttl is less
than fixed_ttl.
n min_ttl: If configured, this TTL overrides the TTL from responses whenever the received TTL is
less than min_ttl; effectively, the TTL used is max(received_ttl, min_ttl).
n If the resolution needs to be done through the SE and the DNS resolvers are updated through
DHCP, you can enable only dns_resolution_on_se and do not have to configure dns_resolver
in the cloud.
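The interaction between fixed_ttl, min_ttl, and the received TTL described above can be sketched as follows. This is an illustrative sketch, not the product code:

```python
def refresh_ttl(received_ttl, min_ttl=None, fixed_ttl=None):
    """TTL used to refresh a resolved entry: fixed_ttl overrides both the
    received TTL and min_ttl; otherwise the received TTL is raised to at
    least min_ttl when one is configured."""
    if fixed_ttl is not None:
        return fixed_ttl
    if min_ttl is not None:
        return max(received_ttl, min_ttl)
    return received_ttl
```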
Configuring DNS Nameservers on Service Engine for Client Log Streaming and
for External Health Monitor
If the DNS resolver in the cloud is configured as per the steps in Configuring DNS Resolution on
SE, /etc/systemd/resolved.conf for the management network and /etc/netns/{namespace-
name}/resolv.conf for all VRFs on the SE virtual machine are updated.
DNS Policy
This section describes DNS policies.
A DNS policy consists of rules, each of which has match targets and actions. The match targets
are attributes of a DNS request, such as the query type, the query domain name, the DNS
transport protocol used, and the client IP originating the request. The rule actions range from
security actions, such as closing the connection, to response actions, such as generating an empty
response.
A DNS policy can be referenced by a Layer-4 DNS virtual service (L4 DNS VS), a virtual service
which has an application profile type DNS. A single DNS virtual service can refer to a single DNS
policy.
(Figure: DNS virtual service foo referencing DNS policy A.)
The DNS rule engine is executed only when a DNS request has been received and parsed
successfully.
A DNS policy rule is a hit for a DNS request if all the match targets of the rule evaluate to TRUE.
If any match target does not evaluate to TRUE, the rule is not a hit, and the next rule of the
current policy is evaluated (or, if there are no more rules in the current policy, the first rule of the
next policy).
Note For a DNS query, prior to lookups into the database for GSLB and static DNS entries, the
DNS policy rules are applied first.
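The first-hit evaluation order described above can be sketched as follows. The rule and request data shapes are illustrative assumptions, not the actual policy object model:

```python
def first_matching_action(policies, request):
    """Walk policies and rules in order; a rule is a hit only when ALL of
    its match targets evaluate to True, and the first hit's action wins."""
    for policy in policies:
        for rule in policy:
            if all(match(request) for match in rule["matches"]):
                return rule["action"]
    return None  # no hit: continue to GSLB / static-entry lookup

# Hypothetical policy: empty response for AAAA, drop internal.* over UDP.
policies = [[
    {"matches": [lambda r: r["qtype"] == "AAAA"], "action": "empty_response"},
    {"matches": [lambda r: r["transport"] == "UDP",
                 lambda r: r["fqdn"].startswith("internal.")],
     "action": "drop"},
]]
action = first_matching_action(policies, {"qtype": "A", "transport": "UDP",
                                          "fqdn": "internal.acme.com"})
```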
Matches
This section explains rule matching in the DNS policy, with match targets and actions.
Client IP
The match target matches the client IP address of the DNS query against a configured set of IP
addresses. The IP address match can be against an implicit set of IP addresses, IP address ranges
and IP prefixes, and/or a set of IP address group objects.
n IS IN evaluates to TRUE, if the client IP of the current DNS request is in the configured set of IP
addresses.
n IS NOT IN evaluates to TRUE, if the client IP of the current DNS request is not in the configured
set of IP addresses.
Use Case
A client IP match target can be used to block DNS queries emanating from a particular
geographical area hosting a bad bot. This is achieved by configuring a client IP rule match using
the IP addresses associated with the particular geographical area, and a rule action of drop.
client IP match
match: IS IN
addresses:
  202.192.0.1
  202.192.0.2
ranges:
  begin: 192.168.71.1, end: 192.168.71.20
  begin: 192.168.73.15, end: 192.168.73.31
prefixes:
  addr: 192.0.31.1, mask: 16
ip groups:
  ipgroup: ip_group_foo (IP group foo)
  ipgroup: ip_group_bar (IP group bar)
The query name match operation supports the following match operations:
n Begins With evaluates to TRUE, if the query domain name of the current DNS request begins
with any of the strings in the configured set of strings.
n Does Not Begin With evaluates to TRUE, if the query domain name of the current DNS request
does not begin with any of the strings in the configured set of strings.
n Contains evaluates to TRUE, if the query domain name of the current DNS request contains any
of the strings in the configured set of strings.
n Does Not Contain evaluates to TRUE, if the query domain name of the current DNS request
contains none of the strings in the configured set of strings.
n Ends With evaluates to TRUE, if the query domain name of the current DNS request ends with
any of the strings in the configured set of strings.
n Does Not End With evaluates to TRUE, if the query domain name of the current DNS request
does not end with any of the strings in the configured set of strings.
n Equals evaluates to TRUE, if the query domain name of the current DNS request equals any of
the strings in the configured set of strings.
n Does Not Equal evaluates to TRUE, if the query domain name of the current DNS request
equals none of the strings in the configured set of strings.
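A sketch of these string-set match semantics follows; the operation names, lower-case comparison, and data shapes are illustrative assumptions:

```python
def query_name_matches(op, fqdn, patterns, negate=False):
    """Query-domain-name match semantics: a positive operation is True when
    ANY configured string matches; the 'Does Not ...' variants (negate=True)
    simply invert the result."""
    fqdn = fqdn.lower()
    ops = {
        "begins_with": lambda p: fqdn.startswith(p),
        "contains":    lambda p: p in fqdn,
        "ends_with":   lambda p: fqdn.endswith(p),
        "equals":      lambda p: fqdn == p,
    }
    hit = any(ops[op](p.lower()) for p in patterns)
    return not hit if negate else hit
```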
Use case
A query domain name match target can be used to block DNS queries for certain domains that are
not served by the DNS virtual service. This is achieved by configuring a rule with query domain
name match using the desired unavailable domain names, and a rule action of drop.
query_domain_names:
internal.
dmz.
admin.
Query Type
This match target matches the type of the DNS query against a configured set of query types
(record types A, AAAA, CNAME, and so on). The query type match operation supports the
following match operations:
n Is In evaluates to TRUE if the query type of the current DNS request is in the configured set of
query types.
n Is Not In evaluates to TRUE if the query type of the current DNS request is not in the
configured set of query types.
Use case
A query type match target can be used to block DNS queries not served by the DNS virtual
service. This is achieved by configuring a rule query type match using the desired available query
types, and a rule action of drop. Thus, any query type not in the configured set will be dropped.
match: Is Not In
query_types:
A
AAAA
CNAME
SRV
n Is In evaluates to TRUE, if the transport of the current DNS request is in the configured set of
transport protocols.
n Is Not In evaluates to TRUE, if the transport of the current DNS request is not in the configured
set of transport protocols.
Use Case
A query transport protocol match target can be used to redirect DNS queries from UDP to TCP.
This is achieved by configuring a rule with a transport protocol match on UDP, and a rule action
of Empty Response with the truncation (TC) bit set. Any query over UDP then receives an empty
response with the TC bit set, which prompts the client to retransmit the query over TCP.
match: Is In
protocol:
UDP
Rate Limiting
Through the REST API or UI, a rate-limiting match can be configured that specifies the maximum
number of DNS requests allowed in a period of time.
Actions
n Access Control: This rule action allows a UDP DNS query to be processed or dropped. If the
query arrives over TCP, it can be allowed or dropped, with the additional option of resetting
the connection.
Use Case: If a rule match is configured to block DNS queries of types other than A, AAAA, CNAME,
and SRV, the drop action is used in the rule.
n Custom Response: This action allows a custom response to be sent for a DNS query. The
response can be controlled to set the response code (RCODE), the Authority (AA) bit, and the
Truncation (TC) bit. Through the REST API and CLI, resource record sets are supported,
permitting custom data to be inserted into the Answer, Authority, and Additional sections of
the DNS response body. For details on RRsets, see RFC 1034, Domain Names - Concepts and
Facilities.
Use Cases: If the DNS entries in the DNS virtual service do not support AAAA records for IPv6
addresses, a rule match can be configured to catch AAAA DNS queries, with a response action
that generates an empty NOERROR response. This hints the client to reissue the query for an
A record. Custom A, CNAME, NS, and/or AAAA record types can be returned.
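The AAAA use case above can be sketched as follows. The DNS_TABLE and respond names are hypothetical illustrations of the Service Engine's decision, not product code:

```python
# Illustrative sketch: deciding the response for an AAAA query when the DNS
# table holds only A records for the FQDN (hypothetical table and names).
DNS_TABLE = {"www.acme.com": {"A": ["10.0.0.10"]}}

def respond(fqdn, qtype):
    records = DNS_TABLE.get(fqdn, {})
    if qtype == "AAAA" and "AAAA" not in records and "A" in records:
        # Empty NOERROR answer hints the client to reissue the query as type A.
        return {"rcode": "NOERROR", "answers": []}
    if qtype in records:
        return {"rcode": "NOERROR", "answers": records[qtype]}
    return {"rcode": "NXDOMAIN", "answers": []}
```

For example, `respond("www.acme.com", "AAAA")` returns an empty NOERROR response, while an A query for the same FQDN returns the configured address.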
n GSLB Site Selection: The policy of the DNS virtual service can be configured so that a rule
match overrides the usual GSLB-algorithm-based response. As a result of a match, one site is
chosen from a set of IP addresses (each homed at a different GSLB site) that share a common
site_name tag. If none of these are available, up to 16 fallback sites can be identified as an
alternative. If none of the fallback sites are healthy and the is_preferred_site Boolean is
TRUE, the DNS virtual service picks a site based on the configured GSLB algorithm. For more
information, see GSLB Site Selection with Fallback and Preferred-Site Options.
Use case: Imagine three GSLB sites, one each in Paris, Lyons, and Antwerp. With the geolocation
algorithm of NSX Advanced Load Balancer, a client situated in France, close to the French-Belgian
border, is directed to Antwerp. However, since the client is in France, the GSLB-site-selection
action can instead return the VIP of a site having the site name “FRANCE”.
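The selection order described above can be sketched in Python. All names are hypothetical, and the is_preferred_site behavior is simplified into a single fallback to the GSLB algorithm:

```python
# Illustrative sketch: prefer healthy VIPs tagged with the matched site_name,
# then the fallback sites in order, then the configured GSLB algorithm.
def select_site(members, site_name, fallback_site_names, gslb_algorithm_pick):
    healthy = [m for m in members if m["healthy"]]
    for name in [site_name] + list(fallback_site_names):
        tagged = [m for m in healthy if m["site_name"] == name]
        if tagged:
            return tagged[0]
    # No tagged or fallback site is healthy: use the GSLB algorithm instead.
    return gslb_algorithm_pick(healthy)
```

With the Paris/Lyons/Antwerp example, a rule matching French clients with site_name "FRANCE" would return a French VIP even when geolocation alone would pick Antwerp.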
n Pool and Pool Group Selection: An NSX Advanced Load Balancer DNS virtual service can be
configured with backend DNS servers. Routing requests to backend DNS servers other than
the members of the default pool requires defining a pool or pool-group selection action.
This feature is supported in the NSX Advanced Load Balancer REST API, CLI, and UI.
Use Case: It might be necessary to resolve a subset of DNS queries using DNS infrastructure
residing in a remote cloud. The NSX Advanced Load Balancer DNS virtual service can conditionally
load balance such queries to one of the DNS servers in the remote cloud.
n Rate Limiting: An NSX Advanced Load Balancer DNS virtual service can be configured to limit
the rate at which DNS requests are accepted. You can specify the number of requests that are
allowed in a given time period. The action can be configured as either DROP or Report Only.
If DROP is configured, the traffic exceeding the rate limit is dropped by the virtual service.
If Report Only is configured, such traffic is passed through but marked as significant
logs in the application logs.
Note Rate limiting is configured from the NSX Advanced Load Balancer REST API or CLI, not
the UI.
Use Cases: DNS request rate limiting can be used to ensure quality of service and improved
security.
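The rate-limiting behavior above can be sketched as a fixed-window counter. The class and the return values (PROCESS, DROP, PROCESS_AND_LOG) are hypothetical labels for the DROP and Report Only actions, not the product's implementation:

```python
# Illustrative sketch: allow up to `count` DNS requests per `period` seconds,
# then either drop excess traffic or pass it through flagged for logging.
class DnsRateLimiter:
    def __init__(self, count, period, action="DROP"):
        self.count, self.period, self.action = count, period, action
        self.window_start, self.seen = 0.0, 0

    def admit(self, now):
        if now - self.window_start >= self.period:
            # Start a new counting window.
            self.window_start, self.seen = now, 0
        self.seen += 1
        if self.seen <= self.count:
            return "PROCESS"
        # Over the limit: drop, or pass through as a significant log entry.
        return "DROP" if self.action == "DROP" else "PROCESS_AND_LOG"
```

A limiter of 2 requests per second drops the third request arriving within the same window, but admits traffic again once a new window starts.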
Procedure
1 Edit the DNS virtual service to which the policy rules are to apply.
2 Click the green button. NSX Advanced Load Balancer displays Rule 1 by default, which can be
changed as required.
allow:
allow: '(true | false) # Field Type: Optional'
reset_conn: '(true | false) # Field Type: Optional'
gslb_site_selection:
fallback_site_names: <string>
is_site_preferred: '(true | false) # Field Type: Optional'
site_name: '<string> # Field Type: Optional'
pool_switching:
pool_group_uuid: '<string> # Field Type: Optional'
pool_uuid: '<string> # Field Type: Optional'
response:
authoritative: '(true | false) # Field Type: Optional'
rcode: '<choices: DNS_RCODE_NOERROR | DNS_RCODE_NXDOMAIN | DNS_RCODE_YXDOMAIN |
DNS_RCODE_REFUSED | DNS_RCODE_FORMERR | DNS_RCODE_YXRRSET | DNS_RCODE_NOTIMP |
DNS_RCODE_NOTZONE | DNS_RCODE_SERVFAIL | DNS_RCODE_NXRRSET | DNS_RCODE_NOTAUTH>
# Field Type: Optional'
resource_record_sets:
- resource_record_set:
cname:
cname: '<string> # Field Type: Required'
fqdn: '<string> # Field Type: Optional'
ip_addresses:
- ip_address:
addr: '<string> # Field Type: Required'
type: '<choices: V4 | V6 | DNS> # Field Type: Required'
nses:
- ip_address:
addr: '<string> # Field Type: Required'
type: '<choices: V4 | V6 | DNS> # Field Type: Required'
nsname: '<string> # Field Type: Required'
ttl: '<integer> # Field Type: Optional'
type: '<choices: DNS_RECORD_DNSKEY | DNS_RECORD_AAAA | DNS_RECORD_A | DNS_RECORD_OTHER
| DNS_RECORD_AXFR | DNS_RECORD_SOA | DNS_RECORD_MX | DNS_RECORD_SRV | DNS_RECORD_HINFO
| DNS_RECORD_RRSIG | DNS_RECORD_OPT | DNS_RECORD_ANY | DNS_RECORD_PTR | DNS_RECORD_RP
| DNS_RECORD_TXT | DNS_RECORD_CNAME | DNS_RECORD_NS>
# Field Type: Optional'
section: '<choices: DNS_MESSAGE_SECTION_QUESTION | DNS_MESSAGE_SECTION_ADDITIONAL
| DNS_MESSAGE_SECTION_AUTHORITY | DNS_MESSAGE_SECTION_ANSWER>
# Field Type: Optional'
truncation: '(true | false) # Field Type: Optional'
For example, if there is a static record of type A for foo.com on the SE, and a DNS policy is
configured stating that if a query matches foo.com, the action is pool or pool group switching,
then the response comes from the pool or pool-group server rather than the record present
on the SE.
Another use case is supporting record types such as TXT, NS, and so on, which are not yet
supported in GSLB services, by redirecting those queries to the backend server based on DNS
policies.
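As a sketch of this precedence (all names hypothetical), a matching pool-switching rule is consulted before the SE's static records:

```python
# Illustrative sketch: a matching DNS policy rule with a pool-switching action
# wins over a static record configured on the SE.
def resolve(fqdn, policy_rules, static_records):
    for rule in policy_rules:
        if fqdn.endswith(rule["match"]) and rule["action"] == "pool_switch":
            return ("POOL", rule["pool"])        # proxied to the selected pool
    if fqdn in static_records:
        return ("STATIC", static_records[fqdn])  # served from the SE's table
    return ("NXDOMAIN", None)
```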
NSX Advanced Load Balancer supports custom DNS profiles to communicate with the DNS provider.
With this feature, you can use your own DNS provider, and NSX Advanced Load Balancer uses the
allowed usable domains as required.
n Navigate to Templates > Profiles > Custom IPAM/DNS and click Create to upload the script.
n Provide a DNS name and upload the script as the code that handles DNS records, for instance,
updating and deleting DNS records.
In this example, the following parameters are used while uploading the script to NSX Advanced
Load Balancer:
n username: admin
These parameters (provider-specific information) are used to communicate with DNS providers.
Note The above parameters are provided for example purposes only. The parameters passed to
the script depend on the method used in the script.
n Choose the Custom DNS profile created in the previous step and provide the additional
provider-specific parameters.
The additional parameters provided above and the usable domains are optional fields. However,
they help in provisioning the virtual service automatically with the required attributes.
While provisioning the virtual service, the option to choose among multiple domains is available
under Applicable Domain Name.
n Click Create to create a new virtual service that uses the Custom DNS profile for
registering the domain automatically. Specify the following details for the virtual service:
n Application Domain Name: Use the usable domain provided while creating the custom
DNS profile.
n Once the virtual service creation is successful, the FQDN is registered with the virtual
service.
n The same domain is registered at the DNS provider site as well.
"
Custom DNS script
"""
import socket
import os
import getpass
import requests
import inspect
import urllib
import json
import time
if not fqdn:
    print("Not valid FQDN found %s, returning" % record_info)
    return
# REST API
api = WebApiClient(username, passkey, domain)
api.disable_ssl_chain_verification()
param_dict = {
    # DNS Record Information
    "dns_record_id": dns_record_id,
    "fqdn": fqdn,
    "type": "CNAME" if record_type == 'DNS_RECORD_CNAME' else "A",
    "ttl": str(ttl),
    "content": cname if record_type == 'DNS_RECORD_CNAME' else ip,
    "site": "ALL"
}
# Send request to register the FQDN; failures can be raised and the VS creation will fail
rsp = api.send_request("Update", param_dict)
if not rsp:
    err_str = "ERROR:"
    err_str += " STATUS: " + api.get_response_status()
    err_str += " TYPE: " + str(api.get_error_type())
    err_str += " MESSAGE: " + api.get_error_message()
    print(err_str)
    raise Exception("DNS record update failed with %s" % err_str)
n username: <username>
n password: <password>
[admin-cntrl1]: > configure customipamdnsprofile custom-dns-profile
[admin-cntrl1]: customipamdnsprofile>
cancel Exit the current submode without saving
do Execute a show command
name Name of the Custom IPAM DNS Profile.
new (Editor Mode) Create new object in editor mode
no Remove field
save Save and exit the current submode
script_params (submode)
script_uri Script URI of form controller://ipamdnsscripts/<file-name>
show_schema show object schema
tenant_ref Help string not found for argument
watch Watch a given show command
where Display the in-progress object
[admin-cntrl1]: customipamdnsprofile>
In the above configuration snippet, the custom_dns_script.py script is uploaded with the
following attributes.
n Name: custom-dns-profile
n Username: dnsuser
Use the following syntax for uploading your script: controller://ipamdnsscripts/<script name>
+-------------------+-----------------------------------------------------------+
| Field             | Value                                                     |
+-------------------+-----------------------------------------------------------+
| uuid              | customipamdnsprofile-c12faa8a-f0eb-4128-a976-98d30391b9f2 |
| name              | custom-dns-profile                                        |
| script_uri        | controller://ipamdnsscripts/custom_dns_script.py          |
| script_params[1]  |                                                           |
|   name            | username                                                  |
|   value           | dnsuser                                                   |
|   is_sensitive    | False                                                     |
|   is_dynamic      | False                                                     |
| script_params[2]  |                                                           |
|   name            | password                                                  |
|   value           | <sensitive>                                               |
|   is_sensitive    | True                                                      |
|   is_dynamic      | False                                                     |
| tenant_ref        | admin                                                     |
+-------------------+-----------------------------------------------------------+
Use the command configure ipamdnsproviderprofile <profile name> to create the IPAM DNS
provider profile.
Note Parameters used for the profile configuration depend on the environment.
n Name: api_version
n Value: 2.2
Note
n AWS Cloud in NSX Advanced Load Balancer supports AWS DNS by enabling
route53_integration in the cloud configuration and does not require this DNS profile
configuration.
n A separate DNS provider configuration (as described in the 'DNS Configuration' section below)
is required only for cases where AWS provides the infrastructure service for other clouds.
For more information refer to Service Discovery Using IPAM and DNS.
DNS Configuration
This section explains DNS configuration.
To use AWS as the DNS provider, one of the following types of credentials is required:
1 Identity and Access Management (IAM) roles: Set of policies that define access to resources
within AWS.
2 AWS customer account key: Unique authentication key associated with the AWS account.
If you prefer to use an IAM role, follow the steps below:
1 If you use the IAM role method to define access for an NSX Advanced Load Balancer
installation in AWS, use the steps in the IAM Role Setup for Installation into AWS article
to set up the IAM roles before you start to deploy the NSX Advanced Load Balancer Controller
EC2 instance.
2 In the Type field, select AWS Route 53 DNS and select the Use IAM Roles button.
If you prefer to use an access key, follow the steps below:
n In the Type field, select AWS Route 53 DNS, select Use Access Keys, and enter the
following information:
n Select the AWS region into which the VIPs will be deployed.
n Select Access AWS through Proxy, if access to AWS endpoints requires a proxy server.
n A drop-down of available domain names associated with that VPC is displayed. Configure at
least one domain for the virtual service’s FQDN registration with Route 53.
n Click Save.
NSX Advanced Load Balancer supports integration with third-party IPAM providers, such as NS1,
TCPWave, and so on, for providing automatic IP address allocation for virtual services.
Along with the script, you can add the following key-value parameters, which are used by the
functions in the script to communicate with the IPAM provider:
n username — <username>
n server — 1.2.3.4
These parameters (provider-specific information) are used to communicate with IPAM providers.
Note
n The above parameters are provided for example purposes only. The parameters passed to the
script depend on the method used in the script.
n The file-name must have a .py extension and conform to PEP8 naming convention.
Configuring using UI
1 Navigate to Templates > Profiles > Custom IPAM/DNS, and click Create.
n username: <username>
n server: 1.2.3.4
n wapi_version
n network_view: default
n dns_view: default
4 Click Save
Note Parameters used for the profile configuration depend on the environment.
5 Add usable subnets if required. If set, while provisioning the virtual service, the option
to choose among multiple usable subnets is available under Network for VIP Address
Allocation, as shown in the Step 4: Create a Virtual Service section. If not set, all the
available networks/subnets from the provider are listed.
Note You cannot configure this step using the UI in version 21.1.1.
1 To associate the custom IPAM option with the cloud, navigate to Infrastructure > Cloud, and
use the custom IPAM profile created in Step 2.
1 Use the configure cloud <cloud name> command to attach the IPAM profile to the cloud.
b Network for VIP Address Allocation: Select the network/subnets for IP allocation
(mandatory only through the UI).
Configuring using UI
2 Once the virtual service creation is successful, the IP is allocated for the virtual service.
An IPAM record is also created with the provider for the same.
1 Use the configure vsvip <vsvip name> command and the configure virtualservice <vs
name> command to create the VsVip and virtual service respectively.
Python Script
1 The script should have all the required functions and exception classes defined, else
the system displays the following error message during IPAM profile creation: “Custom
IPAM profile script is missing required functions/exception classes
{function_or_exception_names}.”
a TestLogin
b GetAvailableNetworksAndSubnets
c GetIpamRecord
d CreateIpamRecord
e DeleteIpamRecord
f UpdateIpamRecord
a CustomIpamAuthenticationErrorException
b CustomIpamRecordNotFoundException
c CustomIpamNoFreeIpException
d CustomIpamNotImplementedException
e CustomIpamGeneralException
6 A separate Python script will also be provided to validate the provider script.
Note Example scripts for various IPAM providers are being developed and will be made available
once done.
"""
This script allows the user to communicate with custom IPAM provider.
Required Functions
------------------
1. TestLogin: Function to verify provider credentials, used in the UI during IPAM profile
configuration.
2. GetAvailableNetworksAndSubnets: Function to return available networks/subnets from the
provider.
3. GetIpamRecord: Function to return the info of the given IPAM record.
4. CreateIpamRecord: Function to create an IPAM record with the provider.
5. DeleteIpamRecord: Function to delete a given IPAM record from the provider.
6. UpdateIpamRecord: Function to update a given IPAM record.
class CustomIpamAuthenticationErrorException(Exception):
    """
    Raised when authentication fails.
    """
    pass

class CustomIpamRecordNotFoundException(Exception):
    """
    Raised when the given record is not found.
    """
    pass

class CustomIpamNoFreeIpException(Exception):
    """
    Raised when no free IP is available.
    """
    pass

class CustomIpamNotImplementedException(Exception):
    """
    Raised when the functionality is not implemented.
    """
    pass

class CustomIpamGeneralException(Exception):
    """
    Raised for other types of exceptions.
    """
    pass
def TestLogin(auth_params):
    """
    Function to validate user credentials. This function is called from the IPAM profile
    configuration UI page.

    Args
    ----
    auth_params: (dict of str: str)
        Parameters required for authentication. These are script parameters provided
        while creating a Custom IPAM profile.
        Eg: auth_params can have the following keys
            server: Server IP address of the custom IPAM provider
            username: self explanatory
            password: self explanatory
            logger_name: logger name

    Returns
    -------
    Return True on success

    Raises
    ------
    CustomIpamNotImplementedException: if this function is not implemented.
    CustomIpamAuthenticationErrorException: if authentication fails.
    """
    # 1. Check all credential params are given, else raise an exception.
    # 2. Raise an exception if test login fails.
        V4_V6.

    Returns
    -------
    subnet_list: (list of dict of str : str)
        Each dict has 5 keys: network, v4_subnet, v6_subnet, v4_available_ip_count,
        v6_available_ip_count
            network (str): network id/name
            v4_subnet (str): V4 subnet
            v6_subnet (str): V6 subnet
            v4_available_ip_count (str): V4 free IP count of the network/v4_subnet
            v6_available_ip_count (str): V6 free IP count of the network/v6_subnet
        v4_available_ip_count and v6_available_ip_count are optional; currently this
        function returns the first 3 keys. Returning counts is TBD.

    Raises
    ------
    None
    """
    # 1. Return all the available networks and subnets.
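Under the contract described in the docstring above, a minimal provider-agnostic sketch of this function might look like the following; the static subnet data stands in for a real provider query, and the network names are hypothetical:

```python
# Illustrative sketch of GetAvailableNetworksAndSubnets: a real implementation
# would query the IPAM provider using auth_params instead of static data.
def GetAvailableNetworksAndSubnets(auth_params, ip_type):
    # ip_type is one of V4, V6, or V4_V6 (counts keys omitted, as noted above).
    subnet_list = [
        {"network": "net-1", "v4_subnet": "10.10.10.0/24", "v6_subnet": ""},
        {"network": "net-2", "v4_subnet": "10.10.20.0/24", "v6_subnet": ""},
    ]
    if ip_type == "V6":
        return [s for s in subnet_list if s["v6_subnet"]]
    return subnet_list
```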
When an NSX Advanced Load Balancer DNS virtual service has a pass-through pool (of back-end
servers) configured and the FQDNs are not found in the DNS table, it proxies these requests to
the pool of servers. An exception is when the NSX Advanced Load Balancer is configured with an
authoritative domain, and the queried FQDN is within the authoritative domain, in which case an
NXDOMAIN is returned.
The NSX Advanced Load Balancer DNS virtual service includes a Start of Authority (SOA) record
with its NXDOMAIN (and other) replies.
Note Responses to SOA queries are not supported prior to NSX Advanced Load Balancer release
18.2.5. See Support for SOA rdata Queries.
Features
An SOA record accompanies an NXDOMAIN (non-existent domain) response if the incoming query’s
domain is a subdomain of one of the configured authoritative domains in the DNS application
profile.
Negative caching, that is, caching the fact that a record does not exist, is determined by name
servers authoritative for a zone, which must include the Start of Authority (SOA) record when
reporting that no data of the requested type exists. The lesser of the minimum field value of the
SOA record and the TTL of the SOA itself is used to establish the TTL for the negative answer.
If the query’s FQDN matches an entry in the DNS table, but the query type is not supported by
default, the NSX Advanced Load Balancer SE generates a NOERROR response, optionally with
an SOA record if the domain matches a configured authoritative domain.
When an NXDOMAIN reply is appropriate for an FQDN that ends with one of the authoritative
domains, the value appearing in the Negative TTL field is incorporated into the attached SOA
record. This value is 30 seconds by default. The allowed range is 1 to 86400 seconds.
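The two TTL rules above can be expressed as small helpers. These are illustrative sketches, not product code:

```python
# Negative-caching rule: the negative answer's TTL is the lesser of the SOA
# record's minimum field and the SOA's own TTL.
def negative_ttl(soa_minimum, soa_ttl):
    return min(soa_minimum, soa_ttl)

# The configurable Negative TTL field defaults to 30 seconds and must fall
# within the allowed range of 1 to 86400 seconds.
def clamp_negative_ttl(configured=30):
    return max(1, min(86400, configured))
```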
An NSX Advanced Load Balancer DNS virtual service need not have a back-end DNS server pool.
If it does have a back-end pool, the NSX Advanced Load Balancer DNS Service Engines will only
load balance to it if the FQDN is not a subdomain of one of those configured in the Authoritative
Domain Names field. All are configured with Ends-With semantics.
The values in the Valid subdomains field are specified for validity checking and are thus
optional. If not configured, all subdomains of acme.com will be processed and looked up in the
DNS table.
n Specify subdomains of the authoritative domain for which the DNS can provide an IP address
(optional). If not configured, all subdomains will be processed and looked up in the DNS table.
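The Ends-With routing decision described above can be sketched as follows. All names are hypothetical, and the REFUSED default for the no-pool case is an assumption for illustration only:

```python
# Illustrative sketch: an FQDN ending with a configured authoritative domain is
# answered from the DNS table (or gets NXDOMAIN); anything else is load
# balanced to the back-end DNS server pool, if one exists.
def route_query(fqdn, authoritative_domains, dns_table, has_backend_pool):
    if any(fqdn == d or fqdn.endswith("." + d) for d in authoritative_domains):
        if fqdn in dns_table:
            return ("ANSWER", dns_table[fqdn])
        return ("NXDOMAIN", None)
    if has_backend_pool:
        return ("PROXY_TO_POOL", None)
    return ("REFUSED", None)  # hypothetical default when no pool is configured
```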
name_server: The <domain-name> of the name server that was the original or primary source
of data for this zone. This field is used in SOA records pertaining to all domain names specified
as authoritative domain names. If not configured, the domain name is used as the name server
in the SOA response.
admin_email: Email address of the administrator responsible for this zone. This field is used
in SOA records pertaining to all domain names specified as authoritative domain names. If not
configured, the default value hostmaster is used in SOA responses.
CLI Example
[admin:10-10-25-20]: applicationprofile> dns_service_profile admin_email john_doe@acme.com
[admin:10-10-25-20]: applicationprofile:dns_service_profile> name_server roadrunner.com
[admin:10-10-25-20]: applicationprofile:dns_service_profile> save
[admin:10-10-25-20]: applicationprofile> save
+---------------------------------+---------------------------------------------------------+
| Field                           | Value                                                   |
+---------------------------------+---------------------------------------------------------+
| uuid                            | applicationprofile-fdb6a5d6-bbf8-4f15-b851-f436b599992c |
| name                            | System-DNS                                              |
| type                            | APPLICATION_PROFILE_TYPE_DNS                            |
| dns_service_profile             |                                                         |
|   num_dns_ip                    | 1                                                       |
|   ttl                           | 30 sec                                                  |
|   error_response                | DNS_ERROR_RESPONSE_ERROR                                |
|   domain_names[1]               | sales.acme.com                                          |
|   domain_names[2]               | docs.acme.com                                           |
|   domain_names[3]               | support.acme.com                                        |
|   edns                          | False                                                   |
|   dns_over_tcp_enabled          | True                                                    |
|   aaaa_empty_response           | True                                                    |
|   authoritative_domain_names[1] | acme.com                                                |
|   authoritative_domain_names[2] | coyote.com                                              |
|   negative_caching_ttl          | 30 sec                                                  |
|   name_server                   | roadrunner.com                                          |
|   admin_email                   | john_doe@acme.com                                       |
|   ecs_stripping_enabled         | True                                                    |
| preserve_client_ip              | False                                                   |
| tenant_ref                      | admin                                                   |
+---------------------------------+---------------------------------------------------------+
[admin:10-10-25-20]: >
When a SOA request is made, the SOA response is sent in the answer section. For non-existent
records of domains for which the NSX Advanced Load Balancer is the authority, the response is
sent in the authority section.
The output of the configure virtualservice dns-vs command shows there is already an existing
static custom A record for the FQDN ggg.avi.local.
+----------------------------------+--------------------------------------+
| uuid | virtualservice-bc7c7fc6-583e-4335-8d33-ec4670771a85 |
| name | dns-vs |
| ip_address | 10.90.12.200 |
| enabled | True |
| services[1] | |
| port | 53 |
| enable_ssl | False |
| port_range_end | 53 |
| application_profile_ref | System-DNS |
| network_profile_ref | System-UDP-Per-Pkt |
| se_group_ref | Default-Group |
| east_west_placement | False |
| scaleout_ecmp | False |
| active_standby_se_tag | ACTIVE_STANDBY_SE_1 |
| flow_label_type | NO_LABEL |
| static_dns_records[1] | |
| fqdn[1] | ggg.avi.local |
| type | DNS_RECORD_A |
| ip_address[1] | |
| ip_address | 1.1.1.1 |
+----------------------------------+--------------------------------------+
: virtualservice> static_dns_records
New object being created
: virtualservice:static_dns_records> fqdn abc.avi.local
: virtualservice:static_dns_records> ip_address
New object being created
: virtualservice:static_dns_records:ip_address> ip_address 11.11.11.11
: virtualservice:static_dns_records:ip_address> save
: virtualservice:static_dns_records> type dns_record_a
: virtualservice:static_dns_records> save
: virtualservice> save
+----------------------------------+--------------------------------------+
| Field | Value |
+----------------------------------+--------------------------------------+
| uuid | virtualservice-bc7c7fc6-583e-4335-8d33-ec4670771a85 |
| name | dns-vs |
| ip_address | 10.90.12.200 |
| enabled | True |
| services[1] | |
| port | 53 |
| enable_ssl | False |
| port_range_end | 53 |
| application_profile_ref | System-DNS |
| network_profile_ref | System-UDP-Per-Pkt |
| se_group_ref | Default-Group |
| east_west_placement | False |
| scaleout_ecmp | False |
| active_standby_se_tag | ACTIVE_STANDBY_SE_1 |
| flow_label_type | NO_LABEL |
| static_dns_records[1] | |
| fqdn[1] | ggg.avi.local |
| type | DNS_RECORD_A |
| ip_address[1] | |
| ip_address | 1.1.1.1 |
| static_dns_records[2] | |
| fqdn[1] | abc.avi.local |
| type | DNS_RECORD_A |
| ip_address[1] | |
| ip_address | 11.11.11.11 |
+----------------------------------+--------------------------------------+
The above command sequence will create a static entry for the FQDN abc.avi.local on virtual
service dns-vs. This can also be confirmed from the GUI under Applications > Virtual Services >
DNS Records as illustrated below.
Clickjacking Protection
Clickjacking is a malicious technique of tricking a user into clicking on something different from
what the user perceives, thus potentially revealing confidential information or allowing others
to take control of their computer while clicking on seemingly innocuous objects, including web
pages.
$> shell
Login: admin
Password:
The following DataScript selectively determines if the referring site, determined by the referer
header, is allowed to embed this site within an iframe. The list of allowed referers is maintained
within a separate string group, which allows for an extensive, REST API updatable list without
directly modifying the rule with every update.
The following example involves creating a string group, then creating the DataScript which
references the string group:
http://www.avinetworks.com/
https://avinetworks.com/docs/
https://avinetworks.github.com
https://support.avinetworks.com
DataScript
Note If multiple queries were passed through to the upstream DNS server, the TCP
connection between the client and NSX Advanced Load Balancer follows the regular connection
close process.
Other than DNS query pipelining, DNS queries over TCP get the same treatment as DNS queries
over UDP as far as DNS behavior is concerned. Note that DNS over TCP is not limited to the
512-byte message size that applies to DNS over UDP.
For backward compatibility, by default, the option is TRUE for all clouds. However, if set to FALSE
for a given cloud, NSX Advanced Load Balancer DNS lookups of virtual services within the cloud
will return IP addresses as soon as the virtual services become operational. Virtual services enter
the operational state when one of the following conditions is true:
n No pool has been defined, but a return page has been defined. The classic use case for this is
the return of a static “under construction” page by a virtual service still in its infancy.
n The virtual service is an NSX Advanced Load Balancer DNS, whether or not it has a back-end
server pool defined for it.
Note Toggling the state-based-dns-registration option impacts virtual services that are
defined thereafter. It does not have a retroactive effect on virtual services that have already been
defined.
DNS virtual service on NSX Advanced Load Balancer primarily implements the following
functionality:
NSX Advanced Load Balancer DNS can host manual static DNS entries. For a given FQDN, you
can configure an A, AAAA, SRV, CNAME, or NS record to be returned.
n TXT Record: This is used to store text-based information of the outside domain for the
configured domain. This is useful in identifying ownership of a domain.
n MX Record: This is used in mail delivery based on the configured domain. This is useful in
redirecting email requests to the mail servers for a specified domain.
In the following example, the text favorite_protocol=DNS is used as a DNS TXT record for the
domain txtrec.acme.com.
[admin:controller]: virtualservice> static_dns_records index 1
[admin:controller]: virtualservice:static_dns_records> txt_records
New object being created
[admin:controller]: virtualservice:static_dns_records:txt_records> text_str
"favorite_protocol=DNS"
[admin:controller]: virtualservice:static_dns_records:txt_records> save
[admin:controller]: virtualservice:static_dns_records> save
[admin:controller]: virtualservice> save
The configured TXT record data now responds to the appropriate DNS query. Use the following
dig command to test the desired output.
Note The value for the priority field can vary from 0 to 65535.
DNS queries to the VIP should now serve the record data thus configured.
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;txtrec.acme.com. IN MX
;; ANSWER SECTION:
txtrec.acme.com. 30 IN MX 10 m1.acme.com.
aviuser@controller:~$
n IP group
n Configure valid DNS servers on the NSX Advanced Load Balancer Controller.
n In the web interface, navigate to Administration > Settings > DNS / NTP.
n Create or edit an existing pool, or create a new virtual service in basic mode. From the Servers
tab, select servers using the IP address, IP address range, or DNS name option. In the Server
IP address field, enter a valid domain name.
n If DNS cannot resolve the name, it is displayed in red. If DNS resolves the name to an IP
address, it will be listed below the field.
n If DNS resolves to multiple IP addresses, the list will be shown below though it is
potentially truncated.
If the DNS server returns an IP address that is already assigned to the server, there is no
change. However, the pool is updated in the following cases:
n If DNS resolution of a server hostname results in a different set of IP addresses than the set
received previously, the pool members corresponding to this hostname are updated with the
new set of IP addresses, and the older IP addresses are removed.
n If DNS resolution results in a timeout, or if there is a failure due to a temporary outage of
the DNS server, the old set of IP addresses is preserved.
n If DNS resolution results in an error (for example, non-existent domain or no answer from the
server), the hostname is mapped to the IP address “0.0.0.0.”
If a timeout or an error occurs, NSX Advanced Load Balancer will seek to resolve the
hostname in the next resolution interval.
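These update rules can be sketched as a single function; the outcome strings are hypothetical labels for the three cases above:

```python
# Illustrative sketch: compute the new member set for a hostname given the
# outcome of the latest DNS resolution attempt.
def update_members(current_ips, outcome, resolved_ips=None):
    if outcome == "ok":
        # A different answer set replaces the old members; an identical
        # set leaves the pool unchanged.
        return set(resolved_ips)
    if outcome == "timeout":
        # Temporary outage of the DNS server: preserve the old set.
        return set(current_ips)
    if outcome == "error":
        # Non-existent domain or no answer: map to the placeholder address.
        return {"0.0.0.0"}
    raise ValueError("unknown outcome: %s" % outcome)
```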
The NSX Advanced Load Balancer IPAM/DNS profile consists of both IPAM and DNS related
configuration in a single bundle. It is recommended to have both IPAM and DNS configuration in
a single profile for ease of management. However, configuration of one can exclude the other if
different profiles for IPAM and DNS are preferred.
For instance, vantage-ipam can be created without configuring any DNS domains and vantage-
dns can be created by using only domain names and without any networks/subnets.
(Table: IPAM and DNS support by cloud infrastructure; the per-cloud columns are not recoverable
here.)
Note
n When creating virtual services in OpenStack or AWS cloud, a separate configuration for IPAM
is not needed/allowed, since the cloud configuration has support for IPAM natively in NSX
Advanced Load Balancer.
n Default means NSX Advanced Load Balancer accepts the cloud’s IPAM/DNS support
without additional action on the part of the NSX Advanced Load Balancer admin.
n NSX Advanced Load Balancer supports Route 53 when AWS is the cloud provider
configuration in NSX Advanced Load Balancer.
n Not used means, although the cloud supports DNS, NSX Advanced Load Balancer does
not use it.
n When creating a virtual service in Linux Server cloud in AWS/ GCP environment, you can use
the cloud-native IPAM solution of AWS/ GCP.
n NSX Advanced Load Balancer DNS service can be used with all these clouds.
4 Fill in the displayed fields (detailed steps are provided in the sections below).
7 Select the IPAM and DNS providers from the drop-down list. Either one or both must be
selected, based on the provider(s) required. For instance, prior to version 18.2.5, if Infoblox
is the IPAM provider, it must be the DNS provider as well.
8 For east-west virtual services in this cloud, additionally select east-west IPAM and DNS
providers from the drop-down list. Either one or both can be selected. This is an optional
step.
9 Click Save.
n IPAM Configuration
n DNS Configuration
IPAM Configuration
This section explains the steps to configure IPAM.
Prerequisites
NSX Advanced Load Balancer allocates IP addresses from a pool of IP addresses within the
configured subnet, as listed in the following steps.
Procedure
4 Under IP Address Management, click on the required option for DHCP Enabled and IPv6
Auto Configuration.
b Enter the subnet address in IP Subnet field, in the following format: 9.9.9.0/24
c Click Add Static IP Address Pool to specify the pool of IP addresses. Specify the range of
the pool under IP Address Pool. NSX Advanced Load Balancer will allocate IP addresses
from this pool. For instance, 9.9.9.100-9.9.9.200.
d Click Save.
6 Click Save.
Note
n Virtual service creation will fail if the static IP address pool is empty or exhausted.
n For East West IPAM (applicable to container-based clouds, i.e., Mesos, OpenShift,
Kubernetes, Docker UCP, and Rancher), create another network with the appropriate
link-local subnet and a separate IPAM/DNS Profile.
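The pool-based VIP allocation described above (for instance, the range 9.9.9.100-9.9.9.200) can be sketched in Python. This is an illustrative sketch only, not the product's implementation; the class name and error handling are assumptions.

```python
import ipaddress

class StaticIPPool:
    """Illustrative sketch of static-pool VIP allocation (for example, 9.9.9.100-9.9.9.200)."""

    def __init__(self, start, end):
        self.start = int(ipaddress.IPv4Address(start))
        self.end = int(ipaddress.IPv4Address(end))
        self.allocated = set()

    def allocate(self):
        # Hand out the first free address; fail when the pool is exhausted,
        # mirroring how virtual service creation fails on an empty or exhausted pool.
        for addr in range(self.start, self.end + 1):
            if addr not in self.allocated:
                self.allocated.add(addr)
                return str(ipaddress.IPv4Address(addr))
        raise RuntimeError("static IP address pool exhausted")

pool = StaticIPPool("9.9.9.100", "9.9.9.200")
vip = pool.allocate()  # addresses are handed out from the low end of the range
```

A second call to allocate() would return the next free address in the range.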
Usable Networks
This feature enables assigning one or more of the networks created above to be default
usable networks, if no specific network/subnet is provided in the virtual service configuration.
An administrator can configure these networks, thus eliminating the need for a developer to
provide a specific network/subnet while creating a virtual service for the application.
DNS Configuration
This section explains how to configure DNS.
1 Navigate to Templates > IPAM/DNS Profiles and create a DNS profile by selecting the DNS
type in the Type drop-down list.
2 Add one or more DNS Service Domain names. NSX Advanced Load Balancer will be the
authoritative DNS server for these domains.
3 Configure a TTL value for all records for a particular domain, or leave the Default Record TTL
for all Domains field blank to accept the default TTL of 30 seconds.
4 Click Save.
After configuring a DNS profile (above) with a set of domains for which NSX Advanced Load
Balancer DNS will be serving records, configure a DNS virtual service in NSX Advanced
Load Balancer for applications to discover each other. This serves two purposes: DNS high
availability and interoperability with other DNS providers in the same cluster (for instance,
Mesos-DNS).
Note If the Controllers are running on Mesos nodes with Mesos DNS enabled, use port 8053.
d If the Controller is on an external network (requires routing for SE data traffic to reach the
Controller), then add a static route to the Controller network as shown below.
3 To add a static route (when the Controller is in an external network), navigate to Infrastructure
> Cloud Resources > Routing. Click Create and add a Default-Gateway IP address for the
cluster.
4 There are two ways to enable the NSX Advanced Load Balancer DNS service in your data center.
n Add DNS VIP (“10.160.160.100” as configured above) to the nameservers list in /etc/
resolv.conf on all nodes requiring service discovery. Create applications and verify
resolution works for the application’s FQDN by issuing dig app-name.domain anywhere
in the cluster.
n Add DNS VIP in the corporate DNS server as the nameserver for serving domain names
configured in the DNS profile above. Any requests to mycompany-cloud will be redirected
to and serviced by the NSX Advanced Load Balancer DNS service.
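For the first option, the nameserver entry added on each node would look like the following (using the DNS VIP 10.160.160.100 configured above):

```
# /etc/resolv.conf on each node requiring service discovery
nameserver 10.160.160.100
```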
IPAM only: With IPAM in play, selecting the Auto Allocate checkbox causes the Network for
VIP Address Allocation selection box to appear. Select a network or subnet from the displayed
list; in this case, either ipam-nw1 or ipam-nw2 can be selected. An address for the VIP will be
auto-allocated from the selected network (ipam-nw1).
DNS only: With DNS in play, no list of networks is offered. Instead, one of several domains is
offered. By selecting .test.avi from the list and accepting the default prefix (vs) in the Fully
Qualified Domain Name field, the user is specifying vs.test.avi as the final FQDN.
IPAM and DNS: With both IPAM and DNS available, you can both specify a network from which to
auto-allocate a VIP address and the FQDN (vs.test.avi) to which it will be associated.
Note
n If a DNS profile is configured under a cloud where the virtual service is being created, then
the virtual service's IP cannot be determined from a fully qualified domain name; however, you
can enter an IP address or select the Auto Allocate checkbox.
n In the case of Infoblox, if there is a list of usable_subnets/usable_domains configured, the
drop-down will consist only of those entries. If no such configuration is found, NSX Advanced
Load Balancer will display the entire list of available subnets/domains from Infoblox.
NSX Advanced Load Balancer communicates with OpenStack Neutron via APIs to provide IPAM
functionality. Currently, DNS services from OpenStack are not supported in this configuration.
This function provides support for cloud providers who host their Virtual Machines/ instances
on OpenStack (for example, Mesos nodes running on OpenStack instances). Therefore, this
configuration is irrelevant if you are using OpenStack cloud in NSX Advanced Load Balancer.
Configuring IPAM
The following are the steps to configure IPAM:
6 Click Save.
n SSL Certificates
n Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer System
n Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer
n SSL/TLS Profile
VMware strives to ensure the highest level of security, adhering to rigorous testing and validation
standards. NSX Advanced Load Balancer includes numerous security-related features to ensure
the integrity of the NSX Advanced Load Balancer system as well as the applications and services it
protects.
Industry Validation
Many of the largest and most trusted brands on the Internet have subjected NSX Advanced Load
Balancer to their own testing or testing by third-party companies such as Qualys and Rapid7.
This continuous testing ensures that, in addition to the proven success of NSX Advanced Load
Balancer in public and private networks, it has been thoroughly vetted by known industry security
leaders.
The following are a few examples of web UI and other attack vectors tested through external
penetration testing:
n SQL injection
n Credential disclosure
n Clickjacking
n Strong output validation to guard against disclosure of sensitive fields, such as passwords
and exported keys
Despite the best attempts to proactively resolve any potential threat before the code release, it
is essential to have a solid plan of action if a security hole is discovered in customer-deployed
software.
VMware strongly recommends key administrators subscribe to NSX Advanced Load Balancer's
mailing list. Security alerts are proactively sent to customers to notify them if an issue has been
found, and the potential mitigation required. Subscribe through the VMware customer portal.
VMware also publishes responses to Common Vulnerabilities and Exposures (CVEs) of note,
which include known vulnerabilities in NSX Advanced Load Balancer or software used by it,
such as SSL and Linux. VMware may also publish CVE responses to issues that do not impact
NSX Advanced Load Balancer to inform our customers that they are protected. These CVEs are
posted to the NSX Advanced Load Balancer Knowledge Base site but not sent proactively via
email alerts.
VMware Security Advisories document remediation for security vulnerabilities that are reported
in VMware products. Sign up on the right-hand side of this page to receive new and updated
advisories in e-mail.
See also:
n CVEs
n Protocol Ports Used by NSX Advanced Load Balancer for Management Communication
n Clickjacking Protection
SSL Certificates
NSX Advanced Load Balancer supports terminating client SSL and TLS connections at the virtual
service, which requires it to send clients a certificate that authenticates the site and establishes
secure communications.
n SSL/TLS profile
n The supported ciphers and cipher sizes include ssl_ciphers HIGH:!aNULL:!MD5:+SHA1; and
DHE 1024, 2048, and so on.
n SSL Certificate
n SSL certificates can be used to present to administrators connecting to the NSX Advanced
Load Balancer web interface or API, and also for the NSX Advanced Load Balancer
SE to present to servers when SE-to-server encryption is required with client (the SE)
authentication.
The SSL/TLS Certificates page at Templates > Security > SSL/TLS Certificates allows the import,
export, and generation of new SSL certificates or certificate requests. Newly created certificates
may be either self-signed by NSX Advanced Load Balancer or created as a Certificate Signing
Request (CSR) that must be sent to a trusted Certificate Authority (CA), which then generates a
trusted certificate.
n Creating a self-signed certificate generates both the certificate and a corresponding private
key.
n Imported existing certificates are not valid until a matching key has been supplied.
n NSX Advanced Load Balancer supports PEM and PKCS12 formats for certificates.
n Create: Opens a drop-down list of certificate types from which a new certificate can be created.
n Edit: Opens the Edit Certificate window. Only incomplete certificates that do not have a
corresponding key can be edited.
n Export: The down arrow icon exports a certificate and corresponding private key.
The table on this tab contains the following information for each certificate:
n Name: This displays the name of the certificate. Mousing over the name of the certificate
displays any intermediate certificate that has been automatically associated with the certificate.
n Status: This shows the status of the certificate. A green status indicates the certificate is good;
yellow, orange, or red indicates the certificate is expiring soon or has already expired; and gray
indicates the certificate is incomplete.
n Common Name: This displays the fully qualified name of the site to which the certificate
applies. This entry must match the hostname the client will enter in their browser in order for
the site to be considered trusted.
n Self Signed: This displays whether the certificate is self-signed by NSX Advanced Load
Balancer or signed by a Certificate Authority.
n Valid Until: This displays the date and time when the certificate expires.
Create Certificate
Navigate to Templates > Security > SSL/TLS Certificates. Click Create to open the New
Certificate (SSL/TLS) window.
When creating a new certificate, you can select any of the following certificates:
n Application Certificate: This certificate is used for normal SSL termination and decryption on
NSX Advanced Load Balancer. This option is also used to import or create a client certificate
for NSX Advanced Load Balancer to present to a backend server when it needs to authenticate
itself.
n Controller Certificate: This certificate is used for the GUI and API for the Controller
cluster. Once uploaded, select the certificate through Administration > Settings > Access
Settings.
n Type: Select the type of certificate to create from the drop-down list. The following are the
options:
n Self Signed: Quickly create a test certificate that is signed by NSX Advanced Load
Balancer. Client browsers will display an error that the certificate is not trusted. If the HTTP
application profile has HTTP Strict Transport Security (HSTS) enabled, clients will not be
able to access a site with a self-signed certificate.
n CSR: Create a valid certificate by first creating the certificate request. This request must be
sent to a certificate authority, which will send back a valid certificate that must be imported
back into NSX Advanced Load Balancer.
n Import: Import a completed certificate that was either received from a certificate authority
or exported from another server.
n Common Name: Specify the fully qualified name of the site, such as www.vmware.com. This
entry must match the hostname the client entered in the browser in order for the site to be
considered trusted.
n Enter the information required for the type of certificate you are creating:
n Self-Signed Certificates
n CSR Certificates
n Importing Certificates
Note OCSP stapling can be enabled and configured using the UI. For more information, see
Using OCSP Stapling through the UI.
Self-Signed Certificates
NSX Advanced Load Balancer can generate self-signed certificates. Client browsers do not trust
these certificates and will warn the user that the virtual service’s certificate is not part of a trust
chain.
Self-signed certificates are good for testing or environments where administrators control the
clients and can safely bypass the browser’s security alerts. Public websites should never use
self-signed certificates.
If you selected the Self Signed option from the Type drop-down list in the New Certificate
window, specify the following details:
n Organization: Company or entity registering the certificate, such as NSX Advanced Load
Balancer Networks, Inc. (optional).
n Organization Unit: Group within the organization that is responsible for the certificate, such as
Development (optional).
n Algorithm: Select either EC (Elliptic Curve) or RSA. RSA is older and considered less secure
than EC, but is more compatible with a broader array of older browsers. EC is new, less
expensive computationally, and generally more secure; however, it is not yet accepted by
all clients. NSX Advanced Load Balancer allows a virtual service to be configured with two
certificates at a time, one each of RSA and EC. This will enable it to negotiate the optimal
algorithm with the client. If the client supports EC, then the NSX Advanced Load Balancer will
prefer this algorithm, which gives the benefit of natively supporting Perfect Forward Secrecy
for better security.
n Key Size: Select the level of encryption to be used for handshakes, as follows:
Higher values may provide better encryption but increase the CPU resources required by both
NSX Advanced Load Balancer and the client.
CSR Certificates
The Certificate Signing Request (CSR) is the first of three steps involved in creating a valid SSL/TLS
certificate. The request contains the same parameters as a Self-Signed Certificate; however, NSX
Advanced Load Balancer does not sign the completed certificate. Rather, it must be signed by a
Certificate Authority that is trusted by client browsers.
If you selected the CSR option from the Type drop-down list in the New Certificate window,
specify the following details:
n Organization: Company or entity registering the certificate, such as NSX Advanced Load
Balancer Networks.
n Organization Unit: Group within the organization that is responsible for the certificate, such as
Development.
n Algorithm: Select either EC (Elliptic Curve) or RSA. RSA is older and considered less secure
than EC, but is more compatible with a broader array of older browsers. EC is new, less
expensive computationally, and generally more secure; however, it is not yet accepted by
all clients. NSX Advanced Load Balancer allows a Virtual Service to be configured with two
certificates at a time, one each of RSA and EC. This allows NSX Advanced Load Balancer to
negotiate the optimal algorithm with the client. If the client supports EC, then NSX Advanced
Load Balancer prefers this algorithm, which gives the added benefit of natively supporting
Perfect Forward Secrecy for better security.
n Key Size: Select the level of encryption to be used for handshakes, as follows:
Higher values provide better encryption but increase the CPU resources required by both NSX
Advanced Load Balancer and the client.
n After specifying the necessary details, click Save to generate the CSR.
n Forward the completed CSR to any trusted Certificate Authority (CA), such as Thawte or
Verisign, by selecting the Certificate Signing Request at the bottom left of the New Certificate
popup and then either copying and pasting it directly to the CA’s website or saving it to a file
for later use.
n Once the CA issues the completed certificate, you may either paste it or upload it into the
Certificate field at the bottom right of the New Certificate window.
Note It can take several days for the CA to return the finished certificate. Meanwhile, you can
close the New Certificate window to return to the SSL/TLS Certificates page. The new certificate
will appear in the table with the notation Awaiting Certificate in the Valid Until column.
When you receive the completed certificate, click the Edit icon for the certificate to open the Edit
Certificate window, paste the certificate, and click Save. NSX Advanced Load Balancer will
automatically generate a key from the completed certificate.
Import Certificates
You may directly import an existing PEM or PKCS12 SSL/TLS certificate into NSX Advanced Load
Balancer (such as from another server or load balancer). A certificate will have a corresponding
private key, which must also be imported.
Note NSX Advanced Load Balancer generates the key for self-signed or CSR certificates
automatically.
2 Click CREATE and select the certificate type such as Application Certificate.
3 Click Type and select Import. The certificate or private key can be imported by copying and
pasting or by uploading a file.
n PEM File: PEM files contain the certificate or private key in plain-text Base64-encoded format.
The certificate and private key can be provided in separate PEM files or combined in a single
PEM file.
n If certificate and private key are provided in a single PEM file, navigate to Paste Key text box
and add the private key by following any one of the methods listed below:
n Upload File: Click the Upload File button, select the PEM or PKCS12 file, then click the
Validate button to parse the file. If the upload is successful, the Key field will be
populated.
n Paste: Copy and paste a PEM key into the Key field. Be careful not to introduce extra
characters in the text, which can occur when using some email clients or rich text editors. If
you copy and paste the key and certificate together as one file, click the Validate button
to parse the text and populate the Certificate field.
n If the certificate and private key are provided in two separate PEM files, follow the steps below
to import each individually:
n Certificate - Add the certificate in the Paste Certificate text box by copying-pasting or file
upload, as described above.
n Key – Add the private key in the Paste Key field by copying-pasting or file upload.
n PKCS 12 File: PKCS12 files contain both the certificate and the key. PKCS12 is a binary format
that cannot be copied and pasted, so it can only be uploaded. Navigate to the Paste Key
field and follow the step below to import the PKCS #12 file.
n Upload File: Click the Import File button, select the PKCS12 file, and click the Validate button to
parse the file. If the upload is successful, both the Key and Certificate fields will be populated.
n Key Passphrase: You can also add and validate a key passphrase to encrypt the private key.
n Import: Select Import to finish adding the new certificate and key. The key will be embedded
with the certificate and treated as one object within the NSX Advanced Load Balancer UI.
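As a companion to the combined-PEM case above, the following sketch shows how a single PEM file separates into its certificate and private key blocks. The parsing approach and the placeholder Base64 content are illustrative assumptions, not the product's actual parser.

```python
import re

# A combined PEM file holds each item as Base64 text between BEGIN/END markers.
# The body content below is a placeholder, not real certificate or key material.
combined_pem = """-----BEGIN CERTIFICATE-----
MIIBplaceholdercertificatedata
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
MIIBplaceholderprivatekeydata
-----END PRIVATE KEY-----
"""

def split_pem(text):
    """Return a dict of PEM blocks keyed by label (CERTIFICATE, PRIVATE KEY, ...)."""
    blocks = {}
    # \1 backreference ensures the END label matches the BEGIN label of the same block.
    for m in re.finditer(r"-----BEGIN ([A-Z ]+)-----.*?-----END \1-----", text, re.S):
        blocks[m.group(1)] = m.group(0)
    return blocks

blocks = split_pem(combined_pem)
# blocks["CERTIFICATE"] and blocks["PRIVATE KEY"] can now be handled separately.
```

The same function also handles the two-file case, where each file yields a single block.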
Certificate Authority
Certificates require a trusted chain of authority to be considered valid. If the certificate used is
directly generated by a certificate authority that is known to all client browsers, no certificate
chain is required. However, if there are multiple levels required, an intermediate certificate may be
necessary. Clients will often traverse the path indicated by the certificate to validate on their own
if no chain certificate is presented by a site, but this adds additional DNS lookups and time for the
initial site load. The ideal scenario is to present the chain certs along with the site cert.
If a chain certificate, or rather a certificate for a certificate authority, is uploaded via Certificate
> Import on the certificates page, it will be added to the Certificate Authority section. NSX
Advanced Load Balancer will automatically build the certificate chain if it detects a next link in
the chain exists.
To validate a certificate that has been attached to a chain certificate, hover the cursor over the
certificate’s name in the SSL Certificates table at the top of the page. NSX Advanced Load
Balancer supports multiple chain paths. Each may share the same CA issuer, or they may be
chained to different issuers.
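The automatic chain-building behavior described above can be sketched as a walk up the issuer links until a self-issued root is reached. This is an illustrative sketch only; the certificate names and the dictionary store are hypothetical.

```python
# Map each known CA certificate's subject to its issuer (illustrative names only).
ca_store = {
    "Example Intermediate CA": "Example Root CA",
    "Example Root CA": "Example Root CA",  # a root certificate is self-issued
}

def build_chain(leaf_subject, leaf_issuer, store):
    """Walk issuer links upward until a self-issued root or a missing link ends the chain."""
    chain = [leaf_subject]
    issuer = leaf_issuer
    while issuer in store and issuer not in chain:
        chain.append(issuer)
        if store[issuer] == issuer:  # reached a self-signed root; chain is complete
            break
        issuer = store[issuer]
    return chain

chain = build_chain("www.example.com", "Example Intermediate CA", ca_store)
```

If the intermediate is missing from the store, the chain stops at the leaf, which corresponds to the case where the client must traverse the path on its own.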
SSL Profile
NSX Advanced Load Balancer supports the ability to terminate SSL connections between the
client and the virtual service, and to enable encryption between NSX Advanced Load Balancer
and the back-end servers. The SSL/TLS profile contains the list of accepted SSL versions and the
prioritized list of SSL ciphers.
Both an SSL/TLS profile and an SSL certificate must be assigned to the virtual service while
configuring it to terminate client SSL/TLS connections. If you prefer to encrypt traffic between
NSX Advanced Load Balancer and the servers, an SSL/TLS profile must also be assigned
to the pool. While creating a new virtual service via the basic mode, the default system SSL/TLS
profile is used automatically.
SSL termination can be performed on any service port. However, browsers assume the default
port is 443. The best practice is to configure a virtual service to accept both HTTP and
HTTPS by creating a service on port 80, by selecting the + icon to add an additional service port,
and then set the new service port to 443 with SSL enabled. A redirect from HTTP to HTTPS is
generally preferable, which can be done through a policy or by using the System-HTTP-Secure
application profile.
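The HTTP-to-HTTPS redirect described above amounts to returning a 301 response whose Location header rewrites the scheme. The following minimal sketch shows that logic under the assumption of standard ports (80 and 443); the function name is illustrative.

```python
def redirect_to_https(host, path):
    """Build the 301 redirect a port-80 service returns to push clients to HTTPS.

    Assumes HTTPS is on the default port 443, so no port needs to appear
    in the rewritten URL.
    """
    location = "https://" + host + path
    return 301, {"Location": location}

status, headers = redirect_to_https("www.example.com", "/index.html")
```

In the product this behavior comes from a policy or the System-HTTP-Secure application profile rather than custom code.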
Each SSL/TLS profile contains default groupings of supported SSL ciphers and versions that may
be used with RSA or an Elliptic Curve certificate, or both. Ensure that any new SSL/TLS profile
you create includes ciphers that are appropriate for the certificate type that will be used later.
The default SSL/TLS profiles included with NSX Advanced Load Balancer provide a broad range
of security. For instance, the Standard Profile works for typical deployments.
Creating a new SSL/TLS profile or using an existing profile entails various trade-offs between
security, compatibility, and computational expense. For instance, increasing the list of accepted
ciphers and SSL versions increases compatibility with clients while also potentially lowering
security.
n Delete: An SSL/TLS profile can only be deleted if it is not currently assigned to a virtual
service. An error message will indicate the virtual service referencing the profile. The default
system profiles can be modified, but not deleted.
The table on this tab provides the following information for each SSL/TLS profile:
n Accepted Versions: Select one or more SSL/TLS versions from the drop-down list to add
to this profile. Chronologically, TLS v1.0 is the oldest supported, and TLS v1.2 is the newest.
SSL v3.0 is no longer supported as of NSX Advanced Load Balancer v15.2. In general with SSL,
older versions have many known vulnerabilities while newer versions have many undiscovered
vulnerabilities. As with any security, NSX Advanced Load Balancer recommends diligence to
understand the dynamic nature of security and to ensure that NSX Advanced Load Balancer
is always up to date. Some SSL ciphers depend on the specific versions of SSL or TLS
supported. For more information, refer to OpenSSL.
n Accepted Ciphers: Enter the list of accepted ciphers in the Accepted Ciphers field. Each
cipher entered must conform to the cipher suite names listed at OpenSSL. Separate each
cipher with a colon. For example, AES:3DES means that this Profile will accept the AES and
3DES ciphers. When negotiating ciphers with the client, NSX Advanced Load Balancer will
prefer ciphers in the order listed. You may use an SSL/TLS profile with both an RSA and an
Elliptic Curve certificate. These two types of certificates can use different types of ciphers, so it
is important to incorporate ciphers for both types. Selecting only the most secure ciphers may
incur higher CPU load on NSX Advanced Load Balancer and may also reduce compatibility
with older browsers.
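A colon-separated OpenSSL cipher string such as the ones described above can be checked locally with Python's standard ssl module before placing it in a profile. This is a local sanity check only, not a product feature; it uses the HIGH:!aNULL:!MD5 string mentioned earlier in this chapter.

```python
import ssl

# Expand a colon-separated OpenSSL cipher string and inspect which suites it selects.
ctx = ssl.create_default_context()
ctx.set_ciphers("HIGH:!aNULL:!MD5")  # HIGH-strength suites, excluding anonymous and MD5

names = [c["name"] for c in ctx.get_ciphers()]
# An unknown or empty cipher string raises ssl.SSLError instead of returning a list.
```

Because the expansion depends on the local OpenSSL build, the exact suite list may differ between hosts, but excluded algorithms such as MD5 should never appear.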
PKI Profile
The Public Key Infrastructure (PKI) profile allows configuration of Certificate Revocation Lists
(CRLs), and the process for updating the lists. The PKI profile may be used to validate client
and server certificates.
n Client Certificate Validation: NSX Advanced Load Balancer supports the ability to validate
client access to an HTTPS site via client SSL certificates. Clients will present their certificate
when accessing a virtual service, which will be matched against a CRL. If the certificate is valid
and the clients are not on the list of revoked certificates, they will be allowed access to the
HTTPS site.
Client certificate validation is enabled via the HTTP profile’s Authentication tab. The HTTP profile
will refer to the PKI profile for specifics on the Certificate Authority (CA) and the CRL. A PKI
profile may be referenced by multiple HTTP profiles.
n Server Certificate Validation: Similar to validating a client certificate, NSX Advanced Load
Balancer can validate the certificate presented by a server, such as when an HTTPS health
monitor is sent to a server.
Server certificate validation uses the same PKI profile to validate the certificate presented.
Server certificate validation can be configured by enabling SSL within the desired pool, and then
specifying the PKI Profile.
n Delete: A PKI profile may only be deleted if it is not currently assigned to an HTTP profile. An
error message will indicate the HTTP profile referencing the PKI profile.
The table on this tab provides the following information for each PKI Profile:
n Certificate Revocation List: Revocation lists (CRLs) that have been attached to the PKI Profile.
n Ignore Peer Chain: When set to true, the certificate validation will ignore any intermediate
certificates that might be presented. The presented certificate is only checked against the
final root certificate for revocation. When this option is disabled (the default), the certificate
must present a full chain which is traversed and validated, starting from the client or server
presented cert to the terminal root cert. Each intermediate cert must be validated and
matched against a CA cert included in the PKI profile.
n Certificate Authority: Add a new certificate from a trusted Certificate Authority. If more than
one CA is included in the PKI profile, a client’s certificate needs to match only one of
them to be valid. A client’s certificate must match a CA as the root of the chain. If the
presented cert has an intermediate chain, then each link in the chain must be included here.
See Ignore Peer Chain (step above) to ignore intermediate validation checking.
n Client Revocation List: The CRL allows invalidation of certificates, or more specifically the
certificate’s serial number. The revocation list may be updated by manually uploading a new
CRL, or by periodically downloading from a CRL server. If a client or server certificate is found
to be in the CRL, the SSL handshake will fail, with a resulting log created to provide further
information about the handshake.
n Server URL: Specify a server to download CRL updates. Access to this server will be done
from the Controller IP addresses, which means they will require firewall access to this
destination. The server may be an IP address, or an FQDN along with an HTTP path, such
as www.avinetworks.com/crl.
n Refresh Time: After the elapsed period of time, NSX Advanced Load Balancer will
automatically download an updated version of the CRL. If a time is not specified, NSX
Advanced Load Balancer will download a new CRL at the current CRL’s lifetime expiration.
n Upload CRL File: Upload a CRL manually. Subsequent CRL updates can be done by
manually uploading new lists, or configuring the Server URL and Refresh Time to automate
the process.
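The CRL refresh scheduling described above (download after the refresh interval if one is set, otherwise at the current CRL's expiration) can be sketched as follows. The function name and signature are illustrative assumptions, not product APIs.

```python
from datetime import datetime, timedelta

def next_crl_update(now, crl_expiration, refresh_time=None):
    """Return when the next CRL download should happen.

    If a refresh interval is configured, download after that interval elapses;
    otherwise, download when the current CRL's lifetime expires.
    """
    if refresh_time is not None:
        return now + refresh_time
    return crl_expiration

now = datetime(2022, 1, 1, 12, 0)
expiry = datetime(2022, 1, 8, 12, 0)          # current CRL's lifetime expiration
when = next_crl_update(now, expiry, refresh_time=timedelta(hours=24))
```

With no refresh interval supplied, the same call falls back to the CRL's expiration time.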
Certificate Management
To create a new certificate, follow the steps below:
1 From the NSX ALB UI, navigate to Templates > Security > Certificate Management.
2 Click Create.
3 In the New Certificate Management screen, enter the Name of the profile.
4 In the Control Script field, select the required alert script configuration, as required.
Note Click the Create button in the drop-down to create a new Control Script (if required).
5 If the profile needs to pass some parameter values to the script, select Enable Custom
Parameters.
Note Re-upload the Control Script if the file has been modified after uploading, for the
changes to take effect.
7 Click Save.
Authentication Profile
The Authentication profile (“auth profile”) allows configuration of client authentication for a
virtual service via HTTP basic authentication.
The authentication profile is enabled via the HTTP basic authentication setting of a virtual service’s
Advanced Properties tab.
NSX Advanced Load Balancer also supports client authentication via SSL client certificates, which
is configured in the HTTP Profile’s Authentication section.
n Delete: An Auth profile may only be deleted if it is not currently assigned to a virtual service or
in use by NSX Advanced Load Balancer for administrative authentication.
The table on this tab provides the following information for each auth profile:
n LDAP Servers: Configure one or more LDAP servers by adding their IP addresses.
n LDAP Port: The service port to use when communicating with the LDAP servers. This is
typically 389 for LDAP or 636 for LDAPS (SSL).
n Secure LDAP using TLS: Enable startTLS for secure communications with the LDAP servers.
This may require a service port change.
n Base DN: LDAP Directory Base Distinguished Name. This is used as the default for settings
where a DN is required but not populated, such as the User or Group Search DN.
n Anonymous Bind: Minimal LDAP settings that are required to verify user authentication
credentials by binding to the LDAP server. This option is useful when you do not have
access to an administrator account on the LDAP server.
n User DN Pattern: The LDAP user DN pattern is used to bind an LDAP user after replacing
the user token with the real username. The pattern should match the user record path in
the LDAP server. For example, cn=,ou=People,dc=myorg,dc=com is a pattern where we
expect to find all user records under the ou “People”. When searching LDAP for a specific
user, the token is replaced with the username.
n User Token: The LDAP token is replaced with the real username in the user DN pattern.
For example, if the User DN Pattern is configured as “cn=-user-,ou=People,dc=myorg,dc=com”,
the token value should be -user-.
n User ID Attribute: LDAP user ID attribute is the login attribute that uniquely identifies a
single user record. The value of this attribute should match the username used at the login
prompt.
n User Attributes: LDAP user attributes to fetch on a successful user bind. These attributes
are used only for debugging purpose.
n Admin Bind DN: Full DN of LDAP administrator. Admin bind DN is used to bind to an
LDAP server. Administrators should have sufficient privileges to search for users under
user search DN or groups under group search DN.
n User Search DN: LDAP user search DN is the root of search for a given user in the
LDAP directory. Only user records present in this LDAP directory sub-tree are allowed for
authentication. Base DN value is used if this value is not configured.
n Group Search DN: LDAP group search DN is the root of search for a given group in the
LDAP directory. Only matching groups present in this LDAP directory sub-tree will be
checked for user membership. Base DN value is used if this value is not configured.
n User Search Scope: LDAP user search scope defines how deep to search for the user
starting from user search DN. The options are search at base, search one level below or
search the entire subtree. The default option is to search one-level deep under user search
DN.
n Group Search Scope: LDAP group search scope defines how deep to search for the group
starting from the group search DN. The default value is the entire subtree.
n User ID Attribute: LDAP user ID attribute is the login attribute that uniquely identifies a
single user record. The value of this attribute should match the username used at the login
prompt.
n Group Member Attribute: LDAP group attribute that identifies each of the group
members. For example, member and memberUid are commonly used attributes.
n User Attributes: LDAP user attributes to fetch on a successful user bind. These attributes
are only for debugging.
n Insert HTTP Header for Client UserID: Insert an HTTP header into the client request before it
is sent to the destination server. This field is used to name the header. The value is the
client’s User ID. The same User ID value is also used to populate the User ID field in the
virtual service’s logs.
n Required User Group Membership: The user must be a member of these groups. Each group is
identified by its DN. For example, ‘cn=testgroup,ou=groups,dc=LDAP,dc=example,dc=com’.
n Auth Credentials Cache Expiration: The maximum length of time a client’s authentication is
cached.
n Group Member Attribute Is Full DN: Group member entries contain full DNs instead of only
User ID attribute values.
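The User DN Pattern and User Token settings above combine at login time by simple token substitution. The following is an illustrative sketch (the function name and error handling are examples, not NSX Advanced Load Balancer internals):

```python
# Illustrative sketch (not NSX ALB internals) of how the User DN Pattern
# and User Token combine at login time.
def build_user_dn(dn_pattern: str, token: str, username: str) -> str:
    """Replace the user token in the DN pattern with the real username."""
    if token not in dn_pattern:
        raise ValueError("token not found in DN pattern")
    return dn_pattern.replace(token, username)

# With the pattern and token from the example above, logging in as "jdoe":
dn = build_user_dn("cn=-user-,ou=People,dc=myorg,dc=com", "-user-", "jdoe")
# dn == "cn=jdoe,ou=People,dc=myorg,dc=com"
```

The resulting DN is what is bound against the LDAP server for that login attempt.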
Additional Information
Changing the NSX Advanced Load Balancer Controller’s Default Certificate
The SSL/TLS protocol helps keep an internet connection secure and safeguards any sensitive
data sent between two machines, systems, or devices, preventing intruders from reading and
modifying any information transferred between them. Though an SSL/TLS certificate ensures
secure, encrypted connections between systems, there are some challenges around it.
Let’s Encrypt resolves these challenges. For more information, see Let’s Encrypt.
n Provisioning a DNS record under the domain (as per CSR’s common name).
The NSX Advanced Load Balancer supports HTTP-01 challenge for domain validation.
HTTP-01 Challenge
n Let’s Encrypt gives a token to the ACME client, which puts a file on the web server at http://
<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN>. This file contains the token and a
thumbprint of the account key.
n Once the ACME client tells Let’s Encrypt that the file is ready, Let’s Encrypt tries retrieving it
(potentially multiple times from multiple vantage points).
n If validation checks get the right responses from the web server, the validation is considered
successful, and a certificate is issued.
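The content of the challenge file in the steps above can be sketched as follows. This is a simplified illustration of the RFC 8555 key authorization; the RFC 7638 JWK thumbprint computed here is exact only when the JWK contains just its required members:

```python
import base64
import hashlib
import json

def key_authorization(token: str, account_jwk: dict) -> str:
    """Content of the file served at /.well-known/acme-challenge/<token>:
    '<token>.<base64url SHA-256 thumbprint of the account key>'."""
    # RFC 7638 thumbprint: SHA-256 over the canonical JSON of the key
    # (exact only when the JWK holds just its required members).
    canonical = json.dumps(account_jwk, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).digest()
    thumbprint = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return f"{token}.{thumbprint}"
```

A real ACME client (such as the one driven by the control script described later) performs this computation internally.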
Note
n As the Let’s Encrypt CA communicates on port 80 for the HTTP-01 challenge, port 80 must be
open on the firewall and the Let’s Encrypt CA must be able to reach the user’s network (the
network where the NSX Advanced Load Balancer system is deployed). The Let’s Encrypt CA
connects through the public network to the user’s NSX Advanced Load Balancer system on port 80.
n The script automatically creates a virtual service on port 80 for the respective virtual service
listening on port 443/custom SSL Port, only if there is no virtual service already listening on
port 80.
n Challenge Types
1 Get the script that assists in getting and renewing the certificate.
2 Add the script as a controller script on the NSX Advanced Load Balancer System.
6 Make sure that the FQDN resolves to a public IP and port 80 is open at the firewall.
8 Review the list of certificates. The Let’s Encrypt CA pushes the signed certificate.
1 Download the script available at letsencrypt_mgmt_profile. To download the file, click the
Raw option and copy the code.
2 In the NSX Advanced Load Balancer, navigate to Templates > Scripts > ControlScripts and
click Create.
3 Add a meaningful name and paste the code copied in step 1 in the Import or Paste Control
Script field. Save the configuration.
4 Configure a custom role by navigating to Administration > Roles. Ensure that read and write
access is enabled for Virtual Service, Application Profile, SSL/TLS Certificates and Certificate
Management Profile, for this role.
5 Add a user, enter all the required details, and select the configured custom role.
6 Navigate to Templates > Security > Certificate Management and click Create.
7 Enter a meaningful name, select the configured control script, select Enable Custom Script
Parameters, and add the custom parameters by clicking Add.
Note It is recommended not to use the admin account. Always add a user account that has a
custom role (with limited access).
8 Navigate to Templates > Security > SSL/TLS Certificates, click Create and select Application
Certificate.
9 Enter a meaningful name and common name, select the configured certificate management profile,
add all relevant details, and save the configuration.
Ensure that a virtual service is configured with the Application Domain Name as the Common Name
(CN) of the certificate; the CN of the certificate must match the Application Domain Name of the
virtual service. The FQDN (the CN of the certificate or the Application Domain Name of the
virtual service) must resolve to an IP address, and the domain must be reachable.
After a few minutes, review the list of certificates. You can see the certificate pushed by the
Let’s Encrypt CA. Associate the certificate with the configured virtual service.
Logs
To view the logs, enable non-significant logs for the configured virtual service and generate the
certificate.
Note The Let’s Encrypt CA imposes a rate limit, so ensure that renewal of the certificate does
not hit the rate limit.
Additional Information
n For more details regarding rate limit, see Rate Limits.
n For more details regarding SSL/TLS Certificate details, see SSL Certificates.
An SSL certificate can be revoked by the certificate authority (CA) before the scheduled expiration
date. This implies that the certificate can no longer be trusted. This process of invalidating an
issued SSL certificate before the expiry of its validity is called certificate revocation.
It is critical for browsers and clients to detect whether a certificate has been revoked and
display a security warning. Certificate revocation is checked using either the Certificate
Revocation List (CRL) or the Online Certificate Status Protocol (OCSP).
A CRL is a large list of certificates that have been revoked by the CA. When a client sends a
request for an SSL connection to a virtual service, the NSX Advanced Load Balancer checks the
CAs and CRL(s) in the PKI profile of the virtual service to verify whether the client certificate is still
valid. To know more, see Full-chain CRL Checking for Client Certificate Validation.
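Conceptually, the CRL check described above reduces to a membership test of the client certificate's serial number. The following toy sketch ignores CRL signatures and validity periods:

```python
# Toy model of the CRL check: a real CRL is a signed, dated structure;
# here it is reduced to a set of revoked serial numbers.
def is_revoked(cert_serial: int, crl_serials: set) -> bool:
    return cert_serial in crl_serials

crl = {0x1A2B, 0x3C4D}
# is_revoked(0x1A2B, crl) -> True: the connection should be refused.
# is_revoked(0x9999, crl) -> False: the certificate is still trusted.
```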
Downloading and updating the long list of serial numbers that have been revoked can be
cumbersome. In the OCSP method, the client queries the status of a single certificate instead of
downloading and parsing an entire list. This results in less overhead on the client and the
network. However, since OCSP requests are sent for each certificate, they can be an overhead for
the OCSP responder in case of high traffic.
OCSP Stapling
OCSP stapling, defined in RFC 6066 as the TLS Certificate Status Request extension, is a method
for checking revoked certificates. In this method, when a certificate has to be verified, the
server (rather than the browser) issues an OCSP request with the serial number of the certificate
to the OCSP responder. The OCSP responder looks up the CA database using the serial number,
fetches the revocation status of the certificate corresponding to the serial number, and returns
it through a signed OCSP response.
The client does not have to communicate with the CA server each time to get the certificate
status. NSX Advanced Load Balancer retrieves the information and serves it to the client on
receiving a request.
In NSX Advanced Load Balancer, OCSP stapling can be enabled only on Application certificates
and Root/Intermediate certificates. In the response, the OCSP status of only the Application
certificate is stapled to the certificate in the TLS/SSL handshake. OCSP stapling can be enabled
and configured through the NSX Advanced Load Balancer UI and CLI.
Time Parameters
Note If for any reason, the OCSP request cannot be processed, OCSPErrorStatus tracks the
status errors to include failures in the OCSP workflow.
OCSP stapling can be enabled through the NSX Advanced Load Balancer UI for Root/Intermediate
CA Certificates and Application Certificates.
Note
n OCSP stapling can be enabled only on Root/Intermediate certificates and Application
certificates, and not on controller certificates.
n In case of application certificates, OCSP stapling is currently supported in the CSR and import
modes. OCSP stapling cannot be enabled for self-signed certificates.
n From the NSX Advanced Load Balancer UI, navigate to Templates > Security > SSL/TLS
Certificates.
n Import the file or paste the details in the Upload or Paste Certificate File field.
n Enter Max Tries to define the number of times the failed job is scheduled (with the Fail Job
Interval). After the maximum number of tries is exhausted, the job is scheduled with the
regular OCSP job (Frequency Interval).
n Choose Failover or Override for Responder URL Action, to either failover or override the AIA
extension contained in the SSL/TLS certificate of the OCSP responder.
n Click Validate.
n If a certificate is revoked, the status of the certificate is marked as Revoked in the NSX
Advanced Load Balancer UI.
n The SSL score of all certificates with the status Revoked or Issuer Revoked is marked 0.
n Virtual service faults are added to alert the users when the certificate is either Revoked or
Issuer Revoked.
+----------------------------------+-----------------------------------------------------------+
| Field                            | Value                                                     |
+----------------------------------+-----------------------------------------------------------+
| uuid                             | sslkeyandcertificate-380d9e69-4f04-4519-8151-c89ff2d7bb6f |
| name                             | test-cert                                                 |
| type                             | SSL_CERTIFICATE_TYPE_VIRTUALSERVICE                       |
| certificate                      |                                                           |
| version                          | 2                                                         |
| serial_number                    | 15597070261980010830                                      |
| self_signed                      | True                                                      |
| issuer                           |                                                           |
| common_name                      | test.example.com                                          |
| email_address                    | usera@abc.com                                             |
| organization_unit                | L7                                                        |
| organization                     | abc                                                       |
| locality                         | Bangalore                                                 |
| state                            | Karnataka                                                 |
| country                          | IN                                                        |
| distinguished_name               | C=IN, ST=Karnataka, L=Bangalore, O=VMware, OU=L7,         |
|                                  | CN=test.example.com, emailAddress=user@abc.com            |
| enable_ocsp_stapling             | True                                                      |
| ocsp_config                      |                                                           |
| ocsp_req_interval                | 21600 sec                                                 |
| ocsp_resp_timeout                | 60 sec                                                    |
| responder_url_lists[1]           | http://ocsp.example.com/                                  |
| url_action                       | OCSP_RESPONDER_URL_FAILOVER                               |
| failed_ocsp_jobs_retry_interval  | 30 sec                                                    |
| tenant_ref                       | admin                                                     |
+----------------------------------+-----------------------------------------------------------+
Note If a successful OCSP response is received, the next_update value and the
ocsp_req_interval value are compared and the lesser value of the two is used to schedule the
next OCSP Request.
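The scheduling rule in the note above can be expressed as a one-line helper (the function name is illustrative, not part of the product):

```python
# The next OCSP request is due after the smaller of the time remaining
# until the responder's next_update and the configured ocsp_req_interval.
def next_ocsp_request_delay(seconds_until_next_update: int,
                            ocsp_req_interval: int) -> int:
    return min(seconds_until_next_update, ocsp_req_interval)

# With the 21600-sec interval from the example configuration above, a
# next_update one day away still yields a request every 21600 seconds.
```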
On receiving the OCSP requests, the CA servers or responders respond with the certificate status.
The OCSP responses cannot be forged, as they are directly signed by the CA. The NSX Advanced
Load Balancer Controller verifies the signature of the OCSP response. If the response verification
fails, the response is dropped and failover mechanisms are triggered to send further requests.
n Good
A positive response is received to the status inquiry. So the certificate with the requested
certificate serial number is not revoked within the validity interval.
n Revoked
n Unknown
The responder does not recognize the certificate being requested. This can be because the
request indicates an unrecognized issuer that is not served by this responder.
Navigate to Templates > Security > SSL/TLS Certificates to view the status of SSL/TLS
certificates.
Application Logs
App logs are generated with the following significance.
To control the significant logs for the above scenarios, configure analytics profile as shown in the
following example.
| exclude_revoked_ocsp_responses_as_error         | True                                                  |
| exclude_stale_ocsp_responses_as_error           | True                                                  |
| exclude_issuer_revoked_ocsp_responses_as_error  | True                                                  |
| exclude_unavailable_ocsp_responses_as_error     | True                                                  |
| hs_security_ocsp_revoked_score                  | 0.0                                                   |
| enable_adaptive_config                          | True                                                  |
+-------------------------------------------------+-------------------------------------------------------+
[admin:controller-vmdc2]: analyticsprofile> no exclude_revoked_ocsp_responses_as_error
+-------------------------------------------------+-------------------------------------------------------+
| Field                                           | Value                                                 |
+-------------------------------------------------+-------------------------------------------------------+
| uuid                                            | analyticsprofile-1775513e-bbf5-47ce-a067-42237c91315d |
| name                                            | System-Analytics-Profile                              |
| tenant_ref                                      | admin                                                 |
| exclude_revoked_ocsp_responses_as_error         | False                                                 |
| exclude_stale_ocsp_responses_as_error           | True                                                  |
| exclude_issuer_revoked_ocsp_responses_as_error  | True                                                  |
| exclude_unavailable_ocsp_responses_as_error     | True                                                  |
| hs_security_ocsp_revoked_score                  | 0.0                                                   |
| enable_adaptive_config                          | True                                                  |
+-------------------------------------------------+-------------------------------------------------------+
[admin:controller-vmdc2]: analyticsprofile> hs_security_ocsp_revoked_score 3.0
+-------------------------------------------------+-------------------------------------------------------+
| Field                                           | Value                                                 |
+-------------------------------------------------+-------------------------------------------------------+
| uuid                                            | analyticsprofile-1775513e-bbf5-47ce-a067-42237c91315d |
| name                                            | System-Analytics-Profile                              |
| tenant_ref                                      | admin                                                 |
| exclude_revoked_ocsp_responses_as_error         | False                                                 |
| exclude_stale_ocsp_responses_as_error           | True                                                  |
| exclude_issuer_revoked_ocsp_responses_as_error  | True                                                  |
| exclude_unavailable_ocsp_responses_as_error     | True                                                  |
| hs_security_ocsp_revoked_score                  | 3.0                                                   |
| enable_adaptive_config                          | True                                                  |
+-------------------------------------------------+-------------------------------------------------------+
n exclude_revoked_ocsp_responses_as_error
n exclude_stale_ocsp_responses_as_error
n exclude_issuer_revoked_ocsp_responses_as_error
n exclude_unavailable_ocsp_responses_as_error
These fields are enabled by default. When set to True, the corresponding logs are excluded from
significant logs. To include the logs in significant logs, set the fields to False.
In the event of a security compromise, even if the attacker has the key, they must supply an OCSP
staple when using the certificate. If not, the browser rejects the certificate. If an OCSP staple is
included, the response identifies the certificate as revoked, and the browser rejects the
certificate. This mitigates the security issues of OCSP stapling.
Caveat
OCSP Stapling v2, described in RFC 6961, defines a new extension, status_request_v2, that
enables the client to request the status of all certificates in the chain. Currently, NSX Advanced
Load Balancer does not support multiple certificate status requests. When a client sends a
client hello with the “status_request_v2” extension, NSX Advanced Load Balancer returns the
certificate status of only the application certificate directly attached to the virtual service.
NSX Advanced Load Balancer can validate SSL certificates presented by clients against a trusted
certificate authority (CA) and a configured certificate revocation list (CRL). Certificate
information is passed to the server through various headers, configurable through additional
options. For certificate authentication, an HTTP application profile and an associated public key
infrastructure (PKI) profile have to be configured.
Starting with NSX Advanced Load Balancer release 18.2.3, this has been extended to L4 SSL/TLS
applications (via the NSX Advanced Load Balancer CLI).
2 Click Create to create a new HTTP application profile with the type HTTP. For more
information, see Configuring HTTP Profile.
HTTP Headers
NSX Advanced Load Balancer optionally inserts the client’s certificate, or parts of it, into a new
HTTP header to be sent to the server. To insert multiple headers, use the plus icon. These
inserted headers are in addition to any headers added or manipulated by the more granular HTTP
policies or DataScripts.
n HTTP Header Name : Name of the headers to be inserted into the client request that is sent to
the server.
n HTTP Header Value : Used with the HTTP Header Name field, this field determines which
field of the client certificate to insert into the HTTP header sent to the server. Several
options are more general, such as the SSL Cipher, which lists the ciphers negotiated between
the client and NSX Advanced Load Balancer. These generic headers can be used for non-
client certificate connections by setting the Validation Type to Request.
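As a rough illustration of the header insertion described above (the header names and dictionary shapes here are hypothetical examples, not the actual Service Engine implementation):

```python
# Hypothetical sketch of inserting client-certificate fields as HTTP headers
# before the request is proxied; header names and dict shapes are examples.
def insert_cert_headers(request_headers: dict, cert_fields: dict,
                        header_map: dict) -> dict:
    """header_map maps header names to client-certificate field names."""
    out = dict(request_headers)  # do not mutate the original request
    for header_name, field in header_map.items():
        if field in cert_fields:
            out[header_name] = cert_fields[field]
    return out

hdrs = insert_cert_headers(
    {"Host": "app.example.com"},
    {"common_name": "jdoe", "serial": "1234"},
    {"X-SSL-Client-CN": "common_name"})
# hdrs now also carries the X-SSL-Client-CN header with the value "jdoe".
```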
Parameter Settings
The enable_chunk_merge parameter takes one of two values:
n When set to True (the default), if the back-end server sends chunked HTTP responses, the
NSX Advanced Load Balancer SE merges chunks that are received together into a single
chunk before forwarding the response to the client. If the server is slow, the SE does not wait
for the server to send all the chunks.
n For example, if the server has seven chunks, but the SE has received only the first three
chunks when it is scheduled to process the response, it merges them into one big chunk and
forwards it to the client. The next time, if the SE has received all four of the remaining
chunks, it merges them into one and forwards it to the client. Chunk merging has been the
behavior of NSX Advanced Load Balancer from the beginning.
n When set to False, in case of a chunked HTTP response, if response buffer mode is not
configured, the NSX Advanced Load Balancer SE forwards the chunks received from the
server as is. In addition, the response body, which is in chunked mode, is not cached. If
the cache is configured, the saved cache entry needs to be cleared.
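The merging behavior described above can be modeled with a toy function (this is not Service Engine code; it only mimics the batching effect):

```python
# Toy model (not Service Engine code) of chunk merging: chunks that have
# already arrived when the response is processed are coalesced into one.
def merge_available_chunks(arrival_batches):
    """arrival_batches: lists of byte chunks, grouped by the moment the
    SE is scheduled to process the response."""
    return [b"".join(batch) for batch in arrival_batches if batch]

# Seven chunks arriving as a batch of three and then a batch of four are
# forwarded as two merged chunks:
merged = merge_available_chunks([[b"c1", b"c2", b"c3"],
                                 [b"c4", b"c5", b"c6", b"c7"]])
```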
UI Configuration
The Enable Chunk Merge option appears under the General tab of the Application Profile editor.
| x_forwarded_proto_enabled             | False                       |
| spdy_enabled                          | False                       |
| spdy_fwd_proxy_mode                   | False                       |
| post_accept_timeout                   | 30000 milliseconds          |
| client_header_timeout                 | 10000 milliseconds          |
| client_body_timeout                   | 30000 milliseconds          |
| keepalive_timeout                     | 30000 milliseconds          |
| client_max_header_size                | 12 kb                       |
| client_max_request_size               | 48 kb                       |
| client_max_body_size                  | 0 kb                        |
| max_rps_unknown_uri                   | 0                           |
| max_rps_cip                           | 0                           |
| max_rps_uri                           | 0                           |
| max_rps_cip_uri                       | 0                           |
| ssl_client_certificate_mode           | SSL_CLIENT_CERTIFICATE_NONE |
| websockets_enabled                    | True                        |
| max_rps_unknown_cip                   | 0                           |
| max_bad_rps_cip                       | 0                           |
| max_bad_rps_uri                       | 0                           |
| max_bad_rps_cip_uri                   | 0                           |
| keepalive_header                      | False                       |
| use_app_keepalive_timeout             | False                       |
| allow_dots_in_header_name             | False                       |
| disable_keepalive_posts_msie6         | True                        |
| enable_request_body_buffering         | False                       |
| enable_fire_and_forget                | False                       |
| max_response_headers_size             | 48 kb                       |
| respond_with_100_continue             | True                        |
| hsts_subdomains_enabled               | True                        |
| enable_request_body_metrics           | False                       |
| fwd_close_hdr_for_bound_connections   | True                        |
| max_keepalive_requests                | 100                         |
| disable_sni_hostname_check            | False                       |
| reset_conn_http_on_ssl_port           | False                       |
| http_upstream_buffer_size             | 0 kb                        |
| enable_chunk_merge                    | False                       |
| preserve_client_ip                    | False                       |
| preserve_client_port                  | False                       |
| tenant_ref                            | admin                       |
+---------------------------------------+-----------------------------+
PKI Profile
The PKI profile contains the configured certificate authorities and CRL. A PKI profile is necessary
if the Validation Type is set to Request or Required.
The PKI profile supports configuring and updating the client certificate revocation lists. The PKI
profile is used to validate clients or server certificates.
n Client Certificate Validation: NSX Advanced Load Balancer validates client access to an
HTTPS virtual service using client SSL certificates. Clients present their certificate while
accessing the virtual service. This is matched against a CRL. If the certificate is valid and
the client is not on the list of revoked certificates, the client is allowed to access the HTTPS
virtual service. Client certificate validation is enabled through the HTTP profile’s Authentication
tab. The HTTP profile references the PKI profile for specifics on the CA and the CRL. A single
PKI profile can be referenced by multiple profiles.
n Server Certificate Validation: NSX Advanced Load Balancer can validate the certificate
presented by a server, such as when an HTTPS health check is sent to a server. Server
certificate validation also uses a PKI profile to validate the presented certificate. Server
certificate validation can be configured by enabling SSL within the desired pool and then
specifying the PKI profile.
n Ignore Peer Chain : This option is disabled by default. When disabled, the certificate must
present a full chain, which is traversed and validated, starting from the client- or server-
presented certificate to the terminal root certificate. If this option is enabled, NSX Advanced
Load Balancer ignores any certificate chain the peer/client presents. Instead, the root and
intermediate certificates configured in the Certificate Authority section of the PKI profile are
used to verify trust of the client’s certificate. Each intermediate certificate must be validated
and matched against a CA certificate included in the PKI profile.
n Host Header Check : If enabled, this option ensures that the virtual service’s VIP field, when
resolved using DNS, matches the domain name field of the certificate presented by a server
to NSX Advanced Load Balancer when back-end SSL is enabled. If the server’s certificate does
not match, it is considered insecure and marked down.
n Enable CRL Check : If this option is selected, the client’s certificate is verified against the
certificate revocation list.
Certificate Authority
Add a new certificate from a trusted Certificate Authority. If more than one CA is included in the
PKI profile, a client’s certificate should match one of them to be considered valid.
A client’s certificate must match the CA as the root of the chain. If the presented certificate has
an intermediate chain, each link in the chain must be included here. Enable Ignore Peer Chain to
skip intermediate validation checking.
n Leaf Certificate CRL validation only : When enabled, NSX Advanced Load Balancer validates
only the leaf certificate against the CRL. The leaf is the next certificate in the chain up from
the client certificate. A chain can consist of multiple certificates. To validate all certificates
against the CRL, disable this option. Disabling this option means you must upload the
CRLs issued by each certificate in the chain. Even if one CRL is missing, the validation process
fails.
n Server URL: Specify a server from which CRL updates can be downloaded. Access to this
server is done from the NSX Advanced Load Balancer Controller IP addresses, which
means they require firewall access to this destination. The CRL server can be identified
by an IP address or a fully qualified domain name (FQDN), along with an HTTP path, such as
https://www.avinetworks.com/crl.
n Refresh Time : After this period of time elapses, NSX Advanced Load Balancer automatically
downloads an updated version of the CRL. If no time is specified, NSX Advanced Load
Balancer downloads a new CRL at the current CRL’s lifetime expiration.
n Upload Certificate Revocation List File : Navigate to the CRL file to upload. Subsequent CRL
updates can be done by manually uploading newer lists, or configuring the Server URL and
Refresh Time to automate the process.
The Controllers store the keys locally in a database in which sensitive information is encrypted.
The keys are encrypted during backups, provided a passphrase is included during the backup
process. All sensitive fields, such as passwords and private keys, are encrypted before being
stored in the database, as follows:
User passwords are hashed using the PBKDF2 (Password-Based Key Derivation Function 2)
algorithm with a SHA256 hash. All other passwords (for example, cloud credentials) are also
encrypted using this method.
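The PBKDF2-SHA256 hashing described above can be sketched with Python's standard library. The iteration count and salt size here are illustrative, not the Controller's actual values:

```python
import hashlib
import os
from typing import Optional

# Sketch of PBKDF2-SHA256 password hashing; the iteration count and salt
# size are illustrative, not the Controller's actual values.
def hash_password(password: str, salt: Optional[bytes] = None,
                  iterations: int = 100_000):
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

# Verification reuses the stored salt and compares digests:
salt, digest = hash_password("s3cret")
```

Because the salt is random, the same password hashed twice with fresh salts yields different digests, which is why the salt must be stored alongside the digest.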
As the Controllers store the system configuration, including the private SSL keys, it is critical to
ensure proper security. Numerous options exist to lock down the access levels of administrators,
ensure strong passwords, and limit administrative source IP address ranges.
For administrators having full access to the certificates and keys, an attempt to export a private
key will be noted in the Operations > Events > Config Audit log. Using role-based access, export
ability should be restricted to the fewest number of administrators possible.
Thales Luna (formerly SafeNet Luna) HSM & Externally Stored Keys
NSX Advanced Load Balancer supports external hardware security modules and certificate stores
to guarantee a higher level of physical security. The original key is stored on the external system,
with the public key available to NSX Advanced Load Balancer. It supports the following types of
external key stores:
n Thales nShield
Note The UI or the CLI can be used when client-facing ports are SSL-terminated. To make
client-facing ports communicate in the clear while server-side ports are SSL-encrypted, the CLI
must be used.
n Navigate to the Virtual Service Basic or Advanced Setup wizard and select the SSL
application type by clicking SSL for Application Type. The default value for Port is 443 and can
be changed. The required certificate can be self-signed or one of the other certificates visible
in the drop-down menu.
n Edit the settings for the virtual service if the system-standard defaults for the application
(that is, the TCP/UDP and SSL profiles) need to be changed.
n To enable the PROXY protocol for your Layer 4 SSL virtual service, or to tune the TCP
connection rate limiter settings, use the application profile editor.
Note You have the option to enable either version 1 or version 2 of the PROXY protocol.
When a virtual service is configured with both EC and RSA certificates, NSX Advanced Load
Balancer will prioritize the EC certificates.
n If a client supports ciphers from only one certificate type, NSX Advanced Load Balancer uses
that certificate type.
n If the client supports ciphers for both certificates and the virtual service is configured with both
certificates, then the EC certificate will be chosen.
The priority of EC over RSA is not configurable. NSX Advanced Load Balancer prefers EC
over RSA due to EC’s significantly faster performance during handshake negotiation. On average,
processing for ECC is about four times less CPU-intensive than RSA.
EC also tends to provide significantly higher security. A 256-bit EC certificate (the minimum length
supported) is roughly equivalent to a 3k RSA cert. Additionally, EC cryptography enables Perfect
Forward Secrecy (PFS) with significantly less overhead.
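The EC-over-RSA preference described above can be summarized in a small decision function; the inputs are simplified flags rather than real TLS handshake state:

```python
# Simplified decision function for the preference described above; inputs
# are plain flags rather than real TLS handshake state.
def choose_certificate(client_supports_ec: bool, client_supports_rsa: bool,
                       has_ec_cert: bool, has_rsa_cert: bool):
    if client_supports_ec and has_ec_cert:
        return "EC"   # EC is always preferred when usable
    if client_supports_rsa and has_rsa_cert:
        return "RSA"
    return None       # no usable certificate; handshake fails
```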
For more information about the basics of setting up an SSL/TLS profile, see the SSL/TLS
Profile article.
How It Works
At its simplest, an SSL/TLS virtual service must be configured with some base SSL profile. That
profile might be identical to the system default profile shipped with every NSX Advanced Load
Balancer release image, or a custom-defined profile. However, the key point is that it must exist.
Optionally, to treat part of the client community in a customized fashion, an authorized user can
define and associate one or more profile selectors with the virtual service. Their presence triggers
an algorithm within NSX Advanced Load Balancer that, based on the client's IP address, may
cause the Service Engine to obey profile parameters other than those defined in the base SSL
profile.
Figure: Clients connect over SSL/TLS to a virtual service that is associated with profile selectors
1 through n.
1 A client IP list (one per selector), consisting of:
a An IP group reference : points at one or more IP groups and collectively identifies all the
clients to which the SSL profile selector applies.
b A match criterion : governs whether a client's presence in or absence from the list causes it
to take on the selector's SSL profile parameters.
2 An SSL profile reference (exactly one per selector) : an SSL profile with parameters such as
SSL/TLS version, SSL timeout, ciphers, and so on.
Figure: Each SSL profile reference points to an SSL profile that contains the profile name,
SSL/TLS version, SSL timeout, ciphers, and other parameters.
Algorithm
n If one or more profile selectors are associated with the virtual service, NSX Advanced Load
Balancer checks each of them and attempts to match the client's IP address. Because the
selector list is ordered, evaluation may yield different results depending on the sequence.
n If, after checking the selectors, no SSL profile has been assigned to the client, the base SSL
profile is applied.
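This selection logic can be sketched in Python. This is a simplified illustration only; the IP groups, selector structure, and profile names below are hypothetical stand-ins for objects configured on the Controller (the group names match the example that follows).

```python
import ipaddress

# Hypothetical stand-ins for objects configured on the Controller: two IP
# groups and one ordered profile selector list.
IP_GROUPS = {
    "Internal": ["10.0.0.0/8"],
    "Ip-grp-2": ["192.168.10.0/24"],
}

SELECTORS = [
    # Evaluated in order; the first matching selector wins.
    {"match_criteria": "IS_IN",
     "group_refs": ["Internal", "Ip-grp-2"],
     "ssl_profile_ref": "sslprofile-2"},
]

def client_in_groups(client_ip, group_refs):
    """True if the client address falls inside any referenced IP group."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net)
               for ref in group_refs
               for net in IP_GROUPS[ref])

def select_ssl_profile(client_ip, base_profile="System-Standard"):
    """Walk the ordered selectors; fall back to the base SSL profile."""
    for sel in SELECTORS:
        matched = client_in_groups(client_ip, sel["group_refs"])
        if sel["match_criteria"] == "IS_NOT_IN":
            matched = not matched
        if matched:
            return sel["ssl_profile_ref"]
    return base_profile
```

Because the list is evaluated in order, reordering the selectors can change which profile a client receives, exactly as noted above.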
The client IP list is the combination of the pre-existing IP groups named Internal and Ip-grp-2.
These two groups and the ssl_profile_ref (named sslprofile-2 in this example) must be configured
beforehand according to the requirements of the traffic flow and SSL algorithms.
Note Some output lines have been removed for the sake of brevity.
+------------------------------------+------------------------------------+
| Field | Value |
+------------------------------------+-----------------------------------+
| uuid | virtualservice-08ba76c3-faab-430d-86db-
a4d9703effa4 |
| name | vs-1 |
| enabled | True |
| services[1] | |
| port | 80 |
| enable_ssl | False |
| port_range_end | 80 |
| services[2] | |
| port | 443 |
| enable_ssl | True |
| port_range_end | 443 |
| application_profile_ref | System-HTTP |
| network_profile_ref | System-TCP-Proxy |
| pool_ref | vs-1-pool |
| se_group_ref | Default-Group |
| network_security_policy_ref | vs-vs-1-Default-Cloud-ns |
| http_policies[1] | |
| index | 11 |
| http_policy_set_ref | vs-1-Default-Cloud-HTTP-Policy-Set-0 |
| ssl_key_and_certificate_refs[1] | System-Default-Cert |
| ssl_profile_ref | System-Standard |
.
.
.
| vip[1] | |
| vip_id | 1 |
| ip_address | 10.160.221.250 |
| enabled | True |
| auto_allocate_ip | False |
| auto_allocate_floating_ip | False |
| avi_allocated_vip | False |
| avi_allocated_fip | False |
| auto_allocate_ip_type | V4_ONLY |
| vsvip_ref | vsvip-vs-1-Default-Cloud |
| use_vip_as_snat | False |
| traffic_enabled | True |
| allow_invalid_client_cert | False |
+------------------------------------+-----------------------------------------------------+
+------------------------------------+-----------------------------------+
| Field | Value
+------------------------------------+---------------------------------+
| uuid | virtualservice-08ba76c3-faab-430d-86db-a4d9703effa4 |
| name | vs-1
| enabled | True
| services[1] |
| port | 80
| enable_ssl | False
| port_range_end | 80
| services[2] |
| port | 443
| enable_ssl | True
| port_range_end | 443
| application_profile_ref | System-HTTP
| network_profile_ref | System-TCP-Proxy
| pool_ref | vs-1-pool
| se_group_ref | Default-Group
| network_security_policy_ref | vs-vs-1-Default-Cloud-ns
| http_policies[1] |
| index | 11
| http_policy_set_ref | vs-1-Default-Cloud-HTTP-Policy-Set-0
| ssl_key_and_certificate_refs[1] | System-Default-Cert
| ssl_profile_ref | System-Standard
.
.
.
| vip[1] |
| vip_id | 1
| ip_address | 10.160.221.250
| enabled | True
| auto_allocate_ip | False
| auto_allocate_floating_ip | False
| avi_allocated_vip | False
| avi_allocated_fip | False
| auto_allocate_ip_type | V4_ONLY
| vsvip_ref | vsvip-vs-1-Default-Cloud
| use_vip_as_snat | False
| traffic_enabled | True
| allow_invalid_client_cert | False
| ssl_profile_selectors[1] |
| client_ip_list |
| match_criteria | IS_IN
| group_refs[1] | Internal
| group_refs[2] | Ip-grp-2
| ssl_profile_ref | sslprofile-2
+------------------------------------+------------------------------------+
[admin:10-160-3-76]: >
Note
1 A virtual service’s SSL profile selector client IP list does not (yet) support implicit IP
configurations. Please use group UUIDs.
2 An SSL profile selector configuration requires the virtual service either to have at least one
SSL-enabled service port or to be a child virtual service.
3 A child VS will not inherit its parent virtual service’s SSL profile selectors; just the parent’s
default SSL profile.
Additional Information
n DataScript: NSX Advanced Load Balancer SSL Client Cert Validation
SSL/TLS Profile
The NSX Advanced Load Balancer supports the ability to terminate SSL connections between the
client and the virtual service, and to enable encryption between NSX Advanced Load Balancer and
the back-end servers.
The Templates > Security > SSL/TLS Profile contains the list of accepted SSL versions and the
prioritized list of SSL ciphers. To terminate client SSL connections, both an SSL profile and an SSL
certificate must be assigned to the virtual service. To also encrypt traffic between NSX Advanced
Load Balancer and the servers, an SSL profile must be assigned to the pool. When creating a new
virtual service via the basic mode, the default system SSL profile is automatically used.
Each SSL profile contains default groupings of supported SSL ciphers and versions that may
be used with RSA certificates, elliptic curve certificates, or both. Ensure that any new profile created
includes ciphers that are appropriate for the certificate type that will be used. The default SSL
profile included with NSX Advanced Load Balancer is optimized for security, rather than just
prioritizing the fastest ciphers.
Creating a new SSL/TLS profile or using an existing profile entails various trade-offs between
security, compatibility, and computational expense. For example, increasing the list of accepted
ciphers and SSL versions increases the compatibility with clients, while also potentially lowering
security.
Note
n NSX Advanced Load Balancer can accommodate a broader set of security needs within a
client community by associating multiple SSL profiles with a single virtual service and having
the Service Engines choose which to use based on the client's IP address. For more
information, refer to the Client-IP-based SSL Profiles article.
n A virtual service created without an SSL profile defaults to the System-Standard-PFS SSL
profile. Selecting unsafe ciphers displays an error message.
The table provides the following information for each SSL/TLS profile:
n Accepted Ciphers : List of ciphers accepted by the profile, including the prioritized order.
n Click Create to open the profile creation window. By default, the TLS 1.3 option is
unchecked.
n Checking the TLS 1.3 option causes the Early Data option to appear.
UI Fields
This section explains UI Fields.
n Type : Choose Application if the profile is to be associated with a virtual service, or System if
the profile is to be associated with the Controller.
n Cipher : Ciphers may be chosen from the default List view or the String view. The String view
is for compatibility with OpenSSL-formatted cipher strings. When using the String view, NSX
Advanced Load Balancer does not provide an SSL rating or score for the selected ciphers.
n SSL Rating : This is a simple rollup of the security, compatibility, and performance of the
ciphers chosen from the list. Often ciphers may have great performance but very low security.
The SSL rating attempts to provide some insight into the outcome of the selected ciphers.
NSX Advanced Load Balancer Networks may change the score of certain ciphers from time
to time, as new vulnerabilities are discovered. This does not impact or change an existing
NSX Advanced Load Balancer deployment, but it does mean the score for the profile, and
potentially the security penalty of a virtual service, may change to reflect the new information.
n Version : NSX Advanced Load Balancer supports versions SSL 3.0, TLS 1.0 and newer. The
older SSL 2.0 protocol is no longer supported. Starting with release 18.2.6, TLS 1.3 protocol is
supported. Users must select one or more of the three supported TLS 1.3 ciphers in the list of
ciphers or configure them in the Ciphersuites option under the String view.
n Send “close notify” alert : Gracefully inform the client of the closure of an SSL session. This is
similar to TCP doing a FIN/ACK rather than an RST.
n Prefer client cipher ordering : Off by default, set this to On if you prefer the client’s ordering.
n Enable SSL Session Reuse : On by default, this option persists a client’s SSL session across
TCP connections after the first occurs.
n SSL Session Expiration : Set the length of time in seconds before an SSL session expires.
n Ciphers : When negotiating ciphers with the client, NSX Advanced Load Balancer will give
preference to ciphers in the order listed. The default cipher list prioritizes elliptic curve with
PFS, followed by less secure, non-PFS and slow RSA-based ciphers. Enable, disable, and
reorder the ciphers via the List view. In the String view, manually enter the cipher strings
via the OpenSSL format, which is documented on the OpenSSL.org website. You may use
an SSL/TLS profile with both an RSA and an elliptic curve certificate. These two types of
certificates can use different types of ciphers, so it is important to incorporate ciphers for both
types in the profile if both types of certs may be used. As with all security, NSX Advanced Load
Balancer Networks recommends diligence to understand the dynamic nature of security and to
ensure that NSX Advanced Load Balancer is always up to date.
n Ciphersuites : This option exclusively configures TLS 1.3 protocol ciphers. Currently, NSX
Advanced Load Balancer supports the following:
n TLS_AES_128_GCM_SHA256
n TLS_AES_256_GCM_SHA384
n TLS_CHACHA20_POLY1305_SHA256
Note These ciphers will only work with the TLS 1.3 protocol. The old ciphersuites cannot be used
with the TLS 1.3 protocol.
n Early Data : This option enables TLS-v1.3-terminated applications to send application data
(referred to here as early data or 0-RTT data) without having to first wait for the TLS
handshake to complete. This saves one full round-trip time between the client and server
before the client requests can be processed. SSL session reuse must be enabled to use the
Early Data option.
Note Starting with release 21.1, NSX Advanced Load Balancer supports configuring Elliptic
Curve Cryptography (ECC) cipher suites in the SSL profile.
Elliptic Curve Cryptography is a public-key cryptosystem that offers equivalent security with
a smaller key size than currently prevalent cryptosystems. This results in conserving power,
memory, bandwidth, and the resultant computational cost.
n secp256r1 (23)
n secp384r1 (24)
n secp521r1 (25)
n x25519 (29)
n x448 (30)
To configure the EC named curve, the Named Curve (TLS Supported Groups) field,
ec_named_curve, is introduced in the SSL profile configuration. The secp256r1 (23), secp384r1 (24),
and secp521r1 (25) curve groups are supported by default.
sslprofile> save
Signature Algorithms
This section describes the steps to configure signature algorithms.
The SSL client uses the “signature_algorithms” extension to indicate to the server which
signature/hash algorithm pairs should be used in digital signatures.
n md5(1)
n sha1(2)
n sha224(3)
n sha256(4)
n sha384(5)
n sha512(6)
n rsa
n dsa
n ecdsa
In NSX Advanced Load Balancer, the signature algorithms set by a client are used directly in the
supported signature algorithm in the client hello message.
The supported signature algorithms set by a server are not sent to the client but are used to
determine the set of shared signature algorithms and their order.
The client authentication signature algorithms set by a server are sent in a certificate request
message if client authentication is enabled. Otherwise, they are unused. Similarly, client
authentication signature algorithms set by a client are used to determine the set of client
authentication shared signature algorithms.
Signature algorithms will neither be advertised nor used if the security level prohibits them.
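The way shared signature algorithms are derived, as described above, can be illustrated with a short sketch. The algorithm names below are illustrative TLS 1.3-style identifiers, not output from NSX Advanced Load Balancer.

```python
def shared_signature_algorithms(client_offered, server_supported):
    """The client's advertised algorithms are intersected with the
    server's configured set; the server's list determines the order of
    the shared result, as described in the text above."""
    offered = set(client_offered)
    return [alg for alg in server_supported if alg in offered]
```

The same intersection logic applies to the client authentication signature algorithms exchanged in the certificate request message.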
| type                            | SSL_VERSION_TLS1                  |
| accepted_versions[2]            |                                   |
| type                            | SSL_VERSION_TLS1_1                |
| accepted_versions[3]            |                                   |
| type                            | SSL_VERSION_TLS1_2                |
| accepted_ciphers                | ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA |
| -------------------- Truncated Output ----------------------       |
| prefer_client_cipher_ordering   | False                             |
| enable_ssl_session_reuse        | True                              |
| ssl_session_timeout             | 86400 sec                         |
| type                            | SSL_PROFILE_TYPE_APPLICATION      |
| ciphersuites                    | TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256 |
| enable_early_data               | False                             |
| ec_named_curve                  | auto                              |
| signature_algorithm             | auto                              |
| tenant_ref                      | admin                             |
+---------------------------------+-----------------------------------+
sslprofile> save
An SSL handshake can fail with a no shared cipher error in cases such as the following:
n The client sends ciphers that are not configured in the virtual service's SSL profile.
n The client sends ciphers that do not match the authentication type of the certificate on the
virtual service.
n For example, the client sends ECDSA ciphers when the virtual service has only an RSA
certificate configured.
n The client sends ciphers that do not match the SSL/TLS protocol.
n For example, the client sends the AES256-GCM-SHA384 TLS 1.2 cipher when the virtual
service does not have the TLS 1.2 protocol enabled (even though the SSL profile has this
cipher enabled).
When any one of these issues occurs, it is helpful to see which ciphers the client sent as part
of the client hello. The necessary changes can then be made to the virtual service or the client
configuration to fix the problem.
A client may send anywhere between 180 and 200 ciphers in a client hello, and the server picks
one of them.
The cipher selection depends on various factors, such as the ciphers and protocols enabled and
the type of certificate configured on the virtual service. When the virtual service is unable to select
a single cipher, the SSL connection fails with the error SSL Error: No Shared Cipher. In such
a case, NSX Advanced Load Balancer records all the ciphers that the client sent in the
application log.
A no shared ciphers SSL error can be fixed by making the necessary changes to the virtual
service or the client configuration according to the ciphers sent by the client.
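As a rough illustration of why the error occurs: cipher selection requires at least one cipher that appears both in the client's offer and in the virtual service's configured preference list. Real selection also filters by certificate type and negotiated protocol version, as described above; this sketch models only the intersection step.

```python
def find_shared_cipher(client_offered, vs_cipher_list):
    """Return the first cipher in the virtual service's preference
    order that the client also offered, or None, which corresponds to
    the 'SSL Error: No Shared Cipher' case described above."""
    offered = set(client_offered)
    for cipher in vs_cipher_list:
        if cipher in offered:
            return cipher
    return None
```

For example, a client offering only ECDSA ciphers against a virtual service whose list contains only RSA ciphers yields no shared cipher.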
Strength
SSL ciphers are defined by the Templates > Security > SSL/TLS Profile. Within a profile, there are
two modes for configuring ciphers, List view and String view.
SSL Rating
Modifying or reordering the list will alter the associated SSL Rating in the top right corner of the
SSL / TLS Profile edit window. This provides insight into the encryption performance, security, and
client compatibility of the selected ciphers. This ranking is only made against the validated ciphers
from the List View mode.
List View
The default cipher list view shows common ciphers in order of priority. Enable or disable ciphers
via the checkbox, and reorder them via the up/down arrows or drag and drop. List view provides
a static list of validated ciphers. If alternate ciphers not listed are required, consider using String
View. The ciphers included in this list are considered reasonably strong. If a cipher is later deemed
to be insecure or less secure, its security score rating will drop to indicate it has fallen out of favor.
String View
The second cipher configuration mode allows accepted ciphers to be added as a string, similar to
the OpenSSL syntax for viewing and setting ciphers. For this mode, NSX Advanced Load Balancer
accepts all TLS 1.0 - 1.2 and Elliptic Curve ciphers from
https://www.openssl.org/docs/man1.0.2/apps/ciphers.html. In this mode, the administrator must
determine whether the enabled ciphers are secure. Consider setting strong security by employing
a known cipher suite, such as "HIGH".
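Because Python's ssl module wraps OpenSSL, it can be used to see what a given OpenSSL-format cipher string expands to. This is an illustration of the string syntax only, not of NSX Advanced Load Balancer itself.

```python
import ssl

# Expand an OpenSSL-format cipher string into the concrete cipher list
# it selects. "HIGH:!aNULL:!MD5" is a common "strong ciphers" string:
# HIGH-strength suites, minus anonymous and MD5-based ones.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!aNULL:!MD5")
enabled = [c["name"] for c in ctx.get_ciphers()]
```

Inspecting `enabled` shows exactly which ciphers a given string admits, which is useful before pasting a string into the String view.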
For example: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256
Configuration
NSX Advanced Load Balancer uses the concept of parent and child virtual services for SNI virtual
hosting. When the option for virtual hosting virtual service is selected on the create (through
advanced mode) or edit action, the virtual service participates in the virtual hosting. The virtual
hosting virtual service must be configured as either a parent or a child virtual service.
Figure: A parent virtual service listening on 10.1.1.1:443 with SNI child virtual services for
aaa.avi.com, bbb.avi.com, and ccc.avi.com.
The parent virtual service governs the networking properties used to negotiate TCP and SSL with
the client. It can also be a catch-all, if the domain name requested by the client does not exist or
does not match with one of the configured child virtual services.
Network: The listener IP address, service port, network profile, and SSL profile. No networking
properties are configured on the child virtual services.
Pool: Optionally specify a pool for the parent virtual service. The pool will only be used if no child
virtual service matches a client’s requested domain name.
SSL Certificate: An SSL certificate may be configured, which can be either a wildcard certificate
or one for a specific domain name. The parent's SSL certificate is used if the client does not send
a TLS SNI hostname extension, or if the client's TLS SNI hostname does not match any of the
child virtual services' domain names. If an SSL certificate with a specific domain name is returned
to the client, for example when sending a friendly error message, the client will receive an SSL
name mismatch message. It is therefore advisable to use a wildcard certificate on the parent.
The parent virtual service receives all new client TCP connection handshakes, which are reflected
in the statistics. Once a child virtual service is selected, the connection is internally handed off to a
child virtual service. So subsequent metrics such as packets, concurrent connections, throughput,
requests, logs and other statistics will only be recorded on the child virtual service. Similarly, the
child virtual service will not have logs for the initial TCP or SSL handshakes, such as the SSL
version mismatch errors, which are recorded at the parent virtual service.
The parent delegates to the child during the SNI phase of the TLS handshake.
If there is an SNI message received from the client and the SNI hostname matches the configured
hostnames for any of the child virtual services, the connection switches to the child virtual service
at that point. Also, all the SSL (certificate etc.) and L7 state (policies, DataScripts etc.) of the child
virtual service is applied to the HTTP request. Subsequently, the log ends up on the child virtual
service.
If the switch to the child virtual service did not happen, the connection/request is handled on the
parent virtual service. So the SSL and L7 state of the parent gets applied. The default certificate on
the parent is presented to the client. Once the request is received and parsed, you can close the
client-side TCP connection through no pool, pool with close action, or security policy. If you have
a wildcard certificate on the parent that covers all the subdomains of the child virtual services, you
can serve that from the parent and then close the connection as mentioned above.
Selection of a child virtual service is solely based on the FQDNs (Fully Qualified Domain Name)
configured on the SNI child. Ensure that there are no duplicates or overlaps among the child
FQDNs. Common Name or Subject Alternate Name in the virtual service certificate has no role to
play in the selection of children for SNI traffic.
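The FQDN-based child selection described above can be sketched as follows. The child names and the mapping itself are hypothetical; in a real deployment the mapping comes from the vh_domain_name values configured on the child virtual services.

```python
# Hypothetical FQDN-to-child mapping, standing in for the configured
# vh_domain_name values of the SNI child virtual services.
CHILDREN = {
    "aaa.avi.com": "child-vs-a",
    "bbb.avi.com": "child-vs-b",
}

def select_virtual_service(sni_hostname, parent="parent-vs"):
    """Selection is an exact match on the configured child FQDNs; a
    missing or unmatched SNI hostname stays on the parent, which acts
    as the catch-all."""
    if not sni_hostname:
        return parent
    return CHILDREN.get(sni_hostname.lower(), parent)
```

Note that the certificate's Common Name or Subject Alternate Name plays no part in this lookup; only the configured FQDNs do.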
The vh_domain_name of the SNI child virtual service must be explicitly added to the parent virtual
service VIP's list for DNS records to be populated correctly.
Once a child is selected (using the server name TLS extension of the client hello), its certificate is
served on the connection, and the host header of subsequent HTTP requests must match the
FQDN of one of the children; failing this, the connection fails with a virtual host error in the
application log.
If the connection fails to select a child, it is served by the parent virtual service.
If no child matches the client request, the parent’s SSL certificate and pool are used. In cases
where you have a TLS SNI parent with a TLS/SSL profile that supports TLS versions 1, 1.1, and 1.2,
and a TLS child which has only TLS 1.2 configured, the child will continue to use TLS 1.2.
In such a setup where the parent and child virtual services use different SSL profiles, the flow for
SSL handshake is as follows:
1 The client opens a TCP connection, which is accepted by the parent virtual service.
2 Client Hello -> parent virtual service. The client hello contains the SNI, so NSX Advanced
Load Balancer selects the child virtual service.
3 SSL profile of the child is used. Child virtual service SSL profile is used to allow or deny based
on the SSL/TLS version and select a cipher.
4 Child virtual service responds with a server Hello that includes the cipher and the child
certificate.
Logs
The application logs option on the user interface displays SNI hostname information along with
other SSL-related information. The SNI information in the application logs provides more insight
into the incoming requests and also helps in troubleshooting various issues. When the child
virtual service sees an SSL connection with SNI header, the hostname in the SNI header is
recorded in the application log along with the SSL version, PFS, and cipher related information.
To check for SNI-enabled virtual service related logs, navigate to Applications > Virtual Service,
select the desired virtual service, and navigate to Logs.
Figure 7-1.
Note When the Host header of a client request does not match the FQDN configured on the child
virtual service, the request would fail with an application log on the child instead of being proxied
using parent virtual service’s default Pool.
A proxy identifies the client IP from the L3 header of the incoming connection. However, this is
not always the actual client IP address. When there are proxies between the actual client and
NSX Advanced Load Balancer, each intermediary proxy adds the source IP address of the
incoming connection to the "X-Forwarded-For" header, and replaces the source IP in the L3
header with its own IP address before forwarding the request to the destination.
The true client IP feature enables fetching the actual client IP address from the "X-Forwarded-For"
header or a user-defined header, tracking the actual client IP address in logs, and configuring
policies such as HTTP Security and HTTP Request based on the true client IP address.
n The actual client IP address can be shared with the actual server (NSX Advanced Load
Balancer can add the identified actual client IP as X-Forwarded-For, and the server can be
configured to parse it).
n You can configure HTTP policy, SSO policy, etc., based on the actual client IP address.
n Source IP is always the IP address from the IP header of the downstream connection
(incoming).
n Client IP is derived based on user configuration. It can be derived from the X-Forwarded-For
header or a user-specified header, or it can be the same as the Source IP.
For L4 applications, Source-IP and Client-IP would always be the same. In the case of HTTP
applications, it can be different. By default, the feature is disabled. After enabling true client IP,
specify the desired header from where the client IP should be fetched.
If the user doesn’t define any header, it would be fetched from the X-Forwarded-For header. The
specified header needs to have a format of a comma-separated list of IP addresses as a header
value. If the format is not such, it will be ignored.
Currently, only one header can be configured to fetch the client IP.
1 Access the CLI by logging into the NSX Advanced Load Balancer Shell.
* Headers (optional), define the desired HTTP header from where the client IP needs to be fetched.
If not specified, by default, “X-Forwarded-For” is configured.
* Direction (optional), define the direction to count the IPs in the specified header value. By
default, the value is Left.
* Index_in_header (optional), define the position in the configured direction in the specified
header’s value. By default, the value is 1.
Define the parameters for True_Client_IP (header name, direction, and index in the header) as
shown below:
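The header, direction, and index semantics described above can be sketched in Python. This is a simplified illustration only; the actual parsing is performed by the Service Engine, and malformed headers are ignored as noted earlier.

```python
def true_client_ip(header_value, direction="LEFT", index=1):
    """Pick one address from a comma-separated header value such as
    X-Forwarded-For. direction chooses which end of the list to count
    from, and index is the 1-based position from that end."""
    ips = [ip.strip() for ip in header_value.split(",") if ip.strip()]
    if not ips or index < 1 or index > len(ips):
        return None  # malformed header or out-of-range index: ignored
    if direction.upper() == "RIGHT":
        ips.reverse()
    return ips[index - 1]
```

With the defaults (direction Left, index 1), the first, leftmost address in X-Forwarded-For is taken as the true client IP.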
Use cases
The following features can be configured to use actual client IP:
n HTTP Policies
The following features are affected after enabling True Client IP:
n Application Logs
n Analytics Policy
n RUM/ Client Insights Sampling – Client IP address to check when inserting RUM script
n IP Reputation
Upgrade
By default, True Client IP is disabled. Hence while upgrading the NSX Advanced Load Balancer,
all instances where client IP is referred to will refer to Source IP, and no change in behavior is
evident.
If True Client IP is enabled later, all the instances that refer to client IP will refer to True Client IP.
To use Source IP specifically in any such places, explicitly change the configuration.
Examples
The examples table lists, for each True Client IP configuration, the header, direction, and index
parameters, the request details, and the resulting behaviour.
Certificates
The certificate must be issued by a publicly trusted Certificate Authority (included with the
operating system), or the CA's root certificate must be installed on the client device.
n RSA 2k or higher
Cipher Support
All enabled ciphers must support PFS. Disable all but the following ciphers in the cipher List
view. If only an EC or only an RSA certificate is in use, it does no harm to enable only the
compatible ciphers. If both an EC and an RSA certificate will be used (best practice), leave all of
the following ciphers enabled.
ECC Ciphers
n TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
n TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
n TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
n TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
n TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
n TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
RSA Ciphers
n TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
n TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
n TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
n TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
n TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
You can create an instance of this object, an individual certificate management profile, which
provides a way to configure a path to a certificate script, along with the set of parameters the
script needs (CSR, common name, and others) to integrate with a certificate management service
within the customer's internal network. The script itself is left opaque by design to accommodate
the various certificate management services different customers may have.
For SSL certificate configuration, you need to select CSR and fill in the necessary fields for the
certificate, and select the certificate management profile to which this certificate is bound. The
NSX Advanced Load Balancer Controller will then use the CSR and the script to obtain the
certificate and also renew the certificate upon expiration. As a part of the renewal process, a
new key pair is generated and a certificate corresponding to this is obtained from the certificate
management service.
Without the addition of this automation, the process for sending the CSR to the external CA, then
installing the signed certificate and keys, must be performed by the NSX Advanced Load Balancer
user.
1 Prepare a Python script that defines a certificate_request() method. The method must
accept the following input as a dictionary:
a CSR
The specific parameter values to be passed to the script are specified within the certificate
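A minimal skeleton of such a script might look like the following. Only the certificate_request() entry point and the CSR input are taken from the text above; the common_name key and the return format are assumptions made for this sketch, and the CA call is a placeholder for a customer-specific service.

```python
def certificate_request(csr_params):
    """Entry point invoked by the Controller with a dictionary of
    arguments. Only the 'csr' input is documented above; the other key
    and the return format are assumptions for this sketch."""
    csr = csr_params["csr"]                      # PEM-encoded CSR
    common_name = csr_params.get("common_name", "unknown")
    assert csr.startswith("-----BEGIN CERTIFICATE REQUEST-----")
    # A real script would submit the CSR to the internal CA service
    # here and return the certificate the CA issues. This stub only
    # returns a PEM-shaped marker string.
    return ("-----BEGIN CERTIFICATE-----\n"
            f"(placeholder certificate for {common_name})\n"
            "-----END CERTIFICATE-----")
```

The opaque-script design means the body of this function is entirely up to the customer; only the entry point and its inputs are fixed by the profile.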
management profile.
Sensitive Parameters
For parameters that are sensitive, for instance, passwords, the values can be hidden. Marking a
parameter sensitive prevents its value from being displayed in the web interface or being passed
by the API.
Dynamic Parameter
The value for a certificate management parameter can be assigned within the profile or within
individual CSRs.
n If the parameter value is assigned within the profile, the value applies to all CSRs generated
using this profile.
n To dynamically assign a parameter’s value, indicate that the parameter is dynamic within the
certificate management profile. This leaves the parameter’s value unassigned. In this case, the
dynamic parameter’s value is assigned when creating an individual CSR using the profile. The
parameter value applies only to that CSR.
Procedure
1 Navigate to Templates > Security > Certificate Management and click Create.
2 Enter a name for the new profile.
3 Select an alert script configuration object type for the certificate management profile from the
drop-down list.
4 If the profile needs to pass parameter values to the script, select the Enable Custom
Parameters checkbox and specify their names and values.
In this example, the location (URL) of the CA service and the login credentials for the service
are passed to the script. For parameters that are sensitive, such as passwords, select the
Sensitive checkbox. Marking a parameter sensitive prevents its value from being displayed in
the web interface or being passed by the API. For parameters that are to be dynamically
assigned during CSR creation, select the Dynamic checkbox. This leaves the parameter
unassigned within the profile.
5 Click Save.
Procedure
1 Navigate to Templates > Security > SSL/TLS Certificates. Select Application Certificate
option from CREATE drop-down list.
2 Specify the certificate name and select the type as CSR in the Type field.
3 Select the profile configured in the previous section from the Certificate Management Profile
drop-down list.
The NSX Advanced Load Balancer Controller generates a key pair and CSR, executes the
script to request the CA-signed certificate from the NSX Advanced Load Balancer PKI service,
and saves the signed certificate in persistent storage.
You can customize when certificate expiry notifications are sent; see the Certificate Management
Integration for CSR Automation section in Chapter 8. If a certificate management profile is
configured for a certificate, a renewal is attempted in the last-but-one interval. By default, the
NSX Advanced Load Balancer Controller generates events 30 days, seven days, and one day
before expiry. With this setting, certificate renewal is attempted seven days before expiry.
If the certificate management profile is configured for automatic certificate renewal, a renewal is
attempted just prior to the penultimate notification (in the above example, that will be just prior
to the seven-day notification). If the renewal succeeds, the last two notifications are not sent. If
the renewal fails, the penultimate notification is sent. Thereafter, if a manual renewal succeeds
prior to the last notification, that notification is skipped. Otherwise, the final notification is sent
(with no accompanying final attempt to renew).
When a certificate renewal occurs, a new expiration date is set and yet another notification
schedule is established per the values within the ssl_certificate_expiry_warning_days array in
force at the time.
Prerequisites
OpenSSL 1.1.x or later.
n In NSX Advanced Load Balancer, navigate to Templates > Security > SSL/TLS Certificates,
and click the Export icon at the right of the System-Default-Cert entry.
n Copy the data from the Key and Certificate fields to two new files using the COPY TO CLIPBOARD
option. Name the new files system-default.key and system-default.cer, respectively.
n Use OpenSSL to run the following command to verify the expiration date of the certificate:
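The verification command itself did not survive extraction. A typical invocation, assuming the file name system-default.cer used in this example, is:

```shell
# Print the expiration (notAfter) date of the exported certificate.
# The file name system-default.cer follows the export step above.
openssl x509 -enddate -noout -in system-default.cer
```

The output is a single line of the form notAfter=<date>.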
n Run the following command to generate a new CSR with the system-default.key.
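The CSR command itself did not survive extraction. A typical invocation, assuming the system-default.key exported above (the -subj values are placeholders; substitute your certificate's actual subject fields), is:

```shell
# Generate a new CSR from the exported system-default.key.
# The -subj values here are placeholders only.
openssl req -new -key system-default.key -out system-default.csr \
    -subj "/C=US/ST=California/L=Palo Alto/O=VMware/CN=System-Default-Cert"
```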
n Run the following command to generate a new certificate with the new expiration date. In this
example, the new certificate is named as system-default2.cer.
openssl x509 -req -days 365 -in system-default.csr -signkey system-default.key -out system-default2.cer
Changes Required using NSX Advanced Load Balancer CLI and NSX
Advanced Load Balancer UI
n Copy the system-default2.cer and the system-default.key to the NSX Advanced Load
Balancer Controller.
Optional Step: Before performing the next steps, you may disable any virtual services that are
configured to use the System-Default-Cert.
n Log in to the NSX Advanced Load Balancer CLI, and execute the following command to
change the default certificate on NSX Advanced Load Balancer (System-Default-Cert).
n Execute the certificate command, then press Enter. Run certificate file:<path to system-
default2.cer>/system-default2.cer. Enter the save command to save the changes.
n Enable the virtual services if they were disabled before the changes (optional).
n Log in to the NSX Advanced Load Balancer user interface, navigate to Templates > Security >
SSL/TLS Certificates, and check the expiry date for the renewed certificate.
Example
In the following sequence, two notification periods (45 days and 14 days) are added to the
default configuration. The tables below show the configuration before and after the change.
Note The two dates are automatically inserted and displayed in sequence.
+-----------------------------------------+---------+
| Field | Value |
+-----------------------------------------+---------+
| uuid | global |
| unresponsive_se_reboot | 300 |
| crashed_se_reboot | 900 |
| se_offline_del | 172000 |
| vs_se_create_fail | 1500 |
| vs_se_vnic_fail | 300 |
| vs_se_bootup_fail | 300 |
| se_vnic_cooldown | 120 |
| vs_se_vnic_ip_fail | 120 |
| fatal_error_lease_time | 120 |
| upgrade_lease_time | 360 |
| query_host_fail | 180 |
| vnic_op_fail_time | 180 |
| dns_refresh_period | 60 |
| se_create_timeout | 900 |
| max_dead_se_in_grp | 1 |
| dead_se_detection_timer | 360 |
| api_idle_timeout | 15 |
| allow_unauthenticated_nodes | False |
| cluster_ip_gratuitous_arp_period | 60 |
| vs_key_rotate_period | 60 |
| secure_channel_controller_token_timeout | 60 |
| secure_channel_se_token_timeout | 60 |
| max_seq_vnic_failures | 3 |
| vs_awaiting_se_timeout | 60 |
| vs_apic_scaleout_timeout | 360 |
| secure_channel_cleanup_timeout | 60 |
| attach_ip_retry_interval | 360 |
| attach_ip_retry_limit | 4 |
| persistence_key_rotate_period | 60 |
| allow_unauthenticated_apis | False |
| warmstart_se_reconnect_wait_time | 300 |
| vs_se_ping_fail | 60 |
| se_failover_attempt_interval | 300 |
| max_pcap_per_tenant | 4 |
| ssl_certificate_expiry_warning_days[1] | 30 days |
| ssl_certificate_expiry_warning_days[2] | 7 days |
| ssl_certificate_expiry_warning_days[3] | 1 days |
| seupgrade_fabric_pool_size | 20 |
| seupgrade_segroup_min_dead_timeout | 360 |
+-----------------------------------------+---------+
+-----------------------------------------+---------+
| Field | Value |
+-----------------------------------------+---------+
| uuid | global |
| unresponsive_se_reboot | 300 |
| crashed_se_reboot | 900 |
| se_offline_del | 172000 |
| vs_se_create_fail | 1500 |
| vs_se_vnic_fail | 300 |
| vs_se_bootup_fail | 300 |
| se_vnic_cooldown | 120 |
| vs_se_vnic_ip_fail | 120 |
| fatal_error_lease_time | 120 |
| upgrade_lease_time | 360 |
| query_host_fail | 180 |
| vnic_op_fail_time | 180 |
| dns_refresh_period | 60 |
| se_create_timeout | 900 |
| max_dead_se_in_grp | 1 |
| dead_se_detection_timer | 360 |
| api_idle_timeout | 15 |
| allow_unauthenticated_nodes | False |
| cluster_ip_gratuitous_arp_period | 60 |
| vs_key_rotate_period | 60 |
| secure_channel_controller_token_timeout | 60 |
| secure_channel_se_token_timeout | 60 |
| max_seq_vnic_failures | 3 |
| vs_awaiting_se_timeout | 60 |
| vs_apic_scaleout_timeout | 360 |
| secure_channel_cleanup_timeout | 60 |
| attach_ip_retry_interval | 360 |
| attach_ip_retry_limit | 4 |
| persistence_key_rotate_period | 60 |
| allow_unauthenticated_apis | False |
| warmstart_se_reconnect_wait_time | 300 |
| vs_se_ping_fail | 60 |
| se_failover_attempt_interval | 300 |
| max_pcap_per_tenant | 4 |
| ssl_certificate_expiry_warning_days[1] | 45 days |
| ssl_certificate_expiry_warning_days[2] | 30 days |
| ssl_certificate_expiry_warning_days[3] | 14 days |
| ssl_certificate_expiry_warning_days[4] | 7 days |
| ssl_certificate_expiry_warning_days[5] | 1 days |
| seupgrade_fabric_pool_size | 20 |
| seupgrade_segroup_min_dead_timeout | 360 |
+-----------------------------------------+---------+
To remove any of the warning_days entries, execute a sequence as follows within the configure
command:
Note You can add as many warning_days entries as you like. However, when removing them, NSX
Advanced Load Balancer rejects any attempt to reduce the number of entries below three.
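The CLI sequence itself did not survive extraction. A minimal sketch, assuming the entries live under controllerproperties and that the Avi CLI accepts no <field> <value> for repeated fields (the prompt and the exact removal syntax may differ by release):

```
[admin:controller]: > configure controllerproperties
[admin:controller]: controllerproperties> no ssl_certificate_expiry_warning_days 45
[admin:controller]: controllerproperties> no ssl_certificate_expiry_warning_days 14
[admin:controller]: controllerproperties> save
```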
Prerequisites
Knowledge of OpenSSL
The following are the steps to create directories for keys and certificates:
$ mkdir client-cert-auth-demo
$ cd client-cert-auth-demo
[client-cert-auth-demo] $
Use the openssl genrsa -out CA.key 2048 command to generate a 2048-bit RSA private key for
the CA. The key generation output ends with:
e is 65537 (0x10001)
Generate self-signed CA Cert:
[client-cert-auth-demo] $ openssl req -x509 -new -nodes -key CA.key -sha256 -days 1024 -out
CA.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:California
Locality Name (eg, city) [Default City]:Santa Clara
Organization Name (eg, company) [Default Company Ltd]:Avi Networks
Organizational Unit Name (eg, section) []:Engineering
Common Name (eg, your name or your server's hostname) []:demo.avi.com
Email Address []:
The following are the steps to generate client certificate signing request:
1 Generate a client.key using the openssl genrsa -out client.key 2048 command.
2 Use the openssl req -new -key client.key -out client.csr command to create a client
CSR.
Note
n The Common Name should match the hostname or FQDN of your client machine.
n Leave the email address, the challenge password, and the optional company name empty.
[client-cert-auth-demo] $ openssl x509 -req -in client.csr -CA CA.pem -CAkey CA.key -CAcreateserial -out client.pem -days 1024 -sha256
Signature ok
subject=/C=US/ST=California/L=Santa Clara/O=Avi Networks/OU=Engineering/CN=client.avi.com
Getting CA Private Key
Use the following OpenSSL command to convert the client key format from PEM to PKCS12.
Provide an export password.
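The conversion command itself did not survive extraction. A typical invocation, using the client.key, client.pem, and CA.pem files created above (the pass:avi-demo value is a placeholder export password), is:

```shell
# Convert the client key and certificate from PEM to PKCS12 (client.pfx),
# bundling the CA certificate. -passout supplies the export password
# non-interactively; omit it to be prompted instead.
openssl pkcs12 -export -out client.pfx -inkey client.key -in client.pem \
    -certfile CA.pem -passout pass:avi-demo
```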
Configuring CRL
This section explains the two ways of configuring CRL: generating the CRL and regenerating
the CRL.
Generating CRL
By default, if client certificate validation is enabled in an HTTP profile, the PKI profile used by
the virtual service must contain at least one CRL. This CRL is issued by the CA that signed the
client certificate. Use the following OpenSSL command to generate the CRL using the key and the
certificate created in the previous steps.
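The OpenSSL command itself did not survive extraction. The typical invocation is openssl ca -gencrl -keyfile CA.key -cert CA.pem -out CA.crl, which relies on the CA database configured in the system openssl.cnf. The sketch below supplies a minimal private config instead, so it runs without the system-wide CA directory:

```shell
# Generate a CRL signed by the demo CA created above. A minimal private
# config supplies the CA database files, so this sketch does not depend
# on the system-wide openssl.cnf CA directory.
cat > crl.cnf <<'EOF'
[ ca ]
default_ca = demo_ca
[ demo_ca ]
database  = ./index.txt
crlnumber = ./crlnumber
EOF
touch index.txt
echo 01 > crlnumber
openssl ca -config crl.cnf -gencrl -keyfile CA.key -cert CA.pem \
    -md sha256 -crldays 30 -out CA.crl
```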
This command may exhibit a few errors if the CA database files do not yet exist. Take action as
required. For instance, the following commands create the file /etc/pki/CA/index.txt and the
file /etc/pki/CA/crlnumber with the content 01:
[client-cert-auth-demo] $ touch /etc/pki/CA/index.txt
[client-cert-auth-demo] $ echo 01 > /etc/pki/CA/crlnumber
n Copy the client.pfx to your workstation (in this example, a MAC workstation is used), and
open it in the keychain.
n Specify the export password to add the client PFX key to your local keychain store as shown
below.
Note Use the export password provided while converting PEM key to PFX key.
2 In this example, a new PKI profile is created. Provide the desired name, and check the Enable
CRL Check box.
3 In Certificate Authority (CA) tab, select Add and click Upload Certificate Authority File (CA)
to upload a file.
4 Navigate to the Certificate Revocation List (CRL) tab and select Add. You can add the details
either by providing the server URL or by uploading the file saved on your local workstation.
5 Click Save. As shown below, the CA file and the CRL file have been added to the PKI profile
(My-PKI-Profile). The application profile should contain a CRL for each of the intermediate CAs
in the chain of trust.
Creating PKI Application Profile using the NSX Advanced Load Balancer CLI
[admin:My-Avi-Controller-17.2.10]: > configure pkiprofile
test
[admin:My-Avi-Controller-17.2.10]: pkiprofile> ca_certs
New object being created
[admin:My-Avi-Controller-17.2.10]: pkiprofile:ca_certs> certificate --
Please input the value for field certificate (Enter END to terminate input):-----BEGIN
CERTIFICATE----- <————————— Paste cert here
MIIFAzCCA+ugAwIBAgIEUdNg7jANBgkqhkiG9w0BAQsFADCBvjELMAkGA1UEBhMC
VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50
cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3Qs
r2RsCAwEAAaOCAQkwggEFMA4GA1UdDwEB/wQEAwIBBjAP
jbEnmUK+xJPrSFdDcSPE5U6trkNvknbFGe/KvG9CTBaahqkEOMdl8PUM4ErfovrO
GhGonGkvG9/q4jLzzky8RgzAiYDRh2uiz2vUf/31YFJnV6Bt0WRBFG00Yu0GbCTy
BrwoAq8DLcIzBfvLqhboZRBD9Wlc44FYmc1r07jHexlVyUDOeVW4c4npXEBmQxJ/
B7hlVtWNw6f1sbZlnsCDNn8WRTx0S5OKPPEr9TVwc3vnggSxGJgO1JxvGvz8pzOl
u7sY82t6XTKH920l5OJ2hiEeEUbNdg5vT6QhcQqEpy02qUgiUX6C
-----END CERTIFICATE----- <————————— Press Enter key after pasting cert
END <————————— Type END and press Enter key
[admin:My-Avi-Controller-17.2.10]: pkiprofile:ca_certs> save
[admin:My-Avi-Controller-17.2.10]: pkiprofile> no crl_check <————————— Optional for
testing
[admin:My-Avi-Controller-17.2.10]: pkiprofile> save
Procedure
1 Navigate to Templates > Profiles > Application and select HTTP from Create drop-down list
to create a new HTTP application profile. Provide the desired name, and set the type to HTTP.
2 Select the Security tab, and choose the Require tab under the Client SSL Certificate
Validation.
3 Select the PKI profile created in the previous step, and add the desired HTTP headers that you
want to see in the application logs.
Procedure
2 Edit or create the application profile for your L4 SSL/TLS application. For instance, my-L4-app-
profile.
5 Enter the ssl_client_certificate_mode. If you key in just a portion of the keyword and then
press the TAB key twice, three choices will appear.
6 Pick the desired validation type, which is explained in a subsequent section of this article.
The following are the steps to associate application profile with virtual service:
3 Click edit icon and select the HTTP application profile created in the previous step.
The PKI profile has an option for full-chain CRL checking. You can enable this option by checking
the Enable CRL Check box.
n Full-chain CRL checking disabled: By default, if client certificate validation is enabled in the
HTTP profile used by the virtual service, the PKI profile used by the virtual service must contain
at least one CRL, a CRL issued by the CA that signed the client’s certificate.
For a client to pass certificate validation, the CRL in the profile must be from the same CA that
signed the certificate presented by the client, and the certificate must not be listed in the CRL
as revoked.
n Full-chain CRL checking enabled: For more rigorous certificate validation, CRL checking can
be enabled in the PKI profile. In this case, NSX Advanced Load Balancer requires the PKI profile
to contain a CRL for every intermediate certificate within the chain of trust for the client.
For a client to pass certificate validation, the profile must contain a CRL from each intermediate
CA in the chain of trust, and the certificate cannot be listed in any of the CRLs as revoked.
If the profile is missing a CRL for any of the intermediate CAs, or the certificate is listed as
revoked in any of those CRLs, the client’s request for an SSL session to the virtual service is
denied.
Note Another option in the PKI profile (Ignore Peer Chain) controls how NSX Advanced Load
Balancer assembles the chain of trust for a client, specifically whether the intermediate certificates
presented by the client are allowed to be used. If full-chain CRL checking is enabled, the PKI
profile must contain CRLs from the signing CAs for every certificate that is used to build a given
client’s chain of trust, whether the intermediate certificates are from the client or from the PKI
profile.
Here is an example of a PKI profile with CRL checking enabled. This profile also contains the
intermediate and root certificates that form the chain of trust for the server certificate. The profile
also contains the CRLs from the issuing authorities for the server and intermediate certificates.
The www.root.client.com CRL is used to verify whether certificate www.intermediate.client.com is
valid. Likewise, the www.intermediate.client.com CRL is used to verify whether the "client" (leaf)
certificate www.client.client.com is valid.
2 Click Create.
4 If creating a new profile, specify a name and add the key, certificate, and CRL files. Ensure the
profile contains a CRL for each intermediate CA in the chain of trust.
5 Click Save.
Use Case
If a certificate expires or needs to be replaced, multiple virtual services can be impacted.
Manually updating each virtual service, one by one, to use a replacement certificate presents an
administrative burden. By updating the certificate in place, NSX Advanced Load Balancer lifts that
burden. Updating the pre-existing named certificate is automatically followed by a push to all
affected SEs, which in turn allows all affected virtual services to continue without interruption.
UI Interface
1 Navigate to Templates > Security > SSL/TLS Certificates.
2 Click the pencil icon at the extreme right of the row to open the certificate editor.
Note Any row listing a self-signed certificate will present no such option.
2 Two notification periods (45 days and 14 days) are specified and saved into the configuration.
Note The two dates are automatically inserted and displayed in sequence.
The support for HSM and ASM communication on NSX Advanced Load Balancer is as follows:
n NSX Advanced Load Balancer supports dedicated interfaces for HSM communication on new
Service Engines.
n NSX Advanced Load Balancer supports dedicated interfaces for ASM (sideband) communication
on new and existing Service Engines.
n NSX Advanced Load Balancer supports dedicated interfaces for HSM communication on new
and existing NSX Advanced Load Balancer Controllers.
For more information, see the Additional Deployment Options section in the Cisco CSP Installation
Guide.
NSX Advanced Load Balancer includes integration support for networked Thales Luna HSM
products (formerly SafeNet Luna Network HSM) and AWS CloudHSM V2.
This article covers the Thales Luna Network HSM (formerly SafeNet Luna Network HSM)
integration. For more information on the rebranding, see the Thales website.
Integration Support
NSX Advanced Load Balancer can be configured to support a cluster of HSM devices in high
availability (HA) mode. NSX Advanced Load Balancer support of HSM devices requires installation
of the user’s Thales Luna Client Software bundle, which can be downloaded from the Thales
website.
By default, NSX Advanced Load Balancer Controller and Service Engines use their respective
management interfaces for HSM communication. On CSP, NSX Advanced Load Balancer supports
the use of a dedicated Service Engine data interface for HSM interaction. Also, on the CSP
platform, you can use a dedicated Controller interface for HSM communication.
You can choose to create the HSM group in the admin tenant with all the Service Engines spread
across multiple tenants. This way, HSM can be enabled on a per-SE-group basis by attaching the
HSM group to the corresponding SE group. In this mode, the configuration to choose between
a dedicated interface and a management interface for HSM communication is done in the admin
tenant; all other tenants are forced to use that configuration.
Alternatively, you can create HSM groups in their respective tenants. The configuration choice of
a dedicated or management interface for HSM communication is determined at the tenant level.
In this mode, Controller IPs can overlap in every HSM group. Internally, the certificate for these
overlapping clients is created once and reused for any subsequent HSM group creation.
Prerequisites
n Thales Luna devices are installed on your network.
n Thales Luna devices are reachable from the NSX Advanced Load Balancer Controller and
Service Engines.
n Thales Luna devices must have a virtual HSM partition defined before installing the client
software. Clients are associated with a unique partition on the HSM. These partitions should
be pre-created on all the HSMs that will be configured in HA/non-HA mode. Also note that
the password to access these partitions should be the same across the partitions on all HSM
devices.
n Server certificates for Thales Luna devices are available for creating the HSM Group in NSX
Advanced Load Balancer Controller for mutual authentication.
n Each NSX Advanced Load Balancer Controller and Service Engine must:
n Have the client license from Thales Luna to access the HSM.
n Be able to reach the HSM at ports 22 and 1792, through either the management or the
dedicated interface of the Controller, and either the management or the dedicated interface
of the Service Engine.
To enable support for Thales Luna Network HSM, the downloaded Thales Luna client software
bundle must be uploaded to the NSX Advanced Load Balancer Controller. It must be named
safenet.tar and can be prepared as follows:
n Copy files from the downloaded software into any given directory, for instance, safenet_pkg.
n Change directory (cd) to that directory, and enter the cp commands as follows:
cp 610-012382-008_revC/linux/64/configurator-5.4.1-2.x86_64.rpm
configurator-5.4.1-2.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/configurator-7.3.0-165.x86_64.rpm
configurator-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/libcryptoki-7.3.0-165.x86_64.rpm
libcryptoki-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/vtl-7.3.0-165.x86_64.rpm vtl-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/lunacmu-7.3.0-165.x86_64.rpm lunacmu-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/cklog-7.3.0-165.x86_64.rpm cklog-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/multitoken-7.3.0-165.x86_64.rpm
multitoken-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/ckdemo-7.3.0-165.x86_64.rpm ckdemo-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/lunacm-7.3.0-165.x86_64.rpm lunacm-7.3.0-165.x86_64.rpm
tar -cvf safenet.tar configurator-7.3.0-165.x86_64.rpm libcryptoki-7.3.0-165.x86_64.rpm
vtl-7.3.0-165.x86_64.rpm lunacmu-7.3.0-165.x86_64.rpm cklog-7.3.0-165.x86_64.rpm
multitoken-7.3.0-165.x86_64.rpm ckdemo-7.3.0-165.x86_64.rpm lunacm-7.3.0-165.x86_64.rpm
n The HSM package can be uploaded in the web interface at Administration > Settings > Upload
HSM Packages.
n HSM package upload is also supported through the CLI. You can use the following command
in the NSX Advanced Load Balancer Controller CLI shell to upload the HSM package:
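The upload command itself did not survive extraction. A sketch of the invocation, assuming the bundle was copied to /tmp on the Controller (the exact CLI verb may vary by release):

```
> upload hsmpackage file /tmp/safenet.tar
HSM Package uploaded successfully
```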
This command uploads the packages and installs them on the NSX Advanced Load Balancer
Controller or Controllers, if clustered. If the Controller is deployed as a three-node cluster, the
command installs the packages on all three nodes. Upon completion of the command, the system
displays the HSM Package uploaded successfully message.
n NSX Advanced Load Balancer Service Engines in an SE group referring to an HSM group need
a one-time reboot for auto-installation of the HSM packages. To reboot an NSX Advanced Load
Balancer SE, issue the following CLI shell command:
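The reboot command itself did not survive extraction. A sketch, using the SE name Avi-se-ksueq from this example (the exact CLI verb may vary by release):

```
> reboot serviceengine Avi-se-ksueq
```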
n To allow NSX Advanced Load Balancer Controllers to talk to the Thales Luna HSM, the Thales
Luna client software bundle distributed with the product must be uploaded to NSX Advanced
Load Balancer. The software bundle preparation and upload are described above. In this
example, note that the NSX Advanced Load Balancer SE name is Avi-se-ksueq.
Step 1: Create the HSM Group and add the HSM devices to it
To begin, use the following commands on Controller bash shell to fetch the certificates of the HSM
servers. The example below fetches certificates from two servers 1.1.1.11 and 1.1.1.13.
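The fetch commands themselves did not survive extraction. A sketch, assuming the appliances expose their certificate as server.pem in the admin home directory (standard Thales Luna practice); the local file names are arbitrary:

```
scp admin@1.1.1.11:server.pem hsm_1.1.1.11.pem
scp admin@1.1.1.13:server.pem hsm_1.1.1.13.pem
```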
The contents of these certificates are used while creating the HSM Group. NSX Advanced Load
Balancer supports trusted authentication for all nodes in the system; this is done by providing
the IP addresses of the Controllers and Service Engines that will interact with the HSM, using the
corresponding options of the HSM Group editor. The Thales Luna server certificates can also be
provided by the security team managing the Thales Luna appliances. In either case, having access
to these certificates is a prerequisite to creating any HSM configuration in NSX Advanced Load
Balancer.
By default, SEs use the management network to interact with the HSM. On CSP, NSX Advanced
Load Balancer also supports the use of a dedicated network for HSM interaction. Also, on the CSP
platform, you can use a dedicated interface on the Controllers for HSM communication.
The following are the steps to create the HSM group from the GUI:
n Switch to the desired tenant and navigate to Templates > Security > HSM Groups.
n Select either Dedicated Network or Management Network for the HSM to communicate with.
n Specify the client IP addresses of the desired Thales Luna appliances and the respective server
certificates obtained previously. Multiple HSMs may be included in the group via the green
Add Additional HSM button.
The Password and partition Serial Number fields can be populated if the respective HSM partition
passwords are available at this stage. Otherwise, this must be done after the client registration
step below.
Note
n If any dedicated SE or Controller interfaces have been configured for HSM communication,
check the Dedicated Interface box and verify that the IPs listed are those of the desired
dedicated interfaces on the Service Engines and/or Controllers. The UI allows changing the IP
addresses if this is not the case.
n All NSX Advanced Load Balancer Controllers and all Service Engines associated with the SE
group should have at least one IP address in the list to ensure access to the HSMs. This step
is extremely important because Thales Luna appliances will not allow communications from
unregistered client IP addresses. Click Save once all client IP addresses have been verified.
Step 2: Register the Client with HSM Devices for Mutual Authentication
The clients in this case are the NSX Advanced Load Balancer Controllers and Service Engines, and
the generated client certificates must be registered with the Thales Luna appliances for mutual
authentication. This can be done directly per steps 3 and 4 below, or by sending the client
certificates to the security team managing the HSM appliances.
The following are the steps to register the client with HSM devices:
1 Navigate to Templates > Security > HSM Groups. Click the edit icon to download the generated
certificates.
2 After downloading, save the certificate with a .pem extension. In this example, the certificate
must be saved as 10.160.100.220.pem before copying it to the HSM with scp.
4 Perform steps (1) and (2) above for all HSM devices. The next steps must be performed only
after all client certificates are registered on all the HSM appliances configured above. To verify
the registration, first ensure the (partition) password is populated in the HSM group by
editing it.
5 On the NSX Advanced Load Balancer Controller bash shell, the application ID must be opened
before the SE can communicate with the HSM. This can be done using the following command,
which is automatically replicated to each NSX Advanced Load Balancer Controller in the
cluster. If HSM groups were created in different tenants, the safenet.py script can take an
optional -t argument. Alternatively, the default admin tenant can be provided as the argument
value. Verify that the application ID can be opened successfully per the output below.
Note In the step above, if an error message appears stating that the application is already open,
you can close it using the following command. After closing it, reopen the application.
Verify that the partition serial numbers listed below match the ones set up on the Thales
Luna appliances or the ones provided by the security team. This should also match with the
configuration in the HSM group object. Internally, the serial number is used to configure HA if the
client is registered on more than one partition on the HSM.
Number of slots: 5
You can enable HA from the CLI as follows after switching to the appropriate tenant if required.
Alternatively, this can also be done in the web interface by selecting the HSM group and editing it
to select the Enable HA check box. This option is available only while editing the HSM group with
more than one server.
Once HA is set up, verify the output of the listSlots command to ensure the avi_group virtual
card slot is configured.
Number of slots: 1
n Switch to appropriate tenant and navigate to Infrastructure > Cloud Resources > Service
Engine Group.
n Bring up the Service Engine group editor for the desired Service Engine group.
n Click Save.
The Controller is set up as a client of the HSM and can be used to create keys and certificates on
the HSM. Both RSA and EC key/certificate creation are supported.
Use a browser to navigate to the Controller’s management IP address. If NSX Advanced Load
Balancer is deployed as a three-node Controller cluster, navigate to the management IP address
of the cluster. Use this procedure to create keys and certificates. The creation process is similar to
any other key/certificate creation. For a key/certificate bound to HSM, select the HSM group while
creating the object. The picture below illustrates the creation of self-signed certificate bound to a
HSM group.
Note The HSM Group t2-avihsm2 is selected. This is the HSM group that was created earlier. You
can create the self-signed EC certificate on the HSM provided in t2-avihsm2 by clicking the Save
button.
Use a browser to navigate to the NSX Advanced Load Balancer Controller’s management IP
address. If NSX Advanced Load Balancer is deployed as a three-node Controller cluster, navigate
to the management IP address of the cluster. Use this procedure to import the private keys
created using the Thales Luna cmu/sautil utilities, and the associated certificates.
n Upload the certificate file in Upload or Paste Certificate File field in the Certificate
Information section. You can select Paste text (to copy-and-paste the certificate text
directly in the web interface) or Upload File.
n If the key file is secured by a passphrase, enter it in the Key Passphrase field.
n Paste the key file (if copy-and-pasting) or navigate to the location of the file (if uploading).
n Click Validate. NSX Advanced Load Balancer checks the key and certificate files to ensure they
are valid.
n Select the HSM certificate from the SSL Certificate drop-down list.
n Click Advanced. On the Advanced page, select the SE group to which the HSM group was
added.
n Click Save.
The virtual service is now ready to handle SSL/TLS traffic using the encryption/decryption services
of the Thales Luna Network HSM device.
n Cisco CSP
Note Starting with NSX Advanced Load Balancer version 20.1.5, dedicated interfaces for Service
Engines deployed in vCenter No Orchestrator environments are supported.
Dedicated hardware security module (HSM) interfaces on NSX Advanced Load Balancer Service
Engines use the following configuration parameters:
n avi.hsm-ip.SE
n avi.hsm-static-routes.SE
n avi.hsm-vnic-id.SE
Parameters
avi.hsm-ip.SE
n Description : This is the IP address of the dedicated HSM vNIC on the SE (this is NOT the IP
address of the HSM).
n Format: IP-address/subnet-mask
avi.hsm-static-routes.SE
n Description : These are comma-separated, static routes to reach HSM devices. Even /32
routes can be provided.
Note If there is a single static route, provide it alone and ensure the square brackets are still
matched. Also, if the HSM devices are in the same subnet as the dedicated interfaces, provide the
gateway as the default gateway for the subnet.
n Format : [ hsm network1/mask1 via gateway1, hsm network2/mask2 via gateway2 ] OR [ hsm
network1/mask1 via gateway1 ]
avi.hsm-vnic-id.SE
n Description : For CSP, this is the ID of the dedicated HSM vNIC and is typically 3 on CSP
(vNIC0 is management interface, vNIC1 is data-in interface and vNIC2 is data-out interface).
For vCenter No Orchestrator, this is the vNIC ID, for instance, "3" for "Eth3".
Instructions
Cisco CSP
A sample YAML file for the Day Zero configuration on the CSP is shown below:
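A representative Day Zero YAML file for a Service Engine with a dedicated HSM interface follows; the management parameters, addresses, and authentication token shown are illustrative values only:

```yaml
avi.mgmt-ip.SE: "10.128.2.18"
avi.mgmt-mask.SE: "255.255.255.0"
avi.default-gw.SE: "10.128.2.1"
AVICNTRL: "10.10.22.50"
AVICNTRL_AUTHTOKEN: "febab55d-995a-4523-8492-f798520d4515"
avi.hsm-ip.SE: 10.160.103.227/24
avi.hsm-static-routes.SE: [10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via 10.160.103.2]
avi.hsm-vnic-id.SE: '3'
```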
Once an NSX Advanced Load Balancer Service Engine is created with the Day Zero configuration
file and appropriate virtual NIC interfaces are added to the SE service instance on Cisco CSP,
verify that the dedicated vNIC configuration is applied successfully and the HSM devices are
reachable via this interface. In this case, interface eth3 (dedicated HSM interface) is configured
with IP 10.160.103.227/24.
Log in to the bash prompt of the NSX Advanced Load Balancer SE, run the ip route command, and perform a ping test to check reachability via the dedicated interface IP.
vCenter No-Orchestrator
When the Service Engine is being deployed, add the OVF properties listed above to the virtual machine. For existing Service Engines, the SE virtual machine can be powered off, the OVF properties added, and the VM powered on.
Additional Information
For different types of supported configuration for HSM and ASM communication on NSX
Advanced Load Balancer, refer to How to configure dedicated interfaces for HSM and ASM
communication on Cisco CSP.
n Cisco CSP
Background
Dedicated hardware security module (HSM) interfaces on NSX Advanced Load Balancer Service
Engines use the following configuration parameters:
n avi.hsm-ip.SE
n avi.hsm-static-routes.SE
n avi.hsm-vnic-id.SE
For existing SEs, these parameters can be populated in the /etc/ovf_config file.
Note All parameters in this file are comma-separated and the file format is slightly different
from the YML file used for spinning up new Service Engines. However, the parameters and their
respective formats are exactly the same as they are for new Service Engines.
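For illustration only, a sketch of such an entry in /etc/ovf_config might look like the following; the exact on-disk layout is an assumption here, so compare with the existing contents of the file on your SE before editing:

```
avi.hsm-ip.SE: 10.160.103.227/24, avi.hsm-static-routes.SE: [10.128.1.0/24 via 10.160.103.1], avi.hsm-vnic-id.SE: '3'
```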
YAML parameters
avi.hsm-ip.SE
n Description : This is the IP address of the dedicated HSM vNIC on the SE (this is NOT the IP
address of the HSM).
n Format : IP-address/subnet-mask.
avi.hsm-static-routes.SE
n Description : These are comma-separated, static routes to reach HSM devices. Even /32
routes can be provided.
Note If there is a single static route, provide the same and ensure the square brackets are
matched. Also, if HSM devices are in the same subnet as the dedicated interfaces, provide the
gateway as the default gateway for the subnet.
n Format : [ hsm network1/mask1 via gateway1, hsm network2/mask2 via gateway2 ] OR [ hsm
network1/mask1 via gateway1 ]
avi.hsm-vnic-id.SE
n Description : This is the ID of the dedicated HSM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface).
Instructions
CSP Configuration
To add a dedicated HSM vNIC on an existing SE CSP service, perform the following steps:
Note In the sample configuration provided below, vNIC3 is used, which is actually the fourth NIC on the CSP service.
1 Navigate to Configuration > Service > Action > Power Off to power off the NSX Advanced Load Balancer SE service using the CSP user interface.
2 Navigate to Configuration > Service > Action > Service Edit > Add vNIC to add a new vNIC to the SE with the desired parameters. Provide the VLAN ID, VLAN type, VLAN tagged, Network Name, Model, and so on, and click Submit.
3 Navigate to Configuration > Service > Action > Power ON to power on the SE service in the CSP UI.
1 Perform the following steps using NSX Advanced Load Balancer Service Engine bash shell.
ssh admin@<SE-MGMT-IP>
bash#
bash# sudo su
bash# /opt/avi/scripts/stop_se.sh
bash# mv /var/run/avi/ovf_properties.saved /home/admin
Note Perform a move operation; do not copy this file. Edit it to provide the three comma-
separated, HSM-dedicated NIC related parameters. The file looks like the following:
2 Verify that the dedicated vNIC information is applied correctly and the HSM devices are
reachable via this interface. In this sample configuration, the eth3 dedicated HSM interface
is configured with IP 10.160.103.227/24.
YAML Parameters
avi.asm-ip.SE
n Description : This is the IP address of the dedicated sideband interface on the SE (this is NOT
the self IP or virtual service IP of the ASM device).
n Format : IP-address/subnet-mask.
avi.asm-static-routes.SE
n Description : These are comma-separated, static routes to reach the sideband ASM virtual
service IP. Even /32 routes can be provided. The gateway will be the self IP of the ASM device.
Note If there is a single static route, provide the same and ensure the square brackets
are matched. Also, if the ASM virtual service IPs are in the same subnet as the dedicated
interfaces, provide the gateway as the default gateway for the subnet.
avi.asm-vnic-id.SE
n Description : This is the ID of the dedicated ASM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface)
Instructions
A sample SE YAML file for the Day Zero configuration on the CSP will look as follows:
avi.mgmt-ip.SE: "10.128.2.18"
avi.mgmt-mask.SE: "255.255.255.0"
avi.default-gw.SE: "10.128.2.1"
AVICNTRL: "10.10.22.50"
AVICNTRL_AUTHTOKEN: "febab55d-995a-4523-8492-f798520d4515"
avi.asm-vnic-id.SE: '3'
avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2]
avi.asm-ip.SE: 10.160.102.227/24
Once the SE is created with this Day Zero configuration and appropriate virtual NIC interfaces are
added to the SE service instance on CSP, verify that the dedicated vNIC configuration is applied
successfully and the ASM virtual service IPs are reachable via this interface. In this case, the
interface eth3 is the dedicated sideband ASM interface and is configured with IP 10.160.102.227/24.
Note All parameters in this file are comma-separated and the file format is slightly different
from the YML file used for spinning up new Service Engines. However, the parameters and their
respective formats are exactly the same as they are for new Service Engines.
YAML parameters
avi.asm-ip.SE
n Description : This is the IP address of the dedicated sideband interface on the SE (this is NOT
the self IP or virtual service IP of the ASM device).
n Format : IP-address/subnet-mask.
avi.asm-static-routes.SE
n Description : These are comma-separated, static routes to reach the sideband ASM virtual
service IPs. Even /32 routes can be provided. The gateway will be the self IP of the ASM
device.
Note If there is a single static route, provide the same and ensure the square brackets are
matched. Also, if the ASM virtual service IPs are in the same subnet as the dedicated interfaces,
provide the gateway as the default gateway for the subnet.
avi.asm-vnic-id.SE
n Description : This is the ID of the dedicated ASM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface).
n Format : numeric vNIC ID, for example, avi.asm-vnic-id.SE: '3'.
Instructions
Perform the following steps to add a dedicated ASM vNIC on an existing SE CSP service. In this example, vNIC3 is used, which is actually the fourth NIC on the CSP service.
n Navigate to Configuration > Services > Action > Power Off to power off the SE service on
Cisco CSP.
n To add a new vNIC to the SE with the desired parameters, navigate to Configuration > Services > Action > Service Edit, click Add vNIC, provide the VLAN ID, VLAN type, VLAN tagged, Network Name, Model, and so on, and click Submit.
n Navigate to Configuration > Services > Action and select Power On to power on the SE
service on Cisco CSP.
Perform the following steps on the Service Engine using bash shell.
n SSH to NSX Advanced Load Balancer SE IP and perform the following steps.
Note Move this file; do not copy it. Edit it to provide the three comma-separated, ASM-dedicated NIC related parameters. The file looks like the following:
n Verify that the dedicated vNIC information is applied correctly and the ASM virtual service IPs
are reachable via this interface. In this case, the interface eth3 is the dedicated ASM interface and is configured with IP 10.160.102.227/24.
YAML parameters
HSM parameters
1 avi.hsm-ip.SE
a Description : This is the IP address of the dedicated HSM vNIC on the SE (this is NOT the
IP address of the HSM device).
b Format : IP-address/subnet-mask.
2 avi.hsm-static-routes.SE
a Description: These are comma-separated, static routes to reach HSM devices. Even /32
routes can be provided.
Note If there is a single static route, provide the same and ensure the square brackets are
matched. Also, if HSM devices are in the same subnet as the dedicated interfaces, provide
the gateway as the default gateway for the subnet.
3 avi.hsm-vnic-id.SE
a Description: This is the ID of the dedicated HSM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface).
b Format: numeric vNIC ID, for example, avi.hsm-vnic-id.SE: '3'.
ASM parameters
1 avi.asm-ip.SE
a Description: This is the IP address of the dedicated sideband interface on the SE (this is
NOT the self IP or virtual service IP of the ASM device).
b Format: IP-address/subnet-mask.
2 avi.asm-static-routes.SE
a Description: These are comma-separated, static routes to reach the sideband ASM virtual
service IPs. Even /32 routes can be provided. The gateway will be the self IP of the ASM device.
Note If there is a single static route, provide the same and ensure the square brackets
are matched. Also, if the ASM virtual service IPs are in the same subnet as the dedicated
interfaces, provide the gateway as the default gateway for the subnet.
3 avi.asm-vnic-id.SE
a Description: This is the ID of the dedicated ASM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface)
b Format: numeric vNIC ID, for example, avi.asm-vnic-id.SE: '3'.
Instructions
A sample Service Engine YAML file for the Day Zero configuration on Cisco CSP looks as follows:
avi.hsm-ip.SE: 10.160.103.227/24
avi.hsm-static-routes.SE: [10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via 10.160.103.2]
avi.hsm-vnic-id.SE: '3'
avi.asm-vnic-id.SE: '4'
avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2]
avi.asm-ip.SE: 10.160.102.227/24
Once SE is created with this Day Zero configuration and appropriate virtual NIC interfaces are
added to the SE service instance in CSP, verify that the dedicated vNIC configuration is applied
successfully and the HSM devices and ASM virtual service IPs are reachable via the dedicated
interfaces. In this sample configuration, the interface eth3 is configured as the dedicated HSM
interface with IP 10.160.103.227/24 and the interface eth4 is configured as the sideband ASM
interface with IP 10.160.102.227/24.
Note NSX Advanced Load Balancer Service Engine requires five interfaces for this configuration.
n vNIC0: Management interface
n vNIC1: Data-in interface
n vNIC2: Data-out interface
n vNIC3: Dedicated HSM interface (eth3)
n vNIC4: Dedicated sideband ASM interface (eth4)
To verify configuration of both the dedicated interfaces, SSH to the NSX Advanced Load Balancer SE IP, run the ip route command, and perform a ping test.
bash# ip route
default via 10.10.2.1 dev eth0
10.10.1.0/24 via 10.160.103.1 dev eth3
10.10.2.0/24 via 10.160.103.2 dev eth3
10.10.2.0/24 dev eth0 proto kernel scope link src 10.128.2.27
10.160.103.0/24 dev eth3 proto kernel scope link src 10.160.103.227
bash# ping -I eth3 <HSM-IP>
ping -I eth3 10.10.1.51
PING 10.10.1.51 (10.10.1.51) from 10.160.103.227 eth3: 56(84) bytes of data.
64 bytes from 10.10.1.51: icmp_seq=1 ttl=62 time=0.229 ms
Note All parameters in this file are comma-separated and the file format is slightly different
from the YML file used for spinning up new Service Engines. However, the parameters and their
respective formats are exactly the same as they are for new Service Engines.
YAML parameters
avi.asm-ip.SE
n Description: This is the IP address of the dedicated sideband interface on the SE (this is NOT
the self IP or virtual service IP of the ASM device).
n Format : IP-address/subnet-mask.
avi.asm-static-routes.SE
n Description: These are comma-separated, static routes to reach the sideband ASM virtual
service IPs. Even /32 routes can be provided. The gateway will be the self IP of the ASM
device.
Note If there is a single static route, provide it and ensure the square brackets
are matched. Also, if the ASM virtual service IPs are in the same subnet as the dedicated
interfaces, provide the gateway as the default gateway for the subnet.
avi.asm-vnic-id.SE
n Description: This is the ID of the dedicated ASM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface)
n Format: numeric vNIC ID, for example, avi.asm-vnic-id.SE: '3'.
Instructions
Perform the following steps to add a dedicated ASM vNIC on an existing SE CSP service. In this example, vNIC3 is used, which is actually the fourth NIC on the CSP service.
n Navigate to Configuration > Services > Action > Power Off to power off the SE service on
Cisco CSP.
n To add a new vNIC to the SE with the desired parameters, navigate to Configuration > Services > Action > Service Edit, click Add vNIC, provide the VLAN ID, VLAN type, VLAN tagged, Network Name, Model, and so on, and click Submit.
n Navigate to Configuration > Services > Action and select Power On to power on the SE
service on Cisco CSP.
Perform the following steps on the Service Engine using bash shell.
n SSH to NSX Advanced Load Balancer SE IP and perform the following steps.
ssh admin@<SE-MGMT-IP>
bash#
bash# sudo su
bash# /opt/avi/scripts/stop_se.sh
bash# mv /var/run/avi/ovf_properties.saved /home/admin
Note Move this file; do not copy it. Edit the file to provide the three comma-separated, ASM-dedicated NIC related parameters. The file looks like the following:
n Verify that the dedicated vNIC information is applied correctly and the ASM virtual service IPs are reachable via this interface. In this case, the interface eth3 is the dedicated ASM interface and is configured with IP 10.160.102.227/24.
n avi.hsm-ip.Controller
n avi.hsm-static-routes.Controller
n avi.hsm-vnic-id.Controller
YAML parameters
For configuration on a new NSX Advanced Load Balancer Controller, these parameters can be
provided in the day-zero YAML file.
avi.hsm-ip.Controller
n Description: This is the IP address of the dedicated HSM vNIC on the Controller (this is not the IP address of the HSM).
n Format: IP-address/subnet-mask
avi.hsm-static-routes.Controller
n Description: These are comma-separated, static routes to reach the HSM devices from the
respective NSX Advanced Load Balancer Controller. Even /32 routes can be provided.
Note If there is a single static route, provide the same and ensure the square brackets are
matched. Also, if the HSM devices are in the same subnet as the dedicated interfaces, provide
the gateway as the default gateway for the subnet.
avi.hsm-vnic-id.Controller
n Description: This is the ID of the dedicated HSM vNIC and is typically 1 on CSP. vNIC0 is the
management interface, which is the only interface on NSX Advanced Load Balancer Controller
by default.
n Format: 'numeric-vnic-id'
Instructions
A sample NSX Advanced Load Balancer Controller service YAML file for the Day Zero configuration on the CSP looks as follows:
avi.default-gw.Controller: 10.128.2.1
avi.mgmt-ip.Controller: 10.128.2.30
avi.mgmt-mask.Controller: 255.255.255.0
avi.hsm-ip.Controller: 10.160.103.230/24
avi.hsm-static-routes.Controller: [10.128.1.0/24 via 10.160.103.1, 10.130.1.0/24 via
10.160.103.1]
avi.hsm-vnic-id.Controller: '1'
Once the NSX Advanced Load Balancer Controller is created with this Day Zero configuration and the additional virtual NIC interface is added to the Controller service instance on CSP, verify that the dedicated vNIC configuration is applied successfully and the HSM devices are reachable via the dedicated interface. In this case, eth1 is configured as the dedicated HSM interface with IP 10.160.103.230/24.
n Annex A: Approved Security Functions for FIPS PUB 140-2, Security Requirements for Cryptographic Modules.
n Annex C: Approved Random Number Generators for FIPS PUB 140-2, Security Requirements for Cryptographic Modules.
There are four levels of security in the FIPS 140-2 standard, and for each level there are different
areas related to the design and implementation of a tool’s cryptographic design. The following are
the levels of security:
n Level-1: This defines the standards for basic security in a cryptographic module and enables
FIPS approved cipher suites.
n Level-2: This defines the standards for tamper-evidence physical security and role-based
authentication of cryptographic modules. Tamper-evidence physical security includes tamper-
evident coatings, seals, or pick-resistant locks.
n Level-3: This defines standards for tamper-resistance physical security and identity-based
authentication. Hardware devices must have internal HSMs with tamper-resistant features such
as a sealed epoxy cover, which when removed, must render the device useless and make the
keys inaccessible.
n Level-4: This requires tamper detection circuits to detect any device penetration, and erase
the contents of the device in the event of tampering.
VMware has specifically obtained FIPS 140-2 validation of its OpenSSL FIPS Object Module
v2.0.20-vmw that is used in NSX Advanced Load Balancer components.
Note Security Levels 2–4 are specific to various levels of physical security, such as:
n Tamper-evidence physical security: This includes tamper-evident coatings, seals, or pick-resistant locks.
n Tamper-resistance physical security: This includes features such as a sealed epoxy cover to protect the hardware device.
These security levels do not apply to software solutions, where hardware is used to run the
software solution.
The NSX Advanced Load Balancer uses the FIPS canister 2.0.20-vmw, which is compliant with
FIPS 140-2 Level 1 cryptography.
Supported Environments
FIPS is supported when:
n The SEs are deployed in a VMware vSphere environment, specifically with the following cloud
connectors:
n FIPS mode can be enabled only on deployments where no Service Engines are present.
n FIPS mode is enabled on the entire system, either on the Controller or on all nodes in case of a
cluster. FIPS is also enabled on all the SEs.
n There is no option to enable FIPS selectively for specific components, such as only the Controller, only the SEs, or specific SE Groups.
n Once the NSX Advanced Load Balancer system is in FIPS mode, you cannot disable the FIPS
mode.
1 Ensure that the Controller does not have any SEs deployed. It is recommended to disable all
virtual services and delete any existing SEs.
3 Upload the controller.pkg file (that is, the upgrade package) for the same Controller base version to the Controller node; in the case of a cluster, upload it to the leader node. For instance, if the Controller being used is on version 20.1.5, upload the 20.1.5 controller.pkg.
For step-by-step instructions on how to upload, see Flexible Upgrades for NSX Advanced Load Balancer.
n Adding a new Controller node to a Cluster: A Controller cluster requires all the nodes to be
FIPS enabled. If a Controller node needs to be replaced with a new Controller node, ensure
that the new node has FIPS enabled, before adding it to the Controller cluster.
n Upgrading a Deployment with FIPS Mode Enabled: Upgrade and Patch Upgrade in the FIPS
mode follow the same process as the non-FIPS deployments. No special considerations are
required for FIPS deployments.
n Disabling FIPS Mode: Once enabled, disabling of FIPS compliance mode is not supported.
n TLS v1.3 and 0-RTT (the enable_early_data option under the SSL Profile).
n The set of elliptic curves (EC) that are not approved by the VMware OpenSSL FIPS Object Module.
n Async SSL (a feature under the SE Group that works in tandem with the HSM configuration; it is not relevant when HSM is not allowed).
n L7 Sideband
CIS employs a closed crowdsourcing model to identify and refine effective security measures,
where individual recommendations are shared with the community for evaluation through a
consensus decision-making process. At a national and international level, CIS plays an essential
role in forming security policies and decisions by maintaining CIS Controls and CIS Benchmarks
and hosting the Multi-State Information Sharing and Analysis Center (MS-ISAC).
CIS Controls
CIS Controls and CIS Benchmarks provide global standards for internet security. The CIS Benchmark is categorized into Controls, and each Control is a collection of standard security tests. The CIS Controls include the popular 20 security controls, which map to many compliance standards. The CIS Controls advocate a defense-in-depth model to prevent and detect malware. For instance, Control 1.1 covers Filesystem Configurations, a collection of tests like 1.1.1 - Disable unused filesystems, which in turn comprises sub-tests such as 1.1.1.1 - Ensure mounting of cramfs filesystems is disabled, 1.1.1.2 - Ensure mounting of freevxfs filesystems is disabled, and others.
For more information on the relevant Controls and tests for the Distribution Independent Linux Benchmark, see the CIS Ubuntu Linux LTS Benchmark, available for download at https://learn.cisecurity.org/benchmarks.
Level 1 tests are part of the CIS 1.0 profile. As per CIS, the Level 1 - Server profile tests are practical and prudent, are intended to provide a clear security benefit, and should not inhibit the utility of the technology beyond acceptable means.
Level 2 tests are part of the CIS 2.0 profile. This profile is an extension of the Level 1 - Server
profile and includes both Level 1 and Level 2 tests. As per CIS, the tests are intended for
environments or use cases where security is paramount for a deep defense mechanism. These
tests may negatively inhibit the utility or performance of the technology.
Note The Benchmark declares a Control as failed, even if one test within the Control fails.
Configuring CIS mode enables iptables, which cover all the 3.6.X set of Controls.
This configuration applies only to Controllers and Service Engines created after the command is run. If CIS mode needs to be enabled for existing SEs, use one of the following approaches:
n No downtime: Scale out all Service Engines so that the services fall onto the newly spun SEs.
CIS mode will be enabled on the newly created SEs. Scale in to fall back to the former setup,
but with CIS mode SEs.
n With downtime: Reboot the SEs. When the SEs come back online, the CIS mode will be
enabled.
Note The list below only indicates the Benchmark denomination. For more information on the
Benchmarks tests, see CIS Benchmarks Landing Page.
n UDF File System – 1.1.1.7: Requires the UDF kernel not to load, leading to Service Engine
boot-up issues and failure to connect to the Controller.
n Separate Partitions – 1.1.2 to 1.1.17: Requires a separate partition for /tmp, /var, /var/log, /var/log/audit, and /home. This conflicts with the two separate logical partitions designed to allow NSX Advanced Load Balancer version rollback.
n File System Integrity Checking – 1.3.2: Requires installation of the aide tool, which is CPU intensive and leads to prolonged runs.
n 1.4.1 to 1.4.4: Requires a password-based grub bootloader menu that will interfere with the
NSX Advanced Load Balancer single-click upgrade functionality.
n 3.4.3: Requires adding default deny in /etc/hosts.deny, which would impact Service Engine
connectivity with the NSX Advanced Load Balancer Controller.
n 5.4.1.1 to 5.4.1.4: Requires enforcing password policy at the Service Engine level.
NSX Advanced Load Balancer supports only admin users. The Controller manages the password
policy, and when the admin user password is changed, it synchronizes this password across the
fleet of SEs. So, no password enforcement is required at the SE level.
Additional Information
For more information on executing Benchmarks using the Inspec tool, see Executing Benchmarks
using Inspec.
The following is a list of common denial of service (DoS) attacks and distributed DoS (DDoS) attacks mitigated by NSX Advanced Load Balancer.
Layer 3

n Unknown protocol
Attack: Packets with an unrecognized IP protocol.
Mitigation: Packets are dropped at the dispatcher layer.

Layer 4

n SYN flood
Attack: TCP SYNs are sent without acknowledging the SYN-ACKs, so the victim's TCP table grows rapidly.
Mitigation: If the TCP table is being filled with half-open connections, such as uncompleted TCP 3-way handshakes, the SE begins using SYN cookies.

n X-mas tree
Attack: TCP packets with all the flags set to various values to overwhelm the victim's TCP stack.
Mitigation: Packets are dropped in the protocol stack of the SE.

n Bad RST flood
Attack: TCP RST packets are sent with bad sequence numbers.
Mitigation: Packets are dropped in the protocol stack of the SE if the packet sequence numbers are outside the TCP window.

n Bad sequence numbers
Attack: TCP packets with bad sequence numbers.
Mitigation: Packets with sequence numbers outside the TCP window are dropped in the protocol stack of the SE.

n Zero/small window
Attack: The attacker advertises a zero or very small window (<100) after the TCP 3-way handshake.
Mitigation: If the first TCP packet from the client after a SYN is received with a zero or small window, the SE drops the packet and sends a RST.

Layer 7 (HTTP)

n Rate limiting CPS per IP
Attack: Connection flood.
Mitigation: The rate limits configured in the application profile are applied (Application Profile > HTTP > DDoS > Rate Limit HTTP and TCP Settings).

n Size limit for header and request
Attack: Resource consumption via long request time.
Mitigation: The header-size limits configured in the application profile are used (Application Profile > HTTP > DDoS > HTTP Limit Settings > HTTP Size Settings).

n Rate limiting RPS per client IP
Attack: Request flood.
Mitigation: The limit configured in the application profile is used (Application Profile > HTTP > DDoS > Rate Limit HTTP and TCP Settings).

n Rate limiting RPS per URL
Attack: Request flood.
Mitigation: The limit configured in the application profile is used (Application Profile > HTTP > DDoS > Rate Limit HTTP and TCP Settings).

Layer 7 (DNS)

n DNS Amplification Egress
Attack: The DNS virtual service is targeted with very short queries that solicit very large responses, spanning multiple UDP packets. The DNS virtual services can thus be made to participate in a reflection attack: the attacker spoofs the DNS query's source IP and source port to be that of a well-known service port on a victim server.
Mitigation: Any requests coming from a defined range of source ports (well-known ports) are denied. The range of ports to be denied is configured in the Security Policy. For details, see the Configure Security Policy for DNS Amplification Egress DDoS Protection section.

n DNS Reflection Ingress
Attack: DNS queries are sent with the spoofed IP address of the victim, swamping the victim with unsolicited traffic via the DNS server responses.
Mitigation: Unwanted packets are dropped early, at the dispatcher.
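The "bad sequence numbers" and "bad RST flood" mitigations both come down to one check: a segment whose sequence number falls outside the current receive window is dropped. The following Python sketch illustrates that check in simplified form; it ignores segment length and the other clauses of the full RFC 793 acceptance test.

```python
TCP_SEQ_MOD = 2 ** 32   # TCP sequence numbers wrap modulo 2^32

def seq_in_window(seq, rcv_nxt, rcv_wnd):
    """Return True when `seq` lies in [rcv_nxt, rcv_nxt + rcv_wnd),
    computed modulo 2^32; segments failing this test are dropped."""
    return ((seq - rcv_nxt) % TCP_SEQ_MOD) < rcv_wnd

# Receive window covering sequence numbers 1000 .. 1000 + 65534:
in_win = seq_in_window(1000, 1000, 65535)    # first in-window sequence
late = seq_in_window(66535, 1000, 65535)     # just past the window: dropped
stale = seq_in_window(999, 1000, 65535)      # behind the window: dropped
```

The modulo arithmetic handles sequence-number wraparound, which is why a stale number just behind the window lands near 2^32 and fails the comparison.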
DDoS Insights
The DDoS section on the right of the default security page breaks down distributed denial of
service data for the virtual service into the most relevant layer 4 and layer 7 attack data.
n L4 Attacks: The number of network attacks per second, such as IP fragmentation attacks
or TCP SYN flood. For the example shown here, each unacknowledged SYN is counted as
an attack. (This is the classic signature of the TCP SYN flood attack, a large volume of SYN
requests that are not followed by the expected ACKs to complete session setup.)
n L7 Attacks: The number of application attacks per second, such as HTTP SlowLoris attacks
or request floods. For the example shown here, every request that exceeded the configured
request throttle limit is counted as an attack. (See the application profile’s DDoS tab for
configuring custom layer seven attack limits.)
n Blocked Connections: If an attack was blocked, this is the number of connection attempts
blocked.
Rate Limiters
Rate limiters are used to control the rate (count/period) of requests or connections sent or received from a network. For instance, if a virtual service is configured to allow 1000 connections/second and the number of connections you make exceeds that limit, a rate limiting action is triggered. You can configure this rate limiting action. Rate limiting allows a better flow of data and increases security by mitigating attacks such as DDoS.
n Count : The rate at which tokens are generated. A token is consumed every time a connection or request lands on the virtual service. If no token is available, the rate limiting action is triggered.
n Burst size : The maximum number of tokens that can be held by the virtual service at any given time.
n Period : The time period over which rate limiting is performed. In the above example, it is 1000 connections per second. You can configure the period to a value other than one second.
n Report Only
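The count/period/burst model above is essentially a token bucket. The following is a minimal illustrative Python sketch of that model, not NSX Advanced Load Balancer code; the clock parameter is injected only to keep the example deterministic.

```python
class TokenBucket:
    """Illustrative token bucket matching the Count/Period/Burst model:
    `count` tokens are generated per `period` seconds, and the bucket
    holds at most max(count, burst) tokens at any given time."""

    def __init__(self, count, period=1.0, burst=0, clock=lambda: 0.0):
        self.rate = count / period           # tokens generated per second
        self.capacity = max(count, burst)    # upper bound on stored tokens
        self.tokens = float(self.capacity)   # bucket starts full
        self.clock = clock                   # injected for determinism
        self.last = clock()

    def allow(self):
        """Consume one token per connection/request; False means the
        configured rate limiting action fires."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# 1000 connections/second with a frozen clock: the first 1000 connections
# consume all tokens and the 1001st triggers the rate limiting action.
t = [0.0]
bucket = TokenBucket(count=1000, period=1.0, clock=lambda: t[0])
allowed = [bucket.allow() for _ in range(1001)]

# Half a period later, half of the tokens have been replenished.
t[0] = 0.5
refilled = sum(bucket.allow() for _ in range(600))
```

A larger burst size lets the bucket absorb short spikes above the steady rate without triggering the action.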
The following is the CLI for virtual service connection rate limiter:
You can select the Performance Limits check box in the Advanced tab of the Applications > Virtual Service window.
n Default Action
Note For this type of rate limiter, the default period is configured to one second.
For instance, assume that you want to rate limit users in the IP subnet 172.100.200.0/24 to 1000 connections per second. The following is the CLI to execute the above request:
You can update this value in the IP Address field in Policies tab of Applications > Virtual Service
window.
You can configure rate limiters to control the policy evaluation based on the different parameters.
The rate limit objects are the same as for the other rate limiters mentioned above:
n Count
n Period
n Burst
You can configure rate profiles under the action attributes of the HTTP policy. Rate limiters are
configured for the following:
n per_client_ip
n per_uri_path
The following are the steps to configure an HTTP security rate limiter:
n Log in to the NSX Advanced Load Balancer CLI and use the configure httppolicyset <policy name> command to start configuring the HTTP security policy for rate limiting.
n Configure the rate profiles under the action attributes of the HTTP policy as shown below. In the example below, the rate profile is per_uri_path and the rate limiter count is 10.
n Configure the required action once the rate limit is reached as per the configured policies mentioned above. You can set the action type to rl_action_local_rsp with the response code http_local_response_status_code_403.
n The final configuration output shown below exhibits the action to send response code 403 if the incoming requests cross the limit of 10 requests per 10 seconds for the associated HTTP security policy and the virtual service.
+--------------------------+----------------------------------------------------+
| Field                    | Value                                              |
+--------------------------+----------------------------------------------------+
| uuid                     | httppolicyset-91f02717-7dc6-42ff-9b00-1f411d3723df |
| name                     | example_rl_policy                                  |
| http_security_policy     |                                                    |
| rules[1]                 |                                                    |
| name                     | rl_rule_1                                          |
| index                    | 1                                                  |
| enable                   | True                                               |
| match                    |                                                    |
| client_ip                |                                                    |
| match_criteria           | IS_NOT_IN                                          |
| prefixes[1]              | 192.168.100.0/24                                   |
| action                   |                                                    |
| action                   | HTTP_SECURITY_ACTION_RATE_LIMIT                    |
| rate_profile             |                                                    |
| rate_limiter             |                                                    |
| count                    | 10                                                 |
| period                   | 10 sec                                             |
| burst_sz                 | 0                                                  |
| action                   |                                                    |
| type                     | RL_ACTION_LOCAL_RSP                                |
| status_code              | HTTP_LOCAL_RESPONSE_STATUS_CODE_403                |
| per_client_ip            | True                                               |
| per_uri_path             | True                                               |
| is_internal_policy       | False                                              |
| tenant_ref               | admin                                              |
+--------------------------+----------------------------------------------------+
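The steps above can be sketched as the following CLI session. The object and field names are taken from the configuration output above, but the sub-mode prompts and command grouping are assumptions and can vary by release; verify them with the CLI help for httppolicyset:

```shell
[admin:cntrl]: > configure httppolicyset example_rl_policy
[admin:cntrl]: httppolicyset> http_security_policy rules name rl_rule_1 index 1
[admin:cntrl]: httppolicyset:http_security_policy:rules> match client_ip match_criteria is_not_in prefixes 192.168.100.0/24
[admin:cntrl]: httppolicyset:http_security_policy:rules> action action http_security_action_rate_limit
[admin:cntrl]: httppolicyset:http_security_policy:rules:action> rate_profile per_uri_path rate_limiter count 10 period 10 burst_sz 0
[admin:cntrl]: httppolicyset:http_security_policy:rules:action:rate_profile> action type rl_action_local_rsp status_code http_local_response_status_code_403
[admin:cntrl]: httppolicyset:http_security_policy:rules:action:rate_profile> save
[admin:cntrl]: httppolicyset:http_security_policy:rules:action> save
[admin:cntrl]: httppolicyset:http_security_policy:rules> save
[admin:cntrl]: httppolicyset> save
```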
You can select the Enable check box on the DNS Policy tab in the Policies tab of the Applications > Virtual Services
window.
n Report Only
You can edit the Rate Limit HTTP and TCP Settings section on the DDoS tab of the Application Profile
window.
n consume – This is the number that this API consumes in the rate limiter bucket. The default
value is 1. This function indicates whether the request is above the threshold or not.
n Log in to the NSX Advanced Load Balancer CLI and use the configure vsdatascriptset <policy
name> command to configure rate limiters. Provide the policy name and assign the desired
values of the rate limiters (count, period, and burst size) as shown below.
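A sketch of such a session, assuming the rate limiter fields hang off a rate_limiters sub-object of vsdatascriptset; the object path, limiter name, and policy name are illustrative, so verify them against the CLI help in your release:

```shell
[admin:cntrl]: > configure vsdatascriptset example_datascript
[admin:cntrl]: vsdatascriptset> rate_limiters name rl1 count 10 period 10 burst_sz 0
[admin:cntrl]: vsdatascriptset:rate_limiters> save
[admin:cntrl]: vsdatascriptset> save
```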
The NSX Advanced Load Balancer updates these metrics periodically as defined in the Metric
Update Frequency in the virtual service configuration. If a DDoS event is detected by an SE, the
SE immediately sends information about the attack to the Controller, instead of locally storing the
data until the next polling interval.
The following table lists the increments in which metrics data can be displayed in the web
interface. The data granularity per increment and the retention period also are listed.
Note Real-time metrics are enabled by default for the first 30 minutes of a virtual service’s life.
After these initial 30 minutes, real-time metrics are disabled to conserve resources. Real-time
metrics can be re-enabled manually at any time.
A DNS virtual service is targeted by sending concise queries that solicit expansive responses
(spanning multiple UDP packets). DNS virtual services can thus participate in a reflection attack:
the attacker spoofs the DNS query's source IP and source port to be those of a well-known
service port on a victim server.
Any requests from a defined range of source ports (well-known ports) will be denied. The range of
ports to be denied is configured in the Security Policy.
Log in to the NSX Advanced Load Balancer shell and create a new security policy as shown below:
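The creation step can be sketched as the following CLI session. The object and field names are taken from the output below, while the sub-mode prompts and command grouping are assumptions that can vary by release:

```shell
[admin:cntrl]: > configure securitypolicy test-secpolicy1
[admin:cntrl]: securitypolicy> oper_mode mitigation
[admin:cntrl]: securitypolicy> dns_attacks attacks attack_vector dns_amplification_egress mitigation_action deny
[admin:cntrl]: securitypolicy> dns_amplification_denyports match_criteria is_in ranges start 1 end 52
[admin:cntrl]: securitypolicy> dns_amplification_denyports ranges start 54 end 2048
[admin:cntrl]: securitypolicy> save
```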
The new security policy test-secpolicy1 with DNS Amplification Egress DDoS protection is
displayed as follows:
+-------------------------------+-----------------------------------------------------+
| uuid                          | securitypolicy-9f5149f2-ab88-4ea3-9944-cc6ed6aea77a |
| name                          | test-secpolicy1                                     |
| oper_mode                     | MITIGATION                                          |
| dns_attacks                   |                                                     |
| attacks[1]                    |                                                     |
| attack_vector                 | DNS_AMPLIFICATION_EGRESS                            |
| mitigation_action             |                                                     |
| deny                          | True                                                |
| enabled                       | True                                                |
| max_mitigation_age            | 60 min                                              |
| network_security_policy_index | 0                                                   |
| dns_policy_index              | 0                                                   |
| dns_amplification_denyports   |                                                     |
| match_criteria                | IS_IN                                               |
| ranges[1]                     |                                                     |
| start                         | 1                                                   |
| end                           | 52                                                  |
| ranges[2]                     |                                                     |
| start                         | 54                                                  |
| end                           | 2048                                                |
| tenant_ref                    | admin                                               |
+-------------------------------+-----------------------------------------------------+
Note A security policy configured for DNS Amplification Egress mitigation cannot be attached
to a non-DNS virtual service, for instance, an HTTP virtual service. When attached to a non-DNS
virtual service, an error will be displayed, and the security policy will not be attached to the virtual
service.
For instance, consider a virtual service dns-vs-1. The steps to attach the network policy to the
virtual service are shown below:
shell>
configure virtualservice dns-vs-1
security_policy_ref test-secpolicy1
save
exit
Now the virtual service dns-vs-1 is armed with the DDoS protection security policy. Any such
attacks will be detected and mitigated by the SE. The security manager creates network security
rules and DNS rules for the SE to use to block the attacker's IP address, source port, and DNS
record request types. On significant attacks, the metrics manager raises DDoS events, which are
displayed on the Controller UI.
For details on various Workspace ONE UEM application modules, see Workspace ONE UEM.
+-----------------------------------+--------+-------------------------------------------------------+------+------------------------------------+--------------------------------+-------------+
| Component                         | Type   | Port                                                  | VIP  | LB Algorithm                       | Persistence                    | Server Port |
+-----------------------------------+--------+-------------------------------------------------------+------+------------------------------------+--------------------------------+-------------+
| Workspace ONE UEM Admin Console   | L7 SSL | 443                                                   | VIP1 | Least connections                  | HTTP Cookie / 60 minutes       | 443         |
| Workspace ONE UEM Admin API       | L7 SSL | 443                                                   | VIP2 | Least connections                  | Source IP                      | 443         |
| Workspace ONE UEM Device Services | L7 SSL | 443                                                   | VIP3 | Least connections                  | Source IP Address / 20 minutes | 443         |
| AWCM                              | L7 SSL | 443/2001                                              | VIP4 | Consistent Hash with custom string | DataScript for persistence     | 2001        |
| Tunnel Proxy                      | L4     | 8443 (TCP and UDP), 2020 (TCP); fast-path recommended | VIP5 | Least connections                  | Source IP / 30 minutes         | 8443/2020   |
| Tunnel Per-App VPN                | L4     | 443 (TCP and UDP); fast-path recommended              | VIP6 | Least connections                  | Source IP                      | 443         |
+-----------------------------------+--------+-------------------------------------------------------+------+------------------------------------+--------------------------------+-------------+
Note
n All components run on different servers and a separate Load balancer VIP is configured for
each component.
n The timeout value must be less than the policy retrieval interval for some services (for
instance, Secure Email Gateway).
n Persistence is not required when all the users come through a NAT, as they have the
same source IP address.
4 Specify the successful and failure checks details and Send Interval and Receive Timeout
details.
5 The Is Federated field describes the object's replication scope. If this field is unchecked, the
object is visible within the Controller cluster and its associated Service Engines. If it is checked,
the object is replicated across the federation.
7 Set the Authentication Type to either NTLM or Basic from the drop-down list.
11 Specify the server maintenance mode and Role-Based Access Control (RBAC) details.
12 Click Save and proceed to the next step of creating a persistence profile.
Creating Pool
The following are the steps to create a pool:
2 Select the cloud from the Select Cloud sub-screen and click Next.
4 Select the persistence profile created in the previous step from the Persistence Profile drop-
down menu.
5 To bind the monitor, click Add Active Monitor and select the custom HTTPS monitor that was
created in the previous section.
6 For SSL offload, the Enable SSL option at the pool level is not required, as traffic goes to the
back-end servers in plain text. If the back-end server listens only on SSL, the traffic must
be sent in encrypted form, so SSL must be enabled at the pool level. Select the Enable SSL
check box, select the appropriate SSL profile, and click Next.
8 Navigate through the Step 3: Advanced tab and Step 4: Review tab by clicking Next, and then click
Save.
3 Specify the name and description of the application profile, and retain the default values in the
HTTP Settings section.
4 Select the X-Forwarded-For check box.
6 Click Save to proceed further to install the SSL certificate. If not required, some of these
options can be disabled.
7 Some services like Device Service and Admin Console might require HTTP Strict Transport
Security. Select the HTTP Strict Transport Security (HSTS) check box if required.
2 Select Advanced Setup from the Create Virtual Service drop-down menu. Select a cloud from
the Select Cloud drop-down menu.
a Application Profile: Select the application profile created in the previous section.
3 For SSL profile, use the default SSL profile, or create a new one as per the requirement.
4 For SSL certificate, install the certificate and bind it to the virtual service as shown above.
5 Click Next and retain the default settings for the remaining fields.
Creating Pool
n Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM
Admin Console.
n Set the value of the field Preferred persistence method to Source IP persistence with timeout
value set to 20 minutes.
Creating a Pool
n Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM
Admin Console.
n Set the load balancing algorithm to Consistent Hash.
Creating Pool
Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM
Admin Console.
For instance, since AWCM needs persistence based on parameter awcmsessionid in either the URI
or header, Consistent Hash based on the custom string can be used. The custom string is defined
in the following steps using DataScript.
For AWCM, it is required to keep the front-end connection open for 2 minutes. Navigate to the DDoS
tab and change the HTTP Keep-Alive Timeout to 120000 ms (120 seconds).
Creating a DataScript
The following are the steps to create a DataScript and associate it with the AWCM pool:
3 Select the AWCM pool from the Pools drop-down menu and specify the other details.
4 Click the Events tab and click Add under the Events sub-section.
5 Add the following DataScript to bind the AWCM Pool to the Datascript.
default_pool = "AWCM-Pool"
query = avi.http.get_query("awcmsessionid")
header = avi.http.get_header("awcmsessionid")
cookie = avi.http.get_cookie("awcmsessionid")
if query ~= nil and query ~= "true" then
   avi.vs.log('QUERY HASH: ' .. query)
   avi.pool.select("AWCM-Pool")
   avi.pool.chash(query)
elseif header ~= nil then
   avi.vs.log('HEADER HASH: ' .. header)
   avi.pool.select("AWCM-Pool")
   avi.pool.chash(header)
elseif cookie ~= nil then
   avi.vs.log('COOKIE HASH: ' .. cookie)
   avi.pool.select("AWCM-Pool")
   avi.pool.chash(cookie)
else
   avi.vs.log('NIL HASH')
   avi.pool.select("AWCM-Pool")
end
6 Select the AWCM DataScript created in the previous section from the Script To Execute
drop-down list.
d Set the Server Response Data to 407 and Response Code to 4XX.
e Click Save.
c Click Save.
For VMware Tunnel (Proxy), the Client IP Address persistence type is recommended with the
Persistence Timeout value set to 30 minutes.
Click Save and proceed to the next step of creating a pool for servers.
Creating Pool
Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM
Admin Console.
b Persistence Profile: The Tunnel Persistence Profile created in the previous step.
d Add Health Monitor: Tunnel HTTPS monitor created in the previous section.
2 Click Next and navigate to Step 3: Advanced Tab. Select the Disable Port Translation check
box.
3 Click Save.
1 Navigate to Applications > Virtual Services and select the Advanced Setup.
2 Select the System-L4-Application from Application Profile drop-down list and configure the
virtual service with the following options:
Client IP Address persistence is recommended with Persistence Timeout value set to 30 minutes.
Creating Pool
Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM
Admin Console.
3 Click Add Active Monitor and select the TCP Monitor as Tunnel-TCP.
1 Navigate to Applications > Virtual Services and select the Advanced Setup.
2 Select System-L4-Application from the Application Profile drop-down list and configure the
virtual service with the following options:
Note Enabling TSO/GRO is non-disruptive and does not require an SE restart.
The following are the steps to enable TSO and GRO on SE:
1 Log in to the CLI and use the configure serviceenginegroup command to enable the TSO and
GRO features.
2 To verify if the features are correctly turned ON in the SE, you can check the following statistics
on the Controller CLI:
a GRO statistics are part of interface statistics. For GRO, check the statistics for the following
parameters:
1 gro_mbufs_coalesced
b TSO statistics are part of mbuf statistics. For TSO, check the statistics for the following
parameters:
1 num_tso_bytes
2 num_tso_chain
3 Execute the show serviceengine <SE IP address> interface command and filter
the output using the grep command as shown below:
Note The sample output mentioned above is for 1-queue (No RSS).
Note In the case of a port-channel interface, provide the relevant physical interface name as
the filter in the intfname option. For reference, refer to the output mentioned below for the
Ethernet 4 interface.
Note The statistics for a NIC are the sum of the statistics for each queue of the specific
interface.
If the features are enabled, the statistics in the output mentioned above will reflect non-zero
values for TSO parameters.
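Step 1 above can be sketched as the following CLI session. The flag names enable_gro and enable_tso are assumptions based on the feature names; verify them with the CLI help for serviceenginegroup in your release:

```shell
[admin:cntrl]: > configure serviceenginegroup Default-Group
[admin:cntrl]: serviceenginegroup> enable_gro
[admin:cntrl]: serviceenginegroup> enable_tso
[admin:cntrl]: serviceenginegroup> save
```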
| distribute_queues | False |
[admin:cntrl]: serviceenginegroup> distribute_queues
Overwriting the previously entered value for distribute_queues
[admin:cntrl]: serviceenginegroup> save
| distribute_queues | True |
When RSS is turned ON, all the NICs in the SE configure and use an optimum number of queue
pairs as calculated by the SE. The calculation of this optimum number is described in the section
on configurable dispatchers.
[admin:cntrl]: > show serviceengine 10.1.1.1 interface filter intfname bond1 | grep ifq
| ifq_stats[1] |
| ifq_stats[2] |
| ifq_stats[3] |
| ifq_stats[4] |
The value of counters for ipackets (input packets) and opackets (output packets) per interface
queue is a non-zero value as shown below:
[admin:cntrl]: > show serviceengine 10.1.1.1 interface filter intfname bond1 | grep pack
| ipackets | 40424864 |
| opackets | 42002516 |
| ipackets | 10108559 |
| opackets | 11017612 |
| ipackets | 10191660 |
| opackets | 10503881 |
| ipackets | 9873611 |
| opackets | 10272103 |
| ipackets | 10251034 |
| opackets | 10208920 |
Note The output includes statistics for each queue and one combined statistic for the NIC overall.
Configuration Samples
The following example exhibits the configuration on a bare-metal machine with 24 vCPUs,
two 10G NICs, one bonded interface of two 10G NICs, and distribute_queues enabled.
n Set the value of the num_dispatcher_cores parameter to 0 (the default value).
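The bullet above can be sketched as the following CLI fragment; num_dispatcher_cores is taken from the text, while the prompt format and SE group name are illustrative:

```shell
[admin:cntrl]: > configure serviceenginegroup Default-Group
[admin:cntrl]: serviceenginegroup> num_dispatcher_cores 0
[admin:cntrl]: serviceenginegroup> save
```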
After restarting the SE, though the configured value for dispatchers is set to 0, the number of
queues and the number of dispatchers is changed to 4, as shown in the following output:
n Zero (Reserved) — Auto (deduces optimal number of queues per dispatcher based on the NIC
and operating environment).
max_queues_per_vnic (INTEGER: 0,1,2,4,8,16) – Maximum number of queues per vNIC. Setting it
to '0' utilizes all queues, distributed across the dispatcher cores.
The show serviceengine [se] seagent command displays the number of queues per dispatcher and the
total number of queues per interface.
Note The hybrid_rss_mode property is protected by a check that requires RSS to be enabled
before toggling this property to True.
n The property also shows up as per SE. The following is the configuration command:
+---------------------------+------+
| num_dispatcher_cpu        | 4    |
| num_flow_cpu              | 4    |
| num_queues                | 4    |
| num_queues_per_dispatcher | 1    |
| hybrid_rss_mode           | True |
+---------------------------+------+
During the three-way handshake, both client and server advertise their respective MSS so that the
peers will not send TCP segments larger than the MSS. This is enabled by default.
The benefits of GRO are only seen if multiple packets for the same flow are received in a short
period. If the incoming packets belong to different flows, then the benefits of having GRO enabled
might not be seen.
Multi-Queue Support
The dispatcher on NSX Advanced Load Balancer is responsible for fetching the incoming packets
from a NIC, sending them to the appropriate core for proxy work and sending back the outgoing
packets to the NIC. A 40G NIC, or even a 10G NIC receiving traffic at a high packets-per-second
(PPS) rate, for instance, in the case of small UDP packets, might not be efficiently processed by a
single-core dispatcher.
This problem can be solved by distributing traffic from a single physical NIC across multiple
queues where each queue gets processed by a dispatcher on a different core. Receive Side
Scaling (RSS) enables the use of multiple queues on a single physical NIC.
On the NSX Advanced Load Balancer SE, the multi-queue feature is also enabled on the transmit
side, that is, different flows are pinned to different queues (packets belonging to the same flow stay
in the same queue) to distribute the packet processing among CPUs.
Note The multi-queue feature (RSS) is not supported along with IPv6 addresses. If RSS is
enabled, then the IPv6 address cannot be configured for NSX Advanced Load Balancer SE
interfaces. Similarly, if the IPv6 address is already configured on NSX Advanced Load Balancer
SE interfaces, the multi-queue feature (RSS) cannot be enabled on those interfaces.
Also, queues per NIC can be set for each dispatcher core for better performance. NSX Advanced
Load Balancer SE tries to detect the best settings automatically for each environment.
The hybrid mode is brought in as a configurable property and aims at achieving higher
performance on low-core SEs, especially one- and two-core SEs on vCenter and NSX-T clouds.
The following are the guidelines to follow while planning capacity for SEs:
General Guidelines
The following are the general guidelines:
n CPU and memory reservations are recommended for NSX Advanced Load Balancer SE virtual
machines for consistent and deterministic performance.
n Use compact mode in NSX Advanced Load Balancer SE Group settings for virtual service
placements on SEs. This ensures NSX Advanced Load Balancer uses the minimum number of
SEs required for virtual service placement. It helps in saving the cost in the case of public cloud
use cases.
Dispatcher Configurations
The following are the dispatcher configurations:
n The dedicated_dispatcher option is set to False by default at the SE group level. This configuration is
optimal for SEs with smaller compute capacities, such as one and two cores.
n NSX Advanced Load Balancer recommends dedicated_dispatcher set to True for SE size
greater than two cores.
n By default, GRO is disabled and TSO is enabled. This configuration works
well for most workloads.
n GRO can be enabled whenever there are enough dispatchers (>= 4) and their utilization is low.
n You can enable RSS for better performance. RSS can realize better PPS with more dispatchers
and queues per NIC.
n The number of dispatchers can only be set in powers of two, that is, the number of
dispatchers can be one, two, four, eight, and so on.
n The default value of max_queues_per_vnic is one. Setting the value to zero automatically decides
the number of queues based on the dispatcher count configured. You can set this value as per
the requirements.
n If the number of queues available per NIC is less than the number of dispatchers, the number
of dispatchers is floored to the number of queues. It is recommended to have the number of
available queues greater than or equal to the number of dispatchers.
Datapath Isolation
You can enable SE datapath isolation for latency and jitter-sensitive applications. The feature
creates two independent CPU sets for datapath and control plane SE functions.
n High-PPS loads, such as high connections per second with small file GETs, need more
dispatchers to sustain the higher PPS.
n Workloads with high SSL transactions are proxy heavy and benefit from a high count of proxy
cores.
n Default settings are recommended for one and two core SEs.
The following examples explain the configuration recommendation for a six-core SE running on
the vCenter full access cloud.
Considering 18 to 20 packets for each TCP transaction for both front end and back end, this
requirement translates to nearly one million packets per second for new TCP connections. Given
the volume of packets, NSX Advanced Load Balancer SE should be configured with the following
configuration:
n Number of dispatchers: 2
For the above requirements, the dispatcher cores will not be busy as the packets per second will
not be very high, and SSL processing will consume proxy cores for ECC transactions
and throughput. RSS will not help in this use case. The following configuration is recommended
for this workload:
n Number of dispatchers: 1
To achieve the above requirements, NSX Advanced Load Balancer recommends dedicating one of
the SE cores for non-data-path tasks. It can be achieved with the following configuration:
To establish this, a Certificate Management Profile object is used. This object is created by
navigating to Templates > Security > Certificate Management. The Certificate Management
object provides a way for configuring a path to a certificate script, and a set of parameters that the
script needs (CSR, Common Name, and others) to integrate with a certificate management service
within the customer’s internal network. The script itself is left opaque by design to accommodate
the various certificate management services of different customers.
As a part of the SSL certificate configuration in the NSX Advanced Load Balancer, you can select
CSR, fill in the necessary fields for the certificate, and select the certificate management profile
to which this certificate is bound. The Controller then uses the CSR and the script to obtain the
certificate, and renews the certificate upon expiration. As part of the renewal process, a new
public-private key pair is generated and a certificate corresponding to this is obtained from the
certificate management service.
Without this automation, the process of sending the CSR to the external trust anchor and
installing the signed certificate and keys must be performed by the NSX Advanced Load
Balancer user.
Note Python scripts are supported for this feature. The automated CSR workflow for SafeNet
HSM is also supported.
1 Prepare a Python script that defines a certificate_request() method. The method must
accept the following inputs as a dictionary:
n CSR.
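A minimal sketch of such a control script, assuming the Controller calls certificate_request() with a dictionary of parameters (the CSR plus any custom parameters defined in the profile) and expects the signed certificate to be returned. The helper name sign_with_internal_ca and the ca_url parameter are hypothetical placeholders; a real script integrates with the customer's own certificate management service.

```python
def sign_with_internal_ca(csr, ca_url=None):
    # Placeholder for the customer-specific integration: a real script would
    # submit the CSR to the PKI service (for example, at a hypothetical
    # ca_url) and return the issued certificate in PEM form.
    raise NotImplementedError("integrate with your certificate service")

def certificate_request(csr_params):
    """Entry point invoked by the Controller; returns the signed certificate."""
    csr = csr_params["csr"]                      # PEM-encoded CSR from the Controller
    if "BEGIN CERTIFICATE REQUEST" not in csr:
        raise ValueError("expected a PEM-encoded CSR")
    # ca_url is a hypothetical custom parameter defined in the profile.
    return sign_with_internal_ca(csr, ca_url=csr_params.get("ca_url"))
```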
Note The specific parameter values to be passed to the script are specified within the certificate
management profile.
n If the parameter value is assigned within the profile, the value applies to all CSRs generated
using the profile.
n To dynamically assign a parameter’s value, indicate within the certificate management profile
that the parameter is dynamic. This leaves the parameter’s value unassigned. The dynamic
parameter’s value is assigned when an individual CSR is created using the profile. The
parameter value applies only to the created CSR.
1 Navigate to Templates > Security > Certificate Management and click Create.
3 Select the control script for certificate management profile from the drop-down list.
4 If the profile must pass some parameter values to the script, select the Enable Custom
Parameters checkbox, and specify the parameter names and values.
For parameters that are sensitive (for instance, passwords), select the Is Sensitive checkbox.
a Marking a parameter sensitive prevents its value from being displayed in the web interface
or being passed by the API. For parameters that are to be dynamically assigned during
CSR creation, select the Dynamic checkbox. This leaves the parameter unassigned within
the profile.
5 Click Save.
1 Navigate to Templates > Security > SSL/TLS Certificates, and click Create.
5 Select the certificate management profile configured in the previous section from the
Certificate Management Profile drop-down list.
6 Click Save.
The Controller generates a public-private key pair and CSR. It executes the script to request
the Trust Anchor-signed certificate from the PKI service, and saves the signed certificate in
persistent storage.
NSX Advanced Load Balancer allows you to tweak configuration parameters based on intended
use cases. Most of the commonly used parameters are available while using the UI. In addition, a
few advanced configuration parameters are available through the CLI and API alone.
n Changes in 20.1.3
The serviceengineproperties hierarchy is available at the Controller level, and the parameters are
applicable to all SEs across all clouds at the time of bootup.
n l7_conns_per_core
n ssl_sess_cache_per_vs
n l7_resvd_listen_conns_per_core
You can configure these parameters differently for each SE Group if required.
n buf_num
n buf_size
n level_normal
n level_aggressive
n window_size
n hash_size
You can configure these parameters differently for each Application Profile.
The serviceengineproperties hierarchy is available at the Controller level, and the parameters
apply to all SEs across all clouds. You can modify these parameters while SEs are running; the
changes apply to all SEs, including those already running.
n spdy_fw_proxy_parse_enable
n mcache_fetch_enabled
n mcache_store_in_enabled
n mcache_store_out_enabled
n upstream_connpool_enable
n upstream_connect_timeout
n upstream_send_timeout
n upstream_read_timeout
n downstream_send_timeout
n lbaction_num_requests_to_dispatch
n lbaction_rq_per_request_max_retries
n user_defined_metric_age
n enable_hsm_log
n ngx_free_connection_stack
n http_rum_console_log
n http_rum_min_content_length
You can configure these parameters differently for each SE Group, if required.
n min_length
n max_low_rtt
n min_high_rtt
n mobile_strs
You can configure these parameters differently for each Application Profile.
n se_auth_ldap_cache_size
n se_auth_ldap_conns_per_server
n se_auth_ldap_reconnect_timeout
n se_auth_ldap_bind_timeout
n se_auth_ldap_request_timeout
n se_auth_ldap_servers_failover_only
You can configure these parameters differently for each virtual service.
Changes in 20.1.3
This section contains a list of changes made to the CLI and API structure.
n se_ip_encap_ipc
n se_l3_encap_ipc
n dp_hb_frequency
n db_hb_timeout_count
n dp_aggressive_hb_frequency
n dp_aggressive_hb_timeout_count
You can configure these parameters differently for each SE Group if required.
Upgrade Considerations
The seproperties-based APIs for these configuration knobs only work for NSX Advanced Load
Balancer versions released before the above changes were introduced.
The properties will be automatically migrated to the relevant equivalent configuration as part of
the upgrades.
API Considerations
Ensure that any automation using these properties is modified to the new API schema. Note
that the previous API schema also remains available with an older "X-Avi-Version".
With HTTP/1.1, multiple TCP connections are opened for multiple parallel requests. With HTTP/2,
multiple requests can be broken into frames that can be interleaved; the remote end is capable of
reassembling them. Multiple connections can still be opened, but not as many as in HTTP/1.1.
n Server Push
It allows a server to send multiple resources in response to a client request without the client
explicitly sending a request for each of these resources. This reduces latency otherwise introduced
by waiting for each request to serve the resource. In HTTP/1.1, applications try to work around this
by inlining the resource. HTTP/2 enables the client to cache the resource, reuse it across pages,
and use multiplexing along with other resources.
n Flow control
HTTP/2 provides flow control at the application layer level by not allowing either end to
overwhelm the other side, by using window sizes.
n Header Compression
In HTTP/1.1, each header in the request is sent as text. In HTTP/2, the header compresses request
and response header metadata using the HPACK compression format, reducing the transmitted
header size.
n Stream Prioritization
Since the HTTP messages are sent as frames and frames from different streams can be
interleaved, HTTP/2 can specify priorities for streams, that is, all received frames can be prioritized
based on their stream priorities.
Use Cases
n Workaround techniques used for HTTP/1.1 to make browsers compatible with HTTP/2 are no
longer required.
n All browsers that use HTTP/2 can be deployed on NSX Advanced Load Balancer.
The NSX Advanced Load Balancer supports the HTTP-over-TLS (HTTP over SSL) method for all
HTTP/2 requests. This method uses TLS version 1.2 or later.
n All settings and options available for HTTP Setting are also available for HTTP/2-enabled
virtual services. HTTP features like HTTP policy, DataScripts, HTTP-timeout setting, etc. are
also supported for HTTP/2 requests.
n For front-end traffic, HTTP/2 is supported on both non-SSL and SSL enabled ports.
The following configuration changes support HTTP/2 on the front-end and the back-end:
n The enable_http2 flag is available at the pool and pool-group levels to indicate that all the
servers configured under the pool are HTTP/2.0 servers.
3 Select the check box for HTTP2 available under Settings > Service Ports.
To enable HTTP2 for the pools and the pool groups associated with the virtual service:
5 Navigate to Applications > Pools and click Create Pool, or use an existing one. Select the
Enable HTTP2 check box available under Servers as shown below:
6 Navigate to Applications > Pool Groups and click Create Pool Group, or use an existing one.
To enable HTTP2, select the Enable HTTP2 check box available under Pool Servers as
shown below:
Similarly, use the enable_http2 flag for the associated pool and pool groups for the virtual service.
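For instance, the flag can be set from the CLI as in the following sketch; the enable_http2 flag is taken from the text, while the pool name and prompt format are illustrative:

```shell
[admin:cntrl]: > configure pool http2-backend-pool
[admin:cntrl]: pool> enable_http2
[admin:cntrl]: pool> save
```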
n Configuring Pool
n Enabling HTTP/2 for Existing Pool Groups: To enable HTTP/2 for an existing pool group:
n Configure enable_http2 for all the pools using the steps mentioned above.
n Checking Status
The show virtualservice <virtual service name> command exhibits the flag value set as true,
as shown in the following code output:
| default_server_port | 80 |
| graceful_disable_timeout | 1 min |
| connection_ramp_duration | 10 min |
| max_concurrent_connections_per_server | 0 |
| servers[1] | |
| ip | 10.90.103.72 |
| port | 80 |
| hostname | 10.90.103.72 |
| enabled | True |
| ratio | 1 |
| verify_network | False |
| resolve_server_by_dns | False |
| static | False |
| rewrite_host_header | False |
| lb_algorithm | LB_ALGORITHM_LEAST_CONNECTIONS |
| lb_algorithm_hash | LB_ALGORITHM_CONSISTENT_HASH_SOURCE_IP_ADDRESS |
| inline_health_monitor | True |
| use_service_port | False |
| capacity_estimation | False |
| capacity_estimation_ttfb_thresh | 0 milliseconds |
| vrf_ref | global |
| fewest_tasks_feedback_delay | 10 sec |
| enabled | True |
| request_queue_enabled | False |
| request_queue_depth | 128 |
| host_check_enabled | False |
| sni_enabled | True |
| rewrite_host_header_to_sni | False |
| rewrite_host_header_to_server_name | False |
| lb_algorithm_core_nonaffinity | 2 |
| lookup_server_by_name | False |
| analytics_profile_ref | System-Analytics-Profile |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
| server_timeout | 0 milliseconds |
| delete_server_on_dns_refresh | True |
| enable_http2                          | True                                           | <-----
+---------------------------------------+------------------------------------------------+
Limitations
n Caching is not supported for HTTP/2 pool.
n HTTP/2 health monitor support is not available. If the back-end server only supports HTTP/2,
HTTP(S) health monitor will not work for this pool. Only TCP or PING health monitor must be
configured for this pool.
n If the back-end server is SSL-enabled and supports both HTTP/1 and HTTP/2, HTTPS health
monitor must be configured with its own attribute.
n The client must be aware that the non-SSL port can only support HTTP/2.
n The back-end pool assumes that servers listening on the configured port support only HTTP/2
and will not send an HTTP/1.1 to HTTP/2.0 upgrade.
n For the pool group, the HTTP version must be matched between the pool group and all the
associated pools.
n The HTTP/1.1 chunked transfer-encoding mechanism on one side (the front end) and the HTTP/2
chunked mechanism on the other side (the back end) are not supported together. That is, if
stream mode or partial buffer mode is required with chunking on one side and HTTP/2 chunking
on the other, this combination is not supported.
Upgrade
If HTTP/2 is enabled in the application profile for a virtual service listening on port 80 and 443,
after an upgrade, the HTTP/2 will be automatically enabled on the virtual service on port 443.
HTTP/2 will not be enabled on port 80 which is non-SSL enabled port.
n max_http2_header_field_size – This field controls the maximum size (in bytes) of the compressed
request header field. The limit applies equally to both the name and value of the request header. It
can be between 1 and 8192 bytes. The default value is 4096 bytes.
n http2_initial_window_size – This field controls the window size of the initial flow control in
HTTP/2 streams. The value for this field ranges from 64 to 32768 KB. The default value is
64KB.
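These two parameters can also be set programmatically by editing the application profile object. The following Python sketch only builds and validates the JSON fragment for the http_profile > http2_profile nesting shown in the CLI below; the helper name and the idea of sending it with a REST client are assumptions for illustration, not a documented API workflow.

```python
import json

# Hypothetical sketch: build the http2_profile fragment of an application
# profile body, enforcing the documented ranges (1-8192 bytes for the
# header field size, 64-32768 KB for the initial window size).
def build_http2_profile(header_field_size=4096, initial_window_kb=64):
    """Return an http_profile fragment with bounds-checked HTTP/2 values."""
    if not 1 <= header_field_size <= 8192:
        raise ValueError("max_http2_header_field_size must be 1..8192 bytes")
    if not 64 <= initial_window_kb <= 32768:
        raise ValueError("http2_initial_window_size must be 64..32768 KB")
    return {
        "http2_profile": {
            "max_http2_header_field_size": header_field_size,
            "http2_initial_window_size": initial_window_kb,
        }
    }

# Values matching the CLI example that follows (8192 bytes, 256 KB).
payload = {"http_profile": build_http2_profile(8192, 256)}
print(json.dumps(payload, indent=2))
```

In a real deployment, a fragment like this would be merged into the full application profile object before it is written back to the Controller.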
The following CLI snippets show configuration samples for the options mentioned above.
Log in to the NSX Advanced Load Balancer CLI, then use the applicationprofile mode and the
http2_profile option to change the values of these parameters.
+---------------------------------------------------+------------------------------+
| x_forwarded_proto_enabled | False |
| post_accept_timeout | 30000 milliseconds |
| client_header_timeout | 10000 milliseconds |
| client_body_timeout | 30000 milliseconds |
| keepalive_timeout | 30000 milliseconds |
| client_max_header_size | 12 kb |
| client_max_request_size | 48 kb |
| client_max_body_size | 0 kb |
| max_rps_unknown_uri | 0 |
| max_rps_cip | 0 |
| max_rps_uri | 0 |
| max_rps_cip_uri | 0 |
| ssl_client_certificate_mode | SSL_CLIENT_CERTIFICATE_NONE |
| websockets_enabled | True |
| max_rps_unknown_cip | 0 |
| max_bad_rps_cip | 0 |
| max_bad_rps_uri | 0 |
| max_bad_rps_cip_uri | 0 |
| keepalive_header | False |
| use_app_keepalive_timeout | False |
| allow_dots_in_header_name | False |
| disable_keepalive_posts_msie6 | True |
| enable_request_body_buffering | False |
| enable_fire_and_forget | False |
| max_response_headers_size | 48 kb |
| respond_with_100_continue | True |
| hsts_subdomains_enabled | True |
| enable_request_body_metrics | False |
| fwd_close_hdr_for_bound_connections | True |
| max_keepalive_requests | 100 |
| disable_sni_hostname_check | False |
| reset_conn_http_on_ssl_port | False |
| http_upstream_buffer_size | 0 kb |
| enable_chunk_merge | True |
| http2_profile | |
|   max_http2_control_frames_per_connection | 1000 |
|   max_http2_queued_frames_to_client_per_connection | 1000 |
|   max_http2_empty_data_frames_per_connection | 1000 |
|   max_http2_concurrent_streams_per_connection | 128 |
|   max_http2_requests_per_connection | 1000 |
|   max_http2_header_field_size | 4096 bytes |
|   http2_initial_window_size | 64 kb |
| preserve_client_ip | False |
| preserve_client_port | False |
| preserve_dest_ip_port | False |
| tenant_ref | admin |
+---------------------------------------------------+------------------------------+
[admin:controller]: applicationprofile> http_profile
[admin:controller]: applicationprofile:http_profile> http2_profile
[admin:controller]: applicationprofile:http_profile:http2_profile> max_http2_control_frames_per_connection 2000
Overwriting the previously entered value for max_http2_control_frames_per_connection
[admin:controller]: applicationprofile:http_profile:http2_profile> max_http2_queued_frames_to_client_per_connection 2000
Overwriting the previously entered value for max_http2_queued_frames_to_client_per_connection
[admin:controller]: applicationprofile:http_profile:http2_profile> max_http2_concurrent_streams_per_connection 256
Overwriting the previously entered value for max_http2_concurrent_streams_per_connection
[admin:controller]: applicationprofile:http_profile:http2_profile> max_http2_requests_per_connection 2500
Overwriting the previously entered value for max_http2_requests_per_connection
[admin:controller]: applicationprofile:http_profile:http2_profile> http2_initial_window_size 256
Overwriting the previously entered value for http2_initial_window_size
[admin:controller]: applicationprofile:http_profile:http2_profile> max_http2_header_field_size 8192
The updated configuration is displayed as follows:
+---------------------------------------------------+------------------------------+
| x_forwarded_proto_enabled | False |
| post_accept_timeout | 30000 milliseconds |
| client_header_timeout | 10000 milliseconds |
| client_body_timeout | 30000 milliseconds |
| keepalive_timeout | 30000 milliseconds |
| client_max_header_size | 12 kb |
| client_max_request_size | 48 kb |
| client_max_body_size | 0 kb |
| max_rps_unknown_uri | 0 |
| max_rps_cip | 0 |
| max_rps_uri | 0 |
| max_rps_cip_uri | 0 |
| ssl_client_certificate_mode | SSL_CLIENT_CERTIFICATE_NONE |
| websockets_enabled | True |
| max_rps_unknown_cip | 0 |
| max_bad_rps_cip | 0 |
| max_bad_rps_uri | 0 |
| max_bad_rps_cip_uri | 0 |
| keepalive_header | False |
| use_app_keepalive_timeout | False |
| allow_dots_in_header_name | False |
| disable_keepalive_posts_msie6 | True |
| enable_request_body_buffering | False |
| enable_fire_and_forget | False |
| max_response_headers_size | 48 kb |
| respond_with_100_continue | True |
| hsts_subdomains_enabled | True |
| enable_request_body_metrics | False |
| fwd_close_hdr_for_bound_connections | True |
| max_keepalive_requests | 100 |
| disable_sni_hostname_check | False |
| reset_conn_http_on_ssl_port | False |
| http_upstream_buffer_size | 0 kb |
| enable_chunk_merge | True |
| http2_profile | |
|   max_http2_control_frames_per_connection | 2000 |
|   max_http2_queued_frames_to_client_per_connection | 2000 |
|   max_http2_empty_data_frames_per_connection | 1000 |
|   max_http2_concurrent_streams_per_connection | 256 |
|   max_http2_requests_per_connection | 2500 |
|   max_http2_header_field_size | 8192 bytes |
|   http2_initial_window_size | 256 kb |
| preserve_client_ip | False |
| preserve_client_port | False |
| preserve_dest_ip_port | False |
| tenant_ref | admin |
+---------------------------------------------------+------------------------------+
[admin:controller]: >
Logs
Logs can be checked using the following methods:
The application logs of NSX Advanced Load Balancer display HTTP/2.0 as the HTTP version of
the request. Navigate to Applications > Virtual Services, select the desired virtual service, and
open the Logs tab to view the logs.
Errors related to HTTP/2 requests and responses can be checked under Significant logs.
The following counters are available for the HTTP/2 feature and can be used during
troubleshooting:
n Protocol errors
n Flow-control errors
n Compression errors
Use the show virtualservice <virtual service name> detail command to check the available
counters for HTTP/2.
| cache_bytes | 0 |
| http2_requests_handled | 2 |
| http2_response_2xx | 2 |
| http2_response_3xx | 0 |
| http2_response_4xx | 0 |
| http2_response_5xx | 0 |
| http2_protocol_errors | 0 |
| http2_flow_control_errors | 0 |
| http2_frame_size_errors | 0 |
| http2_compression_errors | 0 |
| http2_refused_stream_errors | 0 |
| http2_enhance_your_calm | 0 |
| http2_miscellaneous_errors | 0 |
Use the show pool <pool name> detail command to check HTTP/2-related errors in the pool status.
IP Group
IP groups are comma-separated lists of IP addresses that can be referenced by profiles, policies,
and logs. Each entry in the list can be an IPv4 address, an IP range, an IP mask, or a country code.
IP groups are reusable objects that can be referenced by any number of features attached to any
number of virtual services. IP groups are commonly used for service classification, whitelisting,
or blacklisting, and can be updated automatically through external API calls. When an IP group
is updated, the update is pushed from the Controller to every Service Engine that hosts a virtual
service using the IP group.
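As an illustration of such an external API update, the sketch below builds a request body that appends one address to an IP group object. The body shape, the group name, and the commented-out endpoint path are assumptions for illustration; consult the product API reference for the exact object schema.

```python
import json

# Hypothetical sketch: construct the body of an API call that adds a
# single IPv4 address to an existing IP group. The field names used here
# ("add", "addrs", "addr", "type") are illustrative assumptions.
def add_addr_patch(addr):
    """Build a PATCH-style body that appends one IPv4 address to an IP group."""
    return {"add": {"addrs": [{"addr": addr, "type": "V4"}]}}

body = add_addr_patch("198.51.100.7")
# In a real deployment this body would be sent with an authenticated
# session to the Controller, for example:
#   session.patch("https://<controller>/api/<ip-group-object>/<uuid>", json=body)
print(json.dumps(body))
```

Because the Controller pushes the updated group to every Service Engine hosting an affected virtual service, a single call like this takes effect everywhere the group is referenced.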
IP Group Usage
The following are a few examples of how IP groups are used within NSX Advanced Load Balancer.
In general, an IP group can be used in (or assigned to) any object that accepts an IP address. The
following objects in NSX Advanced Load Balancer can use IP groups.
n Policies
A network security or HTTP security policy can be configured to drop any clients coming from a
blacklist of IP addresses. Instead of maintaining a long list within the policy, the NSX Advanced
Load Balancer maintains the rule logic of that policy separately from the list of addresses kept in
the IP group. A user can be granted a role that allows them to update the list of IP addresses
without being able to change the policy itself.
n Logs
Logs classify clients by their IP address and match them against an included geographic country
location database. Override this database by using a custom IP group to create specific mappings
such as internal IP addresses. For example, LA_Office can contain 10.1.0.0/16, while NY_Office
contains 10.2.0.0/16. Logs show these clients as originating from these locations. Log searches
can also be performed on the group name, such as LA_Office.
n DataScript
Custom decisions can be made based on a client’s inclusion or exclusion in an IP group. For
examples and syntax, see the DataScript function avi.ipgroup.contains.
n Pool Servers
If multiple pools are needed with different configurations but with the same list of servers, the
server IP address can be placed into the IP group. Each subscribing pool automatically inherits
the change in membership if an IP is added or removed from the group.
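The inheritance behavior described above can be modeled in a few lines: pools hold a reference to a shared IP group rather than a copy of its members, so one change to the group is visible to every subscribing pool at once. This is a conceptual sketch, not product code; the class and pool names are invented for illustration.

```python
# Conceptual sketch: pools reference a shared IP group, so a membership
# change in the group is immediately visible to every subscribing pool.
class IPGroup:
    def __init__(self, addrs):
        self.addrs = set(addrs)

class Pool:
    def __init__(self, name, ip_group):
        self.name = name
        self.ip_group = ip_group  # a reference, not a copy

    def servers(self):
        # The pool's server list is always derived from the live group.
        return sorted(self.ip_group.addrs)

group = IPGroup(["10.1.1.10", "10.1.1.11"])
web_pool = Pool("web", group)
api_pool = Pool("api", group)

group.addrs.add("10.1.1.12")  # update the group once...
# ...and both pools see the new member without being touched.
assert web_pool.servers() == api_pool.servers()
```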
The table on the Templates > Groups > IP Group page contains the following information for each
IP group:
Country Codes or EPG - Any configured country codes.
Creating an IP Group
To create or edit an IP group:
n Type - Select one of the following from the Type drop-down menu:
n IP Address
n Country Code
n Import IP Address From File - Click IMPORT FILE to upload a comma-separated-value
(CSV) file that contains any combination of IP addresses, ranges, or masks.
n Country Code - Populates the IP address ranges from the geo database for the selected country.
n Select by Country Code — Select one or more countries, or type a country name into the
search field to filter the list. Countries cannot be combined with individual IP addresses
within an IP group. An IP group that contains countries cannot be used as a list of servers for
pool membership.