

VMware NSX Advanced Load Balancer

Configuration Guide

VMware NSX Advanced Load Balancer 21.1.4


VMware NSX Advanced Load Balancer Configuration Guide

You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

© Copyright 2022 VMware, Inc. All rights reserved. Copyright and trademark information.

Contents

About This Guide 10

1 Load Balancing 11
Cloud Connectors 12
Virtual Services 14
Wildcard VIP 18
Difference Between Virtual Service and Virtual IP 26
Create a Virtual Service 26
Disable a Virtual Service 36
Find Virtual Service UUID 37
Virtual Service Placement Settings 38
HTTP Policy Reuse 39
Block an IP Address from Access to a Virtual Service 40
Impact of Changes to Min-Max Scaleout Per Virtual Service 41
Enhanced Virtual Hosting 46
Custom Controller Utilization Alert Thresholds 51
Enabling Traffic on VIP 53
Wildcard SNI Matching for Virtual Hosting 54
Service Engine Group 56
Creating SE Group 57
Service Engine Datapath Isolation 65
Deactivating IPv6 Learning in Service Engines 70
Storing Inter-SE Distributed Object 71
Setting a Property for Newly Created Service Engine Group 72
Application Profile 73
Redirect HTTP to HTTPS 97
Overview of SSL/TLS Termination 101
TCP or UDP Profile 103
Data-Plane TCP Stack 103
TCP Fast Path 104
TCP Fast Path Configuration 105
TCP Proxy 106
UDP Fast Path 114
UDP Proxy 115
Internet Content Adaptation Protocol 116
ICAP Support for NSX Defender 118
Logs and Troubleshooting 120
ICAPs 123


Server Pools 124


Create Pool 141
Specifying Connection Properties at the Pool Level 159
HTTP Server Reselect 162
Deactivating Back-end Servers for Maintenance 166
Rewriting Host Header to Server Name 167
Allowed Characters for Object Names 169
Pool Groups 170
Disable Primary Pool When Down 181
Pool Group Sharing Across Virtual Services 182
Load Balancing Algorithms 187
Persistence 192
HTTP Cookie Persistence 197
App Cookie Persistence 201
HTTP Custom Header Persistence 201
Client IP Persistence 202
TLS Persistence 203
Compression 204
Configuring Compression 204
Custom Compression 205
Caching 206
Purge an Object from HTTP Cache 209
Use Cases 210
Load Balance API Gateways 210
Setting up Microsoft Exchange Server 2016 with NSX Advanced Load Balancer 212
Load Balance FTP 228
Load Balancing Passive FTP on NSX Advanced Load Balancer 229
Load Balancing Active FTP on NSX Advanced Load Balancer 235
Load Balancing RADIUS with Cisco ISE 238
Health Monitoring 247
Health Monitor Types 252
Health Monitor Troubleshooting 281
Parameters to Mark a Virtual Service or Pool Up 286
Determining the Server Status 288
Describing the Reasons for a Marked Down Server 289
Troubleshooting External Health Monitor 290
Flapping Servers Up or Down 292
Validating Server Health 293
Detecting Server Maintenance Mode with a Health Monitor 295
Enabling Authentication HTTP and HTTPs Health Monitor 299


2 SE Advanced Networking 303


VRFs 303
SE Data Plane Architecture and Packet Flow 303
Change VRF Context Setting for NSX Advanced Load Balancer SE's Management Network 309
VRF Support for Service Engine Deployment on Bare-Metal Servers 310
Routing 313
Static Route Support for VIP and SNAT IP Reachability 313
NAT Configuration on NSX Advanced Load Balancer Service Engine 318
Source NAT for Application Identification 323
SNAT Source Port Exhaustion 331
TCP Transparent Proxy Support 332
PROXY Protocol Support 332
Autoscale Service Engines 333
Enable a Virtual Service VIP on All Interfaces 340
BGP 341
BGP Learning and Advertisement Support 341
BGP Support for AS Path 346
BGP Support for Scaling Virtual Services 357
BGP/BFD Visibility 389
BGP Community Support on NSX Advanced Load Balancer 402
Multihop BGP 410
Configuring BGP Graceful Restart 415
Service Engine Failure Detection 416
Debugging BGP-based Service Engine Configurations 420
How to Access and Use Quagga Shell using NSX Advanced Load Balancer CLI 420
BGP Peer Monitoring for High Availability 423
IPv6 BGP Peering in NSX Advanced Load Balancer 424
BGP Support in NSX Advanced Load Balancer for OpenShift and Kubernetes 428
DSR and Default Gateway 433
Direct Server Return on NSX Advanced Load Balancer 433
Default Gateway (IP Routing on NSX Advanced Load Balancer SE) 441
Network Service Configuration 448
Configuring Networks for SEs and Virtual IPs 450
Configuring IP Address Pools 450
Enabling VLAN trunking on NSX Advanced Load Balancer Service Engine 452
Enabling VLAN Tagging on ESX 452
Sizing Service Engines 454
Per-App SE Mode 460
Connecting SEs to Controllers When Their Networks Are Isolated 462
SE Memory Consumption 463


X-Forwarded-For Header Insertion 470


Resetting PCAP TX Ring for Non-DPDK Deployment 471
Preserve Client IP 472

3 High Availability and Redundancy 477


Control Plane High Availability 478
Operation of NSX Advanced Load Balancer Controller High Availability 478
Converting a Single-Node Deployment to a Three-node Cluster 480
Data Plane High Availability 481
Elastic High Availability for NSX Advanced Load Balancer Service Engines 482
Legacy HA for NSX Advanced Load Balancer Service Engines 490
Virtual Service Scaling 495
Manual Scaling of Virtual Services 498
Automatic Scaling of Virtual Services 498
Throughput 501
Virtual Service Policies 502
Controller Interface and Route Management 511
Auto Scaling 519
Autoscaling in Public Clouds 519
Configuration and Metrics Collection on NSX Advanced Load Balancer for AWS Server Autoscaling 522
NSX Advanced Load Balancer Integration with AWS Auto Scaling Groups 525
NSX Advanced Load Balancer SE Behavior on Gateway Monitor Failure 529

4 DNS 531
NSX Advanced Load Balancer DNS Feature 535
DNS Load Balancing 536
Configuring DNS 538
DNS Policy 541
Matches 542
Rule Configuration through the NSX Advanced Load Balancer UI 547
Integration with External DNS Providers 555
DNS Configuration 556
Custom IPAM Profile on NSX Advanced Load Balancer 558
Support for Authoritative Domains, NXDOMAIN Responses, NS and SOA Records 568
Adding Custom A Records to an NSX Advanced Load Balancer DNS Virtual Service 571
Clickjacking Protection 573
DNS Queries Over TCP 575
Adding DNS Records Independent of Virtual Service State 575
DNS TXT and MX Record 577
Add Servers to Pool by DNS 580


5 Service Discovery using NSX Advanced Load Balancer as IPAM and DNS Provider 582
IPAM Configuration 584
DNS Configuration 586
Configuring the IPAM/DNS Profiles by Provider Type 587

6 IPAM Provider (OpenStack) 591

7 Security 592
Overview of NSX Advanced Load Balancer Security 592
SSL Certificates 594
Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer System 608
Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer 611
OCSP Stapling in NSX Advanced Load Balancer 617
Client SSL Certificate Validation 626
HTTP Application Profile 626
PKI Profile 630
Certificate Authority 631
Physical Security for SSL Keys 632
Layer 4 SSL Support 633
EC versus RSA Certificate Priority 636
Client-IP-based SSL Profiles 637
Configuration Using the NSX Advanced Load Balancer CLI 638
SSL/TLS Profile 641
SSL Profile Templates 642
SSL Client Cipher in Application Logs on NSX Advanced Load Balancer 650
Configure Stronger SSL Cipher 651
Server Name Indication 652
Configuration 653
True Client IP in L7 Security Features 655
Configure True Client 657
App Transport Security 660

8 Certificate Management Integration for CSR Automation 662


Configuring Certificate Management Integration 663
Create the Certificate Management Profile 664
Use the Certificate Management Profile to get Signed Certificates 664
Renewing Default (Self-Signed) Certificates on NSX Advanced Load Balancer 665
Customizing Notification of Certificate Expiration 667
Enabling Client Certificate Authentication on NSX Advanced Load Balancer 670
Configuring CRL 672


Exporting PFX Client Key to the Keychain of the Local Workstation 673
Creating PKI Application Profile 673
Configuring HTTP Profile 675
Configuring L4 SSL/ TLS Profile 676
Associating Application Profile with Virtual Service 677
Full-chain CRL Checking for Client Certificate Validation 677
Updating SSL Key and Certificate 678
Customizing Notification of Certificate Expiration 679

9 Hardware Security Module (HSM) 682


Thales Luna (formerly SafeNet Luna) HSM 682
Thales Luna Software Import 684
Enabling HSM Support in NSX Advanced Load Balancer 685
Configuring Dedicated Interfaces for HSM Communication on New NSX Advanced Load Balancer Service Engines 691
Configuring Dedicated Interfaces for HSM Communication on an Existing NSX Advanced Load Balancer Service Engine 693
Configuring Dedicated Interfaces for ASM Communication on a New NSX Advanced Load Balancer Service Engine 697
Configuring Dedicated Interfaces for ASM Communication on an Existing NSX Advanced Load Balancer Service Engine 699
Configuring Dedicated Interfaces for HSM and Sideband Communication on a New NSX Advanced Load Balancer Service Engine 701
Configuring Dedicated Interfaces for ASM Communication on an Existing NSX Advanced Load Balancer Service Engine 705
Configuring Dedicated Interfaces for HSM Communication on New NSX Advanced Load Balancer Controller 707

10 FIPS Compliance in NSX Advanced Load Balancer 710

11 CIS Compliance for NSX Advanced Load Balancer 715

12 DDoS Attack Mitigation 718


Rate Limiters 722
Static Rate Limiter 723
Dynamic Rate Limiter 727
DataScript Rate Limiter 728
Configure Security Policy for DNS Amplification Egress DDoS Protection 730

13 Load Balancing Workspace ONE UEM Components 733


Load Balancing Workspace ONE UEM Admin Console 735
Load Balancing Workspace ONE UEM Admin API 737
Load Balancing Workspace ONE UEM Device Services 738
Load Balancing AirWatch Cloud Messaging 739


Load Balancing VMware Tunnel (Tunnel Proxy) 741


Load Balancing VMware Tunnel (Per-App VPN) 742

14 Configuring TSO GRO RSS 744


TSO GRO RSS Features 749
Recommendation for Better Performance of Service Engines 751
Certificate Management Integration for Trust Anchor 753

15 Migration of Service Engine Properties 757


Service Engine Bootup Properties 757
Changes in 20.1.3 760

16 HTTP/2 Support on NSX Advanced Load Balancer 761


IP Group 777

About This Guide

The VMware NSX Advanced Load Balancer Configuration Guide provides information about
configuring the NSX Advanced Load Balancer, including creating virtual services, creating and
managing Service Engine groups, controlling the behavior of Service Engines using application
profiles, creating and configuring pools and pool groups, autoscaling Service Engines and virtual
services, configuring load balancing algorithms, configuring different types of persistence, health
monitoring of deployed servers, configurations for high availability, options for operating NSX
Advanced Load Balancer securely, certificate management including registration, renewal, and
Client Certificate Authentication, and more.

1 Load Balancing
NSX Advanced Load Balancer is a software load balancer that provides scalable application
delivery across any infrastructure. NSX Advanced Load Balancer provides 100% software load
balancing to ensure a fast, scalable, and secure application experience. It delivers elasticity and
intelligence across any environment. It scales from 0 to 1 million SSL transactions per second in
minutes. It achieves 90% faster provisioning and 50% lower TCO than traditional appliance-based
approaches.

NSX Advanced Load Balancer is built on software-defined principles, enabling a next generation
architecture to deliver the flexibility and simplicity expected by IT and lines of business. The NSX
Advanced Load Balancer has three components.

n The NSX Advanced Load Balancer Service Engine

n The NSX Advanced Load Balancer cluster

n The NSX Advanced Load Balancer admin console

To know more about the architecture of NSX Advanced Load Balancer, see NSX Advanced Load
Balancer Overview in the NSX Advanced Load Balancer Installation Guide.

This chapter includes the following topics:

n Cloud Connectors

n Virtual Services

n Service Engine Group

n Application Profile

n Overview of SSL/TLS Termination

n TCP or UDP Profile

n Internet Content Adaptation Protocol

n Server Pools

n Rewriting Host Header to Server Name

n Allowed Characters for Object Names

n Pool Groups

n Load Balancing Algorithms


n Persistence

n Compression

n Caching

n Use Cases

n Health Monitoring

Cloud Connectors
Clouds are containers for the environment that NSX Advanced Load Balancer is installed or
operating within. During initial setup of NSX Advanced Load Balancer, a default cloud, named
Default-Cloud, is pre-configured. This is where the first Controller is deployed. Additional clouds
may be added, containing SEs and virtual services.

To view the clouds available, from the NSX Advanced Load Balancer UI, navigate to Infrastructure
> Clouds.

This screen lists all the clouds that have been created, along with the Type of environment, such as
vCenter, OpenStack, or bare metal servers (no orchestrator), and the Status of the cloud indicating
its readiness. Hovering the mouse over the status icon provides more information about the
status, such as ready for use or incomplete configuration.

Additionally, from this screen, you can perform the following functions.

n Edit an existing cloud.

n Convert the cloud from read access mode or write access mode to no access mode.
When in no access mode, Avi Controllers do not have access to the cloud’s orchestrator, such
as vCenter. See the installation documentation for the orchestrator to see the full implications
of no access mode.


n Download the SE Image. When Avi Vantage is deployed in read access mode or no
access mode, SEs must be installed manually. Use this button to pull the SE image for the
appropriate image type (ova or qcow2). The SE image will have the Controller’s IP or cluster
IP address embedded within it, so an SE image may only be used for the Avi Vantage
deployment that created it.

n Generate Token. Authentication tokens are used for securing communication between
Controllers and SEs. If Avi Vantage is deployed in read access mode or no access mode, the
SE authentication tokens must be copied manually by the Avi Vantage user from the Controller
web interface to the cloud orchestrator.

n Click the plus icon or anywhere within the table row to expand the row and show more
information about the cloud. For instance, in AWS the Region, Availability Zone, and Networks
are shown.

n Select a cloud and click Delete to remove the cloud. However, a cloud cannot be deleted if it is
associated with a virtual service, or any other object such as a pool or Service Engine group.

Creating a Cloud
1 From the NSX Advanced Load Balancer UI, navigate to Infrastructure > Clouds.

2 Click Create and select the environment in which NSX Advanced Load Balancer has to be
installed.

3 Configure the settings based on the cloud selected. Click on an installation reference to view
the configuration options for each cloud/environment.

a Google Cloud Platform

b Linux Server Cloud

c OpenStack

d VMware NSX-T

e Linux KVM (No-Access mode)

f Microsoft Azure

g Amazon Web Services

h Cisco CSP

i Cisco ACI based Environments

j Oracle Cloud

k Nutanix Acropolis Based Environments

l VMware vSphere Environments


Virtual Services
Virtual services are the core of the load balancing and proxy functionality. A virtual service
advertises an IP address and ports to the external world and listens for client traffic. When a
virtual service receives traffic, it can be configured to:

n Proxy the client’s network connection.

n Perform security, acceleration, load balancing, traffic statistics gathering, and other tasks.

n Forward the client’s request data to the destination pool for load balancing.

A virtual service can be thought of as an IP address that NSX Advanced Load Balancer is listening
to, ready to receive requests. In a normal TCP/HTTP configuration, when a client connects to the
virtual service address, NSX Advanced Load Balancer will process the client connection or request
against a list of settings, policies and profiles, then send valid client traffic to a back-end server
that is listed as a member of the virtual service’s pool.

Typically, the connection between the client and NSX Advanced Load Balancer is terminated or
proxied at the SE, which opens a new TCP connection between itself and the server. The server
will respond directly to the NSX Advanced Load Balancer IP address, not to the original client
address. NSX Advanced Load Balancer forwards the response to the client via the TCP connection
between itself and the client.

Figure: A client connects to a virtual service (an IP:port listener with application, network, and persistence profiles) hosted on a Service Engine. The virtual service uses a load balancing algorithm and health monitoring to distribute traffic to the servers in the pool's server list.

A typical virtual service consists of a single IP address and service port that uses a single network
protocol. NSX Advanced Load Balancer allows a virtual service to listen to multiple service ports or
network protocols.

For instance, a virtual service could be created for both service port 80 (HTTP) and 443 SSL
(HTTPS). In this example, clients can connect to the site with a non-secure connection and
later be redirected to the encrypted version of the site. This allows administrators to manage a
single virtual service instead of two. Similarly, protocols such as DNS, RADIUS and Syslog can be
accessed via both UDP and TCP protocols.
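
For illustration, a dual-port virtual service of this kind can also be sketched from the Controller CLI, following the same configure virtualservice syntax shown later in this guide. The virtual service name and the enable_ssl flag below are assumptions used only for this example:

configure virtualservice example-vs
services
port 80
save
services
port 443
enable_ssl
save
save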

It is possible to create two unique virtual services, where one is listening on port 80 and the other
is on port 443; however, they will have separate statistics, logs, and reporting. They will still be
owned by the same Service Engines (SEs) because they share the same underlying virtual service
IP address.


To send traffic to destination servers, the virtual service internally passes the traffic to the
pool corresponding to that virtual service. A virtual service normally uses a single pool, though
an advanced configuration using policies or DataScripts can perform content switching across
multiple pools. A script also can be used instead of a pool, such as a virtual service that only
performs an HTTP redirect.

A pool can be associated with multiple virtual services if they have the same Layer 4 or 7
application profile.

When creating a virtual service, that virtual service listens to the client-facing network, which is
most likely the upstream network where the default gateway exists. The pool connects to the
server network.

Normally, the combined virtual service and pool are required before NSX Advanced Load
Balancer can place either object on an SE. When making an SE placement decision, NSX
Advanced Load Balancer must choose the SE that has the best reachability or network access
to both client and server networks. Alternatively, both the clients and servers may be on the same
IP network.

Viewing Virtual Services


From the NSX Advanced Load Balancer UI, navigate to Applications > Virtual Services to view
all the virtual services created. From this screen, you can create a new virtual service, search for
existing virtual services, and edit or delete virtual services.

Field Description

Name Lists the name of each virtual service. Clicking the name of a virtual service opens the Analytics tab of the respective virtual service.

Health Displays a numeric, color-coded health status of the virtual service. A red exclamation mark (!) indicates that the virtual service is down. A dash appears if the virtual service is disabled, not deployed, or in an error state. Hover the cursor over the health score to view the Health Score popup for the virtual service. Click the View Health button at the bottom of the popup screen to view more insights on the health status.

Address Displays the IP address advertised by the virtual service.

Services Lists the service ports configured for the virtual service. Ports that are configured for terminating SSL/TLS connections are denoted in parentheses. A virtual service may have multiple ports configured, for example, 80 (HTTP) and 443 (SSL).

Pools Lists the pools assigned to each virtual service. Clicking a pool name opens the Analytics tab of the respective pool.

Service Engine Group Displays the group from which Service Engines may be assigned to the virtual service.

Service Engines Lists the Service Engines to which the virtual service is assigned. Clicking a Service Engine name opens the Analytics tab of the respective Service Engine.

Total Service Engines Shows the number of SEs assigned to the virtual service as a time series. This is useful to see if a virtual service scales the number of SEs up or down.

Throughput Displays a thumbnail chart of the throughput for each virtual service for the time frame selected. Hovering the cursor over this graph shows the throughput for the highlighted time. Clicking a graph opens the Analytics tab of the virtual service.

Open Conns Displays the average number of open connections.

Client RTT Displays the average TCP latency between clients of the virtual service and the respective SEs.

Server RTT Displays the average TCP latency between back-end servers of the virtual service and its SEs.

Conns Displays the rate of total connections per second.

Bad Connections Displays the rate of errored connections per second.

RX Packets Displays the average rate of packets received per second.

TX Packets Displays the average rate of packets transmitted per second.

Policy Drops Displays the rate of total connections dropped per second due to virtual service policies. It includes drops due to rate limits, security policy drops, connection limits, and so on.

DoS Attacks Displays the number of DoS attacks occurring per second.

Alerts Displays the number of alerts related to the virtual service, pool, or Service Engines.

To customize the columns in the table, click the settings icon. Add or remove columns by using the
arrows in the screen.


Virtual Services
The Virtual Services screen shows extensive information about the virtual service selected.

To view the details of a specific virtual service, navigate to Applications > Virtual Services. Click
the required virtual service.

Alternatively, you can also navigate to Applications > Dashboard and click the required virtual service.

The Virtual Service screen has the following tabs for the virtual service selected.

n Virtual Service Analytics

n Virtual Service Application Logs

n Virtual Service Health

n Virtual Service Clients Page

n View Security Insights

n Virtual Service Events Page

n Virtual Service Alerts Page

Virtual Service Quick Info Popup


You can view the Virtual Service quick info popup from any tab of the Virtual Service screen by
hovering over or clicking the virtual service name.

The Virtual Service quick info popup has the following buttons:

n Scale-Out distributes connections for the virtual service to one additional SE per click, up to
the maximum number of SEs defined in the SE group properties.

n Scale In removes the VIP address from the selected Service Engine. If Primary is selected, one
of the existing Secondaries will become the new Primary.


n Migrate moves the virtual service from the SE it is currently on to a different SE within the
same SE group.

Note For information related to the SE group settings min_scaleout_per_vs and max_scaleout_per_vs, refer to Impact of Changes to Min/Max Scaleout per Virtual Service.

This popup also displays the following information (if applicable) for the virtual service:

Field Description

Service Engine Names or IP addresses of the SEs this virtual service is deployed on. Clicking an SE name opens the Service Engine Details page for that SE.

Uptime / Downtime The amount of time the virtual service has been in the current up or down state.

Address IP address of the virtual service.

Application Profile The application profile applied to the virtual service.

Service Port Service port(s) on which the virtual service is listening for client traffic.

TCP/UDP Profile The TCP/UDP profile applied to the virtual service.

SSL Certificates The certificate(s) applied to the virtual service.

Non-Significant Logs When disabled, the virtual service defaults to logging significant events or errors. When enabled, all connections or requests are logged.

Real-Time Metrics When this option is disabled, metrics are collected every five minutes, regardless of whether the Display Time is set to Real-Time. When the option is enabled, metrics are collected every 15 seconds.

Client Log Filters Number of custom log filters applied to the virtual service. Log filters can selectively generate non-significant logs.

Client Insights Type of client insights gathered by the virtual service: Active, Passive, or None.

Wildcard VIP
This section explains the configuration, common deployment, and use case scenarios of the
wildcard VIP.

In NSX Advanced Load Balancer, a virtual service is configured with an IP address as VIP and
ports as services to load balance the client traffic from the external world. NSX Advanced Load
Balancer processes the client connection or request against a list of settings, policies, and profiles,
then load balances valid client traffic to a back-end application server listed as a pool member of
the virtual service.

In addition to load balancing the client traffic to the application servers, NSX Advanced Load
Balancer also provides supportability, manageability, and scalability to the application servers.


You can upgrade application servers with zero downtime when they are deployed with the NSX Advanced
Load Balancer.

For more information, see NSX Advanced Load Balancer Platform Overview.

Wildcard VIP extends the capability of a virtual service to provide advanced load balancing
services to network elements such as firewalls.

Wildcard VIP allows a network match configuration in a virtual service. An application virtual
service accepts connections destined to a VIP, whereas a wildcard VIP accepts connections
destined to a subnet, which is configurable in CIDR notation.

Features Supported
The following profiles support wildcard virtual services:

n Network Profile: TCP fast path and UDP fast path.

n Application Profile: System L4 application profile

Supported Environments
The following environments support wildcard VIP:

n Active/Standby SE group, in DPDK-based environments

n VMware Read/Write modes and Bare-metal clouds

Network Address Match


Network traffic match is configured to handle a huge range of incoming traffic. In a corporate
network, it can be a subnet/prefix configuration.

For example, it could be a prefix of 10.0.0.0/8 (to accept traffic destined to 10.0.0.0 - 10.255.255.255) or
0.0.0.0/0 (to accept every incoming packet).
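
As a sketch only, such a catch-all prefix could be expressed with the same vsvip syntax used in Configuring Wildcard VIP later in this section. The VSVIP name is a placeholder, and accepting a prefix length of 0 is an assumption based on the traffic selection table below:

configure vsvip vsvip-any
vip index 0
ip_address 0.0.0.0
prefix_length 0
save
save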

The following is a demonstration of wildcard VIP in deployment:


Figure 1-1. Wildcard VIP: client traffic from the Internet (www) reaches Avi SE 1, which load balances it across the firewalls FW 1, FW 2, and FW 3 toward the internal network.

In this deployment mode, the wildcard virtual service is in the frontend, facing the client traffic.
Three firewalls: FW1, FW2, and FW3 are configured as pool members. The wildcard virtual service
load balances the traffic across the firewalls FW1, FW2, and FW3.

Firewalls are rarely the destination for the client traffic, and the traffic is expected to be
transparently forwarded to the firewall. Hence, the traffic from the client must be sent as is to
the pool member without any source address or destination address translation (SNAT or DNAT).
In such a deployment, the network address match is configured in the traffic selection criteria of
the VIP. The wildcard VIP of the virtual service will only load balance the traffic to these firewalls
without changing the client traffic.

Traffic Selection Criteria


With the introduction of network address match as part of the virtual service, the following
combinations can be configured as the traffic selectors for the VIP:


Destination | Service Port | Virtual Service Configuration
IP Address | Port | VIP in ip_address, specific service port
IP Address | Any | Service port in the range 1 - 65535
Network Address | Port | Prefix in ip_address (VIP)
Network Address | Any | Prefix in ip_address, Service Port 0
Any | Port | Prefix 0 in ip_address
Any | Any | Prefix 0 in ip_address, Service Port 0

Note Currently, only TCP/UDP fast path network profiles support wildcard VIP.

In addition to the network and port combinations already supported, the network address match
feature lets you configure all the aforesaid variants of the wildcard virtual service. If you configure
multiple virtual services with varying combinations as specified in the table, the virtual service with
the most specific match is selected, in the same order of preference as listed in the table.

Traffic Selection Match

The virtual service with the most specific match is selected. For example, consider the following virtual services:

VS | Destination (IP Address / Network) | Service Port (Port / Port Range)
vs1 | 10.10.10.10 | 80
vs2 | 10.10.10.0/24 (10.10.10.0 - 10.10.10.255) | 1-100
vs3 | 10.10.0.0/16 (10.10.0.0 - 10.10.255.255) | 1-1000
vs4 | 10.0.0.0/8 (10.0.0.0 - 10.255.255.255) | 1-2000
vs5 | ANY | 1-3000
vs6 | ANY | ANY

For example, traffic destined to 10.10.10.10:80 matches vs1, 10.10.10.254:80 matches vs2, 10.10.10.13:500 matches vs3, 1.2.3.4:1500 matches vs5, and 1.2.3.4:56000 matches vs6.

Configuring Wildcard VIP


This section discusses the steps to configure wildcard VIP in a virtual service.

The wildcard VIP is configured through the NSX Advanced Load Balancer Controller CLI as
follows:


Enabling Wildcard VIP in Virtual Service Configuration


1 Configure the wildcard VIP in a virtual service using prefix_length to take the net mask into
consideration.

configure vsvip <vsvip_name>


vip index 0
ip_address 10.0.0.0
prefix_length 8
save
vrf_context_ref <vrf>
tenant_ref <tenant>
cloud_ref <cloud>
save

2 Configure the placement VIP. To enable wildcard VIP, the placement subnet is mandatory for
the virtual service that refers to the inline virtual service VIP.
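
The following is a minimal sketch of adding a placement network to the wildcard VSVIP, modeled on the field names visible in the show output that follows; the network reference and subnet are placeholders:

configure vsvip vsvip-wc-Default-Cloud
vip index 0
placement_networks
network_ref <placement-network-name>
subnet 100.64.1.0/24
save
save
save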

[admin:abc-ctrl-wildcard]: > show vsvip vsvip-wc-Default-Cloud

+---------------------------+--------------------------------------------+
| Field                     | Value                                      |
+---------------------------+--------------------------------------------+
| uuid                      | vsvip-7524a40f-33d0-4e4e-8d20-193f31b8b39  |
| name                      | vsvip-wc-Default-Cloud                     |
| vip[1]                    |                                            |
| vip_id                    |                                            |
| ip_address                | 10.0.0.0                                   |
| enabled                   | True                                       |
| auto_allocate_ip          | False                                      |
| auto_allocate_floating_ip | False                                      |
| avi_allocated_vip         | False                                      |
| avi_allocated_fip         | False                                      |
| auto_allocate_ip_type     | V4_ONLY                                    |
| placement_networks[1]     |                                            |
| network_ref               | vxw-dvs-26-virtualwire-9-sid-2210008-wdc-02-vc21-avi-dev001 |
| subnet                    | 100.64.1.0/24                              |
| prefix_length             | 8                                          |
| vrf_context_ref           | global                                     |
| east_west_placement       | False                                      |
| tenant_ref                | admin                                      |
| cloud_ref                 | Default-Cloud                              |
+---------------------------+--------------------------------------------+
[admin:abc-ctrl-wildcard]: >

Configuring the Port Range


Port ranges can be configured as part of the service object of the virtual service. In the NSX Advanced
Load Balancer Controller, you can configure port 0, which accepts the complete port range of
1-65535.

configure virtualservice <vs-name>


services
port 0
save
save

Configuring the Application Profile


In the application profile, a new field, preserve_dest_ip_port, has been introduced to enable the
no-DNAT functionality.

As firewalls expect the client traffic unchanged for validation, configure the application
profile of the wildcard virtual service with preserve_client_ip, preserve_client_port, and
preserve_dest_ip_port.

Configure preserve_destination_ip_port in the application profile.

configure applicationprofile <app_profile_name>


preserve_dest_ip_port
save

The application profile is configured as shown below:

[admin:abc-ctrl-wildcard]: > show applicationprofile test1 | grep preserve
| preserve_client_ip    | True |
| preserve_client_port  | True |
| preserve_dest_ip_port | True |

Configuring Routing Pool


To configure the routing pool,

configure pool <pool_name>


routing_pool
save

The configured routing pools appear as shown below:

[admin:abc-ctrl-wildcard]: > show pool test1 | grep routing_pool
| routing_pool | True |
[admin:abc-ctrl-wildcard]: >


Placement Network in VIP


This section explains the configuration of placement networks on the SE.

In No-Access and Linux Server Cloud scenarios, the Controller cannot configure vNICs on
demand on the SE. In this case, the SE is configured with a specific number of vNICs, which have
access to specific subnets. If a VIP is not present on any of the subnets accessible to the SE, the
Controller cannot place the virtual service on that SE.

A placement network can be configured from the subnets accessible to the SE. Once that is
configured, the Controller will forcefully place the VIP on the vNICs which have access to the
placement networks. The user can then configure static routes on the SE, or on the previous hop
router, to ensure the traffic for the VIP is forwarded to the placement vNIC.
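
For example, a static route can be added in the SE's VRF context from the Controller CLI so that traffic for a given prefix is sent through the desired next hop. This is only a sketch; the vrfcontext static_routes field names and the addresses shown are assumptions for illustration:

configure vrfcontext global
static_routes
route_id 1
prefix 1.1.0.0/16
next_hop 2.2.2.1
save
save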

Consider this configuration where:

n The VIP is 0.0.0.0/0, with placement networks 2.2.2.0/24 and 3.3.3.0/24

n The clients are trying to access 1.1.10.10 and 1.1.20.10

n The servers are 5.5.5.11, 5.5.5.12, and 5.5.5.13

n The SE has vNICs in the subnets 2.2.2.0/24, 3.3.3.0/24, and 4.4.4.0/24

n Router 1 provides connectivity between 1.1.0.0/16, 2.2.2.0/24, and 3.3.3.0/24

n Router 2 provides connectivity between 4.4.4.0/24 and 5.5.5.0/24

In this case, all traffic intended for 1.1.10.10 and 1.1.20.10 is matched by the VIP 0.0.0.0/0 and is
routed to the SE via the vNIC in the 2.2.2.0/24 subnet, and then load balanced across the servers
via the vNIC in the 4.4.4.0/24 subnet.

NSX Advanced Load Balancer checks all the matching networks on the SE and places the virtual
service on all the vNICs of the matching SEs.

NSX Advanced Load Balancer supports multiple placement networks, enabling a virtual service to
be placed on multiple vNICs.

Consider the following scenarios to understand this further:

Scenario: SE has access to the exact subnet of the VIP placement network.
Placement Networks: 2.2.2.0/24
Networks on SE: SE 1 eth1: 2.2.2.0/24; SE 2 eth1: 2.2.2.0/24
Placement Behavior: The virtual service is placed on the matching vNICs of both SEs (eth1 on SE 1 and eth1 on SE 2).

Scenario: Both the SEs have access to the same network, which is one of the two placement networks.
Placement Networks: 2.2.2.0/24, 3.3.3.0/24
Networks on SE: SE 1 eth1: 2.2.2.0/24; SE 2 eth1: 2.2.2.0/24
Placement Behavior: The virtual service is placed on the matching vNICs of both SEs (eth1 on SE 1 and eth1 on SE 2).

Scenario: SE has access to a single network which is a superset of all the subnets of the placement network.
Placement Networks: 2.2.2.0/25, 2.2.2.128/25
Networks on SE: SE 1 eth1: 2.2.2.0/24; SE 2 eth1: 2.2.2.0/24
Placement Behavior: Since the vNIC on each SE covers both the placement networks, the virtual service is placed on the vNIC with the 2.2.2.0/24 network (eth1 on SE 1 and eth1 on SE 2).

Scenario: There are two placement networks, and both SEs have access to separate placement networks.
Placement Networks: 2.2.2.0/24, 3.3.3.0/24
Networks on SE: SE 1 eth1: 2.2.2.0/24; SE 2 eth1: 3.3.3.0/24
Placement Behavior: The virtual service is placed on the matching vNICs of both SEs (eth1 on SE 1 and eth1 on SE 2).

Scenario: SE has access to all the subnets of the placement networks.
Placement Networks: 2.2.2.0/24, 3.3.3.0/24
Networks on SE: SE 1 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24; SE 2 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24
Placement Behavior: The virtual service is placed on all the matching vNICs of both SEs (eth1 and eth2 on SE 1, and eth1 and eth2 on SE 2).

Scenario: Placement network gets modified to add a new placement network after the virtual service is placed. The network can be modified either by adding an SE network or by adding a placement network.
Placement Networks: Before: 2.2.2.0/24; After: 2.2.2.0/24, 3.3.3.0/24
Networks on SE: SE 1 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24; SE 2 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24
Placement Behavior: The virtual service is placed on the matching vNICs of both SEs (eth1 and eth2 on SE 1, and eth1 and eth2 on SE 2).

Scenario: SE gets access to a new network after the virtual service is placed.
Placement Networks: 2.2.2.0/24, 3.3.3.0/24
Networks on SE: Before: SE 1 eth1: 2.2.2.0/24; SE 2 eth1: 2.2.2.0/24. After: SE 1 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24; SE 2 eth1: 2.2.2.0/24, eth2: 3.3.3.0/24
Placement Behavior: The virtual service is placed on the matching vNICs of both SEs (eth1 and eth2 on SE 1, and eth1 and eth2 on SE 2).

Note Placement networks are currently supported only for IPv4 configurations.

Caveats
Wildcard virtual service does not support the following:

n BGP based scale-out and other associated BGP features

n Flow monitoring

n Shared VIP

n Traffic cloning


Difference Between Virtual Service and Virtual IP


NSX Advanced Load Balancer uses virtual services and virtual IP addresses (VIPs). These are
related but separate things. This section discusses the differences between VIP and a virtual
service.

n Virtual IP (VIP): A single IP address owned and advertised by an SE.

n Virtual Service: A VIP plus specific Layer 4 protocol ports used to proxy an application.
A single VIP can have multiple virtual services. As an example, all the following virtual services
can exist on a single VIP:

n 192.168.1.1:80,443 (HTTP/S)

n 192.168.1.1:20,21 (FTP)

n 192.168.1.1:53 (DNS)

The VIP in this example is 192.168.1.1. The services are HTTP/S, FTP, and DNS. Thus, the HTTP/S
virtual service is advertised with address 192.168.1.1:80,443, which is the VIP plus the protocol ports 80 and 443.
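
A rough CLI sketch of this arrangement, with two virtual services sharing one VSVIP object, is shown below. The object names and the vsvip_ref and enable_ssl fields are assumptions used only for illustration:

configure vsvip shared-vip
vip index 0
ip_address 192.168.1.1
save
save

configure virtualservice http-https-vs
vsvip_ref shared-vip
services
port 80
save
services
port 443
enable_ssl
save
save

configure virtualservice dns-vs
vsvip_ref shared-vip
services
port 53
save
save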

The VIP concept is essential in NSX Advanced Load Balancer because a given IP address can be
advertised (ARPed) from only a single SE. If the SE that owns a VIP is busy and needs to migrate a
virtual service’s traffic to a less active SE, then all the VSs are moved from the busy SE to the same
new (less busy) SE. If an SE fails, all of its virtual services would be moved to a single SE. This is
true even if multiple idle SEs are available in the SE group.

Create a Virtual Service


A new virtual service can be created using either the basic mode or the advanced mode. In
the basic mode, not all features are displayed during the initial setup. However, after the virtual
service has been created, all the options are displayed in the edit mode. While basic mode may
have been used to create the virtual service, it does not preclude access to any advanced features.
Creating a Virtual Service in Basic Setup


The basic setup enables quick creation of required objects, in particular the pool containing
servers.

Procedure

1 Navigate to Applications > Virtual Services > CREATE VIRTUAL SERVICE.

2 Select Basic Setup.

3 If NSX Advanced Load Balancer is configured for multiple cloud environments, such as
VMware and Amazon Web Services (AWS), select the required cloud for the virtual service
deployment. If NSX Advanced Load Balancer exists in a single environment, skip this step.

4 Select the VRF context.

5 Enter a unique Name for the virtual service.

6 Enter the VS VIP address. This is used during the creation of a shared virtual service.


7 Select the Application Type.

Option Description

HTTP The virtual service will listen for non-secure Layer 7 HTTP. Selecting this
option auto-populates the Service port field to 80. Override the default with
any valid port number; however, clients will need to include the port number
when accessing this virtual service. Browsers default to automatically append
the standard port 80 to HTTP requests. Selecting HTTP enables an HTTP
application profile for the virtual service. This allows NSX Advanced Load
Balancer to proxy HTTP requests and responses for better visibility, security,
acceleration, and availability.

HTTPS The virtual service will listen for secure HTTPS. Selecting this option auto-
populates port 443 as the service port. Override this default with any valid
service port number. However, clients will need to include the port number
when accessing this virtual service as browsers automatically append the
standard port 443 to HTTPS requests. When selecting HTTPS, use the
Certificate pull-down menu to reference an existing certificate or create a new
self-signed certificate. A self-signed certificate will be created with the same
name as the virtual service and will be an RSA 2048 bit cert and key. The
certificate can be swapped out later if a valid certificate is not yet available at
time of virtual service creation.

L4 The virtual service will listen for layer 4 requests on the port you specify in the
Service port field. Select this option to use the virtual service for non-HTTP
applications, such as DNS, mail, or a database.

L4 SSL/TLS The virtual service will listen for secure layer 4 requests. Selecting this option
auto-populates port 443 in the Service port field. Override this default with
any valid service port number.

8 In the Service field, accept the default port displayed for the selected Application Type.
Alternatively, you can enter the service port manually, as required. To add multiple service
ports or ranges, edit the virtual service after creation.

9 The pool directs load balanced traffic to this list of destination servers. The servers can be
configured by IP address, name, network or via IP Address Group. Add one or more servers to
the new virtual service by using one of the options:

n Select IP Address, Range, or DNS Name and enter the Server IP Address required. Click
Add Server.

n Select IP Address, Range, or DNS Name and click Select Servers by Network to open a
list of reachable networks to add the server from. See Select Servers by Network for more
information.

n Click the option IP Group to select an IP Group from a list of servers from the IP Address
Group available.

10 Click Save.


Results

The virtual service is assigned automatically to a Service Engine. If an available SE already exists,
the virtual service will be deployed and be ready to accept traffic. If a new SE must be created, it
may take a few minutes before it is ready.

In some environments, NSX Advanced Load Balancer may require additional networking
information, such as IP addresses or clarification of desired networks, subnets, or port groups
to use prior to a new Service Engine creation. The UI will prompt for additional information if this is
required.
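
The basic setup can also be approximated from the Controller CLI. The sketch below creates a pool with one server and a virtual service referencing it; the object names, the servers sub-object syntax, and the pool_ref and vsvip_ref fields are assumptions shown only for illustration:

configure pool web-pool
servers ip 10.10.10.11
port 80
save
save

configure vsvip web-vsvip
vip index 0
ip_address 10.10.10.100
save
save

configure virtualservice web-vs
vsvip_ref web-vsvip
pool_ref web-pool
services
port 80
save
save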

Creating a Virtual Service in Advanced Setup


Creating a virtual service in the advanced setup allows you to configure all the options through
different tabs.

Procedure

1 Navigate to Applications > Virtual Services > CREATE VIRTUAL SERVICE.

2 Select Advanced Setup.

3 If NSX Advanced Load Balancer is configured for multiple cloud environments, such as
VMware and Amazon Web Services (AWS), select the required cloud for the virtual service
deployment. If NSX Advanced Load Balancer exists in a single environment, skip this step.

4 Step 1: Settings.

5 Step 2: Policies.

6 Step 3: Analytics.

7 Step 4: Advanced.


8 Click Save.

Step 1: Settings
Configure the basic setting for a virtual service like the VIP Address, pool, profiles and policies and
more.

Procedure

1 Enter a unique Name for the virtual service.

2 The Enabled? toggle icon is green by default. This implies that the virtual service will accept
and process traffic normally. To deactivate the virtual service, click the toggle button. The
existing concurrent connections will be terminated, and the virtual service will be unassociated
from all Service Engines. No health monitoring is performed for deactivated virtual services.

3 The Traffic Enabled? option is selected by default. Click the option to stop virtual
service traffic on its assigned Service Engines. This option is effective only when the virtual
service is enabled.

4 Select Virtual Hosting VS if this virtual service participates in virtual hosting via SSL’s Server
Name Indication (SNI). This allows a single SSL decrypting virtual service IP:port to forward
traffic to different internal virtual services based on the name of the site requested by the
client. The virtual hosting VS must be either a parent or a child.

Option Description

Parent The parent virtual service is external facing, and owns the listener IP address,
service port, network profile, and SSL profile. Specifying a pool for the parent
is optional, and will only be used if no child virtual service matches a client
request. The SSL certificate may be a wildcard certificate or a specific domain
name. The parent’s SSL certificate will only be used if the client’s request
does not match a child virtual service domain. The parent virtual service
will receive all new client TCP connections, which will be reflected in the
statistics. The connection is internally handed off to a child virtual service, so
subsequent metrics such as concurrent connections, throughput, requests,
logs and other stats will only be shown on the child virtual service.

Child The child virtual service does not have an IP address or service port. Instead,
it points to a parent virtual service, which must be created first. The domain
name is a fully qualified name requested by the SNI-enabled client within
the SSL handshake. The parent matches the client request with the child’s
domain name. It does not match against the configured SSL certificate. If no
child matches the client request, the parent’s SSL certificate and pool are
used.

5 Select the Virtual Hosting Type as Enhanced Virtual Hosting or SNI.

6 Enter the VS VIP address. This is used during the creation of a shared virtual service.


7 Under Profiles, select the following.

a The TCP/UDP Profile to determine the network settings such as the protocol, TCP or UDP, and
related options for the protocol.

b The Application Profile to enable application layer specific features for the virtual service.

c The Bot Detection Policy.

d The ICAP Profile to configure the ICAP server used when checking the HTTP request.

e The Error Page Profile to be used for this virtual service. This profile is used to send the
custom error page, generated by the proxy, to the client.

8 Under the Service Port section, enter the Services, which are the service ports on which the virtual
service will listen for incoming traffic. Click Add Port to add multiple ports.

a Click Switch to Advanced to enter a range of service ports.

b Select Use as Horizon Primary/Tunnel Protocol Ports in case of a Horizon deployment.
This option is used for L7 redirect.

c Select an Application Profile under Override Application Profile to enable application
layer specific features for this specific service.

d Enable Override TCP/UDP and select the profile required to override the virtual service's
default TCP/UDP profile on a per-service port basis.

e Click Add Port to add another range of service ports and configure the same.

9 Under the Pool section, either select a Pool or a Pool Group. Using the Pool drop-down
list, select the required pool that contains destination servers and related attributes such as
load-balancing and persistence.

10 Select Ignore network reachability constraints for the server pool, if required. If the pool
contains servers in networks unknown or inaccessible to NSX Advanced Load Balancer, the
Controller is unable to place the new virtual service on a SE, as it does not know which SE
has the best reachability. This requires you to manually choose the virtual service placement.
Selecting this option will allow the Controller to place the virtual service, even though some
or all servers in the pool may be inaccessible. For instance, you can select this option while
creating the virtual service, and later configure a static route to access the servers.

Step 2: Policies
Use the Policies tab to define policies or DataScripts for the virtual service. DataScripts and
policies consist of one or more rules that control the flow of connections or requests through
the virtual service to control security, client request attributes, or server response attributes. Each
rule is a match/action pair that uses if/then logic: if something is true, then it matches the rule
and the corresponding actions will be performed. Policies are simple GUI-based, wizard-driven logic,
whereas DataScripts allow more powerful manipulation using Avi Vantage's Lua-based scripting
language.


Procedure

1 Configure Network Security to explicitly allow or block traffic based on network (TCP/UDP)
information.

a Select the IP Reputation DB.

b Select the Geo DB.

c Click the + to view the Add Network Security Rule sub-screen.

d Select the Logging checkbox for NSX Advanced Load Balancer to log when an action has
been invoked.

e Under Matching Rules, select the network security match criteria from the Add New
Match drop-down list. For example, Service Port is 80.

f Under Actions, select a configurable action to be implemented when the match criteria is
met. For more information, see Network Security.

g In the Role-Based Access Control (RBAC) section, click Add and configure the Key and
the corresponding Value to provide granular access to control, manage, and monitor
applications. For more information, see Granular Role Based Access Controls per App.

h Click Save Rule.

2 Similarly, configure HTTP Security, HTTP Request, HTTP Response rules, as required.

3 Under DataScripts, click Add DataScript.

a Select the Script to Execute from the drop-down list or Create DataScript.

b Click Save DataScript.

4 Author custom authentication policies and attach the policies to identity providers (IdP). Under
Access, select and configure one of the following:

Option Description

SAML Security Assertion Markup Language (SAML) is an XML-based markup
language for exchanging authentication and authorization between an
identity provider (IdP) and a service provider (SP). To know how to configure
an application for SAML-based authentication, create an SSO policy, and bind
it to the virtual service, see SAML Configuration on NSX Advanced Load
Balancer.

PingAccess Ping Identity’s PingAccess Agent can be used to control client access to a
virtual service. To know how to create a PingAccess Agent profile, create an
SSO Policy of type PingAccess, and associate it with the virtual service.


Option Description

JWT JWT validation is supported as one of the access policies for secure
communication through NSX Advanced Load Balancer and it is based on a
JWT issued by an authorization server. To know more, see Configuring NSX
Advanced Load Balancer for JSON Web Tokens (JWT) Validation.

LDAP LDAP is an extension of the basic authentication policy where the provided
username and password will be authenticated against the target LDAP
server. LDAP is a commonly used protocol for accessing a directory service.
A directory service is a hierarchical object oriented database view of
an authentication system. NSX Advanced Load Balancer supports LDAP
authentication for virtual services. To know more, see Configuring LDAP
Authentication.

5 Click Next.

Step 3: Analytics
The Analytics tab of the New Virtual Service wizard defines how the NSX Advanced Load
Balancer captures analytics for the virtual service. These settings control the thresholds for
defining client experience and the resulting impact on end-to-end timing and the health score,
the level of metrics collection, and the logging behavior.

Procedure

1 Select an Analytics Profile from the drop-down menu. This profile determines the thresholds
for determining client experience. It also defines errors that can be tailored to ignore
certain behavior that might not be an error for a site, such as an HTTP 401 (authentication
required) response code. The NSX Advanced Load Balancer uses errors and client experience
thresholds to determine the health score of the virtual service and might generate significant
log entries for any issues that arise.

2 There are several metrics, such as End to End Timing, Throughput, Requests, and
more. The NSX Advanced Load Balancer updates these metrics periodically, either at a default
interval of five minutes, or as defined in the Metric Update Frequency. Enable Real Time
Metrics to gather detailed metrics aggressively for a limited period, as required.

n Enter 0 to collect detailed metrics aggressively for an indefinite period of time.

n Enter a value, for example, 30 min, to collect real-time metrics for the defined 30 minutes.
After this period of time elapses, the metrics collection reverts to slower polling. Real-time
metrics are helpful when troubleshooting.

n Note Capturing real-time metrics can negatively impact system performance for busy
Controllers with large numbers of virtual services or Controllers configured with minimal
hardware resources.


3 Data about connecting clients can be captured using Client Insights. Specific clients may be
included or excluded via the Include URL, client IP address, and exclude URL options. By
default, No Insights is selected.

Option Description

Active For HTTP virtual services, the active mode goes further by inserting
an industry standard JavaScript query into a small number of server
responses to provide HTTP navigation and resource timing. Client browsers
transparently return additional information about their experience loading the
web page. NSX Advanced Load Balancer uses this information to populate
the Navigation Timing and Top URL Timing metrics. A maximum of one
HTTP web page per second will be selected for generating the sampled data.

Passive Record data passively flowing through the virtual service. This option enables
recording of the End-to-End Timingand client’s location. For HTTP virtual
services, device, browser, operating system, and top URLs metrics are also
included. No agents or changes are made to client or server traffic.

No Insights No client insights are recorded for this virtual service.

4 Configure user-defined logging under Client Log Settings. Click Log all headers to include all
the headers.

5 Enter the number of significant logs to be generated per second for this virtual service on
each SE as the Significant log throttle. The default value is 10 logs per second. Setting this value to 0
deactivates throttling for significant logs.

6 Enter User defined filters log throttle to limit the total number of UDF logs generated per
second for this virtual service on each SE.

7 Enable Non-significant logs to capture all client logs, including connections and requests. (A CLI
sketch of these logging settings follows this procedure.)

a Enter the number of non-significant logs to be generated per second for this virtual service
on each SE as the Non-significant log throttle. The default value is 10 logs per second. Setting this
value to 0 deactivates throttling for non-significant logs.

b Enter the Non-significant log duration in minutes.

8 Click Add Client Log Filter. In the Add Client Log Filter section configure the following.

a Enter the Filter Name.

b Enable Log all headers and configure the Duration in minutes.

c Select the condition for the match under Matching Filter. For example, Client IP.

d Select the criteria to match the filter. For example, Is 1.1.1.1. The filter will take effect if the
client IP address is 1.1.1.1.

e Click Add Item to add another criteria for the same filter.

f Filter based on the request’s Path and select the required Match criteria and add a string
group or enter a custom string, as required.

9 Click Next to view Step 4: Advanced.


Step 4: Advanced
When creating the virtual service, Step 4: Advanced provides advanced and optional
configuration for the virtual service.

Procedure

1 Under Performance Limit Settings, click Performance Limits and define the performance
limits for a virtual service. The limits applied are for this virtual service only, and are based on
an aggregate of all clients. See Rate Shaping and Throttling Options.

Configure limits per client using the application profile’s DDoS tab. Use policies or DataScripts
for more per-client limits.

2 To limit the incoming connections to this virtual service use Rate Limit Number of New TCP
Connections and Rate Limit Number of New HTTP Requests . Configure the following fields.

Option Description

Threshold Set the maximum threshold of new connections, requests, or packets
permitted (within the range 1-1000000000) from all clients for this virtual
service over the configured time period.

Time Period Enter the time (within the range 1-1000000000), in seconds, within which
the threshold is valid. Enter 0 to keep the threshold perpetually valid.

Action Select the Action. NSX Advanced Load Balancer performs this action upon
rate limiting.

3 Enter the maximum amount of bandwidth for the virtual service in Mbps for each SE using Max
Throughput.

4 Specify the maximum number of concurrent open connections using Max Concurrent
Connections. Connection attempts that exceed this number will be reset (TCP) or dropped
(UDP) until the total number of concurrent connections falls below the threshold.
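
The same aggregate limits can be applied from the CLI through the virtual service performance limits. A minimal sketch, assuming a virtual service named vs1 and that the performance_limits field is present in your release; the values are examples only.

[admin:controller]: > configure virtualservice vs1
[admin:controller]: virtualservice> performance_limits max_concurrent_connections 10000 max_throughput 200
[admin:controller]: virtualservice> save

Here max_throughput corresponds to the Max Throughput field (Mbps per SE) and max_concurrent_connections corresponds to the Max Concurrent Connections field described above.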


5 Under the section Quality of Service, configure the following.

Option Description

Weight Bandwidth may be constrained by the packets per second through the SE’s hypervisor,
saturation of the physical interface of the host server, or similar network constrictions.
NSX Advanced Load Balancer provides bandwidth allocation to the traffic
that this virtual service transmits, depending on the weight you assign.
A higher weight prioritizes traffic in comparison to other virtual services
sharing the same Service Engines.
This setting is only applicable if there is network congestion, and only for
packets sent from the Service Engine.

Fairness Fairness determines the algorithm that the NSX Advanced Load Balancer
uses to ensure that each virtual service can send traffic when the Service
Engine experiences network congestion.
The Throughput Fairness algorithm considers the weight defined for the
virtual service into account to achieve this.
Throughput and Delay Fairness is a more thorough algorithm to accomplish
the same task. It consumes greater CPU on the Service Engine when there
are larger numbers of virtual services.
This option is only recommended for latency-sensitive protocols.

6 Under Other Settings configure the following.

a Enable Auto Gateway to send response traffic to clients back to the source MAC
address of the connection, rather than sending it statically to the default gateway of the
NSX Advanced Load Balancer. If the NSX Advanced Load Balancer has the wrong default
gateway, no configured gateway, or multiple gateways, client-initiated return traffic will
still flow correctly. The NSX Advanced Load Balancer default gateway will still be used for
management and outbound-initiated traffic.

b Enable Use VIP as SNAT for health monitoring and sending traffic to the back-end servers
instead of the SE interface IP. On enabling this option the virtual service cannot be
configured in an active-active HA mode. For example, in environments in which firewalls
separate clients from services (for example, AWS), the feature provides a consistent
source IP for traffic to the origin server. During packet capture, you can filter on the VIP
and capture traffic on both sides of the NSX Advanced Load Balancer, thus eliminating
extraneous traffic.

c Select Advertise VIP via BGP to enable Route Health Injection using the BGP
configuration in the vrf context.

d Select Advertise SNAT via BGP to enable Route Health Injection for the source network
address translated (SNAT) floating IP address using the BGP configuration in the vrf
context.

e Enter the network address translated (NAT) floating source IP Address(es) for upstream
connection to servers as the SNAT IP Address.

f Enter the Server network or list of servers for cloning traffic in the Traffic Clone Profile.


g Enter a host name as Host Name Translation. If the host header name in a client HTTP
request is not the same as this field, or if it is an IP address, NSX Advanced Load Balancer
translates the host header to this name prior to sending the request to a server. If a server
issues a redirect with the translated name, or with its own IP address, the redirect’s location
header will be replaced with the client’s original requested host name. Host Name Translation
does not rewrite cookie domains or absolute links that might be embedded within the HTML
page. This option is applicable to HTTP virtual services only. This capability can be manually
created using HTTP request and response policies.

h Select the required Service Engine Group. Placing a virtual service in a specific Service
Engine group is used to guarantee resource reservation and data plane isolation, such
as separating production from test environments. This field may be hidden based on
configured roles or tenant options.

i Enable Remove Listening Port when VS Down for the Service Engine to respond to
requests to the VIP and service port with a RST (TCP) or ICMP port unreachable (UDP),
when the virtual service is down. See Remove Listening Port when VS down.

j Enable Scale out ECMP if the network itself performs flow hashing with ECMP in
environments such as GCP. This deactivates the redistribution of flows across Service Engines
for a virtual service.
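
Several of these options map directly to virtual service fields that can be set from the CLI. The following is a hedged sketch only; it assumes a virtual service named vs1 and that the use_vip_as_snat, enable_rhi, and remove_listening_port_on_vs_down fields exist in your release.

[admin:controller]: > configure virtualservice vs1
[admin:controller]: virtualservice> use_vip_as_snat
[admin:controller]: virtualservice> enable_rhi
[admin:controller]: virtualservice> remove_listening_port_on_vs_down
[admin:controller]: virtualservice> save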

7 Configure Role-Based Access Control (RBAC) for the virtual service using markers. See
Granular Role Based Access Controls.

a Click Add.

b Enter the Key and the corresponding Value.

8 Click Save to complete the virtual service configuration.

Disable a Virtual Service


A virtual service can be manually disabled by an administrator or an automated script. This section
covers the steps in detail.

While disabled, the virtual service is detached from the SEs hosting it. In addition:

n Existing connections are immediately terminated.

n The pool is placed in a grey (unused) state and is eligible for use by another virtual service.

n Health monitors are not sent to the pool’s servers while the virtual service is disabled.

If a virtual IP needs to be disabled, each virtual service must first be disabled. Once all virtual
services using the VIP have been disabled, NSX Advanced Load Balancer SEs will no longer
respond to ARPs or network requests for the VIP.

Using UI
The following are the steps to disable a virtual service from the Controller’s web interface:

1 Navigate to the edit wizard for the virtual service.


2 Click the Enabled button in the Settings tab.

3 The button will change from green to red when the virtual service is disabled.

4 Click Save to commit the change.

Using CLI
Execute the following commands to enable a virtual service from the CLI:

: > configure virtualservice Test-VS


: > enabled
: > save

To disable a virtual service from the CLI:

: > configure virtualservice Test-VS


: > no enabled
: > save

Find Virtual Service UUID


Each object within the NSX Advanced Load Balancer configuration is assigned a unique identity.
Multiple objects in different tenants may have the same name. For instance, multiple tenants may
have a virtual service named “web.”

For automated interaction with NSX Advanced Load Balancer, particularly through the API, it
is useful to know how to obtain the UUID of objects such as a virtual service. For the example
mentioned, it is recommended to have Tenant Header (X-Avi-Tenant) set in the API calls so the
Controller can resolve the name to the correct tenant. The details for the header insertion are in
the SDK and API guide.

Find UUID through the API


https://10.1.1.1/api/virtualservice?name=FTP-VS&fields=uuid
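
When the same object name exists in multiple tenants, include the tenant header mentioned above so the Controller resolves the name in the correct tenant. A minimal curl sketch; the Controller IP, credentials, and tenant value are placeholders:

curl -k -u admin:<password> -H "X-Avi-Tenant: admin" \
    "https://10.1.1.1/api/virtualservice?name=FTP-VS&fields=uuid"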

Find UUID through the GUI


Click into the virtual service. Since the GUI executes API calls against the Controller, the UUID is
reflected in the URL.

https://10.1.1.1/#/authenticated/applications/virtualservice/virtualservice-0523452d-
c301-4817-a5e0-ee66b95bd287/analytics?timeframe=6h

Find UUID through the CLI


: > show virtualservice FTP-VS
+---------------------------+-----------------------------------------------------+
| Field | Value |
+---------------------------+-----------------------------------------------------+
| uuid | virtualservice-0523452d-c301-4817-a5e0-ee66b95bd287 |
| name | FTP-VS |


| address | swapnil2 |
| ip_address | 10.130.129.14 |

Reference Objects by Name


For objects created within NSX Advanced Load Balancer, it is possible to reference objects by
name rather than by UUID. However, there are objects, such as subnets, created outside of NSX
Advanced Load Balancer and pushed down to the Controller, such as through OpenStack. For
these objects, they must be referenced by API calls via their UUID, not through their name.
Therefore, it is generally considered best practice to have API calls reference object UUIDs rather
than names.
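
As an illustration of name-based references, a reference inside an API body can use a name query instead of a UUID for objects created within NSX Advanced Load Balancer. A sketch only; the pool name below is hypothetical:

"pool_ref": "/api/pool?name=web-pool-01"

For objects pushed down from an external system, the equivalent reference must use the UUID form, for example "/api/pool/pool-<uuid>".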

Virtual Service Placement Settings


This topic explains the virtual service placement settings and how they influence Service Engine network attachment.

Due to the distributed nature of the NSX Advanced Load Balancer Service Engines, the Controller
directly extends a Service Engine’s NICs to the virtual service and pool member IP networks.

The NSX Advanced Load Balancer Controller enables the user to make this network attachment
decision manually by providing options on the Virtual Service Placement Settings menu in
conjunction with the static routes. For more details, see Configuration.

Network Scenarios
Consider a use case with the following networks, as discovered by the Controller. The discovered
networks are considered as directly-connected networks by the NSX Advanced Load Balancer
Controller.

n 10.10.10.0/24

n 10.10.20.0/24

n 10.10.30.0/24

Servers on an External Network


Virtual service placement fails without a static route to the remote server.


Servers on the Discovered Network


By default, a Service Engine directly extends its connectivity to the server network by adding an
NIC. To force an SE to reach a server by Layer 3, select Prefer Static Routes on Virtual Service
Placement Settings and configure a static route for the server IP address.


Note The first option on Virtual Service Placement Settings does not apply to the virtual IP unless
the second option is selected.


Virtual Service IP on the Discovered Network


By default, an SE extends its connectivity to the virtual service IP network directly by adding an
NIC. To force an SE to attach to the selected network for a virtual service IP, select Prefer Static
Routes and Use Routes for Network Resolution of VIP on Virtual Service Placement Settings and
configure a static route for the virtual service IP.


In this case, the Layer 3 switch must have a proper static route entry to reach the virtual service.

Note In this example, choosing the second option without choosing the first has no effect on the
virtual service IP network selection.

Configuration
The Virtual Service Placement Settings menu is displayed during the initial installation menu of
the Controller and can be changed after the installation by navigating to Infrastructure > Cloud
and clicking the edit icon. The Static Routes menu is also available.

You can specify the following placement settings:

Select the Prefer Static Routes vs Directly Connected Network and Use Static Routes for Network
Resolution of VIP check boxes to define placement settings.

Note Virtual Service Placement Settings are configured on a per-cloud basis.
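
The equivalent settings can also be applied from the CLI on the cloud object. A minimal sketch, assuming the prefer_static_routes and enable_vip_static_routes field names are present in your release:

[admin:controller]: > configure cloud Default-Cloud
[admin:controller]: cloud> prefer_static_routes
[admin:controller]: cloud> enable_vip_static_routes
[admin:controller]: cloud> save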

HTTP Policy Reuse


HTTP policies created once can be shared across multiple virtual services. This section details the
steps to configure the HTTP policy sets and apply them to virtual services.

Creating HTTP Policy Sets and Attaching them to Virtual Services


To create HTTP policy sets.

n Log in to the Controller and enter the following commands.

admin@abc-controller:~$ shell
Login: admin
Password:

n Create a standalone http policy set named httppolicyset_demo. Configure the required rules
under the policy set and save it. See the following output for more details on the configuration.

+------------------------+----------------------------------------------------+
| Field | Value |
+------------------------+----------------------------------------------------+
| uuid | httppolicyset-dd4e996a-15cc-456c-ad56-086bf21b6e75 |
| name | httppolicyset_demo |
| http_request_policy | |
| rules[1] | |
| name | Demo_Rule1 |


| index | 1 |
| enable | True |
| match | |
| path | |
| match_criteria | CONTAINS |
| match_case | INSENSITIVE |
| match_str[1] | index.html |
| switching_action | |
| action | HTTP_SWITCHING_SELECT_LOCAL |
| status_code | HTTP_LOCAL_RESPONSE_STATUS_CODE_429 |
| log | True |
| is_internal_policy | False |
| tenant_ref | admin |
+------------------------+----------------------------------------------------+

n Attach the httppolicyset_demo to the virtual service required.

[admin:abc-controller]: configure virtualservice *VS1*


[admin:abc-controller]: virtualservice> http_policies
[admin:abc-controller]: virtualservice> http_policies http_policy_set_ref

n Press the Tab key to display the list of the httppolicyset objects.

VS1-Default-Cloud-HTTP-Policy-Set-0 VS2-Default-Cloud-HTTP-Policy-Set-0.
*httppolicyset_demo*

n Attach the policy set and save.

[admin:abc-controller]: virtualservice> http_policies http_policy_set_ref


*httppolicyset_demo*
New object being created
[admin:abc-controller]: virtualservice:http_policies>save

n To reattach the HTTP policy to other virtual services, repeat the previous two steps for each
virtual service.

Block an IP Address from Access to a Virtual Service


This section explains the steps to block an IP address or multiple addresses.

A client’s IP address may need to be prevented from accessing an application for several reasons.
Likewise, blocking a client’s access can be accomplished in numerous ways. While this article
focuses on IP addresses, a client also could be identified based on other identifiers such as a
username, session cookie, or SSL client certificate.

Blocking a Client IP
Navigate to virtual service Edit > Rules tab > Network Security tab > New Rule.

A network security policy can be used to deny a single IP address or multiple addresses. For large
IP lists, consider creating a blocklist (Templates > Groups > IP Group). This object can contain
extensive lists of IP addresses or network ranges.


An IP group also may be leveraged across multiple virtual-service network security policies. This
simplifies adding or removing IP addresses, which can be performed for many applications by
changing a single IP group.

DataScript
For finer control, DataScripts may be used to evaluate additional criteria before discarding a client
connection.

if avi.vs.client_ip() == "10.1.2.3" then
   avi.close_conn()
end
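
For larger lists, the same check can reference an IP group instead of a hard-coded address. The sketch below is illustrative only; the group name Blocked-IPs is hypothetical, and it assumes the avi.ipgroup.contains() DataScript function is available in your release.

-- Close the connection if the client IP is in the "Blocked-IPs" IP group (hypothetical name)
local matched = avi.ipgroup.contains("Blocked-IPs", avi.vs.client_ip())
if matched then
   avi.close_conn()
end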

Impact of Changes to Min-Max Scaleout Per Virtual Service


Each NSX Advanced Load Balancer Service Engine (SE) group has settings for minimum and
maximum scaleout per virtual service (VS). These settings govern the number of SEs across which
a VS can be scaled. This section explains the impact of changing these two settings on virtual
services.

For more information on scale out settings, see Service Engine Group.

A virtual service can be affected in the following scenarios:

1 The min-max settings of its SE group are dynamically changed. All virtual services placed on
the SE group are affected.

2 It is moved to another SE group having different settings.

Impact of Changes
Changes to the min_scaleout_per_vs or max_scaleout_per_vs settings of an SE group result in
the same behavior as described above with the following exceptions:

n An increase in the min_scaleout_per_vs of an SE group increases the number of virtual
service placements only if the new minimum value is greater than the current number of virtual
service placements.

n If a VS is disabled and re-enabled, the number of placements is capped at
max_scaleout_per_vs.

n When migrating a VS from one SE group to another, the NSX Advanced Load Balancer
disregards the number of placements made within the group. The VS is placed according to
the min_scaleout_per_vs value of the destination group.
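
Both settings are fields on the SE group object and can be changed from the CLI. A minimal sketch against the Default-Group; the values shown are examples only:

[admin:controller]: > configure serviceenginegroup Default-Group
[admin:controller]: serviceenginegroup> min_scaleout_per_vs 2
[admin:controller]: serviceenginegroup> max_scaleout_per_vs 4
[admin:controller]: serviceenginegroup> save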

Scenarios
The effect on virtual services when the minimum-maximum scaleout per VS settings of the SE
group are changed is illustrated in the following section.


To understand the examples, it should be noted that internally, the number of SEs requested for a
VS is the sum of the following two numbers:

1 The minimum scaleout per VS of its SE group (min_scaleout_per_vs).

2 The user scaleout factor - The user scaleout factor is an internal variable which starts at 0 for
all virtual services. This number increases by 1 when a user scales out and decreases by 1 when
the user scales in.

General Behavior
Following are the rules governing all changes of minimum and maximum scale per VS:

n Decreasing the minimum scale per VS has no effect on the scale of existing virtual services in
the group.

n For this case, the user scaleout is increased by the amount that the minimum scale per VS
is decreased.

n For an existing VS, if the user wishes it to be scaled at the minimum level of the SE group,
the user must explicitly scale in.

n Increasing the minimum scale per VS only increases the scale of existing VSs if the new
minimum is greater than the current scale of the VS.

n Increasing or decreasing the maximum scale per VS of an SE group has no effect on the scale
of existing VSs in the group.

n For VSs with more SEs than the new maximum scale, the user is still able to manually scale
in.

n A VS which is disabled and re-enabled preserves its existing scale, capped by the current
maximum scale per VS of the SE group.

n A VS which is moved to another SE group will be placed on the minimum scale per VS of the
new SE group.

Changing SE Group Settings


The effect on VSs in an SE group when its minimum or maximum scale per VS is changed is
illustrated in the following section:

n Increasing Minimum Scale per VS

For a VS without any user scaleout, increasing the minimum scale of the SE group increases the
number of SEs of the VS to the new minimum.

Action                      Num of SEs Requested    User scaleout    Min scaleout per VS
initial state               1                       0                1
min scale per VS: 1 → 2     2                       0                2


For a VS with user scaleout, increasing the minimum scale increases the number of SEs, only if the
new minimum is greater than the current scale of the VS.

Example 1

Action                      Num of SEs Requested    User scaleout    Min scaleout per VS
initial state               1                       0                1
user scale out              2                       1                1
min scale per VS: 1 → 2     2                       0                2

Example 2

Action                      Num of SEs Requested    User scaleout    Min scaleout per VS
initial state               2                       1                1
min scale per VS: 1 → 3     3                       0                3

n Decreasing Minimum Scale per VS

Decreasing the minimum scale per VS of an SE group will have no effect on the scale of the
existing VS. To maintain the same number of SEs, the user scaleout is increased by the amount of
decrease in minimum scale.

Example

Action                      Num of SEs Requested    User scaleout    Min scaleout per VS
initial state               2                       0                2
min scale per VS: 2 → 1     2                       1                1
user scale in               1                       0                1

The purpose of this behavior is to preserve the current state of all VSs residing inside an SE group
when min scale per VS is reduced. By increasing the user scaleout by the amount of decrease in
min_scaleout_per_vs, we keep the number of SEs requested the same.
If the desired outcome in the above example is to scale every VS in the SE group down to 1 SE,
there are three options:

1 After changing the SE group settings, manually scale down every VS to reduce the user
scaleout to 0.

2 Set the maximum scale per VS of the SE group to 1. Disable and enable all VSs (maximum
scale can also be reduced after the disable).

3 Move all VSs in the SE group to another SE group where the min_scaleout_per_vs is 1.

n Changing Maximum Scale per VS

Changing the maximum scale per VS has no effect on the other variables.


If the maximum scale per VS of an SE group is reduced, all VSs within the SE group retain the
same number of SEs. So the number of SEs requested for a VS in this situation can be greater
than the new maximum scale per VS. The user has the option of manually scaling in to reduce this
number to the new max.

Example

Action                      Num of SEs Requested    User scaleout    Min scaleout per VS    Max scaleout per VS
initial state               3                       2                1                      4
max scale per VS: 4 → 2     3                       2                1                      2
user scale in               2                       1                1                      2

n VS Disable and Enable

When a VS is disabled and then enabled, it is placed on (current min scale per VS + number of
user scaleouts), capped by the current max scale per VS of the SE group.

If a VS is disabled and then enabled without changing the scale settings of the SE group, the VS
remains at the same scale.

Example 1

Action                      Num of SEs Requested    User scaleout    Min scaleout per VS    Max scaleout per VS
Initial state               3                       2                1                      4
VS disabled                 0                       2                1                      4
VS enabled                  3                       2                1                      4

Example 2

Action                      Num of SEs Requested    User scaleout    Min scaleout per VS    Max scaleout per VS
Initial state               2                       0                2                      4
VS disabled                 0                       0                2                      4
Min scale per VS: 2 → 1     0                       1                1                      4
VS enabled                  2                       1                1                      4

Example 3


Action                      Num of SEs Requested    User scaleout    Min scaleout per VS    Max scaleout per VS
Initial state               4                       3                1                      4
Max scale per VS: 4 → 1     4                       3                1                      1
VS disabled                 0                       3                1                      1
VS enabled                  1                       0                1                      1

n Moving a VS to Another SE Group With Different Settings

Moving a VS to another SE group will always place the VS on the min_scaleout_per_vs of the
new SE group.

Example 1

Action                                            Num of SEs Requested    User scaleout    Min scaleout per VS    Max scaleout per VS
Initial state                                     2                       0                2                      4
User scale out                                    3                       1                2                      4
VS moved to new SE group with (min: 1, max: 2)    1                       0                1                      2

Since the VS has been moved to a new SE group, the NSX Advanced Load Balancer does not
attempt to preserve its state and adheres to the settings of the new SE group.

Example 2

Action                                            Num of SEs Requested    User scaleout    Min scaleout per VS    Max scaleout per VS
Initial state                                     1                       0                1                      4
VS moved to new SE group with (min: 2, max: 2)    2                       0                2                      2

A legacy active-standby SE group effectively has a minimum scale per VS of 2 and a maximum
scale per VS of 2.

Summary
The following table summarizes expected changes in various scenarios:

Action                         VS: num of SEs
↑ min_scaleout_per_vs          Increases only if the current number of SEs < min_scaleout_per_vs
↓ min_scaleout_per_vs          Stays the same
↑ max_scaleout_per_vs          Stays the same
↓ max_scaleout_per_vs          Stays the same
VS disable/enable              Caps at max_scaleout_per_vs
Move VS to another SE group    min_scaleout_per_vs of new SE group

Enhanced Virtual Hosting


Enhanced Virtual Hosting (EVH) enables virtual hosting on a virtual service irrespective
of Server Name Indication (SNI). This section explains the usage of enhanced virtual hosting (EVH)
in NSX Advanced Load Balancer.

Virtual services can be of two types, namely:

n Non-virtual Hosting Enabled Virtual Service

n Virtual Hosting Enabled Virtual Service

Non-virtual Hosting Enabled Virtual Service


The Virtual Hosting VS option in virtual service configuration is deactivated by default. When you
create a virtual service with this option deactivated, then that particular virtual service would be
non-virtual hosting enabled virtual service.

Virtual Hosting Enabled Virtual Service


n SNI Virtual Hosting

Enabling the Virtual Hosting VS option for a virtual service indicates the virtual service is a parent
or child of another service, in a server name indication (SNI) deployment. Server Name Indication
(SNI) is a method of virtual hosting multiple domain names for an SSL enabled virtual IP.

For more information on virtual hosting enabled virtual service, see Server Name Indication,
Wildcard SNI Matching for Virtual Hosting.

n Enhanced Virtual Service

The virtual service placement for an EVH service follows the same conditions as SNI parent-child. A
parent can host either SNI or EVH children, but not both at the same time. Only children of the
same virtual hosting type can be associated with a parent virtual service; that is, if the parent
virtual service is of SNI type, the associated children must also be of SNI type. Similarly, if the
parent virtual service is of enhanced virtual hosting type, the children associated with this parent
virtual service must be of the same type, that is, EVH. An EVH child cannot be associated with an SNI
parent, and vice versa.

SNI and EVH can be compared as shown in the table:


Server Name Indication (SNI): Multiple domains can be configured under a child virtual service and
are owned by that virtual service.
Enhanced Virtual Hosting (EVH): The same domain can be configured under multiple children, but with
different path match criteria.

SNI: SNI can only handle HTTPS traffic.
EVH: EVH children can handle both HTTP and HTTPS traffic.

SNI: The entire connection to the parent virtual service, including all its requests, is handled by
one child virtual service, selected during the TLS handshake.
EVH: The connection is always handled by the parent virtual service, and individual requests in that
connection are handled by the selected child virtual service based on the matching host header, URI
path, and path match criteria configured under the child virtual service.

n Parent virtual services have the service ports configured on them and need to have SSL
enabled on them.

n In the child virtual service, the FQDN field is used to specify the domains for which the virtual
service should be selected. HOST+PATH+match_criteria defines which child virtual service
under a parent virtual service will process a given request.

NSX Advanced Load Balancer supports the EVH switching of different requests (within one
connection) between the child virtual services of a single parent virtual service. Unlike SNI, which
switches only TLS connections based on a one-to-one mapping of children to FQDNs, EVH maps one
FQDN to many children based on the resource path requested.

EVH Child Selection


The parent EVH virtual service terminates the TCP/SSL connection and performs HTTP request line
processing. Based on the URI, host header, and match criteria, a lookup key is used to find the
matching child.

Path Lookup Criteria


The following are the path lookup criteria supported:

n Equals

n Begins with

n Regex pattern matches

The above search order will be executed to find the matching child virtual service.

Notes
When configuring EVH for a virtual service, note the following:

n A virtual hosting virtual service should be either SNI or EVH.

n If the parent virtual service has EVH defined, then:

n The child virtual service cannot have certs attached or SSL Profile attached to them.


n Multiple vh_matches configurations with the same host value are not allowed under a child
virtual service. A child virtual service can have multiple paths configured under a single
host.

n Two or more child virtual services cannot share the same combinations.

n A parent virtual service cannot be a child of another parent virtual service.

n HTTP/2 is not supported.

n OCSP stapling will not work for certificates other than the first (default) certificate.

Configuring Enhanced Virtual Hosting


While creating the virtual service, you can select if the virtual service is either a parent or a child
virtual hosting virtual service. This section shows how EVH is configured in a virtual service using
the NSX Advanced Load Balancer Controller UI.

Procedure

1 Navigate to Applications > Virtual Services

2 Either click CREATE VIRTUAL SERVICE or edit an existing virtual service.

3 Click Virtual Hosting VS.


4 Choose either Parent or Child.

Option Description

Parent The parent virtual service in EVH is configured without any vh_matches
configuration. The virtual service receives all traffic and performs TLS
termination, if necessary, before receiving requests.
The parent virtual service allows multiple certificates to be configured in this
virtual hosting and for SSL connections, the parent virtual service picks the
matching server certificate based on the TLS server name requested by the
client and cipher used. If the server name is requested or no match is found,
the first certificate configured on the virtual service is used. For TLS mutual
authentication, the PKI profile must be configured only on the parent virtual
service.
After the TLS handshake is complete, the parent receives all the requests,
matches them against the host names and paths configured on its children,
selects the matching child virtual service, and hands off the request to
that virtual service. If none of the child virtual services' configurations match the
request, the request is processed by the parent virtual service configuration.
Essentially, the connection stays with the parent, but requests keep switching to
its children for processing.

Child The child virtual service in EVH is configured with host and path match
configuration. The parent virtual service will do the TCP and SSL termination
and request processing is sent to this virtual service if the request host
and URL matches the vh_matches configuration in the child virtual service.
Multiple hosts, each with multiple path matches, can be configured under
a child virtual service. Multiple child virtual services with non-conflicting
vh_matches configurations can be associated with a parent virtual service. The
child virtual service cannot do TLS termination and does not accept SSL
configuration such as SSL profile, SSL key and certificate, PKI profile etc.
All request or response specific configuration settings from application
profile, policies, DataScript, caching and compression, WAF profile
configured on the child virtual service apply on the request being processed
by this child virtual service.

5 Select Enhanced Virtual hosting as the Virtual Hosting Type. Ensure that both parent and its
child virtual service have the same Virtual Hosting Type.

a In case of a child virtual service, under Virtual Hosting Match Criteria enter the virtual
service acting as Virtual Hosting Parent.

b Under Domain, select the Host or domain name.

c Select the match Criteria and one or more string group to match the host or domain name
specified.

6 Complete configuring Step 1: Settings. Click Next.


7 Complete the virtual service configuration and click Save.
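
The parent and child relationship can also be expressed through the CLI. The following is a hedged sketch only; it assumes the virtual service fields type, vh_type, and vh_parent_vs_ref and the enum value VS_TYPE_VH_ENHANCED are present in your release, and the virtual service names are examples. Host and path matches for the child are configured separately under its vh_matches configuration.

[admin:controller]: > configure virtualservice evh-parent
[admin:controller]: virtualservice> type VS_TYPE_VH_PARENT
[admin:controller]: virtualservice> vh_type VS_TYPE_VH_ENHANCED
[admin:controller]: virtualservice> save

[admin:controller]: > configure virtualservice evh-child-1
[admin:controller]: virtualservice> type VS_TYPE_VH_CHILD
[admin:controller]: virtualservice> vh_type VS_TYPE_VH_ENHANCED
[admin:controller]: virtualservice> vh_parent_vs_ref evh-parent
[admin:controller]: virtualservice> save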

SSL Profile and Certificates in an EVH Virtual Service


Unlike a normal virtual service or an SNI virtual service, where only two certificates are allowed
(one each of type RSA and EC), the EVH parent allows configuration of multiple domain name
certificates.

The TLS server name will be looked up against the configured certificates and the matching
certificate will be served on the TLS connection. If no TLS server name is present or TLS server
name does not match any common name/ SAN/ DNS information in any of the certificates
configured, the first certificate in the list of certificates (default certificate) configured will be served
for that connection.


Each child virtual service can have its own application profile, WAF profile, and so on.

Application Metrics
With EVH, the connection will technically be received by the parent virtual service and each
individual request will be processed by the matching child virtual service.

Each request maps to the metrics data of the matching child virtual service, and request-level
metrics are collected on that child. Connection-level metrics, including TCP and SSL, are
collected on the parent virtual service.

Note Features under Virtual Services > Security, for example, SSL Certificate, SSL/ TLS version,
SSL Score are not applicable for a child virtual service.

Custom Controller Utilization Alert Thresholds


An NSX Advanced Load Balancer Controller cluster continuously collects CPU, disk, and memory
utilization metrics from the cluster nodes. When the usage threshold is exceeded, a system alert is
raised to notify the admin.

By default, this threshold has been preconfigured to be 85% for CPU, disk, and memory. In
some deployments, this predefined threshold may not be conservative enough, and a lower value
is desired. The following provides an example of modifying these thresholds to meet your
deployment’s requirements.


Threshold Configuration Options


Custom thresholds can independently be defined for:

n CONTROLLER_CPU_THRESHOLD

n CONTROLLER_MEM_THRESHOLD

n CONTROLLER_DISK_THRESHOLD

When defining the configuration, there are two threshold options to be aware of:

n watermark_thresholds: Threshold value for which event is raised. There can be multiple
thresholds defined. Health score degrades when the target is higher than this threshold.

n reset_thresholds: The value used to reset the event state machine.

Configure Controller Thresholds


The following configuration provides an example of overwriting the default values. This example
will configure a watermark_threshold of 75 and a reset_threshold of 60 for CPU, disk, and
memory. With this configuration, if resource utilization of any of these three exceeds 75%, an alert
will be raised for that resource. Once utilization drops below 60%, the alert will be reset.

[admin:controller]: > configure systemconfiguration


[admin:controller]: systemconfiguration> controller_analytics_policy
[admin:controller]: systemconfiguration:controller_analytics_policy> metrics_event_thresholds
[admin:controller]: systemconfiguration:controller_analytics_policy> metrics_event_thresholds
metrics_event_threshold_type controller_cpu_threshold
New object being created
[admin:controller]: systemconfiguration:controller_analytics_policy:metrics_event_thresholds>
reset_threshold 60 watermark_thresholds 75
[admin:controller]: systemconfiguration:controller_analytics_policy:metrics_event_thresholds>
save

[admin:controller]: systemconfiguration:controller_analytics_policy> metrics_event_thresholds


metrics_event_threshold_type controller_mem_threshold
New object being created
[admin:controller]: systemconfiguration:controller_analytics_policy:metrics_event_thresholds>
reset_threshold 60 watermark_thresholds 75
[admin:controller]: systemconfiguration:controller_analytics_policy:metrics_event_thresholds>
save

[admin:controller]: systemconfiguration:controller_analytics_policy> metrics_event_thresholds


metrics_event_threshold_type controller_disk_threshold
New object being created
[admin:controller]: systemconfiguration:controller_analytics_policy:metrics_event_thresholds>
reset_threshold 60 watermark_thresholds 75
[admin:controller]: systemconfiguration:controller_analytics_policy:metrics_event_thresholds>
save
[admin:controller]: systemconfiguration:controller_analytics_policy> save
[admin:controller]: systemconfiguration> save

+----------------------------------+------------------------------------+
| Field | Value |


+----------------------------------+------------------------------------+

| controller_analytics_policy | |
| metrics_event_thresholds[1] | |
| reset_threshold | 60.0 |
| watermark_thresholds[1] | 75 |
| metrics_event_threshold_type | CONTROLLER_CPU_THRESHOLD |
| metrics_event_thresholds[2] | |
| reset_threshold | 60.0 |
| watermark_thresholds[1] | 75 |
| metrics_event_threshold_type | CONTROLLER_MEM_THRESHOLD |
| metrics_event_thresholds[3] | |
| reset_threshold | 60.0 |
| watermark_thresholds[1] | 75 |
| metrics_event_threshold_type | CONTROLLER_DISK_THRESHOLD |
+----------------------------------+------------------------------------+

Enabling Traffic on VIP


This section describes the steps to enable or disable traffic on a virtual service IP through NSX
Advanced Load Balancer.

A virtual service advertises itself by responding to ARP requests to receive traffic. However, this
can be disabled by using the no traffic_enabled command. On configuring this command, the
specific virtual service IP address stops responding to ARP requests.

This command applies only to VMware, VMware NSX, and Linux server cloud environments.

Configuring Enable Traffic using NSX Advanced Load Balancer CLI


The configuration knob traffic_enabled is a VirtualService property and is enabled by default.

The following are the CLI commands to enable and disable this feature for a virtual service vs1:

[admin:admin-ipv6-cntrlr]: > configure virtualservice vs1


[admin:admin-ipv6-cntrlr]: virtualservice> traffic_enabled
[admin:admin-ipv6-cntrlr]: virtualservice> save

Disabling Enable Traffic


[admin:admin-ipv6-cntrlr]: > configure virtualservice vs1
[admin:admin-ipv6-cntrlr]: virtualservice> no traffic_enabled
[admin:admin-ipv6-cntrlr]: virtualservice> save

Configuring Enable Traffic using NSX Advanced Load Balancer UI


To enable the Traffic Enabled option:

1 Navigate to Applications > Virtual Services and click Advanced Setup.

2 Choose the required cloud option and select the Traffic Enabled check box.


Wildcard SNI Matching for Virtual Hosting


Virtual services have a configuration option to enable virtual hosting support. Enabling this option
within a virtual service indicates the virtual service is a parent or child of another service in a server
name indication (SNI) deployment.

During the SSL handshake between a client and a parent virtual service, the parent virtual service
checks the domain names of its children virtual services for a match with the domain name in
the client’s handshake. If there is a match, the parent virtual service passes the client request to
the child virtual service with the matching domain name. Wildcards can be used to match the
beginning or end of the domain name.

Wildcards
Within a child virtual service’s configuration, a wildcard character can be used at the beginning or
end of the domain name:

n *.example.com - Matches on any labels at the beginning of the domain name if the rest
of the domain name matches. This example matches mail.example.com, app1.example.com,
app1.test.example.com, app1.test.b.example.com, any.set.of.labels.in.front.of.example.com,
and so on.

n .example.com - Matches on any set of first labels or no first label. This example matches not
only on any domain name matched by *.example.com but also on “example.com” (with no
other label in front).

n www.example.* - Matches on any set of ending labels if the other labels


match. This example matches www.example.com, www.example.org, www.example.edu,
www.example.edu.any.set.of.labels.after.www.example, and so on.

A domain name can contain any of these wildcard characters, in the positions shown. The use of
wildcards in other label positions within a domain name is not supported. Likewise, using multiple
wildcard characters within the same domain name is not supported.


Longest Match is Used


If there are multiple matches, the longest (most specific) match is used.

For example, suppose a parent virtual service has the following child virtual services:

n VS1: matches on domain name *.example.com

n VS2: matches on domain name *.test.example.com

If the server certificate contains a domain name that ends with “.test.example.com,” the certificate
matches on VS2 but not on VS1.

Configuring Wildcard SNI Matching for a Child Virtual Service


This section explains the steps to configure wildcard SNI matching for a child virtual service.

Procedure

1 Access the Advanced Setup popup for the child virtual service:

a Navigate to Applications > Virtual Services.

b Click the edit icon next to the virtual service name.

2 On the Settings tab, select Virtual Hosting VS, then select Child. This displays the Domain
Name field.


3 Enter the domain name to use for matching. For wildcard matching, enter the wildcard
character.

4 To save the virtual service configuration, click Next until the Review tab appears. If creating a
new pool, specify a name before saving the pool.

5 Configure other settings if applicable, then click Save.

Service Engine Group


Service Engines are created within a group, which contains the definition of how the SEs should be
sized, placed, and made highly available. This section discusses creating and configuring a service
engine group.

Each cloud will have at least one SE group. The options within an SE group might vary based
on the type of cloud they exist in and the cloud's settings, such as no access versus write
access mode. An SE can exist only within one group. Each group acts as an isolation domain.
SE resources within an SE group can be moved around to accommodate virtual services, but SE
resources are never shared between SE groups.


Depending on the setting, any change made to an SE group:

n Might be applied immediately

n Might only be applied to SEs created after the change is made

n Might require existing SEs to first be disabled before the change takes effect.

Multiple SE groups can exist within a cloud. A newly created virtual service will be placed on the
default SE group. This can be changed through the Applications > Virtual Services page while
creating a virtual service through the Advanced Setup wizard.

To move an existing virtual service from an SE group to another, the virtual service should be
disabled, moved, and then re-enabled. SE groups provide data plane isolation. Therefore, moving
a virtual service from one SE group to another is disruptive to existing connections through the
virtual service.
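
A hedged CLI sketch of the disable, move, and re-enable sequence, assuming a virtual service named vs1 and a destination group named SE-Group-B:

[admin:controller]: > configure virtualservice vs1
[admin:controller]: virtualservice> no enabled
[admin:controller]: virtualservice> save

[admin:controller]: > configure virtualservice vs1
[admin:controller]: virtualservice> se_group_ref SE-Group-B
[admin:controller]: virtualservice> save

[admin:controller]: > configure virtualservice vs1
[admin:controller]: virtualservice> enabled
[admin:controller]: virtualservice> save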

Note SE group properties are cloud-specific. Based on the cloud configuration, some of the
properties discussed in this section may not be available.

To configure the range of port numbers used to open back-end server connections, run the
commands below.

configure serviceenginegroupproperties
configure serviceenginegroup Default-Group ephemeral_portrange_start 5000
configure serviceenginegroup [name] ephemeral_portrange_start 4096
configure serviceenginegroup [name] ephemeral_portrange_end 61440

Note By default, the range starts with 4096 and ends with 61440.

Creating SE Group
This section shows how to create an SE group in the NSX Advanced Load Balancer.

Procedure

1 From the NSX Advanced Load Balancer Controller, navigate to Infrastructure > Cloud
Resources > Service Engine Group.

2 Select the cloud within which the SE group has to be created.

3 Click Create.

4 Configure the Basic Settings.

5 Configure the Advanced settings.

6 Click Save.

SE Group Basic Settings


This section describes how to configure the basic settings for an SE group: High availability and
virtual service placement, SE capacity and limit, memory allocation, and licenses. The options
discussed in this section are specific to an SE group created under the Default-Cloud.


Procedure

1 Under the Basic Settings tab, enter the Service Engine Group Name.

2 There are several metrics, such as End to End Timing, Throughput, Requests, and
more. The NSX Advanced Load Balancer Controller updates these metrics periodically, either
at a default interval of five minutes, or as defined in the Metric Update Frequency. Enable
Real Time Metrics to gather detailed metrics aggressively for a limited period, as required.

n Enter 0 to collect metrics aggressive to indefinite periods of time

n Enter a value, for example, 30 min to collect real time metrics for the defined 30 minutes.
After this period of time elapses, the metrics collection reverts to slower polling. Real time
metrics is helpful when troubleshooting.


3 Under High Availability & Placement Settings configure the behavior of the SE group in the
event of an SE failure. You can also define how the load is scaled across SEs. Select one of the
modes, as required.

Option Description

Legacy HA (Active/Standby) Select this mode to mimic a legacy appliance load balancer for easy
migration to Avi Vantage. Only two Service Engines may be created. For every virtual service
active on one, there is a standby on the other, configured and ready to take over in the event
of a failure of the active SE. There is no Service Engine scale out in this HA mode.
n Health Monitoring on Standby Service Engine(s) to enable active health monitoring from the
standby SE for all placed virtual services.
n Distribute Load to use both the active and standby Service Engines for virtual service
placement in the legacy active/standby HA mode.
n Auto-redistribute Load to make failback automatic so that virtual services are migrated back
to the SE that replaces the failed SE.

Elastic HA (Active/Active) Select this mode to permit up to N active SEs to deliver virtual
services, with the capacity equivalent of M SEs within the group ready to absorb SE failure(s).
In case of Elastic HA (Active/Active), under VS Placement across Service Engines, select the
mode required.
n Compact, for NSX Advanced Load Balancer to spin up and fill up the minimum number of SEs. It
tries to place virtual services on SEs which are already running.
n Distributed (default), for NSX Advanced Load Balancer to maximize the virtual service
performance by avoiding placements on existing SEs. Instead, it places virtual services on
newly spun-up SEs, up to the maximum number of Service Engines.

Elastic HA (N+M Buffer) Select this mode to distribute virtual services across a minimum of two
SEs. In case of Elastic HA (N+M Buffer), under VS Placement across Service Engines, select the
mode required.
n Compact (default), for NSX Advanced Load Balancer to spin up and fill up the minimum number
of SEs. It tries to place virtual services on SEs which are already running.
n Distributed, for NSX Advanced Load Balancer to maximize the virtual service performance by
avoiding placements on existing SEs. Instead, it places virtual services on newly spun-up SEs,
up to the maximum number of Service Engines.

4 In the field Virtual Services per Service Engine enter the maximum number of virtual services
(from 1 to 1000), that the Controller cluster can place on a single Service Engine in the group.

5 Select Service Engine Self-Election to enable SEs to elect a primary amongst themselves in
the absence of connectivity to the Controller. This ensures Service Engine high availability in
handling client traffic even in headless mode.


6 Under Service Engine Capacity and Limit Settings, enter the Max Number of Service Engines
to define the maximum number of service engines that can be created within an SE group.
This number, combined with the virtual services per SE setting, dictates the maximum number
of virtual services that can be created within an SE group. If this limit is reached, new virtual
services may not be deployed. Their status will be grey, indicating an un-deployed state. This
setting can be useful to prevent NSX Advanced Load Balancer from consuming too many
virtual machines.

7 Configure Memory Allocation.

a Enable Host Geolocation Profile to provide extra configuration memory to support a large
geo DB configuration.

b Enter the value of total SE memory reserved for application caching (in percentage) as
Memory for Caching. Restart the SE for this change to take effect. Available Memory for
Connections and Buffers is the memory available besides caching. This field is automatically
updated depending on the percentage entered as Memory for Caching.

c Use the Connections and Buffers Memory Distribution slider to define the percentage of
memory (starting at 10%) reserved to maintain connection state. This is allocated at the
expense of memory used for the HTTP in-memory cache.

8 Under the License section, NSX Advanced Load Balancer maps the license type based on the
type of cloud.

Option Description

Container cloud Max SEs

OpenStack and VMware Cores

Linux Sockets

9 Select Enable Per-app Service Engine Mode to deploy dedicated load balancers per
application, that is, per virtual service. In this mode, each SE is limited to a maximum of two
virtual services. vCPUs in per-app SEs count towards licensing at 25% rate.

10 Select the Service Engine Bandwidth Type for the license. This option is deactivated when
Enable Per-app Service Engine Mode is enabled.

11 Enter the Number of Service Engine Data Paths to configure the maximum number of se_dp
processes that handles traffic. If this field is not configured, NSX Advanced Load Balancer
takes the number of CPUs on the SE.

12 Select Use Hyperthreading to enable the use of hyper-threaded cores for se_dp processes.
Restart the SE for this change to take effect.

13 Click Save to complete the configuration. Optionally, you can click the Advanced tab to
continue configuring advanced options for the SE group.
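
The equivalent basic settings can also be applied from the CLI on the SE group object. A minimal sketch only; the group name and values are examples, and exact field names such as ha_mode, max_vs_per_se, max_se, and algo should be verified against your release.

[admin:controller]: > configure serviceenginegroup Demo-SE-Group
[admin:controller]: serviceenginegroup> ha_mode HA_MODE_SHARED_PAIR
[admin:controller]: serviceenginegroup> max_vs_per_se 10
[admin:controller]: serviceenginegroup> max_se 4
[admin:controller]: serviceenginegroup> algo PLACEMENT_ALGO_DISTRIBUTED
[admin:controller]: serviceenginegroup> save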


SE Group Advanced Settings


This section describes how to configure the advanced settings for an SE group: Advanced HA &
Placement, Security, and Log Collection and Streaming Settings. Advanced configuration options
are not mandatory. The options discussed in this section are specific to an SE group created under
the Default-Cloud.

Procedure

1 Under Advanced HA & Placement, configure Buffer Service Engines. This is the excess
capacity provisioned for HA failover. In elastic HA N+M mode, this capacity is expressed
as M, an integer number of buffer Service Engines. It actually translates into a count of potential
virtual service placements. To calculate that count, NSX Advanced Load Balancer multiplies M
by the maximum number of virtual services per SE. For example, if one requests two buffer
SEs (M=2) and the max_VS_per_SE is 5, the count is 10. If max SEs/group is not reached,
NSX Advanced Load Balancer will spin up additional SEs to maintain the ability to perform 10
placements.

2 Select a management network to use for the Service Engines as the Override Management
Network. If the SEs require a different network for management than the Controller,
then select the network here. The SEs will use their management route to establish
communications with the Controllers. This option is only available if the SE group’s overridden
management network is DHCP-defined. An administrator’s attempt to override a statically-
defined management network (Infrastructure > Cloud > Network) will not work because a
default gateway is not allowed in the statically defined subnet.

3 Enter the Default Gateway.

4 In the field Scale per Virtual Service, enter the maximum number of active Service Engines
for the virtual service. A pair of integers determines the minimum and maximum number of active
SEs onto which a single virtual service may be placed. With native SE scaling, the greatest value
one can enter as a maximum is 4; with BGP-based SE scaling, the limit is much higher,
governed by the ECMP support on the upstream router.

5 Select CPU socket Affinity for NSX Advanced Load Balancer to allocate all cores for SE VMs
on the same socket of a multi-socket CPU. Appropriate physical resources need to be present
in the ESX Host. If not, then SE creation will fail and manual intervention will be required.

CPU socket Affinity is applicable only for vCenter environments.

6 Select Dedicated dispatcher CPU to dedicate the core that handles packet receive or transmit
from the network to just the dispatching function. This option is particularly helpful in case of a
group whose SEs have three or more vCPUs.

7 Select the HSM Group under the section Security. Hardware security module (HSM) is an
external security appliance used for secure storage of SSL certificates and keys. The HSM
group dictates how Service Engines can reach and authenticate with the HSM. To know how
to configure HSM in NSX Advanced Load Balancer, see Chapter 9 Hardware Security Module
(HSM).


8 Under Log Collection and Streaming Settings, configure the following.

a Enter Significant Log Throttle to define the number of significant log entries generated
per second per core on an SE. Set this parameter to zero to disable throttling of the UDF
log.

b Enter UDF Log Throttle to define the number of user-defined (UDF) log entries generated
per second per core on an SE. UDF log entries are generated due to the configured
client log filters or the rules with logging enabled. The default value is 100 log entries per
second. Set this parameter to zero to disable throttling of the UDF log.

c Enter Non-Significant Log Throttle to define the number of non-significant log entries
generated per second per core on an SE.

d Enter the Number of Streaming Threads (1 to 100) to use for log streaming.

e Click Save.
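
The buffer SE count and log collection values above map to SE group fields that can also be set from the CLI. A sketch only, assuming the field names below exist in your release; the values are examples.

[admin:controller]: > configure serviceenginegroup Demo-SE-Group
[admin:controller]: serviceenginegroup> buffer_se 2
[admin:controller]: serviceenginegroup> significant_log_throttle 100
[admin:controller]: serviceenginegroup> udf_log_throttle 100
[admin:controller]: serviceenginegroup> non_significant_log_throttle 100
[admin:controller]: serviceenginegroup> save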

Cloud-specific SE Group Configuration


This section discusses additional options in the SE Group configuration, which are specific to the
cloud that the SE group is created under.

High Availability & Placement Settings


VS Placement across SEs: When placement is compact, NSX Advanced Load Balancer spins up
and fills up the minimum number of SEs and places the virtual services on SEs which are already
running. When placement is distributed, NSX Advanced Load Balancer maximizes virtual service
performance by avoiding placements on existing SEs. Instead, it places virtual services on newly
spun-up SEs, up to the maximum number of Service Engines. By default, placement is compact for
elastic HA N+M mode and legacy HA active/standby mode.

Host & Data Store Scope


Host Scope Service Engine: SEs are deployed on any host that most closely matches the
resources and reachability criteria for placement. This setting directs their placement as follows:

n Any: The default setting allows SEs to be deployed to any host that best fits the deployment
criteria.

n Cluster: Excludes SEs from deploying within specified clusters of hosts. Checking the Include
checkbox reverses the logic, ensuring SEs only deploy within specified clusters.

n Host: Excludes SEs from deploying on specified hosts. The Include checkbox reverses the
logic, ensuring SEs deploy only within specified hosts.

Data Store Scope for Service Engine Virtual Machine: Sets the storage location for SEs to store
the OVA (vmdk) file for VMware deployments.

n Any: NSX Advanced Load Balancer will determine the best option for data storage.

n Local: The SE will only use storage on the physical host.


n Shared: NSX Advanced Load Balancer will prefer using the shared storage location. When this
option is clicked, specific data stores may be identified for exclusion or inclusion.

Hyper-Threading Modes
Hyper-threading works by duplicating certain sections of the processor that store the architectural
state. However, the logical processors in a hyper-threaded core share the execution resources,
including the execution engine, caches, and system bus interface. This allows a logical processor to
borrow resources from a stalled logical core (assuming both logical cores are associated with the
same physical core). A processor stalls when it cannot finish processing the current thread because
data is delayed, for example by a cache miss, branch misprediction, or data dependency.

NSX Advanced Load Balancer has two knobs to control the use of hyper-threaded cores and the
distribution (placement) of se_dps on the hyper-threaded CPUs. These two knobs are part of the
SE group. The following are the two knobs:

n You can enable hyper-threading on the SE using:

   use_hyperthreaded_cores – True [default] | False — enables or disables the use of hyper-threaded cores by se_dps

n You can control the placement of se_dps on the hyper-threaded CPUs using:

   se_hyperthreaded_mode – SE_CPU_HT_AUTO [default] | SE_CPU_HT_SPARSE_DISPATCHER_PRIORITY | SE_CPU_HT_SPARSE_PROXY_PRIORITY | SE_CPU_HT_PACKED_CORES — controls the distribution of se_dps on hyper-threads

Note To utilize these knobs:


n The processor should support hyper-threading.

n Hyper-threading must be enabled in the BIOS.

use_hyperthreaded_cores: You can use this knob to enable or disable the use of hyper-threaded
cores for se_dps. This knob can be configured using CLI or UI.

The following are the CLI commands:

[admin:vpr-ctrl]: serviceenginegroup> use_hyperthreaded_cores
[admin:vpr-ctrl]: serviceenginegroup> se_hyperthreaded_mode SE_CPU_HT_AUTO
[admin:vpr-ctrl]: serviceenginegroup> save


se_hyperthreaded_mode: You can use this knob to influence the distribution of se_dp on the
hyper-threaded CPUs when the number of datapath processes is less than the number of hyper-
threaded CPUs online. The knob can be configured only through the CLI.

Note You should set use_hyperthreaded_cores to True for the mode configured using
se_hyperthreaded_mode to take effect.

The following are the supported values for se_hyperthreaded_mode:

SE_CPU_HT_AUTO — This is the default mode. The SE automatically determines the best placement.
This mode preserves the existing behavior following CPU hyper-threading topology. If the
number of data path processes is less than the number of CPUs, this is equivalent to
SE_CPU_HT_SPARSE_PROXY_PRIORITY mode.

SE_CPU_HT_SPARSE_DISPATCHER_PRIORITY — This mode prioritises the dispatcher instances by
attempting to place only one data-path process in the physical CPU. This mode exhausts the
physical cores first and then hyper-threads in numerically descending order of CPU number.

SE_CPU_HT_SPARSE_PROXY_PRIORITY — This mode prioritises the proxy (non-dispatcher) instances
by attempting to place only one data-path process in the physical CPU. This is useful when the
number of data path processes is less than the number of CPUs. This mode exhausts the physical
cores first and then hyper-threads in numerically ascending order of CPU number.

SE_CPU_HT_PACKED_CORES — This mode places the data path processes on the same physical core.
Each core can have two dispatchers or two non-dispatcher (proxy) instances being adjacent to
each other. This mode is useful when the number of data path processes is less than the number
of CPUs. This mode exhausts the hyper-threads serially on each core before moving on to the next
physical core.

When hyper-threading is enabled, there is a change in behaviour when DP isolation is also
enabled. The affected modes are SE_CPU_HT_SPARSE_DISPATCHER_PRIORITY,
SE_CPU_HT_SPARSE_PROXY_PRIORITY and, by extension, SE_CPU_HT_AUTO. With DP isolation, a
certain number of physical cores are reserved and excluded from the dp_set (the cgroup of the
se_dp processes). This results in certain cores being masked from the datapath’s HT distribution
logic.

This number is calculated as follows: floor(num_non_dp_cpus / 2).

For instance, if num_non_dp_cpus is 5, 2 cores are reserved for non-datapath exclusivity. To use HT
(with or without DP isolation), the following config knobs are provided in the SE group:

1 use_hyperthreaded_cores (true/false)

2 se_hyperthreaded_mode (one of the four modes discussed here)

Example Configuration

| use_hyperthreaded_cores | True                                 |
| se_hyperthreaded_mode   | SE_CPU_HT_SPARSE_DISPATCHER_PRIORITY |


Service Engine Datapath Isolation


This section explains the SE datapath isolation and the configuration of datapath heartbeat and
IPC encap config knobs.

The feature creates two independent CPU sets for datapath and control plane SE functions. The
creation of these two independent and exclusive CPU sets reduces the number of se_dp
instances. The number of se_dps deployed depends either on the number of available host CPUs
in auto mode or on the configured number of non_dp CPUs in custom mode.

This feature is supported only on host CPU instances with >= 8 CPUs.

Note This mode of operation may be enabled for latency and jitter sensitive applications.

For Linux Server Cloud only, the following prerequisites must be met to use this feature:

1 The cpuset package cpuset-py3 must be installed on the host and be present at the /usr/bin/
cset location (a softlink may need to be created).

2 The taskset utility must be present on the host.

3 The pip3 future package, which is required by the cset module, must be installed.

For full access environments, the requisite packages will be installed as part of the Service Engine
installation.

You can enable this feature via the SE Group knobs:

SE Group Knob Type Description

se_dp_isolation Boolean This feature is disabled by default. Enabling it creates two CPU sets
on the SE. Toggling this knob requires an SE reboot.

se_dp_isolation_num_non_dp_cpus Integer Allows 1 – 8 CPUs to be reserved as non_dp CPUs.
Configuring 0 enables auto distribution, which is the default. If you
modify this value, you need to reboot the SE.

The following table shows the CPU distribution in auto mode:

Num Total CPUs Num non_dps

1-7 0

8-15 1

16-23 2

24-31 3

32-1024 4


Examples:

1 Isolation mode in an instance with 16 host CPUs in auto mode will result in 14 CPUs for
datapath instances and 2 CPUs for control plane applications.

2 Isolation mode in an instance with 16 host CPUs in custom mode, with
se_dp_isolation_num_non_dp_cpus configured to 4, will result in 12 CPUs for datapath instances
and 4 CPUs for control plane applications.

This feature is available as GA and the following caveat applies:

n The maximum se_dp_isolation_num_non_dp_cpus is limited to 8 and needs to be set explicitly. In
auto mode, the maximum is still 4.
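
The following is a minimal CLI sketch for enabling datapath isolation with a custom number of non-datapath CPUs. The SE group name Default-Group and the value 4 are assumptions for illustration; verify the exact knob spelling (se_dp_isolation_num_non_dp_cpus) against your release. As noted above, toggling either knob requires an SE reboot.

[admin:ctrl]: > configure serviceenginegroup Default-Group
[admin:ctrl]: serviceenginegroup> se_dp_isolation
[admin:ctrl]: serviceenginegroup> se_dp_isolation_num_non_dp_cpus 4
[admin:ctrl]: serviceenginegroup> save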

Datapath Heartbeat and IPC Encap Configuration


The following datapath heartbeat and IPC encap config knobs belong to the segroup:

n dp_hb_frequency

n dp_hb_timeout_count

n dp_aggressive_hb_frequency

n dp_aggressive_hb_timeout_count

n se_ip_encap_ipc

n se_l3_encap_ipc

License
n License Tier — Specifies the license tier to be used by new SE groups. By default, this field
inherits the value from the system configuration.

n License Type — If no license type is specified, NSX Advanced Load Balancer applies default
license enforcement for the cloud type. The default mappings are max SEs for a container
cloud, cores for OpenStack and VMware, and sockets for Linux.

n Instance Flavor — Instance type is an AWS term. In a cloud deployment, this parameter
identifies one of a set of AWS EC2 instance types. Flavor is the analogous OpenStack term.
Other clouds (especially public clouds) may have their own terminology for essentially the
same thing.

SE Group Advanced Tab


The Advanced tab in the Service Engine Group supports the configuration of optional
functionality for SE groups. This tab only exists for clouds configured with write access mode.
The appearance of some fields is contingent upon selections made.

Service Engine Name Prefix: Enter the prefix to use when naming the SEs within the SE group.
This name will be seen both within NSX Advanced Load Balancer, and as the name of the virtual
machine within the virtualization orchestrator.


Service Engine Folder — SE virtual machines for this SE group will be grouped under this folder
name within the virtualization orchestrator.

Delete Unused Service Engines After — Enter the number of minutes to wait before the
Controller deletes an unused SE. Traffic patterns can change quickly, and a virtual service may
therefore need to scale across additional SEs with little notice. Setting this field to a high value
ensures that the NSX Advanced Load Balancer keeps unused SEs around in the event of a sudden
spike in traffic. A shorter value means the Controller will need to recreate a new SE to handle a
burst of traffic, which might take a couple of minutes.

Host & Data Store Scope


n Host Scope Service Engine: SEs are deployed on any host that most closely matches the
resources and reachability criteria for placement. This setting directs their placement as
follows:

n Any: The default setting allows SEs to be deployed to any host that best fits the
deployment criteria.

n Cluster: Excludes SEs from deploying within specified clusters of hosts. Checking the
Include checkbox reverses the logic, ensuring SEs only deploy within specified clusters.

n Host: Excludes SEs from deploying on specified hosts. The Include checkbox reverses the
logic, ensuring SEs deploy only on specified hosts.

n Data Store Scope for Service Engine Virtual Machine: Sets the storage location for SEs to
store the OVA (vmdk) file for VMware deployments.

n Any: NSX Advanced Load Balancer will determine the best option for data storage.

n Local: The SE will only use storage on the physical host.

n Shared: NSX Advanced Load Balancer will prefer using the shared storage location. When
this option is clicked, specific data stores may be identified for exclusion or inclusion.

Advanced HA & Placement


n Buffer Service Engines: This is excess capacity provisioned for HA failover; buffer service
engines represent spare capacity dedicated to SE HA. In elastic HA N+M mode, this capacity is
expressed as M, an integer number of buffer service engines. It actually translates into a count
of potential virtual service placements. To calculate that count, NSX Advanced Load Balancer
multiplies M by the maximum number of virtual services per SE. For example, if one requests
2 buffer SEs (M=2) and the max_VS_per_SE is 5, the count is 10. If the maximum number of SEs
per group has not been reached, NSX Advanced Load Balancer will spin up additional SEs to
maintain the ability to perform 10 placements. For instance, if six virtual services have already
been placed and the current count of spare capacity is 14, that is more than enough to perform
10 placements; an additional placement that reduces the count below 10 would require another
SE to be spun up. (These settings can also be configured from the CLI; see the sketch following
this list.)


n Scale Per Virtual Service: A pair of integers determines the minimum and maximum number
of active SEs onto which a single virtual service may be placed. With native SE scaling, the
greatest value one can enter as a maximum is 4; with BGP-based SE scaling, the limit is much
higher, governed by the ECMP support on the upstream router.

n See also:

n BGP Support for Scaling Virtual Services.

n Impact of Changes to Min/Max Scaleout per Virtual Service.

n Service Engine Failure Detection: This option refers to the time NSX Advanced Load Balancer
takes to conclude SE takeover should take place. Standard is approximately 9 seconds and
aggressive 1.5 seconds.

n Auto-Rebalance: If this option is selected, virtual services are automatically migrated (scaled
in or out) when CPU loads on SEs fall below the minimum threshold or exceed the maximum
threshold. If this option is off, the result is limited to an alert. The frequency with which NSX
Advanced Load Balancer evaluates the need to rebalance can be set to some number of
seconds.

n Affinity: Selecting this option causes NSX Advanced Load Balancer to allocate all cores for
SE VMs on the same socket of a multi-socket CPU. The option is applicable only in vCenter
environments. Appropriate physical resources need to be present in the ESX Host. If not, then
SE creation will fail and manual intervention will be required.

Note The vCenter drop-down list populates the datastores if the datastores are shared. The
non-shared datastores (which means each ESX Host has their own local datastore) are filtered
out from the list because, by default when an ESX Host is chosen for SE VM creation, the local
datastore of that ESX Host will be picked.

n Dedicated dispatcher CPU: Selecting this option dedicates the core that handles packet
receive/transmit from/to the data network to just the dispatching function. This option makes
most sense in a group whose SEs have three or more vCPUs.

n Override Management Network: If the SEs require a different network for management than
the Controller, that network is specified here. The SEs will use their management route to
establish communications with the Controllers.

For more information, see Deploy SEs in Different Datacenter from Controllers.

Note This option is only available if the SE group’s overridden management network
is DHCP-defined. An administrator’s attempt to override a statically-defined management
network (Infrastructure > Cloud > Network) will not work, because a default gateway is not
allowed in the statically-defined subnet.
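
Several of the Advanced HA & Placement settings in this list also have SE group equivalents in the CLI. The following is a minimal sketch; the field names (buffer_se, min_scaleout_per_vs, max_scaleout_per_vs, auto_rebalance, auto_rebalance_interval) are assumptions based on the UI labels, and the values are illustrative only.

[admin:ctrl]: > configure serviceenginegroup Default-Group
[admin:ctrl]: serviceenginegroup> buffer_se 2
[admin:ctrl]: serviceenginegroup> min_scaleout_per_vs 1
[admin:ctrl]: serviceenginegroup> max_scaleout_per_vs 4
[admin:ctrl]: serviceenginegroup> auto_rebalance
[admin:ctrl]: serviceenginegroup> auto_rebalance_interval 300
[admin:ctrl]: serviceenginegroup> save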


Security
HSM Group: Hardware security modules may be configured within the Templates > Security >
HSM Groups. An HSM is an external security appliance used for secure storage of SSL certificates
and keys. An HSM group dictates how SEs can reach and authenticate with the HSM.

For more information, see Physical Security for SSL Keys.

Log Collection & Streaming Settings


n Significant Log Throttle: This limits the number of significant log entries generated per second
per core on an SE. Set this parameter to zero to disable throttling of the significant log.

n UDF Log Throttle: This limits the number of user-defined (UDF) log entries generated per
second per core on an SE. UDF log entries are generated due to the configured client
log filters or the rules with logging enabled. Default is 100 log entries per second. Set this
parameter to zero to disable throttling of the UDF log.

n Non-Significant Log Throttle: This limits the number of non-significant log entries generated
per second per core on an SE. Default is 100 log entries per second. Set this parameter to zero
to disable throttling of the non-significant log.

n Number of Streaming Threads: Number of threads to use for log streaming, ranging from 1 to
100.

Other Settings
By default, the NSX Advanced Load Balancer Controller creates and manages a single security
group (SG) for an SE. This SG manages the ingress/egress rules for the SE’s control- and data-
plane traffic. In certain customer environments, it may be required to provide custom SGs to
also be associated with the SE management- and/or data-plane vNICs.

n For more information about SGs in OpenStack and AWS clouds, see:

n Custom Security Groups in OpenStack.

n Security Group Options for AWS Deployment with NSX Advanced Load Balancer.

n NSX Advanced Load Balancer Managed Security Group: Supported only for AWS clouds.
When this option is enabled, NSX Advanced Load Balancer will create and manage security
groups along with the custom security groups provided by the user. If disabled, it will only
make use of the custom SGs provided by the user.

n Management vNIC Custom Security Groups: Custom security groups to be associated with
management vNICs for SE instances in OpenStack and AWS clouds.

n Data vNIC Custom Security Groups: Custom security groups to be associated with data vNICs
for SE instances in OpenStack and AWS clouds.


n Add Custom Tag: Custom tags are supported for Azure and AWS clouds and are useful in
grouping and managing resources. Click the Add Custom Tag hyperlink to configure this
option. The CLI interface is described here.

n Azure tags enable key:value pairs to be created and assigned to resources in Azure. For
more information on Azure tags, refer to Azure Tags.

n AWS tags help manage instances, images, and other Amazon EC2 resources; you can
optionally assign your own metadata to each resource in the form of tags. For more
information on AWS tags, see AWS Tags and Configuring a Tag for Auto-created SEs in
AWS.

VIP Autoscale
n Display FIP Subnets Only: Only display FIP subnets in the drop-down menu.

n VIP Autoscale Subnet: UUID of the subnet for the new IP address allocation.

Deactivating IPv6 Learning in Service Engines


An optional field deactivate_ipv6_discovery is available while configuring Service Engine group
properties. When enabled, it drops all notifications related to IPv6 addresses and routes. You
cannot configure a static IPv6 address on Service Engine interfaces when
deactivate_ipv6_discovery is enabled.

Log into the NSX Advanced Load Balancer CLI and use the deactivate_ipv6_discovery command
under the configure serviceenginegroup <se-group name> mode to disable IPv6 learning for the
selected Service Engine group.

For deactivate_ipv6_discovery to take effect, reboot all the Service Engines present in the specific
Service Engine group.

[admin-controller]: configure serviceenginegroup <se-group name>
[admin-controller]: serviceenginegroup> deactivate_ipv6_discovery
[admin-controller]: serviceenginegroup> save

Use the show serviceenginegroup <se_group_name> command to check if the knob is enabled or
not.

[admin-controller]: show serviceenginegroup <se_group_name>


| deactivate_ipv6_discovery | True |
+-----------------------------------------+-----------------------------+

Storing Inter-SE Distributed Object


The SE in the NSX Advanced Load Balancer hosts multiple virtual services to serve a specific
application. A single virtual service can be scaled across several SEs, and each virtual service
comprises several objects that are created, updated and destroyed. Some of these virtual service
objects need to be available across all the SEs to ensure a consistent operation across a scaled-out
application instance.


The current system utilises the Controller to distribute this information across the participating
SEs. Each SE has a local REDIS instance that connects to the Controller, and the objects are
allocated and synchronised across the SEs through the Controller.

This scheme has limitations on scale, convergence time, and so on. With the new distributed
architecture, the SEs perform this distribution and synchronisation without the involvement of the
Controller, and the SE-SE persistence sync happens directly between SEs. The VMware and LSC
platforms are supported. The transport for the sync is on port 9001. This port needs to be open
between SEs.

For more information on port details, see Protocol Ports Used by NSX Advanced Load Balancer
for Management Communication.

The following is the CLI command to change the default port:

configure serviceenginegroup <> objsync_port <port>

The CLI command to disable this feature is as follows:

configure serviceenginegroup <>


no use_objsync

The following are a few debugging commands:

Command Description

show virtualservice <> keyvalsummary Summary of Keyval persistence

show virtualservice <> keyvalsummaryobjsync Summary of Objsync view of Keyval persistence

show pool <> internal Summary of Pool Persistence

show pool <pool_name> objsync filter vs_ref Summary of Objsync view of Pool Persistence objects
<vs_name>

Note
n For any changes to the port 9001 via ‘objsync_port’, you need to change the security
group, ACL etc. See Protocol Ports Used by NSX Advanced Load Balancer for Management
Communication.

n In Azure, for SE object sync you need to configure a port which is less than 4096.


Setting a Property for Newly Created Service Engine Group


For each newly created tenant, a default Service Engine group (Default-Group) is created and
defined on a per-cloud basis.

A new SE group is created using the Template Service Engine Group option available in the NSX
Advanced Load Balancer UI. The new SE group's properties will be the same as those of the
Template Service Engine Group. Using this option, you can change any property for the default SE
group or any other SE group.

Procedure

1 Navigate to Infrastructure > Service Engine Group.


2 Select the appropriate cloud, and use the Template Service Engine Group option to customize
settings for the SE group as per the requirement.

Application Profile
Application profiles determine the behavior of virtual services, based on application type.

The application profile types and their options are described in the following sections.

n HTTP Profile

n DNS Profile

n L4 Profile

n SSL Profile

n Syslog Profile

n SIP Profile

Dependency on TCP/UDP Profile


The application profile associated with a virtual service may have a dependency on an underlying
TCP/UDP profile. For example, an HTTP application profile may be used only if the TCP/UDP
profile type used by the virtual service is set to type TCP Proxy. The application profile associated
with a virtual service instructs the Service Engine (SE) to proxy the service’s application protocol,
such as HTTP, and to perform functionality appropriate for that protocol.


Application Profiles in NSX Advanced Load Balancer


To view the Application Profiles in NSX Advanced Load Balancer navigate to Templates > Profiles
> Application.

NSX Advanced Load Balancer displays all the application profiles created and their type as shown
here.

In this screen, you can perform the following functions.

n Click the search icon and start typing the name of the application profile to find it.

n Create a new application profile.

n Click the edit icon against an application profile to modify the configuration.

n Delete an existing application profile if it is not assigned to a virtual service.

Note If the profile is still associated with any virtual services, the profile cannot be removed. In
this case, an error message lists the virtual services that are still referencing the application profile.
None of the system-standard profiles can be deleted either.

Create/Edit an Application Profile


Click Create and select the type of application profile from the dropdown list according to the
traffic to be processed.

DNS

Default for processing DNS traffic

HTTP

Default for processing Layer 7 HTTP traffic

L4

Catch-all for any virtual service that is not using an application-specific profile.

L4 SSL/TLS

Catch-all for any virtual service that is SSL-encrypted and not using an application-specific
profile.


SIP

Default for processing SIP traffic.

Syslog

Default for processing Syslog traffic.

Configure the application profile in the New Application Profile screen.

The New Application Profile and the Edit Application Profile screens share the same interface
regardless of the application profile chosen.

HTTP Profile
The HTTP application profile allows NSX Advanced Load Balancer to be a proxy for any
HTTP traffic. HTTP-specific functionality such as redirects, content switching, or rewriting server
responses to client requests may be applied to a virtual service. The settings apply to all HTTP
services that are associated with the HTTP profile. HTTP-specific policies or DataScripts also may
be attached directly to a virtual service.

General Configuration
In the General tab, configure the basic HTTP settings.

Connection Multiplex

This option controls the behavior of HTTP 1.0 and 1.1 request switching and server TCP
connection reuse. This allows NSX Advanced Load Balancer to reduce the number of open
connections maintained by servers and better distribute requests across idle servers, thus
reducing server overloading and improving performance for end-users. The exact reduction
of connections to servers will depend on how long-lived the client connections are, the HTTP
version, and how frequently requests/responses are utilizing the connection. It is important to
understand that “connection” refers to a TCP connection, whereas “request” refers to an HTTP
request and subsequent response. HTTP 1.0 and 1.1 allow only a single request/response to go
over an open TCP connection at a time. Many browsers attempt to mitigate this bottleneck by
opening around six concurrent TCP connections to the destination website.

X-Forwarded-For


With this option, NSX Advanced Load Balancer will insert an X-Forwarded-For (XFF) header
into the HTTP request headers when the request is passed to the server. The XFF header
value contains the original client source IP address. Web servers can use this header for
logging client interaction instead of using the layer 3 IP address, which will incorrectly reflect
the Service Engine’s source NAT address. When enabling this option, the XFF Alternate
Name field appears, which allows the XFF header insertion to use a custom HTTP header
name. If the XFF header or the custom name supplied already exists in the client request,
all instances of that header will first be removed. To add the header without removing pre-
existing instances of it, use an HTTP request policy.

WebSockets Proxy

Enabling WebSockets allows the virtual service to accept a client’s upgrade header request.
If the server is listening for WebSockets, the connection between the client and server will
be upgraded. WebSocket is a full-duplex TCP protocol. The connection will initially start over
HTTP, but once successfully upgraded, all HTTP parsing by NSX Advanced Load Balancer will
cease and the connection will be treated as a normal TCP connection.

Note NSX Advanced Load Balancer supports HTTP/2 WebSocket. However, it supports only
WebSocket clients with the same HTTP version as the server.

Preserve Client IP Address

Clicking this option causes the NSX Advanced Load Balancer SE to use the client IP rather
than its own as the source IP for load-balanced connections from the SE to back-end
application servers. Enabling IP Routing in the SE group is a prerequisite for enabling this
option. Preserve Client IP Address is mutually exclusive with SNAT-ing the virtual services, and
Connection Multiplexing from the HTTP(S) application profile cannot be used with Preserve Client
IP.

Save

Select another tab from the top menu to continue editing or Save to return to the Application
Profiles tab. See also the Preserve Client IP section.
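
The General settings described above map to fields on the HTTP application profile and can also be set from the CLI. The following is a minimal sketch that edits the System-HTTP profile; the field names (connection_multiplexing_enabled, websockets_enabled, xff_enabled, xff_alternate_name) and the header name X-Client-IP are assumptions based on the UI labels and may differ in your release.

[admin:ctrl]: > configure applicationprofile System-HTTP
[admin:ctrl]: applicationprofile> http_profile
[admin:ctrl]: applicationprofile:http_profile> connection_multiplexing_enabled
[admin:ctrl]: applicationprofile:http_profile> websockets_enabled
[admin:ctrl]: applicationprofile:http_profile> xff_enabled
[admin:ctrl]: applicationprofile:http_profile> xff_alternate_name X-Client-IP
[admin:ctrl]: applicationprofile:http_profile> save
[admin:ctrl]: applicationprofile> save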

Multiplex plus Persistence


This table shows the difference in multiplexing behavior depending on persistence.

Multiplex Persistence Behavior

Enabled Disabled Client connections and their requests are decoupled from the server side of the Service Engine.
Requests are load-balanced across the servers in the pool using either new or pre-existing
connections to those servers.
The connections to the servers may be shared by requests from any clients.

Enabled Enabled Client connections and their requests are sent to a single server.
These requests may share connections with other clients who are persisted to the same server.
HTTP requests are not load balanced.


Disabled Enabled NSX Advanced Load Balancer opens a new TCP connection to the server for each connection
received from the client.
Connections are not shared with other clients.
All requests received through all connections from the same client are sent to one server.
HTTP client browsers may open many concurrent connections, and the number of client
connections will be the same as the number of server connections.

Disabled Disabled Connections between the client and server are one-to-one.
Requests remain on the same connection they began on.
Multiple connections from the same client may be distributed among the available servers.

Security Configuration
The Security tab of the HTTP application profile controls the security settings for HTTP
applications that are associated with the profile.

Security Information

The HTTP security settings affect how a virtual service should handle HTTPS. If a virtual service is
configured only for HTTP, the HTTPS settings discussed in this section will not apply. Only if the
virtual service is configured for HTTPS, or HTTP and HTTPS, will the settings take effect.


More granular settings also can be configured using policies or DataScripts.

Field Description

SSL Everywhere This option enables all of the following options, which together provide
the recommended security for HTTPS traffic.

HTTP to HTTPS Redirect For a single virtual service configured with both an HTTP service port
(SSL disabled) and an HTTPS service port (SSL enabled), this feature
will automatically redirect clients from the insecure to the secure port.
For instance, clients who type www.avinetworks.com into their browser
will automatically be redirected to https://www.avinetworks.com. If the
virtual service does not have both an HTTP and HTTPS service port
configured, this feature will not activate. For two virtual services (one
with HTTP and another on the same IP address listening to HTTPS), an
HTTP request policy must be created to manually redirect the protocol
and port.


Secure Cookies When NSX Advanced Load Balancer is serving as an SSL proxy for
the backend servers in the virtual service’s pool, NSX Advanced Load
Balancer communicates with the client over SSL. However, if NSX
Advanced Load Balancer communicates with the backend servers over
HTTP (not over SSL), the servers will incorrectly return responses as
HTTP. As a result, cookies that should be marked as secure will not be
so marked. Enabling secure cookies will mark any server cookies with
the Secure flag, which tells clients to send only this cookie to the virtual
service over HTTPS. This feature will only activate when applied to a
virtual service with SSL/TLS termination enabled.

HTTP Strict Transport Security (HSTS) Strict Transport Security uses a header to inform client browsers that
this site should be accessed only over SSL/TLS. The HSTS header is sent
in all HTTP responses, including error responses. This feature mitigates
man-in-the-middle attacks that can force a client’s secure SSL/TLS
session to connect through insecure HTTP. HSTS has a duration setting
that tells clients the SSL/TLS preference should remain in effect for the
specified number of days.
Insert the includeSubdomains directive in the HTTP Strict-Transport-
Security header, if required. Doing so signals the user agent that the
HSTS policy applies to this HSTS host as well as any subdomains of the
host’s domain name. This setting will activate only on a virtual service
that is configured to terminate SSL/TLS.

Note If a virtual service is set temporarily to support SSL/TLS and HSTS has been set, it cannot
gracefully be downgraded back to HTTP. Client browsers will refuse to accept the site over HTTP.
When HSTS is in effect, clients will not accept a self-signed certificate.

HTTP-only Cookies NSX Advanced Load Balancer supports setting an HTTP-Only flag for the
cookie generated by the Controller. Setting this attribute prevents third-
party scripts from accessing this cookie, if supported by the browser.
This feature activates for any HTTP or terminated HTTPS virtual service.
When a cookie has an HTTP-Only flag, it informs the browser that this
special cookie must only be accessed by the server. Any attempt to
access the cookie from the client-side script is strictly forbidden.
To check the CLI command to enable the HTTP-Only attribute, see CLI
Command to Enable HTTP-Only flag.


Rewrite Server Redirects to HTTPS When a virtual service terminates client SSL/TLS and then passes
requests to the server as HTTP, many servers assume that the
connection to the client is HTTP. Absolute redirects generated
by the server may therefore include the protocol, such as http://
www.avinetworks.com. If the server returns a redirect with HTTP in
the location header, this feature will rewrite it to HTTPS. Also, if the
server returns a redirect for its IP address, this will be rewritten to
the hostname requested by the client. If the server returns redirects
for hostnames other than what the client requested, they will not be
altered.

Note Consider creating an HTTP response policy if greater granularity is required when rewriting
redirects. This feature will activate only if the virtual service has both HTTP and HTTPS service
ports.

X-Forwarded-Proto Enabling this option makes NSX Advanced Load Balancer insert the
X-Forwarded-Proto header into HTTP requests sent to the server, which
informs the server whether the client connected to NSX Advanced Load
Balancer over HTTP or HTTPS. This feature activates for any HTTP or
HTTPS virtual service.

CLI Command to Enable HTTP-Only flag


[admin:admin-controller2]: > configure applicationpersistenceprofile System-Persistence-Http-Cookie
[admin:admin-controller2]: applicationpersistenceprofile> http_cookie_persistence_profile
[admin:admin-controller2]: applicationpersistenceprofile:http_cookie_persistence_profile> http_only
[admin:admin-controller2]: applicationpersistenceprofile:http_cookie_persistence_profile> save
[admin:admin-controller2]: applicationpersistenceprofile> save

+-------------------------------|---------------------------------------+
|Field |Description |
+-------------------------------+---------------------------------------+
|uuid |applicationpersistenceprofile-04ca34e1 |
|name |System-Persistence-Http-Cookie |
|persistence_type |PERSISTENCE_TYPE_HTTP_COOKIE |
|server_hm_down_recovery |HM_DOWN_PICK_NEW_SERVER |
|http_cookie_persistence_profile| |
| cookie_name |VAJOSFML |
| key[1] | |
| name |40015eba-ee51-40c6-8f8d-06e2ec0516e9 |
| aes_key |b'WX9pow2nYKYTfENMZSdwODZQu8e37Zdraoovt|
| always_send_cookie |False |
| http_only |True |
| is_federated |False |
| tenant_ref |admin |
+-------------------------------+---------------------------------------+


Redirect HTTP to HTTPS


For security, an industry best practice is to ensure all HTTP traffic is SSL-encrypted as HTTPS.
Since typical end-users do not specify the HTTPS protocol when entering URLs for requests,
the initial requests arrive over HTTP. Because NSX Advanced Load Balancer provides SSL
termination services, it must also handle redirecting HTTP users to HTTPS. You can enable
HTTP-to-HTTPS redirect in any of the following ways:

n In the Application Profile, under the Security configuration

n Configuring HTTP to HTTPS Redirect in the Application Profile

n Configuring Rewrite Server Redirects to HTTPS in the Application Profile

n Using the HTTP Request Policy

Configuring HTTP to HTTPS Redirect in the Application Profile


If the virtual service is configured for both HTTP (usually port 80) and HTTPS (usually SSL on port
443), enable HTTP-to-HTTPS redirect via the attached HTTP application profile.

1 Navigate to Applications > Virtual Services, select the desired virtual service, and click the
edit icon on the right side.

2 Under Settings, go to the Profiles section.

3 Select the System-HTTP profile and click the edit icon.

4 Under Security, select the HTTP to HTTPS Redirect checkbox.

5 Click Save.

The System-Secure-HTTP profile is similar to the System-HTTP profile except that, under SSL
Everywhere, the HTTP to HTTPS Redirect option is enabled by default.
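
The same redirect can also be enabled from the CLI by setting the corresponding flag in the HTTP profile. This is a minimal sketch; the field name http_to_https is an assumption based on the UI label.

[admin:ctrl]: > configure applicationprofile System-HTTP
[admin:ctrl]: applicationprofile> http_profile
[admin:ctrl]: applicationprofile:http_profile> http_to_https
[admin:ctrl]: applicationprofile:http_profile> save
[admin:ctrl]: applicationprofile> save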

Configuring Rewrite Server Redirects to HTTPS in the Application Profile

The Rewrite Server Redirects to HTTPS option is available within the Security tab of the
Application Profile. This option will change the location header of redirects from HTTP to HTTPS,
and will also remove any hardcoded ports.

Note

n Relative redirects are not altered, only absolute. Therefore it is encouraged to have both the
options enabled.

n This profile setting will have no impact for virtual services that do not have HTTPS
configured.


Using the HTTP Request Policy


For more granularity, use an HTTP Request Policy.

1 Navigate to Applications > Virtual Services, select the desired virtual service, and click the
edit icon on the right side.

2 Under the Policies tab, select HTTP Request.

3 Click the create (+) icon.

4 Select Service Port from the drop-down list for Matching Rules, select is and enter 80 in the
Ports field.

5 Save the rule. Optionally, the required criteria can be added to determine when to perform the
redirect.

6 In the Action section, select Redirect from the drop-down menu. Then set the protocol to
HTTPS. This will set the redirect port to 443 and the redirect response code to 302 (temporary
redirect).

HTTP Request Policies are quick and easy to set up, and impact only a single virtual service at a
time.

Adding a Query
Use add_string when adding a redirect action in the HTTP Request policy.

The keep_query field, when enabled, carries the incoming request’s query parameters over to the
final redirect URI.

The add_string field appends the specified query string to the redirect URI.

To understand how keep_query and add_string work, consider the example http://
test.example.com/images?name=animals as an incoming request and the request is to be
redirected to http://google.com.

keep_query add_string Redirect Link

Enabled Not configured http://google.com/images?name=animals

Disabled Not configured http://google.com/images

Enabled Set to `type=cats&color=black` http://google.com/images?name=animals&type=cats&color=black

Disabled Set to `type=cats&color=black` http://google.com/images?type=cats&color=black

The CLI configuration is as shown below:

[admin:abc-controller]: > configure httppolicyset vs1-Default-Cloud-HTTP-Policy-Set-0


[admin:abc-controller]: httppolicyset> http_request_policy
[admin:abc-controller]: httppolicyset:http_request_policy>
[admin:abc-controller]: httppolicyset:http_request_policy> rules index 1


[admin:abc-controller]: httppolicyset:http_request_policy:rules>
[admin:abc-controller]: httppolicyset:http_request_policy:rules> redirect_action
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action>
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> add_string
images=cat keep_query
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> status_code
http_redirect_status_code_302
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> port 80
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> host
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host> type
uri_param_type_tokenized
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host> tokens
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host:tokens>
type uri_token_type_string str_value www.google.com
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host> save
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> save
[admin:abc-controller]: httppolicyset:http_request_policy:rules> save
[admin:abc-controller]: httppolicyset:http_request_policy> save
[admin:abc-controller]: httppolicyset> save
+------------------------+----------------------------------------------+
| Field | Value |
+------------------------+----------------------------------------------+
| uuid | httppolicyset-2ee5<truncated>
| |
| |
| name | vs1-Default-Cloud-HTTP-Policy-Set-0 |
| http_request_policy | |
| rules[1] | |
| name | Rule 1 |
| index | 1 |
| enable | True |
| match | |
| method | |
| match_criteria | IS_IN |
| methods[1] | HTTP_METHOD_GET |
| redirect_action | |
| protocol | HTTP |
| host | |
| type | URI_PARAM_TYPE_TOKENIZED |
| tokens[1] | |
| type | URI_TOKEN_TYPE_STRING |
| str_value | www.vmware.com |
| tokens[2] | |
| type | URI_TOKEN_TYPE_STRING |
| str_value | www.google.com |
| port | 80 |
| keep_query | True |
| status_code | HTTP_REDIRECT_STATUS_CODE_302 |
| add_string | images=cat |
| is_internal_policy | False |
| tenant_ref | admin |
+------------------------+----------------------------------------------+


Redirect Using DataScript


For maximum granularity and reusability, use a DataScript to configure the redirect behavior.

To add a DataScript,

1 Navigate to Applications > Virtual Services, select the desired virtual service, and click the
edit option.

2 Click Add DataScript.

3 Click the Script To Execute dropdown list.

4 Click Create a DataScript.

5 In the New DataScript Set screen, under Events, click Add.

6 From the dropdown menu, select HTTP Request.

7 Enter the following script in the HTTP Request Event Script text box and save.

if avi.vs.port() ~= "443" then
    avi.http.redirect("https://" .. avi.http.hostname() .. avi.http.get_uri())
end

Client SSL Certificate Validation


NSX Advanced Load Balancer can validate SSL certificates presented by clients against a trusted
certificate authority (CA) and a configured certificate revocation list (CRL). Further options allow
passing certificate information to the server through HTTP headers.

Field Description

Validation Type Enables client validation based on their SSL certificates. Select one of the
following:
n None — Disables validation of client certificates.
n Request — This setting expects clients to present a client certificate.
If a client does not present a certificate, or if the certificate fails the
CRL check, the client connection and requests are still forwarded to
the destination server. This allows NSX Advanced Load Balancer to
forward the client’s certificate to the server in an HTTP header, so
that the server may make the final determination to allow or deny the
client.
n Require — NSX Advanced Load Balancer requires a certificate to be
presented by the client, and the certificate must pass the CRL check.
The client certificate, or relevant fields, may still be passed to the
server through an HTTP header.

PKI Profile The Public Key Infrastructure (PKI) profile contains configured certificate
authority (CA) and the CRL. A PKI profile is not necessary if validation is
set to Request, but is mandatory if validation is set to Require.


HTTP Header Name Optionally, NSX Advanced Load Balancer may insert the client’s
certificate, or parts of it, into a new HTTP header to be sent to the server.
To insert a header, this field is used to determine the name of the header.

HTTP Header Value Used with the HTTP Header Name field, the Value field is used to
determine the portion of the client certificate to insert into the HTTP
header sent to the server. Using the plus icon, additional headers may
be inserted. This action may be in addition to any performed by HTTP
policies or DataScripts, which could also be used to insert headers in
requests sent to the destination servers.

Compression
Compression is an HTTP 1.1 standard for reducing the size of text-based data using the Gzip
algorithm. The typical compression ratio for HTML, Javascript, CSS and similar text content types
is about 75%, meaning that a 20-KB file may be compressed to 5 KB before being sent across the
Internet, thus reducing the transmission time by a similar percentage.

Compression enables HTTP Gzip compression for responses from NSX Advanced Load Balancer
to the client.

Use the Compression tab to view or edit the application profile’s HTTP compression settings.


The compression percentage achieved can be viewed using the Client Logs tab of the virtual
service. This may require enabling full client logs on the virtual service’s Analytics tab to log some
or all client requests. The logs will include a field showing the compression percentage with each
HTTP response.

Note It is highly recommended to enable compression in conjunction with caching, which
together can dramatically reduce the CPU costs of compressing content. When both compression
and caching are enabled, an object such as the index.html file will need to be compressed only
one time. After an object is compressed, the compressed object is served out of the cache
for subsequent requests. NSX Advanced Load Balancer does not needlessly re-compress the
object for every client request. For clients that do not support compression, NSX Advanced Load
Balancer also will cache an uncompressed version of the object.

Configure the compression settings as discussed in the table.

Field Description

Enable Compression Select the checkbox to enable compression. Enabling this option displays
the other settings for compression.

Compression Mode Compression modes enable different levels of compression for different
clients. For instance, filters can be created to provide aggressive
compression levels for slow mobile clients while disabling compression for
fast clients from the local intranet. Auto is recommended, to dynamically
tune the settings based on clients and available Service Engine CPU
resources.
n Auto mode enables NSX Advanced Load Balancer to determine the
optimal settings.

Note By default, the Compression Mode is Auto. The content
compression depends on the client’s RTT, as mentioned below:
n If RTT is less than 10ms, then no compression is required.
n If RTT is 10 to 200ms, then normal compression is required.
n If RTT is above 200ms, then aggressive compression is required.
n Custom mode allows creation of custom filters that provide more
granular control over who should receive what level of compression.

Compressible Content Types This field determines which HTTP content-types are eligible to be
compressed. Select a string group that contains the compressible type list
from the dropdown list.

Remove Accept Encoding Header This field removes the Accept Encoding header, which is sent by HTTP
1.1 clients to indicate they are able to accept compressed content.
Removing the header from the request prior to sending the request to the
server allows NSX Advanced Load Balancer to ensure the server will not
compress the responses. Only NSX Advanced Load Balancer will perform
compression.

Number of Buffers Specify the number of buffers to use for compression output.

Buffer Size Specify the size of each buffer used for compression output. Ideally, this
should be a multiple of the page size.


Normal Level Specify the level of compression to apply on content selected for normal
compression.

Aggressive Level Specify the level of compression to apply on content selected for aggressive
compression.

Window Size Specify the window size used by compression, rounded to the last power of
2.

Hash Size Specify the hash size used by compression, rounded to the last power of 2.

Response Content Length Specify the minimum response content length to enable compression.

Max Low RTT If client RTT is higher than this threshold, enable normal compression on the
response.

Min High RTT If client RTT is higher than this threshold, enable aggressive compression on
the response.

Mobile Browser Identifier Select the values that identify mobile browsers in order to enable
aggressive compression.
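
These compression settings can also be configured from the CLI under the HTTP profile's compression sub-object. The following is a minimal sketch; the field names (compression_profile, compression, type, remove_accept_encoding_header) and the value AUTO_COMPRESSION are assumptions based on the UI labels and may differ in your release.

[admin:ctrl]: > configure applicationprofile System-HTTP
[admin:ctrl]: applicationprofile> http_profile
[admin:ctrl]: applicationprofile:http_profile> compression_profile
[admin:ctrl]: applicationprofile:http_profile:compression_profile> compression
[admin:ctrl]: applicationprofile:http_profile:compression_profile> type AUTO_COMPRESSION
[admin:ctrl]: applicationprofile:http_profile:compression_profile> remove_accept_encoding_header
[admin:ctrl]: applicationprofile:http_profile:compression_profile> save
[admin:ctrl]: applicationprofile:http_profile> save
[admin:ctrl]: applicationprofile> save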

Custom Compression
To create a custom compression filter:

1 Click +Compression Filter to create a custom filter.

2 In the Add Compression Filter section, configure the following.

Field Description

Filter Name Provide a unique name for the filter (optional).

Matching Rules Determine if the client (via Client IP or User Agent


string) is eligible to be compressed via the associated
Action. If both Client IP and User Agent rules are
populated, then both must be true for the compression
action to take effect.
n Client IP Address allows you to use an IP Group
to specify eligible client IP addresses. For example,
an IP Group called Intranet that contains a list of all
internal IP address ranges. Clearing the Is In button
reverses this logic, meaning that any client that is
not coming from an internal IP network will match
the filter.
n User Agent matches the client’s User-Agent string
against an eligible list contained within a String
Group. The User-Agent is a header presented by
clients indicating the type of browser or device
they may be using. The System-Devices-Mobile
Group contains a list of HTTP User-Agent strings for
common mobile browsers.


3 The Action section determines what will happen to clients or requests that meet the match
criteria, specifically the level of HTTP compression that will be used.

Field Description

Aggressive compression It uses Gzip level 6, which will compress text content by about 80% while
requiring more CPU resources from both NSX Advanced Load Balancer
and the client.

Normal compression It uses Gzip level 1, which will compress text content by about 75%, which
provides a good mix between compression ratio and the CPU resources
consumed by both NSX Advanced Load Balancer and the client.

No Compression It disables compression. For clients coming from very fast, high bandwidth
and low latency connections, such as within the same data center,
compression may actually slow down the transmission time and consume
unnecessary CPU resources.

HTTP Caching
NSX Advanced Load Balancer can cache HTTP content, thereby enabling faster page load times
for clients and reduced workloads for both servers and NSX Advanced Load Balancer. When a
server sends a response, such as logo.jpg, NSX Advanced Load Balancer can add the object to its
cache and serve it to subsequent clients that request the same object. This can reduce the number
of connections and requests sent to the server.

Enabling caching and compression allows NSX Advanced Load Balancer to compress text-based
objects and store both the compressed and original uncompressed versions in the cache.
Subsequent requests from clients that support compression will be served from the cache,
meaning that NSX Advanced Load Balancer will not need to compress every object every time,
which greatly reduces the compression workload.

Note Regardless of the configured caching policy, an object can be cached only if it is eligible for
caching. Some objects may not be eligible for caching.

By default, caching is deactivated. Click Enable Caching to configure options specific to caching.


Configure the caching properties, as required.

Field Description

X-Cache NSX Advanced Load Balancer will add an HTTP header labeled X-Cache for
any response sent to the client that was served from the cache. This header is
informational only and will indicate the object was served from an intermediary
cache.

Age Header NSX Advanced Load Balancer will add a header to the content served from the
cache that indicates to the client the number of seconds that the object has been
in an intermediate cache. For example, if the originating server declared that the
object should expire after 10 minutes and it has been in the NSX Advanced Load
Balancer cache for 5 minutes, then the client will know that it should only cache
the object locally for 5 more minutes.

Date Header If a date header was not added by the server, then NSX Advanced Load Balancer
will add a date header to the object served from its HTTP cache. This header
indicates to the client when the object was originally sent by the server to the
HTTP cache in NSX Advanced Load Balancer.

Cacheable Object Size The minimum and maximum size of an object (image, script, and so on) that can
be stored in the NSX Advanced Load Balancer HTTP cache, in bytes. Most objects
smaller than 100 bytes are web beacons and should not be cached despite being
image objects.

Cache Expire Time An intermediate cache must be able to guarantee that it is not serving stale
content. If the server sends headers indicating how long the content can be
cached (such as cache control), then NSX Advanced Load Balancer will use
those values. If the server does not send expiration timeouts and NSX Advanced
Load Balancer is unable to make a strong determination of freshness, then NSX
Advanced Load Balancer will store the object for no longer than the duration of
time specified by the Cache Expire Time.

Heuristic Expire If a response object from the server does not include the Cache-Control header
but does include an If-Modified-Since header, then NSX Advanced Load Balancer
will use this time to calculate the cache-control expiration, which will supersede
the Cache Expire Time setting for this object.


Cache URL with Query Arguments This option allows caching of objects whose URI includes a query argument.
Disabling this option prevents caching these objects. When enabled, the request
must match the URI query to be considered a hit. Below are two examples of URIs
that include queries. The first example may be a legitimate use case for caching
a generic search, while the second may be a unique request posing a security
liability to the cache.
n www.search.com/search.asp?search=caching
n www.foo.com/index.html?loginID=User

Cacheable MIME Types Statically defines a list of cacheable objects. This may be a string group, such as
System-Cacheable-Resource-Types, or a custom comma-separated list of MIME
types that NSX Advanced Load Balancer should cache. If no MIME types are listed
in this field, then NSX Advanced Load Balancer will by default assume that any
object is eligible for caching.

Non-Cacheable MIME Types Statically defines a list of objects that are not cacheable. This creates a blacklist that
is the opposite of the cacheable list.
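
The following is an illustrative response as it might be served to a client from the SE cache, showing the X-Cache, Age, and Date headers described above. The header values shown here are hypothetical; the exact format can vary by release and configuration.

HTTP/1.1 200 OK
Date: Tue, 01 Mar 2022 10:15:00 GMT
Content-Type: text/css
Content-Encoding: gzip
X-Cache: HIT AVI
Age: 42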

HTTP DDoS
The Distributed Denial of Service (DDoS) section allows the configuration of mitigation controls for
HTTP and the underlying TCP protocols. By default, NSX Advanced Load Balancer is configured
to protect itself from a number of types of attacks. For instance, if a virtual service is targeted by a
SYN flood attack, NSX Advanced Load Balancer will activate SYN cookies to validate clients before
opening connections. Many of the options listed below are not quite as straightforward, as bursts
of data may be normal for the application. NSX Advanced Load Balancer provides a number of
knobs to modify the default behavior to ensure optimal protection.

In addition to the DDoS settings described below, NSX Advanced Load Balancer also can
implement connection limits to a virtual service and a pool, configured through the Advanced
properties page. Virtual services also may be configured with connection rate limits and burst
limits in the Network Security Policies section. Because these settings apply to individual virtual
services and pools, they are not configured within the profile.


HTTP Limits
The first step in mitigating HTTP-based denial of service attacks is to set parameters for the
transfer of headers and requests from clients. Many of these settings protect against variations of
HTTP SlowLoris and SlowPOST attacks, in which a client opens a valid connection then very slowly
streams the request headers or POSTs a file. This type of attack is intended to overwhelm the
server (in this case the SE) by tying up buffers and connections.

Clients that exceed the limits defined below will have that TCP connection reset and a log
generated. This does not prevent the client from initiating a new connection and does not
interrupt other connections the same client might have open.

Field Description

Client Header Timeout Set the maximum length of time the client is allowed for successfully
transmitting the complete headers of a request. The default is 10 seconds.

HTTP Keep-alive Timeout Set the maximum length of time an HTTP 1.0 or 1.1 connection may be idle.
This affects only client-to-NSX Advanced Load Balancer interaction. The
NSX Advanced Load Balancer-to-server keep-alive is governed through the
Connection Multiplex feature.

Client Body Timeout Set the maximum length of time for the client to send a message body. This
usually affects only clients that are POSTing (uploading) objects. The default
value of 0 disables this timeout

Post Accept Timeout Once a TCP three-way handshake has been completed, the client has this
much time to send the first byte of the request header. Once the first byte
has been received, this timer is satisfied and the client header timeout
(described above) kicks in.

Send Keep-Alive header Check this to send the HTTP keep-alive header to the client.

Use App Keep-Alive Timeout When the above parameter is checked such that keep-alive headers are
sent to the client, a timeout value needs to be specified therein. If this box
is unchecked, NSX Advanced Load Balancer will use the value specified in
the HTTP Keep-Alive Timeout field. If it is checked, the timeout sent by the
application will be honored.

Client Post Body Size Set the maximum size of the body of a client request. This generally limits
the size of a client POST. Setting this value to 0 disables this size limit.

Client Request Size Set the maximum combined size of all the headers in a client request.

Client Header Size Set the maximum size of a single header in a client request.
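
These limits correspond to fields on the HTTP application profile object. The following is a minimal CLI sketch; the field names and units shown (for example, client_header_timeout in milliseconds and client_max_header_size in KB) are assumptions and should be verified against the object model on your Controller before use.

# Hypothetical sketch - verify field names and units on your Controller
configure applicationprofile Example-HTTP-Limits
   http_profile
      client_header_timeout 10000
      client_body_timeout 30000
      keepalive_timeout 30000
      post_accept_timeout 30000
      client_max_header_size 12
      client_max_request_size 48
      client_max_body_size 0
      save
   save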

Rate Limits
This section controls the rate at which clients may interact with the site. Each enabled rate limit has
three settings:

Field Description

Threshold The number of connections, packets, or HTTP requests that must occur within the
specified time period for the client to be considered in violation of the rate limit.

Time Period The span of time over which the threshold of connections, packets, or HTTP requests
is measured.

Action Select the action to perform when a client has exceeded the rate limit. The options
will depend on whether the limit is a TCP limit or an HTTP limit.
n Report Only— A log is generated on the virtual server log page. By default, no
action is taken. However, this option may be used with an alert to generate an
alert action to send a notice to a remote destination or to take action through a
ControlScript.
n Drop SYN Packets — For TCP-based limits, silently discard TCP SYNs from the
client. NSX Advanced Load Balancer also will generate a log. However, during
high volumes of DoS traffic, repetitive logs may be skipped.
n Send TCP RST — Reset client TCP connection attempts. While more graceful
than the Drop SYN Packet option, sending a TCP reset does generate extra
packets for the reset, versus the Drop SYN Packet option which does not
send a client response. NSX Advanced Load Balancer also will generate a log.
However, during high volumes of DoS traffic, repetitive logs may be skipped.
n Close TCP Connection — Resets a client TCP connection for an HTTP rate limit
violation.
n Send HTTP Local Response — The Service Engine will send an HTTP response
directly to the client without forwarding the request to the server. Select the
HTTP status code of the response, and optionally a response page.
n Send HTTP Redirect — Redirect the client to another location.

The following rate limits can be configured.


Rate Limit Connections from a Client Rate limit all connections made from any single client IP
address to the virtual service.

Rate Limit Requests from a Client to all URLs Rate limit all HTTP requests from any single client IP
address to all URLs of the virtual service.

Rate Limit Requests from all Clients to a URL Rate limit all HTTP requests from all client IP addresses to
any single URL.

Rate Limit Requests from a Client to a URL Rate limit all HTTP requests from any single client IP
address to any single URL.

Rate Limit Failed Requests from a Client to all URLs Rate limit all requests from a client for a specified period
of time once the count of failed requests from that
client crosses a threshold for that period. Clients are
tracked based on their IP address. Requests are deemed
failed based on client or server-side error status codes,
consistent with how NSX Advanced Load Balancer logs
and how metrics subsystems mark failed requests.

Rate Limit Failed Requests from all Clients to a URL Rate limit all requests to a URI for a specified period
of time once the count of failed requests to that URI
crosses a threshold for that period. Requests are deemed
failed based on client- or server-side error status codes,
consistent with how NSX Advanced Load Balancer logs
and metrics subsystems mark failed requests.

Rate Limit Failed Requests from a Client to a URL Rate limit all requests from a client to a URI for a
specified period of time once the count of failed requests
from that client to the URI crosses a threshold for that
period. Requests are deemed failed based on client- or
server-side error status codes, consistent with how NSX
Advanced Load Balancer logs and metrics subsystems
mark failed requests.

Rate Limit Scans from a Client to all URLs Automatically track clients and classify them into three
groups: Good, Bad, and Unknown. Clients are tracked
based on their IP address. Clients are added to the
Good group when the NSX Advanced Load Balancer scan
detection system builds a history of requests from the
clients that complete successfully. Clients are added to the
Unknown group when there is insufficient history about
them. Clients with a history of failed requests are added
to the Bad group and their requests are rate limited
with stricter thresholds than the Unknown client's group.
The NSX Advanced Load Balancer scan detection system
automatically tunes itself so that the Good, Bad, and
Unknown client-IP group members change dynamically
with changes in traffic patterns through NSX Advanced
Load Balancer. In other words, if a change to the
website causes mass failures (such as 404 errors) for most
customers, NSX Advanced Load Balancer adapts and does
not mark all clients as attempting to scan the site.

Rate Limit Scans from all Clients to all URLs Similar to the previous limit, but restricts the scanning from
all clients as a single entity rather than individually. Once
a limit is collectively reached by all clients, any client that
sends the next failed request will be reset.


Note You can upload any type of file as a local response. It is recommended to configure the local
file using the UI. To update the local file using the API, base64-encode the file out of band and use
the encoded content in the API.

DNS Profile
A DNS application profile specifies settings dictating the request-response handling by NSX
Advanced Load Balancer.

By default, this profile will set the virtual service’s port number to 53, and the network protocol to
UDP with per-packet parsing.


Field Description

Number of IPs returned by DNS server Specifies the number of IP addresses returned by the DNS service. Default is 1.
Enter 0 to return all IP addresses. Otherwise, the valid range is 1 to 20.

TTL The time in seconds (default = 30) a served DNS response is to be considered valid
by requestors of the DNS service. The valid range is 1 to 86400 seconds.

Subnet prefix length This length is used in concert with the DNS client subnet (ECS) option. When the
incoming request does not have any ECS and the prefix length is specified, NSX
Advanced Load Balancer inserts an ECS option in the request to upstream servers.
Valid lengths range from 1 to 32.

Process EDNS Extensions This option makes the DNS service aware of the Extension mechanism for DNS
(EDNS). EDNS extensions are parsed and shown in logs. For GSLB services, the
EDNS subnet option can be used to influence load balancing. EDNS is supported.

Negative TTL Specifies the TTL value (in seconds) for the SOA (Start of Authority) record's minimum
TTL served by the DNS virtual service, corresponding to an authoritative domain owned
by this DNS virtual service. Negative TTL is a value in the range 0-86400.

Invalid DNS Query Processing Specifies whether the DNS service should drop or respond to a client when
processing its request results in an error. By default, such a request is dropped
without any response, or passed through to a passthrough pool, if configured.
When set to respond, an appropriate response is sent to the client, e.g., an
NXDOMAIN response for non-existent records or an empty NOERROR response for
unsupported queries.

Respond to AAAA queries with empty response Enable this option to have the DNS service respond to AAAA queries with an
empty response when there are only IPv4 records.

Rate Limit Connections from a Client Limits connections made from any single client IP address to the DNS virtual
service for which this profile applies. The default (=0) is interpreted as no rate
limiting.

Threshold Specifies the maximum number of connections, requests, or packets to be
processed within the time value specified in the Time Period field (legitimate values
range from 10 to 2500). Traffic above this number results in rate limiting. Specifying
a number higher than 0 makes the Time Period field mandatory.

Time Period The span of time, in seconds, during which NSX Advanced Load Balancer monitors
for an exceeded threshold. The allowed range is from 1 to 300. NSX Advanced
Load Balancer calculates the inbound request rate and takes the specified action if
it is exceeded. This rate is the ratio of the maximum number to the time span.

Action Choose one of three actions from the pulldown to be performed when rate limiting
is required: Report Only, Drop SYN Packets, or Send TCP RST.

Preserve Client IP Address Enable this option to have the client IP address pass through to the back end. Be
sure you understand what the back-end DNS servers expect and what they will do
when offered the client IP address. This option is not compatible with connection
multiplexing.

Valid subdomains A comma-delimited allowlist of subdomain names. Identifies the subdomains
serviced by the DNS virtual service with which this profile is associated; all others
will not be processed. This option's best use is in the context of GSLB, in which
the GSLB DNS' sole purpose is to return IP addresses corresponding to the
global applications being served. Valid subdomains are configured with ends-with
semantics.

Authoritative Domain Names A comma-delimited set of domain names for which the GSLB DNS’ SEs can provide
authoritative translation of FQDNs to IP addresses. Queries for FQDNs that are
subdomains of these domains and do not have any DNS record in NSX Advanced
Load Balancer are either dropped or an NXDOMAIN response is sent (depending
on the option set for invalid DNS queries, described above). Authoritative domain
names are configured with ends-with semantics.

Note
n All labels in a subdomain and authoritative domain names must be complete. To illustrate
by example, suppose alpha.beta.com, delta.beta.com, delta.eta.com, and gamma.eta.com are
valid FQDNs. If we intend the GSLB DNS to return authoritative responses to queries for each
of the four FQDNs, two authoritative domains could be identified, beta.com and eta.com. It
is not sufficient to stipulate eta.com alone: although alpha.beta.com and delta.beta.com end
with the string "eta.com", "eta" is not a complete label within those names, so they do not
match eta.com.

n EDNS option is enabled by default for the System-DNS profile. If NSX Advanced Load
Balancer is upgraded from an older version to a newer version, EDNS is not enabled by
default in the existing DNS profile. However, if a new DNS profile is created on the same NSX
Advanced Load Balancer Controller, EDNS is enabled by default.
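
For reference, a DNS application profile covering a few of the settings described above could be created from the CLI. This is a minimal sketch; the dns_service_profile field names shown are assumptions and should be verified against the object model on your Controller.

# Hypothetical sketch - verify field names on your Controller
configure applicationprofile Example-DNS-Profile
   type application_profile_type_dns
   dns_service_profile
      num_dns_ip 1
      ttl 30
      negative_caching_ttl 30
      edns
      error_response dns_error_response_error
      save
   save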

L4 Profile
The L4 Profile is used for any virtual service that does not require application-layer proxying.

Note Using an L4 profile is equivalent to setting the virtual service’s application profile to ‘none’.

Rate limits may be placed on the number of TCP connections or UDP packets that may be made to
the virtual service from a single client IP address.


Field Description

Threshold The number of connections (TCP) or packets (UDP) that must be reached within the
specified time period for the client to be considered in violation of the rate limit.

Time Period The span of time over which the threshold of connections (TCP) or packets (UDP) is
measured.

Action Select the action to perform when a client has exceeded the rate limit.
n Report Only — A log is generated in the virtual service logs page. By default,
no action is taken. However, this option may be used with an alert to generate
an alert action to send a notice to a remote destination or to take action using a
ControlScript.
n Drop SYN Packets — For TCP-based limits, silently discard TCP SYNs from the
client. NSX Advanced Load Balancer also will generate a log. However, during
high volumes of DoS traffic, repetitive logs may be skipped.
n Send TCP RST — Reset client TCP connection attempts. While more graceful
than the Drop SYN Packet option, sending a TCP reset does generate extra
packets for the reset, versus the Drop SYN Packet option which does not
send a client response. NSX Advanced Load Balancer also will generate a log.
However, during high volumes of DoS traffic, repetitive logs may be skipped.

Syslog Profile
The Syslog application profile allows NSX Advanced Load Balancer to decode the Syslog protocol.
This profile sets the virtual service to understand Syslog, and the network profile to UDP with
per-stream parsing.

SIP Profile
SIP profile allows NSX Advanced Load Balancer to process traffic for SIP applications. This profile
defines the transaction timeout allowed for SIP traffic through NSX Advanced Load Balancer.
Configure the timeout within the range of 16 to 512 seconds.

Redirect HTTP to HTTPS


It is an industry best practice to ensure that all HTTP traffic is SSL-encrypted as HTTPS, for
secure access. Typical end users do not specify the HTTPS protocol when entering URLs for
requests, so the initial requests arrive over HTTP. As the NSX Advanced Load Balancer can
provide SSL termination services, it must also handle redirecting HTTP users to HTTPS. You
can enable HTTP-to-HTTPS redirect in any of the ways presented in this section. The methods are
listed in order from simplest (with fewest options) to most advanced.

Configuration Using Application Profile


n Option 1

If the virtual service is configured for both HTTP (usually port 80) and HTTPS (usually SSL on port
443), enable HTTP-to-HTTPS redirect through the attached HTTP application profile.


Use the following steps to configure HTTPS redirect through the application profile.

n Navigate to Applications > Virtual Services, select the desired virtual service, click the edit
icon on the top right corner, and navigate to the Profiles section under Settings tab.

n Select the edit option for the attached Application Profile (System-HTTP profile), and
navigate to the Security tab. Under the SSL Everywhere section of this tab, select the HTTP-
to-HTTPS-Redirect check box.

The NSX Advanced Load Balancer also has the option for System-Secure-HTTP profile in the
drop-down menu for the Application Profile. This profile is identical to the System-HTTP profile
with the exception that the SSL Everywhere check box, that includes the HTTP to HTTPS Redirect
option, is already enabled.

n Option 2

Rewrite Server Redirects to HTTPS option is available under the Security tab in Edit Application
Profile screen. This option changes the Location header of redirects from HTTP to HTTPS, and
also removes any hard-coded ports. The following example shows a Location header sent from a
server.

http://www.test.com:5000/index.htm

The NSX Advanced Load Balancer rewrites the Location header, sending the following request to
the client.

https://www.test.com/index.htm

Note

n Absolute redirects are altered, while relative redirects are not. Therefore it is suggested to
have both the check boxes enabled.

n This profile setting does not have any impact on virtual services that do not have HTTPS
configured.
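
Both of the options above map to boolean fields on the HTTP application profile and can also be toggled from the CLI. The following is a minimal sketch, assuming the fields are named http_to_https and server_side_redirect_to_https; confirm the names on your Controller before use.

# Hypothetical sketch - field names are assumptions
configure applicationprofile Example-HTTP-Redirect
   http_profile
      http_to_https
      server_side_redirect_to_https
      save
   save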

Configuration Using HTTP Request Policy


For more granularity, use a HTTP Request Policy. Navigate to Applications > Virtual Services.
Click the edit icon against one of the listed virtual services. Navigate to Policies > HTTP Request
tab and click the Create icon.

Enter the desired name for the new rule, select Service Port from the drop-down menu under
Matching Rules, and provide 80 as the value for Ports option.

Optionally, the required criteria can be added to determine when to perform the redirect by
choosing between Is and Is not options and specifying one or more ports.

Note When redirecting to the same virtual service, you must specify a match criteria to prevent a
redirect loop.


Under the Action section, select Redirect from the drop-down menu. Set the Protocol to HTTPS.
This sets the redirect Port to 443 and the redirect response code (Status Code) to 302 (temporary
redirect).

HTTP Request Policies are quick and easy to set up, and impact only a single virtual service at a
time. For more information on the usage of HTTP request policy, see HTTP Request Policy.
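
As a minimal CLI sketch of the rule described above (matching service port 80 and redirecting to HTTPS on port 443), the configuration might look like the following. The match and action field names are assumptions modeled on the policy output shown later in this section; verify them on your Controller before use.

# Hypothetical sketch - match/action field names are assumptions
configure httppolicyset Example-Redirect-Policy
   http_request_policy
      rules index 1
         name Redirect-to-HTTPS
         match
            vs_port
               match_criteria is_in
               ports 80
               save
            save
         redirect_action
            protocol https
            port 443
            status_code http_redirect_status_code_302
            save
         save
      save
   save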

Adding a Query
The field add_string is used for redirect action in the HTTP Request policy. When the field
keep_query is enabled, the query parameters of the incoming request are used in the final
redirect URI. The field add_string appends the query string to the Redirect URI. To understand
how keep_query and add_string work, consider the following example.

Assume that http://test.example.com/images?name=animals is the incoming request and it has
to be redirected to http://google.com.

keep_query add_string Redirect Link

Enabled Not configured http://google.com/images?name=animals

Disabled Not configured http://google.com/images

Enabled Set to `type=cats&color=black` http://google.com/images?name=animals&type=cats&color=black

Disabled Set to `type=cats&color=black` http://google.com/images?type=cats&color=black

Use the following commands to configure this through the CLI.

[admin:abc-controller]: > configure httppolicyset vs1-Default-Cloud-HTTP-Policy-Set-0
[admin:abc-controller]: httppolicyset> http_request_policy
[admin:abc-controller]: httppolicyset:http_request_policy> rules index 1
[admin:abc-controller]: httppolicyset:http_request_policy:rules> redirect_action
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action>
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> add_string
images=cat keep_query
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> status_code
http_redirect_status_code_302
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> port 80
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> host
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host> type
uri_param_type_tokenized
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host> tokens
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host:tokens>
type uri_token_type_string str_value www.google.com
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action:host> save
[admin:abc-controller]: httppolicyset:http_request_policy:rules:redirect_action> save
[admin:abc-controller]: httppolicyset:http_request_policy:rules> save
[admin:abc-controller]: httppolicyset:http_request_policy> save

[admin:abc-controller]: httppolicyset> save


+------------------------+----------------------------------------------------+
| Field | Value |
+------------------------+----------------------------------------------------+
| uuid | httppolicyset-2ee531f1-1592-4471-98df-a3fc7d9819d7 |
| name | vs1-Default-Cloud-HTTP-Policy-Set-0 |
| http_request_policy | |
| rules[1] | |
| name | Rule 1 |
| index | 1 |
| enable | True |
| match | |
| method | |
| match_criteria | IS_IN |
| methods[1] | HTTP_METHOD_GET |
| redirect_action | |
| protocol | HTTP |
| host | |
| type | URI_PARAM_TYPE_TOKENIZED |
| tokens[1] | |
| type | URI_TOKEN_TYPE_STRING |
| str_value | www.vmware.com |
| tokens[2] | |
| type | URI_TOKEN_TYPE_STRING |
| str_value | www.google.com |
| port | 80 |
| keep_query | True |
| status_code | HTTP_REDIRECT_STATUS_CODE_302 |
| add_string | images=cat |
| is_internal_policy | False |
| tenant_ref | admin |
+------------------------+----------------------------------------------------+

Using DataScript
For maximum granularity and re-usability, use a DataScript to specify the redirect behavior.
DataScript can be used for both basic or complex requirements. Use the following steps to
configure HTTPS redirect using DataScript.

n Navigate to Applications > Virtual Service and click the edit icon for the desired virtual
service.

n Navigate to Policies > DataScripts tab.

n Click the Add DataScript button to create a new DataScript policy.

n Select Create DataScript from the drop-down menu.

n Provide a name for the script. Under Events tab, click Add and choose HTTP Request from
the drop-down menu.


n Paste the following text in the space provided and click Save.

if avi.vs.port() ~= "443" then
   avi.http.redirect("https://" .. avi.http.hostname() .. avi.http.get_uri())
end

For more information on using DataScript for redirecting HTTP to HTTPS, see DataScript for HTTP
Redirect.

Overview of SSL/TLS Termination


NSX Advanced Load Balancer supports the termination of SSL- and TLS-encrypted HTTPS traffic.
The SSL and TLS names are used interchangeably throughout the documentation unless otherwise
noted.

Using NSX Advanced Load Balancer as the endpoint for SSL enables it to maintain full visibility
into the traffic and also to apply advanced traffic steering, security, and acceleration features. The
following deployment architectures are supported for SSL:

n None: SSL traffic is handled as pass-through (layer 4), flowing through NSX Advanced Load
Balancer without terminating the encrypted traffic.

n Client-side: Traffic from the client to NSX Advanced Load Balancer is encrypted, with
unencrypted HTTP to the back-end servers.

n Server-side: Traffic from the client to NSX Advanced Load Balancer is unencrypted HTTP, with
encrypted HTTPS to the back-end servers.

n Both: Traffic from the client to NSX Advanced Load Balancer is encrypted and terminated at
NSX Advanced Load Balancer, which then re-encrypts traffic to the back-end server.

n Intercept: Terminate client SSL traffic, send it unencrypted over the wire for taps to intercept,
then encrypt to the destination server.

Configuring SSL/TLS Termination


NSX Advanced Load Balancer supports multiple architectures for terminating SSL traffic. For
client-to-NSX Advanced Load Balancer SSL, the configuration is done on the virtual service page.
For NSX Advanced Load Balancer-to-server SSL encryption, the configuration is performed by
editing the pool. In either case, the virtual service or pool must be configured with an SSL profile
and an SSL certificate, as described in the following sections.

n Virtual Service Configuration

n Pool Configuration

n Server Name Indication (SNI)


SSL Profile
The profile contains the settings for the SSL-terminated connections. This includes the list of
supported ciphers and their priority, the supported versions of SSL/TLS, and a few other options.

n SSL Profile

n App Transport Security

n SSL Version Support

n Configure Strong SSL Cipher Strength

SSL Certificate
An SSL certificate is presented to a client to authenticate the application. A virtual service may be
configured with two certificates at the same time, one each of RSA and elliptic curve cryptography
(ECC). A certificate may also be used for authenticating NSX Advanced Load Balancer to back-end
servers.

n SSL Certificates

n EC versus RSA Certificate Priority

n Notification of Certificate Expiration

n Client Certificate Validation / PKI Profile

n Physical Security for SSL Certificates

n Thales nShield Integration

SSL Performance
SSL-terminated traffic performance depends on the underlying hardware allocated to the NSX
Advanced Load Balancer SE, the number of SEs available to handle the virtual service, and the
certificate and cipher settings negotiated. Generally, each vCPU core can handle about 1000 RSA
2K transactions per second (TPS) or 2500 ECC SSL TPS. A vCPU core can push about 1 Gb/s
SSL throughput. SSL-terminated concurrent connections are more expensive than straight HTTP
or layer 4 connections and may necessitate additional memory to sustain high concurrency.

n SSL Performance

n SE Memory Consumption
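
As a rough, back-of-the-envelope sizing sketch using the ballpark figures above (actual capacity depends on hardware, ciphers, and key sizes), consider a virtual service that must sustain 4000 new RSA 2K sessions per second and 2 Gb/s of SSL throughput:

vCPU cores for TPS:         4000 TPS / ~1000 RSA 2K TPS per core  = ~4 cores
vCPU cores for throughput:  2 Gb/s   / ~1 Gb/s per core           = ~2 cores
Estimated SE sizing:        max(4, 2) = ~4 vCPU cores, plus headroom for bursts and concurrency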

Additional Topics
SSL is a complicated subject, occasionally requiring redirects, rewrites, and other manipulation
of HTTP to ensure proper traffic flow. NSX Advanced Load Balancer includes a number of useful
tools for troubleshooting and correcting SSL-related issues. They are described in the articles
below:

n SSL Everywhere

n HTTP to HTTPS Redirect


n SSL Visibility and Troubleshooting

TCP or UDP Profile


A TCP or UDP profile determines the type and settings of the network protocol that a subscribing
virtual service will use. It sets several parameters, such as whether the virtual service is a TCP
proxy versus a pass-through using a fast path. A virtual service can have both TCP and UDP
enabled, which is useful for protocols such as DNS or Syslog.

NSX Advanced Load Balancer rewrites the client IP address before sending any TCP connection
to the server, regardless of which type of TCP profile is used by a virtual service. Similarly, the
destination address is rewritten from the virtual service IP address to the IP address of the server.
The server always sees the source IP address of the Service Engine. UDP profiles have an option
to disable SE source NAT.

For the UDP and TCP fast path modes, connections occur directly between the client and the
server, even though the IP address field of the packet has been altered. For HTTP applications,
NSX Advanced Load Balancer can insert the client’s original IP address using X-Forwarded-For
(XFF) into an HTTP header sent to the server. For more information, see X-Forwarded-For Header
Insertion.

The following profiles are explained in detail with information on how to create them:

n TCP Fast Path Configuration

n TCP Proxy

n UDP Fast Path

n UDP Proxy

TCP Settings

Data-Plane TCP Stack


This section focuses on the custom TCP stack for data plane services running on the NSX
Advanced Load Balancer SEs. Though SEs run on Ubuntu Linux, the data NICs utilize a modified
BSD TCP stack to provide a leaner profile and faster performance. The different modes that the
NSX Advanced Load Balancer supports for handling TCP traffic and various parameters that can
be tuned for optimization of the TCP traffic are also detailed here.

Note This section focuses on the data-plane NICs of the NSX Advanced Load Balancer SEs.
The TCP stack outlined, excludes the SE management NIC and the NSX Advanced Load Balancer
Controller, which rely on a different TCP stack.


[Figure: NSX Advanced Load Balancer SE running on a hypervisor. The eth0 interface is the management NIC, while eth1 and eth2 are data-plane NICs.]

TCP Settings - TCP Profile


Every virtual service configured on the NSX Advanced Load Balancer requires a TCP/UDP
profile. The profile is a reusable template containing defined settings for establishing network
connections. Two different modes are supported to handle TCP traffic for a virtual service:

n TCP Proxy

n TCP Fast Path

By default, most new virtual services use the System-TCP-Proxy profile, which is configured for
TCP proxy. This is the recommended setting for protocols such as HTTP. Some protocols such as
DNS, can automatically select a different TCP/UDP profile, such as UDP.

TCP Fast Path


A TCP fast path profile does not proxy TCP connections. It directly connects clients to the
destination server and translates the destination virtual service address to the IP address of the
chosen destination server. The source IP address of the client can be NATed to the IP address of
the SE. The option for configuring this is available through settings in the SE group and other
profiles.

On receiving a TCP SYN from the client, the NSX Advanced Load Balancer makes a load-
balancing decision and forwards the SYN and all subsequent packets directly to the server.
The client-to-server communication occurs over a single TCP connection, using the parameters,
sequence numbers, and TCP options negotiated between the client and the server.


TCP Fast Path Profile


The options of a TCP proxy profile are not relevant in a TCP fastpath configuration, because the
TCP session is negotiated directly between the client and server, with the SE performing only NAT
operations. The fastpath profile type has the following settings:

n Enable SYN Protection — When disabled, the NSX Advanced Load Balancer performs load
balancing based on the initial client SYN packet. The SYN is forwarded to the server. The
NSX Advanced Load Balancer merely forwards the packets between the client and the
server, leaving servers vulnerable to SYN flood attacks from spoofed IP addresses. With
SYN protection enabled, the NSX Advanced Load Balancer proxies the initial TCP three-way
handshake with the client to validate that the client is not a spoofed source IP address. Once
the three-way handshake has been established, the NSX Advanced Load Balancer replays the
handshake on the server side. After the client and server are connected, it drops back to the
pass through (fastpath) mode. This process is also called delayed binding.

Note Consider using TCP Proxy mode for maximum TCP security.

n Session Idle Timeout — Idle flows terminate (time out) after the specified period. The NSX
Advanced Load Balancer issues a TCP reset to both the client and the server.

TCP Fast Path Configuration


This section describes configuration of TCP Fast Path through the NSX Advanced Load Balancer
UI.

To create a TCP fast path network profile:

Procedure

1 Navigate to Templates > Profiles > TCP/UDP.

2 Click Create and select TCP Fast Path from the drop-down list.

3 Enter the Name of the network profile.

4 Configure Direct Server Return (DSR) if required.

a Click Enable DSR.

b Click the DSR Type (L2 or L3) to select the mode.

c Select IPinip as the DSR Encapsulation Type.


5 Configure the TCP Fast Path Settings.

a Click Enable Syn Protection.

NSX Advanced Load Balancer will complete the three-way handshake with the client
before forwarding any packets to the server. It will protect the server from SYN flood
and half open SYN connections.

b Enter the Session Idle Timeout (between 5-14400 seconds).

This is the time for which a connection needs to be idle before it is eligible to be deleted.

Note Enter 0 to make the session idle timeout infinite.

6 Click Save.

Disabled by default, the SYN Protection parameter modifies the connection setup behavior
slightly. The client's initial three-way handshake is first proxied by the NSX Advanced Load
Balancer SE. On completion of the three-way handshake, the SE replays this process on the
server side, including passing through the TCP options supported by the client. This enables the
NSX Advanced Load Balancer to provide TCP DoS mitigation and validation of the connection
before handing off the connection to the server. The same profile can also be created through
the CLI, as shown in the sketch below.
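
The following is a minimal CLI sketch of a TCP fast path network profile. It assumes the networkprofile object exposes a tcp_fast_path_profile with enable_syn_protection and session_idle_timeout fields; verify the names on your Controller before use.

# Hypothetical sketch - field names are assumptions
configure networkprofile Example-TCP-Fast-Path
   profile
      type protocol_type_tcp_fast_path
      tcp_fast_path_profile
         enable_syn_protection
         session_idle_timeout 300
         save
      save
   save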

TCP Proxy
The TCP proxy terminates client connections to the virtual service, processes the payload, and
then opens a new TCP connection to the destination server. Any application data from the
client that is destined for a server is forwarded to that server over the new server-side TCP
connection. Separating (or proxying) the client-to-server connections enables the NSX Advanced
Load Balancer to provide enhanced security, such as TCP protocol sanitization and denial of
service (DoS) mitigation.

[Figure: TCP proxy connection flow. The client completes a SYN, SYN+ACK, ACK handshake with the Service Engine, which then completes a separate SYN, SYN+ACK, ACK handshake with the server; requests and responses are then proxied between the two connections.]

The TCP proxy mode also provides better client and server performance, such as maximizing
client and server TCP maximum segment size (MSS) or window sizes independently, and buffering
server responses.


Each connection negotiates the optimal TCP settings for the connecting device. For example,
consider a client connecting to the NSX Advanced Load Balancer with a 1400-byte MTU, while the
server is connected to it with a 1500-byte MTU. In this case, the NSX Advanced Load Balancer
buffers the 1500-byte server responses and sends them back to the client separately as 1400-byte
responses.

If the client connection drops a packet, the NSX Advanced Load Balancer handles re-transmission,
as the server might have already finished the transmission and moved on to handling the next
client request. This optimization is particularly useful in environments with high-bandwidth, low-
latency connectivity to the servers and low-bandwidth, high-latency connectivity to the clients (as
is typical of Internet traffic).

Use a TCP/UDP profile with the type set to Proxy for application profiles such as HTTP.

To create a TCP proxy network profile,

1 In the New TCP/UDP Profile screen, enter the Name of the network profile.

2 Select TCP Proxy as the Type.

3 Under TCP Proxy, select the mode (Auto Learn or Custom) to set the configurations for this
profile.

4 Click Save.

TCP Parameters
The NSX Advanced Load Balancer exposes only the configurable parameters of the TCP protocol
that might have tangible benefits on application performance. Additional configuration options are
available through the NSX Advanced Load Balancer CLI or REST API.


Auto Learn
Auto Learn mode sets all parameters to default values and dynamically changes the buffer size.

In practice, many NSX Advanced Load Balancer administrators have found that manual TCP
tweaking is rarely needed. The default TCP Profile in NSX Advanced Load Balancer is set to Auto
Learn, and a majority of its customers might never have to deviate from this top-level setting. This
approach reduces the complexity involved in managing application delivery platforms and
simplifies service consumption by application owners.

With the TCP Proxy profile, enabling Auto Learn makes the NSX Advanced Load Balancer set
the configuration parameters. The NSX Advanced Load Balancer can make changes to the TCP
settings at any point in time. For example, if an SE is running low on memory, it might reduce
buffers or window sizes to ensure application availability.

On selecting the auto learn mode, the default values configured in each field are as shown in the
following table:


Settings Default Value

TCP Keep Alive Enabled

Idle Duration 10 minutes. After 10 minutes of idle time, the NSX Advanced Load
Balancer initiates the TCP keepalive protocol. If the other
side responds, the connection continues to live.

Max Retransmissions 8

Max SYN Retransmissions 8

IP DSCP No special DSCP values used.

Nagles Algorithm Disabled.

Buffer Management The receive window advertised to the client and on the
server dynamically change. It starts small (2 KB) and
can grow when needed up to 64 MB for a single TCP
connection. The algorithm also takes into account the
amount of memory available in the system and the number
of open TCP connections.

Custom Mode
The custom mode is used to configure the TCP Proxy Settings manually. When the TCP proxy
profile is set to custom, administrators can use the NSX Advanced Load Balancer UI, CLI or REST
API to alter the TCP proxy profile default parameters described in the following section.

Timeout Parameters
Idle Connections - After the time specified by the Idle Duration parameter, the NSX Advanced Load
Balancer terminates the connection. Any packet sent or received over the connection by the SE,
client, or server resets the Idle Duration timer.

n Select either TCP keepalive or Age Out Idle Connections to control the behavior of the idle
connections.

a TCP keepalive: Periodically send a keepalive packet to the client that will reset the idle
duration timer on successful client acknowledgment. The keepalive packet sent from the
SE does not reset the timer.


b Age Out Idle Connections: Terminates the idle connections that have no keep-alive signal
from the client, as specified by the Duration field. The NSX Advanced Load Balancer does
not send out keepalives, though it still honors keepalive packets received from clients or
servers.

n Enter the Idle Duration in seconds (between 5-14400 seconds, or a 0 for an infinite timeout).
This is the time before the TCP connection is eligible to be proactively closed by NSX
Advanced Load Balancer. The timer resets when any packet is sent or received by the client,
server or SE.

Note
n Setting this value higher can be appropriate for long-lived connections that do not use
keepalive packets. Higher settings can also increase the vulnerability of NSX Advanced
Load Balancer to denial of service attacks, as the system will not proactively close out idle
connections.

n The default value for Idle Duration is 600 seconds, and the range is 5 - 3600 seconds (0
seconds for an infinite timeout, which disables proactive closing of idle connections).

n When a connection between an SE and a client or between the SE and a server is closed, the
unique client or server IP:port + Service Engine IP:port combination (called a 4-tuple) is placed
in a TIME_WAIT state for some time. This 4-tuple cannot be reused until it is determined that
there are no more delayed packets on the network that are still in flight or yet to be delivered.
The Time Wait value defines the timeout period before this 4-tuple can be reused. Enter a
value between 500 – 2000 ms, or enable the Ignore Time Wait option to allow NSX Advanced
Load Balancer to immediately reopen the 4-tuple connection if it receives a SYN packet from
the remote IP that matches the same 4-tuple. Default value is 2000 ms.

Retransmission Behavior Parameters


1 Max Retransmissions - Enter a value (between 3 and 8). It is the number of attempts at
re-transmitting before closing the connection. Default value is 8.

2 Max SYN Retransmissions - Enter a value (between 3 and 8). It is the maximum number of
attempts at re-transmitting a SYN packet before giving up. Default value is 8.

Buffer Management Parameters


1 Receive Window - Informs the sender how much data the NSX Advanced Load Balancer can
buffer before sending a TCP acknowledgment. The value can be in the range between 32 KB
and 65536 KB.

2 Max Segment Size - The Max Segment Size (MSS) is calculated by using the Maximum
Transmission Unit (MTU) length for a network interface. The MSS determines the largest size of
data that can be safely inserted into a TCP packet.


In some environments, the MSS must be smaller than the MTU. For example, traffic between the
NSX Advanced Load Balancer and a client that is traversing a site-to-site VPN might require some
space reserved for padding with encryption data. Click Use Network Interface MTU for Size to
set the MSS based on the MTU size of the network interface. The MSS is set to MTU - 40 bytes to
account for the IP and TCP headers. For an MTU of 1500 bytes, the MSS is set to 1460.

Alternatively, you can enter a custom value in the range 512–9000 bytes.

QoS & Traffic Engineering Parameters


Differentiated Services Code Point (DSCP) allows NSX Advanced Load Balancer to either pass
through an existing differentiated services code point value or specify a custom number. DSCP is a
field in the IP header that can be used for classifying traffic.

The following parameters can be configured through the NSX Advanced Load Balancer CLI and
REST API; a combined CLI sketch follows the list below.

Congestion Control Parameters


1 Aggressive Congestion Avoidance - Congestion window defines the amount of data a
sender can reliably transmit without an ACK. The congestion window size keeps increasing up
to the maximum receive window, or until the network reaches its congestion limit. In networks
where there are no transmissions or timeouts observed, the NSX Advanced Load Balancer can
choose higher initial congestion windows to avoid slow start and ramp up TCP connections
faster. The following are the possible values for the field:

a Enabled — 10x.
b Disabled — 1x the size of the MSS.
c Default value is Disabled.

2 CC Algo - The congestion control algorithm governs the behavior for identifying and
responding to detected network congestion. The following are the possible values for the field:

a New Reno — A versatile TCP congestion control algorithm for most networks.
b Cubic — Designed for long fat networks (LFN), with high throughput and high latency.
c HTCP — Recommended only for high-throughput and high-latency networks.
d Default value is New Reno.

3 Congestion Recovery Scaling Factor - Defines the congestion window scaling factor
after recovery and used in conjunction with aggressive congestion avoidance. It can be in the
range 0 to 8 and defaults to 2.

4 Min Rexmt Timeout - TCP has built-in logic for ensuring that packets are received by the
remote device, failing which the sender re-transmits the packets. This parameter sets the
minimum time to wait before re-transmitting a packet. The value can be in between 50 and
5000 ms.


5 Reassembly Queue Size - Defines the size of the buffer used to reassemble TCP segments
when the segments have arrived out of order, i.e. the maximum number of TCP segments
that can be queued for reassembly. Lower values might lead to issues in downloading large
content or handling bulk traffic. The value can be between 0 and 5 k. Default value is 0
(provides unlimited queue size).

6 Reorder Threshold - Controls the number of duplicate ACKs required to trigger a re-
transmission. A higher value means less number of re-transmissions caused by packet
reordering. If out-of-order packets are common in the environment, it is advisable to use a
higher number. The value can be in between 1 and 100. Default value is 8 for public clouds
(e.g., AWS, Azure, GCP) and 3 for others.

7 Slow Start Scaling Factor - Congestion window scaling factor during slow start. It is
different from the window scaling factor. This parameter is in effect only when aggressive
congestion avoidance is enabled. The field value can be between 0 and 8. Default value is 1.

8 Time Wait Delay - The time to wait before closing a connection in the TIME_WAIT state.
The field can take the following values:

a Range — 500 - 2000 ms.

b Special — 0 (immediate close).

c Default — 0.
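
The following is a minimal CLI sketch of a custom TCP proxy profile combining several of the timeout, retransmission, and congestion-control parameters described above. The tcp_proxy_profile field names shown (for example, cc_algo and idle_connection_timeout) are assumptions and should be confirmed against the object model on your Controller.

# Hypothetical sketch - field names are assumptions
configure networkprofile Example-TCP-Proxy
   profile
      type protocol_type_tcp_proxy
      tcp_proxy_profile
         no automatic
         idle_connection_timeout 600
         max_retransmissions 8
         max_syn_retransmissions 8
         cc_algo cc_algo_new_reno
         reorder_threshold 10
         save
      save
   save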

There are a few more optimization parameters that are enabled by default in the NSX Advanced
Load Balancer TCP stack that cannot be changed by users. These parameters are described in the
following section.

Unalterable Parameters
1 Window Scaling Factor - Window scaling determines the amount of TCP data the receiver
(i.e., the SE) can buffer for a connection. The default initial window is 65535 bytes. For modern
TCP clients supporting this TCP extension, the window scaling factor increases this number
significantly by doubling the window size x number of times (where x is the scale factor).
This is helpful for networks with high latency and high throughput, which describes most
broadband Internet connections. The NSX Advanced Load Balancer window scale factor is 10,
which implies that it can buffer up to 67,107,840 bytes (65535 × 2^10) for a single connection
when the receive window is set to 65535.

2 Selective ACK - With selective acknowledgments, the data receiver can inform the sender
about all segments that have arrived successfully. So the sender needs to only re-transmit the
segments that have actually been lost. Consider the scenario where the first five packets are
successfully received, the sixth packet is lost and is not yet received and the packets seven
to ten are successfully received. In this case, without SACK, the sender would re-transmit all
packets starting from packet six, since it cannot figure out which packets were actually lost.
This would lead to unnecessary re-transmits, further consuming bandwidth and impacting TCP
performance. The value for this field is Enabled.


3 Limited Transmit Recovery - This parameter is used to more effectively recover lost
segments when the congestion window of a connection is small, or when a large number
of segments are lost in a single transmission window. The limited transmit algorithm allows
sending a new data segment in response to each of the first two duplicate acknowledgments
that arrive at the sender. Transmitting these segments increases the probability that TCP can
recover from a single lost segment using the fast re-transmit algorithm, instead of using a
costly re-transmission timeout. The value for this field is Enabled.

4 Delayed ACK - Instead of sending one ACK segment per data segment received, the NSX
Advanced Load Balancer can improve efficiency by sending delayed ACKs. This is part of TCP
congestion control. As per the RFC, the ACK delay is less than 0.5 seconds, and in a
stream of full-sized segments, an ACK is sent for at least every second segment.

Configuring MTU using the CLI


The Maximum Transmission Unit (MTU) can be configured as a global property, which sets the
MTU across all SEs managed by the Controller cluster. By default the MTU is learned using DHCP.
This can be manually set using the CLI. The following command sets the MTU to 1500 bytes. Two
examples illustrate the need to change MTU from the default:

n If the installation is in an environment using VXLAN or some other type of overlay network
(for example, OpenStack), the MTU must be reduced to accommodate the additional tunnel
headers.

n If the DHCP option sets the MTU to 9000 (jumbo), but the entire infrastructure (switches and
routers) does not support jumbo MTU. It can happen in AWS environments.

configure serviceengineproperties
se_runtime_properties
global_mtu 1500
Overwriting the previously entered value for global_mtu
save
save

Note NSX Advanced Load Balancer SEs support a maximum MTU of 1500 bytes.

Protection from TCP Attacks


Apart from performance tuning parameters of TCP, the NSX Advanced Load Balancer also
has in-built mechanisms to protect itself from some common TCP level attacks as explained in
the following section. This list is not exhaustive and includes some common attacks. For more
information, see Chapter 12 DDoS Attack Mitigation.


SYN flood

Description: A form of denial-of-service attack in which an attacker sends a succession of SYN
requests to a target system without acknowledging the SYN ACKs. This is done in an attempt to
consume enough server resources to make the system unresponsive to legitimate traffic.

Mitigation: The NSX Advanced Load Balancer starts sending SYN cookies by default if the TCP
table has half-open connections. There is currently no configuration to exempt specific clients from
this behavior. In a TCP fastpath profile where there is no TCP proxying, SYN protection can be
enabled, causing the NSX Advanced Load Balancer to delay establishing a TCP session with the
server until a complete three-way handshake with the client has taken place. This protects the
server from SYN flood or half-open states.

LAND attacks

Description: This acts like a SYN flood attack. The difference is that the source and destination IP
addresses are identical, which makes the IP stack process the same packet over and over again,
potentially leading to a crash of the victimized system.

Mitigation: When this attack is detected, the NSX Advanced Load Balancer drops the packets at
the dispatcher layer.

Port scan

Description: An attacker launches a port scan by sending TCP packets on various ports to find out
listening ports for the next level of attacks. Most of these ports are non-listening ports.

Mitigation: When this attack is detected, the NSX Advanced Load Balancer drops the packets at
the dispatcher layer.

UDP Fast Path


The UDP fast path profile enables a virtual service to support UDP. NSX Advanced Load Balancer
translates the client's destination virtual service address to the destination server address and
rewrites the source IP address of the client to the address of the SE when forwarding the packet
to the server. This ensures that server response traffic traverses symmetrically through the original
SE. A CLI sketch is shown after the UI procedure below.

To create a UDP Fast Path network profile:

Procedure

1 In the New TCP/UDP Profile: screen, enter the Name of the network profile.

2 Select UDP Fast Path as the Type.

3 Enter the Direct Server Return details, if required.

Note Configuring the DSR settings is optional.

4 Click Enable DSR.

a Click the DSR Type (L2 or L3) to select the mode.

b Select IPinip as DSR Encapsulation Type.


5 Enter the UDP Fast Path Settings as shown below:

a Enabling NAT Client IP Address (SNAT) performs source NAT for all client UDP packets.

NAT Client IP Address (SNAT): By default, NSX Advanced Load Balancer translates the
source IP address of the client to an IP address of the Avi SE. This can be disabled for
connectionless protocols which do not require server response traffic to traverse back
through the same SE. For example, a syslog server will silently accept packets without
responding. Therefore, there is no need to ensure response packets route through the
same SE. When SNAT is disabled, it is recommended to ensure the session idle timeout is
kept to a lower value.

b Enable Per-Packet Load Balancing to consider every UDP packet as a new transaction.
When disabled, packets from the same client source IP and port are sent to the same
server.

Per-Packet Load Balancing: By default, NSX Advanced Load Balancer treats a stream of
UDP packets from the same client IP:Port as a session, making a single load balancing
decision and sending subsequent packets to the same destination server. For some
application protocols, each packet should be treated as a separate session that can be
uniquely load balanced to a different server. DNS is one example where enabling per-
packet load balancing causes NSX Advanced Load Balancer to treat each packet as an
individual session or request.

c Enter the Session Idle Timeout (between 2-3600 seconds). It is the amount of time a flow
needs to be idle before it is deleted.

Session Idle Timeout: Idle UDP flows terminate (time out) after a specified time period.
Subsequent UDP packets could be load balanced to a new server unless a persistence
profile is applied.

6 Click Save.
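
The equivalent profile can be sketched from the CLI as follows. This is a minimal sketch; the udp_fast_path_profile field names shown (for example, snat and per_pkt_loadbalance) are assumptions and should be verified on your Controller.

# Hypothetical sketch - field names are assumptions
configure networkprofile Example-UDP-Fast-Path
   profile
      type protocol_type_udp_fast_path
      udp_fast_path_profile
         snat
         per_pkt_loadbalance
         session_idle_timeout 10
         save
      save
   save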

UDP Proxy
The UDP proxy profile is currently supported only for SIP applications. This profile maintains
separate flows for the front-end and back-end transmissions.

To create a UDP Proxy network profile:

Procedure

1 In the New TCP/UDP Profile: screen, enter the Name of the network profile.

2 Select UDP Proxy as the Type.

3 Enter the Session Idle Timeout (between 2-3600 seconds). It is the amount of time a flow
needs to be idle before it is deleted.

4 Click Save.

For more information, see Configuring VMware NSX Advanced Load Balancer for SIP
Application.


Internet Content Adaptation Protocol


Internet Content Adaptation Protocol (ICAP) is a lightweight HTTP-like protocol used to transport
HTTP messages to third-party services. The ICAP server executes its transformation service on the
messages and sends responses back to the ICAP client, usually with modified messages.

For more information on ICAP, see RFC3507.

ICAP is supported for HTTP request processing through NSX Advanced Load Balancer. With
the implementation of the ICAP client functionality within the NSX Advanced Load Balancer, the
following use-cases are supported:

n Antivirus scanning - Using third party antivirus scan engine

n Content sanitization - Using third party content sanitization service

n Other request modification options using ICAP services, for example, URL filtering

Starting with NSX Advanced Load Balancer version 21.1.3, ICAPs are supported.

Configuring NSX Advanced Load Balancer for ICAP


NSX Advanced Load Balancer as an ICAP client supports the following:

n Preview functionality

n Streaming of payload

n Content rewrite

The following are the main configuration components for enabling ICAP for a virtual service on
NSX Advanced Load Balancer:

n Configuring an ICAP pool group

n Configuring an ICAP profile (attached to the virtual service)

n Configuring an HTTP Policy for the virtual service with the action set as Enable ICAP

n Associating the ICAP profile to the virtual service

n Configuring HTTP security policy for ICAP

1 Configuring ICAP Pool Group

Navigate to Applications > Pool Groups and create a pool group. The Fail Action field
under the Pool Group Failure Settings must be left empty.

2 Configuring ICAP Pool

Create an ICAP pool and add it to the pool group created above. Configure the default port as 1344. Multiple servers can be
added as pool members.

3 Configuring ICAP Profile

Refer to the following table for the various attributes used in the ICAP profile configuration:


Name Name of this profile. Example: ICAP-APPX

Cloud Defines which cloud object this profile is associated with. Example: Default-Cloud

Pool Group Pool group of all ICAP server pools. Example: ICAP-Pool-Group

Vendor Vendor-specific configuration, if a vendor is supported. Example: OPSWAT or Generic-ICAP

Service URL ICAP server service URL. Example (when using OPSWAT): /OMSScanReq-AV

Request Buffer Size Maximum buffer size for the request body. Default: 51200 (50 MB)

Enable ICAP Preview Enables the ICAP preview functionality, where the ICAP server can make decisions by examining only the preview-size payload. Default: Enabled (Boolean)

Preview Size Payload size for the ICAP preview. Default: 5000

Response Timeout When this threshold is reached, the request is handled as an error and the failure action is executed. Default: 60000 (60 seconds)

Slow Response Warning Threshold When this threshold is reached, the request causes a significant log entry, but is still served. Default: 10000 (10 seconds)

Actions

Failure Action Handling of an error with the ICAP server. If set to Fail Closed, a 503 response is sent to the client when an error occurs. Values: Fail Closed / Fail Open

Large Upload Failure Action Handling of a size-exceeded error. If set to Fail Closed, a 413 response is sent. Values: Fail Closed / Fail Open

Navigate to Virtual Service > Edit > ICAP Profile or Templates > Profiles > ICAP Profile to
create an ICAP profile.

4 Navigate to the Application > Virtual Service, select the required virtual service, and select
the ICAP profile (created in the previous step).

5 Creating HTTP Security Policy

Create a security policy to define the rules based on which the ICAP scanning should be
performed. Navigate to Application > Virtual Service, select the desired virtual service, and
click Edit. Select Policies > HTTP Security, and create a new rule with the following options:

n Select match criteria for the ICAP requests


n Select Enable ICAP as the action

Note The rule name configured in this step will appear in the logs, so it is recommended to
make it self-explanatory for ease of troubleshooting.

6 Save the virtual service configuration.

With these steps, the ICAP configuration for the virtual service is complete. Incoming requests
on the virtual service that match the rule or the match criteria of the HTTP security policy will
use ICAP.
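
The same configuration can be scripted against the REST API. The sketch below reuses the authenticated requests session shown in the earlier UDP profile example; the pool group and virtual service names are placeholders, and the object and field names (icapprofile, pool_group_ref, service_uri, icap_request_profile_refs, and the vendor and fail-action enums) should be verified against the API schema of your release. The HTTP security policy rule with the Enable ICAP action (step 5) is still configured separately.

# Assumes `session` and CONTROLLER are already authenticated as in the earlier sketch.

# Look up the ICAP pool group created in step 1 (placeholder name).
pool_group = session.get(
    f"{CONTROLLER}/api/poolgroup?name=ICAP-Pool-Group"
).json()["results"][0]

icap_profile = {
    "name": "ICAP-APPX",
    "pool_group_ref": pool_group["url"],
    "vendor": "ICAP_VENDOR_OPSWAT",            # assumed enum; use the generic vendor otherwise
    "service_uri": "/OMSScanReq-AV",           # OPSWAT service URL from the table above
    "enable_preview": True,
    "preview_size": 5000,
    "response_timeout": 60000,
    "slow_response_warning_threshold": 10000,
    "fail_action": "ICAP_FAIL_OPEN",           # assumed enum for the Fail Open failure action
}
profile = session.post(f"{CONTROLLER}/api/icapprofile", json=icap_profile).json()

# Reference the ICAP profile from the virtual service (placeholder VS name).
vs = session.get(f"{CONTROLLER}/api/virtualservice?name=vs-appx").json()["results"][0]
vs["icap_request_profile_refs"] = [profile["url"]]
session.put(f"{CONTROLLER}/api/virtualservice/{vs['uuid']}", json=vs)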

NSX Advanced Load Balancer supports the following ICAP servers (Third party AV-
Malware/CDR vendors):

n OPSWAT

n MetaDefender ICAP Server (with MetaDefender Core)

To set up an OPSWAT server for ICAP scanning, see OPSWAT documentation.

Limitations
The following are the limitations of ICAP support on NSX Advanced Load Balancer:

n ICAP is not supported for HTTP/2 virtual services.

n ICAP client does not work in the response context.

ICAP Support for NSX Defender


Starting with NSX Advanced Load Balancer release 21.1.1, ICAP support is available for the NSX
Defender server for preventing malicious file uploads. Compared with OPSWAT, there
are minor differences in how NSX Defender sends files back to NSX
Advanced Load Balancer.

This section covers the following:

n NSX Defender ICAP configuration

n NSX Advanced Load Balancer integration with NSX Defender

n Required visibility changes for NSX Defender reported information

Configuring NSX Defender for ICAP


Log in to the NSX Defender and navigate to Appliances > Admin > Configuration > Proxy.

Select the ENABLED option for the following on the UI:

n ICAP Server

n INLINE ANALYSIS

Under the Blocking pages section, enable the following:

n BLOCKED PAGE DETAILS


n X-LASTLINE HEADER

n LASTLINE LOGO

The following are the blocking types available on NSX Defender. For more information, see
NSX Defender documentation.

n PASSIVE - No blocking is attempted on this type of file, but any relevant content will be
analyzed.

n SENSOR-KNOWN - Block all artifacts known to be malicious by the Sensor (listed in its local
cache). This method offers the lowest levels of protection but ensures minimal lag.

n MANAGER-KNOWN - Block all artifacts known to be malicious by the NSX Defender Manager.
These data are listed in the Manager cache and shared across all managed appliances.

n FULL - This mode allows the proxy to stall an ICAP request for as long as necessary to
provide a verdict on the file, within the limits set by the ICAP timeout. Depending on the client
implementation, this can cause the transaction to appear as unresponsive for a long time (in
the order of minutes in some cases).

This blocking mode is particularly suitable for the integration with third-party proxies that
implement mechanisms to improve the user experience. Such mechanisms can include data
trickling or “patience pages”, providing feedback to the user.

n FULL WITH FEEDBACK - This mode will generate “patience pages” that provide feedback to
the user on the analysis progress. These mechanisms have been tested exclusively with the
squid proxy. They can lead to unwanted side-effects when using third-party proxies, which
can implement caching mechanisms that disrupt the NSX Defender operation. Such third party
proxies often implement their own mechanisms to improve user experience and therefore can
perform better with the Full blocking mode.

Configuring NSX Advanced Load Balancer for NSX Defender


The following ICAP server-specific configuration options are required to enable ICAP scanning
using NSX Defender:

n Service URI - This needs to be set to /lastline to use the NSX Defender service.

n ICAP Pool - ICAP pool needs to point to NSX Defender ICAP server:port.

n Status URL - Only applicable to NSX Defender and has a default value of https://
user.lastline.com/portal#/analyst/task/$uuid/overview.

All remaining configuration options are generic and not tied to any particular ICAP server.
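
As a sketch of how these options map onto the ICAP profile object used in the earlier example, only the vendor, the service URI, and the status URL change; the vendor enum and the field carrying the status URL are assumptions to be verified against your API schema.

# Delta applied to the generic `icap_profile` dictionary from the earlier sketch.
# ICAP_VENDOR_LASTLINE and nsx_defender_config/status_url are assumed names.
icap_profile.update({
    "vendor": "ICAP_VENDOR_LASTLINE",
    "service_uri": "/lastline",
    "nsx_defender_config": {
        "status_url": "https://user.lastline.com/portal#/analyst/task/$uuid/overview",
    },
})
session.post(f"{CONTROLLER}/api/icapprofile", json=icap_profile)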


Visibility Changes for NSX Defender Reported Information


X-Lastline HTTP Headers: Pages analyzed by the ICAP instance may contain additional information
on the analysis status in additional HTTP headers. The presence of these headers can be
disabled from the ICAP configuration.

n X-Lastline-Status - Provides information on the state of the object at the time of analysis. The
following values are possible:

n new - The specific file hash has not been recently analyzed by NSX Defender and a score is
not currently available.

n known - The specific file is known, and a score is associated with it.

n blacklist - The contacted remote endpoint has a low reputation.

n timeout - The process reached its timeout while waiting for the analysis of the file.

n error - An error is preventing the analysis of the file.

n X-Lastline-Score — The score currently associated with the file, if known, is expressed as a
value between 0 and 100.

n X-Lastline-Task — The NSX Defender task UUID associated with the analysis of the file. It is
possible to use this UUID to access the analysis details from the NSX Defender Portal/Manager
Web UI. The following is the URL to access information about any upload using the UUID:

https://user.lastline.com/portal#/analyst/task/$uuid/overview

ICAP Response Header


NSX Defender can also send the following ICAP headers as part of the ICAP response as per the
ICAP extensions draft.

n X-Infection-Found: Type=0;Resolution=1;Threat=LastlineArtifact(score=XX;md5=;uuid=)

n X-Virus-ID: LastlineArtifact(score=100;md5=;task_uuid=)

Logs and Troubleshooting


This section discusses the various logs and troubleshooting options available for ICAP on NSX
Advanced Load Balancer. The NSX Advanced Load Balancer UI and CLI can be used to check logs
and error messages for analytics and troubleshooting.

Logs for requests that are handled by the ICAP server have an icap_log section populated.

If the ICAP server blocks or modifies a request, the consequent log entry is significant. The
following example shows details of the available logs on NSX Advanced Load Balancer. As shown
under the Response Information, the overall request is blocked, and a 403 response code is sent
back to the client.


n The following log shows an ICAP scan that detects an infection (JSON log file):

"icap_log": {
"action": "ICAP_BLOCKED",
"request_logs": [
{
"icap_response_code": 200,
"icap_method": "ICAP_METHOD_REQMOD",
"http_response_code": 403,
"http_method": "HTTP_METHOD_POST",
"icap_absolute_uri": "icap://100.64.3.15:1344/OMSScanReq-AV ",
"complete_body_sent": true,
"pool_name": {
"val": "ICAP-POOL-GROUP",
"crc32": 1799851903
},
"pool_uuid": "poolgroup-c7dd3b93-60c1-4190-b6d6-26c22d55dc30",
"latency": "1275",
"icap_headers_sent_to_server": "Host: 100.64.3.15:1344\r\nConnection:
close\r\nPreview: 653\r\nAllow: 204\r\nEncapsulated: req-hdr=0, req-body=661\r\n",
"icap_headers_received_from_server": "Date: Thu, 19 Nov 2020 13:55:00
G11T\r\nServer: Metadefender Core V4\r\nISTag: \"001605794100\"\r\nX-ICAP-Profile:
File process\r\nX-Response-Info: Blocked\r\nX-Response-Desc: Infected\r\nX-Blocked-Reason:
Infected\r\nX-Infection-Found: Type=0",
"action": "ICAP_BLOCKED",
"reason": "Infected",
"threat_id": "EICAR-Test-File (not a virus)"
}]
},

n The following is the log entry when the ICAP server modifies the ICAP request:


n The following log shows that the ICAP scan is performed successfully. The action field for the
icap_log shows the value ICAP_PASSED.

{"icap_log":
{"action": "ICAP_PASSED", "request_logs":
[{
"icap_response_code": 204,
"icap_method": "ICAP_METHOD_REQMOD",
"http_method": "HTTP_METHOD_POST",
"icap_absolute_uri":
"icap://100.64.3.15:1344/OMSScanReq-AV ",
"complete_body_sent": true,
"pool_name": {"val": "ICAP-POOL-GROUP", "crc32": 1799851903},
"pool_uuid": "poolgroup-c7dd3b93-60c1-4190-b6d6-26c22d55dc30",
"latency": "456",
"icap_headers_sent_to_server": "Host: 100.64.3.15:1344\r\nConnection:
close\r\nPreview: 0\r\nAllow: 204\r\nEncapsulated: req-hdr=0, null-body=661\r\n",
"icap_headers_received_from_server": "Date: Wed, 18 Nov 2020 12:54:06
G11T\r\nServer: Metadefender Core V4\r\nISTag: \"000000000096\"\r\nX-Response-Info:
Allowed\r\nEncapsulated: null-body=0\r\n", "action": "ICAP_PASSED"}]}

n The log entries will show the action for icap_log as ICAP_DISABLED if the ICAP feature is not
enabled.

"icap_log": {"action": "ICAP_DISABLED"}

Log Analytics
When ICAP is enabled, the log analytics on NSX Advanced Load Balancer provides an additional
overview. All data items are clickable and allow the quick addition of filters for a detailed log view.

Troubleshooting
ICAP Server Connection Failed: The following example shows a log error message for a failed
ICAP server connection. The ICAP Error is logged against the Significance field. To solve this
issue, check the direct connectivity from the SEs to the ICAP servers.


ICAP Server Error: The following example shows an ICAP request that is blocked. A misconfiguration of
the ICAP server will show the action for the ICAP log as ICAP_BLOCKED. The reason for the action,
No security rule matched, is available in the ICAP header.

"icap_log":
{"action": "ICAP_BLOCKED",
"request_logs":
[{
"icap_response_code": 200,
"icap_method": "ICAP_METHOD_REQMOD",
"http_response_code": 403,
"http_method": "HTTP_METHOD_POST",
"icap_absolute_uri": "icap://100.64.3.15:1344/OMSScanReq-AV ",
"complete_body_sent": true, "pool_name": {"val": "ICAP-POOL-GROUP", "crc32":
1799851903}, "pool_uuid": "poolgroup-c7dd3b93-60c1-4190-b6d6-26c22d55dc30", "latency": "17",
"icap_headers_sent_to_server": "Host: 100.64.3.15:1344\r\nConnection:
close\r\nPreview: 0\r\nAllow: 204\r\nEncapsulated: req-hdr=0, null-body=661\r\n",
"icap_headers_received_from_server": "Date: Thu, 19 Nov 2020 13:25:15 G11T\r\nServer:
Metadefender Core V4\r\nISTag: \"001605792300\"\r\nX-Response-Info: Blocked\r\nX-Response-
Desc: No security rule matched\r\nEncapsulated: res-hdr=0, res-body=91\r\n", "action":
"ICAP_BLOCKED"}]}

To solve this issue, check the configuration of the ICAP server used for the deployment.

ICAPs
Starting with NSX Advanced Load Balancer 21.1.3, ICAPs (secure ICAP) is supported. ICAP traffic can now be
encrypted using SSL/TLS.

The following are the configuration components for enabling ICAPs:

n To configure ICAPs on NSX Defender, enable Secure ICAP in Proxy configurations as shown
below:


n To configure ICAPs on OPSWAT, see Configuring TLS.

n In NSX Advanced Load Balancer, when configuring a pool for ICAPs, ensure SSL is enabled
on the pool that is referenced by the ICAP profile (the pool containing the IPs of the ICAP servers), and configure the
default port as 11344, as shown in the sketch after this list.
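
A minimal sketch of such a pool created through the REST API is shown below, reusing the authenticated session from the earlier examples; the pool name, SSL profile reference, and server address are placeholders, and field names should be verified against your release.

# ICAP pool for ICAPs: SSL enabled towards the ICAP servers, default port 11344.
icaps_pool = {
    "name": "ICAPS-Pool",
    "default_server_port": 11344,
    "ssl_profile_ref": "/api/sslprofile?name=System-Standard",  # placeholder SSL profile
    "servers": [{"ip": {"addr": "100.64.3.15", "type": "V4"}}],
}
session.post(f"{CONTROLLER}/api/pool", json=icaps_pool)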

Starting with NSX Advanced Load Balancer version 21.1.3, ICAP supports HTTP/2 traffic to the
virtual service. If the virtual service has HTTP/2 enabled for any port and ICAP is configured, the
HTTP/2 traffic will also be subjected to the ICAP server.

Server Pools
This section contains the following topics:

n Pools Page

n Pool Details Page

n Pool Analytics Page

n Pool Logs Page

n Pool Health Page

n Pool Servers Page

n Pool Events Page

n Pool Alerts Page

Pools maintain the list of servers assigned to them and perform health monitoring, load balancing,
persistence, and functions that involve NSX Advanced Load Balancer-to-server interaction. A
typical virtual service will point to one pool; however, more advanced configurations may have a
virtual service content switching across multiple pools via HTTP Request Policies or DataScripts. A
pool may be used or referenced by only one virtual service at a time.


[Figure: A Service Engine hosting a virtual service (IP:port listener, network profile, client application profile, persistence profile) that references a pool containing the server list, load balancing algorithm, health monitoring, and servers.]

Creating a virtual service using the basic method automatically creates a new pool for that virtual
service, using the name of the virtual service with a -pool appended. When creating a virtual
service via the advanced mode, an existing, unused pool may be specified, or a new pool may be
created.

Pools Page
Navigate to Applications > Pools to open the pools page. This page displays a high-level overview
of configured pools.

You can create a new pool by clicking CREATE POOL, or edit the pool by clicking the pencil icon.

The following information is displayed for each pool. The columns shown may be modified using the
sprocket icon in the top right of the table:

Field Description

Name Lists the name of each pool. Clicking the name opens the
Analytics tab of the Pool Details page.

Health Provides both a number from 1-100 and a color-coded


status to provide quick information about the health of
each pool. This will be gray if the pool is unused, such
as not associated with a virtual service or associated with
a VS that can not or has not been placed on a Service
Engine.
n Hovering the cursor over the health score opens the
pool’s Health Score popup.
n Clicking the View Insights link at the bottom of the
pool’s Health Score popup opens the health Insights
tab of the Pool Detail page.
n Clicking elsewhere within the pool’s Health Score
popup opens the Analytics tab of the Pool Details
page.


Servers Displays the number of servers in the pool that are up


out of the total number of servers assigned to the pool.
For instance, 2/3 indicates that two of the three servers
in the pool are successfully passing health checks and are
considered up.

Virtual Service The VS the pool is assigned to. Clicking a name in this
column opens the VS Analytics tab of the Virtual Service
Details page. If no virtual service is listed, this pool is
considered unused.

Cloud It displays the relevant cloud.

RPS The number of requests per second served by the pool.

Open Conns The number of open connections for the respective pool.

Throughput Thumbnail chart of the throughput in Mbps for each pool


for the time frame selected.
n Hovering the cursor over this graph shows the
throughput at the selected time.
n Clicking a graph opens the Analytics tab of the pool’s
Details page.

Pool Details Page


Clicking into a pool brings up the Details pages, which provide deeper views into the current pool.

This page contains the following sub-pages:

n Analytics

n Logs

n Health

n Servers

n Events

n Alerts

Pool Analytics Page


The pool’s Analytics tab presents information about various pool performance metrics. The data
shown is filtered by the time period selected.


Refer to the following for detailed information about this tab:

n End-to-End Timing

n Metric Tiles

n Chart Pane

n Overlays Pane

n Anomalies

n Alerts

n Config Events

n System Events

Pool End-to-End Timing


The End-to-End Timing pane at the top of the Analytics tab of the Pool Details page provides
a high-level overview of the quality of the end-user experience and where any slowdowns may
be occurring. The chart breaks down the time required to complete a single transaction, such an
HTTP request.

It may be helpful to compare the end-to-end time against other metrics, such as throughput,
to see how traffic increases impact the ability of the application to respond. For instance, if
new connections double but the end-to-end time quadruples, you may need to consider adding
additional servers.

From left to right, this pane displays the following timing information:


Field Description

Server RTT This is Service Engine to server round trip latency. An


abnormally high server RTT may indicate either that the
network is saturated or more likely that a server’s TCP
stack is overwhelmed and cannot quickly establish new
connections.

App Response The time the servers take to respond. This includes the
time the server took to generate content, potentially
fetch back-end database queries or remote calls to other
applications, and begin transferring the response back to
NSX Advanced Load Balancer. This time is calculated by
subtracting the Server RTT from the time of the first byte
of a response from the server. If the application consists of
multiple tiers (such as web, applications, and database),
then the App Response represents the combined time
before the server in the pool began responding. This
metric is only available for a layer 7 virtual service.

Data Transfer Represents the average time required for the server to
transmit the requested file. This is calculated by measuring
from the time the Service Engine received the first byte of
the server response until the client has received the last
byte, which is measured as when the last byte was sent
from the Service Engine plus one half of a client round
trip time. This number may vary greatly depending on the
size of objects requested and the latency of the server
network. The larger the file, the more TCP round trip times
are required due to ACKs, which are directly impacted by
the client RTT and server RTT. This metric is only used for
a Layer 7 virtual service.

Total Time Total time from when a client sent a request until they
receive the response. This is the most important end-to-
end timing number to watch, because it is the sum of the
other four metrics. As long as it is consistently low, the
application is probably successfully serving traffic.

Pool Metrics
The sidebar metrics tiles contain the following metrics for the pool. Clicking any metric tile will
change the main chart pane to show the chosen metric.


Field Description

End to End Timing Shows the total time from the pool’s End to End Timing
graph. To see the complete end-to-end timing, including
the client latency, refer to Analytics tab of the Virtual
Service Details page, which includes the client to Service
Engine metric.

Open Connections The number of open (existing) connections during the


selected time period.

New Connections The number of client connections that were completed or


closed over the selected time period. Refer to this article
for an explanation of new versus closed connections per
second.

Throughput Total bandwidth passing between the virtual service and


the servers assigned to the pool. This throughput number
may be different than the virtual service throughput, which
measures throughput between the client and the virtual
service. Many features may affect these numbers between
the client and server side of NSX Advanced Load Balancer,
such as caching, compression, SSL, and TCP multiplexing.
Hovering your mouse cursor over this graph displays the
throughput in Mbps for the selected time period.


Requests The number of HTTP requests sent to the servers


assigned to the pool. This metric also shows errors
sent to servers or returned by servers. Any client
requests that received an error generated by NSX
Advanced Load Balancer as a response (such as
a 500 when no servers are available) are not
forwarded to the pool and will not be tracked in this

view.

Servers Displays the number of servers in the pool and their


health. The X-axis represents the number of HTTP
requests or connections to the server, while the Y-axis
represents the health score of the server. The chart
enables you to view servers in relation to their peers within
the pool, thus helping to spot outliers. Within the chart
pane, click and drag the mouse over server dots to select
and display a table of the highlighted servers below the
Chart pane. The table provides more details about these
servers, such as hostname, IP address, health, new
connections or requests, health score, and the server’s
static load balanced ratio. Clicking on the name of a server
will jump to the pool’s Server Insight page, which shows
additional health and resource status.

Pool Chart Pane


The main chart pane in the middle of the Analytics tab displays a detailed historical chart of the
selected metric tile for the current pool.

n Hovering the mouse over any point in the chart will display the results for that selected time in
a popup window.

n Clicking within the chart will freeze the popup at that point in time. This may be useful when
the chart is scrolling as the display updates over time.

n Clicking again will unfreeze the highlighted point in time.


Many charts contain radio buttons in the top right that allow customization of data that should be
included or excluded from the chart. For instance, if the End to End Timing chart is heavily skewed
by one very large metric, then deselecting that metric by clearing the appropriate radio button
will re-factor the chart based on the remaining metrics shown. This may change the value of the
vertical Y-axis.

Some charts also contain overlay items, which will appear as color-coded icons along the bottom
of the chart.

Pool Overlays Pane


The overlays pane is used to highlight important events within the timeline of the chart pane. This
feature helps correlate anomalies, alerts, configuration changes, or system events with changes in
traffic patterns.

Within the overlays pane:

n Each overlay type displays the number of entries for the selected time period.

n Clicking an overlay button toggles that overlay’s icons in the chart pane. The button lists the
number of instances (if any) of that event type within the selected time period.

n Selecting an overlay button displays the icon for the selected event type along the bottom of
the chart pane. Multiple overlay icon types may overlap. Clicking the overlay type’s icon in the
chart pane will bring up additional data below the overlay Items bar. The following overlay
types are available:

n Anomalies — Display anomalous traffic events, such as a spike in server response time,
along with corresponding metrics collected during that time period.

n Alerts — Display alerts, which are filtered system-level events that have been deemed
important enough to notify an administrator.

n Config Events — Display configuration events, which track configuration changes made to
NSX Advanced Load Balancer by either an administrator or an automated process.

n System Events — Display system events, which are raw data points or metrics of interest.
System Events can be noisy, and are best used as the basis of alerts which filter and
classify raw events by severity.


Pool Anomalies Overlay


The anomalies overlay displays periods during which traffic behavior was considered abnormal
based on recent historical moving averages. Changing the time interval will provide greater
granularity and potentially show more anomalies.

Clicking Anomalies Overlay button displays yellow anomaly icons in the chart
pane. Selecting one of these icons within the chart pane brings up additional information in a
table at the bottom of the page. During times of anomalous traffic, NSX Advanced Load Balancer
records any metrics that have deviated from the norm, which may provide hints as to the root
cause of the anomaly.

An anomaly is defined as a metric that has a deviation of 4 sigma or greater across the moving
average of the chart.

Anomalies are not recorded or displayed while viewing with the real-time display period.

Field Description

Timestamp Date and time when the anomaly was detected. This may
either span the full duration of the anomaly, or merely be
near the same time window.

Type The specific metric deviating from the norm during the
anomaly period. To be included, the metric deviation must
be greater than 4 sigma. Numerous types of metrics, such
as CPU utilization, bandwidth, or disk I/O may trigger
anomalous events.

Entity Name of the specific object that is reporting this metric.

Entity Type Type of entity that caused the anomaly. This may be one of
the following:
n Virtual Machine (server): these metrics require NSX
Advanced Load Balancer to be configured for either
read or write access to the virtualization orchestrator
such as vCenter or OpenStack. In the example shown
above, CPU utilization of the two servers was learned
by querying vCenter.
n Virtual service
n Service Engine

Time Series Thumbnail historical graph for the selected metric,


including the most current value for the metric which
will be data on the far right. Moving the mouse over
the chart pane will show the value of the metric for the
selected time. Use this to compare the normal, current,
and anomaly time periods.

Deviation Change or deviation from the moving average, either


higher or lower. The time window for the moving average
depends on the time series selected for the Analytics tab.


Pool Alerts Overlay


The alerts overlay displays the results of any events that meet the filtering criteria defined in the
alerts tab. Alerts notify administrators about important information or changes to a site that may
require immediate attention.

Alerts may be transitory, meaning that they may expire after a defined period of time. For
instance, NSX Advanced Load Balancer may generate an alert if a server is down and then allow
that alert to expire after a specified time period once the server comes back online. The original
event remains available for later troubleshooting purposes.

Clicking the alerts icon in the overlay items bar displays any red alerts icons in
the chart pane. Selecting one of these chart alerts will bring up additional information below the
overlay Items bar, which will show the following information:

Field Description

Timestamp Date and time when the alert occurred.

Resource Name Name of the object that is reporting the alert.

Level Severity of the alert. You can use the priority level to
determine whether additional notifications should occur,
such as sending an email to administrators or sending
a log to Syslog servers. The level may be one of the
following:
n High — Red
n Medium — Yellow
n Low — Blue

Summary Brief description of the event.

Pool Config Events overlay


The config events overlay displays configuration events, such as changing the NSX Advanced
Load Balancer configuration by adding, deleting, or modifying a pool, virtual service, or Service
Engine, or an object related to the object being inspected. If traffic dropped off at precisely
10:00am, and at that time an administrator made a change to the virtual service's security settings,
there’s a good chance the cause of the change in traffic was due to the (mis)configuration.


Clicking Config Events icon in the Overlay Items bar displays any blue config
event icons in the chart pane. Selecting one of these chart alerts will bring up additional
information below the Overlay Items bar, which will show the following information:

Field Description

Timestamp Date and time when the configuration change occurred.

Resource Type This event type will always be scoped to configuration


event types.

Resource Name Name of the object that has been modified.

Event Code There are three event codes:


n CONFIG_CREATE

n CONFIG_UPDATE

n CONFIG_DELETE

User The user who made the configuration change.

Description Brief description of the event.

Expand/Contract Clicking the plus (+) or minus sign (-) for a configuration
event either expands or contracts a sub-table showing
more detail about the event. When expanded, this shows a
difference comparison of the previous configuration versus
the new configuration, as follows:
n Additions to the configuration, such as adding a health
monitor, will be highlighted in green in the new
configuration.
n Removing a setting will be highlighted in red in the
previous configuration.
n Changing an existing setting will be highlighted in
yellow in both the previous and new configurations.

Pool System Events Overlay


This overlay displays system events relevant to the current object, such as a server changing status
from up to down or the health score of a virtual service changing from 50 to 100.

Clicking the system events icon in the overlay items bar displays any purple
system event icons in the Chart Pane. Select a system event icon in the chart pane to bring up
more information below the overlay items bar.


Field Description

Timestamp Date and time when the system event occurred.

Event Type This will always be system

Resource Name Name of the object that triggered the event.

Event Code High-level definition of the event, such as


VS_Health_Change or VS_Up.

Description Brief description of the system event.

Expand/Contract Clicking the plus (+) or minus sign (-) for a system event
expands or contracts that system event to show more
information.

Pool Logs Page


Client logs viewed from within a pool are identical to the logs shown within a virtual service, except
they are filtered to only show log data specific to the pool. For instance, information such as End
to End Timing is only shown from the Service Engine to the servers, rather than from the clients to
the servers. Viewing logs within a pool may be useful when a virtual service is performing content
switching across multiple pools. It is still possible within the virtual service logs page to add a
filter for a specific pool, which would then provide complete End to End Timing for connections or
requests sent to the specified pool.

For the complete descriptions of logs, refer to the VS logs page for more details.

Pool Health Page


The health tab presents a detailed breakdown of health score information for the pool.

The health score of a pool is composed of the following scores:


Field Description

Performance Score Performance score (1-100) for the selected item. A score of
100 is ideal, meaning clients are not receiving errors and
connections or requests are quickly returned.

Resources Score A penalty assessed because of resource availability issues, which is subtracted from the performance score. A penalty score of 0 is ideal, meaning there are no obvious resource constraints on NSX Advanced Load Balancer or the servers.

Anomaly Score A penalty assessed because of anomalous events, which is subtracted from the performance score. An ideal score is 0, which means NSX Advanced Load Balancer has not seen recent anomalous traffic patterns that may imply future risk to the site.

Health Score The final health score for the selected item equals the performance score minus the resources and anomaly penalty scores.
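
For example, a pool with a performance score of 95, a resources penalty of 10, and an anomaly penalty of 5 has a health score of 95 - 10 - 5 = 80.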

The sidebar tiles show the scores of each of the three subcomponents of the health score, plus the
total score. To determine why a pool may have a low health score, select one of the first three tiles
that are showing a sub-par score.

This will bring up additional sub-metrics which feed into the top-level metric/tile selected. Hover
the mouse over a time period in the main chart to see the description of the score degradation.
Some tiles may have additional information shown in the main chart section that requires scrolling
down to view.

Pool Servers Page


Information for each server within a pool is available on the Server Details page. This page offers
views into the correlation between server resources, application traffic, and response times.

Server Page

The Server Page may be accessed by clicking on the server’s name from either the Pool > Servers
page or the Pool > Analytics Servers tile. When viewing the Server Details page, the server
shown is within the context of the pool it was selected within. In other words, if the server (IP: Port) is
a member of two or more pools, the stats and health monitors shown are only for the server within
the context of the viewed pool.

Not all metrics within the Server Page are available in all environments. For instance, servers
that are not virtualized or hooked into a hypervisor are not able to have their physical resources
displayed.


The statistics displayed can be changed by switching between Average Values, Peak Values, and
Current Values. To see the highest CPU usage over the past day, change the time to 24 hours and
the Value to Peak. This will show the highest stats recorded during the past day.

Field Description

CPU Stats The CPU Stats box shows the CPU usage for this server,
the average during this time period across all servers in
the pool, and the hypervisor host.

Memory Stats The memory Stats box shows the Memory usage for this
server, the average during this time period across all
servers in the pool, and the hypervisor host.

Health Monitor This table shows the name of any health monitors
configured for the pool. The Status column shows the
most current up or down health of the server. The Success
column shows the percentage of health monitors that
passed or failed during the display time frame. Clicking the
plus will expand the table to show more info for a down
server. Refer to Why a Server Can Be Marked Down for
more details.

Main Panel The large panel shows the highlighted metric, similar
to the Virtual Service Details and Pool Details pages.
Overlay Items shows anomalies, alerts, configuration
events, and system events that are related to this server
within the pool.


Pool Tile Bar The pool in the top right bar shows the health of the pool.
This can also be used to jump back up to the Pool Page.
Under the pool name is a pull-down menu that enables
quick access to jump to the other servers within the pool.

Metrics Tile Bar The metrics options will vary depending on the hypervisor
NSX Advanced Load Balancer is plugged into. For
non-virtualized servers, the metrics are limited to non-
resource metrics, such as end-to-end timing, throughput,
open connections, new connections, and requests. Other
metrics that may be shown include CPU, memory, and
virtual disk throughput.

Pool Events Page


The events tab presents system-generated events over the time period selected for the pool.
System events apply to the context in which you are viewing them. For instance, when viewing
events for a pool, only events that are relevant to that pool are displayed.

The top of this tab displays the following items:

Field Description

Search The search field enables you to filter the events using
whole words contained within the individual events.

Refresh Clicking refresh updates the events displayed for the


currently selected time.

Number The total number of events being displayed. The date/time


range of those events appears beneath the search field on
the left.


Clear Selected If filters have been added to the Search field, clicking the
Clear Selected (X) icon on the right side of the search bar
will remove those filters. Each active search filter will also
contain an X that you can click to remove the specific filter.

Histogram The histogram shows the number of events over the


period of time selected. The X-axis is time, while the Y-axis
is the number of events during that bar’s period of time.
n Hovering the cursor over a Histogram bar displays the
number of entries represented by that bar, or period of
time.
n Click and drag inside the histogram to refine the date/
time period which further filters the events shown.
When drilling in on the time in the histogram, zoom
to the selected link appears above the histogram. This
expands the drilled in time to expand to the width
of the histogram and also changes the displaying pull-
down menu to custom. To return to the previously
selected time period, use the displaying pull-down
menu.

The table at the bottom of the Events tab displays the events that matched the current time
window and any potential filters. The following information appears for each event:

Field Description

Timestamp Date and time the event occurred. Highlighting a section


of the histogram provides further filtering of events within
a smaller time window.

Event Type This may be one of the following:


n System — System events are generated by NSX
Advanced Load Balancer to indicate a potential issue
or create an informational record, such as VS_Down.
n Configuration — Configuration events track changes to
the NSX Advanced Load Balancer configuration. These
changes may be made by an administrator (through
the CLI, API, or GUI), or by automated policies.

Resource Name Name of the object related to the event, such as the pool,
virtual service, Service Engine, or Controller.

Event Code A short event definition, such as Config_Action or


Server_Down.

Description A complete event definition. For configuration events, the


description will also show the username that made the
change.

Expand/Contract Clicking the plus (+) or minus sign (-) for an event log
either expands or contracts that event log. Clicking the +
and – icons in the table header expands and collapses all
entries in this tab.


For configuration events, expanding the event displays a difference comparison between the
previous and new configurations.

n New fields will appear highlighted in green in the new configuration.

n Removed fields will appear highlighted in red.

n Changed fields will appear highlighted in yellow.

Pool Alerts Page


The alerts tab displays user-specified events for the selected time period. You can configure alert
actions and proactive notifications via Syslog or email in the Notifications tab of the Administration
page. Alerts act as filters that provide notification for prioritized events or combinations of events
through various mechanisms. NSX Advanced Load Balancer includes a number of default alerts
based on events deemed to be universally important.

The top of this tab shows the following items:

Field Description

Search The search field enables you to filter the alerts using whole
words contained within the individual alerts.

Refresh Clicking refresh updates the alerts displayed for the


currently-selected time.

Number The total number of alerts being displayed. The date/time


range of those alerts appear beneath the search field on
the left.

Dismiss Select one or more alerts from the table below then click
dismiss to remove the alert from the list.

Alerts are transitory, which means they will eventually and automatically expire. They are intended to
notify an administrator of an issue, rather than being the definitive record for issues. Alerts are
based on events, and the parent event will still be in the Events record.

The table at the bottom of the Alerts tab displays the following alert details:

Field Description

Timestamp Date and time when the alert was triggered. Changing
the time interval using the display pull-down menu may
potentially show more alerts.

Resource Name Name of the object that is the subject of the alert, such as
a Server or virtual service.


Level Severity level of the alert, which can be high, medium,


or low. Specific notifications can be set up for the
different levels of alerts via the Administration page’s
Alerts Overlay.

Summary Summarized description of the alert.

Action Click the appropriate button to act on the alert.

Expand/Contract Clicking the plus (+) or minus sign (-) for an event log
either expands or contracts that event log to display more
information. Clicking the + and – icon in the table header
expands and collapses all entries in this tab.

Create Pool
The Create Pool popup and the Edit Pool popup share the same interface that consists of the
following tabs:

n Settings

n Servers

n Advanced

n Review

Step 1: Settings
The Create/Edit Pool > Settings tab contains the basic settings for the pool. The exact options
shown may vary depending on the types of clouds configured in NSX Advanced Load Balancer.
For instance, servers in VMware may show an option to “Select Servers by Network”.


To add or edit pool settings:

Field Description

Name Provide a unique name for the pool.

Note The special character “$” is not allowed in the


Name field.

Default Server Port New connections to servers will use this destination
service port. The default port is 80, unless it is either
inherited from the virtual service (if the pool was created
during the same workflow), or the port was manually
assigned. The default server port setting may be changed
on a per-server basis by editing the Service Port field for
individual servers in the Step 2: Servers tab.

Graceful Disable Timeout A time value ranging from 1 to 7,200 minutes used to
gracefully disable a back-end server. The virtual service
will wait for the specified time before terminating existing
connections to disabled servers. Two values are special:
0 causes immediate termination and -1 (negative one,
standing for “infinite”) never terminates.


Load Balancing Select a load-balancing algorithm from the pull-down


menu. This choice determines the method and
prioritization for distributing connections or HTTP requests
across available servers. The available algorithms are:
n Least Connections — New connections are sent to
the server that currently has the least number of
outstanding concurrent connections. This is the default
algorithm when creating a new pool and is best for
general-purpose servers and protocols. New servers
with zero connections are introduced gracefully over a
short period of time via the Connection Ramp setting
in the Step 3: Advanced tab, which slowly brings the
new server up to the connection levels of other servers
within the pool.

Note A server that is having issues, such as


rejecting all new connections, will have a concurrent
connection count of zero and be the most eligible
to receive all new connections that will fail. Use the
Least Connections algorithm in conjunction with the
Passive Health Monitor which recognizes and adjusts
for scenarios like this.
n Round Robin — New connections are sent to the next
eligible server in the pool in sequential order. This
static algorithm is best for basic load testing, but is
not ideal for production traffic because it does not take
the varying speeds or periodic hiccups of individual
servers into account.
n Least Load — New connections are sent to the server
with the lightest load, regardless of the number of
connections that server has. For instance, if an HTTP
request that will require a 200kB response is sent
to a server and a second request that will generate
a 1kB response is sent to a server, this algorithm
will estimate that —based on previous requests— the
server sending the 1kB response is more available than
the one still streaming the 200kB of data. The idea is
to ensure that a small and fast request does not get
queued behind a very long request. This algorithm is
HTTP specific. For non-HTTP traffic, the algorithm will
default to the Least Connections algorithm.
n Fewest Servers — Instead of attempting to distribute
all connections or requests across all servers, NSX
Advanced Load Balancer will determine the fewest
number of servers required to satisfy the current
client load. Excess servers will no longer receive
traffic and may be either de-provisioned or temporarily
powered down. This algorithm monitors server
capacity by adjusting the load and monitoring the
server’s corresponding changes in response latency.
Connections are sent to the first server in the pool


until it is deemed at capacity, with the next new


connections sent to the next available server down the
line. This algorithm is best for hosted environments
where virtual machines incur a cost.
n Consistent Hash — New connections are distributed
across the servers using a hash that is based on
a key specified in the field that appears below the
Load Balance field or (as of release 17.2.4) in a
custom string. This algorithm inherently combines load
balancing and persistence, which minimizes the need
to add a persistence method. This algorithm is best
for load balancing large numbers of cache servers
with dynamic content. It is ‘consistent’ because adding
or removing a server does not cause a complete
recalculation of the hash table. For the example of
cache servers, it will not force all caches to have to
re-cache all content. If a pool has nine servers, adding
a tenth server will cause the pre-existing servers to
send approximately 1/9 of their hits to the newly-
added server based on the outcome of the hash.
Hence persistence may still be valuable. The rest of
the server’s connections will not be disrupted. The
available hash keys in the pull-down menu are:
n Source IP Address of the client
n Source IP Address and Port of the client
n URI, which includes the host header and the path,
e.g., www.acme.com/index.htm.
n Callid — This field specifies the call ID field
within the SIP header. With this option, SIP
transactions with new call IDs are load balanced
using consistent hash, while existing call IDs are
retained on the previously chosen servers. The
state of existing call IDs are maintained for an
idle timeout period defined by the ‘transaction
timeout’ parameter in the application profile. The
state of existing call IDs are relevant for as long
as the underlying TCP/UDP transport state for
the SIP transaction remains the same. For more
information about SIP, refer to NSX Advanced
Load Balancer for SIP Applications.
n Custom String, which is provided by the user via
the avi.pool.chash DataScript function.
n Custom Header — Specify the HTTP header to use
in the Custom Header field, such as referer. This
field is case sensitive. If the field is blank or if the
header does not exist, the connection or request is
considered a miss, and will hash to a server.
n Fastest Response — New connections are sent to the
server that is currently providing the fastest response
to new connections or requests. This is measured as
time to first byte. In the End-to-End Timing chart,


this is reflected as Server RTT plus App Response


time. This option is best when the pool’s servers
contain varying capabilities or they are processing
short-lived connections. A server that is having issues,
such as a lost connection to the data store containing
images, will generally respond very quickly with HTTP
404 errors. It is best practice when using the fastest
response algorithm to also enable a passive health
monitor, which recognizes and adjusts for scenarios
like this by taking into account the quality of server
response, not just speed of response.

Note There are several other factors beyond
the load balancing algorithm that can affect connection
distribution, such as connection multiplexing, server
ratio, connection ramp, and server persistence.
n Fewest Tasks — Load is adaptively balanced, based
on server feedback. This algorithm is facilitated by an
external health monitor. It is configurable via the NSX
Advanced Load Balancer CLI and REST API, but is
not visible in the NSX Advanced Load Balancer UI.
For details, refer to the Fewest Tasks Load-Balancing
Algorithm article.
n Core Affinity — To Be Supplied.

Persistence By default, NSX Advanced Load Balancer will load balance


clients to a new server each time the client opens a new
connection to a virtual service. There is no guarantee that
the client will reconnect to the same server to which it was
previously connected. A persistence profile ensures that
subsequent connections from the same client will connect
to the same server. Persistence can be thought of as the
opposite of load balancing: a client’s first connection to
NSX Advanced Load Balancer is load balanced; thereafter,
that client and any connections made by it will be persisted
to the same server for the desired duration of time.
Persistent connections are critical for most servers that
maintain client session information locally. For instance,
many HTTP applications will keep a user’s information
in memory for 20 minutes, which enables the user
to continue their session by reconnecting to the same
server. As a best practice, HTTP virtual services requiring
persistence should use HTTP cookies, while general TCP
or UDP applications requiring persistence will use the
client’s source IP. For more information on persistence
types, refer to the [Persistence Profile] article.


AutoScale Policy

n Name — Name chosen for the policy.


n Instances — The minimum and maximum number of
instances that can be running at any given time. The
default minimum is zero. The maximum permitted is
400.
n Scale Out
n Alerts — The pool will be scaled out when
alerts are raised due to any of the selected alert
configurations. Multiple selections can be made, as
shown below.

n Cooldown Period — The time period (in seconds)


during which no new scale-out operations will
be triggered, to give time for previous scale-out
operations to complete.
n Adjustment Step — The maximum number of
server instances to simultaneously launch when
the system determines it is necessary to scale out.
The actual number of instances launched is chosen
such that the final total number of server instances
will be less than or equal to the specified maximum
for the pool.
n Scale In
n Alerts — The pool will be scaled in when alerts
are raised due to any of the selected alert
configurations. Multiple selections can be made, as
shown above.


n Cooldown Period — The time period (in seconds)


during which no new scale-in operations will
be triggered, to give time for previous scale-in
operations to complete.
n Adjustment Step — The maximum number of
server instances to simultaneously terminate when
the system determines it is necessary to scale
in. The actual number of instances terminated is
chosen such that the final total number of server
instances remaining will be greater than or equal to
the specified minimum for the pool.

AutoScale Launch Config If configured, then NSX Advanced Load Balancer will
trigger orchestration of pool-server creation and deletion.
This option is only supported for public cloud autoscale
groups and OpenStack.

Health Monitoring Incorporate a Health Monitor to verify the health of server


instances within the pool. There are two kinds of health
monitors:
n Passive Health Monitor — A passive health monitor
listens only to client-to-server communication. If
servers are replying with errors (such as 500 errors or TCP
connection errors), then the passive health monitor will
reduce the amount of connections or requests sent to
that server. The reduction percentage depends on the
number of servers available within the pool. When the
server responds satisfactorily to the throttled requests
directed to it, the passive health monitor will restore
the server to full traffic volume. You may use this
monitor in conjunction with any other health monitors.
Errors are defined in the analytics profile assigned to
the virtual service. Best practice is to ensure that a
passive health monitor is enabled in addition to any
synthetic check that may also be configured.
n Active Health Monitor — In addition to normal client-
to-server traffic, NSX Advanced Load Balancer can
generate synthetic connections or requests to servers
to ensure the integrity of the server’s health. Add one
or more health monitors to the pool by clicking on
the green + Add Active Monitor button and either
selecting a health monitor or clicking to create a new
one. You may disassociate a health monitor from the
pool by clicking the trash can icon to the right of the
monitor name.

Lookup Server by Name Enable server lookup by name.

Rewrite Host Header to Server Name Rewrite the incoming host header to the name of the
server to which the request is proxied. Enabling this
feature rewrites the host header of requests sent to all
servers in the pool.


SSL to Backend Servers Enables SSL encryption between the NSX Advanced Load
Balancer Service Engine and the back-end servers. This
is independent from the SSL option in the virtual service,
which enables SSL encryption from the client to the NSX
Advanced Load Balancer Service Engine.
n SSL Profile: Determines which SSL versions and
ciphers NSX Advanced Load Balancer will support
when negotiating SSL with the server.
n Server SSL Certificate Validation PKI Profile: This
option validates the certificate presented by the
server. When not enabled, the Service Engine
automatically accepts the certificate presented by the
server when sending health checks. Refer to the
PKI Profile section for additional help on certificate
validation.
n Service Engine Client Certificate: When establishing
an SSL connection with a server, either for normal
client-to-server communications or when executing a
health monitor, the Service Engine will present this
certificate to the server.

Enable real time metrics Checking this option enables real-time metrics for servers
and pools. Default is OFF.

Step 2: Servers
The Servers tab supports the addition/removal/disablement/enablement of servers and displays
the results of those actions.

Add Servers


The servers added to a pool may be specified in one of three ways:

1 IP address, IP range, or DNS name

2 IP group

3 Auto-scaling groups defined by public cloud ecosystems such as Amazon Web Services (AWS)
and Microsoft Azure.


Field Description

IP Address, Range, or DNS Name Add one or more servers to the pool using one or more
of the listed methods. The example below shows servers
created using multiple methods.
n Add by IP Address - Into the Server IP Address field
enter the IP address of a server you want to add.
The Add Server button will change from light grey to
green. You may also enter a range of IP addresses via
a dash, such as 10.0.0.1-10.0.0.20.
n Add by DNS Resolvable Name - Into the Server
IP Address field enter the FQDN of the server you
want to add. If the server successfully resolves, the
IP address will appear and the Add Server button will
change to green. Click the Add Server button to add
it to the pool server list. See Add Servers by DNS for
more information.
n Clicking Select Servers by Network opens a list of reachable networks. What appears will
resemble the example below:

n In this example, hovering the cursor over the network named 2a-private - 10.0.1.0/24
highlights it.
n Clicking on its row causes the servers on that
network to appear as below. You can filter the
search for servers, such as searching for “Demo”
then select all matching servers.

n Checking the boxes beside both servers results in


the below.


n Clicking the Add Servers button in the


above window completes the selection process.
Successful addition is reflected in the table, as
shown below.

IP Group Rather than add individual servers to a pool one at a time,


multiple servers may be added in one step by storing their
IP addresses in a comma-separated IP Group. This may be
useful if the same list is used elsewhere for IP allowlists,
DataScripts, or similar automation purposes. However, be
advised that many common pool features are unavailable
when using this method, such as manually disabling a
server, setting a specific service port, or setting a ratio.
The IP Group method for adding servers may not be used
with other methods.

Auto Scaling groups External environments such as AWS and Azure define and
manage autoscaling groups of their own.
n Clicking this option reveals one’s choices.

n One or more may be selected, as shown below. Once


chosen, notice how the list of candidates shrinks down
to just one, ScaleoutASG.

Servers


Field Description

Changing Server Status Adding servers to the pool populates the table within
the Servers tab. Use it to remove, enable, disable, or
gracefully disable servers. Changes to server status take
effect immediately when changes are saved. The table
below shows two servers have been enabled.

n Remove — Select one or more servers to remove from


the pool. This will immediately reset any existing client
connections for these servers and purge the server
from the pool’s list.
n Enable — Select one or more disabled servers, and
then reactivate them by clicking the Enable button.
Enabling a server makes that server immediately
available for load balancing, provided it passes its first
health check.
n Disable — Select one or more enabled servers to
disable. NSX Advanced Load Balancer immediately
marks a disabled server as unavailable for new
connections and resets any existing client connections.
A server will not receive health checks while it is in a
disabled state.

Editing Servers Servers added to the pool can be modified by editing


their IP Address, Port, or Ratio fields.
n Status - The status of a server may be Enabled or
Disabled.
n Server — Name of the server (or the IP address, if the
server was added manually).
n IP Address — Changing the IP address for an existing
server will reset any existing connections for the
server.
n Port - This optional field overrides the default service
port number for the pool by giving server a specific
port number that might differ from the other servers in
the pool.
n Ratio — This optional field creates an unequal
distribution of traffic to a server relative to its peers.
The ratio is used in conjunction with the Load
Balancing algorithm. For instance, if Server A has a


Ratio of two and Server B has a Ratio of one, then


Server A will receive two connections for every one
connection that is sent to Server B. The Ratio may be
any number between 1 and 20.

Note The Ratio is statically assigned to servers.


Dynamic load balancing algorithms work with Ratio
but may produce inexact results with Ratio, and are
not recommended for normal environments. The ratio
is most commonly used to send a small sampling of
traffic to a test server (such as one running a newer,
untested version of code).
n Network — Shows networks of the servers in the pool
if the Select Servers by Network option was used.
n Header Value — This special field is used by the
custom HTTP header persistence. Each server may be
statically allocated an identifier, such as s1, s2, etc. If
the selected client header exists, and the header value
is s1, this server will receive the connection or request.
n Rewrite Host Header — This is analogous to the pool-
level feature described earlier in this article but on a
more granular, server-level basis.

Step 3: Advanced
The Advanced tab of the Pool Create/Edit popup specifies optional settings for the pool.


Field Description

Placement Settings In some scenarios, a server may exist in multiple networks.


Similarly, a network may have multiple IP subnets or a
single subnet may exist in multiple networks. For instance,
VMware servers may have multiple port groups assigned
to a single subnet, or a single port group may be assigned to
multiple subnets. Normally, NSX Advanced Load Balancer
will try to determine the network for the servers. However,
in scenarios where it cannot determine which network to
use, an administrator may be required to manually select it
as follows.
n Server Network — Click the pull-down menu icon to
reveal potential networks for server placement. The
list might look as shown below. Click on the desired
network.

n Subnet — Once the network has been chosen, a


subnet(s) will be displayed. Choose one or enter one
using the syntax 10.1.1.0/24.

Pool Full Settings This section configures HTTP request queuing, which
causes NSX Advanced Load Balancer to queue
requests that are received after a back-end server has
reached its maximum allowed number of concurrent
connections. Queuing HTTP requests provides time for
new connections to become available on the server, thus
avoiding the configured pool-down action. For complete
details, refer to the HTTP Request Queueing article.


Pool Failure Settings Fail Action — Three fail actions are defined.
n Close Connection — If all servers in a pool are down,
the default behavior of the virtual service is to close
new client connection attempts by issuing TCP resets
or dropping UDP packets. Existing connections are not
terminated, even though their server is marked down.
The assumption is the server may be slow but may
still be able to continue processing the existing client
connection.
n HTTP Local Response — Returns a simple web page.
Specify a status code of 200 or 503. If a custom HTML
file has not been uploaded to NSX Advanced Load
Balancer, it will return a basic page with the error
code.

n Status Code — Select 200 or 503 from the pull-


down.
n Upload File — Click the button to navigate to and
select an HTML page to be returned as the SE-
local response.

Note You can upload any type of file as a local response.


It is recommended to configure the local file using the UI. To
update the local file using the API, base64-encode the file out
of band and use the encoded content in the API call.

n HTTP Redirect — Returns a redirect HTTP response


code, including a specified URL.

n Status Code — Choose 301, 302, or 307 from the


pull-down.
n HTTP/HTTPS — By default NSX Advanced Load
Balancer will redirect via HTTPS unless HTTP is
clicked instead.
n URL — Enter a URL of the format domain.com/
path/file?query=bbb.

Note A virtual service is marked up when at least one


of the pools associated with the virtual service is up, or
if there are redirect policies that do not need pools. The
mere presence of pool-down action by itself does not
mark the virtual service up.

You can specify the minimum threshold parameters for a


pool to make it serviceable. For more information, view
Parameters to Mark a Virtual Service or Pool up.


If a virtual service is marked down, and any of the


following situations apply, then the pool-down action is
not triggered:
1 “Remove Listening Port when VS Down” setting on
the VS - When this is set, and the VS is down,
the dispatcher drops the packets, and the pool-down
action is not triggered.
2 BGP cases - When the virtual service is marked down,
BGP withdraws this VS from the peer. Consequently,
the peer does not send any traffic to this SE.
3 ECMP case (without route summarization) - When
the virtual service is marked down, the Controller
withdraws the VS from the router. As a result, the
router does not send any traffic to this SE.


Other Settings Disable Port Translation — This feature is for virtual


services that are listening on multiple service ports, such
as Microsoft Lync, which has multiple listener ports.
Instead of having all connections directed to a single port
on the server (defined by the pool’s default server port or
the server’s optional port field), they will be sent to the
same port that they were received on the virtual service.
n Ignore Server Port — The Ignore Server Port option
is only relevant when the pool is configured to
use the consistent hash load balancing algorithm or
when Disable Port Translation is set. When Ignore
Server Port is enabled, the consistent hash algorithm
considers only the server IP address and ignores the
server port, resulting in the same server being selected
across pools with the same pool members but different
server ports.

n Description — Enter an optional description of up


to 256 characters in this field. This field is for user
convenience only.
n Connection Ramp — Enabling this option by entering
a number larger than 0 results in a graceful increase in
the number of new connections sent to a server over
the specified time period. For instance, assume that
the load balancing algorithm is set to least connections
and a pool has two servers with 100 connections each.
Adding a third server would immediately overwhelm
that third server by immediately sending the next 100
consecutive connections to it. Setting a connection
ramp adds traffic to a new server in a manner similar
to using a ratio. Over the specified period of time,
the new server will receive an ever-increasing ratio
of traffic in relation to its peers. For instance, setting
the ramp to 4 seconds means that the new server will
receive 1/4 of the traffic it would normally be given
for the 1st second. By the 2nd second, the server will
be receiving 1/2 the traffic it might otherwise have
been given. After the 4-second ramp time has elapsed,
the server will receive the normal amount of traffic as
determined by the load balancing algorithm.


Max Connections per Server Specify the maximum number of concurrent connections
allowed for a server. If all servers in the pool reach this
maximum the virtual service will send a reset for TCP
connections or silently discard new UDP streams unless
otherwise specified in the Pool Down Action, described
above. As soon as an existing connection to the server
is closed, that server is eligible to receive the next client
connection. A value of 0 disables the connection limit.

HTTP Server Reselect This option retries an HTTP request that fails or returns
one of a set of user-specified error codes from the
backend server. Normally, NSX Advanced Load Balancer
forwards these error messages back to the client. For
more information, see HTTP Server Reselect.

Step 4: Review
The Review tab displays a summary of the information entered in the previous pool creation tabs.

Review this information and then click Save to finish creating the pool. If needed, you may return
to any previous step by clicking the appropriate tab at the top of the window.

Note The Review tab only displays when creating a new pool; it does not display when editing an
existing pool.


User Service SSL Mode


A new knob is introduced in the pool configuration called use_service_ssl_mode. When this knob
is enabled, the server-side connection’s SSL mode is decided based on the client-side connection
mode. The SSL mode of the connection to the server is decided by the SSL mode on the virtual
service port on which the request was received.

Note This knob can currently be configured only using the CLI/API.

When both use_service_ssl_mode and use_service_port are configured for SSL-enabled VS


service ports, the SSL traffic will be sent to the server by using the pool’s SSL profile/certificate
settings. For non-SSL enabled VS service ports, non-SSL traffic will be sent to the server.

Refer to the table below:

use_service_ssl_mode   Pool SSL   Traffic Sent to the Backend Server

False                  False      All traffic is sent as plaintext.

False                  True       All traffic is sent over SSL.

True                   True       For SSL-enabled VS service ports, SSL traffic is sent using the
                                  pool SSL profile/certificate settings. For non-SSL enabled VS
                                  service ports, non-SSL traffic is sent. For example, if the VS
                                  received non-SSL traffic on port 8081, the non-SSL traffic is
                                  sent to the back-end servers on port 8081. Similarly, if the VS
                                  received SSL traffic on port 8443, NSX Advanced Load Balancer
                                  sends SSL traffic to the back-end servers on port 8443, using
                                  the SSL settings configured at the pool level.

Note This knob can only be enabled if use_service_port (Disable Port Translation) is set to true,
so that NSX Advanced Load Balancer keeps the client's destination port when connecting to the back-end server.

Configure the ssl_profile on the pool’s side to use the option use_service_ssl_mode.
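
A minimal CLI sketch of combining these options follows; the pool name pool-test and the System-Standard SSL profile are placeholders.

: > configure pool pool-test
: pool> use_service_port
: pool> use_service_ssl_mode
: pool> ssl_profile_ref System-Standard
: pool> save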

Specifying Connection Properties at the Pool Level


In NSX Advanced Load Balancer, four properties are configured at a more granular level (pool
level).

The properties are:

1 upstream_connpool_conn_max_reuse — Maximum times a connection to a server in the pool


can be reused. Default value is 0 (which implies the connection may be used an unlimited
number of times).


2 upstream_connpool_server_max_cache — Maximum cached connections per server in the pool.


Default value is 0 (which implies unlimited cached connections).

3 upstream_connpool_conn_idle_tmo — Connection idle timeout, measured in seconds. Default


value is 60.

4 upstream_connpool_conn_life_tmo — Connection life timeout, measured in seconds. Default


value is 600.

These four parameters are currently accessible via the REST API and NSX Advanced Load Balancer
CLI. It is not required to restart the SE for the changes to take effect.

Configuration through NSX Advanced Load Balancer CLI


This section discusses the configuration of the four properties through NSX Advanced Load
Balancer CLI.

Configuring Maximum Reuse Property

: > configure pool pool-test


: pool> conn_pool_properties
: pool:conn_pool_properties> upstream_connpool_conn_max_reuse 5
: pool:conn_pool_properties> save
: pool> save

Configuring Maximum Cache Property

: > configure pool pool-test


: pool> conn_pool_properties
: pool:conn_pool_properties> upstream_connpool_server_max_cache 10
: pool:conn_pool_properties> save
: pool> save

Configuring Connection Idle Time Property

: > configure pool pool-test


: pool> conn_pool_properties
: pool:conn_pool_properties> upstream_connpool_conn_idle_tmo 20
: pool:conn_pool_properties> save
: pool> save

Configuring Connection Life Timeout Property

: > configure pool pool-test


: pool> conn_pool_properties
: pool:conn_pool_properties> upstream_connpool_conn_life_tmo 100
: pool:conn_pool_properties> save
: pool> save
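
As a sketch of the equivalent REST API workflow, the conn_pool_properties sub-object can be updated with a PATCH request. The controller address, credentials, API version header, and pool UUID below are placeholders; verify the exact PATCH semantics against your release.

curl -k -X PATCH \
  -H "Content-Type: application/json" \
  -H "X-Avi-Version: 21.1.4" \
  --user admin:<password> \
  -d '{"replace": {"conn_pool_properties": {"upstream_connpool_conn_idle_tmo": 20, "upstream_connpool_conn_life_tmo": 100}}}' \
  https://<controller-ip>/api/pool/<pool-uuid>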

Configuration through NSX Advanced Load Balancer UI


This section discusses the configuration of the four properties through NSX Advanced Load
Balancer UI.


Procedure

1 Navigate to Applications > Pools. If no pools exist, create one. Otherwise, click the pencil icon
in the row of the pool you wish to edit.

2 Once into the Pool Editor, click on the Advanced tab of the pool-creation wizard, as shown
below.

3 Enter the relevant fields under Pool Full Settings:

Option Description

Request Queuing Enable or Disable Request Queuing when the pool is full by selecting
appropriate radio buttons.

Queue Length Specify the minimum number of requests to be queued when pool is full.

4 Enter the relevant fields under Pool Failure Settings:

Option Description

Fail Action Select an action when a pool failure happens from the drop-down menu. The
menu displays the following values:
n Close Connection
n HTTP Local Response
n HTTP Redirect
By default, the connection will be closed, if a pool experiences a failure.


5 Enter the relevant fields under Connection Pool Settings:

Option Description

Connection Idle Timeout Specify the idle connection timeout. The timer starts each time the
connection is used; the connection is closed after this time elapses with no activity.

Connection Life Timeout Specify the connection life timeout. The timer starts when the connection is
created; the connection is closed after this time elapses.

Connection Max Used Times Specify the maximum number of times the connection is used.

Max Cache Connections Per Server Specify the maximum cache connections per server.

6 Fill in the required fields under Other Settings:

Option Description

Disable Port Translation Select this box to disable port translation.

Connection Ramp Specify the duration for which new connections will be gradually ramped up
to a server recently brought online.

Default Server Timeout Specify a value between 0 milliseconds and 21600000 milliseconds (6 hours).
Server timeout value specifies the time within which a server connection
needs to be established and a request response exchange completes
between NSX Advanced Load Balancer and the server.

Note If the Server Timeout value is not entered, by default the value will be
set to 3600000 milliseconds (1 hour).

Description Specify the pool description.

Max Connections per Server Specify the maximum number of concurrent connections allowed to each
server within the pool.

HTTP Server Reselect Select HTTP Server Reselect box when server responds with specific
response codes.

7 Click Save.

HTTP Server Reselect


When an HTTP request fails or returns an error code that is included in a set of user-specified
error codes, NSX Advanced Load Balancer normally forwards the error response back to the client.
It can instead be configured to retry such requests.

Configurable Options
HTTP server reselect is disabled by default. The feature can be configured within individual pools.
One can optionally select the error codes that trigger the feature. Once enabled, the feature works
in all connection and SSL failure scenarios.


Error Codes
The pool configuration specifies the HTTP error response codes that must result in server
reselection. The error codes can be specified in any of the following ways.

n Explicit code number(s)

Enter one or more individual error codes (for example, 404).

n Range of codes

Enter a range between 400 and 499 or 500 and 599 (for example, 501-503).

n Entire block of codes

4xx or 5xx (for example, 4xx).

Maximum Retries
The default maximum retry setting is 4. Following the first error response, the NSX Advanced Load
Balancer resends the request to the pool up to 4 more times, for a total of 5 attempts. Each retry is
sent to a different server, and each server can receive only one attempt.

If the setting for maximum retries is higher than the number of enabled and running servers within
the pool, each of those servers still receives only one attempt. For example, if maximum retries
is set to 4 but the pool has only 3 servers, the maximum number of retries is only 2. The initial
attempt that fails goes to one of the servers, leaving 2 more servers to try. If the second server
also sends a 4xx or 5xx error code in response to the request, the request is sent to the last server
in the pool. If the last server also sends a 4xx or 5xx, the response from the server is sent back to
the client.

Server Retry Timeout


The srv_retry_timeout variable can be set through the NSX Advanced Load Balancer REST API
or CLI. Ensure that the server_reselect.enabled boolean is set to True for the srv_retry_timeout
variable setting to take effect. The timeout range is 0-3600000 ms (60 mins). A value of 0 causes
the timeout to default to the connection timeout value.

Server Reselection for Idempotent Requests


HTTP server reselect applies only to idempotent request methods, since a given request of this
type always has the same result, even if an identical request is received multiple times by a
server. Multiple identical requests of non-idempotent request methods (POST, LOCK, PATCH,
and CONNECT) are, by definition, not guaranteed to have the same effect as a single such
request. So, HTTP server reselect is not performed for these request methods.

Configuring HTTP Server Reselect


HTTP server reselect can be enabled on the Advanced tab of the pool configuration.

1 Navigate to Applications > Pools.


2 Open the configuration popup for the pool.

a If you are enabling the feature in an existing pool, click the edit icon for the pool.

b For creating a new pool, click Create Pool, select the cloud name and click Next. Enter a
name for the pool on the Settings tab, select the servers on the Servers tab.

3 Click the Advanced tab.

4 Select the HTTP Server Reselect check box.

5 Enter the error response codes that trigger server reselection.

6 Save the changes.

a If creating a new pool, click Next to review the settings and click Save.

b If editing an existing pool, click Save.

The following example enables HTTP server reselection for all 4xx error codes.

Based on this configuration, if a server in this pool responds to a client request with a 4xx error
code, the NSX Advanced Load Balancer retries the request by sending it to another server in the
pool. The retry process can happen up to 4 times (to 4 different servers).

CLI Example

Note Only significant lines of interest from the CLI output are included in the following example.

[admin:10-10-27-18]: > configure pool vs-test-pool


Updating an existing object. Currently, the object is:
+---------------------------------------+------------------------------------------------+
| Field | Value |
+---------------------------------------+------------------------------------------------+
| uuid | pool-8e91b1a6-17bf-490e-b59a-05efd942a3f6 |


| name | vs-test-pool |
. .
. .
. .
| server_reselect | |
| enabled | False |
| num_retries | 4 |
| retry_nonidempotent | False |
| srv_retry_timeout | 0 milliseconds |
. .
. .
. .
+---------------------------------------+------------------------------------------------
+
[admin:10-10-27-18]: pool> server_reselect enabled
[admin:10-10-27-18]: pool:server_reselect> srv_retry_timeout 5000
Overwriting the previously entered value for srv_retry_timeout
[admin:10-10-27-18]: pool:server_reselect> save
[admin:10-10-27-18]: pool> exit
+---------------------------------------+------------------------------------------------+
| Field | Value |
+---------------------------------------+------------------------------------------------+
| uuid | pool-8e91b1a6-17bf-490e-b59a-05efd942a3f6 |
| name | vs-test-pool |
. .
. .
. .
| server_reselect | |
| enabled | True |
| num_retries | 4 |
| retry_nonidempotent | False |
| srv_retry_timeout | 5000 milliseconds |
. .
. .
. .
+---------------------------------------+------------------------------------------------+

[admin:10-10-27-18]: > configure pool vs-test-pool server_reselect


[admin:10-10-27-18]: pool:server_reselect> no enabled
+---------------------+-------------------+
| Field | Value |
+---------------------+-------------------+
| enabled | False |
| num_retries | 4 |
| retry_nonidempotent | False |
| srv_retry_timeout | 5000 milliseconds |
+---------------------+-------------------+
[admin:10-10-27-18]: pool:server_reselect> srv_retry_timeout 0
Overwriting the previously entered value for srv_retry_timeout
[admin:10-10-27-18]: pool:server_reselect> save
[admin:10-10-27-18]: pool> save
+---------------------------------------+------------------------------------------------+
| Field | Value |
+---------------------------------------+------------------------------------------------+
| uuid | pool-8e91b1a6-17bf-490e-b59a-05efd942a3f6 |


| name | vs-test-pool |
. .
. .
. .
| server_reselect | |
| enabled | False |
| num_retries | 4 |
| retry_nonidempotent | False |
| srv_retry_timeout | 0 milliseconds |
. .
. .
. .
+---------------------------------------+------------------------------------------------+
[admin:10-10-27-18]: >

Deactivating Back-end Servers for Maintenance


NSX Advanced Load Balancer provides a way to actively disable back-end servers for maintenance.
When a server is deactivated for maintenance, it is marked Disabled. Existing
sessions are terminated immediately or allowed to end gracefully, either with a user-settable
maximum timeout or with no timeout.

The Graceful Disable Timeout parameter set for a pool governs how servers within the pool
are disabled as follows:

n Disable with immediate effect: All client sessions are immediately terminated. The
pool’s Graceful Disable Timeout parameter must be set to 0.

n Gracefully disable with a finite timeout: No new sessions are sent to the server.
Existing sessions are allowed to terminate on their own, up to the specified timeout. Once the
timeout is reached, any remaining sessions are immediately terminated. The pool’s Graceful
Disable Timeout parameter must range from 1 to 7200 minutes.

n Gracefully disable with infinite timeout: No new sessions are sent to the server.
All existing sessions are allowed to terminate on their own. The pool’s Graceful Disable
Timeout parameter must be set to -1.

When servers are gracefully deactivated until all flows drain, the idle connections, if any, will be
deleted immediately.

In-flight connections are closed at the end of a request for a request-switched virtual service and
the end of a client connection for a connection-switched virtual service.

When servers are gracefully deactivated with a timeout value, the idle connections are closed
immediately, while any busy or bound connection will be closed at the end of the timeout. Any
existing request will be processed until the end of the timeout. Any new requests, whether from an
existing connection or a new connection, will be sent to a new server.

Setting a Pool’s Graceful Disable Timeout Parameter


1 Navigate to Applications > Pools to display a list of pools.


2 Identify the pool containing the servers whose timeout parameter is to be set, and click on the
pencil icon at the right end of that pool.

3 In the Edit Pool window, set the Graceful Disable Timeout field to 0, -1, or within the range of
1 to 7200 minutes.
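
The same parameter can also be set from the CLI. A minimal sketch, assuming a pool named pool-test and a 30-minute graceful timeout:

: > configure pool pool-test
: pool> graceful_disable_timeout 30
: pool> save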

Disabling a Server for Maintenance


1 Navigate to Applications > Pools.

2 Click on the pool name.

3 Click Servers tab.

4 Select the checkbox next to the name of each server that you wish to disable.

5 Click Disable button.

Note NSX Advanced Load Balancer can be configured to use information in the health-check
responses from servers to detect when a server is in maintenance mode. For information, see
Detecting Server Maintenance Mode with a Health Monitor.

You can configure how the pool server should behave when it is disabled through the CLI as
follows:

Use disallow_new_connection to specify that a node or pool member allows existing


connections to time out but does not accept new connections, as shown below:

configure pool <pool name> server_disable_type disallow_new_connection

save

Use allow_new_connection_if_persistence_present to allow new connections only when they
match an existing persistence entry, as shown below:

configure pool <pool name> server_disable_type allow_new_connection_if_persistence_present

save

When allow_new_connection_if_persistence_present is configured, the timer is refreshed


if a disabled server is picked from the persist table.

When disabled, the node or pool member continues to process persistent and active connections.

New connections can be accepted only if the connections belong to an existing persistent
connection.

These persistence matches for new connections continue until persistence times out.

Rewriting Host Header to Server Name


This section elaborates the steps to enable rewrite host header to server name.


When proxying a request to a back-end server through NSX Advanced Load Balancer, an SE
can rewrite the host header to the server name of the back end server to which the request is
forwarded. This functionality can be turned on for selected or all servers in the pool.

Enabling Rewrite Host Header Option


The Rewrite Host Header option can be enabled in two ways through the Edit Pool window:

1 Under the Settings tab, select the Rewrite Host Header to Server Name checkbox.

2 Under the Servers tab, select the Rewrite Host Header checkbox corresponding to the
individual server for which this behavior is intended.

The pool-level checkbox (option 1) takes precedence over option 2. If the pool-level option is
selected, the behavior is ON for all servers, no matter what selections have been made on a
per-server basis.

If rewrite host header to SNI is turned on along with this feature, the SNI rewrite takes precedence
over the “to server name” rewrite.
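
The pool-level option can also be toggled from the CLI. A minimal sketch, assuming a pool named pool-test:

: > configure pool pool-test
: pool> rewrite_host_header_to_server_name
: pool> save

To turn the behavior off again, use no rewrite_host_header_to_server_name before saving.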

Using rewrite_host_header_to_server_name with


rewrite_host_header_to_sni
The following observations clarify how rewrite_host_header_to_server_name
interacts with rewrite_host_header_to_sni.

n For Non-SSL back-end servers : rewrite_host_header_to_sni has no effect on the non-SSL


back-end servers. Host Header is set according to the rewrite_host_header_to_server_name
flag.

n For SSL back-end servers with the TLS SNI Enabled flag set as OFF: The
rewrite_host_header_to_sni has no effect. The Host header is set according to the
rewrite_host_header_to_server_name flag.

n For SSL back-end servers with the TLS SNI Enabled flag set as ON – Incoming Host Header =
Abc.com.

Note The following combination of the configuration options is not supported because the SNI
name used in the SSL handshake, and the host header used in the request do not match.
n The TLS SNI Enabled flag is set as ON.

n SNI name is configured in the pool, while the rewrite_host_header_to_server_name option is


enabled.

Rewriting Host Header with Pool Member and Port Number


The port number can also be included when rewriting the host header with the pool
member's name.


To update the port to the hostname in the host header, the following options are available under
the pool configuration:

n Append port if not default port for protocol (80 and 443)

n Never append port

n Always append port

The following screenshot shows the Append Port option available under the Pool > Settings on
the NSX Advanced Load Balancer UI.
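
In the pool object, this selection corresponds to the append_port field. The following CLI sketch assumes a pool named pool-test and assumes NON_DEFAULT_80_443 as the token for the default behavior; verify the exact enum tokens against your release before use.

: > configure pool pool-test
: pool> append_port NON_DEFAULT_80_443
: pool> save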

Allowed Characters for Object Names


This section discusses rules and limitations in naming virtual services, pools and other objects in
the NSX Advanced Load Balancer.

Object Names
Object names within the NSX Advanced Load Balancer, such as the names of virtual services and
pools, have the following limitations:

n Uniqueness within tenants: an object name must be unique within a given tenant. Different
tenants can use the same name.

n Maximum Length: 128 characters.

n Alphabetic characters allowed: a -> z; A -> Z

n Digital characters allowed: 0 -> 9

n The space character, plus these special symbols: . @ + - _

Object names can be changed without impact to linked objects. For instance, each virtual service
is associated with a pool. The name of a virtual service can be changed without requiring a change
to the configuration of the pool that the virtual service is associated with.


Local User Names


The names of NSX Advanced Load Balancer user accounts that are maintained locally, in the
Controller database, support the same characters as those for other object names within the
Controller. (The supported characters are listed above.)

Note
n User accounts created through Keystone or LDAP / AD have the same limitations as other user
accounts in those authentication systems.

n The NSX Advanced Load Balancer user names that include any of the supported special
characters ( . @ + - _ ) can access the Controller through the web interface, API, or CLI.
However, these accounts cannot access the Controller’s Linux shell. For example:

shell
Shell access not allowed for this user

Best Practice for Referring To Object Names


Each object is assigned a unique identifier (UUID). As a best practice, API calls and custom scripts
must refer to UUIDs rather than the object names. This practice can eliminate potential impact to
the operation of scripts following a change to the name of an object.
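
For example, a script that retrieves a pool by UUID keeps working even if the pool is later renamed, whereas a lookup by name must be updated whenever the name changes. The sketch below reuses the pool UUID and name from the HTTP Server Reselect CLI example earlier in this guide; the controller address is a placeholder. The first call references the pool by UUID, the second by name.

GET https://<controller-ip>/api/pool/pool-8e91b1a6-17bf-490e-b59a-05efd942a3f6
GET https://<controller-ip>/api/pool?name=vs-test-pool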

Pool Groups
A pool group is a list of server pools, accompanied by logic to select a server pool from the list.
Wherever a virtual service can refer to a server pool (directly, or via rules, DataScripts, or service
port pool selector), the virtual service could instead refer to a pool group.

Note Pool selection is often referred to as pool switching.

The pool group is a powerful construct that can be used to implement the following:

n Priority Pools/Servers

n Backup Pools

n A/B Pools

n Blue/Green Deployment

n Canary Upgrades

Note This feature is not supported for IPv6.


What is a Pool Group?


A pool group is a list of member (server) pools, combined with logic to select a member from
the list. The PoolGroup object is represented as a list of 3-tuples { Priority, Pool, Ratio }, each
tuple describing a member. For example, defining the pool group depicted below would require a
PoolGroup object with nine 3-tuples.

[Figure: Pool group with nine members. Pool selection step 1 chooses a priority level (high_pri: pool1-pool3; med_pri: pool4-pool5; low_pri: pool6-pool9). Pool selection step 2 chooses a pool within that level by weight (weight_1 through weight_9).]

How Does a Pool Group Work?


Let’s use the figure above to describe a typical scenario.

When a Service Engine responsible for a virtual service needs to identify a server to which to
direct a particular client request, these are the steps.

n Step 1: Identify the best pools within the group. This is governed by pool priority. This group
of nine members defines three priorities— high_pri, med_pri, and low_pri — but pool1, pool2,
and pool3 are the preferred (best) ones because they’ve all been assigned the highest priority.
NSX Advanced Load Balancer will do all it can to pick one of them.

n Step 2: Identify one of the highest-priority pools. This choice will be governed by the weights
assigned to the three pool members, weight_1, weight_2, and weight_3. The ratio implied by
those weights governs the percentage of traffic directed to each of them.

n Step 3: Identify one server within the chosen pool. Each of the 9 members can be configured
with a different load-balancing algorithm. The algorithm associated with the chosen pool will
govern which of its servers is selected.

The Effect of Persistence


The steps above describe how the algorithm applies to client requests when persistence is not in
effect. However, if persistence is configured (which can be done on a per-pool basis), it has an
overriding effect on the 2nd through nth request from a given client.


To enable persistence in a pool, navigate to Applications > Pools > Edit Pool > Settings and select
a persistence from the Persistence drop-down menu provided.

Pool or Pool Group?


Pools and pool groups can be used interchangeably on a virtual service. If you anticipate needing
any of the pool group use cases in the future, use a pool group. You will profit from its flexibility,
without disruption to existing traffic. There is no traffic disruption when pool group membership
changes. Connections to servers in an existing pool member are allowed to complete even if the
pool member is removed from the pool group. Likewise, the pool group can be expanded dynamically.

On the other hand, if the functionality of a pool group is not anticipated, use a pool. A simple pool
that does the job is more efficient than a pool group. It consumes less SE and Controller memory
by avoiding the configuration of an additional full-fledged uuid object.

Note The list of pools eligible to be members of a pool group will exclude those associated with
other virtual services.

Configuration
Considering a pool group consisting of two pools, following are the steps to configure the feature:

Create Pool
Create individual pools that will be attached to the pool group by navigating to Applications >
Pools > CREATE POOL. The pools pool-1, pool-2, and cart2 have been created here.

For more information on configuring pool settings, see Server Pools.

Create Pool Group


1 Navigate to Applications > Pool Groups > CREATE POOL GROUP.


2 In the Pool Group Members section, add the previously created pools as member pools or
create new member pools. Note that each pool has been assigned a priority here.

3 Click +Add Pool Group Member.

a Select or create a pool from the Pool drop-down menu.

b Enter a Ratio from 1-1000.

c Select a pool with a higher priority.

d Select a Deployment State from the drop-down menu. The deployment state options are:

1 Evaluation Failed

2 Evaluation In Progress

3 In Service


4 Out Of Service

4 In the Pool Servers section, specify the optional settings for the pool group:

a Enable HTTP2 - Select to enable HTTP/2 for traffic from virtual service to all the backend
servers in all the pools configured under this pool group.

b Minimum number of servers - The minimum number of servers to distribute traffic. You
can enter a value from 1 to 65535.

5 In the Pool Group Failure Settings section, specify the action to be executed when the pool
group experiences failure. There are three options available as fail actions:

a Close Connection- If all servers in a pool are down, the default behavior of the virtual
service is to close new client connection attempts by issuing TCP resets or dropping UDP
packets. Existing connections are not terminated, even though their server is marked
down. The assumption is the server may be slow but may still be able to continue
processing the existing client connection.

b HTTP Local Response - Returns a simple web page. Specify a status code of 200 or 503. If
a custom HTML file has not been uploaded to NSX Advanced Load Balancer, it will return a
basic page with the error code.

1 Status Code- Select 200 or 503 from the drop-down menu.

c HTTP Redirect - Returns a redirect HTTP response code, including a specified URL.

1 Status Code - Choose 301, 302, or 307 from the drop-down menu.

2 HTTP/HTTPS - By default NSX Advanced Load Balancer will redirect using HTTPS
unless HTTP is clicked instead.

3 URL - Enter a URL of the format domain.com/path/file?query=bbb.

Note By default Close Connection is selected.

6 Select or create a Pool Group Deployment Policy. Autoscale manager automatically promotes
new pools into production when deployment goals are met as defined in the Pool Group
Deployment Policy.

7 In the Role-Based Access Control (RBAC) section, click ADD.

a Enter the Key and the corresponding Value(s).

To know more about configuring labels, see Granular Role Based Access Controls per App.
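
For reference, an equivalent pool group can also be sketched from the CLI. The pool group and member pool names below are placeholders, and each invocation of members adds a new member entry:

: > configure poolgroup pg-example
: poolgroup> members
: poolgroup:members> pool_ref pool-1
: poolgroup:members> priority_label 10
: poolgroup:members> ratio 1
: poolgroup:members> save
: poolgroup> members
: poolgroup:members> pool_ref pool-2
: poolgroup:members> priority_label 3
: poolgroup:members> ratio 1
: poolgroup:members> save
: poolgroup> save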

Attach the Pool Group to a Virtual Service


1 Create a virtual service (in Advanced Mode) and configure its pool settings to include a pool
group as follows:


2 Navigate to Applications > CREATE VIRTUAL SERVICE > Advanced Setup > New Virtual
Service.

a Under Step 1: Settings tab, Select Pool Group radio button to attach the previously
created pool group to the virtual service.

b The pool group is attached and the virtual service is active, as shown below:

3 To view the overall setup of the virtual service and pool groups, navigate to Applications >
Dashboard and select VS Tree from View VS List drop down menu.


Use Cases
Priority Pools/Servers

Consider a case where a pool has different kinds of servers — newer, very powerful ones, older
slow ones, and very old, very slow ones. In the diagram, imagine the blue pools are comprised
of the new, powerful servers, the green pools have the older slow ones, and the pink pool the
very oldest. Further note they’ve been assigned priorities from high_pri down to low_pri. This
arrangement causes NSX Advanced Load Balancer to pick the newer servers in the 3 blue pools
as much as possible, potentially always. Only if no server any of the highest priority pools can be
found, NSX Advanced Load Balancer will send the slower members some traffic as well, ranked by
priority.

One or a combination of circumstances trigger such an alternate selection (of a lower priority
pool):

1 A running server can't be found.

2 Similar to #1, no server at the given priority level will accept an additional connection. All
candidates are saturated.

3 No pool at the given priority level is running the minimum server count configured for it.

Operational Notes

n It is recommended to keep the priorities spaced, and leave gaps. This makes the addition of
intermediate priorities easier at a later point.

n For the pure priority use case, the ratio of the pool group is optional.

n Setting the ratio to 0 for a pool results in sending no traffic to this pool.

n For each of the pools, normal load balancing is performed. After NSX Advanced Load
Balancer selects a pool for a new session, the load balancing method configured for that pool
is used to select a server.

Sample Configuration for a Priority Pool

With only three pools in play, each at a different priority, the values in the Ratio column don’t
enter into pool selection. The cart2 pool will always be chosen, barring any of the three circumstances
described above.


Backup Pools

The pre-existing implementation of backup pools is explained in the Pool Groups section. The
existing option of specifying a backup pool as a pool-down/fail action is deprecated. Instead,
configure a pool group with two or more pools, with varying priorities. The highest priority pool
will be chosen as long as a server is available within it (in alignment with the three previously
mentioned circumstances).

Operational Notes

n A pool with a higher value of priority is deemed better, and traffic is sent to the pool with the
highest priority, as long as this pool is up, and the minimum number of servers is met.

n It is recommended to keep the priorities spaced, and leave gaps. This makes the addition of
intermediate priorities easier at a later point.

n For each of the group’s pool members, normal load balancing is performed. After NSX
Advanced Load Balancer selects a pool for a new session, the load balancing method
configured for that pool is used to select a server.

n The addition or removal of backup pools does not affect existing sessions on other pools in the
pool group.


Sample Configuration for a Backup Pool

1 Create a pool group ‘backup’, which has two member pools — primary-pool with a priority of
10, and backup-pool which has a priority of 3.

Object details:

{
    url: "https://10.10.25.20/api/poolgroup/poolgroup-f51f8a6b-6567-409d-9556-835b962c8092",
    uuid: "poolgroup-f51f8a6b-6567-409d-9556-835b962c8092",
    name: "backup",
    tenant_ref: "https://10.10.25.20/api/tenant/admin",
    cloud_ref: "https://10.10.25.20/api/cloud/cloud-3957c1e2-7168-4214-bbc4-dd7c1652d04b",
    _last_modified: "1478327684238067",
    min_servers: 0,
    members:
    [
        {
            ratio: 1,
            pool_ref: "https://10.10.25.20/api/pool/pool-4fc19448-90a2-4d58-bb8f-d54bdf4c3b0a",
            priority_label: "10"
        },
        {
            ratio: 1,
            pool_ref: "https://10.10.25.20/api/pool/pool-b77ba6e9-45a3-4e2b-96e7-6f43aafb4226",
            priority_label: "3"
        }
    ],
    fail_action:
    {
        type: "FAIL_ACTION_CLOSE_CONN"
    }
}

A/B Pools

NSX Advanced Load Balancer supports the specification of a set of pools that could be deemed
equivalent pools, with traffic sent to these pools in a defined ratio.

For example, a virtual service can be configured with a single priority group having two pools, A
and B. Further, the user could specify that the ratio of traffic to be sent to A is 4, and the ratio of
traffic for B is 1.

The A/B pool feature sometimes referred to as blue/green testing, provides a simple way to
gradually transition a virtual service’s traffic from one set of servers to another. For example,
to test a major OS or application upgrade in a virtual service’s primary pool (A), a second
pool (B) running the upgraded version can be added to the primary pool. Then, based on the
configuration, a ratio (0-100) of the client-to-server traffic is sent to the B pool instead of the A
pool.

To continue this example, if the upgrade is performing well, the NSX Advanced Load Balancer
user can increase the ratio of traffic sent to the B pool. Likewise, if the upgrade is unsuccessful or
sub-optimal, the ratio to the B pool easily can be reduced again to test an alternative upgrade.

To finish transitioning to the new pool following a successful upgrade, the ratio can be adjusted to
send all traffic to pool B, which then becomes the production pool.

To perform the next upgrade, the process can be reversed. After upgrading pool A, the ratio of
traffic sent to pool B can be reduced to test pool A. To complete the upgrade, the ratio of traffic to
pool B can be reduced back to 0.
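
To shift traffic gradually, the ratio of an existing member can be adjusted in place from the CLI. A minimal sketch, assuming a pool group named ab whose second member entry is the B pool (the index value is an assumption about member ordering):

: > configure poolgroup ab
: poolgroup> members index 2
: poolgroup:members> ratio 5
: poolgroup:members> save
: poolgroup> save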

Operational Notes

n Setting the ratio to 0 for a pool results in sending no traffic to this pool.

n For each of the pools, normal load balancing is performed. After NSX Advanced Load
Balancer selects a pool for a new session, the load balancing method configured for that pool
is used to select a server.

n The A/B setting does not affect existing sessions. For example, setting the ratio sent to B to
1 and A to 0 does not cause existing sessions on pool A to move to B. Likewise, A/B pool
settings do not affect persistence configurations.

n If one of the pools that has a non-zero ratio goes down, new traffic is equally distributed to the
rest of the pools.

n For pure A/B use cases, the priority of the pool group is optional.

n Pool groups can be applied as default on the virtual service, or attached to rules, DataScripts
and Service port pool selector as well.


Sample Configuration for an A/B Pool

1 Create a pool group ‘ab’, with two pools in it — a-pool and b-pool — without specifying any
priority:

In this example, roughly 10% of the traffic is sent to b-pool by setting the ratios of a-pool and b-pool
to 10 and 1, respectively.

2 Apply this pool group to the VS, where you would like to have A/B functionality:


Object details:

{
    url: "https://<controller-ip>/api/poolgroup/poolgroup-7517fbb0-6903-403e-844f-6f9e56a22633",
    uuid: "poolgroup-7517fbb0-6903-403e-844f-6f9e56a22633",
    name: "ab",
    tenant_ref: "https://<controller-ip>/api/tenant/admin",
    cloud_ref: "https://<controller-ip>/api/cloud/cloud-3957c1e2-7168-4214-bbc4-dd7c1652d04b",
    min_servers: 0,
    members:
    [
        {
            ratio: 10,
            pool_ref: "https://<controller-ip>/api/pool/pool-c27ef707-e736-4ab6-ab81-b6d844d74e12"
        },
        {
            ratio: 1,
            pool_ref: "https://<controller-ip>/api/pool/pool-23853ea8-aad8-4a7a-8e9b-99d5b749e75a"
        }
    ]
}

Additional Use Cases


Blue/Green Deployment

This is a release technique that reduces downtime and risk by running two identical production
environments, only one of which (e.g., blue) is live at any moment, and serving all production
traffic. In preparation for a new release, deployment and final-stage testing takes place in an
environment that is not live (e.g., green). Once confident in green, all incoming requests go to
green instead of blue. Green is now live, and blue is idle. Downtime due to application deployment
is eliminated. In addition, if something unexpected happens with the new release on the green, roll
back to the last version is immediate; just switch back to blue.

Canary Upgrades

This upgrade technique is so called because of its similarity to a miner's canary, which would detect
toxic gases before any humans were affected. The idea is that when performing system
updates or changes, a group of representative servers is updated first and monitored/tested for
a period of time; only thereafter are rolling changes made across the remaining servers.

Disable Primary Pool When Down


Starting with NSX Advanced Load Balancer 20.1.7, the deactivate primary pool when down option
is available for the pool groups associated with a virtual service. If the primary pool goes down, it
is disabled and will not become the primary pool again, even when it comes back online. This forces
new and existing connections to be routed to the secondary pool (which takes over the role of
primary) until the administrator manually re-enables the primary pool.

The Process
n A pool group is configured with members (each with different priorities).


n By default, the pool configured with the highest priority acts as the primary pool and receives
all the connections or requests.

n When the highest priority pool goes down, the next available priority pool takes over the
current primary role and receives all connections and requests.

n When the previous primary pool comes back online, it does not resume the current primary
role automatically. Once the primary pool goes down, it is not eligible to take over, until the
administrator manually makes it the primary pool.

n When the administrator configures one of the members as primary, all connections to the old
primary are cleared and the requested pool becomes the new primary.

Enabling Deactivate Primary Pool on Down Option


Enable the deactivate_primary_pool_on_down flag under thepool group configuration as shown
below:

[admin:cntrlr]: > configure poolgroup <poolgroup name>


[admin:cntrlr]: poolgroup> deactivate_primary_pool_on_down
[admin:cntrlr]: poolgroup> save

Enabling One of the Pools as the Primary Pool


[admin:cntrlr]: > show poolgroup pg1 detail
+---------------------------------------+--------------------------------------------------+
| Field | Value |
+---------------------------------------+--------------------------------------------------+
| last_primary_pool | pool2(pool-4c86d835-16ec-4a60-839c-064d33040dff) |
| current_primary_pool | pool1(pool-23aad7e1-4f5a-4dbf-8361-0324480cc2c9) |
| last_primary_pool_switchover_time | Wed Aug 18 12:42:22 2021 ms(0) UTC |
| primary_pool_switchover_in_progress | False |
| num_conn_drops_during_pool_switchover | 22 |
+---------------------------------------+--------------------------------------------------+

Use the enable_primary_pool option to make the highest priority pool primary:

[admin:cntrlr]: > clear poolgroup pg enable_primary_pool

Use the following option to make the specified pool primary:

[admin:cntrlr]: > clear poolgroup pg1 enable_primary_pool pool_uuid


pool-4c86d835-16ec-4a60-839c-064d33040dff

Pool Group Sharing Across Virtual Services


NSX Advanced Load Balancer supports the sharing of pool groups across multiple virtual services.
The feature supports use cases wherein the same back-end servers are being used by different
virtual services, each virtual service having its purpose and properties.


A pool group is a list of member (server) pools, combined with logic to select a member from the
list. Like a pool, a pool group can be shared by the same type of Layer 7 virtual service. This article
explains the feature’s capabilities, the related CLI commands, and the present limitations.

Pool Group Sharing


A virtual service might refer to a given pool group through multiple techniques:

n As the default pool group defined for a virtual service.

n Through policy-based content-switching, a virtual service might choose one of its pool groups

n Via DataScript, a virtual service might programmatically choose one of its pool groups

A pool group can be referenced by multiple virtual services. In accessing the shared pool group,
each virtual service can independently use any one of the multiple techniques listed above. As
before, a virtual service may access multiple pools, some of them shared and others not. Virtual
services sharing a pool group need not be placed on the same SE group.

Note This feature is supported for combinations of IPv4, IPv6, and IPv4v6 addresses.

Restrictions
These are some restrictions when sharing a pool group:

1 Only similar virtual services can share a pool group

2 Layer 4 virtual services cannot share a pool group yet.

3 A pool can be part of multiple pool groups either through the same virtual service or different
virtual services.

4 If a pool or pool group is selected using a service_port_selector, it cannot be shared.

5 Pool groups cannot contain pool groups.

6 A pool directly linked to a virtual service should not be part of a pool group.

Note Some of these restrictions may be removed in future releases.

Configuring Pool Group Sharing


This section covers the configuration steps of pool group sharing.

While the way you work with pools and pool groups remains the same, with pool group sharing:

n There is an increased number of pool group choices when configuring a virtual service.

n There are more ways to extract pool-related information when querying for statistics.

To assign a pool group to an existing virtual service:


Procedure

1 Navigate to Applications > Virtual Services.

2 Click the edit icon against the required virtual service.

3 Select Pool Group in the Edit Virtual Service: screen.

4 Select the required pool groups from the list.

The Edit Virtual Service: screen appears as shown in the following figure:


Note You can create a pool group by clicking Create Pool Group or navigating to
Applications > Pool Groups > Create Pool Group. Refer to the Configuration section in Pool
Groups for more details.

5 Click Save.

Results

The selected pool groups are now assigned to the required virtual services. With pool group
sharing, you can see there is a broader set of pool groups available.
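
A shared pool group can also be assigned from the NSX Advanced Load Balancer CLI. The
following is a minimal sketch assuming two existing virtual services (vs-1 and vs-2) and an
existing pool group (pg-shared); the object names are placeholders, and the pool_group_ref
field should be verified against your release:

[admin:cntrlr]: > configure virtualservice vs-1
[admin:cntrlr]: virtualservice> pool_group_ref pg-shared
[admin:cntrlr]: virtualservice> save

[admin:cntrlr]: > configure virtualservice vs-2
[admin:cntrlr]: virtualservice> pool_group_ref pg-shared
[admin:cntrlr]: virtualservice> save

Because both virtual services reference the same pool group, the back-end servers are configured
once and reused by both applications.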

Reporting
This section covers the steps to view the overall setup of the pool groups.

To view the overall setup of the pool groups:

Procedure

1 Navigate to Applications > Dashboard.

2 Click the View VS Tree filter.

3 Select the specific virtual service.

Pool group sharing set up with the virtual service is represented as shown in the following
image:


4 Click a virtual service to view the pool groups associated.

The following image shows the Virtual Service screen for a selected virtual service, with the
pool groups shared:


This shows that a single virtual service can be associated with multiple pool groups.

Load Balancing Algorithms


Load balancing algorithm selection determines the method and prioritization for distributing
connections or HTTP requests across available servers. The available algorithms are:

n Consistent Hash

n Core Affinity

n Fastest Response

n Fewest Servers

n Least Connections

n Least Load

n Round Robin

n Fewest Tasks

The load balancing algorithm can be changed using either the NSX Advanced Load Balancer UI or
the NSX Advanced Load Balancer CLI. Select a local server load-balancing algorithm using the
Algorithm field within the Applications > Pool > Settings page. Changing a pool’s LB algorithm only
affects new connections or requests and has no impact on existing connections. The available
options, in alphabetical order, are:


Consistent Hash
New connections are distributed across the servers using a hash that is based on a key specified in
the field that appears below the LB Algorithm field or in a custom string provided by the user via
the avi.pool.chash DataScript function. Below is an example of persisting on a URI query value:

hash = avi.http.get_query("r")
if hash then
    avi.pool.select("Pool-Name")
    avi.pool.chash(hash)
end

This algorithm inherently combines load balancing and persistence, which minimizes the need
to add a persistence method. This algorithm is best for load balancing large numbers of cache
servers with dynamic content. It is ‘consistent’ because adding or removing a server does not
cause a complete recalculation of the hash table. For the example of cache servers, it will not
force all caches to have to re-cache all content. If a pool has nine servers, adding a tenth server
will cause the pre-existing servers to send approximately 1/9 of their hits to the newly-added
server based on the outcome of the hash. Hence, persistence may still be valuable. The rest of the
server’s connections will not be disrupted. The available hash keys are:

Field Description

Custom Header Specify the HTTP header to use in the Custom Header
field, such as Referer. This field is case-sensitive. If
the field is blank or if the header does not exist, the
connection or request is considered a miss and will hash
to a server.

Call-ID Specifies the Call ID field in the SIP header. With this
option, SIP transactions with new call IDs are load
balanced using consistent hash, while existing call IDs are
retained on the previously chosen servers. The state of
existing call IDs is maintained for an idle timeout period
defined by the ‘Transaction timeout’ parameter in the
Application Profile. The state of existing call IDs is relevant
for as long as the underlying TCP/UDP transport state for
the SIP transaction remains the same.

Source IP Address Source IP Address of the client.

Source IP Address and Port Source IP Address and Port of the client.

HTTP URI It includes the host header and the path. For instance,
www.avinetworks.com/index.htm.
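
For reference, the consistent hash algorithm and its key can also be set on a pool from the CLI.
This is a sketch assuming an existing pool named p1 and a hash on the HTTP URI; the enum
values shown follow the pool object’s lb_algorithm and lb_algorithm_hash fields and should be
confirmed for your release:

[admin:cntrlr]: > configure pool p1
[admin:cntrlr]: pool> lb_algorithm lb_algorithm_consistent_hash
[admin:cntrlr]: pool> lb_algorithm_hash lb_algorithm_consistent_hash_uri
[admin:cntrlr]: pool> save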

Core Affinity
Each CPU core uses a subset of servers, and each server is used by a subset of cores. Essentially
it provides a many-to-many mapping between servers and cores. The sizes of these subsets
are parameterized by the variable lb_algorithm_core_nonaffinity in the pool object. When
increased, the mapping increases up to the point where all servers are used on all cores.

If all servers that map to a core are unavailable, the core uses servers that map to the next (with
wraparound) core.
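
As an illustration, the core affinity algorithm and the non-affinity parameter referenced above can
be set from the CLI. This is a sketch assuming an existing pool named p1; the value 4 is only an
example:

[admin:cntrlr]: > configure pool p1
[admin:cntrlr]: pool> lb_algorithm lb_algorithm_core_affinity
[admin:cntrlr]: pool> lb_algorithm_core_nonaffinity 4
[admin:cntrlr]: pool> save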


Fastest Response
New connections are sent to the server that is currently providing the fastest response to new
connections or requests. This is measured as time to the first byte. In the End-to-End Timing chart,
this is reflected as Server RTT plus App Response time. This option is best when the pool’s servers
contain varying capabilities or they are processing short-lived connections. A server that is having
issues, such as a lost connection to the data store containing images, will generally respond very
quickly with HTTP 404 errors. It is best practice when using the fastest response algorithm to also
enable the Passive Health Monitor, which recognizes and adjusts for scenarios like this by taking
into account the quality of server response, not just speed of response.

Note A server that is having issues, such as a lost connection to the data store containing images,
will generally respond very quickly with HTTP 404 errors. You should therefore use the Fastest
Response algorithm in conjunction with the Passive Health Monitor, which recognizes and adjusts
for scenarios like this.

Fewest Servers
Instead of attempting to distribute all connections or requests across all servers, NSX Advanced
Load Balancer will determine the fewest number of servers required to satisfy the current client
load. Excess servers will no longer receive traffic and may be either de-provisioned or temporarily
powered down. This algorithm monitors server capacity by adjusting the load and monitoring the
server’s corresponding changes in response latency. Connections are sent to the first server in the
pool until it is deemed at capacity, with the next new connections sent to the next available server
down the line. This algorithm is best for hosted environments where virtual machines incur a cost.

Least Connections
New connections are sent to the server that currently has the least number of outstanding
concurrent connections. This is the default algorithm when creating a new pool and is best
for general-purpose servers and protocols. New servers with zero connections are introduced
gracefully over a short period of time via the Connection Ramp setting in the Pool > Advanced
page. This feature slowly brings the new server up to the connection levels of other servers within
the pool.

NSX Advanced Load Balancer uses Least Connections as the default algorithm because it generally
provides an equal distribution when all servers are healthy, yet is adaptive to slower or
unhealthy servers. It works well for both long-lived and quick connections.
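
For reference, the algorithm and the connection ramp duration can be set together from the CLI.
This is a sketch assuming an existing pool named p1 and a ramp of 10 minutes; the field names
follow the pool object and should be verified for your release:

[admin:cntrlr]: > configure pool p1
[admin:cntrlr]: pool> lb_algorithm lb_algorithm_least_connections
[admin:cntrlr]: pool> connection_ramp_duration 10
[admin:cntrlr]: pool> save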


Note A server that is having issues, such as rejecting all new connections, may have a concurrent
connection count of zero and be the most eligible to receive all new connections. NSX Advanced
Load Balancer recommends using the Least Connections algorithm in conjunction with the Passive
Health Monitor which recognizes and adjusts for scenarios like this. A passive monitor will reduce
the percent of new connections sent to a server based on the responses it returns to clients.

Least Load
New connections are sent to the server with the lightest load, regardless of the number of
connections that the server has. For example, if an HTTP request requiring a 200-kB response
is sent to one server and a second request that will generate a 1-kB response is sent to another,
this algorithm estimates, based on previous requests, that the server sending the 1-kB response
is more available than the one still streaming 200 kB. The idea is to ensure that a small and fast
request does not get queued behind a very long request. This algorithm is HTTP-specific. For
non-HTTP traffic, it defaults to the Least Connections algorithm.

Round Robin
New connections are sent to the next eligible server in the pool in sequential order. This static
algorithm is best for basic load testing but is not ideal for production traffic because it does not
take the varying speeds or periodic hiccups of individual servers into account. A slow server will
still receive as many connections as a better-performing server.


In the example illustration, a server was causing significant app response time in the end-to-end
timing graph as seen by the orange in the graph. By switching from the static round-robin
algorithm to a dynamic LB algorithm (the blue config event icon at the bottom), NSX Advanced
Load Balancer successfully directed connections to servers that were responding to clients faster,
virtually eliminating the app response latency.

Fewest Tasks
Load is adaptively balanced, based on server feedback. This algorithm is facilitated by an external
health monitor. It is configurable via the NSX Advanced Load Balancer CLI and REST API but
is not visible in the NSX Advanced Load Balancer UI. For details, refer to the Fewest Tasks
Load-Balancing Algorithm.

Configuring Using NSX Advanced Load Balancer CLI


configure pool foo
lb_algorithm lb_algorithm_fewest_tasks
save

An external health monitor can feed back a number (for example, 1-100) to the algorithm by writing
data into the <hm_name>.<pool_name>.<ip>.<port>.tasks file. Each value written to this file is fed
back to the algorithm. The range of numbers provided as feedback and the send interval of the
health monitor may be adjusted to tune the load balancing algorithm behavior to the specific
environment.

For example, consider a pool p1 with 2 back-end servers, s1 and s2. Suppose the health monitor
ticks every 10 seconds (send-interval), and sends back feedback of 100 (high load) and 10 (low
load). At time t1, s1 and s2 are set with 100 tasks and 10 tasks respectively. Now, if you send 200
requests, the first 90 would go to s2, since it had “90” more units available. The next 110 would
be sent equally to s1 and s2. At time t2 = t1 + 10 sec, s1 and s2 get replenished to the new data
provided by the external health monitor.

Here is an example script for use by the external health monitor:

#!/usr/bin/python
import sys
import os
import httplib

# Connect to the server under test; argv[1] is the server IP and argv[2] is the port.
conn = httplib.HTTPConnection(sys.argv[1] + ':' + sys.argv[2])
conn.request("GET", "/")
r1 = conn.getresponse()
if r1.status == 200:
    # Any output on the screen indicates SUCCESS for the health monitor.
    print r1.status, r1.reason
    try:
        # Write the feedback value into the <hm_name>.<pool_name>.<ip>.<port>.tasks file.
        fname = (sys.argv[0] + '.' + os.environ['POOL'] + '.' +
                 sys.argv[1] + '.' + sys.argv[2] + '.tasks')
        f = open(fname, "w")
        try:
            # Write a string to the file. Instead of the static value 230, derive the
            # feedback value from the server response and feed it back.
            f.write('230')
        finally:
            f.close()
    except IOError:
        pass

You can use the show pool <foo> detail and show pool <foo> server detail commands to see
detailed information about the number of connections being sent to the servers in the pool.

Weighted Ratio
NSX Advanced Load Balancer does not include a dedicated weighted ratio algorithm. Instead,
weight may be achieved via the ratio, which may be applied to any server within a pool. The ratio
may also be used in conjunction with any load balancing algorithm. With the ratio setting, each
server receives statically adjusted ratios of traffic. If one server has a ratio of 1 (the default) and
another server has a ratio of 4, the server set to 4 will receive 4 times the amount of connections
it otherwise would. For instance, using the least connections, one server may have 100 concurrent
connections while the second server has 400.
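
For example, a per-server ratio might be set from the CLI as sketched below, assuming an existing
pool named p1 whose second server should receive four times the traffic; the server is selected
by its index here, and the exact sub-object prompts can vary by release:

[admin:cntrlr]: > configure pool p1
[admin:cntrlr]: pool> servers index 2
[admin:cntrlr]: pool:servers> ratio 4
[admin:cntrlr]: pool:servers> save
[admin:cntrlr]: pool> save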

Persistence
A persistence profile governs the settings that force a client to stay connected to the same server
for a specified duration of time. This is sometimes referred to as sticky connections.

By default, load balancing can send a client to a different server each time the client connects
to a virtual service, or even distribute every HTTP request to a different server when connection
multiplexing is enabled. Server persistence guarantees the client will reconnect to the same server
every time they connect to a virtual service, as long as the persistence is still in effect. Enabling a
persistence profile ensures that the client will reconnect to the same server every time, or at least
for a desired duration of time. Persistent connections are critical for most servers that maintain
client session information locally.


All persistence methods are based on the same principle, which is to find a unique identifier of a
client and remember it for the desired length of time. The persistence information can be stored
locally on NSX Advanced Load Balancer SEs or can be sent to a client through a cookie or TLS
ticket. The client will then present that identifier to the SE, which directs the SE to send the client
to the correct server.

Persistence is an optional profile configured within Templates > Profiles > Persistence Profile.
Once the profile is created, it may be attached to one or more pools.

Types of Persistence
NSX Advanced Load Balancer can be configured with a number of persistence templates:

n HTTP Cookie Persistence: NSX Advanced Load Balancer inserts a cookie into HTTP responses.

n App Cookie Persistence: NSX Advanced Load Balancer reads existing server cookies or URI
embedded data such as JSessionID.

n HTTP Custom Header Persistence: Administrators may create custom, static mappings of
header values to specific servers

n Client IP Persistence: The client’s IP is used as the identifier and mapped to the server

n TLS Persistence: Persist information is embedded in the client’s SSL/TLS ticket ID

n GSLB Site Cookie Persistence: GSLB application can be configured to persist to the sites in
which the transactions are initiated

Outside of the persistence profiles, two other types of persistence are available:

n DataScript: Custom persistence may be built using DataScripts for unique persistence
identifiers

n Consistent Hash: This is a combined load balancing algorithm and persistence method, which
can be based on a number of different identifiers as the key

Persistence Mirroring
Persistence data is either stored locally on NSX Advanced Load Balancer Service Engines or is
sent to and stored by clients.

Client-stored persistence methods, which include HTTP cookie, HTTP header mapping, and
consistent hash, are not kept locally on Service Engines. When the data, such as a cookie presented
by the client, is received, it contains the IP address and port of the persisted server for the client.
No local storage or memory is consumed to mirror the persistence. Persist tables may be infinite in
size, as no table is locally maintained.


For locally stored persistence methods, which include app cookies, TLS, client IP addresses,
and DataScripts, NSX Advanced Load Balancer SEs maintain the persist mappings in a local table.
This table is automatically mirrored to all other Service Engines supporting the virtual service,
as well as to the Controllers. An SE failover will not result in a loss of persistence mappings. To
support larger persistence tables, allocate more memory to Service Engines and adjust the SE
Group > Connection table setting.

Persistence Profile Settings


Select Templates > Profiles > Persistence to open the Persistence Profiles tab.

n Search: Search across the list of objects.

n Create: Opens the New Persistence Profile popup.

n Edit: Opens the Edit Persistence Profile popup.

n Delete: A profile may only be deleted if it is not currently assigned to a virtual service. An error
message will indicate the virtual service referencing the profile. The default system profiles can
be edited, but not deleted.

The table on this tab provides the following information for each persistence profile:

Field Description

Persistence Name Name of the profile.

Type Full descriptions of each type of persistence are described
in the next section. The type can be one of the following:
n App Cookie
n Client IP Address
n Custom HTTP Header
n GSLB Site
n HTTP Cookie
n TLS

Create Persistence Profile


The New Persistence Profile and Edit Persistence Profile popups share the same interface.


To create or edit a Persistence Profile:


Field Description

Name Enter a unique name for the Persistence Profile in the Name field.

Type Select the persistence type using the Type pull-down menu. The available options are:
n App Cookie: Rather than have NSX Advanced Load
Balancer insert a new cookie for persistence, it will
use an existing cookie that has been inserted by the
server. If the cookie does not exist, it will look for a
URI query of the same name and will persist on that
value. Typically this persistence will be performed on
an ASP or Java session ID.
n Client IP Address: NSX Advanced Load Balancer
will record the client’s source IP address in a table
for the duration of the Persistence Timeout for this
profile. While the IP address remains in the table,
any new connection by the user will be sent to the
same server. The Client IP Address persistence table
is stored in memory on the Service Engine, and is
automatically mirrored to the Controller and all other
Service Engines that support this virtual service.

Note Starting with release 18.1.2, this feature is


supported for IPv6 in NSX Advanced Load Balancer.
IPv4 and IPv6, both the type of IP addresses can be
used for the persistence type – Client IP Address.
n Custom HTTP Header: This method allows an HTTP
header to be specified for persistence. The Service
Engine will inspect the value of the defined header,
and will match the value against a statically assigned
header field for each server in the pool. If there is a
match, the client will be persisted. The server’s header
field is configured in the Pool’s edit server page, where
new servers are added.
n Header Name: Specify the HTTP header whose
value will be used for the persistence lookup.
n GSLB Site: This permits a given client of a global
application to persist to the first site to which it is
directed. Refer to GSLB Site Cookie Persistance for
more information.
n HTTP Cookie: Applicable to virtual services with an
attached HTTP application profile. NSX Advanced
Load Balancer inserts a cookie into outgoing HTTP
responses and reads the cookie on incoming requests.
The cookie is session based, meaning that the cookie
persistence remains valid as long as the client keeps
their browser open. Closing the browser removes the
cookie stored by the client browser, thereby tearing
down both the connections and the persistence.
Cookies uniquely identify each client, which is useful
if multiple users are accessing the virtual service from
the same IP address. Clients store the persistence
information, so it does not consume memory on the
Service Engine.
n HTTP Cookie Name: Specify the HTTP cookie
name. If this field is left blank, then the system
generates a random 8-character cookie name.
n TLS: Applicable to virtual services that are terminating
SSL or TLS. This method embeds user persistence
information within the ticket field of a TLS session.
Clients negotiating with older SSL v3 will use a
variation that inserts the persistence information into
the SSL Session ID. NSX Advanced Load Balancer
does not allow clients to renegotiate the session,
which is more secure and also ensures that NSX
Advanced Load Balancer can maintain the persistence,
as it controls if and when the Session ID is
renegotiated and recreated.

Select New Server When Persistent Server Down Determine how this profile will handle a condition when
the Health Monitor marks a server as down while NSX
Advanced Load Balancer is persisting clients to it.
n Immediate: NSX Advanced Load Balancer will
immediately select a new server to replace the one
that has gone down and switch the persistence entry
to the new server.
n Never: No replacement server will be selected.
Persistent entries will be required to expire normally
based upon the persistence type.

Persistence Timeout Enter the number of minutes to preserve the client’s
IP address in the Persistence Timeout field. Entering 0
disables persistence and allows new connections to be
load balanced to a new server immediately. The timeout
begins counting down when all connections from the same
source IP address to the virtual service are closed. This
field applies to Client IP persistence only.

HTTP Cookie Persistence


The HTTP cookie mode of persistence enables sticking a client to a server for the duration of a
session. HTTP cookie persistence can be applied to any virtual service with an attached HTTP
application profile. With this persistence method, the NSX Advanced Load Balancer SEs insert an
HTTP cookie into the first response of a server to a client.


To use HTTP cookie persistence, no configuration changes are required on the back-end servers.
HTTP persistence cookies created by the NSX Advanced Load Balancer have no impact on
existing server cookies or behavior.

Note The NSX Advanced Load Balancer also supports an app cookie persistence mode that
relies on cookies. The app cookie method enables persistence based on information in existing
server cookies, rather than inserting a new NSX Advanced Load Balancer-created cookie.

To validate if HTTP cookie persistence is working, enable all headers for the virtual service
analytics and view logs to see the cookies sent by a client.

See Overview of Server Persistence for descriptions of other persistence methods and options.

Cookie Format
The following is an example of an HTTP session-persistence cookie created by NSX Advanced
Load Balancer.

Set-Cookie: JKQBPMSG=026cc2fffb-b95b-41-dxgObfTEe_IrnYmysot-VOVY1_EEW55HqmENnvC;
path=/

The cookie payload contains the back-end server IP address and port.

The payload is encrypted with AES-256. When a client makes a subsequent HTTP request, it
includes the cookie, which the SE uses to ensure that the client’s request is directed to the same
server.

Configuring Cookie Persistence


Starting with NSX Advanced Load Balancer version 21.1.3, the field is_persistent_cookie is
introduced. When set to True, it makes the inserted HTTP cookie a persistent cookie. By default
this field is set to False, which means the cookie is a session cookie.
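
As an illustration, a cookie persistence profile that issues a persistent (non-session) cookie might
be created from the CLI along the following lines. This is a sketch; the profile name, cookie name,
and 720-minute timeout are placeholders, and the field names should be verified against your
release:

[admin:cntrlr]: > configure applicationpersistenceprofile cookie-persist
[admin:cntrlr]: applicationpersistenceprofile> persistence_type persistence_type_http_cookie
[admin:cntrlr]: applicationpersistenceprofile> http_cookie_persistence_profile
[admin:cntrlr]: applicationpersistenceprofile:http_cookie_persistence_profile> cookie_name MYAPPCOOKIE
[admin:cntrlr]: applicationpersistenceprofile:http_cookie_persistence_profile> is_persistent_cookie
[admin:cntrlr]: applicationpersistenceprofile:http_cookie_persistence_profile> timeout 720
[admin:cntrlr]: applicationpersistenceprofile:http_cookie_persistence_profile> save
[admin:cntrlr]: applicationpersistenceprofile> save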

Cookie Persistence Timeout


Persistence profiles allow configuration of a persistence timeout. The persistence timeout sets the
maximum amount of time a persistence cookie is valid.

The persistence timeout applies to persistence cookies that are created by NSX Advanced Load
Balancer for individual client sessions with virtual services that use the persistence profile.

Generally, the client or browser has the responsibility to clear a persistent session cookie, after
the session associated with the cookie is terminated, or when the browser is closed. Setting a
persistence timeout takes care of cases where the client or browser does not clear the session
cookies.

If the persistence timeout is set, the maximum lifetime of any session cookie that is created based
on the profile is set to the timeout. In this case, the cookie is valid for a maximum of the configured
timeout, beginning when the NSX Advanced Load Balancer creates the cookie.


For example, if the persistence timeout is set to 720 minutes, a cookie created based on the profile
is valid for a maximum of 12 hours, from the cookie creation time. After the persistence timeout
expires, the cookie expires and is no longer valid.

By default there is no timeout. The cookie sent is a session cookie, which is cleared by the client
after the session ends.

Starting with NSX Advanced Load Balancer version 21.1.3, the timeout field in a
HttpCookiePersistenceProfile is translated to max-age. The max-age attribute represents the
number of seconds after which the cookie expires. If the value of max-age is zero or less (a
negative number), the cookie expires instantly.

Note
n If the flag is_persistent_cookie is disabled, the timeout behavior remains unchanged (the
cookie expires according to the non-zero value of the timeout).

n If the flag is enabled and the value of timeout is zero, the cookie expires immediately, as the
max-age is set to zero.

With the cookie persistence timeout configured, the cookie is sent in the form Set-Cookie:
<cookie-name>=<cookie-value>; Max-Age=<number>.

Example:

Set-Cookie: JKQBPMSG=026cc2fffb-b95b-41-dxgObfTEe_IrnYmysot-VOVY1_EEW55HqmENnvC;
path=/; Max-Age=3600

Persistence Mirroring
Since clients maintain the cookie and present it when visiting the site, the NSX Advanced Load
Balancer does not need to store the persistence information or mirror the persistence mappings
to other SEs. This allows for greater scale with minimal effort.

Persistence Duration
HTTP cookie persistence leverages a session-based cookie, which is valid as long as the client
maintains an HTTP session with the NSX Advanced Load Balancer. If the client closes a browser,
the cookie is deleted and the persistence is terminated.

NSX Advanced Load Balancer UI Configuration Options


To enable cookie persistence using the UI, navigate to Templates > Profiles > Persistence.

The following table describes the fields needed to configure a persistence profile in the
persistence profile editor:


Field Name Description

Name Unique name for the persistence profile.

Select New Server When Persistent Server Down Action to be taken when a server is marked down, such as
by a health monitor or when it has reached a connection
limit. Indicates whether existing persisted users continue
to be sent to the server, or load balanced to a new server.
Immediate: The NSX Advanced Load Balancer
immediately selects a new server to replace the one
marked down and switch the persistence entry to the new
server.
Never: No replacement server will be selected. Persistent
entries will be required to expire normally based upon the
persistence type.

Description Optional, custom description for the profile.

Type HTTP Cookie. Changing the type will change the profile to
another persistence method.

HTTP Cookie Name This field comes up blank. By populating this optional
field, the cookie will be inserted with the user-chosen
custom name. If it is not populated, the NSX Advanced
Load Balancer auto-generates a random eight-character
alphabetic name.

Is Persistence Cookie Select to make the cookie a persistent cookie. If this option is not enabled,
the cookie is a session cookie.

Persistence Timeout The maximum lifetime of any session cookie. The allowed
range is 1-14400 minutes. No value or zero indicates no
timeout.

Always Send By default, a persistence cookie is sent once at the
beginning of a session to the client. Clients then respond
with the cookie for each request. However, some
web applications, such as those incorporating Java or
Javascript, might not include the cookie in their request
if it was not received in the previous response. Enabling
Always Send causes the NSX Advanced Load Balancer to
include the cookie for every response.

Note Starting with version 21.1.1, the NSX Advanced Load Balancer supports setting an HTTP-
Only flag for the cookie set by it. Setting this attribute helps to prevent the third-party scripts
from accessing this cookie if supported by the browser. This feature will activate for any HTTP or
terminated HTTPS virtual service.

When you set a cookie with the HTTP-Only flag, it informs the browser that this special cookie
should only be accessed by the server. Any attempt to access the cookie from a client-side script
is strictly forbidden.

For more details on enabling the HTTP-Only attribute, see the SSL Everywhere guide.


App Cookie Persistence


The app cookie mode of persistence can be applied to any virtual service with an attached HTTP
application profile. With this persistence method, the NSX Advanced Load Balancer does not
insert its own cookie into HTTP responses for persistence. Instead, it relies on either an existing
cookie that is inserted by the server, or a header. If the specified cookie does not exist in the client
request, the NSX Advanced Load Balancer looks for a URI query of the same name and persists on
that value. This persistence is performed on an ASP or a Java session ID.

When using a session ID, servers do not have control over whether a client will accept a cookie.
For this reason, they can choose to embed the session ID in both a cookie and the URI. Older
browsers, or clients that decline cookies (for example, due to regional privacy preferences), can
skip the cookie and still include the session ID within the query of their requests. For this reason,
the NSX Advanced Load Balancer automatically checks both locations.

Once an identifier has been located in a server response and a client’s request, the NSX Advanced
Load Balancer creates an entry in a local persistence table for future use. The following is an
example of a URI with an embedded session ID:

www.avinetworks.com/index.html?jsessionid=a1b2c3d4e5

Note This method involves using an existing server cookie. For the NSX Advanced Load
Balancer to use its own cookie for persistence, use the HTTP Cookie persistence mode, which
is straightforward and more scalable.

See also Overview of Server Persistence for descriptions of other persistence methods and
options.

Persistence Table
Since app cookie persistence is stored locally on each SE, larger tables consume more memory.
For very large persist tables, consider adding additional memory to the SEs through the SE Group
properties for SE memory and the SE Group > Connection table setting. See also SE Memory
Consumption.

The app cookie persistence table is automatically mirrored to all SEs supporting the virtual service
using a pool configured with this persistence type.

Configuration Options
For details on fields for configuring App Cookie persistence profile, see NSX Advanced Load
Balancer UI Configuration Options under HTTP Cookie Persistence. Ensure that App Cookie is
selected from the Type drop-down menu.
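
A minimal CLI sketch of an app cookie profile keyed on a JSESSIONID cookie is shown below. The
profile name, cookie name, and 20-minute timeout are placeholders, and the prst_hdr_name field
name should be verified for your release:

[admin:cntrlr]: > configure applicationpersistenceprofile app-cookie-persist
[admin:cntrlr]: applicationpersistenceprofile> persistence_type persistence_type_app_cookie
[admin:cntrlr]: applicationpersistenceprofile> app_cookie_persistence_profile
[admin:cntrlr]: applicationpersistenceprofile:app_cookie_persistence_profile> prst_hdr_name JSESSIONID
[admin:cntrlr]: applicationpersistenceprofile:app_cookie_persistence_profile> timeout 20
[admin:cntrlr]: applicationpersistenceprofile:app_cookie_persistence_profile> save
[admin:cntrlr]: applicationpersistenceprofile> save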

HTTP Custom Header Persistence


The custom HTTP header mode of persistence can be applied to a virtual service with an attached
HTTP application profile. This method allows an HTTP header to be manually mapped to a specific
server for persistence.


The SE inspects the value of the defined HTTP header and matches the value against a statically
assigned header field for each server. If there is a match, the client is persisted to the server. The
server’s header field is configured in the Application > Pool > edit server page using the Header
Value field within the server table.

In the example below, when a client sends an HTTP request, the controller checks if a header
exists, based on the name configured in the custom HTTP header persistence profile. If the
header exists in the client’s request, the value is mapped against the servers as shown. If the value
was server2, the controller sends the client to apache2. If the header does not exist, or the value
does not match, the client is free to be load balanced to any server.
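
A minimal CLI sketch of such a mapping is shown below. It assumes a persistence profile that
inspects a header named X-Server and a pool named apache-pool whose second server should
receive clients presenting the value server2; the header name, pool name, and prst_hdr_val field
are placeholders to be verified for your release:

[admin:cntrlr]: > configure applicationpersistenceprofile hdr-persist
[admin:cntrlr]: applicationpersistenceprofile> persistence_type persistence_type_custom_http_header
[admin:cntrlr]: applicationpersistenceprofile> hdr_persistence_profile
[admin:cntrlr]: applicationpersistenceprofile:hdr_persistence_profile> prst_hdr_name X-Server
[admin:cntrlr]: applicationpersistenceprofile:hdr_persistence_profile> save
[admin:cntrlr]: applicationpersistenceprofile> save

[admin:cntrlr]: > configure pool apache-pool
[admin:cntrlr]: pool> servers index 2
[admin:cntrlr]: pool:servers> prst_hdr_val server2
[admin:cntrlr]: pool:servers> save
[admin:cntrlr]: pool> save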

Persist Table
This method is a static mapping of header values to servers, so there is no need to maintain a
persistence table on each SE or to mirror it. All SEs supporting a virtual service whose pool is
configured with this persistence type automatically direct or persist users correctly to the same
servers.

Configuration Options
For details for configuring Custom HTTP Header persistence profile, see NSX Advanced Load
Balancer UI Configuration Options under HTTP Cookie Persistence. Ensure that Custom HTTP
Header is selected from the Type drop-down menu.

Client IP Persistence
This section discusses client IP persistence and its configuration.

The client IP address mode of persistence can be applied to any virtual service, regardless of TCP
or UDP. With this persistence method, NSX Advanced Load Balancer SEs will stick the client to the
same server for the configurable duration of time and store the mapping in a local database.

See also Persistence for descriptions of other persistence methods and options.

Persist Table
Since client IP persistence is stored locally on each SE, larger tables will consume more memory.
For extensive persist tables, consider adding additional memory to the SEs through the SE Group
Properties for SE memory and through the Infrastructure > Service Engine Group > Edit >
Memory Allocation.

See also SE Memory Consumption.

The client IP persistence table is automatically mirrored to all Service Engines supporting the
virtual service and pool configured with this persistence type. To validate if a client IP address is
currently persisted, from the CLI use the following command to view entries in the table.

The following example searches the persistence table for the test-pool, searching for client 10.1.1.1.

show pool test-pool persistence | grep 10.1.1.1


Configuration Options
n Name: A unique name for the persistence profile.

n Description: An optional, custom description for the profile.

n Type: Client IP Address. Changing the type will change the profile to another persistence method.

n Select New Server When Persistent Server Down: If a server is marked DOWN, such as by
a health monitor or when it has reached a connection limit, should existing persisted users
continue to be sent to the server or load balanced to a new server?

n Immediate: NSX Advanced Load Balancer immediately selects a new server to replace the
one marked DOWN and switches the persistence entry to the new server.

n Never: No replacement server will be selected. Persistent entries will be required to expire
normally based upon the persistence type.

n Persistence Timeout: NSX Advanced Load Balancer keeps the persistence value for the
configured time once a client has closed any open connections to the virtual service. Once
the time has expired without the client reconnecting, the entry is expired from the persist
table. If the client reconnects before the timeout has expired, they are persisted to the same
server, and the timeout is canceled. The default timeout value is 5 minutes.

TLS Persistence
This section discusses TLS persistence and its configuration.

The TLS mode of persistence can be applied to any virtual service configured to terminate HTTPS.
With this persistence method, the NSX Advanced Load Balancer embeds the client-to-server
mapping in the TLS ticket ID sent to the client. It is similar to how HTTP cookies behave. The data
is embedded in an encrypted format that a SE can read should a client reconnect to a different SE.

Note This persistence method is often confused with an older, broken method of persistence
called SSL Session ID. While both are used for secure connections, these methods are unrelated.

See also Persistence for descriptions of other persistence methods and options.

Persist Table
The TLS ticket ID is automatically mirrored to all Service Engines supporting the virtual service,
regardless of this persistence mode. If this persistence is enabled, it adds no additional overhead
to the SEs or the automated TLS ticket mirroring.

As with any SSL/TLS concurrency, additional memory is beneficial for increasing the maximum size
of concurrent connections and, therefore, TLS persistence mappings.

Configuration Options
n Name: A unique name for the persistence profile.

n Description: An optional, custom description for the profile.


n Type: TLS. Changing the type will change the profile to another persistence method.

n Select New Server When Persistent Server Down: If a server is marked DOWN, such as by
a health monitor or when it has reached a connection limit, should existing persisted users
continue to be sent to the server or load balanced to a new server?

n Immediate: NSX Advanced Load Balancer will immediately select a new server to replace
the one marked DOWN and switch the persistence entry to the new server.

n Never: No replacement server will be selected. Persistent entries will be required to expire
normally based upon the persistence type.

Compression
The compression option on NSX Advanced Load Balancer enables HTTP Gzip compression for
responses from NSX Advanced Load Balancer to the client.

Compression is an HTTP 1.1 standard for reducing the size of text-based data using the Gzip
algorithm. The typical compression ratio for HTML, Javascript, CSS and similar text content types
is about 75%, meaning that a 20-KB file may be compressed to 5 KB before being sent across the
internet, thus reducing the transmission time by a similar percentage.

Note It is highly recommended to enable compression in conjunction with caching, which


together can dramatically reduce the CPU costs of compressing content. When both compression
and caching are enabled, an object such as the index.html file will need to be compressed only
one time. After an object is compressed, the compressed object is served out of the cache
for subsequent requests. NSX Advanced Load Balancer does not needlessly re-compress the
object for every client request. For clients that do not support compression, NSX Advanced Load
Balancer also caches an uncompressed version of the object.

Configuring Compression
The Compression tab permits one to view or edit the application profile’s compression settings.

To configure compression:

Procedure

1 Navigate to Templates > Application profile

2 Click Create to create a new profile or use the existing application profile as required.

3 Select the Compression tab and enable the feature if it is not enabled.

4 Select the compression mode as desired.

The Auto and Custom modes are described later in this section. The compression percentage
achieved can be viewed using the Client Logs tab of the virtual service. This may require
enabling full client logs on the virtual service’s Analytics tab to log some or all client requests.
The logs will include a field showing the compression percentage with each HTTP response.


To specify compression settings, perform the following:

n Check the Compression checkbox to enable compression. You may only change
compression settings after enabling this feature.

n Select either Auto or Custom, which enables different levels of compression for different
clients. For instance, filters can be created to provide aggressive compression levels for
slow mobile clients while disabling compression for fast clients from the local intranet. Auto
is recommended, to dynamically tune the settings based on clients and available Service
Engine CPU resources.

n Auto mode enables NSX Advanced Load Balancer to determine the optimal settings.

Note By default, the Compression Mode is Auto. The content compression depends on
the client’s RTT, as mentioned below:

n RTT less than 10ms, no compression

n RTT 10 to 200ms, normal compression

n RTT above 200ms, aggressive compression

n Custom mode allows the creation of custom filters that provide more granular control
over who should receive what level of compression.

n Compressible Content Types determine which HTTP Content-Types are eligible to be
compressed. This field points to a String Group which contains the compressible type list.

n Remove Accept Encoding Header removes the Accept-Encoding header, which is sent
by HTTP 1.1 clients to indicate they can accept compressed content. Removing the header
from the request prior to sending the request to the server allows NSX Advanced Load
Balancer to ensure the server will not compress the responses. Only NSX Advanced Load
Balancer will perform compression.
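
The same settings can be sketched from the CLI, assuming an existing HTTP application profile
named my-http-profile; the compression_profile field names shown (compression, type,
remove_accepted_encoding_header) follow the HTTP profile object and should be verified for your
release:

[admin:cntrlr]: > configure applicationprofile my-http-profile
[admin:cntrlr]: applicationprofile> http_profile
[admin:cntrlr]: applicationprofile:http_profile> compression_profile
[admin:cntrlr]: applicationprofile:http_profile:compression_profile> compression
[admin:cntrlr]: applicationprofile:http_profile:compression_profile> type auto_compression
[admin:cntrlr]: applicationprofile:http_profile:compression_profile> remove_accepted_encoding_header
[admin:cntrlr]: applicationprofile:http_profile:compression_profile> save
[admin:cntrlr]: applicationprofile:http_profile> save
[admin:cntrlr]: applicationprofile> save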

Custom Compression
This section covers the steps to create a custom compression filter.

To create a custom compression filter:

Procedure

1 Click Add New Filter to create a custom filter.


2 Enter the following:

n Filter Name: Provide a unique name for the filter (optional).

n Matching Rules: determine if the client (via Client IP or User Agent string) is eligible
to be compressed via the associated Action. If both Client IP and User Agent rules are
populated, then both must be true for the compression action to fire.

n Client IP Address allows you to use an IP Group to specify eligible client IP addresses.
For example, an IP Group called Intranet that contains a list of all internal IP address
ranges. Clearing the Is In button reverses this logic, meaning that any client that is not
coming from an internal IP network will match the filter.

n User-Agent matches the client’s User-Agent string against an eligible list contained
within a String Group. The User-Agent is a header presented by clients indicating
the type of browser or device they may be using. The System-Devices-Mobile Group
contains a list of HTTP User-Agent strings for common mobile browsers.

3 The Action section determines what will happen to clients or requests that meet the Match
criteria, specifically the level of HTTP compression that will be used.

n Aggressive compression uses Gzip level 6, which will compress text content by about
80% while requiring more CPU resources from both NSX Advanced Load Balancer and the
client.

n Normal compression uses Gzip level 1, which will compress text content by about 75%,
which provides a good mix between compression ratio and the CPU resources consumed
by both NSX Advanced Load Balancer and the client.

n No Compression disables compression. For clients coming from very fast, high bandwidth,
and low latency connections, such as within the same data center, compression may
actually slow down the transmission time and consume unnecessary CPU resources.

Caching
NSX Advanced Load Balancer caches HTTP content, thereby enabling faster page load times for
clients and reduced workloads for both servers and NSX Advanced Load Balancer.

When a server sends a response (for example logo.png), NSX Advanced Load Balancer adds the
object to its HTTP cache and serves the cached object to subsequent clients that request the same
object. Caching thus reduces the number of connections and requests sent to the server.



Enabling caching and compression allows NSX Advanced Load Balancer to compress text-based
objects and store both the compressed and original uncompressed versions in the cache.
Subsequent requests from clients that support compression will be served from the cache. NSX
Advanced Load Balancer does not need to compress every object every time, greatly reducing the
compression workload.

Responses Eligible for Caching


When caching is enabled, NSX Advanced Load Balancer caches HTTP objects for the following
types of responses:

n HTTP/HTTPS

n GET, HEAD methods

n 200 status code

NSX Advanced Load Balancer also supports caching objects from servers in HTTPS pools.

Note HTTP/2 responses from the server are cached.

Responses Not Cached


NSX Advanced Load Balancer does not cache HTTP objects for the following types of responses:

n Put / Post / Delete methods

n Request Headers:

n Cache-Control: no-store

n Authorization

n Response Headers:

n Cache-Control: no-cache

n Expires header’s date is already expired

n Warning, Set-Cookie, Vary: *

n Cache-Control: private, no-store

n Both etag and Last-Modified headers do not exist and either:

n GET/HEAD method includes a Query

n No expires/max-age header exists

n Non-200 status codes

Note It is possible for caching to not work with policies or DataScripts present on the virtual
service. Consider disabling caching in the application profile if policies and DataScripts need to be
applied to the virtual service.


Verify Object Served from Cache


To validate that an object is successfully served from the cache, navigate to the logs page of a
virtual service. Apply the filter cache_hit="true". This filters all requests that were successfully
served from the cache. When using logs, ensure that you enable Non-Significant Logs to show
non-error traffic, and ensure the logging engine is capturing the Non-Significant logs for the
duration of the test. For more information, see Virtual Service Logs.

Cache Size

The size of a cache is indirectly determined based on the memory allocation for a Service
Engine handling a virtual service that has caching enabled. This is determined within the SE
Group properties via the connection memory slider. Memory allocated to buffers is used for TCP
buffering (and hence accelerating), HTTP request and response buffering, and also for HTTP
cache.

Cache Configuration Options


HTTP caching is enabled within the Templates > Profiles > HTTP Application Profile. Within the
HTTP profile, navigate to the Caching tab and enable caching by selecting the Enable Caching
check box.

The following parameters are optional:

X-Cache - NSX Advanced Load Balancer adds an HTTP header labeled X-Cache for any response
sent to the client that was served from the cache. This header is informational in nature, and
indicates that the object was served from an intermediary cache.

Age Header - NSX Advanced Load Balancer adds a header to the content served from cache that
indicates to the client the number of seconds that the object has been in an intermediate cache.
For example, if the originating server declared that the object must expire after 10 minutes and it
has been in the NSX Advanced Load Balancer cache for 5 minutes, the client knows that it must
only cache the object locally for 5 more minutes.

Date Header - If a date header was not added by the server, then NSX Advanced Load Balancer
will add a date header to the object served from its HTTP cache. This header indicates to the client
when the object was originally sent by the server to the HTTP cache in NSX Advanced Load
Balancer.

Cacheable Object Size - The minimum and maximum size of an object to be cached, in bytes.
Most objects smaller than 100 bytes are web beacons and must not be cached despite being
image objects. Large objects, such as streamed videos can be cached, though it might not be
appropriate and might saturate the cache size quickly.


Cache Expire Time - An intermediate cache must be able to guarantee that it is not serving
stale content. If the server sends headers indicating how long the content can be cached (such
as cache control), the NSX Advanced Load Balancer uses those values. If the server does not
send expiration timeouts and the NSX Advanced Load Balancer is unable to make a strong
determination of freshness, it stores the object for no longer than the duration of time specified by
the Cache Expire Time.

Heuristic Expire - If a response object from the server does not include the Cache-Control header
but includes an If-Modified-Since header, the NSX Advanced Load Balancer uses this time to
calculate the cache-control expiration, which supersedes the Cache Expire Time setting for this
object.

Cache URI with Query Arguments - This option allows caching of objects whose URI includes a
query argument. Disabling this option prevents caching these objects. When enabled, the request
must match the URI query to be considered a hit. Following are two examples of URIs that include
queries. The first example might be a legitimate use case for caching a generic search, while the
second, a unique request posing a security liability to the cache.

n www.search.com/search.asp?search=caching

n www.foo.com/index.html?loginID=User

Cacheable Mime Types - Statically defines a list of cacheable object types. This can be a String
Group, such as System-Cacheable-Resource-Types, or a custom comma-separated list of Mime
types that the NSX Advanced Load Balancer must cache. If no Mime Types are listed in this field,
the NSX Advanced Load Balancer by default assumes that any object is eligible for caching.

Non-Cacheable Mime Types- Statically define a list of object types that are not cacheable. This
creates an exclusion list that is the opposite of the cacheable list. An object listed in both lists is not
cached.
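
For reference, the equivalent cache settings can be sketched from the CLI, assuming an existing
HTTP application profile named my-http-profile; the cache_config field names and the example
values (100-byte minimum object size, 4-MB maximum, 600-second expire time) are placeholders
to verify for your release:

[admin:cntrlr]: > configure applicationprofile my-http-profile
[admin:cntrlr]: applicationprofile> http_profile
[admin:cntrlr]: applicationprofile:http_profile> cache_config
[admin:cntrlr]: applicationprofile:http_profile:cache_config> enabled
[admin:cntrlr]: applicationprofile:http_profile:cache_config> min_object_size 100
[admin:cntrlr]: applicationprofile:http_profile:cache_config> max_object_size 4194304
[admin:cntrlr]: applicationprofile:http_profile:cache_config> default_expire 600
[admin:cntrlr]: applicationprofile:http_profile:cache_config> save
[admin:cntrlr]: applicationprofile:http_profile> save
[admin:cntrlr]: applicationprofile> save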

Purge an Object from HTTP Cache


Often a single object or page may become stale, such as when a website is updated. Rather than
invalidate or expire all objects from NSX Advanced Load Balancer’s HTTP content cache, only the
impacted items should be invalidated. When the virtual service and pool are running on redundant
or scaled-out SEs, purging the object from the cache will be performed on all applicable SEs for
the pool.

The following commands show how to perform this action from the CLI.

Procedure

1 Check to see if the desired object exists within the cache. The truncated example below
returns the stats from the object found in the cache.

: > show pool prod-l7-pool httpcache filter resource_name analytics.js

--------------------------------------------------------------------------------
URI: /path1/analytics.js
ctype: text/javascript


raw_key: pool-0-4]avinetworks.com:/path1/analytics.js
key: e6ce7ac2ab8668a8acc9f2d505281412
key_extn:
data_size: 146398 meta_size: 172 hdr_size: 414
body_size: 145984
date_time: 1449185388 last_mod_time: -1 etag:
"-725089702"
(Thu Dec 3 23:29:48 2015) (Wed Dec 31 23:59:59
1969)
in_time: 1449187395 exp_age: 120 init_age: 2007
last_used:
(Fri Dec 4 00:03:15 2015) (Fri Dec 4 00:05:15
2015)

--------------------------------------------------------------------------------

2 To clear the object from cache:

: > clear pool prod-l7-pool httpcache resource_name analytics.js

3 Validate the object has been removed from cache:

: > show pool prod-l7-pool httpcache filter resource_name analytics.js

--------------------------------------------------------------------------------
--------------------------------------------------------------------------------

Use Cases
This section covers the following topics:

n Load Balance API Gateways

n Setting up Microsoft Exchange Server 2016 with NSX Advanced Load Balancer

n Load Balance FTP

n Load Balancing Passive FTP on NSX Advanced Load Balancer

n Load Balancing RADIUS with Cisco ISE

Load Balance API Gateways


In microservice-based architectures, an API gateway enables client and server applications to
easily consume the services they require, by exposing a single endpoint to those applications. The
API gateway acts as a bridge between applications and microservices.


In a typical deployment, desktop apps, mobile apps, and server-side apps send their requests to
the API gateway, which forwards them over internal HTTP and AMQP to the individual
microservices, their databases, and a message bus.

Load Balancing API Gateway


The availability of the API gateway is key to ensuring application availability. API gateway
availability requires a load balancer that can provide flexibility to cope with rapid changes
in microservices, such as versioning and dynamically shifting scale. Also, being exposed to
the external network, the API gateway provides secure transport and authentication, and
different access policies for external and internal clients. The API gateway also requires protection
from DDoS attacks.

Often, API response time directly impacts the end-user experience; therefore, it is critical to also
have a monitoring tool that can provide complete API transaction logs.

NSX Advanced Load Balancer Solution


NSX Advanced Load Balancer provides the following out-of-box benefits when deployed to load
balance API gateways:

n API versioning through easy-to-use Layer 7 policy

n Route API calls to different pools based on version information

n Redirect API calls to the default API version pool

n API quality monitoring with full visibility

n Score API quality based on response time, response code error ratio, and resource
utilization

n Pinpoint API bottlenecks: are they in the client-facing network? Datacenter network? API
gateway itself?

n Full API transaction logs per client IP, device type, and so on:


n Secure API with access control

n End-to-end encryption with client certificate authentication

n Redirect for non-secure APIs to secure APIs

n Block/allow API calls based on custom IP groups

n Per-client rate limiting

n DDoS attack mitigation with detailed attack information (example: Top-N attackers)

Setting up Microsoft Exchange Server 2016 with NSX Advanced Load Balancer
Microsoft Exchange Server 2016 is an e-mail server solution, with a calendar and contact manager,
which supports a variety of clients such as Outlook, web browser, and mobile devices.

NSX Advanced Load Balancer's Exchange Server Solution Benefits


NSX Advanced Load Balancer solution provides the following benefits for Exchange deployment:

Horizontal scale: You do not have to be caught off guard by a sudden traffic surge. NSX
Advanced Load Balancer can adjust the capacity of the load balancer infrastructure dynamically by
scaling out and scaling in its data plane engines called Service Engine (SE).

Analytics and visibility: Analytics and visibility play a key role in troubleshooting issues and
evaluating risks that can affect end-user experience. Unlike other ADC vendors, NSX Advanced
Load Balancer provides an end-to-end timing chart, pinpointing latency distribution across
segments of a client, the ADC, and servers. NSX Advanced Load Balancer understands the
resource utilization of servers, combines it with observed performance, and presents the result
as a health score. By looking at the health score, you can judge the current end-user experience
and risk coming from resource utilization.

SSL offload and management with ease of use: Simply select NSX Advanced Load Balancer's
SSL Everywhere and import a certificate. The rest will be taken care of by NSX Advanced Load
Balancer. You do not have to convert a certificate and configure multiple things to make Exchange
secure. Other significant advantages include SSL compute offload and HTTP visibility. In particular,
SSL compute offload allows the reduction of the number of CAS units and related license cost.
By terminating SSL on NSX Advanced Load Balancer, you can fully enjoy NSX Advanced Load
Balancer's innovative analytics and visibility engine.

Cloud-optimized deployment and high availability: The NSX Advanced Load Balancer Controller
automatically discovers available resources, such as networks and servers in the virtual
infrastructure. This makes IT admins less vulnerable to human errors. In addition, the NSX
Advanced Load Balancer Controller detects when an SE or its hypervisor has a problem; it
automatically looks for the best available hypervisor and launches a new SE to recover. Unlike other
ADC solutions, this approach does not require a redundant device.


Deployment Architecture

[Figure: Exchange Server 2016 deployment architecture. Client protocols (OWA, Outlook, EAS, EAC, PowerShell, IMAP, SMTP, Telephony) reach the load balancer, which distributes HTTP/HTTPS, POP, IMAP, SMTP, and UM SIP+RTP traffic to the Client Access (CAS) services (IIS, HTTP Proxy) and on to the Mailbox (MBX) services (IIS, RpcProxy, Transport, UM, RPS, RPC CA, MDB, MailQ).]

Exchange Server 2016 has two server roles, the Client Access server (CAS) and the Mailbox
server, which make up the CAS array and the DAG (Database Availability Group) respectively for high
availability and increased performance. The CAS provides client protocols, SMTP, and a Unified
Messaging Call Router. The client protocols include HTTP/HTTPS and POP3/IMAP4. The UM Call
Router redirects SIP traffic to a Mailbox server.

Note An external load balancer is required to build a CAS array. Unlike CAS array, DAG does
NOT require an external load balancer. A server can take both roles of the Client Access and the
Mailbox.

CAS provides the following services that require load balancing:

Outlook Anywhere: Enables an Outlook client to connect to the Exchange server. It uses RPC over HTTP(S).

Outlook Web Access: Enables any Web browser to connect to the Exchange server, offering an Outlook-client-like experience in the browser.

Exchange Web Service: Enables client applications to communicate with the Exchange server. EWS provides access to much of the same data that is made available through Microsoft Outlook.

Exchange Administration Center: Provides a web-based management console for the Exchange server.

Exchange Management Shell: Enables a remote admin over HTTP(S) to perform every task that can be performed by the Exchange Administration Center.

ActiveSync: Enables mobile devices, such as iPhone and Android devices, to synchronize mail, calendar, contacts, and tasks with the Exchange server.

AutoDiscover: Enables a client application, such as an ActiveSync app or Outlook, to configure itself with minimal user information. With the AutoDiscover service, a user's e-mail address and password are enough to find out the rest of the configuration information.

Offline Address Book: Enables an Outlook client in Cached Exchange Mode to look up addresses when offline.

POP3/IMAP4: Enables third-party e-mail clients to download e-mail from the Exchange server. SMTP is used for outgoing e-mail.

SMTP: Enables third-party e-mail clients to use the Exchange server as an outgoing e-mail server. POP3/IMAP4 is used for incoming e-mail.

MAPI: Enables client programs to become (e-mail) messaging-enabled, -aware, or -based by calling MAPI subsystem routines that interface with certain messaging servers.

Setting Up Exchange for Load Balancing


The Exchange 2016 System Requirements Microsoft Technet article specifies requirements for
setting up Exchange Server 2016.

n In this case, a Windows 2012 Server (using a 2012 iso) was brought up on a VM with an 8-core
CPU, 8 GB of RAM, and 100 GB of disk capacity. (Ideally, the disk should be partitioned into
four drives for OS, Logs, Exchange Install Directory, and Databases).

n Exchange Server 2016 then needs to be installed on the Windows 2012 server. An
Exchange Server license can be obtained free of cost for 180 days using personal Outlook
credentials. The license can be obtained from the Microsoft Exchange Server 2016 product
page and the Microsoft Exchange Server 2016 download page.

n With an Exchange 2016 server, it's a prerequisite that the server has a static IP.

n Before Exchange 2016 can be installed, the prerequisites must be installed; otherwise, the
setup.exe file for 2016 fails with multiple errors. The prerequisites can be installed using Windows
PowerShell on the 2012 server VM that was created, after which the server needs to be
rebooted. The prerequisites are: .NET 4.5 support (ideally 4.5.2, but it is upgraded to 4.5.2
automatically once setup.exe is run), Desktop Experience, Internet Information Services (IIS),
and Windows Failover Clustering.

n After the reboot, install Unified Communications Managed API (UCMA) 4.0 Runtime: download
page

n In case the server chosen is 2012 RTM, Windows Management Framework 4.0 needs to be
installed as well: download page

n Install the Active Directory Remote Server Administration Tools plugin on the Exchange server
using PowerShell.

n Install Active Directory per the steps outlined here: Setting up an Active Directory Lab (Part 1).


n An important step to note is that the DNS Resolver under System Settings in NSX Advanced
Load Balancer should point to the local DNS server set-up during Active Directory install. In
this case, AD, Exchange 2016, DNS, and IIS were installed on one single server.

n From the link above, make sure that there is a client machine that can be made a part of the
domain being created (avitest.com in this case) and that the user created in Active Directory
can log in to it. For test purposes, a Windows 7 test machine (a VM spawned from a Windows 7
ISO) was chosen as the client machine; it was joined to the avitest.com domain, and the test
user's credentials configured in AD were used to log in from the client machine.

n Once the client machine is a part of the domain, switch to the 2012 server PowerShell prompt
where the 2016 setup file resides and then configure Active Directory to receive Exchange
2016. The Exchange Schema version should be 15317. Verify this using ADSI Edit.

n The setup.exe for 2016 can now be executed; set it up for the Mailbox role.

n Once set up, ECP can be browsed using https://servername/ecp (in our case the server name
is lab-dc01).

n Since this is a lab-only environment, the split-DNS namespace configuration for separate
external and internal access is skipped. In this case, the internal and external hostname was
kept the same, lab-dc01.avitest.com, for all the Exchange services. (The same needs to be done
from the ECP login as done above.)

n MAPI and auto-discover services cannot be configured through ECP in the browser and need
to be configured via Exchange Management Shell.

n Log in to the Exchange Admin Center and create a self-signed certificate for the server. Export
it to the desktop, as it will be used for importing into the virtual service that we create.

n The self-signed certificate needs to be assigned to the IIS service.

n Create two mailbox users using EAC so that emails can be sent from two accounts.

n An Exchange client could be on Outlook 2016 or Outlook 2013. For tests, we used the OWA
access through a normal Chrome/Firefox browser.

n To enable SSL offload on Exchange 2016, make changes to each Exchange service as
described in the Configuring SSL offloading in Exchange 2013 Microsoft TechNet article.

n To set up a secondary Exchange Server, follow the steps above. We don’t need to go ahead
with an AD installation but have to make sure that the secondary Exchange Server is part of
the same domain and that a new forest domain is NOT created. We just need the existing
domain that was created.


Load-Balancing Policies

[Figure: Load-balancing layout. The HTTPS virtual service (VS-HTTPS) content-switches to Pool-OA, Pool-OWA, Pool-EWS, Pool-EAC, Pool-EMS, Pool-AS, Pool-AD, and Pool-OAB, while VS-IMAP4, VS-POP3, and VS-SMTP front Pool-IMAP4, Pool-POP3, and Pool-SMTP. All pools point to the CAS/MBX servers behind the Avi SE.]

NSX Advanced Load Balancer supports the deployment of an Exchange solution in three different
ways:

1 One virtual service (VS) and one pool: This is the quickest way to deploy the Exchange service
and requires only one virtual IP address. However, individual health monitoring for different
services is not possible. If you deploy Exchange 2016, you have to choose one persistence
method across all services; this may result in suboptimal operational results because different
Exchange 2016 services require different persistence methods for the best result. The statistics
and analytics information from the NSX Advanced Load Balancer system will be an aggregate
of all services.

2 One virtual service and multiple pools: This requires configuring the Layer 7 policy on
NSX Advanced Load Balancer, to forward an HTTP message based on the host header to
a corresponding pool. This deployment requires only one virtual IP address and enables
individual health monitoring for different services. In addition, for Exchange 2016, NSX
Advanced Load Balancer supports a different persistence method per pool. This deployment
enables NSX Advanced Load Balancer to provide statistics and analytics information on a
per-pool basis.

3 Multiple virtual services and one pool per virtual service: This requires as many IP addresses
as Exchange services to load balance. Each virtual service will have one pool. This deployment
enables NSX Advanced Load Balancer to provide statistics and analytics information on a
per-VS basis.

Note A virtual service is defined as a virtual IP address and a port number.


In this section, we are going to use the second deployment model. We will create a single virtual
service for all services with multiple pools. Each pool corresponds to an Exchange service. The
table below lists all the Exchange services, the ports to load balance, and the health check methods.
Exchange 2016 provides pre-defined HTML pages for health monitoring by a load balancer.

Table 1-1. Table 1. Exchange 2016 services for load balancing


CAS Service | Ports on VS | Ports on Pools | FQDN for VIP | Path
Outlook Anywhere | 443/HTTPS | 80/HTTP | lab-dc01.avitest.com | /rpc/healthcheck.htm
Outlook Web Access | 443/HTTPS | 80/HTTP | lab-dc01.avitest.com | /OWA/healthcheck.htm
Exchange Web Service | 443/HTTPS | 80/HTTP | lab-dc01.avitest.com | /EWS/healthcheck.htm
Exchange Administration Center | 443/HTTPS | 80/HTTP | lab-dc01.avitest.com | /ECP/healthcheck.htm
Exchange Management Shell | 443/HTTPS | 80/HTTP | lab-dc01.avitest.com | /PowerShell/healthcheck.htm
AutoDiscover | 443/HTTPS | 80/HTTP | lab-dc01.avitest.com | /Autodiscover/healthcheck.htm
ActiveSync | 443/HTTPS | 80/HTTP | lab-dc01.avitest.com | /Microsoft-Server-ActiveSync/healthcheck.htm
Offline Address Book | 443/HTTPS | 80/HTTP | lab-dc01.avitest.com | /OAB/healthcheck.htm
Messaging Application Programming Interface | 443/HTTPS | 80/HTTP | lab-dc01.avitest.com | /MAPI/healthcheck.htm
POP3 | 995/POP3 with SSL | 995/POP3 with SSL | lab-dc01.avitest.com | TCP port 995
IMAP4 | 993/IMAP4 with SSL | 993/IMAP4 with SSL | lab-dc01.avitest.com | TCP port 993
SMTP | 465/SMTP with SSL | 465/SMTP with SSL | lab-dc01.avitest.com | TCP port 465

In table 1, lab-dc01.avitest.com and autodiscovery.avitest.com should point to the virtual IP. All
HTTPS-based services will be terminated by NSX Advanced Load Balancer. The traffic will be
decrypted and sent to the pool, and the response will be encrypted and sent back to the client. For
SMTP/IMAP4/POP3 traffic, the Layer 4 policy will be applied. With the Layer 4 policy, NSX Advanced
Load Balancer terminates the TCP connection but passes the SSL connection through.

NSX Advanced Load Balancer System Configuration


Exchange 2016 SLB configuration involves the following activities:


Health Monitor
1 Navigate to Templates > Profile > Monitor.

2 Create an HTTP health monitor for each Exchange service (8 in number). Use URLs listed
in table 1. Client Request Data needs to be set to GET /<service path>/healthcheck.htm HTTP/1.1. As an
example, for OWA it is set to GET /OWA/healthcheck.htm HTTP/1.1. (A quick manual check of these
URLs is shown after this list.)

3 Create a TCP health monitor each for POP3, IMAP4, and SMTP on specific port numbers as
shown in table 1.
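
The sketch below is such a manual check, hedged on the lab values used in this example (lab-dc01.avitest.com is the lab FQDN, and -k accepts its self-signed certificate); a healthy HTTP service returns 200 OK, and the SSL-wrapped TCP services simply accept a connection.

#!/bin/bash
# Verify the OWA health-check endpoint directly; expect "HTTP/1.1 200 OK" when the service is healthy.
curl -k -I https://lab-dc01.avitest.com/owa/healthcheck.htm
# Simple connect tests for the TCP monitors (POP3S, IMAPS, SMTPS ports from table 1).
nc -zv lab-dc01.avitest.com 995
nc -zv lab-dc01.avitest.com 993
nc -zv lab-dc01.avitest.com 465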


SSL Certificate
1 Navigate to Template > Profile > Certificate.

2 Click Create > Application Certificate. Import the self-signed certificate that was exported
when the CSR was created on the Exchange Server. The certificate exported from the Exchange Server is in
PFX format and needs to be converted to .pem format to be imported into the NSX Advanced
Load Balancer UI. This can be achieved with openssl, as shown below.
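
The sketch below is one way to do it (file names are placeholders); the certificate and key can also be extracted into separate PEM files if they are to be imported individually.

#!/bin/bash
# Convert the exported PFX bundle to a single PEM containing the certificate and the unencrypted key.
openssl pkcs12 -in cert.PFX -out cert.pem -nodes
# Optionally split the bundle into separate certificate and key files.
openssl pkcs12 -in cert.PFX -clcerts -nokeys -out certificate.pem
openssl pkcs12 -in cert.PFX -nocerts -nodes -out key.pem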

Virtual Service
1 Navigate to Application > Virtual Services. Create an L7 Virtual Service for Exchange service
and associate it with other objects, such as an application profile, health monitor, SSL, etc.

2 For HTTPS, use System-Secure-HTTP and System-TCP-Proxy for the Application Profile and
TCP/UDP Profile. Note: When HTTPS or the System-Secure-HTTP profile is used, disable
the "Secure Cookies" and "HTTP-only Cookies" checkboxes in the Security tab for that HTTP
profile.


3 Create three L4 virtual services, one each for POP3, IMAP4, and SMTP. Use System-L4-
Application and System-TCP-Proxy, with the same IP address as the L7 VS (this is optional)
but different service port numbers than the L7 VS.

Note You can create a shared VS using different ports.
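
Once the L4 virtual services are created, a hedged way to confirm that the SSL-wrapped services answer on the shared VIP (the FQDN below is the lab name used throughout this example) is a simple TLS connect test per port:

#!/bin/bash
# TLS connect tests for the pass-through services; each should complete a handshake and print the server banner.
openssl s_client -connect lab-dc01.avitest.com:995 -quiet </dev/null   # POP3 over SSL
openssl s_client -connect lab-dc01.avitest.com:993 -quiet </dev/null   # IMAP4 over SSL
openssl s_client -connect lab-dc01.avitest.com:465 -quiet </dev/null   # SMTP over SSL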

Pool
n This can be accessed separately or from the Virtual Services configuration wizard. The pool
is a construct that includes servers, load balancing method, persistence method, and health
monitor. Add servers across which load is to be balanced and choose Least-Connections for
the load balancing method. Below is an example of a pool created for the Outlook Web Access
(OWA) service.

n The Active health monitor is chosen as the one created above. In this case, it’s the OWA health
monitor which is chosen.


n The server IP address is the IP of the Exchange server which resolves to lab-dc01.avitest.com.


n Create 12 pools with names based on table 2.

HTTP Policy
1 This can be added after creating a virtual service or from the Virtual Service configuration
wizard.

2 Create an HTTP policy that includes one HTTP request rule for each Exchange service to load
balance.

3 To create the HTTP policy, follow these steps.

4 Navigate to Application > Virtual Services. Click the virtual services edit icon. This will pop up
in the Edit Virtual Service menu.

5 Navigate to Policy > HTTP Request.

6 Click Add HTTP Request Rule.


7 Enter a rule name, e.g., rule-pool-oa.

8 Select Path and Begins With for Matching Rules. Then, enter /rpc.

9 Select Content Switch and Pool for Action. Then, select a corresponding pool, e.g., pool-oa.

10 Click Save Rule.

Below we can see an example of creating the same for an L7 virtual service for OWA.

Below we see all HTTP-based policies created for the L7 virtual service.


n Repeat the steps for each Exchange pool. Refer to table 2 for URLs and pools.

Table 1-2. Table 2. Pools for Exchange 2016 services


CAS Service | Pool Name | Ports on Pools | Path
Outlook Anywhere | pool-oa | 80/HTTP | /rpc/
Outlook Web Access | pool-owa | 80/HTTP | /owa/
Exchange Web Service | pool-ews | 80/HTTP | /ews/
Exchange Administration Center | pool-eac | 80/HTTP | /ecp/
Exchange Management Shell | pool-ems | 80/HTTP | /powershell/
AutoDiscover | pool-ad | 80/HTTP | /autodiscover/
ActiveSync | pool-as | 80/HTTP | /microsoft-server-activesync/
Offline Address Book | pool-oab | 80/HTTP | /oab/
Messaging Application Programming Interface | pool-mapi | 80/HTTP | /mapi/
POP3 | pool-pop3 | 995/POP3 with SSL | -
IMAP4 | pool-imap4 | 993/IMAP4 with SSL | -
SMTP | pool-smtp | 465/SMTP with SSL | -
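
After the rules are in place, a rough spot-check (a hedged sketch; the FQDN resolves to the VIP in this example and -k accepts the self-signed certificate) is to request each path from table 2 through the virtual service and confirm an HTTP 200 comes back from the corresponding pool:

#!/bin/bash
# Request each content-switched path through the VIP and print the resulting HTTP status code.
for path in /rpc /owa /ews /ecp /powershell /autodiscover /microsoft-server-activesync /oab /mapi; do
  code=$(curl -k -s -o /dev/null -w '%{http_code}' "https://lab-dc01.avitest.com${path}/healthcheck.htm")
  echo "${path} -> HTTP ${code}"
done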


Load Balancing

[Figure: Name resolution and traffic flow. External clients resolve lab-dc01.avitest.com through external DNS (GoDaddy, etc.) to 71.72.221.140, which is translated to the internal virtual IP 10.15.1.7 on the Avi Service Engine; internal clients resolve lab-dc01.avitest.com through internal DNS directly to 10.15.1.7. HTTPS connections established to the VIP are load balanced to the Exchange servers Ex 16-01 (10.15.1.13) and Ex 16-02 (10.15.1.14).]

n To support load balancing across Exchange Servers on a single VIP, choose the “Round
Robin” load balance option under all pools that have been configured. Below we show this
being done for the owa-pool.


n Add the secondary exchange server IP under all pools. This is seen below for the owa-pool.


Confirming Proper Operation


The L7 service had a default pool pointing to pool-as (ActiveSync). The below screenshot confirms
clients accessed the Exchange virtual service several times during the 15-minute timeframe
depicted in the timeline.

With non-significant logs enabled, one observes a total of 43 log entries, including the
successful ones (return code = 200). The most recent log entry is shown expanded. The other 42,
collapsed into single-line rows, are not shown in the screenshot. The L7 virtual service successfully
content-switched requests to the pool-owa pool as a result of the rule-pool-owa request policy
rule.

The NSX Advanced Load Balancer solution provides additional information about the client from
which the request originated, including the client’s operating system (Android), device type (Moto
G Play), browser (Chrome Mobile), SSL version (TLSv1.2), certificate type (RSA), and so on.


Load Balance FTP


For file transfer protocol (FTP) communication, clients open a TCP-based control channel on port
21. For active FTP, a second data channel is initiated from the server (from port 20) to the client. The
configuration below covers passive FTP, in which the client initiates the data channel via a high
port negotiated with the server.

Passive FTP
NSX Advanced Load Balancer supports passive FTP using the following configuration:

A Note on High Availability

Exactly one SE in an SE group may deliver the FTP service at any given time. Virtual service
scale-out to two or more SEs is not supported with NSX Advanced Load Balancer FTP. Therefore,
legacy active/standby and 1+M elastic HA are supported. Active/active elastic HA is not.

Virtual Service Settings:

Application Profile: L4

TCP/UDP Profile: TCP-proxy


Service Ports: Set to Advanced via the NSX Advanced Load Balancer UI

Port: 21 to 21

Port: 1024 to 65534

Pool Settings:

Load Balance Algorithm: Least Connections

Persistence: Client IP

Health Monitor: TCP

Health Monitor Port: 21

Port Translation: Disable

Active FTP

Support for active FTP is available starting with NSX Advanced Load Balancer release 20.1.6 (see
Load Balancing Active FTP on NSX Advanced Load Balancer). On earlier releases, active FTP is not
supported, and the use of passive FTP is recommended as a workaround, as in the client session below:

> ftp ftp.test.com


Connected to ftp.test.com.
ftp.test.com FTP server ready.
Name (test:user): anonymous
Password required for anonymous.
Password: ******
User anonymous logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> passive
Passive mode on.

Load Balancing Passive FTP on NSX Advanced Load Balancer


This section explains configuring NSX Advanced Load Balancer virtual service to load balance
passive FTP.

In passive FTP, the client sends a PASV command to the server on port 21. The server responds
with the server IP address and a data port (greater than 1023) for the client to connect to. When a
virtual IP on the load balancer is used for passive FTP, the server IP in this response has to be changed
to the virtual IP so that the client connects to the load balancer instead of connecting to the
server directly. A DataScript is used for changing the server IP to the virtual IP in the FTP
payload of the server response.

Configuring NSX Advanced Load Balancer


To configure NSX Advanced Load Balancer for load balancing passive FTP, follow the steps
below:

1 Configuring health monitor for FTP

2 Configuring pool with the required FTP servers


3 Configuring Layer 4 response DataScript for FTP

4 Configuring Layer 4 virtual service with port configuration for the data channel

Configuring Health Monitor


To configure an external health monitor for FTP, on NSX Advanced Load Balancer UI navigate to
Templates > Profiles > Health Monitors and click Create.

n Enter a name for the health monitor.

n Click the dropdown for Type and select External.

n Enter a relevant value in the Send Interval field.

Under External Settings,

n Enter port number 21 in the Health Monitor Port field.

n Paste the below bash script for the FTP health monitor in the Script Code section.

#!/bin/bash
curl -s ftp://$IP/$path --ftp-pasv -u $user:$pass

n Enter the Username, Password, and the Filepath in the Script Variables section.

The file path is the absolute path of the file to be checked by the health monitor. curl opens
an FTP connection to the servers in the pool using the username and password provided and
requests a directory listing of the path provided. curl runs in silent mode (as specified by the
option -s) and returns the directory listing output only if a file exists at the file path, in which
case the health monitor passes. If no file exists at the file path, the health monitor fails. The path
is optional; if not specified, curl retrieves the root directory listing.
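
For example, the Script Variables section could contain assignments like the following sketch (the values are placeholders; the variable names must match the ones referenced in the script above):

user=ftpuser
pass=Secret123
path=monitor/healthcheck.txt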


Configuring Pool
To configure the pool with required FTP servers, on NSX Advanced Load Balancer UI navigate to
Applications > Pools and click Create Pool.

n Enter a name for the pool.

n Enter port number 21 in the Default Server Port field.

n Under Load Balance, select Consistent Hash > Source IP Address.

Consistent Hash with Source IP Address is chosen as the load balancing algorithm to avoid a
different server being selected by each SE if a virtual server is scaled out to multiple Service
Engines.

n Click +Add Active Monitor and from the dropdown list select the health monitor configured in
the previous step - FTP.

n Navigate to the Servers tab and add the relevant servers.

n Navigate to the Advanced tab.

n Under Other Settings, click the checkbox for Disable Port Translation to enable the option.

The FTP data channel will be established on an ephemeral port and this port has to be used to
send the traffic to the server without any modification. Hence, Disable Port Translation has to be
enabled.


Configuring DataScript
To configure the Layer 4 response DataScript, on NSX Advanced Load Balancer UI navigate to
Templates > Scripts > DataScripts and click Create.

Add the below DataScript to the VS Datascript Evt L4 Response Event Script section and click
Save.

-- Handle passive FTP 227 response rewrite (server IP to VIP)
function string.tohex(str)
    return (str:gsub('.', function (c)
        return string.format('%02X', string.byte(c))
    end))
end
-- Do not run DS for data ports
if avi.vs.port() ~= '21' then
    avi.l4.ds_done()
end
-- Read entire payload (assumption that the entire response we are looking for is a single packet)
local payload = avi.l4.read()
local p1, p2 = string.match(payload, '227 Entering Passive Mode %(%d+,%d+,%d+,%d+,(%d+),(%d+)%)%.\r\n')
if p1 ~= nil then
    local vip_ip = string.gsub(avi.vs.ip(), '%.', ',')
    local rewrite = '227 Entering Passive Mode (' .. vip_ip .. ',' .. p1 .. ',' .. p2 .. ').\r\n'
    avi.l4.modify(string.tohex(rewrite))
    rewrite_len = rewrite:len()
    payload_len = payload:len()
    if rewrite_len < payload_len then
        avi.l4.discard(payload_len - rewrite_len, rewrite_len)
    end
end


Configure Virtual Service


To configure a Layer 4 virtual service for FTP on NSX Advanced Load Balancer UI:

1 Navigate to Applications > Virtual Service, click on Create Virtual Service, and select
Advanced Setup.

2 Under Profiles,

a For Application Profile, click the dropdown and select System-L4-Application.

b For TCP/UDP Profile, click the dropdown and select System-TCP-Proxy.

3 Under Service Port, click Switch to Advanced.

4 Under Services, add port 21 and the data port range 1024 to 65534.

Note From a security perspective, it is recommended to identify the specific passive port
range configured on the FTP servers and to configure this port range under the Virtual Service
rather than the full range of high ports.

5 Under Pool, click the dropdown and select the pool configured - FTP.


6 Click Next.

7 In the Policies tab, under DataScripts, click + Add DataScript. From the dropdown, select the
DataScript configured in the previous section - FTP-DataScript.

8 Click Save DataScript.

9 Click Next to navigate to the next two tabs and Save the configuration.

Additional Configuration
The FTP servers could enforce that the control and data connections are sourced from the same
IP. Hence, the Service Engines that load balances the control and data traffic should be the same.
This can be achieved by deploying the Service Engines in an active/standby high availability mode.


For deployment in active/active mode with native Layer 2 scaleout, to ensure the same Service
Engine load balances the traffic to the FTP servers, configure the following on the virtual service
using the CLI:

[admin:10-10-10-1]: > configure virtualservice virtual-service-name


[admin:10-10-10-1]: virtualservice> flow_dist consistent_hash_source_ip_address
[admin:10-10-10-1]: virtualservice> save

Note On using BGP / ECMP scaleout, as in the deployment for FTP load balancing in Azure or
GCP, the flow would reach the Service Engines based on the routing hash done on the upstream
device. Therefore the above CLI configuration is not applicable for BGP / ECMP scaleout.

The virtual service is now ready for load balancing FTP. The FTP server IP for clients would be the
VIP configured on the FTP virtual service.
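
As a hedged end-to-end check (the VIP, account, and path below are placeholders), an FTP client in passive mode should now receive a 227 reply that advertises the VIP rather than a back-end server address:

#!/bin/bash
# Verbose curl prints the FTP control conversation; the 227 reply should contain the VIP octets.
curl -v --ftp-pasv -u ftpuser:Secret123 "ftp://<vip>/" 2>&1 | grep "227 Entering Passive Mode"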

Load Balancing Active FTP on NSX Advanced Load Balancer


This section explains the steps to configure the NSX Advanced Load Balancer to load balance the
active FTP traffic to a pool of servers.

The support for load balancing Active FTP is available starting with NSX Advanced Load Balancer
release 20.1.6. NSX Advanced Load Balancer uses the Layer 4 application virtual service that
listens on the FTP port and the preserve_client_ip option to achieve the Active FTP load
balancing.

Prerequisites
n Knowledge of Active FTP and its configuration.

n Active FTP virtual service requires 2 functionalities.

n Preserve client IP - See Preserve Client IP for the deployment requirements and
configuration options.

n NAT Policy - See Configuring NAT on NSX Advanced Load Balancer Service Engine for
the deployment requirements and configuration options.

The IP routing feature is required for NAT functionality; hence, the legacy (active/standby) SE HA
mode is mandatory.


Topology

NSX Advanced Load Balancer is logically inline between the user’s network and the FTP Server
Network. All traffic to FTP Servers and the return traffic from FTP Servers to users flow to the NSX
Advanced Load Balancer (Service Engines).

In active mode FTP, the client connects from a random port (N > 1023) to the FTP server's
command port, port 21. Then, the client starts listening on port N+1 and sends the FTP PORT
command, specifying port N+1, to the FTP server.

The server will then connect back to the client’s specified data port from its local data port, which
is port 20.

To support the active mode FTP, the following communication channels need to be opened at the
server-side firewall:

n FTP server’s port 21 from anywhere (Client initiates connection)

n FTP server’s port 21 to ports > 1023 (Server responds to client’s control port)

n FTP server’s port 20 to ports > 1023 (Server initiates data connection to client’s data port)

n FTP server’s port 20 from ports > 1023 (Client sends ACKs to server’s data port)


FTP Load Balancing Solution


The options that should be enabled while configuring the load balancing solution for the active
FTP servers are:

n For FTP load balancing, the SE exists between the client and server. FTP virtual service
(Listening on port 21) is configured on the SE, and the FTP servers are configured as the pool
members. Also, the Preserve Client IP Address is enabled on the virtual service application
profile.

n With the Preserve Client IP option enabled in the L4 Application Profile.

n Floating Interface IP is configured that can act as the default gateway for the back-end
server network.

n If the deployment Network has a firewall, configure NAT for the server’s connection with FTP
virtual service IP address.

n In the absence of a firewall in the deployment network, a NAT IP address still needs to be
configured (any address can be used), and active FTP works as expected.

Configuration
Follow the steps mentioned below to configure NSX Advanced Load Balancer for FTP load
balancing:

1 Create FTP virtual service using System L4 Application with FTP port (21) as listening service.

2 Enable Preserve Client IP Address under the application profile.

3 Configure the floating interface IP address under the Network Service, which acts as the
default gateway for the back-end server network.

4 Create a NAT Profile with the following parameters:

a Match Criteria: Server subnet as source IP address match and source port as 20 (for the
active FTP).


b Action: The NAT IP should be the same as the virtual service IP address from step 1. (This is to
prevent firewall problems in front-end deployments.)

5 Attach the above NAT Profile to the Network Service to ensure that server-originated FTP
traffic is NAT'ed properly.

Note The rule has Server Network and the source port 20 included in the match. The source
port rule is necessary to match only FTP traffic, or else the SSH connections to the server from the
client will fail.
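
Once the NAT profile is attached, a hedged client-side check (the VIP and credentials are placeholders) is to force active mode from an FTP client and confirm the listing or transfer completes; with curl, --ftp-port switches the data connection to active mode:

#!/bin/bash
# List the root directory over active FTP; "--ftp-port -" tells curl to advertise its local address in the PORT command.
curl -v --ftp-port - -u ftpuser:Secret123 "ftp://<vip>/"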

Supportability
The following tech-support commands and packet captures are available to debug the problems
regarding the Active FTP.

FTP VS:

n show serviceengine <se> vshash # listening service on VNIC with FTP command port 21.

NAT supportability Commands:

n show serviceengine <activeSE> natpolicystat

n show serviceengine <activeSE> nat-flows

n show serviceengine <activeSE> route-flows

Packet Captures:

n packet captures for virtual service

n NAT+Routing Packet captures for NAT and routing packets

n show networkservice <ns>

For NAT packet captures:

n debug serviceengine <key> flags flag debug_pcap_nat

Load Balancing RADIUS with Cisco ISE


This section explains the steps to configure NSX Advanced Load Balancer to load balance RADIUS
traffic to Cisco Identity Services Engine (ISE). NSX Advanced Load Balancer uses L4 DataScripts to
achieve persistence using various RADIUS attributes and load balance DHCP profiling traffic to the
same server as RADIUS.

Prerequisites
n Knowledge of Cisco ISE and its configuration is required before configuring NSX Advanced
Load Balancer to load balance RADIUS traffic to Cisco ISE.

n An active/standby SE group with IP routing enabled is required to support the preservation of
client IP for the RADIUS virtual service.


Topology

[Figure: RADIUS load-balancing topology. End users connect through a network access device (NAD) / router to the Avi Load Balancer (Service Engine), which distributes traffic to the ISE Policy Service nodes ISE-PSN-1, ISE-PSN-2, and ISE-PSN-3.]

As shown in the topology, NSX Advanced Load Balancer is logically in line between the user’s
network and the ISE Policy Service nodes (PSN). All traffic to ISE PSNs flow via NSX Advanced
Load Balancer load balancers (Service Engines), as well as return traffic from ISE PSNs to users.

Scenario
An NSX Advanced Load Balancer VIP is configured as a RADIUS server on the network access
device (NAD). Once NSX Advanced Load Balancer receives the RADIUS authentication traffic from
the users, it is load balanced to one of the ISE PSNs using configured load balancing algorithms.
A persistence entry is created using DataScripts which parses the RADIUS requests and creates an
entry based on the configured RADIUS attributes. Any subsequent RADIUS authentication traffic
or DHCP profile traffic from the same client will be sent to the same server using the persistence
entry.

Change of Authorization Source NAT Support

The Cisco-ISE will send a Change of Authorization (CoA) request with the following details:

n The source IP of the individual PSN originating the CoA

n The destination IP of the NAD

n The destination port, UDP 1700 (by default)

The NAD expects the source IP to be that of the configured RADIUS server; in this case, it is the
NSX Advanced Load Balancer VIP.

The NAT policy has been configured on NSX Advanced Load Balancer to NAT the source IP of the
server to the VIP if the destination port of the packet is UDP 1700.


Configuration
Follow the below-mentioned steps to configure NSX Advanced Load Balancer for RADIUS load
balancing:

1 Configure DataScript to parse RADIUS and DHCP packets and persistence using required
fields.

2 Configure the health monitor for RADIUS. The SE IP needs to be configured as NAD on the ISE
with the same credentials on the ISE and NSX Advanced Load Balancer.

3 Configure the virtual service and pool.

4 Attach DataScript to the virtual service.

5 Configure NAT for CoA and attach to required Service Engine group.

Configuring DataScript to Parse RADIUS/DHCP Traffic


The functionality of the DataScript is explained using a sample DataScript. The DataScript can be
modified as per the user's requirements. Refer to the Layer 4 DataScripts for more details on the
DataScript function.

DataScript

The DataScript details are provided in RADIUS-DHCP-HTTPS.

DataScript Logic

RADIUS requests are parsed, and the NAS-IP-ADDRESS, CALLING-STATION-ID, and NAS-PORT-TYPE
attributes are noted. If NAS-PORT-TYPE is 19 (wireless clients), then the aging time for
the entries is set to 3600. For all other client types (wired/virtual), the aging time is 28800. If a
CALLING-STATION-ID is populated in the RADIUS request, then that is used for persistence. If the
request does not contain a CALLING-STATION-ID, NAS-IP-ADDRESS is used for persistence.

DHCP packets are parsed and the host populated client-identifier is noted, if any. Client-identifier
is expected to be the host MAC address. If the client-identifier is populated, then it will match the
persistence entry created for RADIUS using calling-station-id and will send the DHCP packet
to the same PSN as RADIUS. If the client-identifier is not present in the DHCP packet, it will be
forwarded using the configured load balancing algorithms to one of the three ISE PSNs.

The DataScript also creates a persistence entry using framed-ip-address, if present, in RADIUS
accounting packets. Any subsequent HTTPS request from the same client to the VIP will be sent to
the same PSN using the source IP of the packet, by matching the framed-ip-address entry.

Configuring RADIUS Health Monitoring


Navigate to Templates > Profiles > Health Monitors to configure a RADIUS health monitor to
monitor the status of ISE.


Field | Description
Name | Specify the name for the health monitor.
Description | Specify the description for the name given for the health monitor.
Send Interval | Specify the interval frequency in seconds to send health checks to a server.
Receive Timeout | Specify the receive timeout frequency in seconds to receive a valid response from the server within the receive timeout window. This timeout must be less than the send interval.
Type | Select Type as 'RADIUS' from the drop-down list.
Successful Checks | Specify the number of continuous successful health checks before the server is marked up.
Failed Checks | Specify the number of continuously failed health checks before the server is marked down.
Is Federated | This field describes the object's replication scope. Check this box to replicate the object across the federation. If this field is unchecked, then the object is only visible within the Controller cluster and its associated Service Engines.

When done specifying the necessary details, click Save.
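
Before relying on the monitor, the RADIUS service can also be exercised by hand. The sketch below assumes the FreeRADIUS client utilities are installed and uses placeholder credentials and shared secret; a working PSN behind the VIP answers with Access-Accept.

#!/bin/bash
# Send an Access-Request to the RADIUS VIP; the last argument is the shared secret, and 0 is the NAS port number.
radtest testuser 'P@ssw0rd' <radius-vip> 0 'sharedsecret'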

Configuring Pool
1 A single pool needs to be configured for all protocols. The pool members will be ISE-PSN. The
default server port should be 1812.


2 Attach the RADIUS health monitor created to the pool.

3 In the Advanced tab of the pool, select Disable Port Translation.


4 Click Save.

Configuring Virtual Service


1 Configure a virtual service to accept all required RADIUS traffic and DHCP traffic. Also, accept
HTTPS traffic and SNMP if required.

Note
a The application profile selected should be System-L4-Application with the Preserve Client
IP option enabled.

b The network profile selected should be System-UDP-Fast-Path.


2 Configure all required ports for RADIUS and DHCP. For DHCP, use System-UDP-Per-Pkt by
overriding the TCP/UDP profile. Use UDP per packet profile as the ISE does not respond
to the DHCP packets. If HTTPS is configured, it should be overridden to use the System-TCP-
Proxy profile.

3 Attach the pool configured earlier and click Save.

Configuring and Attaching DataScript to the Virtual Service


The following are the steps to configure and attach the DataScript to the virtual service:

1 Navigate to Templates > Scripts.

2 Click the Create button to create a new DataScript.


3 Scroll down to the VS Datascript Evt L4 Request Event Script section.

4 The script parses requests from the client towards the server; hence, it is a request event
script.

5 Attach the script to this event.

6 In the Pools section, select the pool configured for RADIUS and DHCP.

7 Save the DataScript.

8 Select required protocol parsers. Select Default-DHCP and Default-Radius in this DataScript.

9 Attach the DataScript to the VS. Navigate to Edit Virtual Service > Policies > DataScripts >
Add DataScript and select the configured DataScript. Click Save DataScript.


Configuring NAT
NAT rules are configured as a policy called nat policy via the NSX Advanced Load Balancer CLI
and are attached to the Service Engine group. NAT rules are per-VRF. NAT rules match criteria
can be from source/dest IP/ranges or source/dest port/ranges.

The action for NAT in the ISE use case is to set the source IP as the virtual service VIP for CoA
packets. The ISE sends the CoA packets to UDP port 1700 (by default), which is used as the match
criterion. The nat_ip is the IP that the source IP of the matched traffic will be translated to. In this
case, it is the NSX Advanced Load Balancer VIP of the RADIUS virtual service.

Refer to Configuring NAT on NSX Advanced Load Balancer Service Engine for more details on
NAT configuration. It is recommended to use a separate Service Engine group for RADIUS load
balancing.

Note
1 NAT will work only if IP routing is enabled on the SE group, hence all the limitations that are
applicable to enable IP routing will apply here. SEs must be in legacy active/standby. Refer to
Default Gateway (IP Routing on NSX Advanced Load Balancer SE) for more details.

2 For RADIUS load balancing with ISE, it is recommended to preserve the client IP, since the
ISE sends CoA to the NAD IP which is obtained from the IP header and not the IP from the
RADIUS header. If the client IP is not preserved, the ISE will see SE as NAD and CoA will fail.
Refer to Preserve Client IP for more details.

3 NAT will work only for UDP traffic as of release 18.2.5. It will not work for any other traffic
(ICMP/TCP).

Forwarding for Non-load Balanced Traffic


Since NSX Advanced Load Balancer SEs are configured with IP routing enabled, any traffic that
does not require load balancing and is destined directly to/from the ISE PSN IPs will be routed by
the SE from/to network hosts.

Health Monitoring
This section describes the details of the health monitors used by NSX Advanced Load Balancer.
Before load balancing a client to a server, NSX Advanced Load Balancer ensures that the server
can accommodate the additional workload and is performing correctly. Health monitors perform this
function either by actively sending a synthetic transaction to a server or by passively monitoring
client experience with the server. Active health monitors are sent periodically and originate from
the Service Engines hosting the virtual service.

The following are the features of the health monitor:

n The health monitors are attached to a pool for a virtual service.

n A pool that is not attached to a virtual service will not send health monitors and is considered
as an inactive configuration.

n A pool can have multiple concurrent active health monitors, such as ping, TCP, and HTTP,
and a passive monitor.

n All active health monitors must be successful for the server to be marked up.

Types of Health Monitors


The following are the two types of health monitors:

n Active Health Monitor

n Passive Health Monitor

Active Health Monitors


Active health monitors send synthetic queries to the servers. You can define send and receive
timeout intervals to determine whether the server response is successful or failed.

Active health monitors originate from the Service Engines hosting the virtual service. Each SE
must be able to send monitors to the servers, which ensures there are no routing or intermediate
networking issues that might prevent access to a server from all the active Service Engines. If one
SE marks a server up and another SE marks a server down, each SE will include or exclude the
server from load balancing according to their local monitor results.

The following are the configurable active health monitors:

n DNS Monitor

n External Monitor

n GSLB Monitor

n HTTP Monitor


n HTTPS Monitor

n Ping Monitor

n RADIUS Monitor

n TCP Monitor

n UDP Monitor

n SIP Monitor

Passive Health Monitor


While active health monitors provide a binary good/bad analysis of server health, passive health
monitors provide a more subtle check by attempting to understand and react to the client-to-
server interaction. Passive health monitors do not send a check to the servers, instead, NSX
Advanced Load Balancer monitors end-user interaction with the servers. The server should quickly
respond with valid responses, such as HTTP 200. If the server is sending back errors, such as TCP
resets or HTTP 5xx errors, then the server is assumed to have errors. Errors are defined by the
analytics profile attached to the virtual service. The analytics profile also defines the threshold for
response time before a server is considered responding slowly.

With active health monitors, NSX Advanced Load Balancer will mark a server down after the
specified number of consecutive failures and will no longer send new connections or requests until
the server correctly passes the periodic active health monitors again.

With passive health monitors, server failures will not cause NSX Advanced Load Balancer to mark
that server as down. Rather, the passive health monitor will reduce the number of connections or
requests sent to the server relative to the other servers in the pool by about 75%. Further failures
may increase this percentage.

Note Best practice is to enable both a passive and an active health monitor to each pool.

Configuring Health Monitor Using NSX Advanced Load Balancer UI


The following are the steps to configure from NSX Advanced Load Balancer UI:

1 Navigate to Templates > Profiles > Health Monitors.

2 Click the edit icon at the top right to edit health monitors.

3 Select the desired HTTP health monitor.

4 Check the Use Exact Request box.


5 Click Save.

Configuring Health Monitor Using NSX Advanced Load Balancer CLI


Login to the NSX Advanced Load Balancer CLI and use configure healthmonitor System-HTTP
command to change the value of the exact-http-request flag.

[admin:10-1-1-1]: > configure healthmonitor System-HTTP


[admin:10-1-1-1]: healthmonitor> http_monitor
[admin:10-1-1-1]: healthmonitor:http_monitor> http_request
[admin:10-1-1-1]: healthmonitor:http_monitor> http_request "HEAD / HTTP/1.0\r\n\r\n"
Overwriting the previously entered value for http_request
[admin:10-1-1-1]: healthmonitor:http_monitor> exact_http_request
Overwriting the previously entered value for exact_http_request
[admin:10-1-1-1]: healthmonitor:http_monitor>
[admin:10-1-1-1]: healthmonitor:http_monitor> save
[admin:10-1-1-1]: healthmonitor> save

Setting up Health Monitor


The following are the functionalities of health monitors:

1 Navigate to Templates > Profiles > Health Monitor.

2 Open Health Monitor tab.

The health monitor tab is displayed as follows:


This tab includes the following functions:

n Search: Click the search icon to search across the list of objects.

n Create: Click the Create button to open the New Health Monitor window.

n Edit: Click the edit icon to open the Edit Health Monitor window.

n Delete: You can delete a profile if it is not currently assigned to a virtual service. An error
message will indicate the virtual service referencing the profile. You can edit the default system
profiles, but cannot delete them.

The table on this tab provides the following information for each health monitor profile:

Field | Description
Name | The system displays the name of the health monitor.
Type | The system displays one of the following types of health monitor:
n DNS: Validates the health of responses from DNS servers.
n External: Uses a custom script to validate the health of a diverse array of applications.
n HTTP: Validates the health of HTTP web servers.
n HTTPS: Validates the health of HTTPS web servers when the connection between NSX Advanced Load Balancer and the server is SSL/TLS encrypted.
n Ping: An ICMP ping monitors any server that responds to pings. This is a lightweight monitor, but it does not validate application health.
n TCP: Validates TCP applications via simple TCP request/response data.
n UDP: Validates UDP applications via simple UDP request/response data.
n SIP: Validates SIP applications via SIP request code and response.
Send Interval | The system displays the frequency at which the health monitor initiates a server check, in seconds.
Receive Timeout | The system displays the maximum amount of time before the server must return a valid response to the health monitor, in seconds.
Successful Checks | The system displays the number of consecutive health checks that must succeed before NSX Advanced Load Balancer marks a down server as being back up.
Failed Checks | The system displays the number of consecutive health checks that must fail before NSX Advanced Load Balancer marks an up server as being down.

Creating New Health Monitor


The health monitors are attached to the pool for the virtual service. A pool that is not attached
to a virtual service will not send health monitors. You can create a new health monitor by clicking
the Create button. The following window is displayed:

Note The New Health Monitor and Edit Health Monitor windows share the same interface.

To create or edit a health monitor specify the following details (applicable to active health monitors
of every type):


Field | Description
Name | Specify a unique name for the health monitor. This is a mandatory field.
Description | Specify free-form text to be associated with the monitor.
Send Interval | Specify how frequently the health monitor initiates an active check of a server, in seconds. The minimum frequency is 1 second, and the maximum is 3600 seconds.
Receive Timeout | Specify the maximum amount of time before the server must return a valid response to the health monitor, in seconds. The minimum value is 1 second, and the maximum is the shorter of either 2400 seconds or the Send Interval value minus 1 second. If the status of a server continually flips between up and down, this may indicate that the value for Receive Timeout is too aggressive for the server.
Type | Select the type of health monitor from the drop-down list. The following options are available: DNS Monitor, External Monitor, HTTP Monitor, HTTPS Monitor, Ping Monitor, RADIUS Monitor, TCP Monitor, UDP Monitor, SIP Monitor.
Successful Checks | Specify the number of consecutive health checks that must succeed before NSX Advanced Load Balancer marks a down server as up. The minimum is 1, and the maximum is 50.
Failed Checks | Specify the number of consecutive health checks that must fail before NSX Advanced Load Balancer marks an up server as down. The minimum is 1, and the maximum is 50.
Is Federated | Check this box to replicate the health monitor across the federation of Controller clusters. If you uncheck this box, the health monitor will be visible within the Controller cluster and its associated SEs.

Note In NSX Advanced Load Balancer, once the Type field is set and the monitor profile is
created, you cannot amend this field.

Health Monitor Types


This section describes the configuration details of the type of health monitors used by NSX
Advanced Load Balancer.


The following are the configurable active health monitors:

n DNS Health Monitor

n External Health Monitor

n GSLB Health Monitor

n HTTP Health Monitor

n HTTPS Health Monitor

n Ping Health Monitor

n RADIUS Health Monitor

n SIP Health Monitor

n TCP Health Monitor

n UDP Health Monitor

n POP3/ POP3S Health Monitor

n FTP/FTPS Health Monitor

Creating or Editing New Health Monitor


You can create or edit a health monitor by navigating to Templates > Profiles > Health Monitor.
You can create a new health monitor by clicking the Create button. The following window is displayed:

For more details on generic field explanations, refer to Health Monitoring.

To edit a health monitor, check the relevant check box and click the edit icon.


DNS Health Monitor


This section covers the specific configuration for the DNS health monitor type. The DNS health
monitor validates the health of DNS servers by sending a UDP DNS request and comparing the
response IP address.

Creating or Editing DNS Health Monitor


You can edit the DNS health monitor by checking the System-DNS box and then clicking the edit
icon.

To create a new DNS health monitor, click the Create button. Select the DNS option from the drop-
down list of the Type field. The following screen is displayed:


You can specify the following details related to DNS request and response settings:

DNS Request Settings

n Request Name — Specify the request name. The DNS monitor will query the DNS server for
the fully qualified name in this field. For instance, www.avinetworks.com.

DNS Response Settings

n Response Matches — Select one of the appropriate response matches. The following are the
options:

n Anything — Any DNS answer from the server will be successful, even an empty answer.

n Any Type — The DNS response must contain at least one non-empty answer.

n Query Type — The response must have at least one answer of which the resource record
type matches the query type.

n Response Code — Select one of the appropriate response code. The following are the options:

n Anything — The monitor ignores the DNS server’s response code, and any potential
errors, hence will not result in a health check failure.

n No Error — The monitor marks the DNS query as failed if any error code is returned by the
server.

n Response String — Specify the IP address. The DNS response must contain this IP address to
be considered successful.

n Record Type — Select the record types used in the health monitor DNS query. The following
are the options:

n A

n AAAA

After specifying the necessary details, click Save.
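
A hedged way to preview what this monitor will see is to query a pool member directly with dig, using the same fully qualified name configured in Request Name (www.avinetworks.com is just the example value; the server IP is a placeholder):

#!/bin/bash
# Query the DNS server under test for an A record and print only the answer section.
dig @<dns-server-ip> www.avinetworks.com A +noall +answer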

External Health Monitor


This section covers the specific configuration for external health monitor type.

The external monitor type allows you to write scripts to provide highly customized and granular
health checks. The scripts can be Linux shell, Python, or Perl, which can be used to execute
wget, netcat, curl, snmpget, mysql-client, or dig. External monitors have constrained access to
resources, such as CPU and memory to ensure the normal functioning of NSX Advanced Load
Balancer Service Engines. As with any custom scripting, thoroughly validate the long-term stability
of the implemented script before pointing it at production servers.

You can view the errors generated from the script in the output by navigating to Operations >
Events log.


NSX Advanced Load Balancer includes three sample scripts via the System-Xternal Perl, Python,
and Shell monitors.

Note NSX Advanced Load Balancer supports IPv6 external health monitors.

While building an external monitor, you need to manually test the successful execution of the
commands. To test a command from an SE, you need to switch to the proper namespace or tenant.
The production external monitor will correctly use the proper tenant.

Creating or Editing External Health Monitor


You can edit any of the external health monitors by checking the required box and then clicking
the edit icon:

n System-Xternal-Perl

n System-Xternal-Python

n System-Xternal-Shell

To create a new External health monitor, click the Create button. Select the External option from the
drop-down list of the Type field. The following screen is displayed:

You can specify the following details related to External settings:

Field | Description
Script Code | Specify the script code. You can either upload the script by clicking the Upload File option or paste the script code by clicking the Paste Text option.
Script Parameters | Specify the optional arguments to feed into the script. These strings are passed in as arguments to the script, such as $1 = server IP, $2 = server port.
Health Monitor Port | Specify the health monitor port. This port is used instead of the port defined for the server in the pool. If the monitor succeeds on this port, the load-balanced traffic will still be sent to the port of the server defined within the pool.
Script Variables | Specify the environment variables to be fed into the script. For instance, a script that authenticates to the server may have a variable set to USER=test.

Configuring General Monitor


n Send Interval: Frequency at which the health monitor initiates a server check in seconds.

n Best Practice: For busy Service Engines, keep the monitoring interval lower and receive
timeout larger since external checks tend to use more system resources than the system
default monitors.

n Receive Timeout: Maximum time before the server must return a valid response to the health
monitor in seconds.

n Successful Checks: Number of consecutive health checks that must succeed before NSX
Advanced Load Balancer marks a down server as being back up.

n Failed Checks: Number of consecutive health checks that must fail before NSX Advanced
Load Balancer marks an up server as being down.

Configuring External Specific


As a best practice, clean up any temporary files created by scripts.

While building an external monitor, you need to manually test the successful execution of the
commands. To test a command from an SE, it may be necessary to switch to the proper
namespace or tenant. The production external monitor will correctly use the proper tenant. To
manually switch tenants when testing a command from the SE CLI, follow the commands in the
following article: Manually Validate Server Health.

n Script Code: Upload the script via copy/paste or uploading the file.

n Script Parameters: Enter any optional arguments to apply. These strings are passed in as
arguments to the script, such as $1 = server IP, $2 = server port.

n Script Variables: Custom environment variables may be fed into the script to allow simplified
re-usability. For instance, a script that authenticates to the server may have a variable set to
USER=test.

n Script Success: If a script exits after writing any data to its output, the check is considered a
success and the server is marked up. If the script produces no data, the monitor marks the server
down.

In the SharePoint monitor example below, the script includes a grep "200 OK". If this is found, this
data is returned and the monitor exits as success. If the grep does not find this string, no data is
returned and the monitor marks the server down.


MySQL Example Script

#!/bin/bash
mysql --host=$IP --user=root --password=s3cret! -e "select 1"

SharePoint Example Script

#!/bin/bash
#curl http://$IP:$PORT/Shared%20Documents/10m.dat -I -L --ntlm -u $USER:$PASS > /run/hmuser/$HM_NAME.out 2>/dev/null
curl http://$IP:$PORT/Shared%20Documents/10m.dat -I -L --ntlm -u $USER:$PASS | grep "200 OK"

postgresql Example Script

Example 1:

In this example, the script makes the NSX Advanced Load Balancer SE query the database. On
getting a successful response, the SE marks the server UP; otherwise, it marks the server DOWN.

#!/bin/bash
#exporting username's password
export PGPASSWORD='password123'
psql -U aviuser -h $IP -p $PORT -d aviuser -c "SELECT * FROM employees"

Example 2:

In this example, the script makes the NSX Advanced Load Balancer SE query the database, parse the
response for the cell at the provided row and column, and match it against the provided string. If
the string matches, the server is marked UP; otherwise, the server is marked DOWN.

#!/bin/bash
#example script for
#string match to cell present at row,column of query response
row=2
column=2
match_string="bob"
#exporting username's password
export PGPASSWORD='password123'
response="$(psql --field-separator=' ' -t --no-align -U aviuser -h $IP -p $PORT -d aviuser -c "SELECT * FROM employees")"
str="$(awk -v r="$row" -v c="$column" 'FNR == r {print $c}' <<< "$response")"
if [ "$str" = "$match_string" ]; then
    echo "Matched"
fi

RADIUS Example Script


The below example performs an Access-Request using PAP authentication against the RADIUS
pool member and checks for an Access-Accept response.

#!/usr/bin/python3
import os
import radius
try:
    r = radius.Radius(os.environ['RAD_SECRET'],
                      os.environ['IP'],
                      port=int(os.environ['PORT']),
                      timeout=int(os.environ['RAD_TIMEOUT']))
    if r.authenticate(os.environ['RAD_USERNAME'], os.environ['RAD_PASSWORD']):
        print('Access Accepted')
except:
    pass

RAD_SECRET, RAD_TIMEOUT, RAD_USERNAME and RAD_PASSWORD can be passed in the health monitor
script variables, for example:

RAD_SECRET=foo123 RAD_USERNAME=avihealth RAD_PASSWORD=bar123 RAD_TIMEOUT=1

Applications like curl have different syntax for IPv4 and IPv6 addresses. External health monitor
scripts should account for this difference. The following are examples:

Using Domain Names

Starting with NSX Advanced Load Balancer 21.1.3, to resolve domain names, DNS Resolution on
Service Engine should be configured.

EXT_HM=exthm.example.com
curl http://$EXT_HM:8123/path/to/resource | grep "200 OK"

Shell Script Example for IPv6 Support

#!/bin/bash
#curl -v $IP:$PORT > /run/hmuser/$HM_NAME.$IP.$PORT.out
if [[ $IP =~ : ]]; then
    curl -v "[$IP]:$PORT"
else
    curl -v "$IP:$PORT"
fi

Perl Script Example for IPv6 Support

#!/usr/bin/perl -w
my $ip = $ARGV[0];
my $port = $ARGV[1];
my $curl_out;
if ($ip =~ /:/) {
    $curl_out = `curl -v "[$ip]":"$port" 2>&1`;
} else {
    $curl_out = `curl -v "$ip":"$port" 2>&1`;
}
if (index($curl_out, "200 OK") != -1) {
    print "Server is up";
}

List of SE Packages
Scripting Languages

The following are the scripting languages:

n Bash (shell script)

n Perl

n Python

Linux Packages (apt)

The following are the Linux packages:

n curl

n snmp

n dnsutils

n libpython2.7

n python-dev

n mysql-client

n nmap

n freetds-dev

n freetds-bin

n ldapsearch

n postgresql-client

Python Packages (pip)

The following are the Python packages:

n pymssql

n cx_Oracle (and related libraries for Oracle Database 12c)

n py-radius

NTP Health Monitor Example using netcat program

nc -zuv pool.ntp.org 123 2>&1 | grep "(ntp) open"


The sample configuration for using a native perl script is as follows:

#!/usr/bin/perl
# ntpdate.pl

# this code will query a ntp server for the local time and display
# it. it is intended to show how to use a NTP server as a time
# source for a simple network connected device.

#
# For better clock management see the official NTP info at:
# http://www.eecis.udel.edu/~ntp/
#

# written by Tim Hogard (thogard@abnormal.com)


# Thu Sep 26 13:35:41 EAST 2002
# this code is in the public domain.
# it can be found here http://www.abnormal.com/~thogard/ntp/

$HOSTNAME=shift;
$HOSTNAME="192.168.1.254" unless $HOSTNAME ; # our NTP server
$PORTNO=123; # NTP is port 123
$MAXLEN=1024; # check our buffers

use Socket;

#we use the system call to open a UDP socket


socket(SOCKET, PF_INET, SOCK_DGRAM, getprotobyname("udp")) or die "socket: $!";

#convert hostname to ipaddress if needed


$ipaddr = inet_aton($HOSTNAME);
$portaddr = sockaddr_in($PORTNO, $ipaddr);

# build a message. Our message is all zeros except for a one in the protocol version field
# $msg in binary is 00 001 000 00000000 .... or in C msg[]={010,0,0,0,0,0,0,0,0,...}
#it should be a total of 48 bytes long
$MSG="\01

Note The ntpdate or ntpq programs are not packaged in the Service Engine, and hence cannot
be used at this point in time.

Upgrade to Python 3.0


Starting with the NSX Advanced Load Balancer release 20.1.1, the NSX Advanced Load Balancer
Controller and Service Engines use Python 3.0.

The external Python health monitors should be converted to Python 3.0 syntax as part of the
upgrade procedure.

Before initiating the upgrade to NSX Advanced Load Balancer release 20.1.1, execute the following
steps:

1 Identify the external Health Monitors using Python.


2 Remove the health monitors, or replace them with a non-Python health monitor.

3 Ensure that the health monitor script is modified to Python 3.0 syntax.

After this, upgrade to NSX Advanced Load Balancer release 20.1.1.

Steps Post Upgrade


After upgrading to NSX Advanced Load Balancer release 20.1.1, execute the following steps:

1 Replace the existing (Python 2.7) health monitor script with the Python 3 script.

2 Re-apply the health monitor to the required pools, and remove the temporary non-Python
health monitor (if configured).

GSLB Health Monitor


GSLB service is the representation of a global application deployed at multiple sites. The GSLB
service configuration defines the FQDN of the application, the backing virtual services in various
sites, and the priority or ratios governing the selection of a particular virtual service at any
given time. The configuration also defines the health-monitoring methods by which unhealthy
components can be identified so that the best alternatives may be selected.

The following are the two categories of GSLB service health monitoring:

n Control-plane

n Data-plane

You can apply one or both on a per-application basis.

Note The health monitor is applicable for GSLB only if the is_federated option is checked in the
health monitor configuration.

For more details on GSLB, refer to the GSLB guide.

HM-IMAP Health Monitor


The IMAP (Internet Message Access Protocol) monitor is used for IMAP services. After issuing
CAPA (capabilities) and user authentication, the monitor uses the LIST command to get the
folders present in the mail box. The IMAP monitor marks the server up on successful transfer
and down in case of failure. When the folder is configured, the monitor fetches the first message
present in the mailbox folder and marks the server up or down, based on the response from the
server.

Configuring IMAP Specific Monitor


The following table lists the input fields needed for configuring IMAP specific monitor:


Field            Description                                                      Optional/Mandatory
Folder           Name of the folder present in the mailbox.                       Optional
Username         Mail client username (present in the general health monitor     Mandatory
                 configuration under authentication).
Password         Mail client password (present in the general health monitor     Mandatory
                 configuration under authentication).
SSL Attributes   Required for the IMAPS (secure IMAP) monitor.                    Mandatory for IMAPS (SSL Profile Attribute)

Note Currently, the IMAP Monitor can be configured only using the CLI.

Configuring Basic IMAP Health Monitor from CLI


The following example lists the mailboxes present by sending the LIST command after CAPA and user
authentication:

Example:

[admin:avi-controller]: > configure healthmonitor example-basic-imap-hm


[admin:avi-controller]: healthmonitor> type health_monitor_imap
[admin:avi-controller]: healthmonitor> authentication
[admin:avi-controller]: healthmonitor:authentication> username user1
[admin:avi-controller]: healthmonitor:authentication> password jhkjgjgk
[admin:avi-controller]: healthmonitor:authentication> save
[admin:avi-controller]: healthmonitor> save

Configuring IMAPS Health Monitor from CLI


The following example configures the IMAPS health monitor in CLI:

Example:

[admin:avi-controller]: > configure healthmonitor example-imaps-hm


[admin:avi-controller]: healthmonitor> type health_monitor_imaps
[admin:avi-controller]: healthmonitor> imaps_monitor
[admin:avi-controller]: healthmonitor:imaps_monitor> folder INBOX
[admin:avi-controller]: healthmonitor:imaps_monitor> ssl_attributes
[admin:avi-controller]: healthmonitor:imaps_monitor:ssl_attributes> ssl_profile_ref System-Standard
[admin:avi-controller]: healthmonitor:imaps_monitor:ssl_attributes> save
[admin:avi-controller]: healthmonitor:imaps_monitor> save
[admin:avi-controller]: healthmonitor> authentication
[admin:avi-controller]: healthmonitor:authentication> username user1
[admin:avi-controller]: healthmonitor:authentication> password kjhkjhjkk
[admin:avi-controller]: healthmonitor:authentication> save
[admin:avi-controller]: healthmonitor> save

The following are the SSL configurations used for the IMAP health monitor:


SSL Profile: Select an existing SSL profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the backend servers.

PKI Profile: Select an existing PKI profile or create a new one, as required. This is used to validate
the SSL certificate presented by the server.

SSL key and certificate: Select an existing SSL Key and Certificate or create a new one, as
required.
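
Before enabling the monitor, you can confirm from an SE shell that the IMAPS endpoint answers. The
following is a minimal sketch only: the server IP and port are assumptions, and it presumes the
openssl client is available on the SE.

# Check for the IMAP greeting ("* OK ...") over TLS; the address and port are examples only
timeout 5 openssl s_client -connect 10.0.0.50:993 -quiet 2>/dev/null | grep -m1 "^\* OK"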

HM - SMTP Health Monitor


The SMTP (Simple Mail Transfer Protocol) health monitor is used to monitor SMTP services. It can
be configured to send mail to multiple recipients.

The SMTP monitor marks the server up on successful transfer and down in case of failure. A basic
SMTP health monitor checks if the server is up or down by sending EHLO, NOOP, and QUIT commands.

Configuring SMTP Specific Monitor


The following table lists the input fields needed for configuring SMTP specific monitor:

Field            Description                                                      Optional/Mandatory
Sender ID        Sender mail ID.                                                  Optional
Recipients ID    Multiple recipients' mail IDs.                                   Optional
Mail data        Mail data that needs to be sent.                                 Optional
Domain Name      Sender mail domain name.                                         Optional
Username         Sender username (present under the general health monitor       Optional
                 configuration under authentication).
Password         Sender password (present under the general health monitor       Optional
                 configuration under authentication).
SSL Attributes   Required for the SMTPS (secure SMTP) monitor.                    Mandatory for SMTPS (SSL Profile Attribute)

Note Currently the SMTP monitor can be configured only using the CLI.

Configuring Basic SMTP Health Monitor from CLI


This example configures a basic SMTP health monitor, which checks whether the server is up by
sending the EHLO, NOOP, and QUIT commands.

[admin:avi-controller]: > configure healthmonitor example-basic-smtp-hm


[admin:avi-controller]: healthmonitor> type health_monitor_smtp
[admin:avi-controller]: healthmonitor> save


Configuring SMTPS Health Monitor from CLI


The SMTPS health monitor checks whether the server is up or down by sending a complete mail over a
secure channel.

[admin:avi-controller]: > configure healthmonitor example-smtps-hm


[admin:avi-controller]: healthmonitor> type health_monitor_smtps
[admin:avi-controller]: healthmonitor> smtps_monitor
[admin:avi-controller]: healthmonitor:smtps_monitor> sender_id xyz
[admin:avi-controller]: healthmonitor:smtps_monitor> recipients_ids user1
[admin:avi-controller]: healthmonitor:smtps_monitor> recipients_ids user2
[admin:avi-controller]: healthmonitor:smtps_monitor> mail_data "HELLO!"
[admin:avi-controller]: healthmonitor:smtps_monitor> domainname example.com
[admin:avi-controller]: healthmonitor:smtps_monitor> ssl_attributes
[admin:avi-controller]: healthmonitor:smtps_monitor:ssl_attributes> ssl_profile_ref System-Standard
[admin:avi-controller]: healthmonitor:smtps_monitor:ssl_attributes> save
[admin:avi-controller]: healthmonitor:smtps_monitor> save
[admin:avi-controller]: healthmonitor> authentication
[admin:avi-controller]: healthmonitor:authentication> username xyz
[admin:avi-controller]: healthmonitor:authentication> password vhvhdlsh
[admin:avi-controller]: healthmonitor:authentication> save
[admin:avi-controller]: healthmonitor> save

The following are the SSL configurations used for SMTPS health monitor:

n SSL Profile: Select an existing SSL Profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the backend servers.

n PKI Profile: Select an existing PKI profile or create a new one, as required. This is used to
validate the SSL certificate presented by the server.

n SSL Key and Certificate: Select an existing SSL Key and Certificate or create a new one, as
required.
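
As a quick manual verification from an SE shell before applying the monitor, you can check the SMTP
greeting banner. This is only a sketch; the server IP and ports are assumptions, and the second
command presumes the openssl client is available on the SE.

# A healthy SMTP server greets with a 220 banner; send QUIT so the session closes cleanly
printf 'QUIT\r\n' | nc -w 3 10.0.0.60 25 | grep "^220"

# SMTPS (implicit TLS, commonly port 465): wrap the same banner check in TLS
timeout 5 openssl s_client -connect 10.0.0.60:465 -quiet 2>/dev/null | grep -m1 "^220"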

HTTP Health Monitor


This section covers the specific configuration for HTTP health monitor type.

The HTTP health monitor may only be applied to a pool whose virtual service has an HTTP
application profile attached.

Creating or Editing HTTP Health Monitor


You can edit the HTTP health monitor by checking the System-HTTP box and then clicking the edit
icon.

To create a new HTTP health monitor, click the Create button and select the HTTP option from the
drop-down list of the Type field. The following screen is displayed:


You can specify the following details related to HTTP settings:

n Health Monitor Port — Specify a port to use for the health check instead of the port defined for
the server in the pool. If the monitor succeeds to this port, the load-balanced traffic will still
be sent to the port of the server defined within the pool.

n Client Request Data — Specify the client request data in the USER INPUT field to send an
HTTP request to the server. The converted data will be displayed in the CONVERTED VALUE
PREVIEW field.

The default GET / HTTP/1.0 may be extended with additional headers or information. For
instance, GET /index.htm HTTP/1.1 Host: www.site.com Connection: Close.

n Use Exact Request — Specify the exact http_request string without any automatic insert of
headers like host header.

The system automatically adds three default headers in addition to any user-specified headers
as follows where hostname is automatically derived from each pool member's configuration:

Header Values

User-Agent avi/1.0\r\n

Host <hostname>\r\n

Accept */*;\r\n\r\n

In some situations, it may be necessary to override these default headers, for instance, to
configure a specific Host header value for all servers.


To allow full control over the exact request that is sent, the exact_http_request (CLI) or Use
Exact Request (GUI) option should be enabled. This option prevents the addition of these default
headers. Ensure that all mandatory and required headers are explicitly configured.

n Server Response Data — Specify the snippet of content in the USER INPUT field from the
server’s HTTP response by copying and pasting text from either the source HTML or the web
page of the server. NSX Advanced Load Balancer inspects raw HTML data and not rendered
web pages. For instance, NSX Advanced Load Balancer does not follow HTTP redirects and
will compare the redirect response with the defined Server Response string, while a browser
will show the redirected page. The Server Response content is matched against the first
2KB of data returned from the server, including both headers and content/body. The Server
Response data can also be used to search for a specific response code, such as 200 OK.
When both Response Code and Server Response Data are populated, both must be true for
the health check to pass.

n Response Code — Select the HTTP response codes to match as successful from the drop-
down list. The list displays the following values:

n 1XX

n 2XX

n 3XX

n 4XX

n 5XX

n ANY

A successful HTTP monitor requires either the Response Code, the Server Response Data, or
both fields to be populated. The Response Code expects the server to return a response code
within the specified range. For a GET request, a server should usually return a 200, 301, or 302.
For a HEAD request, the server will typically return a 304. A response code by itself does not
validate the server’s response content, just the status.

Server Maintenance Mode

You can use a custom server response to mark a server as disabled. During this time, health
checks will continue, and servers will operate the same as if it is manually disabled, which means
existing client flows are allowed to continue, but new flows are sent to other available servers.
Once a server stops responding with the maintenance string it will be brought online, being
marked up or down as it normally would be based on the server response data.

This feature allows an application owner to gracefully bleed connections from a server prior to
taking the server offline without the requirement to log into NSX Advanced Load Balancer to first
place the server in the disabled state.

n Maintenance Response Code — Specify the maintenance response code. If the defined HTTP
response code is seen in the server response, place the server in maintenance mode. Multiple
response codes may be used via comma separation. A successful match results in the server
being marked down.


n Maintenance Server Response Data — Specify the maintenance server response data. If
the defined string is seen in the server response, place the server in maintenance mode. A
successful match results in the server being marked down.
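
To see which response code a server is currently returning, and therefore whether the monitor would
treat it as in maintenance, you can query the health URI by hand from an SE shell. This is a hedged
sketch; the server IP and health URI are assumptions.

# Print only the HTTP status code returned by the health URI
curl -s -o /dev/null -w '%{http_code}\n' --max-time 4 http://10.10.10.3/health/local
# If the printed code matches the configured Maintenance Response Code (for example, 503),
# the monitor places the server in maintenance mode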

Example
The following is the sample HTTP health check send string:

GET /health/local HTTP/1.0


User-Agent: avi/1.0
Host: 10.10.10.3
Accept: */*

The following is the sample server response:

HTTP/1.0 200 OK
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: text/plain
Content-Length: 15
Date: Fri, 20 May 2016 18:23:05 GMT
Connection: close

Health Check Ok

The server response includes both the Response Code (200) and the Server Response Data (Health
Check Ok). Therefore, this server will be marked up.

Notice that NSX Advanced Load Balancer automatically includes additional headers in the send
string, including User-Agent, Host, and Accept to ensure that the server receives a fully formed
request.
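
You can reproduce this health check by hand from an SE shell to confirm that both conditions are
met. The following is a minimal sketch that reuses the values from the example above; the URI,
server IP, and expected string are taken from that example and are otherwise assumptions.

# Fetch headers and body, keeping only the first 2KB (the same window the monitor inspects)
resp=$(curl -s -i --max-time 4 -H 'User-Agent: avi/1.0' -H 'Accept: */*' http://10.10.10.3/health/local | head -c 2048)
# The check passes only if both the response code and the response data match
echo "$resp" | grep -q "200 OK" && echo "$resp" | grep -q "Health Check Ok" && echo "server would be marked up"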

SSL Attributes in HTTPS Health Monitor


The SSL encrypted traffic passes to servers without decrypting in the load balancer (the SE). Since
the traffic is still SSL/HTTPS, you can conduct a relevant health monitor.

Overriding Host Header in Health Monitor

By default, NSX Advanced Load Balancer appends additional HTTP headers (Host, User-Agent
and Accept) into HTTP health monitor requests.

The exact values of these headers are as follows:


Header Values

User-Agent avi/1.0\r\n

Host <hostname>\r\n

Accept */*;\r\n\r\n

For instance, if an NSX Advanced Load Balancer admin (user) adds a Host header in HTTP client
request data field of a health monitor, NSX Advanced Load Balancer will send this additional Host
header together with the existing Host header (Host header inserted by NSX Advanced Load
Balancer).

You can prevent the addition of this extra Host header. The Use Exact Request option in the NSX
Advanced Load Balancer UI, or the exact_http_request flag in the NSX Advanced Load Balancer CLI,
instructs NSX Advanced Load Balancer to send the exact HTTP request string as specified by the NSX
Advanced Load Balancer admin (user), without any automatic insertion of the additional HTTP headers.
This means that the user is responsible for adding the appropriate headers to the HTTP client
request field.

Configuration from NSX Advanced Load Balancer CLI

Log in to the NSX Advanced Load Balancer CLI, and use the configure healthmonitor System-HTTP
command to change the value of the exact_http_request flag.

[admin:10-1-1-1]: > configure healthmonitor System-HTTP


[admin:10-1-1-1]: healthmonitor> http_monitor
[admin:10-1-1-1]: healthmonitor:http_monitor> http_request
[admin:10-1-1-1]: healthmonitor:http_monitor> http_request "HEAD / HTTP/1.0\r\n\r\n"
Overwriting the previously entered value for http_request
[admin:10-1-1-1]: healthmonitor:http_monitor> exact_http_request
Overwriting the previously entered value for exact_http_request
[admin:10-1-1-1]: healthmonitor:http_monitor>
[admin:10-1-1-1]: healthmonitor:http_monitor> save
[admin:10-1-1-1]: healthmonitor> save

Configuration from NSX Advanced Load Balancer UI


1 Navigate to Templates > Profiles > Health Monitors.

2 Choose the desired HTTP health monitor and click edit icon.

3 Select the check box for Use Exact Request.

4 Click Save.

HTTPS Health Monitor


This section covers the specific configuration for HTTPS health monitor type.


The HTTPS monitor type can be used to validate the health of HTTPS encrypted web servers. Use
this monitor when NSX Advanced Load Balancer is either passing SSL encrypted traffic directly
from clients to servers, or NSX Advanced Load Balancer is providing SSL encryption between itself
and the servers.

Creating or Editing HTTPS Health Monitor


You can edit the HTTPS health monitor by checking the System-HTTPS box and then clicking the edit
icon.

To create a new HTTPS health monitor, click the Create button and select the HTTPS option from the
drop-down list of the Type field. The following screen is displayed:

You can specify the following details related to HTTPS settings:

n SSL Attributes — Check this box to specify SSL attributes for HTTPS health monitor. The
system allows SSL encrypted traffic to pass to servers without decrypting in the load balancer
(the SE).

n TLS SNI Server Name — Specify a fully qualified DNS hostname that will be used in the TLS
SNI extension in server connections indicating that SNI is enabled. If you do not specify any
value, the system will inherit the value from the pool.

n SSL Profile — Select the SSL profile from the drop-down list. SSL profile defines ciphers and
SSL versions to be used for health monitor traffic to the back-end servers. The following are
the options in the drop-down list:

n System Standard

n System Standard Portal


n PKI Profile — Select the PKI profile from the drop-down list. PKI profile is used to validate the
SSL certificate presented by a server.

n SSL Key and Certificate — Select SSL key and certification options from the drop-down list.
Service engines will present this SSL certificate to the server. The following are the options in
the drop-down list:

n System Default Certificate

n System Default Certificate EC

n System Default Portal Certificate

n System Default Portal Certificate EC256

n System Default Root CA

n System Default Secure Channel Certificate

For more details on other fields in the HTTPS section, refer to Configuring HTTP Health Monitor
section in this guide.
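
To confirm by hand what an HTTPS backend presents before tying these settings to a monitor, you can
probe it from an SE shell. This is a sketch only; the server IP, port, and SNI name are assumptions,
and the second command presumes the openssl client is available on the SE.

# Probe the server over TLS, ignoring certificate validation, and look for a 200 response
curl -skv --max-time 4 https://10.0.0.40:443/ 2>&1 | grep "200"

# Inspect the certificate the server presents for the assumed SNI name
echo | openssl s_client -connect 10.0.0.40:443 -servername app.example.com 2>/dev/null | openssl x509 -noout -subject -dates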

Ping Health Monitor


This section covers the specific configuration for Ping health monitor type.

To create a new ping health monitor, click the Create button and select the Ping option from the
drop-down list of the Type field. The following screen is displayed:


NSX Advanced Load Balancer Service Engines will send an ICMP ping to the server. This monitor
type is generally very fast and lightweight for both Service Engines and the server. However, it is
not uncommon for ping to drop a packet and fail. Ensure that Failed Checks field is set to 2. This
monitor type does not test the health of the application, so it generally works best when applied in
conjunction with an application-specific monitor for the pool.

Note ICMP rate limiters can prevent Service Engines from aggressively health checking a server via
ping. This may be caused by an intermediate network firewall or by rate limits set up on the server
itself.
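
A quick way to see whether pings are being answered or rate limited is to run a few from the SE
shell; the server IP below is only an example.

# Send three pings with a 2-second per-reply timeout; an occasional single drop is normal
ping -c 3 -W 2 10.0.0.5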

RADIUS Health Monitor


This section covers the specific configuration for the RADIUS health monitor type.

For Remote Authentication Dial-In User Service (RADIUS) applications, you can monitor the server
health using the RADIUS request and response. You can generate the RADIUS requests using the
password, username, and secret. The server status will be marked Up only if the RADIUS response
is Access-Accept or Access-Challenge. Otherwise, the server will be marked Down.

To create a new RADIUS health monitor, click the Create button and select the RADIUS option from the
drop-down list of the Type field. The following screen is displayed:

You can specify the following details related to RADIUS settings:

n Username — Specify the user name. RADIUS monitor will query the RADIUS server with this
username.

n Password — Specify the password. RADIUS monitor will query the RADIUS server with this
password.

n Shared Secret — Specify the shared secret. RADIUS monitor will query the RADIUS server
with this shared secret.


SIP Health Monitor


This section covers the specific configuration for the SIP health monitor type.

For SIP applications, the server health is monitored using the SIP request code and response.
Currently, only the SIP OPTIONS request is supported as the request code. The monitor greps for the
configured response string in the response payload. If a valid response is not received from the
server within the configured timeout, then the server status is marked down.

To create a new SIP health monitor, click the Create button and select the SIP option from the
drop-down list of the Type field. The following screen is displayed:

You can specify the following details related to SIP settings:

n SIP Request Code — Select the SIP request code to be sent to the server from the drop-down
list. By default, a SIP options request will be sent.

n SIP Monitor Transport — Select the SIP monitor transport protocol from the drop-down list, to
be used for the SIP health monitor. The following are the options in the drop-down list:

n UDP

n TCP

The default transport is UDP.

n SIP Response — Match for a keyword in the first 2KB of the server header and body response.
By default, it matches SIP/2.0.

TCP Health Monitor


This section covers the specific configuration for TCP health monitor type.


For any TCP application, this monitor will wait for the TCP connection establishment, send the
request string, and then wait for the server to respond with the expected content. If no client
request and server response are configured, the health check will pass once a TCP connection is
successfully established.

To create a new TCP health monitor, click the Create button and select the TCP option from the
drop-down list of the Type field. The following screen is displayed:

You can specify the following details related to TCP settings:

n Health Monitor Port — Specify a port that should be used for the health check. If the monitor
succeeds to this port, the load-balanced traffic will still be sent to the port of the server
defined within the pool. If you do not specify any value, then the system uses the default port
configured for the server.

n Client Request Data — Specify the request data to send after completing the TCP handshake
in the USER INPUT field. The converted data will be displayed in the CONVERTED VALUE
PREVIEW field.

n Half-Open (Close connection before completion) — If you check this box the monitor sends
a SYN. Upon receipt of an ACK, the server is marked up and the Service Engine responds
with a RST. Since the TCP handshake is never fully completed, the system does not validate
application health. The purpose of this monitor option is for applications that do not gracefully
handle quick termination. If the handshake is not completed, the application is not touched, no
application logs are generated or app resources are wasted for setting up the connection from
the health monitor.


n Configure TCP health monitor to use half-open TCP connections to monitor the health of
backend servers thereby avoiding consumption of a full-fledged server-side connection and
the overhead and logs associated with it. This method is lightweight as it makes use of a
listener in the server's kernel layer to measure the health and a child socket or user thread is
not created on the server-side.

n Server Response Data — Specify the expected response from the server in the USER INPUT
field. NSX Advanced Load Balancer checks to see if the Server Response data is contained
within the first 2KB of data returned from the server. The converted data will be displayed in
the CONVERTED VALUE PREVIEW field.
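
You can mirror the behavior of a TCP monitor with request and response data from an SE shell using
netcat (nc), as in the NTP example earlier in this section. The server address, port, request
string, and expected response below are assumptions for illustration.

# Send the request after the handshake and look for the expected string in the first 2KB of the reply
printf 'PING\r\n' | nc -w 3 10.0.0.10 6379 | head -c 2048 | grep -q '+PONG' && echo "server would be marked up"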

Server Maintenance Mode


Maintenance Server Response Data — If the defined string is seen in the server response, place
the server in maintenance mode. During this time, the system will perform the health checks,
and the servers will operate the same as if manually disabled, which means existing client flows
are allowed to continue, but new flows are sent to other available servers. Once a server stops
responding with the maintenance string, it will be noticed by the subsequent health monitors and
will be brought online, being marked up or down as it normally would do based on the server
response data. Note that a manually disabled server does not receive health checks and is not
automatically re-enabled.

UDP Health Monitor


This section covers the specific configuration for the UDP health monitor type.

You can send a UDP datagram to the server, then match the server’s response against the
expected response data.

The default System-UDP health monitor detects a failure only when an ICMP unreachable message is
received. It keeps the server UP until it receives an ICMP unreachable for the defined UDP port.
Hence, it does not detect the failure in the following cases:

n If the UDP health monitor request gets dropped or blackholed before reaching the server.

n If ICMP unreachable response packets are being dropped.

n If the backend UDP server does not send ICMP unreachable.
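
Because of these limitations, configure a client request and expected server response where the
protocol allows it, so that the check validates an actual reply. For a DNS service, for example,
you can verify by hand from the SE what such a request/response exchange looks like. The server IP
and query name below are assumptions; dig is provided by the dnsutils package on the SE.

# Query the assumed DNS server over UDP and check that a response with status NOERROR comes back
dig @10.0.0.53 example.com A +time=3 +tries=1 | grep -q "status: NOERROR" && echo "server responded"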

To create a new UDP health monitor, click the Create button and select the UDP option from the
drop-down list of the Type field. The following screen is displayed:


For field explanation in the UDP section, refer to Configuring TCP Health Monitor section in this
guide.

POP3/ POP3S Health Monitor


This section covers the specific configuration for the POP3/ POP3S health monitor type.

The POP3 (Post Office Protocol version 3) health monitor is used to monitor POP services. It issues
the LIST command to get the messages present in the mailbox, after executing CAPA (capabilities) and
verifying the user using the username and password. On successful completion of these commands, the
POP3 monitor marks the server UP; otherwise, it marks the server DOWN.

Configuring POP3 Specific Monitor

Field            Description                                                      Optional/Mandatory
Username         Mail client username (present under the general health monitor  Mandatory
                 configuration under authentication).
Password         Mail client password (present under the general health monitor  Mandatory
                 configuration under authentication).
SSL Attributes   Required for the POP3S (secure POP3) monitor.                    Mandatory for POP3S

Note Currently these can be configured only using the CLI.


Configuring POP3 Health Monitor from CLI

[admin:avi-controller]: > configure healthmonitor example-basic-pop3-hm


[admin:avi-controller]: healthmonitor> type health_monitor_pop3
[admin:avi-controller]: healthmonitor> authentication
[admin:avi-controller]: healthmonitor:authentication> username user1
[admin:avi-controller]: healthmonitor:authentication> password gjgksad
[admin:avi-controller]: healthmonitor:authentication> save
[admin:avi-controller]: healthmonitor> save

Configuring POP3S Health Monitor from CLI

[admin:avi-controller]: > configure healthmonitor example-pop3s-hm


[admin:avi-controller]: healthmonitor> type health_monitor_pop3s
[admin:avi-controller]: healthmonitor> pop3s_monitor
[admin:avi-controller]: healthmonitor:pop3s_monitor> ssl_attributes
[admin:avi-controller]: healthmonitor:pop3s_monitor:ssl_attributes> ssl_profile_ref System-Standard
[admin:avi-controller]: healthmonitor:pop3s_monitor:ssl_attributes> save
[admin:avi-controller]: healthmonitor:pop3s_monitor> save
[admin:avi-controller]: healthmonitor> authentication
[admin:avi-controller]: healthmonitor:authentication> username user1
[admin:avi-controller]: healthmonitor:authentication> password njkhasdkj
[admin:avi-controller]: healthmonitor:authentication> save
[admin:avi-controller]: healthmonitor> save

SSL Configurations for POP3S Health Monitor


The following are the SSL configurations used for POP3S health monitor:

n SSL Profile: Select an existing SSL Profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the backend servers.

n PKI Profile: Select an existing PKI profile or create a new one, as required. This will be used
to validate the SSL certificate presented by the server.

n SSL Key and Certificate: Select an existing SSL Key and Certificate or create a new one, as
required.

FTP/FTPS Health Monitor


This section covers the options available to configure the FTP/ FTPS health monitor.

The FTP/FTPS health monitor checks the health of the FTP servers configured as pool members.
A file will be downloaded from the server. On successful download, the server is marked as UP. If
the file transfer fails, the server is marked DOWN.

Configuring the FTP/FTPS Health Monitor


To configure the FTP/ FTPS health monitor define the following fields:


Field             Description
Filename          Enter the filename to be downloaded, including the full path. For example,
                  ftp/testfile.txt.
Mode (Optional)   Select the data transfer mode (active or passive). By default, passive mode is
                  selected.
                  n Port mode (Active mode): The client sends the port on which the data
                    connection can be established, using the PORT <port-num> command on the
                    control connection.
                  n Passive mode: The client sends the PASV command to the server on the control
                    connection. In response, the server sends the IP and port where the data
                    connection can be established.
Username          Enter the username if the FTP server requires authentication.
Password          Enter the password for the user account if the FTP server requires
                  authentication.
SSL Attributes    Enter the SSL attributes in the case of the FTPS health monitor.

Note Currently FTP/FTPS health monitor can be configured only using the CLI.

A sample configuration of the FTP health monitor is as shown below:

[admin:avi-controller]: > configure healthmonitor ftp-hm


[admin:avi-controller]: healthmonitor> type health_monitor_ftp
[admin:avi-controller]: healthmonitor> authentication username aviuser
[admin:avi-controller]: healthmonitor:authentication> password xyz123
[admin:avi-controller]: healthmonitor:authentication> save
[admin:avi-controller]: healthmonitor> ftp_monitor filename ftp/ftptest
[admin:avi-controller]: healthmonitor:ftp_monitor> save
[admin:avi-controller]: healthmonitor> save

Note Ensure that an SSL Profile exists before configuring the FTPS health monitor.

An FTPS health monitor can be configured as shown below:

[admin:avi-controller]: > configure healthmonitor ftps-hm


[admin:avi-controller]: healthmonitor> type health_monitor_ftps
[admin:avi-controller]: healthmonitor> authentication username aviuser
[admin:avi-controller]: healthmonitor:authentication> password xyz123
[admin:avi-controller]: healthmonitor:authentication> save
[admin:avi-controller]: healthmonitor> ftps_monitor filename ftp/ftptest
[admin:avi-controller]: healthmonitor:ftps_monitor> ssl_attributes ssl_profile_ref System-Standard
[admin:avi-controller]: healthmonitor:ftps_monitor:ssl_attributes> save
[admin:avi-controller]: healthmonitor:ftps_monitor> save
[admin:avi-controller]: healthmonitor> save


The following are the SSL configurations that can be used for FTPS health monitor:

n SSL Profile: Select an existing SSL Profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the backend servers.

n PKI Profile: Select an existing PKI profile or create a new one, as required. This will be used
to validate the SSL certificate presented by the server.

n SSL Key and Certificate: Select an existing SSL Key and Certificate or create a new one, as
required.
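
Before applying the monitor, you can reproduce the file download by hand from an SE shell with curl.
The server IP below is an assumption; the credentials and file path reuse the values from the sample
configuration above.

# Plain FTP download; a successful transfer is what the monitor checks for
curl -s -o /dev/null --max-time 10 --user aviuser:xyz123 ftp://10.0.0.30/ftp/ftptest && echo "download succeeded"

# FTPS variant: require TLS on the control connection and skip certificate validation for the test
curl -s -o /dev/null --max-time 10 --ssl-reqd -k --user aviuser:xyz123 ftp://10.0.0.30/ftp/ftptest && echo "FTPS download succeeded"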

LDAP/LDAPS Health Monitor


To monitor the health of LDAP servers, the LDAP health monitor is used. This section covers the
configuration for searching the LDAP servers using the LDAP health monitor. On a successful search,
the server is marked UP; otherwise, it is marked DOWN.

Configuring LDAP/LDAPS Health Monitor


The configuration options for searching LDAP servers are as explained below:

Field            Description                                                          Optional/Mandatory
base_dn          Enter the distinguished name (DN) of an entry. base_dn is the        Mandatory
                 starting point of the search.
Attributes       Use this to define the attributes to be returned on search. To       Optional
                 configure multiple attributes, use commas to separate the
                 attributes (for example, cn,address,email).
Scope            Select the scope of the search from one of the following:            Optional
                 n Base: Search for information only about the base_dn specified
                   inside the directory.
                 n One: Search for information at one level below the base_dn
                   specified inside the directory.
                 n Sub: Search for information at all levels below the base_dn
                   specified inside the directory.
Filter           Filter to search entries within the specified scope.                 Optional
Username         Enter the DN of the user, if the LDAP server requires                Optional
                 authentication (present under the general health monitor
                 configuration under authentication).
Password         Enter the password of the user, if the LDAP server requires          Optional
                 authentication (present under the general health monitor
                 configuration under authentication).
SSL Attributes   Enter the SSL attributes in the case of the LDAPS health monitor.    Mandatory for LDAPS health monitor

Note Currently, LDAP/LDAPS health monitor can be configured only using the CLI.

A sample configuration of the LDAP health monitor is shown below:

[admin:avi-controller]: > configure healthmonitor ldap-hm


[admin:avi-controller]: healthmonitor> type health_monitor_ldap
[admin:avi-controller]: healthmonitor> authentication username cn=aviuser,ou=users,ou=system
[admin:avi-controller]: healthmonitor:authentication> password xyz123
[admin:avi-controller]: healthmonitor:authentication> save
[admin:avi-controller]: healthmonitor> ldap_monitor base_dn ou=system
[admin:avi-controller]: healthmonitor:ldap_monitor> save
[admin:avi-controller]: healthmonitor> save

A sample configuration for LDAPS health monitor is shown below:

[admin:avi-controller]: > configure healthmonitor ldaps-hm


[admin:avi-controller]: healthmonitor> type health_monitor_ldaps
[admin:avi-controller]: healthmonitor> authentication username cn=aviuser,ou=users,ou=system
[admin:avi-controller]: healthmonitor:authentication> password xyz123
[admin:avi-controller]: healthmonitor:authentication> save
[admin:avi-controller]: healthmonitor> ldaps_monitor base_dn ou=system
[admin:avi-controller]: healthmonitor:ldaps_monitor> ssl_attributes ssl_profile_ref System-Standard
[admin:avi-controller]: healthmonitor:ldaps_monitor:ssl_attributes> save
[admin:avi-controller]: healthmonitor:ldaps_monitor> save
[admin:avi-controller]: healthmonitor> save

The following are the SSL configurations that can be used for the LDAPS health monitor:

n SSL Profile - Select an existing SSL Profile or create a new one, as required. This defines the
ciphers and SSL versions to be used for the health monitor traffic to the back end servers.

n PKI profile - Select an existing PKI profile or create a new one, as required. This will be used to
validate the SSL certificate presented by the server.


n SSL key and certificate - Select an existing SSL Key and Certificate or create a new one, as
required.

Note
n When attributes are configured, the SE matches the configured attributes in the server response
data. If a match is not found, it marks the server down.

n To reduce resource consumption, configure a specific base_dn that has a small number of entries,
with base scope, so that the server response data is not large.
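
To verify by hand what the monitor will query, you can run ldapsearch (one of the packages available
on the SE) from the SE shell. The following is a hedged example: the server IP is an assumption, the
bind DN, password, and base_dn reuse the values from the ldap-hm sample above, and the scope and
returned attribute are arbitrary choices for the illustration.

# Simple-bind search against the assumed server, base scope, returning the cn attribute
ldapsearch -x -H ldap://10.0.0.20:389 -D "cn=aviuser,ou=users,ou=system" -w xyz123 \
    -b "ou=system" -s base "(objectClass=*)" cn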

Health Monitor Troubleshooting


This section lists the troubleshooting techniques of a health monitor.

For general monitor definitions see the following links:

n Health Monitoring

n Reasons a Server Can Be Marked Down

n Flapping Servers Up or Down

n Manually Validate Server Health


General Health Monitor Details


The following are the details of the general health monitor:

n Multi-Pool — A server that exists in multiple pools will receive health checks for each pool it
has membership within. If the pools are on the same Service Engine and configured with the
same health monitor, then the system will not perform redundant monitoring.

n Disabled — Health checks are not performed for disabled servers, for servers within a pool that
is not assigned to a virtual service, or for servers in a pool attached to a disabled virtual
service.

n Scaled SEs — When scaling out a virtual service across multiple Service Engines, the servers
will receive active health checks from each SE for the virtual service. If one SE marks a server
as up, it will be included in the load balancing. If a second SE is unable to access the server, it
will mark it down and not send traffic to that server. From the Controller UI, the server health
icon can flip intermittently between red and green (or other colors). The status flipping is due
to the frequency at which SEs report their status to the Controller.

n SNAT IP — If a SNAT IP is configured for a virtual service (see virtual service), the active SE will
send monitors from the SNAT IP address. If a SNAT IP is not configured, the active SE initiates
monitors from its interface IP. The standby SE will always send monitors from its interface IP.

n Standby SE — By default, the standby SE will send health checks. This behavior can be
changed from the CLI for the Service Engine Group of the SE.

n Send Interval — By default, NSX Advanced Load Balancer sends checks based on the
frequency defined by a monitor's Send Interval timer. However, if you add a new health
monitor or a new server to a pool, or if there is a positive monitor response received after a
server that has been marked down for a long time, NSX Advanced Load Balancer will quickly
send additional checks. For instance, if a new server is added to a pool with a monitor set
to query every 20 seconds, and requires 3 consecutive positive responses, the server will not
be marked up for nearly one minute. In this example, when the new server is added to the
pool, NSX Advanced Load Balancer will send the first 3 checks immediately to the server. The
server will respond, potentially marking the server up within one or two seconds. The system
performs the subsequent checks at the interval specified by the Send Interval setting of the
health monitor.

n Port Translation Enabled — If port translation is enabled:

a The server ports targeted by the virtual service have to be defined.

b If active monitoring is needed, but the ports to be monitored are not explicitly defined, the
NSX Advanced Load Balancer infers them from the defined server ports (on a per-server
basis).

n Port Translation Disabled — If port translation is disabled:

a If active monitoring is needed, but the ports to be monitored are not explicitly defined, the
NSX Advanced Load Balancer does not infer them automatically from the defined server
ports.

b You must add a health monitor for each port on the servers needing to be monitored.


Verifying Monitor Results


You can verify the results of the health monitors. NSX Advanced Load Balancer does not include
health monitors while recording logs for client traffic for a virtual service. The following are the
methods for inspecting the results received by active health monitors:

Using GUI
From the GUI, the following are the ways to check the status of a server:

n Mouse over a down (red) server icon.

n Navigate to Pool > Server page, click the Failed Monitor in the health monitor table to expand
the results.

n Check for the events of the virtual server and pool record status changes and reasons.

For more details, refer to Reasons Servers Can Be Marked Down section.

Using CLI and API


You can view the extensive health monitor information from the CLI and API for each server in the
pool. The example below shows an abbreviated view:

show pool [poolname] server hmonstat


+---------------------------------+----------------------------------------+
| Field | Value |
+---------------------------------+----------------------------------------+
| server_hm_stat[1] | |
| server_name | 10.90.15.61:8000 |
| oper_status | |
| state | OPER_UP |
| shm_runtime[1] | |
| health_monitor_name | healthmonitor-1 |
| health_monitor_type | HEALTH_MONITOR_TCP |
| last_transition_timestamp_3 | Tue May 24 20:42:51 2016 ms |
| last_transition_timestamp_2 | Tue May 24 20:42:38 2016 ms |
| last_transition_timestamp_1 | Tue May 24 20:37:10 2016 ms |
| rise_count | 255 |
| fall_count | 0 |
| total_checks | 1414 |
| total_failed_checks | 5 |
| total_count[1] | |
| type | CONNECTION_TIMEOUT |
| count | 5 |
| avg_response_time | 1 |
| recent_response_time | 1 |
| min_response_time | 1 |
| max_response_time | 1999 |
| port | 8000 |
| curr_failed_checks | 1 |
| ip_addr | 10.90.15.61 |
| port | 8000 |
+---------------------------------+----------------------------------------+


Using Packet Capture


By default, NSX Advanced Load Balancer does not include health monitor traffic when performing
packet captures. However, you can change this via the CLI using the following flags:

CLI Description

debug_vs_hm_include Include health monitor packets in the capture

debug_vs_hm_none This default omits health monitor packets from the capture

debug_vs_hm_only Only capture health monitor packets

Refer to Packet Capture for more information.

Using Manual Test


You can manually send a ping, curl, or similar Linux CLI accessed utilities to validate the response
of a server.

Refer to Manually Validate Server Health guide for more details.

Common Monitor Issues


You can review these common issues if the result from a server response is the desired response
and NSX Advanced Load Balancer is still marking the server down.

General Monitor Issues


The following are the generic monitor issues:

n The system inspects the content returned from servers and compares it to the monitor's
Server Response Data as case sensitive.

n Most monitors only inspect up to 2k within the server response, which includes both headers
and content. If the desired result is further within the response, the server will be marked
down.

n Duplicate IP is one of the most common issues causing intermittent failures of health checks.

Passive
The system will trigger the passive monitor in the event of a significant error, which will
automatically generate the logs for the virtual service. When drilling into a server page, the
passive monitor can show less than 100%. You can view the virtual service logs by filtering for the
server in question. Then click the Significance tile from the Log Analytics sidebar.

You can check if failures are occurring and increasing over time using the following CLI:

: > show pool p1 detail | grep suspect


| lb_fail_suspect_state | 0


Ping
Some devices, including servers and firewalls, restrict the frequency of ICMP messages and can
silently discard them. In such cases, you need to lower the frequency of the Send Interval option.

HTTP
You need to send the exact request headers in the send string to the servers. For instance, a
space in a host header can cause issues for IIS, such as Host: Avi Server. The HTTP monitor adds
a few headers to emulate a valid request. To omit these extra headers, you can use a TCP monitor,
which sends exactly the send string defined in the Client Request Data field. If you are using a TCP
monitor, ensure that you add the \r\n characters for the carriage return line feed.

NSX Advanced Load Balancer includes \r\n at the end of each line of the request. HTTP 1.0
requires a second \r\n to be sent after the last line, which includes:

[Health monitor send string]\r\n


User-Agent: avi/1.0\r\n
Host: [Avi inserted server name]\r\n
Accept: */*\r\n\r\n

For HTTP/S, NSX Advanced Load Balancer does not render the results but inspects them literally.
For instance, a server can send a 302 redirect back to NSX Advanced Load Balancer, and that redirect
response does not contain the expected string (such as server is good). A browser, in contrast,
follows the redirect and displays the page with the correct content. The URI encoding of content can
also cause an HTTP/S response check to fail.

External
External health monitors run as the hmuser user, which has lower privileges. You can attach to a
Service Engine, log in as root, and then switch to this account with su - hmuser.

root@test-se2:~# su - hmuser
hmuser@10-10-25-28:~$ pwd
/run/hmuser
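
From that shell, you can re-run the monitor logic by hand with the same variables the Service Engine
would pass in. The script path and variable values below are assumptions for illustration; remember
that the monitor treats any output as success and no output as failure.

export IP=10.0.0.5 PORT=80 USER=test PASS=secret HM_NAME=my-ext-hm
out=$(bash /tmp/my-ext-check.sh)
if [ -n "$out" ]; then echo "script produced output: server would be marked up"; else echo "no output: server would be marked down"; fi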

UDP Health Monitor


UDP health monitors that are configured with no receive-string, rely on ICMP unreachable
messages to detect an error. The absence of an ICMP message results in the server being marked
up. In a deployment with a large number of servers, the number of ICMP messages can be large,
and UDP health monitors can be erroneously marked up.

To overcome the above situation and mark the server down or virtual service down, you can tune
the ICMP rate limit configuration.

If ICMP unreachable messages are dropped, in high scale cases due to ICMP unreachable rate-
limiter, you can confirm the occurrence of this issue, using the following command:

show serviceengine [se-name] flowtablestat | grep icmp_rx_rl

| icmp_rx_rl_cfg_pps    | 100 |
| icmp_rx_rl_confirming | 30  |
| icmp_rx_rl_drops      | 0   |

The following are the commands to configure ICMP rate limit:

[admin:controller]: > configure serviceengineproperties
[admin:controller]: seproperties> se_runtime_properties
[admin:controller]: seproperties:se_runtime_properties> se_rate_limiters
[admin:controller]: seproperties:se_runtime_properties:se_rate_limiters> icmp_rl 100

Parameters to Mark a Virtual Service or Pool Up


You can specify the minimum threshold parameters for a virtual service or a pool to make
it serviceable. This section explains the virtual service and pool configurations to define the
parameters through the CLI.

Virtual Service Configuration


Use min_pools_up to specify the minimum number of pools out of a single pool group in a virtual
service that has to be up to mark the virtual service as UP.

min_pools_up for a virtual service is configured as shown below.

[admin:abc-ctrl]: virtualservice> min_pools_up 3


[admin:abc-ctrl]: virtualservice> save
+------------------------------------+-----------------------------------+
| Field | Value |
+------------------------------------+-----------------------------------+
| uuid |virtualservice-9c5eee94-fd57- |
| |4ef6-b912-fadbb10ae464 |
|name |vs-1 |
|----------------------truncated output----------------------------------|
|min_pools_up |3 |
+------------------------------------+-----------------------------------+

Verify the configuration as shown below.

[admin:abc-test-ctrl]: > show virtualservice vs_1


+------------------------------------+-----------------------------------+
| Field | Value |
+------------------------------------+-----------------------------------+
| uuid |virtualservice-9c5eee94-fd57- |
| |4ef6-b912-fadbb10ae464 |
|name |vs-1 |
|----------------------truncated output----------------------------------|
|use_vip_as_snat |false |
|traffic_enabled |true |
|min_pools_up |3 |
+------------------------------------+-----------------------------------+


Pool Configurations
n Use min_servers_up to specify the minimum number of servers required to be UP for the
pool’s health to be marked as available. If this parameter is not defined, the pool is marked as
available as long as at least one server is UP.

n Use min_health_monitors_up to specify the minimum number of health monitors required


to succeed, to decide whether to mark the corresponding server as UP. If this parameter
is not defined, the server is marked as UP only if all the health monitors are successful.
min_servers_up and min_health_monitors_up are configured in a pool as shown below:

[admin:abc-ctrl]: pool> min_servers_up 3


[admin:abc-ctrl]: pool> min_health_monitors_up
INTEGER Minimum number of health monitors in UP state to mark server UP.
[admin:abc-ctrl]: pool> min_health_monitors_up 1
[admin:abc-ctrl]: pool> save
+------------------------------------+-----------------------------------+
| Field | Value |
+------------------------------------+-----------------------------------+
|uuid |pool-6fb04b70-5547-4232-b7b7- |
| |33e72ee33d64 |
|--------------------------truncated output------------------------------|
|min_servers_up |3 |
|min_health_monitors_up |1 |
+------------------------------------+-----------------------------------+

Verify the configuration as shown below:

[admin:abc-test-ctrl]: > show pool vs_1-pool


+------------------------------------+-----------------------------------+
| Field | Value |
+------------------------------------+-----------------------------------+
|uuid |pool-6fb04b70-5547-4232-b7b7- |
| |33e72ee33d64 |
|--------------------------truncated output------------------------------|
|min_servers_up |3 |
|min_health_monitors_up |1 |
+------------------------------------+-----------------------------------+

For example, suppose servers fail such that fewer than three servers remain UP. This does not meet
the minimum threshold (three servers in the UP state). Therefore, the pool is marked DOWN and is
not available to any virtual service referencing it.

Note If the minimum threshold parameters are not defined, NSX Advanced Load Balancer retains
the default behavior.

Use Cases: Minimum Health Monitors


NSX Advanced Load Balancer designates a server as UP only when all the monitors bound to it are
UP. If one of the health monitors marks it DOWN, NSX Advanced Load Balancer considers the server
as DOWN.


Consider a scenario where multiple services on a back-end server are monitored using separate
monitors, for example, a GET for /foo.html and a GET for /bar.html, and the server should be
marked UP if either service is available.

In a similar use case, to specify the minimum number of health monitors required to succeed
and to decide whether to mark the corresponding server as UP, define the parameter
min_health_monitors_up. If this parameter is not defined, the server is marked as UP only if all
the health monitors are successful.

Minimum Servers
NSX Advanced Load Balancer marks a pool as up when one of the servers present in that pool
is UP. In a scenario where at least two servers are required to be marked as up to mark the
pool as UP, the option min_servers_up can be used to specify the numbers of servers that should
be up to mark the pool as UP. If this parameter is not defined, the pool is marked as available as
long as at least one server is UP.

See also,

n Reasons Why a Server is Marked Down

n Virtual Service Health Monitoring Pool Health

Determining the Server Status


Servers within a pool can have a status of up, down, or disabled (administratively disabled).
Health monitors determine the up or down status of each server in the pool. NSX Advanced Load
Balancer can mark a server down for several reasons.

The reason a server is marked down can be accessed in the following three different ways:

n Down Health Score Icon — Hover the mouse over a server's red status icon in the UI.


n Down Event — Navigate to the events for the server, the pool, and the virtual service. Expand
the event to see the full details. This information can be used to automatically generate an alert
and potentially make further system changes. Refer to alerts overview for more information.

n Server Page — Navigate to Applications > Pools > pool-name > Servers > server-name. This
displays the analytics page for the server.

Note The Passive monitor is a special type. A passive monitor will not mark a server down.
Instead, if a passive monitor detects bad server-to-client responses, the monitor reduces the
percentage of traffic load balanced to that server. Click the plus sign next to the health monitor
to show additional information regarding the server's health status.

Describing the Reasons for a Marked Down Server


The following are the common reasons for a server to be marked down:

n ARP Unresolved — If the Service Engine is unable to resolve the MAC address of the server's
IP address (when in the same layer 2 domain) or is unable to initiate a TCP connection (when
the server is a layer 3 hop away).

n Payload Mismatch — The health monitor expects specific content to be returned in the body
of the response (HTTP or TCP). In the example, an excerpt of the server's response is shown.
Often this type of error occurs when a server's first response is to send a redirect to a client.
The expected content appears in the client browser, but from NSX Advanced Load Balancer's
perspective, the client receives a redirect.

n Response Code Mismatch — HTTP health checks can be configured to expect a specific
response code, such as 2xx. Meanwhile, the server can be sending back a different code, such
as 404.


n Response Timeout with a Threshold Violation — Health monitors wait a timeout period for a
response and every health monitor can be assigned to its threshold and timeout period. If a
valid response is not received within the timeout period, for N consecutive times equal to the
threshold, then the server is marked down.

While NSX Advanced Load Balancer is engineered for easy troubleshooting, some issues require
more advanced tools. You can capture a trace of the conversation between the SE and the server
by navigating to Operations > Traffic Capture.

For more details on traffic capture, refer to Traffic Capture.

You can use tools such as ping and curl when launching them from a client machine toward the
server. However, these tools are not reliable when executed by administrators from SEs, because
of the dual network stacks used for the data plane and management. For instance, a tool such as
ping executed from Linux uses the SE management IP and network, so its results can differ from
the health check that the SE reports through its data NICs and networks. Use ping -I <interface>
to control which interface is used.

Troubleshooting External Health Monitor


This section discusses how to troubleshoot external health monitor issues.

External health monitor on NSX Advanced Load Balancer uses scripts to provide highly
customized and granular health checks. The scripts may be Linux shell, Python, or Perl, which
can be used to execute wget, netcat, curl, snmpget, etc.

Troubleshooting Steps
The directory structure of NSX Advanced Load Balancer is not exposed in the NSX Advanced
Load Balancer UI. This is available only through the admin shell/console access. External health
monitor scripts have limited access, so as to not affect the normal functioning of the NSX
Advanced Load Balancer system. CPU, memory, disk, and other resources are limited for the
external health monitor scripts. Hence, it is recommended to have relaxed timeouts for external
health monitors.
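For example, the following is a minimal sketch of relaxing the timers on an existing monitor (the
monitor name my-external-monitor is hypothetical; keep the receive timeout shorter than the send
interval):

: > configure healthmonitor my-external-monitor
: healthmonitor> receive_timeout 30
: healthmonitor> send_interval 60
: healthmonitor> save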

Using NSX Advanced Load Balancer CLI


When building an external monitor, it is common to manually test the successful execution of the
commands. To execute commands from an SE, it is necessary to switch to the proper namespace
or tenant. The production external monitor will correctly use the proper tenant.

To attach to an NSX Advanced Load Balancer SE using NSX Advanced Load Balancer CLI, refer to
SSH Access for Super User.

For more information on the script parameters, refer to External Health Monitors.

If the external health monitor script writes any output to stdout, this indicates successful
execution of the health monitor. If the script does not produce any stdout output, the check is
treated as a failure.
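The following is a minimal sketch of an external health monitor shell script illustrating this
behavior (it assumes the built-in $IP and $PORT variables described in External Health Monitors):

#!/bin/bash
# Check whether the server port is open. netcat writes its result to stderr,
# so redirect stderr to stdout before filtering with grep.
result=$(netcat -v -n -z -w 3 $IP $PORT 2>&1 | grep "open")
if [ -n "$result" ]; then
    # Any output on stdout marks the server UP.
    echo "server $IP:$PORT is up"
fi
# No stdout output (for example, when the port is closed) is treated as DOWN.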

Troubleshooting Examples:


Check that the output goes to stdout and not stderr.

For example, the following usage fails:

netcat -v -n -z -w 3 $IP $PORT | grep "open" 2>&1 > /dev/null

The netcat command's output is written to stderr. The grep command operates on stdout.
Hence, the output data is available under stderr.

You can confirm this by doing:

root@avi-se-iihyz:/run/hmuser# netcat -v -n -z -w 3 $IP $PORT | grep "open" 2>&1 > /dev/null


The line (UNKNOWN) [10.10.30.34] 80 (http) open still shows up, because it is written to stderr.

Changing the above to the following fixes the issue.

netcat -v -n -z -w 3 $IP $PORT 2>&1 | grep "open"

Using Show Command


The show pool <pool-name> server hmonstat command provides information about the failure
code, the request, and response strings.

Using NSX Advanced Load Balancer UI


Login to NSX Advanced Load Balancer UI and navigate to Applications > Pools, select the desired
pool, and click Events to check health monitor logs.

Using Errors Output from the Script


The return code of the external health monitor script is used to pick the failure reason code. The
valid error codes are:

n EINTR, ETIMEDOUT: Connection Timeout. (Generated by NSX Advanced Load Balancer infra
upon script timeout)

n ECONNREFUSED: Connection Refused

n ECONNRESET: Connection Reset

n EADDRINUSE/EADDRNOTAVAIL: Address not available

n EHOSTDOWN/EHOSTUNREACH: Host unreachable

n ENETDOWN/ENETUNREACH: Network unreachable

n ENOBUFS/ENOMEM: Out of resources (this could be generated by NSX Advanced Load Balancer
infra if resource allocation fails)


All other errors are treated as a generic error.

Note
n The script can write an error to $HM_NAME.$IP.$PORT.out, and this output will be available in
the above command’s output, to aid debugging. This works only when the external health
monitor debugging is enabled.

n To troubleshoot by running the script manually, a superuser can log in to the Service Engine
console with root privileges, switch to the hmuser account (for example, sudo su - hmuser), and
run the script stored in the /run/hmuser directory.

n Although you can modify the script on the Service Engine for troubleshooting, this change
is temporary. Once the Service Engine restarts or you modify the pool/health monitor, the
changes will be lost. The correct way to modify the health monitor configuration is from the
NSX Advanced Load Balancer UI/CLI/API.

Packet Capture
External health monitor packets are not captured using the option available under Operations
> Packet Capture. Use the tcpdump command with filter options from the shell prompt of NSX
Advanced Load Balancer Controller.

tcpdump -i <avi_ethX>

The output for the above commands shows the external health monitor traffic.

For more details on SSH Key-based Login to NSX Advanced Load Balancer Controller, refer to
SSH Key-based Login to NSX Advanced Load Balancer Controller.

Flapping Servers Up or Down


Server flapping, or bouncing between up and down, is a common issue. Generally, server flapping
is caused by the server reaching or slightly exceeding the health monitor's maximum allowed
response time.

To validate if a server is flapping, you need to check the specific server's analytics page within the
pool. You can enable the Alerts and System Events Overlay icons for the main chart. This will
show server up and down events over the time period selected. The page also displays the list of
failed health monitors.

Compare the response times from the server with the health monitor's configured receive timeout
window. If the failures can be attributed to these timers, you can use the following steps to
rectify the issue:

n Add additional servers — This will not help if the slowdown is due to a backend database, but
for servers that are simply busy or overloaded, this can be a quick and permanent fix.

n Increase the health monitor's receive timeout window — The timeout value can be 1-300
seconds. The timeout value must always be shorter than the send interval for the health
monitor.


n Raise the number of successful checks required, and decrease the number of failed checks
allowed — This will ensure the server is not brought back into the rotation as quickly,
potentially giving it more time to handle the processes that are causing the slow response.

n Change the connection ramp-up (if using the least connections load-balancing algorithm)—
Servers can be susceptible to receiving too many connections too quickly when first brought
up. For instance, if one server has 1 connection and the rest have 100 connections, the least
connections algorithm sends the next 99 connections to the new server. This can easily
overwhelm the server, leaving a flash crowd of connections that must be handled by the
remaining servers and causing a domino effect. You can configure the connection ramp-up
feature on the Advanced tab of the pool's configuration (see the CLI sketch after this list).
The connection ramp-up feature slowly ramps up the percentage of new connections sent to a
new server. Increasing the ramp-up time can be beneficial if you are seeing a cascading
failure of servers.

n Set the maximum number of connections per server — This option, configurable on the
Advanced tab of the pool configuration, ensures that servers are not overloaded and can
handle connections at optimal speed.
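The following is a sketch of applying these remediations from the CLI. The monitor and pool
names are hypothetical, the values are examples only, and the pool field names
(connection_ramp_duration and max_concurrent_connections_per_server) are assumptions based on
the pool's Advanced tab options described above:

: > configure healthmonitor my-http-monitor
: healthmonitor> receive_timeout 8
: healthmonitor> successful_checks 3
: healthmonitor> failed_checks 2
: healthmonitor> save

: > configure pool my-pool
: pool> connection_ramp_duration 15
: pool> max_concurrent_connections_per_server 500
: pool> save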

Validating Server Health


You can validate the response of a server while troubleshooting the reasons for a marked-down
server. Ensure that the test is from a specific NSX Advanced Load Balancer Service Engine, using
the same tenant, network, and IP address.

SEs have multiple network stacks, one for the control plane which uses Linux, and a second for the
data plane. Simply logging into an SE and pinging a server will go out the management port and
IP address, which can route through a different infrastructure than the SE data plane.

Prerequisites
The following are the prerequisites to validate server health.

1 Determine the IP address of the Service Engine hosting the virtual service.

2 SSH into the NSX Advanced Load Balancer Controller.

3 Log into the NSX Advanced Load Balancer shell.

shell

Validating Server Health of VMware - No Tenants


The following are the steps to validate server health of VMware in no tenants option:

1 Connect to a Service Engine's Linux shell as follows:

: > attach serviceengine 10.10.25.28

2 Validate the current namespace as follows:

admin@10-10-25-28:~$ ip netns


The usual output is avi_ns1, which is the default namespace.

3 Execute a static health check from this namespace.

Validating Server Health of VMware - Multiple Tenants


For multiple tenants on VMware, NSX Advanced Load Balancer does not create VRFs/
namespaces by default. The following are the steps to validate the server health of VMware in
multiple tenants option:

1 Attach to the Service Engine Linux shell as follows:

: > attach serviceengine 10.10.25.28

2 Execute a static health check.

Validating Server Health of Multiple Tenants with VRF (Provider Mode)


The following are the steps to validate the server health of multiple tenants in VRF:

1 Find the namespace/VRF for the pool server as follows:

: > show pool p1 detail | grep vrf_id


| vrf_id | 2

In this case, the vrf_id is 2, and the namespace is avi_ns2. This information can also be
obtained using the following CLI command:

: > show serviceengine 10.10.25.28 vnicdb

2 If there are multiple SEs, find the vrf-id on the specific SE:

show pool p1 detail | filter disable_aggregate se se_ref 10.10.25.28


| vrf_id | 2

3 Attach to the Service Engine Linux shell as follows:

: > attach serviceengine 10.10.25.28

4 Execute a static health check from this namespace.

Validating Server Health of Bare Metal/Linux Cloud


For bare-metal Linux clouds, there are no namespaces, reducing the necessary steps. The
following are the steps to validate server health of bare metal/Linux cloud:

1 Attach to the Service Engine Linux shell as follows:

: > attach serviceengine 10.10.25.28

2 Execute a static health check.


Validating Common Manual-Server Checks


Ping — Validate server reachability with ping as follows:

root@test-se2:~# sudo ip netns exec avi_ns1 ping 10.90.15.62


PING 10.90.15.62 (10.90.15.62) 56(84) bytes of data.
64 bytes from 10.90.15.62: icmp_seq=1 ttl=64 time=26.8 ms

Curl — Validate the server's HTTP response with curl as follows:

root@test-se2:~# sudo ip netns exec avi_ns1 curl 10.90.15.62


curl: Failed to connect to 10.90.15.62 port 80: Connection refused

root@test-se2:~# sudo ip netns exec avi_ns1 curl 10.90.15.62:8000
Welcome - Served from port 80!

Note This step is not necessary when the SE is on a Docker and bare-metal setup and the Docker
container itself exists in a namespace.

Detecting Server Maintenance Mode with a Health Monitor


NSX Advanced Load Balancer can actively disable back-end servers for maintenance.
Administrators and application developers can configure NSX Advanced Load Balancer to use
information in the health-check responses from servers to detect whether a server is in
maintenance mode.

The information can be a specific response code, for instance, HTTP code 503, or a specific
response message string, for instance, "Server is under maintenance". Such an event is
operationally different from a case where the server process is down due to a software issue.
During the time a server is under maintenance, you should not send new connections to the server
and should drain the existing connections.

Detecting Maintenance Mode


You can configure some types of health monitors to detect when a server has entered
maintenance mode, based on specific response codes or response data contained in the server's
responses to health checks. This monitor must be associated with the pool the server is in.

n Response code — You can configure HTTP and HTTPS health monitors to filter for a specific
HTTP response code (101-599). If the code is detected in a server's response to a health check
based on the HTTP or HTTPS monitor, NSX Advanced Load Balancer changes the server's
status to down for maintenance.

n Response data — You can configure TCP, UDP, HTTP, and HTTPS health monitors to filter
for specific data (a response string). If the string is detected in a server's response to a health
check based on the monitor, NSX Advanced Load Balancer changes the server's status to down
for maintenance. The string must appear within the first 2,000 bytes of the response data.


An HTTP or HTTPS health monitor can filter for up to 4 maintenance response codes.

The HTTP and HTTPS health monitors can contain any of the following combinations of filters for
detecting a maintenance mode:

n Response string

n Multiple response codes

n Maintenance response string

n Up to four maintenance response codes

TCP and UDP health monitors can contain a filter for maintenance mode based on either or both
of the following:

n Response string

n Maintenance response string

Indicating Maintenance Mode


When NSX Advanced Load Balancer detects that a server has entered maintenance mode, the
server's health status is changed to down for maintenance.

When a server is marked down for maintenance, the existing connections to the server are left
untouched and are allowed to close on their own. NSX Advanced Load Balancer continues to send
health checks to the server. When the server stops responding with the maintenance string or
code, this indicates to NSX Advanced Load Balancer that the maintenance mode has concluded,
and changes the server's health status to up.

Similarly, the server's change into and back out of maintenance mode is indicated in the event log.

Configuring a Health Monitor To Detect Server Maintenance Mode


Web Interface

The following are the steps to configure web interface to detect server maintenance mode:

1 Navigate to the configuration popup of the health monitor:

a Navigate to Templates > Health Monitor.

b Click the edit icon next to the name of the health monitor.

2 Alternatively, click the Create button to create a new health monitor. Specify a name and
select the monitor type, such as TCP or UDP for Layer 4, or HTTP or HTTPS for Layer 7.

3 In the Server Maintenance Mode section, specify the response code(s) or data to use as the
indicator that a server is in maintenance mode.

4 Click Save.

Example HTTPS Health Monitor with Maintenance Mode Detection


Example TCP Health Monitor with Maintenance Mode Detection

Attaching a Health Monitor to a Pool


The health monitor is used only for the pools the monitor is attached to.

To attach a health monitor to a pool,

1 Navigate to Applications > Pools.

2 Click Create button.

3 Select the monitor by clicking Add Active Monitor button. The drop-down list displays the list
of health monitors.


4 Select the required health monitor option.

CLI
The following commands configure an HTTP health monitor to filter for the string under
construction in health-check responses from servers:

: > configure healthmonitor System-HTTP


: healthmonitor> http_monitor
: healthmonitor:http_monitor> maintenance_response "under construction"
: healthmonitor:http_monitor> save
: healthmonitor> save

The following commands configure the same HTTP health monitor to filter for response codes
500 and 501 in health-check responses from servers:

: > configure healthmonitor System-HTTP
: healthmonitor> http_monitor
: healthmonitor:http_monitor> maintenance_code 500
: healthmonitor:http_monitor> maintenance_code 501
: healthmonitor:http_monitor> save
: healthmonitor> save

The following commands edit the health monitor's configuration to remove the filter for a
response string:

: > configure healthmonitor System-HTTP


: healthmonitor> http_monitor
: healthmonitor:http_monitor> no maintenance_response
: healthmonitor:http_monitor> save
: healthmonitor> save

If no maintenance response code or string is configured, NSX Advanced Load Balancer makes no
assumptions about maintenance mode.


Enabling Authentication for HTTP and HTTPS Health Monitors


This section explains the NTLM and Basic authentication support for HTTP and HTTPS health
monitors.

Configuring HTTPS Health Monitor with NTLM Authentication


The following are the steps to create a new HTTPS monitor for the POST method with NTLM
authentication enabled:

[admin:ctrl2]: > configure healthmonitor NTLM-POST

[admin:ctrl2]: healthmonitor> type health_monitor_https


[admin:ctrl2]: healthmonitor> https_monitor
[admin:ctrl2]: healthmonitor:https_monitor> http_request "POST /EWS/Exchange.asmx HTTP/
1.1\r\nContent-Typ
e: text/xml; charset=utf-8 "
[admin:ctrl2]: healthmonitor:https_monitor> http_request_body "[?xml version=\"1.0\"
encoding=\"UTF-8\"?]
[soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:t=\"http://
schemas.microsoft.com/e
xchange/services/2006/types\" xmlns:m=\"http://schemas.microsoft.com/exchange/services/2006/
messages\"][soap:Hea
der][/soap:Header][soap:Body][GetFolder xmlns=\"http://schemas.microsoft.com/exchange/
services/2006/messages\"][
FolderShape][t:BaseShape>IdOnly[/t:BaseShape][/FolderShape][FolderIds]
[t:DistinguishedFolderId Id=\"inbox\"][t:M
ailbox][t:EmailAddress][/t:EmailAddress][/t:Mailbox][/t:DistinguishedFolderId][/FolderIds][/
GetFolder][/soap:Body][/soap:Envelope]"

[admin:ctrl2]: healthmonitor:https_monitor> http_response "GetFolderResponseMessage


ResponseClass=\"Success\""

[admin:ctrl2]: healthmonitor:https_monitor> http_response_code http_2xx


[admin:ctrl2]: healthmonitor:https_monitor> auth_type auth_ntlm
[admin:ctrl2]: healthmonitor:https_monitor> ssl_attributes
[admin:ctrl2]: healthmonitor:https_monitor:ssl_attributes> ssl_profile_ref System-Standard
[admin:ctrl2]: healthmonitor:https_monitor:ssl_attributes> save
[admin:ctrl2]: healthmonitor:https_monitor> save
[admin:ctrl2]: healthmonitor> authentication
[admin:ctrl2]: healthmonitor:authentication> username aviuser
[admin:ctrl2]: healthmonitor:authentication> password aviuserpassword
[admin:ctrl2]: healthmonitor:authentication> save
[admin:ctrl2]: healthmonitor> save

+-------------------------+---------------------------------------------------------------------------+
| Field                   | Value                                                                     |
+-------------------------+---------------------------------------------------------------------------+
| uuid                    | healthmonitor-b8b7cd94-7076-4a55-a90a-77d6e768f4b1                        |
| name                    | NTLM                                                                      |
| send_interval           | 10 sec                                                                    |
| receive_timeout         | 4 sec                                                                     |
| successful_checks       | 2                                                                         |
| failed_checks           | 2                                                                         |
| type                    | HEALTH_MONITOR_HTTPS                                                      |
| https_monitor           |                                                                           |
|   http_request          | POST /EWS/Exchange.asmx HTTP/1.1                                          |
|                         | Content-Type: text/xml; charset=utf-8                                     |
|   http_response_code[1] | HTTP_2XX                                                                  |
|   http_response         | GetFolderResponseMessage ResponseClass="Success"                          |
|   ssl_attributes        |                                                                           |
|     ssl_profile_ref     | System-Standard                                                           |
|   exact_http_request    | False                                                                     |
|   auth_type             | AUTH_NTLM                                                                 |
|   http_request_body     | <?xml version="1.0" encoding="UTF-8"?>                                    |
|                         | <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"     |
|                         | xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types"       |
|                         | xmlns:m="http://schemas.microsoft.com/exchange/services/2006/messages">   |
|                         | <soap:Header></soap:Header><soap:Body>                                    |
|                         | <GetFolder xmlns="http://schemas.microsoft.com/exchange/services/2006/    |
|                         | messages"><FolderShape><t:BaseShape>IdOnly</t:BaseShape></FolderShape>    |
|                         | <FolderIds><t:DistinguishedFolderId Id="inbox"><t:Mailbox>                |
|                         | <t:EmailAddress></t:EmailAddress></t:Mailbox></t:DistinguishedFolderId>   |
|                         | </FolderIds></GetFolder></soap:Body></soap:Envelope>                      |
| authentication          |                                                                           |
|   username              | <sensitive>                                                               |
|   password              | <sensitive>                                                               |
| is_federated            | False                                                                     |
| tenant_ref              | admin                                                                     |
+-------------------------+---------------------------------------------------------------------------+

Note You can configure NTLM authentication for the GET method, and for an HTTP health monitor,
in a similar way.

Enabling Basic Authentication in HTTP(S) Health Monitor


The HTTP(S) health monitor supports basic authentication, configured by providing a username
and password.

Configuring Health Monitor with Basic Authentication


You can configure basic authentication for GET and POST methods by providing basic information
like username and password.


The following is the configuration example for basic authentication for GET method on HTTP
health monitor:

[admin:ctrl2]: > configure healthmonitor HTTP-Basic-Authentication

[admin:ctrl2]: healthmonitor> type health_monitor_http


[admin:ctrl2]: healthmonitor> http_monitor
[admin:ctrl2]: healthmonitor:http_monitor> auth_type auth_basic
[admin:ctrl2]: healthmonitor:http_monitor> save
[admin:ctrl2]: healthmonitor> authentication
[admin:ctrl2]: healthmonitor:authentication> username aviuser
[admin:ctrl2]: healthmonitor:authentication> password aviuser
[admin:ctrl2]: healthmonitor:authentication> save
[admin:ctrl2]: healthmonitor> save

You can configure basic authentication for the POST method, or enable basic authentication in
the HTTPS health monitor, in a similar way.

Note
n Enabling authentication is available for HTTP and HTTPS monitors only.

n You cannot configure exact_http_request for HTTP(S) health monitors using NTLM
authentication.



SE Advanced Networking
2
This chapter includes the following topics:

n VRFs

n Routing

n BGP

n DSR and Default Gateway

n Configuring Networks for SEs and Virtual IPs

n Enabling VLAN trunking on NSX Advanced Load Balancer Service Engine

n Sizing Service Engines

n Per-App SE Mode

n Connecting SEs to Controllers When Their Networks Are Isolated

n SE Memory Consumption

n X-Forwarded-For Header Insertion

n Resetting PCAP TX Ring for Non-DPDK Deployment

n Preserve Client IP

VRFs
This section covers the following topics:

n SE Data Plane Architecture and Packet Flow

n Change VRF Context Setting for NSX Advanced Load Balancer SE's Management Network

n VRF Support for Service Engine Deployment on Bare-Metal Servers

SE Data Plane Architecture and Packet Flow


The Data Plane Development Kit (DPDK) comprises a set of libraries that boosts packet processing
in data plane applications.


The following are the packet processing for the SE data path:

n Server health monitor

n TCP/IP Stack - TCP for all flows

n Terminate SSL

n Parse protocol header

n Server load balancing for SIP/L4/L7 App profiles

n Sending and receiving packets


SE System Logical Architecture (figure): the Controller cluster (SE Mgr / VS Mgr, Metrics Mgr,
Log Mgr, and Datastore) manages the SE; within the SE, proxy processes, the SE-Agent, and the
SE-Log-Agent sit above a dispatcher and flow table attached to the (v)NIC.

The following are the features of each component in SE system logical architecture:

Work Process

The following are the three processes in Service Engine:

n SE-DP

n SE-Agent

n SE-Log-Agent

SE-DP — The role of the process can be a proxy-alone, dispatcher-alone, or proxy-dispatcher


combination.

n Proxy-alone — Full TCP/IP, L4/L7 processing and policies defined for each app/virtual
service.

n Dispatcher-alone —

n Processes Rx of (v)NIC and distributes flows across the proxy services via per proxy
lock-less RxQ based on the current load of each proxy service.

n The dispatcher manages the reception and transmission of packets through the NIC.

n Polls the proxy TxQ and transacts to the NIC.

n Proxy-dispatcher — This acts as a proxy and dispatcher depending on the configuration


and resources available.

SE–Agent — This acts as a configuration and metrics agent for Controller. This can run on any
available core.

SE-Log-Agent — This maintains a queue for logs. This performs the following actions:

n Batches the logs from all SE processes and sends them to the log manager in Controller.

n SE-Log-Agent can run on any available core.

Flow-Table

This is a table that stores relevant information about flows. It maintains flow to proxy service
mapping.

Based on the resources available, the service engine configures an optimum number of
dispatchers. You can override this by using Service Engine group properties. There are multiple
dispatching schemes supported based on the ownership and usage of Network Interface Cards
(NICs):

n A single dispatcher process owning and accessing all the NICs.

n Ownership of NICs distributed among a configured number of dispatchers.

n Multi-queue configuration where all dispatcher cores poll one or more NIC queue pairs, but
with mutually exclusive se_dp to queue pair mapping.


The remaining instances are considered as a proxy. The combination of NICs and dispatchers
determine the Packets per Second (PPS) that a SE can handle. The CPU speed determines
the maximum data plane performance (CPS/RPS/TPS/Tput) of a single core and linearly scales
with the number of cores for a SE. You can dynamically increase the SE’s proxy power without
the need to reboot. A subset of the se_dp processes is active in handling the traffic flows. The
remaining se_dp processes will not be selected to handle new flows. All the dispatcher cores are
also selected from this subset of processes.

The active number of se_dp processes can be specified using SE group property max_num_se_dps.
As a run-time property, it can be increased without a reboot. However, if the number is
decreased, it will not take effect until after the SE is rebooted.

The following is the configuration example:

[admin:ctr2]: serviceenginegroup> max_num_se_dps

INTEGER 1-128 Configures the maximum number of se_dp processes that handles traffic. If not
configured, defaults to the number of CPUs on the SE.
[admin:ctr2]: serviceenginegroup> max_num_se_dps 2
[admin:ctr2]: serviceenginegroup> where | grep max_num
| max_num_se_dps | 2 |
[admin:ctr2]: serviceenginegroup>

Tracking CPU Usage


CPU is intensive in the following cases:

n Proxy

n SSL Termination

n HTTP Policies

n Network Security Policies

n WAF

n Dispatcher

n High PPS

n High Throughput

n Small Packets (for instance, DNS)

Packet Flow from Hypervisor to Guest VM


SR-IOV


Single Root I/O Virtualization (SR-IOV) assigns a part of the physical port (PF - Platform
Function) resources to the guest operating system. A Virtual Function (VF) is directly mapped
as the vNIC of the guest VM and the guest VM needs to implement the specific VF’s driver.

SR-IOV is supported on CSP and OpenStack no-access deployments.

For more information on SR-IOV, see SR-IOV with VLAN and NSX Advanced Load Balancer
(OpenStack No-Access) Integration in DPDK.

Virtual Switch

Virtual switch within hypervisor implements L2 switch functionality and forwards traffic to each
guest VM’s vNIC. Virtual switch maps a VLAN to a vNIC or terminates overlay networks and
maps overlay segment-ID to vNIC.

Note AWS/Azure clouds have implemented the full virtual switch and overlay termination
within the physical NIC and network packets bypass the hypervisor.

In these cases, as VF is directly mapped to the vNIC of the guest VM, the guest VM needs to
implement a specific VF’s driver.

VLAN Interfaces and VRFs


VLAN

A VLAN interface is a logical interface that can be configured with an IP address. It acts as a
child interface of the parent vNIC interface. VLAN interfaces can also be created on port
channels/bonds.

VRF Context

A VRF identifies a virtual routing and forwarding domain. Every VRF has its routing table
within the SE. Similar to a physical interface, a VLAN interface can be moved into a VRF. The
IP subnet of the VLAN interface is part of the VRF and its routing table. The packet with a
VLAN tag is processed within the VRF context. Interfaces in two different VRF contexts can
have overlapping IP addresses.

Health Monitor
Health monitors run in the data path within the proxy as synchronous operations along with packet
processing. Health monitors are shared across all the proxy cores, and hence scale linearly with
the number of cores in the SE.

For instance, 10 virtual services with 5 servers per pool and one HM per server results in 50
health monitors across all the virtual services. A 6-core SE with a dedicated dispatcher has 5
proxies, so each proxy runs 10 HMs, and the HM status is maintained in shared memory across all
the proxies.

Custom external health monitor runs as a separate process within SE and script provides HM
status to the proxy.


DHCP on Datapath Interfaces


The Dynamic Host Configuration Protocol (DHCP) mode is supported on datapath interfaces
(regular interfaces/bond) in bare-metal/LSC Cloud. Starting with NSX Advanced Load Balancer
version 20.1.3, it can also be enabled from Controller GUI.

You can enable DHCP from the Controller using the following command:

configure serviceengine <serviceengine-name>

Then select the desired data_vnics index (i) and enable DHCP:

data_vnics index <i>
dhcp_enabled
save
save

This enables DHCP on the desired interface.

To disable DHCP on a particular data_vnic, you can replace dhcp_enabled with no dhcp_enabled
in the above command sequence.
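For example, the disable sequence for the same interface looks like this:

configure serviceengine <serviceengine-name>
data_vnics index <i>
no dhcp_enabled
save
save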

Note If DHCP is turned on for unmanaged or unconnected interfaces, it can slow down the SE stop
sequence, and the SE could be restarted by the Controller.

Change VRF Context Setting for NSX Advanced Load Balancer SE's
Management Network
The option to change the VRF context setting is available on the NSX Advanced Load Balancer UI.

The VRF setting can be changed by navigating to Infrastructure > Service Engine and clicking the
edit option.

Configuring static routes available under the Management Network does not affect the SE. As
part of SE boot-up process, the NSX Advanced Load Balancer Controller only picks the default
gateway that is applicable to the specific SE based on the SE’s management network.

The VRF edit option on the NSX Advanced Load Balancer UI is only applicable to the Data NIC’s.
The VRF context setting for the management network can be changed using NSX Advanced Load
Balancer CLI.

This section also explains how to configure NSX Advanced Load Balancer when there are multiple
management networks used across Service Engines Groups.

Instructions
Log in to the NSX Advanced Load Balancer CLI and execute the following commands:

n configure vrfcontext management

n static_routes route_id <ID> prefix 0/0 next_hop <IP address of the next
hop>


For more information, see the following configuration snippets:

[admin:10-10-30-102]: > configure vrfcontext management


Updating an existing object. Currently, the object is:
+----------------+-------------------------------------------------+
| Field | Value |
+----------------+-------------------------------------------------+
| uuid | vrfcontext-ef7605b5-4d95-41dd-bf22-5d132584ec7b |
| name | management |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------+-------------------------------------------------+
[admin:10-10-30-102]: vrfcontext> static_routes route_id 1 prefix 0/0 next_hop 10.10.22.1
New object being created
[admin:10-10-30-102]: vrfcontext:static_routes> save
[admin:10-10-30-102]: vrfcontext> static_routes route_id 2 prefix 0/0 next_hop 10.10.30.1
New object being created
[admin:10-10-30-102]: vrfcontext:static_routes> save

After the changes, the show output will exhibit the following information. The VRF for the
management network has the routing entries for the two different subnets.

[admin:10-10-30-102]: vrfcontext:static_routes> save


[admin:10-10-30-102]: vrfcontext> wh
Tenant: admin
+------------------+-------------------------------------------------+
| Field | Value |
+------------------+-------------------------------------------------+
| uuid | vrfcontext-ef7605b5-4d95-41dd-bf22-5d132584ec7b |
| name | management |
| static_routes[1] | |
| prefix | 0/0 |
| next_hop | 10.10.22.1 |
| route_id | 1 |
| static_routes[2] | |
| prefix | 0/0 |
| next_hop | 10.10.30.1 |
| route_id | 2 |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+------------------+-------------------------------------------------+

VRF Support for Service Engine Deployment on Bare-Metal Servers


NSX Advanced Load Balancer Service Engine data interfaces can be assigned to multiple Virtual
Routing and Forwarding Context (VRFs).

Virtual Routing and Forwarding (VRF) is a method of isolating traffic within a system. This is
also referred to as a "route domain" within the load balancer community.


Clouds Types Supported


NSX Advanced Load Balancer supports the assignment of Service Engine data interfaces to
multiple VRFs only in the following cloud types:

n No Access Cloud

n Linux Server Cloud

n VRF Support for vCenter Deployments

Note Multiple VRFs are only supported in Linux Server Clouds for SEs with DPDK enabled.

Types of Interfaces Supported


The VRF property for the following types of data interfaces can be modified by the user, using the
REST API, UI, or CLI.

n Physical interfaces

n Port-channel interfaces

n VLAN interfaces

The following types of data interfaces do not support modification of the VRF property. Any
attempt to modify them will result in an error.

n Port-channel member interfaces

n Management interface

Dependency on In-band Management


Service Engines can be configured to use in-band management. When enabled, control plane and
data plane traffic will share the same interface.

n If in-band management is enabled on an SE, that SE will not support multiple VRFs.

n To enable multiple VRFs on an SE, it must be deployed with in-band management disabled.
The caveat with disabling in-band management is that the management interface will not be
used for data plane traffic; hence, no virtual service will be placed on this interface and it
will not be used to communicate with back-end servers.

For information on how to disable/enable in-band management, see Configuring In-band


Management for an NSX Advanced Load Balancer Service Engine.

Creating VRF Contexts


To create VRF Contexts:

Procedure

1 Navigate to Infrastructure > Routing.


2 Click the cloud name to select the cloud.

Note If the VMware vCenter cloud is the only one configured, or was the first one configured,
the cloud name is “Default-Cloud”.

3 Under the VRF Context tab, click Create.

4 Enter the Name of the VRF context and click Save.

Modifying SE Data Interface VRF — UI


Service Engine physical, port-channel & VLAN interface VRFs can be updated if there are multiple
VRFs configured in the tenant and cloud to which the SE belongs.

Modifying SE Data Interface VRF — CLI


Setting VRF for physical and VLAN interfaces through CLI is as shown below:
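A minimal sketch of the CLI sequence (assuming the data interface is at data_vnics index 1, a VRF
context named vrf-blue already exists, and the vNIC object exposes a vrf_ref field; adjust names
and indexes for your deployment):

: > configure serviceengine 10.10.25.28
: serviceengine> data_vnics index 1
: serviceengine:data_vnics> vrf_ref vrf-blue
: serviceengine:data_vnics> save
: serviceengine> save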

Creating Virtual Services in a VRF


To create virtual services in a VRF:

Prerequisites

The steps to create a virtual service in a VRF can be performed from the admin tenant or another
tenant.


Procedure

1 Navigate to Applications > Dashboard.

2 Click Create Virtual Service.

3 Select Basic Setup.

Note You can also select Advanced Setup if required.

4 Click the cloud name to select the cloud.

5 Click Next.

6 Select the VRF context from the list and click Next.

7 Enter a name for the virtual service, virtual IP address (VIP) and other properties of the virtual
service.

8 Click Save.

Routing
This section covers the following topics:

n Static Route Support for VIP and SNAT IP Reachability

n NAT Configuration on NSX Advanced Load Balancer Service Engine

n Source NAT for Application Identification

n SNAT Source Port Exhaustion

n TCP Transparent Proxy Support

n Autoscale Service Engines

Static Route Support for VIP and SNAT IP Reachability


A static route is required on the next-hop router through which back end servers are connected to
their NSX Advanced Load Balancer Service Engine (SE).

A static route is required on the next-hop router from the NSX Advanced Load Balancer SE to the
pool, in the following case:

n The virtual service’s VIP address or SNAT addresses are not in any of the SE interface subnets.

Static routes for VIP and SNAT IP reachability are supported only in one of the following cases:

n There is no HA requirement on the SE (only a single SE is used).

n Legacy HA mode is enabled.

This section shows sample topologies that use static routing for server response traffic.


Static Routing without HA


Provisioning load balancers without HA is in general not a recommended practice. However,
there can be cases where such a configuration may be desirable. For example, if the SE group is
provisioned with only one SE, then HA is not applicable since there is no device to fail over to. If
that SE fails, all the traffic will be blackholed.

Here is a sample topology without HA. The virtual service’s VIP and SNAT IP addresses are not in
any of the SE interface subnets. As a result, a static route from the back end server to the SE is
required on the next-hop router.

Static routes can be provisioned on the next-hop router to point to the interface IP of the Avi SE.
However, it is recommended to configure a floating interface IP for the SE group and to have the
static route use the floating interface as the adjacency. This will allow the smooth addition of a
second Avi SE in the future if required, for HA purposes (using legacy HA mode).

Figure: sample topology without HA. A router connects clients (192.168.1.0/24 side) and the
server pools to a single active SE in SE group 1. The SE data interface is 10.10.1.1/24 with
floating interface IP 10.10.1.100, and the SE hosts Virtual Service 1 (VIP 10.100.1.1, SNAT
10.200.1.1) and Virtual Service 2 (SNAT 10.200.1.2). The router is configured with static routes
10.200.1.1 -> 10.10.1.100, 10.200.1.2 -> 10.10.1.100, and 10.100.1.1 -> 10.10.1.100; the
Controller cluster attaches over the management network.

Similarly, static routes or a default gateway will also need to be provisioned on the SE group, to
enable reachability to servers and clients, which might not be Layer-2 adjacent. For information
on provisioning a default gateway and static routes on an SE, see NSX Advanced Load Balancer
Infrastructure.
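For example, a default route for the SE's data-plane VRF can be sketched as follows (this assumes
the data-plane VRF is named global and the next-hop router is 10.10.1.1; it mirrors the
vrfcontext static-route configuration shown earlier for the management network):

: > configure vrfcontext global
: vrfcontext> static_routes route_id 1 prefix 0/0 next_hop 10.10.1.1
: vrfcontext:static_routes> save
: vrfcontext> save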


Static Routing with HA


The SE group belonging to the pool used by the virtual service contains two SEs in legacy HA
mode. One of the SEs is active and has ownership of the virtual service’s VIP and SNAT addresses,
while the other SE waits in standby mode. IP addresses that are part of the individual virtual
service’s configuration (including the VIP and SNAT IPs) are enabled only on the active SE in the
SE group.

The active SE responds to Address Resolution Protocols (ARPs) for the VIP and SNAT IP
addresses that are in the same subnet as the SE. The active SE also carries traffic corresponding to
all the virtual services. The standby SE remains idle unless the active SE becomes unavailable. In
this case, the standby SE takes over the active role and assumes ownership of the virtual service’s
IP addresses.

Note The use of static routes for VIP and SNAT IP reachability in cluster HA configurations is not
supported.

Here is an example of a one-armed topology with legacy HA:

Figure: one-armed topology with legacy HA. SE group 1 (HA mode = legacy active/standby) contains
an active SE (10.10.1.1/24) and a standby SE (10.10.1.2/24). The active SE owns the floating
interface IP 10.10.1.100 and the virtual services (Virtual Service 1 with VIP 10.100.1.1 and SNAT
10.200.1.1, Virtual Service 2 with SNAT 10.200.1.2). The router holds static routes
10.200.1.1 -> 10.10.1.100 and 10.200.1.2 -> 10.10.1.100 toward the server pools and clients.


In this example, neither the VIP nor the SNAT IP is part of the SE interface’s subnet. For this
reason, a floating interface IP (10.10.1.100) is configured. The floating interface IP must be in the
same subnet as the attached interface subnet through which the VIP or SNAT-IP is reachable
(10.10.1.0/24 subnet in the above topology).

A separate floating interface IP is required for each of the attached interface subnets through
which VIP or SNAT IP traffic flows. On the next-hop router used by the server pool for return
traffic back to the SE, static routes to the VIP and SNAT IP addresses are configured, with the
next-hop set to the floating interface IP.

Following failover, ownership of the VIP, SNAT IPs, and floating interface IP are taken over by the
new active SE, as shown here:

Figure: the same topology after failover. The previously standby SE (10.10.1.2/24) is now active
and owns the floating interface IP 10.10.1.100, the VIPs, and the SNAT IPs of both virtual
services; the failed SE no longer owns them. The static routes on the router are unchanged.

The connecting router thus does not see any change, except for the gratuitous ARP update for the
floating interface's IP address, which is now mapped to the interface MAC address of the new
active SE.

Configuration
On the NSX Advanced Load Balancer Controller, the VIP and SNAT IP addresses are part of the
individual virtual service’s configuration.


The HA mode and floating IP address are configured within the SE group.

Note The SE group for the non-HA topology contains a single SE. The SE group for the legacy
HA topology contains two SEs.

VIP Address
The VIP address is the IP address that DNS will return in response to queries for the load-
balanced application’s domain name. This is the destination IP address of requests sent from the
client browser to the application.

SNAT IP Address
When the SE forwards a request to a back end server, the SE uses the SNAT IP address as the
source address of the client request. In deployments that handle VIP traffic differently depending
on the application, the source NAT IP address provides a way to direct the traffic. The SNAT IP
address also ensures that response traffic from the back end servers goes back through the SE
that forwarded the request.

Floating Interface IP Address


On the next-hop router, static routes are set up to point to the VIP and SNAT-IP of the SE group.
The static routes are configured with the next-hop set to the floating-interface IP of the attached
subnet of the SE group.

Within the SE group configuration, legacy HA mode is selected and the floating IP address is
specified.

For more information, see Network Service Configuration.


Using the CLI


The following commands set the HA mode in SE group 1 to legacy HA. The floating IP address
10.10.1.100 for the corresponding SE group is configurable using Network Service. For more
information on configuring floating IP Addresses, see Network Service Configuration.

: > configure serviceenginegroup SE group 1


...
: ha_mode ha_mode_legacy_active_standby
: save

NAT Configuration on NSX Advanced Load Balancer Service Engine


When new application servers are deployed, the servers need external connectivity for
manageability.

In the absence of a router in the server networks, the NSX Advanced Load Balancer SE can be
used for routing the traffic of server networks by using the IP routing feature of Service Engines.
Also, you need NAT functionality in the SE to use a NAT gateway for the entire private network of
servers.

Note This feature is not supported for IPv6.

NAT will function in the post-routing phase of the packet path in the SE. It is recommended to go
through the SE default gateway (IP routing on Service Engine) feature. For more information, see
Default Gateway (IP Routing on NSX Advanced Load Balancer SE).

Enabling IP routing on Service Engine and using the SE as the gateway is a necessary prerequisite
to use the outbound NAT feature. Hence all necessary requirements for enabling IP routing on the
Service engine is also applicable to the outbound NAT feature.

Note Outbound NAT is supported for TCP/UDP, and ICMP flows.

NAT Guidelines
NAT is VRF-aware and must be programmed per SE group using a network service of Routing
Service type. For more information, see Network Service.

NAT/IP routing is supported on two-armed, no-access configurations of Linux server clouds and
VMware clouds.

NSX Advanced Load Balancer supports NAT for VMware cloud deployments in write access
mode. For this feature to work on VMware write access clouds, at least one virtual service must be
configured with the following configurations:

n One arm (in the two-arm mode deployment) must be placed in the back end network. For this
network, the SE acts as the default gateway.

n The other arm is placed in the desired front end network.

n The SE group of the network service must be in legacy HA (active/standby).


n The routing service must have routing enabled.

n NAT functions are done by Service Engine IP stack, so the routing_by_linux_ipstack attribute
of Routing Service should be set to False.

n Only DPDK-based SEs are allowed.

n In VMware write access mode, a virtual service must already have been created. Creating this
virtual service brings up the required Service Engines.

n NAT IP of a NAT Rule cannot be the same as any interface IP present in the VRF. Such NAT IP
will be ignored.

n NAT IP is configured on an interface as a secondary IP. Hence, different Service Engine groups
cannot share a NAT IP in a given VRF.

NAT Service
The diagrammatic representation of NAT service traffic initiated from inside to outside is as
follows:

Figure: NAT service traffic flow, steps 1 through 8, from the back-end network BE-NW
192.168.100.0/24 (SE floating IP 192.168.100.1/24) through the SE to the front-end network
FE-NW 10.100.0.0/24 (SE floating IP 10.100.0.2/24) and back.


The flow in the diagram is numbered from 1 to 8. The details of each step are as follows:

Flow Count   Description
1            The server ARPs for the DG and gets MAC-A. The server sends the IP packet to MAC-A. [S:IP-SX, D:IP-Ext]
2            The Service Engine creates a NAT entry since this is a new flow, performs NAT of the source IP and source port, and sends the packet to the router (MAC-R). [S:SE-NIP, D:IP-Ext]
3            The router uses internet routing to forward the packet to Ext. [S:SE-NIP, D:IP-Ext]
4            Ext receives the packet sent by SX. [S:SE-NIP, D:IP-Ext]
5            The packet is received by the destination. [S:IP-Ext, D:SE-NIP]
6            The router ARPs for SE-NIP and the active SE responds to the ARP. [S:IP-Ext, D:SE-NIP]
7            The SE looks up the NAT flow table and, based on the match, changes the destination IP:port to the real server IP and port. [S:IP-Ext, D:IP-SX]
8            The SE does IP routing and sends the packet to MAC-SX. [S:IP-Ext, D:IP-SX]

Note The router reaches the SE group through the front-end floating IP. The SE back-end network
is not routable on the front end.

NAT requires the following configurations at various points in the network:

On the NSX Advanced Load Balancer Controller, enable IP routing for the Service Engine group
(legacy HA only) on the Advanced tab of the SE group configuration.

On the front end router, configure static routes to the back end server networks with the next-hop
as floating IP in the front end network.

On the back end router, configure the SE’s floating IP in the back end server network as the
default gateway.

Configuring NAT Policy


You can configure NAT policy as follows:


Step 1: Assume 10.100.0.78 is the destination IP that the server is trying to reach, and
10.100.0.26 is the NAT IP. This IP is owned by the Service Engine. Note that the NAT IP has to be
configured as a static route on the front-end router with the next hop set to the front-end
floating interface IP (10.100.0.2) of the SE.

configure natpolicy nat-policy-default-group-global


rules index 1
enable
name rule1
match
source_ip match_criteria is_in
addrs 192.168.100.21
ranges begin 192.168.100.2 end 192.168.100.10
save
prefixes 192.168.100.1/24
save
destination_ip match_criteria is_in
addrs 10.100.0.78
save
services
destination_port match_criteria is_in
ports 80
ports 443
save
source_port match_criteria is_not_in
ports 800
save
save
save
action
type nat_policy_action_type_dynamic_ip_port
nat_info
nat_ip 10.100.0.26
save
save
save
save

Assume that the Service Engine group name is Default-Group, with SE interfaces present in the
global VRF.

Step 2: Create a NetworkService that has a NAT policy.

configure networkservice nat-policy-default-group-global


vrf_ref global
se_group_ref Default-Group
service_type routing_service
routing_service
enable_routing
nat_policy_ref nat-policy-default-group-global
save
save


Step 3: Configure ServiceEngineGroup in Legacy-HA and EnableRouting with floating


interface IP, as mentioned. For more information, see Default Gateway (IP Routing on NSX
Advanced Load Balancer SE).

Outbound NAT Use Case

The following are the available debugging commands to get the information of the NAT flows/
stats:

n NAT Flows - Show NAT flow information

n NAT Policy Stats - show NAT policy stats

n NAT Stat - Show NAT statistics

[admin:localhost.localdomain]: > show serviceengine Active_Standby-se-xyjud nat

Note The stats are available using CLI.

Match Criteria
The following match criteria options are supported:

n Match source IP address

n Match source IP address range

n Match source IP address group

n Match source IP prefix

n Match source port(s). Port range is not supported.

n Match destination IP address

n Match destination IP address range

n Match destination IP address group

n Match destination IP prefix

n Match destination port(s)

For every match option, a corresponding is_not option is available (for example, is_not_in). This option can be used to exclude packets having certain parameters from matching the rule, as shown in the sketch below.
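The following is a minimal sketch of an is_not match, assuming the same match grammar used in the natpolicy example above (the prefix value is purely illustrative). It matches packets whose source IP is not in 172.16.0.0/16:

match
source_ip match_criteria is_not_in
prefixes 172.16.0.0/16
save
save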

Match Operations

1 If two or more of the same parameters are used as match criteria, then OR operation is used
for matching.

Example:

match

source_ip match_criteria is_in


addrs 192.168.100.21

ranges begin 192.168.100.2 end 192.168.100.10

This will match if the source IP is 192.168.100.21 or if the source IP falls in the range of
192.168.100.2 - 192.168.100.10.

2 If two different parameters are used in the match criteria, then AND operation is used for
matching.

Example:

match

source_ip match_criteria is_in

addrs 192.168.100.21

ranges begin 192.168.100.2 end 192.168.100.10

destination_port match_criteria is_in

ports 80

This will match if the source IP is 192.168.100.21 or falls in the range 192.168.100.2 - 192.168.100.10, and the destination port is 80.

3 If there are multiple rules configured, the rules are evaluated in ascending order of their index. The evaluation stops on the first match; no subsequent rules are checked once a packet has matched a rule. A minimal sketch follows this list.
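The following is a minimal sketch of rule ordering, assuming the same natpolicy CLI pattern shown earlier (the policy name, rule names, addresses, and NAT IPs are illustrative). A specific rule at index 1 is evaluated before a broader rule at index 2, and evaluation stops at the first match:

configure natpolicy nat-policy-example
rules index 1
enable
name rule-specific-host
match
source_ip match_criteria is_in
addrs 192.168.100.21
save
save
action
type nat_policy_action_type_dynamic_ip_port
nat_info
nat_ip 10.100.0.26
save
save
save
rules index 2
enable
name rule-subnet-catch-all
match
source_ip match_criteria is_in
prefixes 192.168.100.0/24
save
save
action
type nat_policy_action_type_dynamic_ip_port
nat_info
nat_ip 10.100.0.27
save
save
save
save

In this sketch, a packet from 192.168.100.21 matches rule index 1 and is translated to 10.100.0.26; rule index 2 is not evaluated for it. Other hosts in 192.168.100.0/24 fall through to rule index 2 and are translated to 10.100.0.27.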

Action Options

n NAT IP - can be an NSX Advanced Load Balancer VIP, a floating interface IP, or an IP address in the subnet of an SE interface. The NAT IP cannot be an SE interface IP.

n NAT IP range.

Source NAT for Application Identification


The source IP address used by NSX Advanced Load Balancer SEs for server back end connections
can be overridden through an explicit user-specified address (Source NAT (SNAT) IP address).

The SNAT IP address can be specified as part of the virtual service configuration.

Note This feature is not supported for IPv6.


Uses for SE SNAT


In some deployments, it is required to identify traffic based on source IP address, to provide
differential treatment based on the application. For instance, in DMZ deployments there can be
firewall, security, visibility, and other types of solutions that might need to validate clients before
passing their traffic on to an application. Such deployments use the source IP to validate the client.
A single SE can host multiple VIPs, so a firewall sitting between the SE and back end servers
would normally see all traffic coming from the same SE interface IPs, no matter what virtual service
the traffic belongs to. In contrast, with per-VS SNAT, the firewall will see a source IP it can use to
filter traffic based on what application it is coming from (since the firewall knows the VS-SNAT-IP
mapping established by the admin).

In the following example, SNAT is used to identify the application type for a VIP’s traffic. Traffic
destined for email servers must pass through a SPAM filter and anti-virus checks, while traffic
destined for DocShare servers needs to undergo anti-virus and malware filter checks.

(Figure: An Avi SE hosts two virtual services in front of a firewall and the back-end servers. Virtual Service 1 (Name: EmailApp, VIP: 10.1.1.100, SNAT-IP: 1.1.1.1) fronts the EmailApp servers, and Virtual Service 2 (Name: DocShare, VIP: 10.1.1.100, SNAT-IP: 1.1.1.2) fronts the DocShare servers. Firewall rules: if source-IP == 1.1.1.1, run the traffic through the SPAM filter and anti-virus; if source-IP == 1.1.1.2, run the traffic through anti-virus and malware filters.)

(The topology representation is logical rather than physical. For instance, email and DocShare servers can both be running on the same host and be in the same pool. Similarly, the set of email or DocShare servers does not need to be physically connected to the rest of the network through a single segment, and so on.)


One SNAT Address per SE


If a virtual service uses SNAT, the virtual service's configuration must include a unique SNAT
address for each SE that the virtual service can use. For instance, if the SE group for the virtual
service’s pool can be scaled out to a maximum of four SEs, the SNAT list within the virtual service
configuration must contain four unique SNAT addresses.
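For instance, the following is a minimal sketch for a virtual service whose SE group can scale out to four SEs, following the same snat_ip CLI pattern shown in the Using the CLI section later (the virtual service name and addresses are illustrative):

: > configure virtualservice vs-email
: snat_ip 10.10.1.100
: snat_ip 10.10.1.101
: snat_ip 10.10.1.102
: snat_ip 10.10.1.103
: save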

Note Unlike some other load balancing systems, NSX Advanced Load Balancer does not require an entire pool of SNAT IP addresses per virtual service, even for a single load balancing appliance. NSX Advanced Load Balancer does not have the limitation of 64k port numbers for a single device. NSX Advanced Load Balancer is designed to allow a single source IP to have more than 64k connections across an application's back end servers. Up to 48k open connections can be established to each back end server.

Configuring SE SNAT
To enable source NAT for a virtual service:

Procedure

1 Navigate to Applications > Virtual Services.

a If you are creating a new virtual service, click Create > Advanced Setup.

b If you are adding SNAT to an existing virtual service, click the edit icon in the row where
the virtual service is listed.

2 On the Advanced tab, select the SNAT IP in the SNAT IP Address field.

If the SE group allows scaling out to more than one SE, add a unique SNAT IP for each SE. Use
a comma between each IP as a delimiter.

3 Click Save.

Results

The following configuration changes are disruptive, i.e., the virtual service will get removed from
the existing Service Engines and get added back again:

n Add snat_ip pool to virtualservice config

n Remove snat_ip pool from virtualservice

n Update snat_ip pool to remove an IP that was already allocated

High Availability Support for Source NAT


Source NAT can be used with either of the high availability (HA) modes, such as elastic HA or
legacy HA. The configuration requirements differ depending on whether the SE and back end
servers are in the same subnet (connected at Layer 2) or in different subnets (connected at Layer
3).


SE-server Connection    HA Type                        Requirements

Layer 2                 Elastic HA                     SNAT IPs: 1 per SE
                        (Active/Active)                Floating IP: Not required

                        Legacy HA                      SNAT IPs: 1 per virtual service
                        (Active/Standby)               Floating IP: Not required

Layer 3                 Dynamic HA using BGP           SNAT IPs: 1 per SE in SE group (to support scale out)
                        (Active/Active)                Floating IP: Not required

                        Legacy HA                      SNAT IPs: 1 per virtual service
                        (Active/Standby)               Floating IP: Required

n In Layer 3 HA, the upstream router is used to provide equal-cost multipath (ECMP) load
balancing across the virtual service’s SEs.

n For Layer 3 HA, the configuration might be required on the router between the SEs and the
back end servers to enable return traffic from the server to reach the SEs.

n In Layer 2 HA, scale-out is not possible.

Virtual services can have SNAT enabled when associated with a Service Engine Group and VRF
that have a network service with IP routing enabled. However, on any given virtual service
preserve_client_ip will take precedence over SNAT IP.

Layer 2: Cluster HA (A/A)


In Layer 2 cluster HA, one SNAT IP is required per SE at the virtual service configuration level.
When an SE initiates the connection to the back end server, the SNAT IP corresponding to that SE
will be used for the connection.

If the default Layer-2 forwarding option is used, the connections from clients can always go to
the primary SE and then get distributed using Layer 2 forwarding. Here is an example of a typical
Layer 2 cluster HA topology.


(Figure: Typical Layer 2 cluster HA topology. SE group 1, with HA mode Cluster Active-Active, contains two active SEs at 10.10.1.1/24 and 10.10.1.2/24 on the 10.10.1.0/24 client interface network, connected to the router, the server pools, and the NSX Advanced Load Balancer Controller cluster over the management network. Each SE hosts Virtual Service 1 (VIP=10.100.1.1) and Virtual Service 2 (VIP=10.100.1.2). Virtual Service 1 uses SNAT IPs 10.10.1.100 and 10.10.1.101; Virtual Service 2 uses SNAT IPs 10.10.1.150 and 10.10.1.151.)

In this topology, two virtual services are configured. Each of the virtual services is provisioned with
a distinct SNAT IP. Since cluster HA is selected, each virtual service will need to be provisioned
with as many SNAT IPs as the number of SEs in the SE group. The Avi Controller will automatically
distribute the SNAT IPs to the individual SEs on which the virtual services are enabled.

Here is the SNAT configuration in the web interface for Virtual Service 1 with IPv4 addresses in the
example topology.


Here is the SNAT configuration in the web interface for Virtual Service 1 with IPv6 addresses in the
example topology.

Layer 2: Legacy HA (A/S)


Legacy HA mode typically is used when migrating from appliance-based load balancing
deployments, which support only 1:1 active-standby HA mode. In this case, only a single SNAT IP
per virtual service is necessary since the standby SE does not carry any traffic. Here is an example
of a typical Layer 2 legacy HA topology.


(Figure: Typical Layer 2 legacy HA topology. SE group 1, with HA mode Legacy HA, contains an active SE at 10.10.1.1/24 and a standby SE at 10.10.1.2/24 on the 10.10.1.0/24 client interface network, connected to the router, the server pools, and the NSX Advanced Load Balancer Controller cluster. The active SE hosts Virtual Service 1 (VIP=10.100.1.1, SNAT=10.10.1.100) and Virtual Service 2 (VIP=10.100.1.1, SNAT=10.10.1.150).)

In case of a failover, the newly active SE will take over the traffic and ownership of the SNAT IP
from the failed SE. Health monitoring is performed only by the active SE.

Here is the SNAT configuration in the web interface for Virtual Service 1 in the example topology.

Layer 3: Elastic HA (A/A)


In Layer 3 dynamic HA, the SNAT IP is advertised dynamically through BGP. BGP support is
enabled in the virtual service configuration. Using BGP enables an active/active scale-out topology
when the SNAT IP is not part of the SE interface’s subnet.


When SNAT is enabled, NSX Advanced Load Balancer Controller users will need to provide as many SNAT IPs as the width of the desired scale-out. For instance, to support a maximum of four SEs, four unique SNAT IPs are required in the virtual service configuration. If fewer SNAT IPs are configured than the maximum scale-out size, scale-out is limited to one SE per configured SNAT IP.

Here is an example topology with SNAT enabled in scale-out HA mode with BGP enabled.

(Figure: Example topology with SNAT enabled in scale-out HA mode with BGP enabled. SE group 1, with HA mode Clustered Active-Active, contains two active SEs at 10.10.1.1/24 and 10.10.1.2/24, each peering with the router over BGP; the client interface network is 192.168.1.0/24. Each SE hosts Virtual Service 1 and Virtual Service 2, both with VIP=10.100.1.1, using SNAT IPs 10.200.1.1, 10.200.1.2, and 10.200.2.2. The router connects the SEs to the server pools and the NSX Advanced Load Balancer Controller cluster.)

Here is the SNAT configuration in the web interface for Virtual Service 1 in the example topology.


For more information on enabling BGP to advertise SNAT IP addresses, see BGP Support for
Scaling Virtual Services.

Layer 3: Legacy HA (A/S)


This mode requires only a single SNAT IP per virtual service since scaling out is not possible. The
active SE carries all the traffic and owns the SNAT IP, whereas the standby SE remains idle. In case
of a failover, the standby SE takes over the traffic and ownership of the SNAT IP.

A floating interface IP needs to be provisioned to provide adjacency to the upstream router for the
SNAT IP.

For more information, see Legacy HA for NSX Advanced Load Balancer Service Engines.

Using the CLI


The following commands add SNAT IP address 10.200.1.1 and 2001::10 to Virtual Service 1:

: > configure virtualservice Virtual Service 1


...

: snat_ip 10.200.1.1
: snat_ip 2001::10
: save

SNAT Source Port Exhaustion


Typically, connections between an NSX Advanced Load Balancer Service Engine (SE) and
destination servers are translated using source NAT (SNAT). NSX Advanced Load Balancer uses
SNAT to translate the source IP address of the connection from the client address into the IP
address of the SE.


A connection is considered unique if any combination of the client source IP (for SNATed
connections, the SE IP) and protocol port plus the server destination IP and port are unique.
For typical application traffic, the source port from an Avi SE is unique for each SNATed TCP
connection. When SNAT is used, an SE can open up to 64k connections to each destination
server. Every new server added to a pool adds 64k potential concurrent connections. If a
virtual service is scaled across multiple SEs, each SE can maintain a maximum of 64k SNATed
connections to each server.

TCP Transparent Proxy Support


Transparent TCP proxy can also be called routed mode or default gateway mode. In this mode, servers point to the Service Engine's IP address as their default gateway, removing the requirement for the Service Engine to source NAT (SNAT) traffic sent to the destination servers.

PROXY Protocol Support


This section explains the Proxy protocol support for the NSX Advanced Load Balancer.

By default, the NSX Advanced Load Balancer SE source-NATs (SNATs) the traffic destined to servers. Due to SNAT, application server logs will show the L3 IP address of the SE rather than the original
client’s IP address. Protocol extensions such as the “X-Forwarded-For” header for HTTP require
knowledge of the underlying protocol (such as HTTP). For L4 applications, NSX Advanced Load
Balancer supports version 1 (human-readable format) and version 2 (binary format) of the PROXY
protocol (PROXY protocol spec), which conveys the original connection parameters, such as the
client IP address, to the back-end servers. For L4 SSL applications, version 1 is supported. The
NSX Advanced Load Balancer SE requires no knowledge of the encapsulated protocol, and the
impact on performance caused by the processing of transported information is minimal.

Note For applications served over SSL, the server should be configured to accept proxy protocol,
otherwise the SSL handshake may fail.

PROXY protocol spec format:

PROXY TCP4 (real source address) (proxy address) (TCP source port) (TCP destination port)
(CRLF sequence)

Example V1 PROXY protocol line:

PROXY TCP4 12.97.16.194 136.179.21.69 31646 80\r\n

Application Support
Applications must be configured to capture the IP address embedded within the proxy header,
which is in turn embedded in the TCP options. For more information, see PROXY protocol spec.


Configuring PROXY Protocol via UI


The following are the steps to configure PROXY protocol via UI:

1 Navigate to Template > Profiles.

2 Within the Application tab, select System-L4-Application.

3 For Type, select L4.

4 Click Enable PROXY Protocol.

5 Select the desired version.

6 When finished changing the profile, click Save.

Configuring PROXY Protocol via CLI


The following sequence of CLI commands enables PROXY protocol support and sets the protocol version to be used.

configure applicationprofile System-L4-Application


applicationprofile> tcp_app_profile
applicationprofile:tcp_app_profile> proxy_protocol_enabled
applicationprofile:tcp_app_profile> proxy_protocol_version proxy_protocol_version_1
applicationprofile:tcp_app_profile> save
applicationprofile> save

IPv6 Support for PROXY Protocol


PROXY protocol supports IPv6 addresses. IPv6 address can be sent in the PROXY header. The
following is the format for the PROXY header:

PROXY TCP6 (real source IPv6 address) (proxy IPv6 address) (TCP source port) (TCP destination
port) (CRLF sequence)

The following is an example with IPv6 addresses as the source IPv6 address and the proxy IPv6
addresses.

PROXY TCP6 3ffe::1:600:f8ff:ff95:50df 2001::9d38:6ab8:1d49:4c1a:b94b:d2c1 31646 80\r\n

All the features that are applicable or valid for IPv4 addresses remain applicable with these changes.

Autoscale Service Engines


NSX Advanced Load Balancer SEs can run into CPU, memory, or PPS resource exhaustion while
performing application delivery tasks. To increase the capacity of a load-balanced virtual service,
NSX Advanced Load Balancer needs to increase the resources dedicated to the virtual service.

The NSX Advanced Load Balancer Controller can migrate a virtual service to an unused SE, or
scale out the virtual service across multiple SEs for even greater capacity. This allows multiple
active SEs to concurrently share the workload of a single virtual service.


NSX Advanced Load Balancer Data Plane Scaling Methods


NSX Advanced Load Balancer supports three techniques to scale data plane performance:

n Vertical scaling of individual SE performance

n Native horizontal scaling of SEs in a group

n BGP-based horizontal scaling of SEs in a group

In vertical scaling, the resources allocated to the virtual machine running the SE are increased manually, and the VM must be rebooted. The physical limitations of a single virtual machine restrict this scaling. For instance, an SE is not allowed to consume more resources than the physical host allows.

In horizontal scaling, a virtual service is placed on additional Service Engines. The first SE on which
the virtual service is placed is called the primary SE and all the additional SEs are called secondary
SEs for the virtual service.

With native scaling, the primary SE receives all connections for the virtual service and distributes
them across all secondary SEs. As a result, all virtual service traffic is routed through the primary
SE. At some point, the primary SE's packet processing capacity will reach a limit. Although
secondary SEs might have the capacity, the primary SE cannot forward enough traffic to utilize
that capacity. Thus, the packet-processing capacity of the primary SE decides the effectiveness of
native scaling.

For instance, when a virtual service is scaled out to four SEs, that is, one primary SE and three secondary SEs, the primary SE's packet processing capacity reaches its limit, and scaling out the virtual service to a fifth Service Engine provides only marginal benefit.

To scale beyond the native scaling's limit of four Service Engines, NSX Advanced Load Balancer
supports BGP-based horizontal scaling. This method relies on RHI and ECMP and requires manual
intervention to scale the load balancing infrastructure. For more information, see BGP Support for
Scaling Virtual Services.

Both horizontal methods can be used in combination. Native scaling requires no changes to the
first SE but instead relies on distributing load to the additional SEs. The scaling capacity requires
no changes within the network or applications.

Native Service Engine Scaling


Scaling Out

During a normal steady-state, all traffic can be handled by a single SE. The MAC address of this SE
will respond to any Address Resolution Protocol (ARP) requests.

(Figure: a primary SE with two secondary SEs handling the virtual service.)

n As traffic increases beyond the capacity of a single SE, the NSX Advanced Load Balancer
Controller can add one or more new SEs to the virtual service. These new SEs can process
other virtual service traffic, or they can be newly created for this task. Existing SEs can be
added within a couple of seconds, whereas instantiating a new SE VM may take up to several
minutes, depending on the time necessary to copy the SE image to the VM’s host.

n Once the new SEs are configured (both for networking and configuration sync), the first SE,
known as the primary, will begin forwarding a percentage of inbound client traffic to the
new SE. Packets will flow from the client to the MAC address of the primary SE, then be
forwarded (at layer 2) to the MAC address of the new SE. This secondary SE will terminate the
Transmission Control Protocol (TCP) connection, process the connection and/or request, then
load balance the connection/request to the chosen destination server.

n The secondary SE will source NAT the traffic from its IP address when load balancing the flow
to the chosen server. Servers will respond to the source IP of the connection (the SE), ensuring
a symmetrical return path from the server to the SE that owns the connection.

n For OpenStack with standard Neutron, such behavior presents a security violation. To avoid
this, it is recommended to use port security. For more information, see Neutron ML2 Plugin.

n If you (administrator) wish to take direct control of how an SE routes responses to clients, you
can use the CLI (or REST API) to control the se_tunnel_mode setting, as shown:

>configure serviceenginegroup Default-Group


>serviceenginegroup> se_tunnel_mode 1
>serviceenginegroup> save

Tunnel mode values are:

0 (default) — Automatic, based on customer environment

1 — Enable tunnel mode

2 — Disable tunnel mode

The tunnel mode setting won’t take effect until the SE is rebooted. This is a global change.

>reboot serviceengine

Scaling In

In this mode, NSX Advanced Load Balancer is load balancing the load balancers, which allows a
native ability to grow or shrink capacity on the fly.

To scale traffic in, NSX Advanced Load Balancer reverses the process, allowing secondary SEs
30 seconds to timeout active connections by default. At the end of this period, the secondary
terminates the remaining connections. Subsequent packets for these connections will now be
handled by the primary SE, or if the virtual service was distributed across three or more SEs,
the connection could hash to any of the remaining SEs. This timeout can be changed using the vs_scalein_timeout CLI setting (specified in seconds), as shown in the sketch below.
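A minimal sketch, assuming the scale-in timeout is exposed as a Service Engine group property and following the same serviceenginegroup CLI pattern used for se_tunnel_mode earlier in this section (the group name and the 60-second value are illustrative):

>configure serviceenginegroup Default-Group

>serviceenginegroup> vs_scalein_timeout 60
>serviceenginegroup> save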

Distribution


When scaled across multiple Service Engines, the percentage of load may not be entirely equal.
For instance, the primary SE must make a load balancing decision to determine which SE should
handle a new connection, then forward the ingress packets. For this reason, it will have a higher
workload than the secondary SEs and may therefore own a smaller percentage of connections
than secondary SEs. The primary will automatically adjust the percentage of traffic across the
eligible SEs based on available CPU.

Use Case Scenarios


This section focuses on use case scenarios of scaling Service Engines.

Scale Use Cases


A non-scaled virtual service offers the most optimal packet path from the client through the NSX Advanced Load Balancer SE to the server. Scaling SEs may add an extra hop to some traffic
(specifically traffic pushed to secondary SEs) for ingress packets. Scaling works well for the
following use cases:

n Traffic that involves minimal ingress and greater egress traffic, such as client/server apps,
HTTP or video streaming protocols. For instance, SEs may exist on hosts with single 10-Gbps
NICs. While scaled out, the virtual service can still deliver 30 Gbps of traffic to clients.

n Protocols or virtual service features that consume significant CPU resources, such as
compression or Secure Sockets Layer (SSL)/ Transport Layer Security (TLS).

n Concurrent connection counts that exceed the memory of a single SE.

Scaling does not work well for the following use case:

n Traffic that involves significant client uploads beyond the network or packet-per-second capacity of a single SE (or specifically, the underlying virtual machine). Since all ingress packets traverse the primary SE, scaling may not provide much benefit. For packet-per-second limitations, see the documentation for the desired platform or hypervisor.

Impact on Existing Connections


Existing connections are not impacted by scaling out, as only new connections are eligible to be
scaled to another SE. When scaling in, connections on the secondary SE are given 30 seconds
to finish and are then terminated by the secondary SE. These connections will be flagged in the
virtual service’s significant logs. Subsequent packets for the connection or client are eligible to be
re-load balanced by the primary SE.

Secondary SE Failure
If a secondary SE fails, the primary will detect the failure quickly and forward subsequent packets
to the remaining SEs handling the virtual service. Depending on the high availability mode
selected, a new SE may also be automatically added to the group to fill the gap in capacity. Aside
from the potential increase in connections, traffic to other SEs is not affected.


Primary SE Failure
If the primary SE fails, a new primary will be automatically chosen among the secondary SEs.
Similar to a non-scaled failover event, the new primary will advertise a gratuitous ARP for
the virtual service IP address. If the virtual service was using source IP persistence, the newly
promoted primary will have a mirrored copy of the persistence table. Other persistence methods
such as cookies and secure HTTPS are maintained by the client; therefore no mirroring is
necessary. For TCP and UDP connections that were previously delegated to the newly promoted
primary SE, the connections continue as normal, although now there is no need for these packets
to incur the extra hop from the primary to the secondary.

For connections that were owned by the failed primary or by other secondary SEs, the new
primary will need to rebuild their mapping in its connection table. As a new, non-SYN packet is
received by the new primary, it will query the remaining SEs to see if they had been processing
the connection. If they had, the connection flow will be reestablished to the same SE. If no SE
announces it had been handling the flow, it is assumed the flow was owned by the failed primary.
The connection will be reset for TCP, or load balanced to a remaining SE for UDP.

Relation to HA modes
Scaling is different from high availability, however, the two are heavily intertwined. A scaled-out
virtual service will experience no more than a performance degradation if a single SE in the
group fails. Legacy HA active/standby mode - a two-SE configuration - does not support scaling.
Instead, service continuity depends on the existence of initialized standby virtual services on the
surviving SE. These are capable of taking over with a single command.

NSX Advanced Load Balancer’s default HA mode is elastic HA N+M mode, which starts each
virtual service for the SE group in a non-scaled mode on a single SE. In such a configuration,
failure of an SE running non-scaled virtual services causes a brief service outage (of those
virtual services only), during which the Controller places the affected virtual services on spare
SE capacity. In contrast, a virtual service that has scaled to two or more SEs in an N+M group
suffers no outage, but instead a potential performance reduction.

Automated Versus Manual Scaling

Migrate
In addition to scaling, a virtual service can also be migrated to a different SE. For instance,
multiple underutilized SEs can be consolidated into a single SE. Or a single SE with two busy
virtual services can have one virtual service migrated to its own SE. If further capacity is required, the virtual service can still be scaled out to additional SEs. The migration process behaves similarly to scaling. A new SE is added to an existing virtual service as a secondary. Shortly afterward, the NSX Advanced Load Balancer Controller promotes the secondary to become primary. The new SE will now handle all new connections, forwarding any older connections to the now-secondary SE. After 30
seconds, the old SE will terminate the remaining connections and be removed from the virtual
service configuration.


Manual Scaling

Manual scaling is the default mode. Scale-out is initiated from the Analytics page for the virtual
service. Point to the Quick Info popup (the virtual service name in the top left corner) to show
options for Scale-Out, Scale In, and Migrate. Select the desired option to scale or migrate. If NSX
Advanced Load Balancer is configured in full access mode, then scale out will begin. This can take
a couple of seconds if an existing SE has available resource capacity and can be added to the
VS, or up to a couple of minutes if a new SE must be instantiated. For read or no access modes,
the NSX Advanced Load Balancer Controller cannot install new SEs or change the networking settings of existing SEs. Therefore, the administrator may be required to manually create new SEs and properly configure their network settings before initiating a scale-out command. If an eligible SE is not available when attempting to scale out, an error message will provide further information. Consider scaling out when the SE CPU exceeds 80% for any sustained amount of time, the SE memory exceeds 90%, or the packets per second reach the limit of the hypervisor for a virtual machine.

Automated Scaling

The default for scaling is manual. This may be changed on a per-SE-group basis to automatic
scaling (auto-rebalance), which allows the Avi Controller to determine when to scale or migrate
a virtual service. By default, NSX Advanced Load Balancer Controller may scale-out or migrate
a virtual service when the SE CPU exceeds an 80% average. It will migrate or scale in a virtual
service if the SE CPU is under 30%. The Controller inspects SE groups at a five-minute interval.
If the last 30 seconds of that 5-minute interval is above the maximum or below the minimum
settings, the Controller may take an action to rebalance the virtual services across SEs. The
Controller will only initiate or allow one pending change per five-minute period. This could be
a scale in, scale-out, or virtual service migration.


(Flowchart: at each 5-minute interval, the Controller checks whether the SE CPU is above 80%. If not, it does nothing. If it is, and a single virtual service consumes more than 70% of the SE's PPS, the largest virtual service is scaled out; otherwise, a virtual service is migrated off the SE.)

Example scenarios for automated scaling and migration:

n If a single virtual service exists on an SE and that SE is above the 80% threshold, the virtual
service will be scaled out.

n The ratio of consumption of SEs by virtual services is determined by comparing the PPS
(packets per second) during the 5-minute interval. If the SE is above the 80% CPU threshold,
and one virtual service is generating more than 70% of the PPS for the SE, this virtual service
will be scaled out. However, if the SE CPU is above the 80% mark, and no single virtual service
is consuming more than 70% of the SE’s PPS, the Controller will elect to migrate a virtual
service to another SE. The virtual service that is consuming the most resources has a higher
probability of being chosen to migrate.

n If two virtual services exist on an SE, and each is consuming 45% of the SE's CPU, in other words neither is violating the 70% PPS rule, one virtual service will be migrated to a new SE.

For more information, see How to Configure Auto-rebalance Using NSX Advanced Load Balancer
CLI.

Configuring Auto-Rebalance
The auto-rebalance feature helps in automatically migrating or scaling virtual services when the load on the Service Engines goes beyond or falls below the configured threshold.

For more information, see How to Configure Auto-rebalance using NSX Advanced Load Balancer CLI.
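A minimal sketch, assuming auto-rebalance is enabled per SE group through the auto_rebalance and auto_rebalance_interval knobs (the group name and the 300-second interval are illustrative; see the referenced article for the complete procedure):

>configure serviceenginegroup Default-Group

>serviceenginegroup> auto_rebalance
>serviceenginegroup> auto_rebalance_interval 300
>serviceenginegroup> save

With auto-rebalance enabled, the Controller evaluates the SE group at each interval (300 seconds here) and may scale out, scale in, or migrate virtual services as described above.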

Enable a Virtual Service VIP on All Interfaces


This section describes how to enable a virtual service VIP.

Typically, a virtual service is placed on one or more NICs, as determined by a list ascertained by
the NSX Advanced Load Balancer Controller. However, the list may not include all SE interfaces.
This feature enables placing the VIP on all NICs of the SEs in the SE group, which is useful when using the default gateway feature. Otherwise, the back-end servers might never be able to reach a VIP placed on interfaces other than the one set as their default gateway.


This feature is relevant only in Active-Standby environments and is configurable through the Network Service for a particular Service Engine group and VRF, using the enable_vip_on_all_interfaces configuration knob.

For more information, see Network Service Configuration.
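A minimal sketch, assuming the knob is set under the routing service of a NetworkService object and following the networkservice CLI pattern shown earlier in this guide (the object name is illustrative):

configure networkservice vip-on-all-intf-default-group-global

vrf_ref global
se_group_ref Default-Group
service_type routing_service
routing_service
enable_vip_on_all_interfaces
save
save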

Note The maximum number of characters in a vip_id is limited to 16 characters.

BGP
This section covers the following topics:

n BGP Learning and Advertisement Support

n BGP Support for AS Path

n BGP Support for Scaling Virtual Services

n BGP/BFD Visibility

n BGP Community Support on NSX Advanced Load Balancer

n Multihop BGP

n Configuring BGP Graceful Restart

n Service Engine Failure Detection

n Debugging BGP-based Service Engine Configurations

n How to Access and Use Quagga Shell using NSX Advanced Load Balancer CLI

n IPv6 BGP Peering in NSX Advanced Load Balancer

n BGP Support in NSX Advanced Load Balancer for OpenShift and Kubernetes

BGP Learning and Advertisement Support


The BGP learning and advertisement supports:

n Learning routes from a set of peers.

n Learning default route from a set of peers.

n Advertising learned routes to a set of peers.

n Advertising NSX Advanced Load Balancer Service Engine as default routes to a set of peers.

Note
n This feature is not supported for IPv6.

n Learning and advertisement are not supported alongside graceful restart.


Learning Back end Routes and Advertising the same to the Front-end
The following is the diagrammatic representation of learning back end routes and advertising the
same to the front end:

(Figure: The Avi SE sits between BGP Peer Router-1 (label North, 10.10.116.19/24) and BGP Peer Router-2 (label South, 10.10.117.19/24), with its interfaces on 10.10.116.18/24 and 10.10.117.18/24 and the server networks 10.10.118.0/24, 10.10.119.0/24, and 10.10.120.0/24 behind the South peer. The SE learns back-end routes from the South peer(s), for example 10.10.118.5, 10.10.119.5, and 10.10.120.5 with next hop 10.10.117.19, and advertises the learned routes to the North peer(s) with itself (10.10.116.18) as the next hop. Routing option associated with label North: advertise_learned_routes, learn_default_route. Routing option associated with label South: learn_routes, advertise_default_route.)

Learning Default Route from the Front end and Advertising itself as Default
Route to Back end
The following is the diagrammatic representation of learning default route from the front end and
advertising itself as the default route to the back end:

(Figure: The Avi SE learns the default route from the North peer(s), 0.0.0.0/0 with next hop 10.10.116.19, and advertises itself as the default route to the South peer(s), 0.0.0.0/0 with next hop 10.10.117.18. BGP Peer Router-1 (label North) is on 10.10.116.19/24 and BGP Peer Router-2 (label South) is on 10.10.117.19/24; the SE interfaces are 10.10.116.18/24 and 10.10.117.18/24, with server networks 10.10.118.0/24, 10.10.119.0/24, and 10.10.120.0/24 behind the South peer. Routing option associated with label North: advertise_learned_routes, learn_default_route. Routing option associated with label South: learn_routes, advertise_default_route.)


Advertising directly connected Back-end Networks to Front-end


The following is the diagrammatic representation of advertising directly connected back end
networks to the front end:

(Figure: The Avi SE advertises its directly connected back-end networks to the North peer(s), for example 10.10.117.19, 10.10.118.19, and 10.10.119.19 with next hop 10.10.116.18. BGP Peer Router-1 (label North) is on 10.10.116.19/24; the SE connects to the back-end server networks 10.10.117.19/24, 10.10.118.19/24, and 10.10.119.19/24 through its interfaces 10.10.117.18/24, 10.10.118.18/24, and 10.10.119.18/24. The option to advertise directly connected back-end networks is available inside the NetworkService object > Routing Service > advertise_backend_networks.)

Key Considerations
The following are the constraints with learning and advertising NSX Advanced Load Balancer BGP:

n This feature is only available using CLI.

n The advertisement option is supported only when routing is enabled (Default Gateway (IP
Routing on NSX Advanced Load Balancer SE). Routing is supported only with Legacy-HA
mode. Only active SE will advertise the routes.

n Configurable route attributes, such as AS path prepend, IP communities, local preference, will
not be applied on learned routes.

n The filters to learning routes and advertising of learned routes are not allowed.

n A label used in peer should be present in one routing option.

n The peers are grouped to exchange routes based on the associated label.

n From a peer, you can either learn routes or learn the default route, but not both.

n For instance, the assumption is that when you learn routes from back end peers, there will be no default route.

n You will not be advertising NSX Advanced Load Balancer Service Engine as the default route
to any peer belonging to a group from which you are learning the default route.


n You will not be advertising the default route to any peer in the group to which you are
advertising the learned routes.

Note The routes learned through BGP will not be used for placement decisions. The Controller
will not use the routes learned by Service Engines through BGP to evaluate reachability to the pool
servers.

Configuring Learning and Advertisement


The following is the sample configuration sequence with one front end peer and one back end
peer:

[admin:ctlr-bgp]: > configure vrfcontext global


Updating an existing object. Currently, the object is:
+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-f1d049c8-306e-45eb-8fe3-1f6abb8e19ef |
| name | global |
| bgp_profile | |
| local_as | 66000 |
| ibgp | False |
| peers[1] | |
| remote_as | 1 |
| peer_ip | 100.64.1.64 |
| subnet | 100.64.1.0/24 |
| md5_secret | <sensitive> |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | False |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 255 |
| shutdown | False |
| label | frontend |
| peers[2] | |
| remote_as | 65000 |
| peer_ip | 100.64.2.65 |
| subnet | 100.64.2.0/24 |
| md5_secret | <sensitive> |
| bfd | True |
| advertise_vip | False |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 255 |
| shutdown | False |
| label | backend |
| keepalive_interval | 60 |
| hold_time | 180 |
| send_community | True |
| local_preference | 400 |
| num_as_path_prepend | 3 |


| routing_options[1] | |
| label | backend |
| learn_routes | True |
| advertise_default_route | True |
| max_learn_limit | 100 |
| routing_options[2] | |
| label | frontend |
| learn_only_default_route | True |
| learn_routes | False |
| advertise_learned_route | True |
| max_learn_limit | 50 |
| shutdown | False |
| system_default | True |
| lldp_enable | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+

The example shows a configuration where the SE learns the default route from the front end, advertises the default route to the back end, learns routes from the back end, and advertises the learned routes to the front end.

The following is the Service Engine route outputs to illustrate the learning and advertisement
feature:

[admin:amit-ctrl-bgp]: >
[admin:amit-ctrl-bgp]: > show serviceengine Avi-se-mrcps route
+-----------------+-------------+-----------+---------------+---------------------------+
| IP Destination | Gateway | Interface | Interface IP | Route Flags |
+-----------------+-------------+-----------+---------------+---------------------------+
+-----------------+-------------+-----------+---------------+---------------------------+
VRF 0
+-----------------+-------------+-----------+---------------+---------------------------+
| 4.4.4.0/24 | 100.64.1.64 | eth3 | 100.64.1.24 | Up, Learned, Gateway, GWUp |
| 5.5.5.1/32 | 0.0.0.0 | eth3 | 5.5.5.1 | Up, GWUp |
| 6.6.6.0/24 | 100.64.2.65 | eth2 | 100.64.2.56 | Up, Learned, Gateway, GWUp|
| 7.7.7.1/32 | 0.0.0.0 | eth3 | 7.7.7.1 | Up, GWUp |
| 100.64.1.0/24 | 0.0.0.0 | eth3 | 100.64.1.24 | Up, GWUp |
| 100.64.1.104/32 | 0.0.0.0 | eth3 | 100.64.1.104 | Up, GWUp |
| 100.64.1.105/32 | 0.0.0.0 | eth3 | 100.64.1.105 | Up, GWUp |
| 100.64.1.106/32 | 0.0.0.0 | eth3 | 100.64.2.106 | Up, GWUp
| 100.64.1.108/32 | 0.0.0.0 | eth3 | 100.64.1.108 | Up, GWUp |
| 100.64.2.0/24 | 0.0.0.0 | eth2 | 100.64.2.56 | Up, GWUp|
+-----------------+-------------+-----------+---------------+---------------------------+
[admin:admin-ctrl-bgp]: >


BGP Support for AS Path


This section focuses on configuring the Autonomous System (AS) path and local preference for routes published over eBGP and iBGP, respectively.

Note
n The AS path prepend and local preference features work with the same prerequisites or ecosystem support that is listed in BGP Support for Scaling Virtual Services.

n The features are not supported for IPv6.

Prepending AS Path
When multiple paths to an IP address or prefix are available through BGP in a router, the router
will prefer the path with the least number of AS identifiers in the path.

The BGP can signal lower priority to a route by prepending an arbitrary number of AS identifiers.
This route will be picked only when the route with the lower number of AS identifiers goes down.

This feature allows you to prepend AS identifiers in the path. This is applicable only for routes
advertised over eBGP connections.

Setting Local Preference


You can set the Local Preference field to communicate the preference of the path to its peer.

A higher value means higher preference. This is applicable only over iBGP connections.

Use Case for AS Path


The following is the diagrammatic representation of an AS path use case:


(Figure: AS path use case. An eBGP peer with local AS 66000 (an AS different from the SE's AS) learns two routes to the same VIP: 1.1.1.1 with next hop 10.10.116.18, and 1.1.1.1 with next hop 20.20.116.18 and AS path 65000 65000. The router always prefers the route with the least AS path length. In Data Center-1, AVI-SE-DC-1 (local AS 65000, interface 10.10.116.18/24) advertises VS1 with VIP 1.1.1.1 without AS path prepend. In Data Center-2, AVI-SE-DC-2 (local AS 65000, interface 20.20.116.18/24) advertises the same virtual service (VS2, VIP 1.1.1.1) with the AS path prepended, for example twice, using configure vrfcontext > bgp_profile > num_as_path_prepend 2.)


You can deploy the same service in two different data centers involving two different NSX
Advanced Load Balancer clusters. Both use the same VIP.

The upstream router to which both the SEs get connected will pick the path with the shortest AS
path.

If the service with a short AS path gets disrupted, the system picks the one with the longer AS
path. This is a method for deploying active stand-by across datacenters/geographies.

Use Case for Local Preference


The following is the diagrammatic representation of the local preference use case:


(Figure: Local preference use case. A BGP peer with local AS 65000 (the same local AS as the SEs) learns two routes to the same VIP: 1.1.1.1 with next hop 10.10.116.18 and LocalPrf 100, and 1.1.1.1 with next hop 20.20.116.18 and LocalPrf 200. The router always prefers the route with the highest local preference. In DataCenter-1, AVI-SE-DC-1 (local AS 65000, interface 10.10.116.18/24) advertises VS1 with VIP 1.1.1.1 without any explicit local preference (the default is 100). In DataCenter-2, AVI-SE-DC-2 (local AS 65000, interface 20.20.116.18/24) advertises the same virtual service (VS2, VIP 1.1.1.1) with the local preference set to 200, using configure vrfcontext > bgp_profile > local_preference 200.)


You can deploy the same service in two different data centers involving two different NSX Advanced Load Balancer clusters. Both use the same VIP.

The upstream router to which both the SEs are connected will pick the path with the highest local preference.

If the service with the higher local preference gets disrupted, the system picks the one with the lower local preference. This is a method for deploying active stand-by across datacenters/geographies.

Configuring AS Path and Local Preference


This section describes the configuration of AS path and local preference.

The community feature allows you to configure a default community string and separate community strings for address ranges.

The AS path prepend and local preference are route qualifiers like the community. The same process can be followed for AS path prepend and local preference.

The configuration supports setting a local preference value for all the VIP and SNAT routes
advertised. This is a field in the BGP profile which is part of VRF.

The configuration supports setting the number of times the local AS is to be prepended in the VIP
and SNAT routes advertised. This is a field in the BGP profile which is part of VRF.

Configuring AS Path using NSX Advanced Load Balancer UI


The NSX Advanced Load Balancer supports configuring AS path from the UI.

Navigate to Infrastructure > Routing > BGP Peering and provide the value for the AS-Path
Prepend as shown below:


Configuring AS Path using NSX Advanced Load Balancer CLI


The following is the CLI to configure the AS path:

[admin:ctlr1]: > configure vrfcontext global


[admin:ctlr1]: vrfcontext> bgp_profile
[admin:ctlr1]: vrfcontext:bgp_profile> num_as_path_prepend 5
[admin:ctlr1]: vrfcontext:bgp_profile> save
[admin:ctlr1]: vrfcontext> save
+-----------------------+-------------------------------------------------+
| Field | Value |
+-----------------------+-------------------------------------------------+
| uuid | vrfcontext-4f58cb16-eedb-41d1-a125-538e512f11bb |
| name | global |
| bgp_profile | |
| local_as | 66000 |
| ibgp | False |
| keepalive_interval | 60 |
| hold_time | 180 |
| send_community | True |
| num_as_path_prepend | 5 |


| shutdown | False |
| system_default | True |
| lldp_enable | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+-----------------------+-------------------------------------------------+

Network Next Hop Metric LocPrf Weight Path


*>100.64.1.126/32 100.64.1.69 0 0 65000 i
*>100.64.1.153/32 100.64.1.39 0 0 65000 65000 65000 65000 65000
65000 i

As per the above use case, on the upstream router the AS path contains N+1 AS entries, where N is the num_as_path_prepend value configured in the BGP profile.

Configuring Local Preference using NSX Advanced Load Balancer UI


Configuring local preference is supported using the NSX Advanced Load Balancer UI.

Navigate to Infrastructure > Routing > BGP Peering and provide the value for the Local
Preference as shown below:


Note Any configuration change in AS path prepend or local preference parameters can result in a
BGP connection to the peers being flapped.

Configuring Local Preference using NSX Advanced Load Balancer CLI


The following is the CLI to configure the local preference:

[admin:ctlr1]: > configure vrfcontext global

[admin:ctlr1]: vrfcontext> bgp_profile
[admin:ctlr1]: vrfcontext:bgp_profile> local_preference 500
[admin:ctlr1]: vrfcontext:bgp_profile> save
[admin:ctlr1]: vrfcontext> save
+----------------------+-------------------------------------------------+
| Field | Value |
+----------------------+-------------------------------------------------+
| uuid | vrfcontext-b894161d-d517-4f11-ac78-ee869389fe1e |
| name | global |
| bgp_profile | |
| local_as | 6000 |
| ibgp | True |
| keepalive_interval | 60 |


| hold_time | 180 |
| send_community | True |
| local_preference | 500 |
| shutdown | False |
| system_default | False |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------+-------------------------------------------------+

Network Next Hop Metric LocPrf Weight Path


>i0.0.0.0/0 100.64.2.70 500 0 i
>i10.79.172.0/22 100.64.2.70 0 500 0 i

As per the above use case, on the upstream router, the local preference has been updated to the
configured value.

Local AS Override for an iBGP Profile in VRF


This feature is required for cases where the local AS in an iBGP profile on a VRF needs to be
decided based on the peers reachable through the SE. For instance, networks where routers that support only 2-byte AS numbers co-exist with more recent routers.

When a VRF and its BGP profile is deployed in an SE, if there are peer configurations with
ibgp_local_as_override set and the peer subnet applies to the SE, the profile level local_as will
be overridden with the peer level remote_as.

The following are a few constraints in the configuration:

n This feature is only for iBGP networks.

n If there are multiple peers with subnets to the same TOR in the SE and
ibgp_local_as_override is enabled, all the peers must have the same remote_as value.

Example config

+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-553674bd-44b9-4a22-b4d6-8bf804e0f046 |
| name | global |
| bgp_profile | |
| local_as | 100 |
| ibgp | True |
| peers[1] | |
| remote_as | 200 |
| peer_ip | 100.64.3.10 |
| subnet | 100.64.3.0/24 |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |


| ibgp_local_as_override | True |
| peers[2] | |
| remote_as | 200 |
| peer_ip | 100.64.4.10 |
| subnet | 100.64.4.0/24 |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| ibgp_local_as_override | True |
| peers[3] | |
| remote_as | 300 |
| peer_ip | 100.64.5.10 |
| subnet | 100.64.5.0/24 |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| ibgp_local_as_override | True |
| peers[4] | |
| remote_as | 100 |
| peer_ip | 100.64.6.10 |
| subnet | 100.64.6.0/24 |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| keepalive_interval | 60 |
| hold_time | 180 |
| send_community | True |
| shutdown | False |
| system_default | True |
| lldp_enable | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+

With the above config, the following are the only valid SE peering:

Peering with Peers                   Quagga Config Local AS

peering with peers[1]                200

peering with peers[2]                200

peering with peers[1] and [2]        200

peering with peers[3]                300

peering with peers[4]                100

Note Any other combination of peering is invalid and results in all the BGP virtual services
deployed in the SE with this VRF going to OPER_DOWN state.

BGP Support for Scaling Virtual Services


One of the ways NSX Advanced Load Balancer adds load balancing capacity for a virtual service is
to place the virtual service on additional Service Engines (SEs).

For instance, capacity can be added for a virtual service when needed by scaling out the virtual
service to additional SEs within the SE group, then removing (scaling in) the additional SEs when
no longer needed. In this case, the primary SE for the virtual service coordinates the distribution of
the virtual service traffic among the other SEs, while also continuing to process some of the virtual
service’s traffic.

An alternative method for scaling a virtual service is to use a Border Gateway Protocol (BGP)
feature, route health injection (RHI), with a layer 3 routing feature, equal-cost multi-path (ECMP).
Using Route Health Injection (RHI) with ECMP for virtual service scaling avoids the managerial
overhead placed upon the primary SE to coordinate the scaled out traffic among the SEs.

BGP is supported in legacy (active/standby) and elastic (active/active and N+M) high availability
modes.

If a virtual service is marked down by its health monitor or for any other reason, the NSX
Advanced Load Balancer SE withdraws the route advertisement to its virtual IP (VIP) and restores
the same only when the virtual service is marked up again.

Notes on Limits
Service Engine Count

By default, NSX Advanced Load Balancer supports a maximum of four SEs per virtual service, and
this can be increased to a maximum of 64 SEs. Each SE uses RHI to advertise a /32 host route
to the virtual service’s VIP address and can accept the traffic. The upstream router uses ECMP to
select a path to one of the SEs.

The limit on SE count is imposed by the ECMP support on the upstream router. If the router supports up to 64 equal-cost routes, then a virtual service enabled for RHI can be supported on up to 64 SEs. Similarly, if the router supports fewer paths, then the number of SEs supported for an RHI-enabled virtual service will be correspondingly lower.

Subnets and Peers


NSX Advanced Load Balancer supports 4 distinct subnets with any number of peers in those 4
subnets. Consequently, a VIP can be advertised on more than 4 peers as long as those peers
belong to 4 or fewer subnets. To illustrate:

n A VIP can be advertised to 8 peers, all belonging to a single subnet.

n A VIP can be advertised to 4 pairs of peers (once again, 8 peers), with each pair belonging to a
separate subnet.

Supported Ecosystem
BGP-based scaling is supported in the following:

n VMware

n Linux server (bare-metal) cloud

Note Peering with OpenStack routers is not supported. However, peering with an external router
is possible.

BGP-based Scaling
NSX Advanced Load Balancer supports the use of the following routing features to dynamically
perform virtual service load balancing and scaling:

n Route health injection (RHI): RHI allows traffic to reach a VIP that is not in the same subnet as
its SE. The NSX Advanced Load Balancer Service Engine (SE) where a virtual service is located
advertises a host route to the VIP for that virtual service, with the SE’s IP address as the
next-hop router address. Based on this update, the BGP peer connected to the NSX Advanced
Load Balancer SE updates its route table to use the NSX Advanced Load Balancer SE as the
next hop for reaching the VIP. The peer BGP router also advertises itself to its upstream BGP
peers as a next hop for reaching the VIP.

n Equal cost multi-path (ECMP): Higher bandwidth for the VIP is provided by load sharing its
traffic across multiple physical links to the SE(s). If an NSX Advanced Load Balancer SE has
multiple links to the BGP peer, the NSX Advanced Load Balancer SE advertises the VIP host
route on each of those links. The BGP peer router sees multiple next-hop paths to the virtual
service’s VIP and uses ECMP to balance traffic across the paths. If the virtual service is scaled
out to multiple NSX Advanced Load Balancer SEs, each SE advertises the VIP, on each of its
links to the peer BGP router.

When a virtual service enabled for BGP is placed on its NSX Advanced Load Balancer SE, that SE
establishes a BGP peer session with each of its next-hop BGP peer routers. The NSX Advanced
Load Balancer SE then performs RHI for the virtual service’s VIP by advertising a host route (/32
network mask) to the VIP. The NSX Advanced Load Balancer SE sends the advertisement as a
BGP route update to each of its BGP peers. When a BGP peer receives this update from the NSX
Advanced Load Balancer SE, the peer updates its route table with a route to the VIP that uses the
SE as the next hop. Typically, the BGP peer also advertises the VIP route to its other BGP peers.


The BGP peer IP addresses and the local Autonomous System (AS) number and a few other
settings are specified in a BGP profile on the NSX Advanced Load Balancer Controller. RHI
support is disabled (default) or enabled within the individual virtual service’s configuration. If an
NSX Advanced Load Balancer SE has more than one link to the same BGP peer, this also enables
ECMP support for the VIP. The NSX Advanced Load Balancer SE advertises a separate host route
to the VIP on each of the NSX Advanced Load Balancer SE interfaces with the BGP peer.

If the NSX Advanced Load Balancer SE fails, the BGP peers withdraw the routes that were
advertised to them by the NSX Advanced Load Balancer SE.

BGP Profile Modifications


BGP peer changes are handled as follows:

n If a new peer is added to the BGP profile, the virtual service IP is advertised to the new BGP
peer router without needing to disable/enable the virtual service.

n If a BGP peer is deleted from the BGP profile, any virtual service IPs that had been advertised
to the BGP peer will be withdrawn.

n When a BGP peer IP is updated, it is handled as an add/delete of the BGP peer.
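
For example, a new peer can be added to an existing BGP profile from the Controller CLI without
touching the virtual services that already advertise their VIPs. The following is a minimal sketch
that uses the same vrfcontext CLI shown later in this section; the peer IP and subnet are
illustrative values.

: > configure vrfcontext global
: vrfcontext > bgp_profile
: vrfcontext:bgp_profile > peers peer_ip 10.115.0.2 subnet 10.115.0.0/16
: vrfcontext:bgp_profile:peers > save
: vrfcontext:bgp_profile > save
: vrfcontext > save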

BGP Upstream Router Configuration


In large-scale setups, the BGP control plane can consume significant CPU on the router. The
router's CoPP (Control Plane Policing) policy must be changed to allow a higher rate of BGP
packets; otherwise, BGP packets can get dropped on the router when churn happens.

Note The ECMP route groups or ECMP next-hop groups on the router can be exhausted when
unique SE BGP next hops are advertised for different sets of virtual service VIPs. When such
exhaustion happens, the routers could fall back to a single SE next hop, causing traffic issues.

Example:

The following is the sample config on a Dell S4048 switch for adding 5k network entries and 20k
paths:

w1g27-avi-s4048-1#show ip protocol-queue-mapping
Protocol Src-Port Dst-Port TcpFlag Queue EgPort Rate (kbps)
-------- -------- -------- ------- ----- ------ -----------
TCP (BGP) any/179 179/any _ Q9 _ 10000
UDP (DHCP) 67/68 68/67 _ Q10 _ _
UDP (DHCP-R) 67 67 _ Q10 _ _
TCP (FTP) any 21 _ Q6 _ _
ICMP any any _ Q6 _ _
IGMP any any _ Q11 _ _
TCP (MSDP) any/639 639/any _ Q11 _ _
UDP (NTP) any 123 _ Q6 _ _
OSPF any any _ Q9 _ _
PIM any any _ Q11 _ _
UDP (RIP) any 520 _ Q9 _ _
TCP (SSH) any 22 _ Q6 _ _
TCP (TELNET) any 23 _ Q6 _ _


VRRP any any _ Q10 _ _


MCAST any any _ Q2 _ _
w1g27-avi-s4048-1#show cpu-queue rate cp
Service-Queue Rate (PPS) Burst (Packets)
-------------- ----------- ----------
Q0 600 512
Q1 1000 50
Q2 300 50
Q3 1300 50
Q4 2000 50
Q5 400 50
Q6 400 50
Q7 400 50
Q8 600 50
Q9 30000 40000
Q10 600 50
Q11 300 50

SE-Router Link Types Supported with BGP


The following figure shows the types of links that are supported between NSX Advanced Load
Balancer and BGP peer routers:

[Figure: SE-to-router link types. From left to right: a /31 or /30 subnet with BGP peering between the
interface IPs; a /24 subnet with an SVI in the router and BGP peering between the host and the SVI IP;
an L2 port-channel; separate L3 interfaces in different subnets (/31, or /24 with SVI) peering with the
router's IP on each subnet; and separate L3 interfaces in the same subnet, which is not supported.]

BGP is supported over the following types of links between the BGP peer and the NSX Advanced
Load Balancer SEs:

n Host route (/30 or /31 mask length) to the VIP, with the NSX Advanced Load Balancer SE as
the next hop.

n Network route (/24 mask length) subnet with Switched Virtual Interface (SVI) configured in the
router.

n Layer 2 port-channel (separate physical links configured as a single logical link on the next-hop
switch or router).

n Multiple layer 3 interfaces, in separate subnets (/31 or /24 with SVI). A separate BGP peer
session is set up between each NSX Advanced Load Balancer SE layer 3 interface and the BGP
peer.

Each SE can have multiple BGP peers. For example, an SE with interfaces in separate layer 3
subnets can have a peer session with a different BGP peer on each interface. The connection
between the NSX Advanced Load Balancer SE and the BGP peer on separate Layer 3 interfaces
that are in the same subnet and same VLAN is not supported. Using multiple links to the BGP
peer provides higher throughput for the VIP. The virtual service also can be scaled out for higher
throughput. In either case, a separate host route to the VIP is advertised over each link to the BGP
peer, with the NSX Advanced Load Balancer SE as the next-hop address.

Note This feature is supported for IPv6.

To make debugging easier, some BGP commands can be viewed from the NSX Advanced Load
Balancer Controller shell. For more information, see BGP Visibility.

Optional BGP Route Withdrawal when virtual service Goes Down


If a virtual service advertising its VIPs through BGP goes down, its VIPs are withdrawn from BGP
and become unreachable. With NSX Advanced Load Balancer version 20.1, route withdrawal on
virtual service down is made optional through the advertise_down_vs field.

The following are the features added:

n Field: VirtualService.advertise_down_vs


n Configuration

n To turn on the feature, you can configure as follows:

[admin:amit-ctrl-bgp]: virtualservice> advertise_down_vs


[admin:amit-ctrl-bgp]: virtualservice> save

n To turn off the feature, you can configure as follows:

[admin:amit-ctrl-bgp]: virtualservice> no advertise_down_vs


[admin:amit-ctrl-bgp]: virtualservice>save

Note
n If the virtual service is already down, this configuration change does not affect it; the change
takes effect the next time the virtual service goes down. To apply it to a virtual service that is
already down, disable and then enable the virtual service. The
remove_listening_port_on_vs_down feature does not work if advertise_down_vs is False.

n For custom actions to handle a down virtual service, such as HTTP redirects or serving error
pages, VirtualService.remove_listening_port_on_vs_down must be False, as shown in the
sketch after this note.
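
The following is a minimal sketch of configuring this flag from the Controller CLI, assuming a
virtual service named vs-1; it follows the same pattern as the advertise_down_vs configuration
shown above.

[admin:controller]: > configure virtualservice vs-1
[admin:controller]: virtualservice> no remove_listening_port_on_vs_down
[admin:controller]: virtualservice> save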

Use Case for Adding the Same BGP Peer to Different VRFs
A configuration block prevents the following:

n Adding a BGP peer that belongs to a network with a VRF different from the VRF to which you
are adding the peer

n Changing a network's VRF if the network is being used in the BGP profile

The output of show serviceengine backend_tp_segrp0-se-zcztm vnicdb:

| vnic[3]                 |                                                              |
|   if_name               | avi_eth5                                                     |
|   linux_name            | eth3                                                         |
|   mac_address           | 00:50:56:86:0f:c8                                            |
|   pci_id                | 0000:0b:00.0                                                 |
|   mtu                   | 1500                                                         |
|   dhcp_enabled          | True                                                         |
|   enabled               | True                                                         |
|   connected             | True                                                         |
|   network_uuid          | dvportgroup-2404-cloud-d992824d-d055-4051-94f8-5abe4a323231  |
|   nw[1]                 |                                                              |
|     ip                  | fe80::250:56ff:fe86:fc8/64                                   |
|     mode                | DHCP                                                         |
|   nw[2]                 |                                                              |
|     ip                  | 10.160.4.16/24                                               |
|     mode                | DHCP                                                         |
|   is_mgmt               | False                                                        |
|   is_complete           | True                                                         |
|   avi_internal_network  | False                                                        |
|   enabled_flag          | False                                                        |
|   running_flag          | True                                                         |
|   pushed_to_dataplane   | True                                                         |
|   consumed_by_dataplane | True                                                         |
|   pushed_to_controller  | True                                                         |
|   can_se_dp_takeover    | True                                                         |
|   vrf_ref               | T-0-default                                                  |
|   vrf_id                | 2                                                            |
|   ip6_autocfg_enabled   | False                                                        |
| vnic[7]                 |                                                              |
|   if_name               | avi_eth6                                                     |
|   linux_name            | eth4                                                         |
|   mac_address           | 00:50:56:86:12:0e                                            |
|   pci_id                | 0000:0c:00.0                                                 |
|   mtu                   | 1500                                                         |
|   dhcp_enabled          | True                                                         |
|   enabled               | True                                                         |
|   connected             | True                                                         |
|   network_uuid          | dvportgroup-69-cloud-d992824d-d055-4051-94f8-5abe4a323231    |
|   nw[1]                 |                                                              |
|     ip                  | 10.160.4.21/24                                               |
|     mode                | DHCP                                                         |
|   nw[2]                 |                                                              |
|     ip                  | 172.16.1.90/32                                               |
|     mode                | VIP                                                          |
|     ref_cnt             | 1                                                            |
|   nw[3]                 |                                                              |
|     ip                  | fe80::250:56ff:fe86:120e/64                                  |
|     mode                | DHCP                                                         |
|   is_mgmt               | False                                                        |
|   is_complete           | True                                                         |
|   avi_internal_network  | False                                                        |
|   enabled_flag          | False                                                        |
|   running_flag          | True                                                         |
|   pushed_to_dataplane   | True                                                         |
|   consumed_by_dataplane | True                                                         |
|   pushed_to_controller  | True                                                         |
|   can_se_dp_takeover    | True                                                         |
|   vrf_ref               | T-0-default                                                  |
|   vrf_id                | 2                                                            |
|   ip6_autocfg_enabled   | False                                                        |

[T-0:tp_bm-ctlr1]: > show vrfcontext


+-------------+-------------------------------------------------+
| Name | UUID |
+-------------+-------------------------------------------------+
| global | vrfcontext-0287e5ea-a731-4064-a333-a27122d2683a |
| management | vrfcontext-c3be6b14-d51d-45fc-816f-73e26897ce84 |
| management | vrfcontext-1253beae-4a29-4488-80d4-65a732d42bb4 |
| global | vrfcontext-e2fb3cae-f4a6-48d5-85be-cb06293608d6 |
| T-0-default | vrfcontext-1de964c7-3b6b-4561-9005-8f537db496ea |
| T-0-VRF | vrfcontext-04bb20ef-1cbc-498b-b5ce-2abf68bae321 |
| T-1-default | vrfcontext-9bea0022-0c15-44ea-8813-cfd93f559261 |
| T-1-VRF | vrfcontext-18821ea1-e1c7-4333-a72b-598c54c584d5 |
+-------------+-------------------------------------------------+

[T-0:tp_bm-ctlr1]: > show vrfcontext T-0-default


+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-1de964c7-3b6b-4561-9005-8f537db496ea |
| name | T-0-default |
| bgp_profile | |
| local_as | 65000 |
| ibgp | True |
| peers[1] | |
| remote_as | 65000 |
| peer_ip | 10.160.4.1 |
| subnet | 10.160.4.0/24 |
| md5_secret | |
| bfd | True |
| network_ref | PG-4 |
| advertise_vip | True |
| advertise_snat_ip | False |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| peers[2] | |
| remote_as | 65000 |
| peer_ip | 10.160.2.1 |
| subnet | 10.160.2.0/24 |
| md5_secret | |
| bfd | True |
| network_ref | PG-2 |
| advertise_vip | False |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| keepalive_interval | 60 |
| hold_time | 180 |
| send_community | True |
| shutdown | False |
| system_default | False |
| lldp_enable | True |
| tenant_ref | admin |
| cloud_ref | backend_vcenter |
+----------------------------+-------------------------------------------------+

Note
n The tenant (tenant VRF enabled) specific SE is configured with a PG-4 interface in VRF context
(T-0-default) which belongs to the tenant and not the actual VRF context (global) in which the
PG-4 is configured.

n From a placement perspective, if you initiate adding a vNIC to a Service Engine for a virtual
service, the vNIC's VRF will always be the VRF of the virtual service. This change blocks you
from adding a BGP peer to a vrfcontext if the BGP peer belongs to a network that has
a different vrfcontext. The change is necessary as this configuration can cause traffic to be
dropped.

n Because there is no particular use case for having a VRF-A with BGP peers that belong to
networks in VRF-B, such configuration changes are not allowed.

n Additionally, if you change an existing network's VRF while there are BGP peers in that
network's VRF that belong to this network, the change is blocked.

Bidirectional Forwarding Detection (BFD)


BFD is supported for the fast detection of failed links. BFD enables networking peers on each end
of a link to quickly detect and recover from a link failure. Typically, BFD detects a broken link
faster than waiting for BGP to detect the down link.

For example, if an NSX Advanced Load Balancer SE fails, BFD on the BGP peer router can quickly
detect and correct the link failure.

Note With NSX Advanced Load Balancer release 21.1.2, the BFD feature supports BGP multi-hop
implementation.
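
BFD is enabled or disabled per BGP peer through the bfd field shown in the vrfcontext outputs
later in this section. The following is a minimal sketch, assuming a peer already exists at index 1 in
the global VRF, of disabling and then re-enabling BFD for that peer from the Controller CLI.

[admin:controller]: > configure vrfcontext global
[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile> peers index 1
[admin:controller]: vrfcontext:bgp_profile:peers> no bfd
[admin:controller]: vrfcontext:bgp_profile:peers> bfd
[admin:controller]: vrfcontext:bgp_profile:peers> save
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save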

Scaling
Scaling out/in virtual services is supported. In this example, a virtual service placed on the NSX
Advanced Load Balancer SE on the 10.10.10.x network is scaled out to 3 additional NSX Advanced
Load Balancer SEs.

[Figure: The virtual service is scaled out across four SEs behind three switch/router pairs. The spine
router's route table lists four equal-cost next hops for the VIP 10.100.10.100: 10.10.10.3, 10.10.15.20,
10.10.20.2, and 10.10.50.10.]

Flow Resiliency During Scale-Out/In


A flow is a 5-tuple: src-IP, src-port, dst-IP, dst-port, and protocol. Routers do a hash of the
5-tuple to pick which equal-cost path to use. When an SE scale-out occurs, the router is given yet
another path to use, and its hashing algorithm can make different choices, as a result disrupting
existing flows. To gracefully cope with this BGP-based scale-out issue, NSX Advanced Load
Balancer supports resilient flow handling using IP-in-IP (IPIP) tunnelling. The following sequence
shows how this is done.

[Figures 1-3: The virtual service on SEs A-D, with an ongoing flow to SE-A; scale-out adds SE-E.]

Figure 1 shows the virtual service placed on four SEs, with a flow ongoing between a client and
SE-A. In figure 2, there is a scale-out to SE-E. This changes the hash on the router. Existing flows
get rehashed to other SEs. In this particular example, suppose it is SE-C.

[Figures 4-6: Flow probe from SE-C, ownership response from SE-A, and an IPIP tunnel from SE-C to SE-A.]

In the NSX Advanced Load Balancer implementation, SE-C sends a flow probe to all other SEs
(figure 4). Figure 5 shows SE-A responding to claim ownership of the depicted flow. In figure 6,
SE-C uses IPIP tunnelling to send all packets of this flow to SE-A.

[Figure 7: SE-A processes the tunnelled flow and responds directly to the client.]

In figure 7, SE-A continues to process the flow and sends its response directly to the client.

Flow Resiliency for Multi-homed BGP Virtual Service


The flow resiliency is supported when there is a BGP virtual service that is configured to advertise
its VIP to more than one peer in the front end and is configured to advertise SNAT IP associated
with virtual service to more than one peer in the back end.

In such a setup, when one of the links goes down, the BGP withdraws the routes from that
particular NIC causing rehashing of that flow to another interface on the same SE or to another SE.
The new SE that receives the flow tries to recover the flow with a flow probe which fails because of
the interface going down.

The problem is seen with both the front end and the back end flows.

For the front end flows to be recovered, the flows must belong to a BGP virtual service that is
placed on more than one NIC on a Service Engine.

For the back end flows to be recovered, the virtual service must be configured with SNAT IPs and
must be advertised through BGP to multiple peers in the back end.

[Figure: An NSX Advanced Load Balancer SE peered with three front-end routers (BGP Peer FE-Router-1,
FE-Router-2, and FE-Router-3 on the 10.10.114.0/24, 10.10.115.0/24, and 10.10.116.0/24 networks) and
three back-end routers (BGP Peer BE-Router-1, BE-Router-2, and BE-Router-3 on the 10.10.117.0/24,
10.10.118.0/24, and 10.10.119.0/24 networks) that provide reachability to the server network
10.10.120.0/24.]

Mandatory requirements:

n The SE needs two or more peers on the front end to which it advertises the BGP virtual service's
VIP.

n The BGP virtual service needs to use a custom SNAT IP for back-end connections, advertised to
two or more peers in the back end.

Recovering Frontend Flows


Flow recovery within the same SE

If the interface goes down, the flow table entries are not deleted. If the flow lands on another
interface, a flow probe is triggered, which migrates the flow from the old flow table to the new
interface on which the flow landed.

The interface down event is reported to the Controller, and the Controller removes the VIP
placement from the interface. This causes the primary virtual service entry to be reset. If the same
flow now lands on a new interface, it triggers a flow probe and flow migration, provided the virtual
service was initially placed on more than one interface.

Flow recovery on a scaled-out SE

If the flow lands on a new SE, remote flow probes are triggered. A flag called relay is added to
the flow-probe message. This flag indicates that all the receiving interfaces must relay the flow
probes to the other flow tables where the flow can reside. The flag is set by the sender of the
flow probe when the virtual service is detected as a BGP scaled-out virtual service.

On the receiving SE, the messages are relayed to the other flow tables, resulting in a flow
migration. The subsequent flow probe from the new SE therefore elicits a response, because the
flow now resides on an interface that is up and running.

If there is more than one interface on the flow-probe receiving SE, they will all trigger a flow-
migrate.

Recovering Back end Flows


The back end flows can be migrated only if the SNAT IP is used for the back end connection.
When multiple BGP peers are configured on the back end, and the servers are reachable through
more than one route, SNAT IP is placed on all the interfaces. Also, the flow table entries are
created on all the interfaces in the back end.

This results in the flow being recovered if an interface fails and the flow lands on another
interface with a flow table entry.

Message Digest 5 (MD5) Authentication


BGP supports an authentication mechanism using the Message Digest 5 (MD5) algorithm. When
authentication is enabled, any TCP segment belonging to BGP exchanged between the peers,
is verified and accepted only if authentication is successful. For authentication to be successful,
both the peers must be configured with the same password. If authentication fails, the BGP peer
session will not be established. BGP authentication can be very useful because it makes it difficult
for any malicious user to disrupt network routing tables.

Enabling MD5 Authentication for BGP


To enable MD5 authentication, specify md5_secret in the respective BGP peer configuration. MD5
support is extended to OpenShift cloud where the Service Engine runs as docker container but
peers with other routers masquerading as host.
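
The following is a minimal sketch of setting md5_secret on an existing peer from the Controller
CLI; the peer index and the shared secret are illustrative values. The same secret must also be
configured on the peer router, as shown in the FRR example later in this section.

[admin:controller]: > configure vrfcontext global
[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile> peers index 1
[admin:controller]: vrfcontext:bgp_profile:peers> md5_secret abcd
[admin:controller]: vrfcontext:bgp_profile:peers> save
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save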


Mesos Support
BGP is supported for north-south interfaces in Mesos deployments. The SE container that is
handling the virtual service will establish a BGP peer session with the BGP router configured in
the BGP peering profile for the cloud. The SE then injects a /64 route (host route) to the VIP, by
advertising the /64 to the BGP peer.

The following requirements apply to the BGP peer router:

n The BGP peer must allow the SE’s IP interfaces and subnets in its BGP neighbor configuration.
The SE will initiate the peer connection with the BGP router.

n For eBGP, the peer router will detect the time-to-live (TTL) value decremented for the BGP
session. This can prevent the session from coming up. This issue can be prevented from
occurring by setting the eBGP multi-hop TTL. For example, on Juniper routers, the eBGP
multi-hop TTL must be set to 64.


Enabling BGP Features in NSX Advanced Load Balancer


Configuration of BGP features in NSX Advanced Load Balancer is accomplished by configuring a
BGP profile, and by enabling RHI in the virtual service’s configuration.

n Configure a BGP profile. The BGP profile specifies the local Autonomous System (AS) ID that
the NSX Advanced Load Balancer SE and each of the peer BGP routers are in, and the IP
address of each peer BGP router.

n Enable the Advertise VIP using the BGP option on the Advanced tab of the virtual service’s
configuration. This option advertises a host route to the VIP address, with the NSX Advanced
Load Balancer SE as the next hop.

Note When BGP is configured on the global VRF on LSC in-band, the BGP configuration is applied on
the SE only when a virtual service is configured on the SE. Until then, peering between the SE and the
peer router does not happen.

Using NSX Advanced Load Balancer UI


To configure a BGP profile using the web interface:

Procedure

1 Navigate to Infrastructure > Routing.

2 Select the Cloud.

3 Click the BGP Peering tab, and click the Edit icon to reveal more fields.

4 Enter the following information.

n Local Autonomous System ID: a value between 1 and 4294967295


n BGP type: iBGP or eBGP

5 Click Add New Peer to reveal a set of fields appropriate to iBGP or eBGP.

Note Remote AS is an additional field in eBGP. BGP peering (as eBGP) is explained as
follows:

n SE placement network

n Subnet provides reachability for peer

n Peer BGP router’s IP address

n Remote AS, a value between 1 and 4294967295, applies only to eBGP

n Peer Autonomous System MD5 digest secret key

n BFD option (on by default; enables very fast link failure detection through BFD; only
async mode is supported)

n Advertise VIP (to this peer, on by default)

n Advertise SNAT (to this peer, on by default)

6 Click Save. The BGP Peering screen appears.


Enabling BGP Timers


BGP timers (Advertisement Interval, Connection Timer, Keepalive Interval, and Hold Time) can be
configured using the NSX Advanced Load Balancer UI.

Navigate to Infrastructure > Routing, and select BGP Peering. Enter the desired values for the
timers.

Using NSX Advanced Load Balancer CLI


The following commands configure the BGP profile. The BGP profile is included under NSX
Advanced Load Balancer’s virtual routing and forwarding (VRF) settings.

BGP configuration is tenant-specific and is part of the VRF context. Accordingly, the bgp_profile
sub-options appear under the appropriate tenant's vrfcontext.

: > configure vrfcontext management


Multiple objects found for this query.
[0]: vrfcontext-52d6cf4f-55fa-4f32-b774-9ed53f736902#management in tenant admin,
Cloud AWS-Cloud
[1]: vrfcontext-9ff610a4-98fa-4798-8ad9-498174fef333#management in tenant admin,
Cloud Default-Cloud
Select one: 1
Updating an existing object. Currently, the object is:
+----------------+-------------------------------------------------+
| Field | Value |
+----------------+-------------------------------------------------+
| uuid | vrfcontext-9ff610a4-98fa-4798-8ad9-498174fef333 |
| name | management |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------+-------------------------------------------------+
: vrfcontext > bgp_profile
: vrfcontext:bgp_profile > local_as 100
: vrfcontext:bgp_profile > ibgp
: vrfcontext:bgp_profile > peers peer_ip 10.115.0.1 subnet 10.115.0.0/16 md5_secret abcd
: vrfcontext:bgp_profile:peers > save
: vrfcontext:bgp_profile > save
: vrfcontext > save
: >

This profile enables iBGP with peer BGP router 10.115.0.1/16 in local AS 100. The BGP connection
is secured using MD5 with shared secret “abcd.”

The following commands enable RHI for a virtual service (vs-1):

: > configure virtualservice vs-1


: virtualservice > enable_rhi


: virtualservice > save


: >

The following commands enable RHI for a source-NAT’ed floating IP address for a virtual service
(vs-1):

: > configure virtualservice vs-1


: virtualservice > enable_rhi_snat
: virtualservice > save
: >

The following command can be used to view the virtual service’s configuration:

: > show virtualservice

Two configuration knobs have been added to configure the per-peer “advertisement-interval” and
“connect” timer in Quagga BGP:

advertisement_interval: Minimum time between advertisement runs, default = 5 seconds


connect_timer: Time due for connect timer, default = 10 seconds

Usage is illustrated in this CLI sequence:

[admin:controller]:> configure vrfcontext management


Multiple objects found for this query.
[0]: vrfcontext-52d6cf4f-55fa-4f32-b774-9ed53f736902#management in tenant admin, Cloud
AWS-Cloud
[1]: vrfcontext-9ff610a4-98fa-4798-8ad9-498174fef333#management in tenant admin, Cloud
Default-Cloud
Select one: 1
Updating an existing object. Currently, the object is:
+----------------+-------------------------------------------------+
| Field | Value |
+----------------+-------------------------------------------------+
| uuid | vrfcontext-9ff610a4-98fa-4798-8ad9-498174fef333 |
| name | management |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------+-------------------------------------------------+
[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile> peers
New object being created
[admin:controller]: vrfcontext:bgp_profile:peers> advertisement_interval 10
Overwriting the previously entered value for advertisement_interval
[admin:controller]: vrfcontext:bgp_profile:peers> connect_timer 20
Overwriting the previously entered value for connect_timer
[admin:controller]: vrfcontext:bgp_profile:peers> save
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save


Configuration knobs have been added to configure the keepalive interval and hold timer on a
global and per-peer basis:

[admin:controller]: > configure vrfcontext global


[admin: controller]: vrfcontext> bgp_profile

Overwriting the previously entered value for keepalive_interval:

[admin: controller]: vrfcontext:bgp_profile> keepalive_interval 30

Overwriting the previously entered value for hold_time:

[admin: controller]: vrfcontext:bgp_profile> hold_time 90


[admin: controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save
[admin:controller]:>

The above commands configure the keepalive/hold timers on a global basis, but those values
can be overridden for a given peer using the following per-peer commands. Both the global and
per-peer knobs have default values of 60 seconds for the keepalive timer and 180 seconds for the
hold timer.

[admin:controller]: > configure vrfcontext global


[admin: controller]: vrfcontext> bgp_profile
[admin: controller]: vrfcontext:bgp_profile> peers index 1

Overwriting the previously entered value for keepalive_interval:

[admin: controller]: vrfcontext:bgp_profile:peers> keepalive_interval 10

Overwriting the previously entered value for hold_time:

[admin: controller]: vrfcontext:bgp_profile:peers> hold_time 30


[admin:controller]: vrfcontext:bgp_profile:peers> save
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save

Example:
The following is an example of router configuration when the BGP peer is FRR:

You need to find the interface information of the SE, which is peering with the router.

[admin-ctlr1]: > show serviceengine 10.79.170.52 interface summary | grep ip_addr


| ip_addr | fe80:1::250:56ff:fe91:1bed |
| ip_addr | 10.64.59.48 |
| ip_addr | fe80:2::250:56ff:fe91:b2 |
| ip_addr | 10.115.10.45 |

Here 10.115.10.45 matches the subnet in the peer configuration in vrfcontext->bgp_profile object.


In the FRR router, the CLI is as follows:

# vtysh
Hello, this is FRRouting (version 7.2.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

frr1# configure t
frr1(config)# router bgp 100
frr1(config-router)# neighbor 10.115.10.45 remote-as 100
frr1(config-router)# neighbor 10.115.10.45 password abcd
frr1(config-router)# end
frr1#

You need to perform this for all the SEs that will be peering.

‘show serviceengine < > route’ Filter


The following is the CLI command to use show serviceengine <SE_ip> route:

[admin:controller]: > show serviceengine 10.19.100.1 route filter


configured_routes Show routes configured using controller
dynamic_routes Show routes learned through routing protocols
host_routes Show routes learned from host
vrf_ref Only this Vrf

Note If no VRF is provided in the filters, the command output shows routes from the global VRF,
which is present in the system by default.
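
For example, to restrict the output to the global VRF, append the vrf_ref filter to the command.
This is a usage sketch; the SE IP address is the one used in the example above.

[admin:controller]: > show serviceengine 10.19.100.1 route filter vrf_ref global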

Enable Gratuitous ARP


You can enable gratuitous ARP for the virtual service allocated through BGP. This feature is
enabled at the Service Engine group level as shown:

[admin:controller]: > configure serviceenginegroup se_group_test


[admin:controller]: serviceenginegroup> enable_gratarp_permanent
[admin:controller]: serviceenginegroup> save

With NSX Advanced Load Balancer release 20.1.1, the BFD parameters are user-configurable
using the CLI. For more information, see Configuring High Frequency BFD.

Selective VIP Advertisement


A BGP-based virtual service implies that its VIPs are advertised through BGP. The VIPs are
advertised to all the peers for which the field advertise_vip is enabled.

You can select the VIPs to be advertised using labels. When configuring the VSVIP, you can
define that all the peers with a specific label should have a specific VIP advertised. Each peer
on the front end receives the VIP route advertisement only from the virtual services if the label
matches that of the peer.

Consider the example where,

n One SE is connected to three front end routers, FE-Router-1, FE-Router-2, FE-Router-3.


n FE-Router-1, FE-Router-2, and FE-Router-3 have labels Peer1, Peer2, and Peer3 respectively.

n There are three virtual services in the Global VRF: VS1, VS2, and VS3.

n VS1 (1.1.1.1) is configured with the label Peer1. This implies that the virtual service will be
advertised to Peer1.

n Similarly, VS2 will be advertised to Peer2 and VS3 to Peer3, as defined by the labels.

Whenever BGP is enabled for a virtual service, the VIP will be advertised to all the front end
routers. However, in this case, the VIP will be advertised to the selected peer only.

To implement this, the labels list bgp_peer_labels is introduced in the VSVIP object configuration.

VsVip.bgp_peer_labels is a list of unique strings (with a maximum of 128 strings).

The length of each string can be a maximum of 128 characters. A label can consist of upper and
lower case alphabets, numbers, underscores, and hyphens.

Note
n If the VSVIP does not have any label, it will be advertised to all BGP peers with advertise_vip
set to True.

n If the VSVIP has the bgp_peer_labels, the peer with the field advertise_vip is set to True and
the label matching the bgp_peer_labels will receive VIP advertisement. However, if the BGP
peer configuration either has no label or if the label does not match, the peer will not receive
the VIP advertisement.

Configuring BGP Peer Labels


Consider an example where VS1 is a BGP virtual service with a VSVIP vs1-vsvip.

Global VRF has one peer without any labels.

To enable selective VIP advertisement, add the label Peer1 for the peer, and add the Peer1 label in
VsVip.bgp_peer_labels.

Configuring VRF with BGP Peers

The following are the steps to configure BGP peer labels, from the NSX Advanced Load Balancer
UI:

1 Navigate to Infrastructure > Cloud Resources > Routing.

2 Click Add to view the BGP Profile screen.

3 Configure the details under General, and Routing as required.

4 Under Peers, click Add.

5 Select the Label used for advertisement of routes to this peer.

6 Select the Placement Network.

7 Enable the option Advertise VIP to Peer.

8 Click Save.


Alternatively, BGP peer can be configured using the CLI as shown below:

configure vrfcontext global


Updating an existing object. Currently, the object is:
+----------------------------+------------------------------------------+
| Field | Value |
+----------------------------+------------------------------------------+
| uuid | vrfcontext-a1c097dd-f58e-45ca-b90a-6de72a4fd19d |
| name | global |
| bgp_profile | |
| local_as | 65000 |
| ibgp | True |
| peers[1] | |
| remote_as | 65000 |
| peer_ip | 10.10.114.19/24 |
| subnet | 10.10.114.0/24 |
| bfd | True |
| network_ref | vxw-dvs-34-virtualwire-15-sid-1060014-blr-01-vc06-avi-dev010 |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| keepalive_interval | 60 |
| hold_time | 180 |
| send_community | True |
| shutdown | False |
| system_default | True |
| lldp_enable | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+------------------------------------------+
[admin:]: vrfcontext> bgp_profile
[admin:]: vrfcontext:bgp_profile> peers index 1
[admin:]: vrfcontext:bgp_profile:peers> label Peer1
[admin:]: vrfcontext:bgp_profile:peers> save
[admin:]: vrfcontext:bgp_profile> save
[admin:]: vrfcontext> save

Configuring VSVIP 1

From the VS VIP creation screen of the NSX Advanced Load Balancer, add BGP peer labels, as
required:

configure vsvip vs1-vsvip


Updating an existing object. Currently, the object is:
+-----------------------------+-----------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------+
| uuid | vsvip-0cab1bbb-d474-4365-8ba4-9d6a3f0add34 |
| name | vs1-vsvip |
| vip[1] | |
| vip_id | 0 |
| ip_address | 1.1.1.1 |
| enabled | True |
| auto_allocate_ip | False |
| auto_allocate_floating_ip | False |
| avi_allocated_vip | False |
| avi_allocated_fip | False |
| auto_allocate_ip_type | V4_ONLY |
| prefix_length | 32 |
| vrf_context_ref | global |
| east_west_placement | False |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+-----------------------------+-----------------------------------------+
[admin:]: vsvip> bgp_peer_labels Peer1
[admin:]: vsvip> save

Caveats
n This feature is only applicable to BGP-virtual services. For virtual services that do not use BGP,
the field bgp_peer_labels cannot be enabled.

n When selective VIP advertisement is configured, the option use_vip_as_snat cannot be
enabled.

BGP Label-based Virtual Service Placement


Prior to NSX Advanced Load Balancer version 21.1.4, the placement of BGP virtual services was
limited to a maximum of four distinct peer networks and was not label-aware. If there were more
than four distinct peer networks, the Controller chose any four of them at random.

Starting with NSX Advanced Load Balancer version 21.1.4, the virtual service placement is done
based on the BGP peer label configuration.

Use Case 1

A virtual service VIP with a label X, for instance, can only be placed on SEs having BGP peering
with peers containing label X.

+-----------+-----------------+-----------------------+
| VRF       | BGP Peer Labels | VSVIP BGP Peer Labels |
+-----------+-----------------+-----------------------+
| Network 1 | Label 1         | Label 1               |
| Network 2 | Label 2         | Label 2               |
| Network 3 | Label 3         | Label 3               |
| Network 4 | Label 4         | Label 4               |
| Network 5 | Label 5         |                       |
+-----------+-----------------+-----------------------+

In this case, the virtual service is placed on Network 1, Network 2, Network 3, and Network 4.

Use Case 2


A virtual service VIP with no labels can be placed on SEs having BGP peering with peers in any
subnet. That is, if the BGP peer has labels but the BGP virtual service VIP does not have a label,
the virtual service VIP is advertised and placed on all peer NICs (maximum of four distinct peer
networks).

+-----------+-----------------+-----------------------+
| VRF       | BGP Peer Labels | VSVIP BGP Peer Labels |
+-----------+-----------------+-----------------------+
| Network 1 | Label 1         |                       |
| Network 2 | Label 2         |                       |
| Network 3 | Label 3         |                       |
| Network 4 | Label 4         |                       |
| Network 5 | Label 5         |                       |
+-----------+-----------------+-----------------------+

In this case, the virtual service is randomly placed on any one of the four networks.

Use Case 3

If the virtual service VIP is updated later to associate a label, then when the SE receives the
virtual service SE_List update, the VIP is withdrawn from all the other peers and is placed only on
the NIC pertaining to the peer with the matching label (disruptive update).

For example, the initial configuration is as below:

+-----------+-----------------+-----------------------+
| VRF       | BGP Peer Labels | VSVIP BGP Peer Labels |
+-----------+-----------------+-----------------------+
| Network 1 | Label 1         | Label 1               |
| Network 2 | Label 2         | Label 2               |
| Network 3 | Label 3         | Label 3               |
| Network 4 | Label 4         |                       |
| Network 5 | Label 5         |                       |
+-----------+-----------------+-----------------------+

Note Updates done on the VS VIP to associate the labels will lead to disruptive update of the
virtual service.

The updated configuration is as shown below:

+-----------+-----------------+-----------------------+
| VRF       | BGP Peer Labels | VSVIP BGP Peer Labels |
+-----------+-----------------+-----------------------+
| Network 1 | Label 1         | Label 1               |
| Network 2 | Label 2         | Label 2               |
| Network 3 | Label 3         | Label 3               |
| Network 4 | Label 4         | Label 4               |
| Network 5 | Label 5         |                       |
+-----------+-----------------+-----------------------+

Use Case 4

If the label is removed from the virtual service and the virtual service VIP is left with no label, then
the virtual service VIP is placed on all the peer NICs (maximum of four distinct peer networks).

Use Case 5

If the virtual service VIP is created with labels for which there is no matching peer, VS VIP creation
is blocked due to invalid configuration, whether it is at the time of creating the virtual service or if
the virtual service VIP label is updated later.

+-----------+-----------------+-----------------------+
| VRF       | BGP Peer Labels | VSVIP BGP Peer Labels |
+-----------+-----------------+-----------------------+
| Network 1 | Label 1         | Label 5               |
| Network 2 | Label 2         |                       |
| Network 3 | Label 3         |                       |
| Network 4 | Label 4         |                       |
+-----------+-----------------+-----------------------+

In this case, since there are no matching VS VIP BGP-peer labels, VS VIP creation is blocked with
No BGP Peer exists with matching labels error.

High Frequency BFD


Bidirectional Forwarding Detection (BFD) enables networking peers on each end of a link to
quickly detect and recover from a link failure. BFD detects and repairs a broken link faster than by
waiting for BGP to detect the down link. Peer link failure detection time via BFD is a minimum of
three seconds, by default.

The BFD parameters are user-configurable, for faster failure detection. This gives you the flexibility
to choose the frequency of failure detection, as required.


High Frequency BFD Parameters

+-------+----------------------------------------------------+----------------------------+---------------+
| Field | Description                                        | Range                      | Default Value |
+-------+----------------------------------------------------+----------------------------+---------------+
| minrx | The minimum rate at which the packets are received | 10-60000 (in milliseconds) | 1000          |
| mintx | The minimum rate at which the packets are sent     | 10-60000 (in milliseconds) | 1000          |
| multi | The detection multiplier used in BFD               | 3-255                      | 3             |
+-------+----------------------------------------------------+----------------------------+---------------+

For instance, if multi = 3, the SE mintx = 500 ms, and the remote BFD minrx = 500 ms, then the SE is
marked as Down if no packets are received from it for 3*500 ms.

Configuring BFD Parameters


The BFD parameters are configured in the VRF context.

To configure the same,

1 Login to the NSX Advanced Load Balancer shell with your credentials

2 Enter the configure vrfcontext <vrfcontext name> as shown below:

[admin:abc-ctrl]: > configure vrfcontext global


Updating an existing object. Currently, the object is:
+----------------------------+-------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------+
| uuid |
vrfcontext-1f42f9e5-2aee-4509-95c8-52a9d994fbe1
|
| name | global |
| bgp_profile | |
| local_as | 65000 |
| ibgp | True |
| peers[1] | |
| remote_as | 65000 |
| peer_ip | 100.64.50.21 |
| subnet | 100.64.50.0/24 |
| md5_secret | |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | False |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| peers[2] | |
| remote_as | 65000 |
| peer_ip | 100.64.50.3 |
| subnet | 100.64.50.0/24 |
| md5_secret | |
| bfd | True |
| advertise_vip | False |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| ebgp_multihop | 0 |
| shutdown | False |
| keepalive_interval | 60 |
| hold_time | 180 |
| send_community | True |
| shutdown | False |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------+

3 A new parameter called bfd_profile is introduced within which the BFD parameters are
configured. In the bfd_profile, enter the values for mintx, minrx, and multi.

4 Save the configuration.

[admin:abc-ctrl]: vrfcontext>
[admin:abc-ctrl]: vrfcontext> bfd_profile
[admin:abc-ctrl]: vrfcontext:bfd_profile> mintx 1500
[admin:abc-ctrl]: vrfcontext:bfd_profile> minrx 1500
[admin:abc-ctrl]: vrfcontext:bfd_profile> multi 3
[admin:abc-ctrl]: vrfcontext:bfd_profile> save
[admin:abc-ctrl]: vrfcontext> save

The BFD parameters are now configured as per the values entered.

In prior releases, the minimum timeout duration was 3 seconds. A minimum timeout duration as
low as 1.5 seconds is now supported.

BGP/BFD Visibility
NSX Advanced Load Balancer uses Quagga for BGP based scaling of virtual services. Therefore,
debugging or checking the BGP configuration or the status of the BGP peer was possible only by
logging into the Quagga instance of the Service Engine.

For more information, see How to Access and Use Quagga Shell using NSX Advanced Load
Balancer CLI.

To make debugging easier, starting with NSX Advanced Load Balancer release 20.1.1, these
commands can be viewed from the NSX Advanced Load Balancer Controller shell.


Viewing BGP/BFD Configuration using the Controller


Log in to the Controller shell with your credentials and view the required BGP/BFD commands
discussed as follows:

n Advertised Routes

n Peer Status

n Peer Info

n Running Configuration

n BFD Session Status

Advertised Routes

Command Filters Applicable

/serviceengine/<se_uuid>/bgp/advertised_routes n vrf_ref
n peer_ip

Use the command bgp advertised_routes to view the BGP routes advertised to configured
peers:

[admin:1234-ctrl]: > show serviceengine 10.79.168.63 bgp advertised_routes


+----------------------+---------------------------------------------------------------------+
| Field | Value |
+----------------------+---------------------------------------------------------------------+
| vrf | global |
| namespace | avi_ns1 |
| advertised_routes[1] | |
| ipv4_routes | show ip bgp |
| | BGP table version is 0, local router ID is 2.146.114.58 |
| | Status code |
| | s: s suppressed, d damped, h history, * valid, > best, = multipath, |
| | |
| | i internal, r RIB-failure, S Stale, R Removed |
| | Origin codes: i - IGP, e - EGP, |
| | ? - incomplete |
| | |
| | Network Next Hop Metric LocPrf Weight Pat |
| | h |
| | *> 1.1.1.1/32 0.0.0.0 0 32768 i |
| | *> 2.2.2.2/32 |
| | 0.0.0.0 0 32768 i |
| | |
| | Total number of prefixes 2 |
| | 10-7 |
| | 9-168-63# |
| | |
| ipv6_routes | show ipv6 bgp |
| | No BGP network exists |
| | 10-79-168-63# |
| | |
+----------------------+---------------------------------------------------------------------+


+----------------------+------------------------------+
| Field | Value |
+----------------------+------------------------------+
| vrf | seagent-default |
| namespace | none |
| advertised_routes[1] | |
| ipv4_routes | show ip bgp |
| | No BGP process is configured |
| | 10-79-168-63# |
| | |
| ipv6_routes | show ipv6 bgp |
| | No BGP process is configured |
| | 10-79-168-63# |
| | |
+----------------------+------------------------------+

This output shows the advertised routes across all VRFs. To view the advertised routes for a
specific VRF, use the vrf_ref filter as shown below:

admin:1234-ctrl]: > show serviceengine 10.79.168.63 bgp advertised_routes filter vrf_ref


global
+----------------------+-------------------------------------------------------+
| Field | Value |
+----------------------+-------------------------------------------------------+
| vrf | global |
| namespace | avi_ns1 |
| advertised_routes[1] | |
| peer_ip | 100.64.50.21 |
| ipv4_routes | show ip bgp neighbors 100.64.50.21 advertised-routes |
| | BGP table version is 0, lo |
| | cal router ID is 2.146.114.58 |
| | Status codes: s suppressed, d damped, h history, * |
| | valid, > best, = multipath, |
| | i internal, r RIB-failure, S Stale, R |
| | Removed |
| | Origin codes: i - IGP, e - EGP, ? - incomplete |
| | |
| | Network Nex |
| | t Hop Metric LocPrf Weight Path |
| | *> 1.1.1.1/32 100.64.50.14 |
| | 0 100 32768 i |
| | |
| | Total number of prefixes 1 |
| | 10-79-168-63# |
| | |
| ipv6_routes | show bgp neighbors 100.64.50.21 advertised-routes |
| | % No such neighbor or address |
| | family |
| | 10-79-168-63# |
| | |
| advertised_routes[2] | |
| peer_ip | 100.64.50.3 |
| ipv4_routes | show ip bgp neighbors 100.64.50.3 advertised-routes |
| | 10-79-168-63# |


| | |
| ipv6_routes | show bgp neighbors 100.64.50.3 advertised-routes |
| | % No such neighbor or address |
| | family |
| | 10-79-168-63# |
| | |
+----------------------+-------------------------------------------------------+

Peer-wise advertised routes are displayed when using vrf_ref.

Note Use the peer filter to view the advertised routes for a specific peer using show
serviceengine <se_name> bgp advertised_routes filter vrf_ref <vrf_name> peer_ipv4
<peer_IP>.

Peer Status
Command Filters Applicable

/serviceengine/<se_uuid>/bgp/peers_status vrf_ref

When advertising BGP routes to peers, use the bgp peer_status flag to check whether the
advertisement was successful:

[admin:abc-ctrl]: > show serviceengine 10.79.168.63 bgp peer_status


+-------------
+----------------------------------------------------------------------------------+
| Field |
Value |
+-------------
+----------------------------------------------------------------------------------+
| vrf |
global |
| namespace |
avi_ns1 |
| ipv4_status |
show ip bgp summary |
| | BGP
router identifier 2.146.114.58, local AS number 65000 |
| |
R |
| | IB
entries 3, using 336 bytes of memory |
| | Peers
2, using 9136 bytes of memory |
|
| |
| |
Nei |
| | ghbor
V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxR |
| |
cd |
| | 100.64.50.3


4 65000 0 0 0 0 0 never Active |


|
| |
| | 100.64.50.21
4 65000 281 283 0 0 0 04:38:38 0 |
|
| |
|
| |
| | Total
number of neighbors 2 |
| |
10-79-168-63# |
|
| |
| ipv6_status |
show bgp summary |
| | No
IPv6 neighbor is configured |
| |
10-79-168-63# |
|
| |
+-------------
+----------------------------------------------------------------------------------+
+-------------+----------------------+
| Field | Value |
+-------------+----------------------+
| vrf | seagent-default |
| namespace | none |
| ipv4_status | show ip bgp summary |
| | 10-79-168-63# |
| | |
| ipv6_status | show bgp summary |
| | 10-79-168-63# |
| | |
+-------------+----------------------+

Starting with NSX Advanced Load Balancer 21.1.3, the current state of the BGP peers on
a Service Engine can be viewed using show serviceengine <Service Engine name>- bgp
peer_state. This shows the peer status of all the VRFs configured on the Service Engine.

show serviceengine 10.79.168.63- bgp peer_state


+----------------+------------------------------------+
| Field | Value |
+----------------+------------------------------------+
| vrf_name | global |
| peers_state[1] | |
| peer_ip | 100.64.188.27 |
| state | BGP_PEER_ESTABLISHED |
| upOrDownTime | 4d21h59m |
| peers_state[2] | |
| peer_ip | 100.64.106.55 |
| state | BGP_PEER_NOT_APPLICABLE_TO_THIS_SE |


| upOrDownTime | |
| peers_state[3] | |
| peer_ip | 100.64.19.12 |
| state | BGP_PEER_IDLE |
| upOrDownTime | |
| peers_state[4] | |
| peer_ip | 100.64.19.11 |
| state | BGP_PEER_NOT_ESTABLISHED |
| upOrDownTime | 00:13:22 |
+----------------+------------------------------------+

Use show serviceengine <Service Engine name>- bgp peer_state filter vrf_ref global to
filter by VRF.

The following information can be viewed using show serviceengine <Service Engine name> bgp
peer_state:

n BGP Peer IP

n The State

n Up or Down Time

The State of the BGP Peer can be one of the following:

State Description

BGP_PEER_IDLE n This is the initial state and can be updated when refreshed
n Not tracking due to peer/BGP shutdown

BGP_PEER_ESTABLISHED BGP session is UP.

BGP_PEER_NOT_ESTABLISHED BGP session is DOWN. If the upOrDownTime is set to Never, the BGP
session has been down since the configuration of the peer.

BGP_PEER_PREFIX_EXCEEDED Prefixes learnt from the peer exceeded the maximum limit. Check the
VRF's configuration.

BGP_PEER_NOT_APPLICABLE_TO_THIS_SE On the SE, for the VRF, no interface is configured with the
peer's reachability network.

Note This feature provides peer states cached on the SE. To update the cache interval, use
serviceenginegroup->bgp_state_update_interval, as shown in the sketch below.
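
The following is a minimal sketch of adjusting this interval from the Controller CLI, assuming the
SE group name se_group_test used earlier in this guide; the interval value is illustrative.

[admin:controller]: > configure serviceenginegroup se_group_test
[admin:controller]: serviceenginegroup> bgp_state_update_interval 60
[admin:controller]: serviceenginegroup> save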

Peer Information
Command Filters Applicable

/serviceengine/<se_uuid>/bgp/peers n vrf_ref
n peer_ipv4
n peer_ipv6


Use the bgp peer_info flag to view BGP peer information:

[admin:1234-ctrl]: > show serviceengine 10.79.168.63 bgp peer_info


+-----------+-----------------------------------------------------------------------------+
| Field | Value |
+-----------+-----------------------------------------------------------------------------+
| vrf | global |
| namespace | avi_ns1 |
| peer_info | show ip bgp neighbors |
| | BGP neighbor is 100.64.50.3, remote AS 65000, local AS 65 |
| | 000, internal link |
| | BGP version 4, remote router ID 0.0.0.0 |
| | BGP state = Activ |
| | |
| | Last read 05:01:23, hold time is 180, keepalive interval is 60 seconds |
| | Mes |
| | sage statistics: |
| | Inq depth is 0 |
| | Outq depth is 0 |
| | |
| | Sent Rcvd |
| | Opens: 0 0 |
| | Notifications: |
| | 0 0 |
| | Updates: 0 0 |
| | Keepalives: |
| | 0 0 |
| | Route Refresh: 0 0 |
| | Capability: |
| | 0 0 |
| | Total: 0 0 |
| | Minimum time b |
| | etween advertisement runs is 5 seconds |
| | |
| | For address family: IPv4 Unicast |
| | Comm |
| | unity attribute sent to this neighbor(both) |
| | Inbound path policy configured |
| | O |
| | utbound path policy configured |
| | Route map for incoming advertisements is PEER_R |
| | M_IN_100.64.50.3 |
| | Route map for outgoing advertisements is *PEER_RM_OUT_100.64. |
| | 50.3 |
| | 0 accepted prefixes |
| | |
| | Connections established 0; dropped 0 |
| | Last reset |
| | never |
| | Next connect timer due in 4 seconds |
| | Read thread: off Write thread: off |
| | |
| | B |
| | GP neighbor is 100.64.50.21, remote AS 65000, local AS 65000, internal link |
| | BG |


| | P version 4, remote router ID 2.226.39.17 |


| | BGP state = Established, up for 04:5 |
| | 2:38 |
| | Last read 00:00:37, hold time is 180, keepalive interval is 60 seconds |
| | |
| | Neighbor capabilities: |
| | 4 Byte AS: advertised and received |
| | Route refresh: |
| | advertised and received(old & new) |
| | Address family IPv4 Unicast: advertised |
| | and received |
| | Graceful Restart Capabilty: advertised and received |
| | Remot |
| | e Restart timer is 120 seconds |
| | Address families by peer: |
| | none |
| | Gr |
| | aceful restart informations: |
| | End-of-RIB send: IPv4 Unicast |
| | End-of-RIB re |
| | ceived: IPv4 Unicast |
| | Message statistics: |
| | Inq depth is 0 |
| | Outq depth is |
| | 0 |
| | Sent Rcvd |
| | Opens: 1 |
| | 1 |
| | Notifications: 0 0 |
| | Updates: 2 |
| | 1 |
| | Keepalives: 294 293 |
| | Route Refresh: 0 |
| | 0 |
| | Capability: 0 0 |
| | Total: 297 |
| | 295 |
| | Minimum time between advertisement runs is 5 seconds |
| | |
| | For address f |
| | amily: IPv4 Unicast |
| | Community attribute sent to this neighbor(both) |
| | Inbound |
| | path policy configured |
| | Outbound path policy configured |
| | Route map for incomin |
| | g advertisements is PEER_RM_IN_100.64.50.21 |
| | Route map for outgoing advertiseme |
| | nts is *PEER_RM_OUT_100.64.50.21 |
| | 0 accepted prefixes |
| | |
| | Connections establishe |
| | d 1; dropped 0 |
| | Last reset never |
| | Local host: 100.64.50.14, Local port: 45618 |


| | Fo |
| | reign host: 100.64.50.21, Foreign port: 179 |
| | Nexthop: 100.64.50.14 |
| | Nexthop global |
| | : fe80::250:56ff:fe91:feb0 |
| | Nexthop local: :: |
| | BGP connection: non shared network |
| | |
| | Read thread: on Write thread: off |
| | |
| | 10-79-168-63# |
| | |
+-----------+-----------------------------------------------------------------------------+
+-----------+------------------------+
| Field | Value |
+-----------+------------------------+
| vrf | seagent-default |
| namespace | none |
| peer_info | show ip bgp neighbors |
| | 10-79-168-63# |
| | |
+-----------+------------------------+

View the Running Configuration


Command Filters Applicable

/serviceengine/<se_uuid>/bgp/running_config vrf_ref

Use the command show serviceengine <SE_ip> bgp running_config.


You can view the current BGP configuration for all VRFs.

BFD Session Status


Bidirectional Forwarding Detection (BFD) enables networking peers on each end of a link to quickly
detect and recover from a link failure.

Command: /serviceengine/<se_uuid>/bfd/session_status
Filters Applicable: vrf_ref

Use the show serviceengine <Service Engine IP address> bfd session_status command to
check the details of the BFD packets and the BGP session.

The following is the output of the BFD session status command on NSX Advanced Load Balancer
releases earlier than 21.1.2.

show serviceengine 10.79.168.63 bfd session_status


+-----------+-----------------------------------+
| Field     | Value                             |
+-----------+-----------------------------------+
| vrf       | global |
| namespace | avi_ns1 |
| status    | There are 2 sessions: |
|           | Session 2 |
|           | id=2 |
|           | local=100.64.50.14 (active) |
|           | remote=100.64.50.21 |
|           | LocalState=Down*No Diagnostic* |
|           | RemoteState=Down*No Diagnostic* |
|           | LocalId=1968595698 |
|           | RemoteId=0 |
|           | Time=Down(05:300:11.166) |
|           | CurrentTxInterval=1,000,000 us |
|           | CurrentRxTimeout=0 us |
|           | LocalDetectMulti=3 |
|           | LocalDesiredMinTx=1,000,000 us |
|           | LocalRequiredMinRx=1,000,000 us |
|           | RemoteDetectMulti=0 |
|           | RemoteDesiredMinTx=0 us |
|           | RemoteRequiredMinRx=1 us |
|           |  |
|           | Session 1 |
|           | id=1 |
|           | local=100.64.50.14 (active) |
|           | remote=100.64.50.3 |
|           | LocalState=Down*No Diagnostic* |
|           | RemoteState=Down*No Diagnostic* |
|           | LocalId=817711591 |
|           | RemoteId=0 |
|           | Time=Down(05:300:19.723) |
|           | CurrentTxInterval=1,000,000 us |
|           | CurrentRxTimeout=0 us |
|           | LocalDetectMulti=3 |
|           | LocalDesiredMinTx=1,000,000 us |
|           | LocalRequiredMinRx=1,000,000 us |
|           | RemoteDetectMulti=0 |
|           | RemoteDesiredMinTx=0 us |
|           | RemoteRequiredMinRx=1 us |
|           |  |
+-----------+-----------------------------------+
+-----------+-----------------------+
| Field | Value |
+-----------+-----------------------+
| vrf | seagent-default |
| namespace | none |
| status | There are 0 sessions: |
| | |
+-----------+-----------------------+

BFD Support for BGP Multi-hop


Starting with NSX Advanced Load Balancer release 21.1.2, the BFD feature supports BGP multi-hop
implementation. The following is the output of the BFD session status command on NSX Advanced
Load Balancer release 21.1.2.

show serviceengine 10.102.64.10 bfd session_status filter vrf_ref global


+-----------+------------------------------------------------+
| Field     | Value                                          |
+-----------+------------------------------------------------+
| vrf       | global |
| namespace | avi_ns1 |
| status    | show bfd peers |
|           | BFD Peers: |
|           | peer 100.64.188.60 |
|           | ID: 4 |
|           | Remote ID: 0 |
|           | Status: down |
|           | Downtime: 21 hour(s), 26 minute(s), 49 second(s) |
|           | Diagnostics: ok |
|           | Remote diagnostics: ok |
|           | Local timers: |
|           | Receive interval: 1000ms |
|           | Transmission interval: 300ms (configured 1000ms) |
|           | Echo transmission interval: disabled |
|           | Remote timers: |
|           | Receive interval: 0ms |
|           | Transmission interval: 0ms |
|           | Echo transmission interval: 0ms |
|           |  |
|           | 10-102-64-10# |
+-----------+------------------------------------------------+

Note
n The peer_ipv4/peer_ipv6 filters should always be used with the vrf_ref filter.

n The peer_ipv4 and peer_ipv6 filters cannot be used together.

n When an invalid vrf_ref is provided, it defaults to the management VRF; when an invalid peer
filter is provided, an empty output is returned.

n With NSX Advanced Load Balancer 21.1.2, the status_level filter for the show serviceengine
<Service Engine name> bfd session_status command is not supported.

BGP Community Support on NSX Advanced Load Balancer


A BGP community is additional information with which advertised routes can be tagged, allowing the
router on the other end (the BGP peer) to better classify and handle routes that share a common
property.

The community value is a 32-bit field divided into two sub-fields. The first two bytes encode the
AS number of the network that originated the community, and the last two bytes carry a unique
number assigned by the AS. Communities add power to BGP, changing it from a routing protocol
to a tool for signalling and policy enforcement.

Note This feature is not supported for IPv6.
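To make the aa:nn encoding described above concrete, the following short Python sketch packs and
unpacks a community value. It is purely illustrative and is not part of the NSX Advanced Load
Balancer configuration.

# Illustrative only: a standard BGP community is a 32-bit value, with the AS number in the
# high-order 16 bits and the locally assigned number in the low-order 16 bits.
def encode_community(asn, value):
    return (asn << 16) | value

def decode_community(raw):
    return "{}:{}".format(raw >> 16, raw & 0xFFFF)

raw = encode_community(65000, 20)
print(raw)                    # 4259840020
print(decode_community(raw))  # 65000:20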

Use Cases
n BGP community is useful when there are common requirements for a range of IP addresses or
a network.

n It provides a better understanding of the network topology and routing policy requirements.

n It makes scalability, operation, and troubleshooting of a network easier. For more information
on the BGP community, see An Application of the BGP Community Attribute.

Working Principle
NSX Advanced Load Balancer supports community configuration in the BGP profile, including the
community option for all routes and the ip_communities option for specific IP ranges. You can
conveniently tag a virtual IP address (VIP) or a back-end server IP address advertised from
an NSX Advanced Load Balancer Service Engine with appropriate communities. Tagging allows
BGP peers to handle BGP routes with discretion.

VMware, Inc. 402


VMware NSX Advanced Load Balancer Configuration Guide

Configuration
Log in to the NSX Advanced Load Balancer Controller command line interface (CLI) and follow these
steps to configure the BGP community for all routes advertised to a BGP peer:

[admin:controller]: > configure vrfcontext global


Updating an existing object. Currently, the object is:
+----------------+-------------------------------------------------+
| Field | Value |
+----------------+-------------------------------------------------+
| uuid | vrfcontext-ded10944-53da-4542-bbf1-1cd4f300fb29 |
| name | global |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------+-------------------------------------------------+
[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile>
cancel Exit the current submode without saving
community Community string either in aa:nn format where aa, nn is within [1,65535]
or local-AS|no-advertise|no-export|internet.
do Execute a show command
hold_time Hold time for Peers
ibgp BGP peer type
ip_communities (submode)
keepalive_interval Keepalive interval for Peers
local_as Local Autonomous System ID
new (Editor Mode) Create new object in editor mode
no Remove field
peers (submode)
save Save and exit the current submode
send_community Send community attribute to all peers.
show_schema show object schema
watch Watch a given show command
where Display the in-progress object
[admin:controller]: vrfcontext:bgp_profile>

[admin:controller]: vrfcontext:bgp_profile> community internet


[admin:controller]: vrfcontext:bgp_profile> community 10:10
[admin:controller]: vrfcontext:bgp_profile> community 65000:20
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save

+----------------------------+-------------------------------------------------+
| Field                      | Value                                           |
+----------------------------+-------------------------------------------------+
| uuid                       | vrfcontext-ded10944-53da-4542-bbf1-1cd4f300fb29 |
| name                       | global                                          |
| bgp_profile                |                                                 |
| local_as                   | 65000                                           |
| ibgp                       | True                                            |
| keepalive_interval         | 60                                              |
| hold_time                  | 180                                             |
| send_community             | True                                            |
| community[1]               | internet                                        |
| community[2]               | 10:10                                           |
| community[3]               | 65000:20                                        |
| system_default             | True                                            |
| tenant_ref                 | admin                                           |
| cloud_ref                  | Default-Cloud                                   |
+----------------------------+-------------------------------------------------+

Follow the steps to delete one of the configured communities:

[admin:controller]: > configure vrfcontext global


[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile> no community 10:10
Removed community 10:10
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save

+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-ded10944-53da-4542-bbf1-1cd4f300fb29 |
| name | global |
| bgp_profile | |
| local_as | 65000 |
| ibgp | True |
| peers[1] | |
| remote_as | 1 |
| | |
| send_community | True |
| community[1] | internet |
| community[2] | 65000:20 |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+

Follow the steps below to configure a BGP community that applies only to routes within a specific IP range.

The following example shows how to tag routes within a specific IP range with a specific
community. This IP-specific community overrides the default community in bgp_profile, which
applies to all routes.

[admin:controller]: > configure vrfcontext global


[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile> ip_communities
New object being created
[admin:controller]: vrfcontext:bgp_profile:ip_communities>
cancel Exit the current submode without saving
community Community string either in aa:nn format where aa, nn is within [1,65535] or
local-AS|no-advertise|no-export|internet.
do Execute a show command
ip_begin Beginning of IP address range.
ip_end End of IP address range. Optional if ip_begin is the only ip address in
specified ip range.
no Remove field
save Save and exit the current submode
show_schema show object schema
watch Watch a given show command
where Display the in-progress object
[admin:controller]: vrfcontext:bgp_profile:ip_communities> ip_begin 10.70.163.100
[admin:controller]: vrfcontext:bgp_profile:ip_communities> ip_end 10.70.163.200
[admin:controller]: vrfcontext:bgp_profile:ip_communities> community 200:200
[admin:controller]: vrfcontext:bgp_profile:ip_communities> community 100:100
[admin:controller]: vrfcontext:bgp_profile:ip_communities> save
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save
+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-ded10944-53da-4542-bbf1-1cd4f300fb29 |
| name | global |
| bgp_profile | |
| local_as | 65000 |
| ibgp | True |
| peers[1] | |
| remote_as | 1 |
| | |
| hold_time | 180 |
| send_community | False |
| community[1] | internet |
| community[2] | 65000:20 |
| ip_communities[1] | |
| ip_begin | 10.70.163.100 |
| ip_end | 10.70.163.200 |
| community[1] | 200:200 |
| community[2] | 100:100 |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+

Follow the steps below to configure a BGP community for a single IP address (for example, a VIP
address) that is advertised to a BGP peer. When configuring a community for a single IP address,
ip_end is optional. You can, however, set both ip_begin and ip_end to the same IP address without
any issue.

[admin:controller]: vrfcontext> bgp_profile


[admin:controller]: vrfcontext:bgp_profile> ip_communities
New object being created
[admin:controller]: vrfcontext:bgp_profile:ip_communities> ip_begin 10.70.164.150
[admin:controller]: vrfcontext:bgp_profile:ip_communities> community 150:150
[admin:controller]: vrfcontext:bgp_profile:ip_communities> save
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save
+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-ded10944-53da-4542-bbf1-1cd4f300fb29 |
| name | global |
| bgp_profile | |
| local_as | 65000 |
| ibgp | True |
| peers[1] | |
| | |
| hold_time | 180 |
| send_community | True |
| community[1] | internet |
| community[2] | 65000:20 |
| ip_communities[1] | |
| ip_begin | 10.70.163.100 |
| ip_end | 10.70.163.200 |
| community[1] | 200:200 |
| community[2] | 100:100 |
| ip_communities[2] | |
| ip_begin | 10.70.164.150 |
| community[1] | 150:150 |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+

Use the following CLI commands to stop tagging BGP-advertised routes with the configured
communities. The command stops the tagging while preserving the configuration, so tagging can be
enabled again later if required.

[admin:controller]: > configure vrfcontext global


[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile> no send_community
+--------------------------+----------------+
| Field | Value |
+--------------------------+----------------+
| local_as | 65000 |
| ibgp | True |

| peers[1] | |
| remote_as | 1 |
| | |
| hold_time | 180 |
| send_community | False |
| community[1] | internet |
| community[2] | 65000:20 |
| ip_communities[1] | |
| ip_begin | 10.70.163.100 |
| ip_end | 10.70.163.200 |
| community[1] | 200:200 |
| community[2] | 100:100 |
| ip_communities[2] | |
| ip_begin | 10.70.164.150 |
| community[1] | 150:150 |
+--------------------------+----------------+
[admin:controller]: vrfcontext:bgp_profile> save

Follow the NSX Advanced Load Balancer CLI commands to delete the configured
ip_communities:


[admin:controller]: > configure vrfcontext global


[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile> no ip_communities index 1
Removed ip_communities with index 1
+--------------------------+----------------+
| Field | Value |
+--------------------------+----------------+
| local_as | 65000 |
| ibgp | True |
| peers[1] | |
| remote_as | 1 |
| | |
| hold_time | 180 |
| send_community | False |
| community[1] | internet |
| community[2] | 65000:20 |

| ip_communities[1] | |
| ip_begin | 10.70.164.150 |
| community[1] | 150:150 |
+--------------------------+----------------+

Follow the steps to enable the community tags for the BGP-advertised routes:

[admin:controller]: > configure vrfcontext global


[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile> send_community
Overwriting the previously entered value for send_community
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save
+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-ded10944-53da-4542-bbf1-1cd4f300fb29 |
| name | global |
| bgp_profile | |
| local_as | 65000 |
| ibgp | True |
| peers[1] | |
| remote_as | 1 |
| peer_ip | 10.70.163.23 |
| subnet | 10.70.163.0/24 |
| md5_secret | sensitive |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| keepalive_interval | 60 |
| hold_time | 180 |
| ebgp_multihop | 0 |
| peers[2] | |
| remote_as | 1 |
| peer_ip | 10.70.164.21 |
| subnet | 10.70.164.0/24 |
| md5_secret | sensitive |
| bfd | True |
| advertise_vip | True |
| advertise_snat_ip | True |
| advertisement_interval | 5 |
| connect_timer | 10 |
| keepalive_interval | 60 |
| hold_time | 180 |
| ebgp_multihop | 0 |
| keepalive_interval | 60 |
| hold_time | 180 |
| send_community | True |
| community[1] | internet |
| community[2] | 65000:20 |
| ip_communities[1] | |
| ip_begin | 10.70.164.150 |
| community[1] | 150:150 |

| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------------------+-------------------------------------------------+

It is possible to tag routes advertised to a BGP peer with a standard community. NSX Advanced
Load Balancer supports tagging of routes only in the BGP profile submode; tagging communities on a
per-route basis is not supported.

[admin:controller]: > configure vrfcontext global


Updating an existing object. Currently, the object is:
+----------------+-------------------------------------------------+
| Field | Value |
+----------------+-------------------------------------------------+
| uuid | vrfcontext-3cc726d3-d94a-4eb0-9c70-f70d7e1b185e |
| name | global |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------+-------------------------------------------------+
[admin:controller]: vrfcontext> bgp_profile
[admin:controller]: vrfcontext:bgp_profile>
cancel Exit the current submode without saving
community List of community attributes. Valid values are "internet", "local-AS",
"no-advertise", "no-export". Community can also be specified in <AS>:<Val> format where AS, Val are in
the range [1,65535].
do Execute a show command
hold_time Hold time for Peers
ibgp BGP peer type
keepalive_interval Keepalive interval for Peers
local_as Local Autonomous System ID
new (Editor Mode) Create new object in editor mode
no Remove field
peers (submode)
save Save and exit the current submode
send_community Send community attribute to all peers(True by default)
show_schema show object schema
watch Watch a given show command
where Display the in-progress object

[admin:controller]: vrfcontext:bgp_profile> community internet


[admin:controller]: vrfcontext:bgp_profile> community 10:10
[admin:controller]: vrfcontext:bgp_profile> community 65000:20
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save

+----------------------------+-------------------------------------------------+
| Field                      | Value                                           |
+----------------------------+-------------------------------------------------+
| uuid                       | vrfcontext-3cc726d3-d94a-4eb0-9c70-f70d7e1b185e |
| name                       | global                                          |
| bgp_profile                |                                                 |
| local_as                   | 65000                                           |
| ibgp                       | True                                            |
| keepalive_interval         | 60                                              |
| hold_time                  | 180                                             |
| send_community             | True                                            |
| community[1]               | internet                                        |
| community[2]               | 10:10                                           |
| community[3]               | 65000:20                                        |
| system_default             | True                                            |
| tenant_ref                 | admin                                           |
| cloud_ref                  | Default-Cloud                                   |
+----------------------------+-------------------------------------------------+

Multihop BGP
NSX Advanced Load Balancer supports multihop BGP. A plain peer configuration is supported in
all its variations, including iBGP multihop.

This section explains the following:

n eBGP multihop: BGP peers are more than one hop away and in a different autonomous
system. BGP peers are not directly connected.

n iBGP multihop: BGP peers are in the same autonomous system but more than one hop away.

Note This feature is supported for IPv6.

Configuring eBGP
To configure eBGP multihop, use the per-peer configuration parameter ebgp_multihop, which
specifies the number of hops to the peer. The following are the two main configuration sections:

n Configuring NSX Advanced Load Balancer Controller:

n The eBGP-multihop peer. The multihop peer must be configured with the same subnet as
that of the interface network

n Static/default route to reach the BGP peer

n Configuring BGP peer and intermediate routers: Static or default route configuration on the
NSX Advanced Load Balancer Controller, intermediate router, and BGP peer.

Configuring NSX Advanced Load Balancer Controller (Configuring eBGP-multihop Peer)
To configure eBGP-multihop Peer using NSX Advanced Load Balancer Controller UI and CLI:

Using NSX Advanced Load Balancer UI


Log in to the NSX Advanced Load Balancer UI and navigate to Infrastructure > Routing > BGP
Peering, provide the value for BGP AS, and select the eBGP option.

Provide values for IPv4 Prefix, IPv4 Peer, Remote AS, and Multihop:

Using NSX Advanced Load Balancer CLI


1 Peer Configuration — Enable BGP, set the following attributes:

n AS – 65000

n Type – eBGP

n Remote AS – 1

n BFD – Yes

n Advertise VIP – Yes

n Advertise SNAT – Yes

Use vrfcontext sub-mode to configure the required attributes:

[admin-controller]: > configure vrfcontext global


[admin:controller]: vrfcontext:bgp_profile> peers index 1

[admin:controller]: vrfcontext:bgp_profile> peers ebgp_multihop 2


[admin-controller]: vrfcontext:bgp_profile > peers peer_ip 10.116.0.1 subnet 10.115.0.0/16
md5_secret abcd
[admin:controller]: vrfcontext:bgp_profile:peers> save
[admin:controller]: vrfcontext:bgp_profile> save
[admin:controller]: vrfcontext> save
[admin:controller]: >

For more information on configuring BGP on NSX Advanced Load Balancer, see BGP Support
for Scaling Virtual Services.

2 The following diagram explains all the required configurations for configuring multihop eBGP
peers:

[Figure: multihop eBGP topology. The Service Engine data interface (10.10.116.17/24) reaches the
BGP peer router R2 (10.10.3.17/24) through the intermediate router R1 (10.10.116.13/24 and
10.10.3.16/24). On R2, the VIP routes are learned as 10.10.116.88/32 and 10.10.226.88/32, both
with next hop 10.10.3.16. VIP 10.10.116.88 is configured in the same subnet as the interface
network, while VIP 10.10.226.88 is configured in some random subnet; for the VIP in a random
subnet, the intermediate router(s) need a static route (or some default route) to it, such as
10.10.226.0/24 with next hop 10.10.116.17.]

Configure a static route or a default route to reach the peer network (10.10.3.0/24) through
router R1 (10.10.116.12), for example 10.10.3.0/24 next hop 10.10.116.12 (see the Controller CLI
sketch after this list).

3 Configure two virtual service IP addresses:

n VIP (10.10.116.88) is configured in the same subnet as the interface network


(10.10.116.0/24).

n VIP (10.10.226.88) is configured in some random subnet.
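
The following is a minimal Controller CLI sketch of adding the static route mentioned in step 2,
assuming the route is placed in the global VRF of the cloud. The prefix, next hop, and route ID are
taken from this example topology and should be adapted to your environment.

[admin:controller]: > configure vrfcontext global
[admin:controller]: vrfcontext> static_routes
New object being created
[admin:controller]: vrfcontext:static_routes> route_id 1
[admin:controller]: vrfcontext:static_routes> prefix 10.10.3.0/24
[admin:controller]: vrfcontext:static_routes> next_hop 10.10.116.12
[admin:controller]: vrfcontext:static_routes> save
[admin:controller]: vrfcontext> save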

Configuring the BGP Peer (Router R2)


The multihop BGP peer (router R2, two hops away from the NSX Advanced Load Balancer SE) is
configured with the following static route so that it can reach the SE network through router R1
and peer with the SE:

10.10.116.0/24 next hop 10.10.3.16

If no static route is specified, there must be a default route through which the SE interface
network can be reached.

Configure the following additional neighbor statement on router R2 to peer with the NSX Advanced
Load Balancer SE, which is two hops away (a configuration sketch is shown below):

neighbor 10.10.116.17 ebgp-multihop 2

VIP routes on router R2 are learned as follows:

n 10.10.116.88/32 next hop 10.10.3.16

n 10.10.226.88/32 next hop 10.10.3.16
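
As a rough sketch only, the router R2 side of this setup might look as follows on a Quagga-like
BGP daemon. The AS numbers, addresses, and password are illustrative assumptions based on this
example and are not a verified router configuration.

! Static route (in zebra) so that R2 can reach the SE interface network through R1
ip route 10.10.116.0/24 10.10.3.16
!
router bgp 1
 ! The SE (10.10.116.17) is two hops away, so allow multihop eBGP peering
 neighbor 10.10.116.17 remote-as 65000
 neighbor 10.10.116.17 ebgp-multihop 2
 neighbor 10.10.116.17 password abcd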

Configuration of the Intermediate Router (R1)


For a VIP configured in a random subnet, the intermediate router(s) need a static route (or some
default route) configured to it, as follows:

10.10.226.0/24 next hop 10.10.116.17

Configuring iBGP
A multihop iBGP configuration is similar to that of a normal iBGP peer. Once the proper peer
placement subnet, peer IP and other details are provided, the Service Engine will initiate peering
with the router.

Using NSX Advanced Load Balancer UI


Log in to the NSX Advanced Load Balancer UI and navigate to Infrastructure > Routing > BGP
Peering.

Provide values for BGP AS, IPv4 Prefix, and IPv4 Peer, and select iBGP:

Using NSX Advanced Load Balancer CLI Configuration

[admin-controller]: > configure vrfcontext management


Multiple objects found for this query.
[0]: vrfcontext-52d6cf4f-55fa-4f32-b774-9ed53f736902#management in tenant admin,
Cloud AWS-Cloud
[1]: vrfcontext-9ff610a4-98fa-4798-8ad9-498174fef333#management in tenant admin,
Cloud Default-Cloud
Select one: 1
Updating an existing object. Currently, the object is:
+----------------+-------------------------------------------------+
| Field | Value |
+----------------+-------------------------------------------------+
| uuid | vrfcontext-9ff610a4-98fa-4798-8ad9-498174fef333 |
| name | management |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
+----------------+-------------------------------------------------+
[admin-controller]: >: vrfcontext > bgp_profile
[admin-controller]: >: vrfcontext:bgp_profile > local_as 100
[admin-controller]: >: vrfcontext:bgp_profile > ibgp
[admin-controller]: >: vrfcontext:bgp_profile > peers peer_ip 10.116.0.1 subnet 10.115.0.0/16

md5_secret abcd
: vrfcontext:bgp_profile:peers > save
: vrfcontext:bgp_profile > save
: vrfcontext > save

Configuring BGP Graceful Restart


Following are the steps to configure BGP graceful restart:

Configuring BGP Graceful Restart


In Legacy HA, when the active SE goes down, there can be route flaps for the advertised VIPs on
the peer router. The graceful restart feature uses the floating interface IP to ensure that the
VIPs remain available on the peering router for up to two minutes when the active SE goes down. If
a floating interface IP is not available, the virtual service is marked down.

If graceful restart is configured and the SE interfaces used for BGP do not have floating
interface IPs, the virtual service is marked down. It recovers when the floating interface IPs are
added.

The graceful restart feature also advertises the BGP graceful restart capability to the BGP peer.
The peer preserves the routes from the SE for 120 seconds even when the connection is lost.

Note
n The graceful restart timer should be less than the hold timer.

n The graceful restart will be allowed only if the linked SE group is legacy HA and
distribute_load_active_standby is not enabled.

n If you move an SE group from legacy HA mode to any other mode, and if a network service
with graceful restart exists that refers to this SE group then the graceful restart will fail.

n When distribute_load_active_standby is enabled in an SE group, and if a network service


with graceful restart exists that refers to this SE group, then the graceful restart will fail.

Restrictions
The following are the restrictions of BGP graceful restart:

n You can set the BGP graceful restart feature only on Legacy HA by disabling
distribute_load_active_standby. This ensures that the routes are advertised from only one SE,
and that the floating interface IP is constant and always available on the SE advertising the
routes (VIPs).

n Requires a floating interface IP for the interface from where the peering happens.

Configuration
The graceful restart configuration is as follows:

configure networkservice *name*


networkservice> routing_service

networkservice:routing_service> graceful_restart
networkservice:routing_service>

The following are the CLI details:

[admin:georgem-ctrlr]: > configure networkservice NS


[admin:georgem-ctrlr]: networkservice> routing_service
[admin:georgem-ctrlr]: networkservice:routing_service>
advertise_backend_networks Advertise reachability of backend server networks via ADC
through BGP for default gateway feature.
cancel Exit the current submode without saving
do Execute a show command
enable_routing Service Engine acts as Default Gateway for this service.
enable_vip_on_all_interfaces Enable VIP on all interfaces of this service.
enable_vmac Use Virtual MAC address for interfaces on which floating
interface IPs are placed
floating_intf_ip Floating Interface IPs for the RoutingService.
floating_intf_ip_se_2 If ServiceEngineGroup is configured for Legacy 1+1 Active
Standby HA Mode, Floating IP's will be advertised only by the Active SE in t...
flowtable_profile (submode)
graceful_restart Enable graceful restart feature in routing service. For
example, BGP.
nat_policy_ref NAT policy for outbound NAT functionality. This is done in
post-routing
new (Editor Mode) Create new object in editor mode
no Remove field
routing_by_linux_ipstack For IP Routing feature, enabling this knob will fallback to
routing through Linux, by default routing is done via Service Engine data-...
save Save and exit the current submode
show_schema show object schema
watch Watch a given show command
where Display the in-progress object

Service Engine Failure Detection


Failure detection is essential in achieving Service Engine high availability.

NSX Advanced Load Balancer relies on a variety of methods to detect Service Engine failures, as
listed:

n Controller-to-SE Failure Detection Method

n SE-to-SE Failure Detection Method

n BGP-Router-to-SE Failure Detection Method

Controller-to-SE Failure Detection Method


In all deployments, the NSX Advanced Load Balancer Controller sends heartbeat messages to all
Service Engines in all groups under its control, once every 10 seconds. If there is no response
from a specific SE for six consecutive heartbeat messages (roughly 60 seconds), the Controller
concludes that the SE is DOWN and moves all of its virtual services to other SEs.

When vSphere High Availability is enabled and the Controller detects that a vSphere host failure
has occurred, the SEs transition to OPER_PARTITIONED or OPER_DOWN before six consecutive
heartbeats are missed.

n SEs (on the failed host) which have operational virtual services transition to OPER_PARTITIONED
state.

n SEs (on the failed host) which do not have any operational virtual services transition to
OPER_DOWN state.

SE-to-SE Failure Detection Method


In the above-mentioned Controller-to-SE failure detection method, the Controller detects a
Service Engine failure by sending periodic heartbeat messages over the management interface.
However, this method will not detect datapath failures for the data interfaces on SEs.

To provide holistic failure detection, the Service Engine datapath heartbeat mechanism was
devised, in which the Service Engines send periodic heartbeat messages over the data interfaces.

By default, this communication is set to standard mode. It can also be configured for the
aggressive mode, as discussed in the Enabling Aggressive Mode using the CLI section.

Service Engine Datapath Communication Modes


Depending on the Service Engine deployment, the three modes available for SE-to-SE inter-
process communication are as discussed below:

1 Custom EtherTypes

This is the default mode applicable when the Service Engines are in the same subnet. The
EtherTypes used are:

n ETHERTYPE_AVI_IPC 0XA1C0

n ETHERTYPE_AVI_MACINMAC 0XA1C1

n ETHERTYPE_AVI_MACINMAC_TXONLY 0XA1C2

2 IP Encapsulation

This mode is applicable when the infrastructure does not permit EtherTypes through. Even
in this mode, it is assumed that the Service Engines are in the same subnet. This mode is
applicable for AWS by default.

Configure IP encapsulation by using the se_ip_encap_ipc X command.

The following example displays configuring IP encapsulation using the CLI:

#shell
Login: admin
Password:
[GB-slough-cam:cd-avi-cntrl1]: > configure serviceengineproperties
[GB-slough-cam:cd-avi-cntrl1]: seproperties> se_bootup_properties
[GB-slough-cam:cd-avi-cntrl1]: seproperties:se_bootup_properties> se_ip_encap_ipc 1

[GB-slough-cam:cd-avi-cntrl1]: seproperties:se_bootup_properties> save


[GB-slough-cam:cd-avi-cntrl1]: seproperties:> save
[GB-slough-cam:cd-avi-cntrl1]: > reboot serviceengine <IP 1>
[GB-slough-cam:cd-avi-cntrl1]: > reboot serviceengine <IP 2>

Note For changes to the se_ip_encap_ipc command to be effective, reboot all Service
Engines in the Service Engine group.

The IP protocols used in this mode are:

n IPPROTO_AVI_IPC 73

n IPPROTO_AVI_MACINMAC 97

n IPPROTO_AVI_MACINMAC_TX 63

3 IP packets

This mode is applicable when the Service Engines are in different subnets. The IP packet
destined to the destination Service Engine’s interface IP is sent to the next-hop router. The IP
protocols used in this mode are:

n IPPROTO_AVI_IPC_L3 75

n IPPROTO_AVI_MACINMAC 97

BGP-Router-to-SE Failure Detection Method


With BGP configured, the SE-to-SE failure detection is augmented as described below:

n Bidirectional Forwarding Detection (BFD) detects SE failures and prompts the router not to
use the route to the failed SE for flow load balancing.

n Routers detect SE failures using BGP protocol timers.

Failure Detection Algorithm


Consider a SE group on which a virtual service has been scaled out. The sequence followed for
failure detection is as explained below:

1 Virtual service’s primary SE sends periodic heartbeat messages to all virtual services’
secondary SEs.

2 If a SE fails to respond repeatedly, the primary SE will suspect that the said SE may be down.

3 A notification is sent to NSX Advanced Load Balancer Controller indicating a possible SE


failure.

4 NSX Advanced Load Balancer Controller sends a sequence of echo messages to confirm if the
suspected Service Engine is indeed down.

Based on the time frame and frequency of the heartbeat messages sent across the Service Engines,
the modes of operation are standard and aggressive. The algorithm for both modes is the same, with
a difference in frequency and time frame, as explained below:

1 The primary SE sends heartbeat messages to the secondary SEs on a configurable interval, for
example, 100 milliseconds. A string of consecutive failures to respond indicates that the given SE
could be down. According to the heartbeat settings shown in the summary below, the primary SE
suspects a secondary SE to be down after:

n 10 consecutive heartbeat messages fail over 1 second (standard), or

n 10 consecutive heartbeat messages fail over 1 second (aggressive). The heartbeat behavior can be
tuned further with the configuration parameters shown below.

2 As soon as the primary SE suspects that a secondary SE is down, it informs the NSX Advanced
Load Balancer Controller, which then sends echo messages to the suspect SE. According to the echo
settings shown in the summary below, the Controller declares the suspect SE down after:

n 4 consecutive echo messages fail over 8 seconds (standard), or

n 2 consecutive echo messages fail over 4 seconds (aggressive).

By adding the two stages, the Controller reaches a failure conclusion within 9 seconds under
standard settings, but within 5 seconds under aggressive settings.

The time taken to detect Service Engine failure based on SE-DP heartbeat failure is as follows:

n Normal mode: SE-SE HB messaging with an HB period of 100 ms and 10 consecutive failures, plus
Controller-SE echo messages with an echo period of 2 seconds and 4 consecutive failures; total
time for failure detection is 1 + 8 = 9 seconds.

n Aggressive mode: SE-SE HB messaging with an HB period of 100 ms and 10 consecutive failures,
plus Controller-SE echo messages with an echo period of 2 seconds and 2 consecutive failures;
total time for failure detection is 1 + 4 = 5 seconds.

Failure detection as aggressive as 2 seconds can be achieved with the following configuration.
However, this is recommended only in bare-metal environments; in virtualized environments it can
lead to false positives.

The following serviceengineproperties fields indicate the aggressive timeout values:

configure serviceengineproperties
se_runtime_properties
| dp_aggressive_hb_frequency | 100 milliseconds |
| dp_aggressive_hb_timeout_count | 5 |
se_agent_properties
| controller_echo_rpc_aggressive_timeout | 500 milliseconds |
| controller_echo_miss_aggressive_limit | 3 |
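
As a rough sanity check of these numbers (a sketch only; it ignores the small scheduling and RPC
overheads of the real timers), the detection time is the heartbeat suspicion window plus the
Controller echo confirmation window:

# Suspicion phase (SE-to-SE heartbeats) plus confirmation phase (Controller echoes).
def detection_time_seconds(hb_period_ms, hb_miss_count, echo_period_ms, echo_miss_count):
    return (hb_period_ms * hb_miss_count + echo_period_ms * echo_miss_count) / 1000.0

print(detection_time_seconds(100, 10, 2000, 4))  # standard mode: 9.0 seconds
print(detection_time_seconds(100, 10, 2000, 2))  # aggressive mode: 5.0 seconds
print(detection_time_seconds(100, 5, 500, 3))    # tuned aggressive values above: 2.0 seconds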

Enabling Aggressive Mode using CLI


Service Engine failure detection can be set to Aggressive mode using only the CLI, as explained
below.

Login to the shell prompt for NSX Advanced Load Balancer Controller and enter the following
commands under the chosen Service Engine group:

[admin:1-Controller-2]: > configure serviceenginegroup AA-SE-Group

[admin:1-Controller-2]: serviceenginegroup> aggressive_failure_detection

[admin:1-Controller-2]: serviceenginegroup> save

Verify the settings using the following show command:

[admin:1-Controller-2]: > show serviceenginegroup AA-SE-Group | grep aggressive

| aggressive_failure_detection | True

Debugging BGP-based Service Engine Configurations


If a BGP session does not come up, check the following:

1 Double-check the configuration on the router and NSX Advanced Load Balancer. Make sure
that the peer IPs, subnets, and AS numbers are correct.

2 Verify the MD5 passwords are the same on the router and NSX Advanced Load Balancer.

3 Run “show serviceengine bgp” to determine the state of the BGP session initiated by the NSX
Advanced Load Balancer SE.

4 Verify there are no ACLs/route maps on the router preventing the sessions/advertisements.

5 Additionally, if needed, take a packet capture with tcpdump on the router (using the -M option
when an MD5 secret is configured) and check the BGP negotiation, as sketched below.
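
A hedged example of such a capture is shown below; the interface name, peer address, and secret
are placeholders for your environment, not values taken from this guide.

# Capture BGP traffic (TCP port 179) to and from the peer on the data interface.
# The -M option lets tcpdump validate TCP MD5 (RFC 2385) digests when MD5 authentication is used.
tcpdump -ni <data_interface> -M <md5_secret> 'tcp port 179 and host <peer_ip>'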

How to Access and Use Quagga Shell using NSX Advanced Load
Balancer CLI
Quagga is a network routing software suite providing implementations of various routing
protocols. NSX Advanced Load Balancer uses Quagga for BGP-based scaling of virtual services.

For more information on BGP scaling, see BGP Support for Scaling Virtual Services.

Instructions
Quagga shell is used to check BGP configuration and status of BGP peer.

Note In this example, all the commands are executed from the default namespace on an NSX
Advanced Load Balancer SE hosting a virtual service enabled for BGP. To list the available
namespaces, use the command ip netns. To switch to the desired datapath namespace, use the
following command:

admin@AVI-SE1: ip netns exec <namespace_name> bash

Use the netcat localhost bgpd command instead of the telnet localhost bgpd command to get
access to the Quagga shell.

admin@AVI-SE1: netcat localhost bgpd


Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Hello, this is Quagga (version 0.99.24.1).

If the authentication is successful, the following output is observed:

Quagga-bgp>

Configuration and Troubleshooting Commands


Use the command show run to check running configuration:

Quagga-bgp> en
Quagga-bgp# show run

Current configuration:
!
password <password>
log file /var/lib/avi/log/bgp/0_bgpd.log
!
router bgp 65000
bgp router-id 1.2.87.205
network 10.140.99.153/32
neighbor 10.140.60.155 remote-as 3
neighbor 10.140.60.155 password <password>
neighbor 10.140.60.155 advertisement-interval 5
neighbor 10.140.60.155 timers 60 180
neighbor 10.140.60.155 timers connect 10
neighbor 10.140.60.155 distribute-list 2 out
neighbor 10.140.99.157 remote-as 2
neighbor 10.140.99.157 password <password>
neighbor 10.140.99.157 advertisement-interval 5
neighbor 10.140.99.157 timers 60 180
neighbor 10.140.99.157 timers connect 10
neighbor 10.140.99.157 distribute-list 1 out

!
access-list 1 permit 10.140.99.153
!
line vty
!
end

Use the command show bgp neighbors to check bgp peering status:

10-140-4-220# show bgp neighbors


BGP neighbor is 10.140.60.155, remote AS 3, local AS 65000, external link
BGP version 4, remote router ID 0.0.0.0
BGP state = Active
Last read 03w5d06h, hold time is 180, keepalive interval is 60 seconds
Configured hold time is 180, keepalive interval is 60 seconds
Message statistics:
Inq depth is 0
Outq depth is 0
Sent Rcvd
Opens: 0 0
Notifications: 0 0
Updates: 0 0
Keepalives: 0 0
Route Refresh: 0 0
Capability: 0 0
Total: 0 0
Minimum time between advertisement runs is 5 seconds

For address family: IPv4 Unicast


Community attribute sent to this neighbor(both)
Outbound path policy configured
Outgoing update network filter list is 2
0 accepted prefixes

Connections established 0; dropped 0


Last reset never
Next connect timer due in 3 seconds
Read thread: off Write thread: off

BGP neighbor is 10.140.99.157, remote AS 2, local AS 65000, external link


BGP version 4, remote router ID 10.140.6.28
BGP state = Established, up for 03w6d03h
Last read 00:00:48, hold time is 180, keepalive interval is 60 seconds
Configured hold time is 180, keepalive interval is 60 seconds
Neighbor capabilities:
4 Byte AS: advertised and received
Route refresh: advertised and received(old & new)
Address family IPv4 Unicast: advertised and received
Graceful Restart Capabilty: advertised and received
Remote Restart timer is 120 seconds
Address families by peer:
none
Graceful restart informations:
End-of-RIB send: IPv4 Unicast
End-of-RIB received: IPv4 Unicast

Message statistics:
Inq depth is 0
Outq depth is 0
Sent Rcvd
Opens: 6 3
Notifications: 3 0
Updates: 4 1
Keepalives: 39103 39102
Route Refresh: 0 0
Capability: 0 0
Total: 39116 39106
Minimum time between advertisement runs is 5 seconds

For address family: IPv4 Unicast


Community attribute sent to this neighbor(both)
Outbound path policy configured
Outgoing update network filter list is *1
0 accepted prefixes

Connections established 1; dropped 0


Last reset never
Local host: 10.140.99.156, Local port: 179
Foreign host: 10.140.99.157, Foreign port: 54566
Nexthop: 10.140.99.156
Nexthop global: ::
Nexthop local: ::
BGP connection: non shared network
Read thread: on Write thread: off

BGP Peer Monitoring for High Availability


In a Legacy HA configuration, it is recommended that SE failover happens when the BGP peers
become inaccessible from the active SE. BGP peer monitoring is available by default on NSX
Advanced Load Balancer, and failover of Legacy HA SE groups based on BGP peer monitoring is also
supported.

BGP Peer Monitoring for Failover on Legacy HA


The SE agent periodically queries bgpd and detects the peer state. If the peer state changes, it
triggers an event. BGP peers are configured in the VRF.

Not all peers might be applicable on a particular SE. Only those peers whose subnet matches one of
the interfaces on the SE are applicable on that SE.

Note Peers in this section refer only to those BGP peers that have matching interfaces on the SE.

Configuring BGP Peer Monitor Failover


BGP Peer monitor failover can be configured for an SE through the CLI as shown below:

[admin:123-ctlr3]: > configure serviceenginegroup Default-Group
[admin:123-ctlr3]: serviceenginegroup> bgp_peer_monitor_failover_enabled
Overwriting the previously entered value for bgp_peer_monitor_failover_enabled
[admin:123-ctlr3]: serviceenginegroup> save

Criteria for BGP Peer Monitoring


A peer monitor checks whether the following conditions are met:

n If peers with advertise_vip set are present, at least one such peer must be in the UP state.

n If peers with advertise_snat_ip set are present, at least one such peer must be in the UP state.

For the peer monitor to mark the status as UP, both of the conditions above must be met. The peer
monitor marks the status as DOWN if either condition fails.
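
The following is a minimal Python sketch of this decision, for illustration only; the field names
are assumptions and do not reflect the SE agent's actual data model.

def peer_monitor_status(peers):
    # peers: list of dicts such as {"advertise_vip": True, "advertise_snat_ip": False, "state": "UP"}
    def condition_met(flag):
        flagged = [p for p in peers if p.get(flag)]
        # The condition is trivially satisfied when no peer carries the flag.
        return not flagged or any(p["state"] == "UP" for p in flagged)
    return "UP" if condition_met("advertise_vip") and condition_met("advertise_snat_ip") else "DOWN"

print(peer_monitor_status([
    {"advertise_vip": True, "advertise_snat_ip": False, "state": "UP"},
    {"advertise_vip": False, "advertise_snat_ip": True, "state": "DOWN"},
]))  # DOWN, because no advertise_snat_ip peer is UP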

BGP Peer Monitoring in a Multiple VRF Scenario


In a multi-VRF scenario, each of the VRFs must satisfy the conditions for the peer monitor to mark
the status as UP. Immediately after VRF is configured, BGP peer monitor waits for two cycles of
peer monitor queries before the peer monitor status is updated.

IPv6 BGP Peering in NSX Advanced Load Balancer


BGP peering is supported for IPv6 in the VMware no access, VMware write access, VMware read
access, Linux server, and bare-metal cloud ecosystems.

Configuring IPv6 BGP peer

Note Similar to an IPv4 BGP peer, the IPv6 peer must be in the Service Engine’s directly-
connected network. If it is an eBGP multihop peer, then you need to configure the IPv6 subnet of
the Service Engine’s interface as subnet6, through which the multihop peer is reachable.

Using UI
To configure BGP IPv6 peer on the NSX Advanced Load Balancer UI:

Procedure

1 Navigate to Infrastructure > Routing and select the required cloud from the drop-down menu.

2 Click the BGP Peering tab and click the edit icon to create a new peer.

3 Enter the desired BGP autonomous system value in the BGP AS field.

4 Enter the IPv6 Prefix and IPv6 Peer details along with the MD5 Secret value. In the case of
eBGP, enter relevant information in the fields for Remote AS and Multihop.

The Edit BGP Peering screen is as shown:

5 Click Save to complete the configuration.

Note You can save the configuration by entering just the IPv6 prefix and peer details.
Corresponding IPv4 details are optional. However, for either IPv4 or IPv6, both prefix and
peer details are required.

Using CLI
To configure an IPv6 BGP peer, log in to the Controller shell and execute the following commands:

Syntax

peer_ip6 <ipv6_peer_address> subnet6 <ipv6_subnet> remote_as <AS_identity> md5_secret <password>

The following is an example of configuring an IPv6 BGP peer, with an IP address of 2006::54, and
a subnet of 2006::/64.

[admin:cntrlr]: > configure vrfcontext global


[admin:cntrlr]: vrfcontext> bgp_profile
[admin:cntrlr]: vrfcontext:bgp_profile> peers
New object being created
[admin:cntrlr]: vrfcontext:bgp_profile:peers> peer_ip6 2006::54

[admin:cntrlr]: vrfcontext:bgp_profile:peers> subnet6 2006::/64


[admin:cntrlr]: vrfcontext:bgp_profile:peers> remote_as 1
[admin:cntrlr]: vrfcontext:bgp_profile:peers> md5_secret avi123
[admin:cntrlr]: vrfcontext:bgp_profile:peers> save
[admin:cntrlr]: vrfcontext:bgp_profile> save
[admin:cntrlr]: vrfcontext> save
[admin:cntrlr]:>

Configuring Dual-Stack Peer


To configure both IPv4 and IPv6 BGP peers on the NSX Advanced Load Balancer UI:

Procedure

1 Navigate to Infrastructure > Routing and select the required cloud from the drop-down menu.

2 Click the BGP Peering tab and click the edit icon to create a new peer.

3 Enter the IPv4 peer details under the IPv4 Prefix and IPv4 Peer fields.

4 Enter the IPv6 peer details under IPv6 Prefix and IPv6 Peer fields.

The Edit BGP Peering screen is as shown:

5 Click Save to complete the configuration.

Results

Note Click Add New Peer to add more peers.

You can configure the peer details using CLI as explained below:

n IPv4 BGP peer configuration on CLI

n IPv6 BGP peer configuration on CLI

Note Similar to dual-stack virtual service, the dual-stack peer considered for BGP virtual service
placement must have both its IPv4 (peer_ip/subnet) and IPv6 (peer_ip6/subnet6) located on the
same interface. The IPv6 routes will be advertised over the IPv6 peering and the IPv4 routes over
the IPv4 peering.

BGP Virtual Service Configuration


To configure IPv6 BGP virtual service:

Procedure

1 Navigate to Applications > Virtual Services.

2 Click Create Virtual Service.

3 Select Advanced Setup.

4 Enter the IPv4 VIP Address and IPv6 VIP Address.

The New Virtual Service screen is as shown:

5 Under Pools, provide the IPv4 and IPv6 server IP addresses.

6 Under Step 4: Advanced, click the Advertise VIP via BGP option to enable BGP advertising
for the configured virtual service.

Verifying Configuration
Use the show serviceengine service_engine_IP_address bgp command to verify the
configuration.

The following is an example of the show output:

[admin:cntrlr]: > show serviceengine 10.140.1.13 bgp


+---------------------+----------------------------------------------------------------+
| Field | Value |
+---------------------+----------------------------------------------------------------+
| se_uuid | 10-140-1-13:se-10.140.1.13-avitag-1 |
| proc_id | C0_L4 |
| name | global |
| local_as | 65000 |
| vrf | 1 |
| active | 1 |
| peer_bmp | 2147483648 |
| peers[1] | |
| remote_as | 1 |
| peer_ip | 2006::54 |
| peer_id | 1 |
| active | 1 |
| md5_secret | **** |
| bfd | True |
| advertise_snat_ip | True |
| bgp_state | Established, |
| | |
| advertise_vip | True |
+++ Output truncated +++

BGP Support in NSX Advanced Load Balancer for OpenShift and Kubernetes
BGP Route Health Injection (RHI) is used to advertise the Virtual IPs (VIPs) assigned to north-
south services in a Kubernetes or an OpenShift cluster.

This feature is useful in the following scenarios:

n To support elastic scaling using ECMP as described in BGP Support for Scaling Virtual
Services.

n To allow north-south VIPs to be allocated from a subnet other than that in which the cluster
nodes’ external interface resides.

Note NSX Advanced Load Balancer Controller must be outside the OpenShift/K8S cluster and
cannot run as a container alongside the NSX Advanced Load Balancer SE container.

Enabling BGP Features in NSX Advanced Load Balancer for Kubernetes and
OpenShift
Configuring BGP features in NSX Advanced Load Balancer is accomplished by configuring a BGP
profile and through an annotation in the Kubernetes/OpenShift service or route/ingress definition.
The BGP profile specifies the local Autonomous System (AS) ID that the NSX Advanced Load
Balancer Service Engine and each of the peer BGP routers are in and the IP address of each peer
BGP router.

Configuring a BGP profile (using UI)


To configure a BGP profile:

Procedure

1 Navigate to Infrastructure > Routing.

2 Click the cloud name.

If the cloud is set up during the initial installation of the NSX Advanced Load Balancer
Controller using the setup wizard, the cloud name is “Default-Cloud,” as shown in the image.

3 Click the BGP Peering tab and click the edit icon to reveal more fields.

4 Enter the following information:

a Enter a value between 1 and 4294967295 as Local Autonomous System ID.

b Select either iBGP or eBGP as the BGP type.

5 Click Add New Peer to reveal a set of fields appropriate to iBGP or eBGP.

a Enter the SE placement network.

b Enter the subnet providing reachability for peer.

c Enter the Peer BGP router’s IP address.

d Enter a value between 1 and 4294967295 in the field Remote AS.

Note The field Remote AS applies only to eBGP.

e Enter the Peer Autonomous System MD5 digest secret key.

f Set Multihop to 0.

g Click the following options to enable them.

n BFD (by default, enables very fast link failure detection using BFD).

Note Only async mode is supported.

n Advertise VIP.

h Advertise SNAT can be turned off as advertisement of SNAT is not relevant for
Kubernetes/OpenShift environments.

Results

The Edit BGP Peering screen for eBGP type is as shown in the image:

Note eBGP multihop is not supported in Kubernetes/OpenShift environments.

Configuring a BGP profile (using CLI)


To configure a BGP profile:

: > configure vrfcontext global


Multiple objects found for this query.
[0]: vrfcontext-f834cafa-b572-4ec3-9559-db0573f26d2f#global in tenant admin, Cloud
OpenShift-Cloud
[1]: vrfcontext-6d6ec0dd-0aaf-4b73-9d86-37569b505494#global in tenant admin, Cloud
Default-Cloud

Select one: 0
Updating an existing object. Currently, the object is:
+----------------------------+-------------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------------+
| uuid | vrfcontext-f834cafa-b572-4ec3-9559-db0573f26d2f |
| name | global |
| system_default | True |
| tenant_ref | admin |
| cloud_ref | OpenShift-Cloud |
+----------------------------+-------------------------------------------------+
: vrfcontext > bgp_profile
: vrfcontext:bgp_profile > local_as 65536
: vrfcontext:bgp_profile > ebgp
: vrfcontext:bgp_profile > peers peer_ip 10.115.0.1 subnet 10.115.0.0/16 md5_secret abcd
remote_as 65537
: vrfcontext:bgp_profile:peers > save
: vrfcontext:bgp_profile > save
: vrfcontext > save
: >

Enabling a north-south service to use BGP RHI


To enable a specific north-south service, route, or ingress to have its VIP advertised through BGP
RHI, use the avi_proxy annotation with "enable_rhi": true, as shown in the examples below.

For example, to enable BGP RHI for a north-south service, use the following Kubernetes/
OpenShift service definition:

apiVersion: v1
kind: Service
metadata:
name: avisvc
labels:
svc: avisvc
annotations:
avi_proxy: '{"virtualservice":{"enable_rhi": true, "east_west_placement": false}}'
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
selector:
name: avitest

Specifying the placement subnet for a VIP


By default, the VIP will be allocated from one of the “Usable Networks” listed in the north-south
IPAM object configured in the Kubernetes/OpenShift cloud.

In some instances, it can be desirable to specify that the VIP be allocated from a specifically
named subnet. This can be achieved by defining the network in NSX Advanced Load Balancer and
then referencing the network by name in the service annotation as follows:

apiVersion: v1
kind: Service
metadata:
name: avisvc
labels:
svc: avisvc
annotations:
avi_proxy: >-
{"virtualservice":{"enable_rhi": true, "east_west_placement": false,
"auto_allocate_ip": true,
"ipam_network_subnet": {"network_ref": "/api/network/?name=ns-cluster-network-bgp"}}}
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
selector:
name: avitest

When explicitly referencing a network in this way, it is not necessary to include that network in the
Usable Networks list in the north-south IPAM object.

n Networks created in the NSX Advanced Load Balancer “admin” tenant can be referenced in
any Kubernetes namespace/OpenShift project.

n Networks created in a specific NSX Advanced Load Balancer tenant can be referenced only in
the corresponding namespace/project.

n Networks with the same name, defining different subnets, can be created in different tenants.

Combining these capabilities allows for great flexibility in the allocation of VIPs in different
subnets, for example:

n Global default subnet(s) for unannotated services

n Add network(s) defined in the “admin” tenant to the north-south IPAM configuration.

n Per-namespace default subnet(s) for unannotated services

n Add network(s) defined in the non-admin tenants only to the north-south IPAM
configuration.

n Allow application owners to place services in the specific subnet(s) through annotations

n Define networks in the “admin” tenant

n These networks can optionally be added to the north-south IPAM configuration

n Allow application owners to place services in namespace/project-specific subnet(s) through
annotations

n Define networks in the tenant corresponding to the namespace/project.

n These networks can optionally be added to the north-south IPAM configuration.

DSR and Default Gateway


This section covers the following topics:

n Direct Server Return on NSX Advanced Load Balancer

n Default Gateway (IP Routing on NSX Advanced Load Balancer SE)

n Network Service Configuration

Direct Server Return on NSX Advanced Load Balancer


In general, a load balancer (NSX Advanced Load Balancer) performs address translation for the
incoming and outgoing requests. Return packets go through the load balancer, and the destination
and the source addresses are changed as per the configuration on the load balancer.

The following is the packet flow when Direct Server Return (DSR) is enabled:

n The load balancer does not perform any address translation for the incoming requests.

n The traffic is passed to the pool members without any changes in the source and the
destination address.

n The packet arrives at the server with the virtual IP address as the destination address.

n The server responds with the virtual IP address as the source address. The return path to the
client does not flow back through the load balancer and thus the term, Direct Server Return.

Note This feature is only supported for IPv4.

Use Case
DSR is often applicable to audio and video applications as these applications are sensitive to
latency.

Supported Modes
The supported modes for DSR are as follows:

n Layer 2 DSR (MAC-based translation): The NSX Advanced Load Balancer rewrites the source
MAC address with the Service Engine interface MAC address and the destination MAC address
with the server MAC address.

n Layer 3 DSR (IP-in-IP): An IP-in-IP tunnel is created from the NSX Advanced Load Balancer to
the pool members, which can be a router hop(s) away. The incoming packets from clients are
encapsulated in IP-in-IP with the source as the Service Engine's interface IP and the destination
as the back-end server IP address.

n Layer 3 DSR (GRE): A Generic Routing Encapsulation (GRE) tunnel is supported for Layer 3
DSR. In this case, the incoming packets from clients are encapsulated in a GRE header, followed
by the outer IP header (delivery header).

The specification of supported features for DSR are:

n Encapsulation: IP-in-IP, MAC-based translation

n Ecosystem: VMware write, VMware No-access, and Linux server cloud

n Dataplane drivers: DPDK and PCAP support for Linux server cloud

n BGP: VIP placement using BGP in the front end

n Load balancing algorithm: Only consistent hash is supported for L2 and L3 DSR

n TCP/UDP: Support for both TCP Fast Path and UDP Fast Path in L2 and L3 DSR

n High Availability (SE): N+M, active-active, and active-standby

Layer 2 DSR
n Destination MAC address for the incoming packets is changed to server MAC address.

n Supported modes: DSR over TCP and UDP.

n Health monitoring of TCP Layer2 DSR is supported as well.

Packet Flow Diagram

The following diagram exhibits a packet flow diagram for Layer 2 DSR:

[Figure: Layer 2 DSR packet flow. (1) The client sends a request to the VIP (10.140.116.210) served
by the load balancer. (2) The load balancer rewrites the destination MAC address to the MAC
address of the selected server (for example, 00:50:56:bd:95:85). (3) The server, which has the VIP
configured on its loopback interface, responds directly to the client.]

Packet Flow

n Clients send requests to a Virtual IP (VIP) served by the Load Balancer (Step 1)

n LB determines real server to forward the request to

n LB performs MAC Address Translation (Step 2)

n The server responds directly to the client, bypassing the LB (Step 3)

Layer 2 - DSR

n Servers must be on directly attached networks to the Load Balancer

n LB and server need to be on the same L2 network segment

n The server's loopback interface must be configured with the VIP IP address

Configuring Network Profile for Layer 2 DSR

Login to the NSX Advanced Load Balancer CLI and use the configure networkprofile
<profile name> command to enter into TCP fast-path profile mode. For Layer 2 DSR, enter
the value for the DSR type as dsr_type_l2.

[admin:10-X-X-X]: > configure networkprofile <profile name>


[admin:10-X-X-X]: networkprofile> profile
[admin:10-X-X-X]: networkprofile profile> tcp_fast_path_profile
[admin:10-X-X-X]: networkprofile profile:tcp_fast_path_profile>dsr_profile dsr_type
dsr_type_l2
[admin:10-X-X-X]: networkprofile profile:dsr_profile> save
[admin:10-X-X-X]: networkprofile> save

Once the network profile is created, create an L4 application virtual service with the DSR network
profile created above and attach DSR-capable servers to the pool associated with the virtual
service.

Configuring Server
ifconfig lo:0 <VIP ip> netmask 255.255.255.255 -arp up
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/<Interface of pool server IP configured>/rp_filter

sysctl -w net.ipv4.ip_forward=1

Configuring Network Profile for DSR over TCP and UDP using UI
Network profiles for DSR over TCP and UDP can be created using NSX Advanced Load Balancer
UI. Log in to the UI and follow the steps mentioned below.

Procedure

1 Navigate to Templates > Profiles > TCP/UDP. Click Create to create a new TCP profile or
select the existing one to modify.

2 Provide the desired name and select TCP Fast Path as Type. Select the following options:

a Enable checkbox for Enable DSR.

b Use the drop-down menu for DSR Type and select L2 or L3 as per the requirement.

c Select IPinip as the option for DSR Encapsulation Type.

3 For UDP fast path profile, select UDP Fast Path as Type. Select the following options:

a Enable checkbox for Enable DSR.

b Use the drop-down menu for DSR Type and select L2 or L3 as per the requirement.

c Select IPinip as the option for DSR Encapsulation Type.

Configuring Network Profile for DSR over TCP using CLI


Log in to the NSX Advanced Load Balancer CLI and use the configure networkprofile
<profile name> command to enter into TCP fast-path profile mode.

For layer 3 DSR, enter the value for the DSR type as dsr_type_l3 and encapsulation type as
encap_ipinip or encap_gre.

Run the following command for IPinIP encapsulation:

[admin:10-X-X-X]: > configure networkprofile <profile name>


[admin:10-X-X-X]: networkprofile> profile
[admin:10-X-X-X]: networkprofile profile> tcp_fast_path_profile
[admin:10-X-X-X]: networkprofile profile:tcp_fast_path_profile>dsr_profile dsr_type
dsr_type_l3 dsr_encap_type encap_ipinip
[admin:10-X-X-X]: networkprofile profile:dsr_profile> save
[admin:10-X-X-X]: networkprofile> save

This creates the DSR profile over TCP (default L3, IPinIP encapsulation).

Run the following command for GRE encapsulation:

[admin:10-X-X-X]: > configure networkprofile <profile name>


[admin:10-X-X-X]: networkprofile> profile
[admin:10-X-X-X]: networkprofile profile> tcp_fast_path_profile
[admin:10-X-X-X]: networkprofile profile:tcp_fast_path_profile>dsr_profile dsr_type
dsr_type_l3 dsr_encap_type encap_gre
[admin:10-X-X-X]: networkprofile profile:dsr_profile> save
[admin:10-X-X-X]: networkprofile> save

This creates the DSR profile over TCP (default L3, GRE encapsulation).

Configuring Network Profile for DSR over UDP using CLI


Log in to the NSX Advanced Load Balancer CLI and use the configure networkprofile
<profile name> command to enter into UDP fast path profile mode.

For layer 3 DSR, enter the value for the DSR type as dsr_type_l3 and encapsulation type as
encap_ipinip or encap_gre.

Run the following command for IPinIP encapsulation:

[admin:10-X-X-X]: > configure networkprofile <profile name>


[admin:10-X-X-X]: networkprofile> profile
[admin:10-X-X-X]: networkprofile profile> udp_fast_path_profile
[admin:10-X-X-X]: networkprofile profile:udp_fast_path_profile>dsr_profile dsr_type
dsr_type_l3 dsr_encap_type encap_ipinip
[admin:10-X-X-X]: networkprofile profile:dsr_profile> save
[admin:10-X-X-X]: networkprofile> save

This creates the DSR profile over UDP (default L3, IPinIP encapsulation).

Run the following command for GRE encapsulation:

[admin:10-X-X-X]: > configure networkprofile <profile name>


[admin:10-X-X-X]: networkprofile> profile
[admin:10-X-X-X]: networkprofile profile> udp_fast_path_profile
[admin:10-X-X-X]: networkprofile profile:udp_fast_path_profile>dsr_profile dsr_type
dsr_type_l3 dsr_encap_type encap_gre
[admin:10-X-X-X]: networkprofile profile:dsr_profile> save
[admin:10-X-X-X]: networkprofile> save

This creates the DSR profile over UDP (default L3, GRE encapsulation).

Layer 3 DSR
This section discusses Layer 3 DSR in detail.

n L3 DSR can be used in conjunction with a full proxy deployment:

n Tier 1: L3 DSR

n Tier 2: Full-proxy (with SNAT)

n Supported mode: IPinIP

n Virtual service placement is supported in the front-end using BGP.

n Supported load balancing algorithm: Only consistent hash is supported.

n Deployment mode: Auto gateway and traffic enable must be disabled when a Layer 7 virtual
service is configured (in the Tier-2 deployment mode described below).

n If the SEs are scaled out in the Tier-2 deployment mode, pool members are added manually
once new SEs are added.

Packet Flow Diagram


The following diagram exhibits a packet flow diagram for Layer 3 DSR:

[Figure: Layer 3 DSR packet flow. (1) The client sends a packet with source C-IP/C-Port and
destination VIP/VIP-Port (VIP 10.140.116.210) to the load balancer. (2) The load balancer forwards
the packet over an IP-in-IP tunnel to an application server whose loopback interface is configured
with the VIP. (3) The server responds directly to the client with source VIP/VIP-Port and
destination C-IP/C-Port.]

Note
n IP-in-IP tunnel is created from the load balancer to the pool members that can be a router
hop(s) away.

n The incoming packets from clients are encapsulated in IP-in-IP with source as the Service
Engine’s interface IP address and destination as the back-end server IP address.

n In the case of the Generic Routing Encapsulation (GRE) tunnel, the incoming packets from
clients are encapsulated in a GRE header, followed by the outer IP header (delivery header).

Deployment Modes
Tier-1

n Layer 4 virtual service is connected to application servers which terminate the connections.
Pool members are the application servers.

n Servers handle the IPinIP packets. The loopback interface is configured with the
corresponding virtual service IP address. The service listening on this interface receives
packets and responds to the client directly in the return path.

Tier-2

n Layer 4 virtual service is connected to the corresponding Layer 7 virtual service (which has the
same virtual service IP address as Layer 4 virtual service), which terminates the tunnel.

n Layer 4 virtual service’s pool members will be Service Engines of the corresponding Layer 7
virtual services.

n For the Layer 7 virtual service, traffic is disabled so that it does not perform ARP.

n Auto gateway is disabled for Layer 7 virtual service.

n Servers are Service Engines of corresponding Layer 7 virtual service.

Packet Flow

n IPinIP packets reach one of the Service Engines of the Layer 7 virtual service. That SE
decapsulates and handles the IPinIP packet and gives it to the corresponding Layer 7 virtual
service. The virtual service sends it to the back-end servers.

n Return packets from the backend servers are received at the virtual service, and the virtual
service forwards the packet directly to the client.

n The following diagram exhibits packet flow for the tier-2 deployment in the Layer 3 mode:

[Figure: Tier-2 Layer 3 DSR deployment. SE Group-1 (network team) hosts the Layer 4 virtual
services with DSR; SE Group-2 (application team) hosts the Layer 7 full-proxy virtual services. The
client packet (source C-IP/C-Port, destination VIP/VIP-Port) is encapsulated in IP-in-IP by an SE in
SE Group-1 and delivered to an SE in SE Group-2, which terminates the tunnel, load balances to the
application servers, and returns the response (source VIP/VIP-Port, destination C-IP/C-Port)
directly to the client.]

The following are the observations for the above deployment as mentioned in the diagram:

n Layer 4 virtual service is connected to the corresponding Layer 7 virtual service (which has the
same virtual service IP address as Layer 4 virtual service), which terminates the tunnel.

n Layer 4 virtual service’s pool members will be Service Engines of the corresponding Layer 7
virtual services.

n For the Layer 7 virtual service, traffic is disabled so that it does not perform ARP.

n Auto gateway is disabled for Layer 7 virtual service.

n Servers are Service Engines of corresponding Layer 7 virtual service.

n Return packets from the back end servers are received at the virtual service, and the virtual
service forwards the packets directly to the client.

Creating Virtual Service and Associating it with the Network Profile (for Tier-2 deployment)
Navigate to Application > Virtual Services and click Create to add a new virtual service. Provide
the following information as mentioned:

n Provide the desired name for the virtual service and IP address.

n Select the network profile created in the previous step for Tier-2 deployment from the
TCP/UDP Profile drop-down menu.

n Select the pool created for the selected virtual service.

Note The Traffic Enabled check box must not be selected for Tier-2 deployment.

Configuring Server

modprobe ipip

ifconfig tunl0 <Interface IP of the server, same should be part of pool> netmask <mask> up

ifconfig lo:0 <VIP ip> netmask 255.255.255.255 -arp up

echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/tunl0/rp_filter

sysctl -w net.ipv4.ip_forward=1

Configuring the loopback interface for Windows


The following commands must be configured on the server to make the HTTP health monitor work
for Windows servers in the back end:

netsh interface ipv4 set interface "Ethernet0" forwarding=enabled
netsh interface ipv4 set interface "Ethernet1" forwarding=enabled
netsh interface ipv4 set interface "Ethernet1" weakhostreceive=enabled
netsh interface ipv4 set interface "Loopback" weakhostreceive=enabled
netsh interface ipv4 set interface "Loopback" weakhostsend=enabled

In the above steps,

Ethernet0 = Management interface name

Ethernet1 = Data interface name

Loopback = Loopback interface name (VIP)

Default Gateway (IP Routing on NSX Advanced Load Balancer SE)


There are multiple use cases for enabling IP routing on NSX Advanced Load Balancer Service
Engines.

When new application servers are deployed, the servers need external connectivity for
manageability. In the absence of a router in the server networks, the NSX Advanced Load
Balancer SE can be used for routing the traffic of server networks.

Another use case is when virtual services use an application profile with the Preserve Client IP
option enabled. In this case, back-end servers receive traffic with the source IP set to the IP of the
originating clients, and the NSX Advanced Load Balancer SE's IP needs to be configured as the
default gateway for servers to route all traffic back through the SEs to the clients.

Note This feature is not supported for IPv6.

Scope
The following features are supported:

n IP routing is supported on two-armed, no-access configurations of Linux server clouds and
VMware clouds, and conditionally supported on CSP. On CSP, it is supported when the
interfaces attached to the SE instances are configured in SR-IOV mode.

n VMWare write access clouds are also supported when configured using the CLI.

n NSX Advanced Load Balancer supports IP routing for VMware cloud deployments in write
access mode. For this feature to work on VMware write access clouds, at least one virtual
service must be configured with the following configurations:

n One arm (in the two-arm mode deployment) must be placed in the backend network. For
this network, SE acts as the default gateway.

n The other arm is placed in the desired front-end network.

n The HA mode must be legacy HA (active/standby) only for SE groups with the enable IP
routing option set.

n The HA mode must be legacy HA (active/standby) only for SE groups and routing has to be
enabled in the corresponding Network Service.

n IP routing cannot be enabled in conjunction with the distribute load option set in the SE group
configuration.

n IP routing is supported on the following:

n Only DPDK-based SEs.

n VMware write access mode if a virtual service has already been created. This virtual service
creates the required Service Engines before MAC masquerading is tested.

Note Preserve_client_ip is supported for non-directly-connected or routed backend servers.


However, all the required IPs on NSX Advanced Load Balancer still needs to be static, and there is
no support for DHCP relay.

Use Case

[Figure: Example topology. The SEs are in a Linux server cloud or a VMware no-access cloud (no
auto-creation of SEs) and run in legacy HA active/standby mode. The front-end network FE-NW is
10.10.40.0/24 with SE interfaces 10.10.40.1/24 and 10.10.40.2/24, a floating IP of 10.10.40.11/24, and
an upstream router at 10.10.40.3/24. The back-end networks are BE-NW-1 10.10.10.0/24 (floating IP
10.10.10.11), BE-NW-2 10.10.20.0/24, and BE-NW-3 10.10.30.0/24. The upstream router has static
routes to 10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24 with next hop 10.10.40.11.]

Briefly, enabling IP routing requires the following configurations to be done at various points in the
network:

n On the NSX Advanced Load Balancer Controller, enable IP routing for the SE group. This has
to be configured through Network Service of routing_service type.

n On the front end router, configure static routes to the back end server networks with the next
hop as floating IP in the front end network.

n If BGP is enabled in the network and BGP peers configured on the SEs, then enable Advertise
back end subnets using BGP for the SE group.

n If BGP is enabled in the network and BGP peers are configured on the SEs, then enable
Advertise back end subnets using BGP for the SE group in the above routing enabled Network
Service.

n On the back end servers, configure the SE’s floating IP in the back end server network as the
default gateway.

Configure IP Routing (Without BGP Peer)


Consider a simple two-leg setup with the server(s) in the 10.10.10.0/24 back end network (it need
not be directly connected network) and front end router in the 10.10.40.0/24 network.

Steps to configure the IP routing (default gateway) feature are listed below. UI and CLI in each
step are just two different ways of configuring the same step.

Note This feature is supported for IPv6.

Procedure

1 Navigate to Infrastructure > Service Engine Group > Edit.

a Configure the HA mode in the SE group to legacy HA (Active/Standby).

: > configure serviceenginegroup Default-Group


: serviceenginegroup> active_standby
Overwriting the previously entered value for active_standby
: serviceenginegroup> ha_mode ha_mode_legacy_active_standby
Overwriting the previously entered value for ha_mode
: serviceenginegroup>save

b Distribute Load is not enabled.

c Configure Floating IP Addresses (for instance, 10.10.10.11), one on each back end network.
These IP addresses will get configured on the active SE and will be taken over by the
standby SE (new-active) upon failover.

: > configure serviceenginegroup Default-Group


: serviceenginegroup> floating_intf_ip 10.10.10.11
: serviceenginegroup> save

Floating IP Addresses are configurable using Network Service of service_type routing_service.
For more information, see Network Service.

d If there are no BGP peers configured, then configure Floating IP address for front end
networks (for instance, 10.10.40.11).

: > configure serviceenginegroup Default-Group


: serviceenginegroup> floating_intf_ip 10.10.40.11
: serviceenginegroup> save

If there are no BGP peers configured, then configure Floating IP address for front-end
networks (for instance, 10.10.40.11) using the above Network Service configuration.

2 Enable IP routing on all SEs in the SE group.

: > configure serviceenginegroup Default-Group


: serviceenginegroup> enable_routing
Overwriting the previously entered value for enable_routing
: serviceenginegroup> save

Enable IP routing on all SEs in the SE group using Network Service configuration. For more
details, see Network Service.

3 The above steps complete the configuration of routing for Service Engine Group via Network
Service. However, the network is incomplete without the front end routers and back end
servers being configured accordingly.

4 Front end router configuration (if no BGP peers are configured on SE). Configure the front end
router with a static route to the back end server network (with next-hop pointing to floating
interface IP of SE in front end network).

route add -net 10.10.10.0/24 gw 10.10.40.11

5 Back end server configuration.

a Configure the default gateway of back end server(s) to point to floating interface IP of SE
(the one in server network).

route add default gw 10.10.10.11

This ensures that all the traffic including, return (VIP) traffic from the back end network,
uses SE for all northbound traffic.

6 Configure the default gateway of SE to the front end as needed. Navigate to Infrastructure >
Routing > Static Route > Create.

Configure IP-Routing (With BGP Peer)


For configuring IP routing with BGP peers, follow the steps detailed above with the
following exceptions:

n If the front end supports BGP peering, then there is no necessity to configure floating IPs on
the front end interface (skip step 1.d above).

n Also, you do not have to configure static routes in the front-end router (skip step 4 above).

After performing the above steps, follow the instructions below:

Procedure

1 Navigate to Infrastructure > Routing > BGP Peering > Edit.

On the NSX Advanced Load Balancer Controller, configure BGP Peers network and IP
Address.

: > configure vrfcontext global


: vrfcontext> bgp_profile ibgp local_as 1
: vrfcontext:bgp_profile >
: vrfcontext:bgp_profile> peers peer_ip 10.10.40.3
New object being created
: vrfcontext:bgp_profile:peers>
: vrfcontext:bgp_profile:peers> subnet
IP4 Prefix Format
(required) Subnet providing reachability for ...
: vrfcontext:bgp_profile:peers> subnet 10.10.40.0/24
: vrfcontext:bgp_profile:peers> bfd
: vrfcontext:bgp_profile:peers> save
: vrfcontext:bgp_profile> save
: vrfcontext> save

2 Navigate to Infrastructure > Service Engine Group > Edit > Advanced. Enable Advertise
back-end subnets via BGP. This UI knob will appear only if Enable IP Routing option is
selected.

: > configure serviceenginegroup Default-Group


: serviceenginegroup> advertise_backend_networks
Overwriting the previously entered value for advertise_backend_networks
: serviceenginegroup> save

Configure Advertise Back end Networks of the Service Engine Group through its
corresponding Network Service. For more information, see Network Service.

3 Configure the application profile to preserve client IPs for associated virtual service(s). This
step is to be performed before any virtual service using the given application profile is
enabled.

: > configure applicationprofile System-HTTP


: applicationprofile> preserve_client_ip
Overwriting the previously entered value for preserve_client_ip
: applicationprofile> save

This configuration will not succeed if enable_routing is not yet configured. This configuration
is mutually exclusive with the Connection Multiplexing option for L7 application profiles.

4 Create a virtual service with an application profile for which preserve client IP is enabled.

Routing Auto Gateway


With NSX Advanced Load Balancer release 20.1.1, a new knob enable_auto_gateway is
introduced in the routing service of network service configuration. This enables the auto gateway
functionality to the routing traffic. The knob is set to False by default.

On enabling the knob, flow-based routing is enabled for all the incoming traffic on all the
interfaces in a VRF. The Service Engine caches the MAC address of the incoming routed traffic and
forwards return packets to the same next hop from which it received the traffic.

Note For more information on NSX Advanced Load Balancer Routing GRO and TSO subject
to environment capabilities, see TSO, GRO, RSS, and Blocklist Feature on NSX Advanced Load
Balancer.

Network Service Configuration


Network service can be configured per VRF and Service Engine Group. IP routing can be enabled
by configuring Network Service of routing_service service type.

You can configure the routing function per VRF basis. The existing functions of
routing and its associated information such as enable_routing, floating_interface_ip,
enable_vip_on_all_interfaces, and Mac masquerade under SE group are grouped under
routing_service service type.

Note Network Service can be configured only using CLI. The Network Service will be in effect on
Active SE only if an interface of the corresponding VRF is present on the Service Engine.

Configuring Network Service


The network service configuration is as follows:

configure networkservice NS-Default-Group-Global


se_group_ref Default-Group
cloud_ref [cloud name]
vrf_ref global
service_type routing_service
routing_service

enable_routing
floating_intf_ip 10.10.10.11
floating_intf_ip 10.10.40.11
advertise_backend_networks
enable_vip_on_all_interfaces
floating_intf_ip_se_2 10.10.20.11
floating_intf_ip_se_2 10.10.30.11
nat_policy_ref nat-policy
save
save

To disable any feature, use the no-form of the CLI as follows:

configure networkservice NS-Default-Group-Global


se_group_ref Default-Group
vrf_ref global
service_type routing_service
routing_service
no enable_routing
save
save

Supported Environments
The routing auto gateway functions are supported in the following environments:

n Active/ Standby SE group in DPDK based environments

n VMware Read/Write modes and Bare-metal clouds

Configure a network service corresponding to the SE group and set enable_auto_gateway to True
for the network service catering to routing.

Configuring Routing Auto Gateway


Enabling auto gateway, routing, and NAT are currently supported only using CLI.

Log in to the NSX Advanced Load Balancer Controller CLI and execute the following commands:

configure networkservice NS-Default-Group-Global


se_group_ref Default-Group
cloud_ref [cloud name]
vrf_ref [vrf name]
service_type routing_service
routing_service
enable_routing
nat_policy_ref nat-policy
enable_auto_gateway
save
save

The network service configuration is as shown:

[admin:abd-ctrl-wildcard]: > show networkservice NS-Default-Group-Global


+--------------------------------+-----------------------------------------------------+
| Field                          | Value                                               |
+--------------------------------+-----------------------------------------------------+
| uuid                           | networkservice-1bcd0e3a-4c3d-4e3e-8d1a-619120f9d68f |
| name                           | NS-Default-Group-Global                             |
| se_group_ref                   | Default-Group                                       |
| vrf_ref                        | global                                              |
| service_type                   | ROUTING_SERVICE                                     |
| routing_service                |                                                     |
| enable_routing                 | True                                                |
| enable_auto_gateway            | True                                                |
| nat_policy_ref                 | nat-policy                                          |
| tenant_ref                     | admin                                               |
| cloud_ref                      | Default-Cloud                                       |
+--------------------------------+-----------------------------------------------------+

Configuring Networks for SEs and Virtual IPs


This section describes the IP address allocation and the steps to configure IP address pools.

Network objects in the NSX Advanced Load Balancer govern IP address allocation for the
following:

n Load balancers, known as NSX Advanced Load Balancer SEs

n Virtual IPs (VIPs) for the load-balanced applications (Optional)

Note
n NSX Advanced Load Balancer SE will acquire IP addresses for its management network (used
to communicate with the NSX Advanced Load Balancer Controller) and its data networks (used
for data plane/ load balancing) from these Network objects.

n Each Cloud configured on the NSX Advanced Load Balancer can have multiple Network
objects defined, each for the various networks used to provide load balancing.

n Network objects support both IPv6 and IPv4 addresses.

n Network objects are not used for native public-cloud integrations such as AWS, Azure, and
GCP. These are handled from within the Cloud object on the NSX Advanced Load Balancer.

IP address allocation in the Network object can be either through DHCP or Static IP Pools.

If IP address allocation is through static pools, you need to configure these static IP pools.

Configuring IP Address Pools


This section describes the steps to manage static and DHCP IP addresses.

Managing Static IP Address


The following are the steps to configure IP address pools for networks hosting NSX Advanced
Load Balancer SEs and (optionally) VIPs:

1 Navigate to Infrastructure > Cloud Resources > Networks.

2 Choose a Cloud from the Select Cloud drop-down menu.

3 Search and edit the required Network object.

4 Deselect DHCP Enabled check box under IP Address Management.

5 Click + Add Subnet button to add an IP address network.

a Specify the IP Subnet to grant the IPs.

b Click + Add Static IP Address Pool button.

1 Either a single pool or a dedicated pool of IPs will be used for both VIPs and NSX
Advanced Load Balancer SEs.

2 To use a single pool of IPs, you can select Use Static IP Address for VIPs and SE
check box and specify the range of IPs from the configured IP subnet.

3 To use a dedicated pool of IPs for VIPs, you can deselect Use Static IP Address for
VIPs and SE checkbox.

n Select Use for VIPs to specify the range of IPs from the configured IP Subnet used
for VIPs.

n Select Use for Service Engines to specify the range of IPs from the configured IP
subnet used for the NSX Advanced Load Balancer SEs.

6 (Optional) Repeat step 5 to add more subnets to the Network object.

7 Click Save to complete configuring the Network object.

Managing DHCP IP Address


The following are the steps to use DHCP on networks hosting NSX Advanced Load Balancer SEs
and (optionally) VIPs:

1 Navigate to Infrastructure > Cloud Resources > Networks.

2 Choose a Cloud from the Select Cloud drop-down menu.

3 Search and edit the required Network object.

4 Select DHCP Enabled check box under IP Address Management.

5 Click on Save to complete configuring the Network object.

Enabling VLAN trunking on NSX Advanced Load Balancer Service Engine

This section discusses configuration changes required to enable VLAN trunking on NSX Advanced
Load Balancer SEs running on ESX in no-access mode.

Procedure

1 Log in to UI and navigate to Infrastructure > Service Engines. Select the desired SEs and click
the edit option.

2 Click Create VLAN Interface.

3 Provide the required details as shown below and click Save.

In the above example, VLAN trunking is enabled on the Ethernet interface 1 with VLAN 137.

You can now place the virtual service on SE using the usual way.

To create virtual service on NSX Advanced Load Balancer, see Virtual Services.

Enabling VLAN Tagging on ESX


This section discusses on how to enable support for VLAN trunking to the SE virtual machine in a
vSphere environment.

For more information on Virtual Guest Tagging (VGT) mode, see VLAN Configuration.

Configuring VLAN Interface


Starting with NSX Advanced Load Balancer version 20.1.3, the number of VLAN interfaces
allowed to be configured on a SE is increased from 224 to 1000 (this feature is supported
only in VMware no-access mode). As the number of VLAN interfaces increases, memory usage
increases significantly. The additional memory required for configuring 1000 VLAN interfaces is
approximately 550MB. If there are configurations such as virtual services on those interfaces, then
more memory is required.

If the memory runs low when you add a VLAN interface, the configuration is accepted but the
interface is put into a fault state. You can confirm this by using the show serviceengine < >
vnicdb command and checking if there is a fault entry for the concerned interface.

The following is the sample output with fault entry:

Table 2-1.

Field Value

vnic[2]

if_name avi_eth2.999

linux_name eth2.999

mac_address 00:50:56:81:2f:ec

pci_id PCI-eth2.999

mtu 1496

dhcp_enabled TRUE

enabled TRUE

connected TRUE

network_uuid Unknown

nw[1]

ip 100.3.231.0/24

mode STATIC

nw[2]

ip fe80::250:56ff:fe81:2fec/64

mode DHCP

is_mgmt FALSE

is_complete TRUE

avi_internal_network FALSE


enabled_flag TRUE

running_flag TRUE

pushed_to_dataplane FALSE

consumed_by_dataplane FALSE

pushed_to_controller FALSE

can_se_dp_takeover TRUE

vrf_ref global

vrf_id 1

ip6_autocfg_enabled TRUE

fault

uuid 00:50:56:81:2f:ec-eth2.999

The following are the reason and recommendation details:

| reason         | Insufficient memory to apply configuration                                     |
| recommendation | Free up resources on this SE [se-00505681a639] and then do configure and save |

Note 550MB memory is required to configure 1000 VLAN interfaces. If there are configurations
such as virtual services on those interfaces, more memory is required.

Sizing Service Engines


NSX Advanced Load Balancer publishes minimum and recommended resource requirements for
NSX Advanced Load Balancer SEs. This section provides details on sizing. You can consult your
NSX Advanced Load Balancer sales engineer for recommendations that are tailored to the exact
requirements.

The SEs can be configured with 1 vCPU core and 2 GB RAM, up to 64 vCPU cores and 256 GB
RAM.

In write access mode, you can configure SE resources for newly created SEs within the SE Group
properties.

For the SE in read or no orchestrator modes, the SE resources are manually allocated to the SE
virtual machine when it is being deployed.

NSX Advanced Load Balancer SE performance is determined by several factors, including
hardware, SE scaling, and the ciphers and certificates used. Performance can be broken down
into the following primary benchmark metrics:
n Connections, Requests and SSL Transactions per second (CPS/ RPS/ TPS) - Primarily gated
by the available CPU.

n Bulk throughput - Dependent upon CPU, PPS and environment-specific limitations.

n Concurrent connections — Dependent on SE memory.

This section illustrates the expected real-world performance and discusses SE internals on
computing and memory usage.

CPU
NSX Advanced Load Balancer supports x86 based processors, including those from AMD and
Intel. Leveraging AMD’s and Intel’s processors with AES-NI and similar enhancements steadily
enhances the performance of the NSX Advanced Load Balancer with each successive generation
of the processor.

CPU is a primary factor in SSL handshakes (TPS), throughput, compression, and WAF inspection.

[Figure: A Service Engine with vCPU 0 acting as the dispatcher and vCPU 1 through vCPU n
handling proxy processing.]

Performance increases linearly with CPU if CPU usage limit or environment limits are not hit. CPU
is the primary constraint for both transactions per second and bulk throughput.

Within a SE, one or more CPU cores will be given a dispatcher role. It will interface with NICs and
distribute network flows across the other cores within the system, effectively load balancing the
traffic to other CPU cores. Each core is responsible for terminating TCP, SSL, and other processing
determined by the virtual service configuration. The vCPU 0 shown in the diagram acts as the
dispatcher and can also handle some percentage of SSL traffic if it has the available capacity. By
using a system of internally load balancing across CPU cores, NSX Advanced Load Balancer can
scale linearly across the ever-increasing capacity.

Memory
Memory allocated to the SE primarily impacts concurrent connections and HTTP caching.
Doubling the memory will double the ability of the SE to perform these tasks. The default is 2
GB memory, reserved within the hypervisor for VMware clouds. See SE Memory Consumption for
a verbose description of expected concurrent connections. Generally, SSL connections consume
about twice as much memory as HTTP layer 7 connections and four times as much memory as
layer 4 with TCP proxy.

NIC
Throughput through a SE can be a gating factor for the bulk throughput and sometimes for
SSL-TPS. The throughput for an SE is highly dependent upon the platform.

Disk
The SEs can store logs locally before they are sent to the Controllers for indexing. Increasing the
disk will increase the log retention on the SE. SSD is preferred over hard drives, as they can write
the log data faster.

The recommended minimum size for storage is ((2 * RAM) + 5 GB) or 15 GB, whichever is
greater. 15 GB is the default for SEs deployed in VMware clouds.
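
The following is a minimal sketch (not part of the product) that applies this sizing rule; the function
name and example inputs are illustrative only:

def min_se_disk_gb(ram_gb):
    # Recommended minimum SE disk: the greater of ((2 * RAM) + 5 GB) or 15 GB.
    return max(2 * ram_gb + 5, 15)

print(min_se_disk_gb(2))   # 15 GB, the default for SEs deployed in VMware clouds
print(min_se_disk_gb(8))   # 21 GB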

Disk Capacity for Logs


NSX Advanced Load Balancer computes the disk capacity it can use for logs based on the
following parameters:

n Total disk capacity of the SE

n Number of SE CPU cores

n The main memory (RAM) of the SE

n Maximum storage on the disk not allocated for logs on the SE (configurable through SE
runtime properties)

n Minimum storage allocated for logs irrespective of SE size

You can calculate the capacity reserved for debug logs and client logs as follows:

n Debug Logs capacity = (SE Total Disk * Maximum Storage not allocated for logs on SE)/ 100

n Client Logs capacity = Total Disk – Debug Logs capacity

Adjustments to these values are done based on configured value for minimum storage allocated
for logs and RAM of SE, and so on.
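
The following is a minimal sketch of this calculation. The parameter names are illustrative and the
example values are assumptions, not product defaults:

def log_disk_capacity_gb(total_disk_gb, max_storage_not_for_logs_pct):
    # Split the SE disk between debug logs and client logs per the formulas above.
    # max_storage_not_for_logs_pct is the SE runtime property for the maximum
    # storage on the disk not allocated for logs, expressed as a percentage.
    debug_logs_gb = (total_disk_gb * max_storage_not_for_logs_pct) / 100
    client_logs_gb = total_disk_gb - debug_logs_gb
    return debug_logs_gb, client_logs_gb

print(log_disk_capacity_gb(15, 30))   # (4.5, 10.5) for a 15 GB disk with 30% not allocated for logs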

PPS
PPS is generally limited by hypervisors. Limitations are different for each hypervisor and version.
PPS limits on Bare metal (no hypervisor) depend on the type of NIC used and how Receive Side
Scaling (RSS) is leveraged.

RPS (HTTP Requests Per Second)


RPS is dependent on the CPU or the PPS limits. It indicates the performance of the CPU and the
limit of PPS that the SE can push.

SSL Transactions Per Second


In addition to the hardware factors outlined above, TPS is dependent on the negotiated settings of
the SSL session between the client and VMware NSX Advanced Load Balancer. The following are
the points to consider for sizing based on SSL TPS:

n NSX Advanced Load Balancer supports RSA and Elliptic Curve (EC) certificates. The type of
certificate used along with the cipher selected during negotiation, determines the CPU cost of
establishing the session.

n RSA 2k keys are computationally more expensive compared to EC. NSX Advanced Load
Balancer recommends using EC with PFS, which provides the best performance and
the best possible security.

n RSA certificates can still be used as a backup for clients that do not support current industry
standards. As NSX Advanced Load Balancer supports both an EC certificate and an RSA
certificate on the same virtual service, you can gradually migrate to using EC certificates with
minimal user experience impact. For more information, see RSA versus EC certificate priority.

n Default SSL profiles of NSX Advanced Load Balancer prioritize EC over RSA and PFS over
non-PFS.

n EC using perfect forward secrecy (ECDHE) is about 15% more expensive than EC without PFS
(ECDH).

n SSL session reuse gives better SSL Performance for real-world workloads.

Bulk Throughput
The maximum throughput for a virtual service depends on the CPU and the NIC or hypervisor.

Using multiple NICs for client and server traffic can reduce the possibility of congestion or NIC
saturation. The maximum packets per second for virtualized environments vary dramatically and
will be the same limit regardless of the traffic being SSL or unencrypted HTTP.

See Performance Datasheet numbers for throughput numbers. SSL throughput numbers are
generated with standard ciphers mentioned in the datasheet. Using more esoteric or expensive
ciphers can harm throughput. Similarly, using less secure ciphers, such as RC4-MD5, will provide
better performance but are also not recommended by security experts.

Generally, the TPS impact is negligible on the CPU of the SE if SSL re-encryption is required, since
most of the CPU cost for establishing a new SSL session is on the server, not the client. For the
bulk throughput, the impact on the CPU of the SE will be double for this metric.

Concurrent Connections (also known as Open Connections)


While planning SE sizing, the impact of concurrent connections should not be overlooked. The
concurrent benchmark numbers floating around are generally for layer 4 in a pass-through or
non-proxy mode.

In other words, they are many orders of magnitude greater than what can be achieved with a full
proxy. Consider 40 KB of memory per SSL-terminated connection in the real world as a preliminary but conservative
estimate. The amount of HTTP header buffering, caching, compression, and other features play
a role in the final number. For more information, see SE Memory Consumption, including the
methods for optimizing for greater concurrency.

Scale-out capabilities across Service Engines


NSX Advanced Load Balancer can also scale traffic across multiple SEs. Scale-out allows linear
scaling of workload. Scale-Out is primarily useful when CPU, memory, and PPS become the
limiting factor.

Native auto-scale feature of NSX Advanced Load Balancer (L2 Scaleout) allows a virtual service to
be scaled out across four SEs. Using ECMP scale-out (L3 Scale-out), a virtual service can be scaled
out to multiple SEs with a linear scale for workloads.

The following diagram shows L2 scale-out.

[Figure: L2 scale-out. A virtual service is scaled out across three Service Engines: one primary SE
and two secondary SEs (secondary 1 and secondary 2).]

For more information on the scale-out feature, see Autoscale Service Engines.

Service Engine Performance Datasheet


For more information on the above limitations and sizing of the Service Engine based on your
applications behavior and load requirement, see Performance Datasheet.

The following are the points to be considered while sizing for different environments:

n vCenter and NSX-T cloud

n CPU reservation (configurable through se-group properties) is recommended

n RSS configuration for VMware based on the load requirement

n Baremetal and CSP (Linux Server Cloud) Deployment

n A single SE can scale up to 36 cores for Linux server cloud (LSC) deployment for Baremetal
and CSP

n PPS Limits on different clouds depend on either hypervisor or NIC used and how
dispatcher cores and RSS are configured. For more information on recommended
configuration and feature support, see TSO GRO RSS Features

n SR-IOV and PCIe Passthrough are used in some environments to bypass the PPS limitations
and provide line-rate speeds. For more information on support for SR-IOV, see Ecosystem
Support

n Sizing for public clouds should be decided based on the cloud limits and SE performance on
different clouds for different VM sizes

Per-App SE Mode
This section describes per-app Service Engine (SE) mode.

Per-app SE mode enables cost-effective deployment of load balancers on a dedicated
LB-per-application basis with a high degree of application isolation. This setting is at an SE group level.

When an SE group is configured in per-app SE mode, a vCPU counts at a 25% rate for licensing
usage. For example, each 2-vCPU SE in a per-app SE group utilizes half a vCPU license (2 * 0.25).

Per-app SE mode is limited to a maximum of 2 virtual services per SE so that customers can also
enable HA. All HA modes are supported. Per-VS license mode is not supported for DNS virtual
services.
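
The following is a minimal sketch of how per-app licensing usage can be estimated, assuming only
the 25% per-vCPU rate described above; the function name is illustrative only:

def vcpu_license_usage(vcpus_per_se, num_ses=1, per_app=True):
    # In per-app SE mode, each vCPU counts at a 25% rate for licensing usage.
    rate = 0.25 if per_app else 1.0
    return num_ses * vcpus_per_se * rate

print(vcpu_license_usage(2))   # 0.5 -- a single 2-vCPU per-app SE uses half a vCPU license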

Configuring Per-App SE Mode in the NSX Advanced Load Balancer UI
1 Navigate to Infrastructure > Service Engine Groups .

2 Select the SE group to be edited and click the pencil (edit) icon.

3 Per-app SE mode is available under High Availability & Placement Setting, as shown in figure
below. By default, per-app SE mode is disabled for any SE group.

4 Click on the check box to enable Per-app SE mode and then click on Save.

Note that the displayed value of Virtual Services per Service Engine automatically changes to 2.

Configuring Per-App SE Mode in the NSX Advanced Load Balancer CLI
Invoke the following commands from the NSX Advanced Load Balancer Controller shell prompt to
enable per-app SE mode when defining any SE group.

[admin Controller]: > configure serviceenginegroup <name-of-the-SE-group>


[admin Controller]: serviceenginegroup> per_app
Overwriting the previously entered value for per_app
[adminController]: serviceenginegroup> save
per_app | True

Restriction
One can only set per-app SE mode when first defining an SE group. It can’t be toggled on a
pre-existing setup. Any attempt to toggle the option will be ignored, and an error message will be
displayed.

Connecting SEs to Controllers When Their Networks Are Isolated

This topic explains how SE-Controller communication can be established from Service Engines
instantiated on a network isolated from the network of the Controller nodes.

The process of connecting starts with the first communication that a freshly-instantiated SE sends
to its parent Controller. Classic examples of this type of communication are:

1 The Controller cluster is protected behind a firewall, while its SEs are on the public Internet.

2 In a public-private cloud deployment, Controllers reside in the public cloud (e.g., AWS), while
SEs reside in the customer’s private cloud.

Implementation
In addition to the management node addresses that Controllers in the cluster can mutually see, a
second management IP address or a DNS-resolvable FQDN that is addressable by SEs connected
to an isolated network can be specified for each Controller. It is this second IP address or FQDN
that the Controller incorporates into the SE image used to spawn SEs. The NSX Advanced Load
Balancer has added the public-ip-or-name parameter to support this capability.

Setting the Parameter through the NSX Advanced Load Balancer CLI
In the initial release, the parameter is accessible only through the REST API and NSX Advanced
Load Balancer CLI. In the following CLI example a single-node cluster is employed.

[admin:my-controller-aws]: > configure cluster


Updating an existing object. Currently, the object is:
+---------------+----------------------------------------------+
| Field | Value |
+---------------+----------------------------------------------+
| uuid | cluster-223cc977-f0de-4c5e-9612-7b0254b3057d |
| name | cluster-0-1 |
| nodes[1] | |

| name | 10.10.30.102 |
| ip | 10.10.30.102 |
| vm_uuid | 005056b02776 |
| vm_mor | vm-222393 |
| vm_hostname | node1.controller.local |
+---------------+----------------------------------------------+
[admin:my-controller-aws]: cluster> nodes index 1
[admin:my-controller-aws]: cluster:nodes> public_ip_or_name 1.1.1.1

Explanation
n The SEs cannot address (route to) the Controller by using the address 10.10.30.102 from their
network.

n Administrative staff are aware that a NAT-enabled firewall is in place and programmed to
translate 1.1.1.1 to 10.10.30.102.

n The string parameter public_ip_or_name in the object definition of the first (and only) node
of the cluster is set to 1.1.1.1. So, Controller “cluster-0-1” knows that it must embed 1.1.1.1 (not
10.10.30.102 ) into the SE image it creates for spawning SEs.

n When an SE comes alive for the first time, it therefore addresses its parent Controller at IP
address 1.1.1.1.

n Because the NAT is completely transparent to that SE, the firewall passes the initial
communication on to IP address 10.10.30.102.

n Subsequent Controller-SE communications proceed as normal, as if the Controller and SEs
were on the same network.

Important Notes
n The public_ip_or_name field needs to be configured either for all the nodes in the cluster or
none of the nodes. A subset of nodes in the cluster cannot be configured.

n When this configuration is enabled, SEs from all clouds will always use the
public_ip_or_name to attempt to talk to the Controller. It is not currently possible to have
SEs from one cloud use the private network while SEs from another cloud use the NATed
network.

n It is recommended to enable this feature while configuring the cluster before SEs are created
and not modify this setting while SEs exist.

SE Memory Consumption
This topic discusses calculation of memory utilization within a Service Engine (SE) to estimate the
number of concurrent connections or the amount of memory that can be allocated to features
such as HTTP caching.

SEs support 1-256 GB memory. The minimum recommendation for NSX Advanced Load Balancer
is 2 GB. Providing more memory drastically increases the scale of capacity. Adjusting the priorities
for memory between concurrent connections and optimized performance buffers also improves
the scale of capacity significantly.

Memory allocation for NSX Advanced Load Balancer SE deployments in write access mode is
configured through Infrastructure > Cloud > SE Group Properties. Changes to the Memory
per Service Engine property only impact newly created SEs. For read or no access modes, the
memory is configured on the remote orchestrator such as vCenter. Changes to existing SEs need
the SE to be powered down prior to the change.

Memory Allocation
The following table details the memory allocation for SE:

n Base: 500 MB. Required to turn on the SE (Linux plus basic SE functionality).

n Local: 100 MB per vCPU core. Memory allocated per vCPU core.

n Shared: Remaining memory. The remaining memory is split between Connections and Buffers.

The shared memory pool is divided between two components, namely, Connections and Buffers.
A minimum of 10% must be allocated to each. Changing the Connection Memory Percentage slider
only impacts the newly created SEs and not the existing SEs.

Connections consist of the TCP, HTTP, and SSL connection tables. Memory allocated to
connections directly impacts the total concurrent connections that an SE can maintain.

Buffers consist of application layer packet buffers. These buffers are used to queue packets
from Layer 4 to Layer 7 for providing improved network performance. For example, if a client
is connected to the NSX Advanced Load Balancer SE at 1Mbps with large latency and the server
is connected to the SE at no latency and 10Gbps throughput, the server can respond to client
queries by transmitting the entire response and proceed to service the next client request. The
SE buffers the response and transmit it to the client at a much reduced speed, handling any
retransmissions without needing to interrupt the server. This memory allocation also includes
application centric features such as HTTP caching and improved compression.

The number of concurrent connections can be maximized by changing the priority towards
connections. The calculations for NSX Advanced Load Balancer are based on the default setting,
which allocates 50% of the shared memory for connections.

Concurrent Connections
Most Application Delivery Controller (ADC) benchmark numbers are based on an equivalent TCP
Fast Path, which uses a simple memory table with client IP:port mapped to server IP:port. Though
TCP Fast Path uses very less memory, enabling extremely large concurrent connection numbers,
it is not relevant to the vast majority of real world deployments which rely on TCP and application
layer proxying. The NSX Advanced Load Balancer benchmark numbers are based on full TCP
proxy (L4), TCP plus HTTP proxy with buffering and basic caching with DataScript (L7), and
the same scenario with Transport Layer Security Protocol (TLS) 1.2 between client and the NSX
Advanced Load Balancer.

The memory consumption numbers per connection listed below can be higher or lower. For
example, typical buffered HTTP request headers consume 2kb of memory, but they can be as high
as 48kb. The numbers below are intended to provide real world sizing guidelines.

Memory consumption per connection:

n 10 KB L4

n 20 KB L7

n 40 KB L7 + SSL (RSA or ECC)

To calculate the potential concurrent connections for an SE, use the following formula:

Concurrent L4 connections = ((SE memory - 500 MB - (100 MB * num of vCPU)) * Connection Percent) / Memory per Connection

To calculate layer 4 sessions (memory per connection = 10KB = 0.01MB) for an SE with 8 vCPU
cores and 8 GB RAM, using a Connection Percentage of 50%, the calculation is: ((8000 - 500 -
( 100 * 8 )) * 0.50) / 0.01 = 335k.

Memory   1 vCPU   4 vCPU   32 vCPU

1 GB     36k      n/a      n/a

4 GB     306k     279k     n/a

32 GB    2.82m    2.80m    2.52m

The calculations in the table are with 90% connection percentage. The table above shows the
number of concurrent connections for L4 (TCP Proxy mode) optimized for connections.
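
The following is a minimal sketch that reproduces this calculation in Python; the constants mirror
the allocation rules described in this section and the function name is illustrative only:

def concurrent_l4_connections(se_memory_mb, num_vcpu, connection_percent,
                              memory_per_connection_mb=0.01):
    # Base overhead is 500 MB plus 100 MB per vCPU core; the remaining (shared)
    # memory is split between connections and buffers per connection_percent (0.0-1.0).
    shared_mb = se_memory_mb - 500 - (100 * num_vcpu)
    connection_mb = shared_mb * connection_percent
    return int(connection_mb / memory_per_connection_mb)

print(concurrent_l4_connections(8000, 8, 0.50))   # ~335,000, matching the example above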

View Allocation through CLI


Type the following command from the CLI:

show serviceengine <SE Name> memdist

This command shows a truncated breakdown of memory distribution for the SE. The SE has one
vCPU core with 141 MB allocated for the shared memory’s connection table. The huge_pages value
of 91 means that there are 91 pages of 2 MB each. This indicates that 182 MB has been allocated for
the shared memory’s HTTP cache table.

[admin:grr-ctlr2]: > show serviceengine 10.110.111.10 memdist


+-------------------------+-----------------------------------------+
| Field | Value |
+-------------------------+-----------------------------------------+
| se_uuid | 10-217-144-19:se-10.217.144.19-avitag-1 |
| proc_id | C0_L4 |
| huge_pages | 2353 |
| clusters | 1900544 |
| shm_memory_mb | 398 |
| conn_memory_mb | 4539 |
| conn_memory_mb_per_core | 1134 |
| num_queues | 1 |
| num_rxd | 2048 |
| num_txd | 2048 |
| hypervisor_type | 6 |
| shm_conn_memory_mb | 238 |
| os_reserved_memory_mb | 0 |
| shm_config_memory_mb | 160 |
| config_memory_mb | 400 |
| app_learning_memory_mb | 30 |
| app_cache_mb | 1228 |
+-------------------------+-----------------------------------------+
[admin:grr-ctlr2]: >

Code and description:

n clusters: The total number of packet buffers (mbufs) reserved for an SE.

n shm_memory_mb: The total amount of shared memory reserved for an SE.

n conn_memory_mb: The total amount of memory reserved from the heap for the connections.

n conn_memory_mb_per_core: The amount of memory reserved from the heap for the connections
per core (conn_memory_mb / number of vCPUs). In this system, 4 vCPUs are available.

n shm_conn_memory_mb: The amount of memory reserved from the shared memory for the
connections.

n num_queues: The number of NIC queue pairs.

n num_rxd: The number of RX descriptors.

n num_txd: The number of TX descriptors.

n os_reserved_memory_mb: The amount of extra memory reserved for non-SE datapath processes.

n shm_config_memory_mb: The amount of memory reserved from the shared memory for
configuration.

n config_memory_mb: The amount of memory reserved from the heap for configuration.

hypervisor_type refers to the following list of hypervisor types and the respective values
associated with it:

Hypervisor Types Values

SE_HYPERVISOR_TYPE_UNKNOWN 0

SE_HYPERVISOR_TYPE_VMWARE 1

SE_HYPERVISOR_TYPE_KVM 2

SE_HYPERVISOR_TYPE_DOCKER_BRIDGE 3

SE_HYPERVISOR_TYPE_DOCKER_HOST 4

SE_HYPERVISOR_TYPE_XEN 5

SE_HYPERVISOR_TYPE_DOCKER_HOST_DPDK 6

SE_HYPERVISOR_TYPE_MICROSOFT 7

View Allocation through API


The total memory allocated to the connection table and the percentage in use can be viewed. Use
the following commands to query the API:

https://<IP Address>/api/analytics/metrics/serviceengine/se-<SE UUID>?metric_id=se_stats.max_connection_mem_total returns the total memory available to the
connection table. In the following response snippet, 141 MB is allocated.

"statistics": {
"max": 141,
}

https://<IP Address>/api/analytics/metrics/serviceengine/se-<SE UUID>?metric_id=se_stats.avg_connection_mem_usage&step=5 returns the average percentage of memory
used during the queried time period. In the result snippet below, 5% of the memory was in use.

"statistics": {
"min": 5,
"max": 5,
"mean": 5
},
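As an illustration, the two metrics can also be fetched programmatically. The following Python sketch assumes the Controller accepts HTTP basic authentication and uses placeholder values for the Controller IP, SE UUID, and credentials:

import requests

controller = "https://<Controller-IP>"          # placeholder
se_uuid = "se-<SE UUID>"                        # placeholder
auth = ("admin", "password")                    # replace with valid credentials

base = controller + "/api/analytics/metrics/serviceengine/" + se_uuid

# Total memory (in MB) available to the connection table
total = requests.get(base,
                     params={"metric_id": "se_stats.max_connection_mem_total"},
                     auth=auth, verify=False)

# Average percentage of connection memory in use, sampled with step=5
usage = requests.get(base,
                     params={"metric_id": "se_stats.avg_connection_mem_usage",
                             "step": 5},
                     auth=auth, verify=False)

print(total.json())   # look for the "statistics" block, for example {"max": 141}
print(usage.json())   # look for the "min"/"max"/"mean" percentages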


Shared Memory Caching


You can use the app_cache_percent field in the Service Engine Group properties to reserve a
percentage of the SE memory for Layer 7 caching. The default value is zero, which implies that
the NSX Advanced Load Balancer will not cache any object. This property takes effect on
SE boot up, so the SE must be rebooted or restarted after the configuration.

If virtual service application profile caching is enabled, on upgrading the NSX Advanced Load
Balancer from an earlier version, this field is automatically set to 15, and so 15% of SE memory will
be reserved for caching. This value is a percentage configuration and not an absolute memory
size.

After configuring the feature, restart the SE to enable the configuration.

Note The total memory allocated for caching must still leave the minimum allocation of 1 GB per
core. If the app_cache_percent setting would exceed this condition, the allocated memory will be
less than the configured percentage of the total system memory.

For example, app cache memory = total memory - (number of cores * 1 GB).

For a 10 GB, 9-core SE, an app_cache_percent of 15% would yield 1 GB instead of 1.5 GB.
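The cap can be expressed as a small calculation. The following Python sketch is illustrative only; the helper function is hypothetical and simply applies the formula above:

def effective_app_cache_gb(total_memory_gb, num_cores, app_cache_percent):
    # Requested cache size from the percentage setting
    requested = total_memory_gb * app_cache_percent / 100.0
    # Leave at least 1 GB per core for the rest of the system
    ceiling = max(total_memory_gb - num_cores * 1, 0)
    return min(requested, ceiling)

# 10 GB SE with 9 cores and app_cache_percent = 15 -> 1.0 GB instead of 1.5 GB
print(effective_app_cache_gb(10, 9, 15))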

Configuring using CLI


Enter the following commands to configure the app_cache_percent:

[admin:cntrlr]: > configure serviceenginegroup Serviceenginegroup-name


[admin:cntrlr]: serviceenginegroup> app_cache_percent 30

Overwriting the previously entered value for app_cache_percent


[admin:cntrlr]: serviceenginegroup> save

Configuring using UI
You can enable this feature using the NSX Advanced Load Balancer UI. Navigate to Infrastructure
> Service Engine Group and click the edit icon of the desired SE group. In the Basic Settings tab,
under Memory Allocation section, enter the value meant to be reserved for Layer 7 caching in the
Memory for Caching field.


Reduce the Core File Size


The following fields have been introduced to control whether certain shared memory sections are
included in the core file:

n core_shm_app_learning - Include shared memory for app learning in the core file.

n core_shm_app_cache - Include shared memory for the app cache in the core file.


By default, these options are set to false. Use the following commands and enable the options
core_shm_app_learning and core_shm_app_cache.

[admin:cntrlr]: serviceenginegroup> core_shm_app_learning


Overwriting the previously entered value for core_shm_app_learning
[admin:cntrlr]: serviceenginegroup> core_shm_app_cache
Overwriting the previously entered value for core_shm_app_cache
[admin:cntrlr]: serviceenginegroup> save

Note Restart or reboot the SE for this configuration to take effect.

Per Virtual Service Level Admission Control


Connection Refusals to Incoming Requests on a Virtual Service:

The connection refusals on a particular virtual service can be due to the high consumption of
packet buffers by that virtual service.

When the packet buffer usage of a virtual service is greater than 70% of the total packet buffers,
the connection refusals start. This might mean that there is a slow client that is causing a packet
buffer build up on the virtual service.

This issue can be alleviated by increasing the memory allocated per SE or by identifying and
limiting the number of requests by slow clients using a network security policy.

Per virtual service level admission control is disabled by default. To enable this setting, set the
Service Engine Group option per_vs_admission_control to True.

[admin Controller]: > configure serviceenginegroup <name-of-the-SE-group>


[admin Controller]: serviceenginegroup> per_vs_admission_control
Overwriting the previously entered value for per_vs_admission_control
[admin Controller]: serviceenginegroup> save
| per_vs_admission_control | True |

The connection refusals stop when the packet buffer consumption on the Virtual Service drops to
50%. The sample logs generated show admission control:

C255 12:46:28.774900 [se_global_calculate_per_vs_mbuf_usage:1561] Packet buffer usage for the


Virtual Service: 'vs-http' UUID: 'virtualservice-e20cfff1-173f-4f4c-9028-4ae544116191' has
breached the threshold 70.0%, current value is 71.8%. Starting admission control.
C255 12:49:01.285088 [se_global_calculate_per_vs_mbuf_usage:1575] Packet buffer usage for
the Virtual Service: 'vs-http' UUID: 'virtualservice-e20cfff1-173f-4f4c-9028-4ae544116191' is
below the threshold 50.0%, current value is 46.7%. Stopping admission control.

The connection refusals and packet throttles due to admission control can be monitored using the
se_stats metrics API:

https://<Controller-IP>/api/analytics/metrics/serviceengine/se-<SE-UUID>?metric_id=se_stats.sum_connection_dropped_packet_buffer_stressed,se_stats.sum_packet_dropped_packet_buffer_stressed
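The following Python sketch, using the same assumptions as the earlier metrics example (basic authentication, placeholder values), shows one way to poll these counters:

import requests

url = "https://<Controller-IP>/api/analytics/metrics/serviceengine/se-<SE-UUID>"
metric_ids = ("se_stats.sum_connection_dropped_packet_buffer_stressed,"
              "se_stats.sum_packet_dropped_packet_buffer_stressed")

resp = requests.get(url, params={"metric_id": metric_ids},
                    auth=("admin", "password"), verify=False)

# Non-zero sums indicate refusals and packet drops caused by admission control
print(resp.json())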


To know how to resolve intermittent connection refusals on NSX Advanced Load Balancer SEs
correlating to higher traffic volume, see Connection Refusals.

X-Forwarded-For Header Insertion


By default, NSX Advanced Load Balancer Service Engines (SEs) source-NAT (SNAT) traffic
destined to servers. Due to SNAT, logs on the application servers show the IP address
of the SE rather than the original client’s IP address. Most application servers can leverage the
XFF header as the source IP address for logging or blocklisting.

For HTTP traffic, NSX Advanced Load Balancer can be configured to insert an X-Forwarded-For
(XFF) header in client-server requests to include the original client IP addresses in the logging
requests. This feature is supported for IPv6 in NSX Advanced Load Balancer.

To include the client’s original IP address in HTTP traffic logs, enable NSX Advanced Load
Balancer to insert an XFF header into the client traffic destined for the server. XFF insertion can be
enabled in the HTTP application profile attached to the virtual service.

1 Navigate to Templates > Profiles > Application.

2 Click the edit icon next to an HTTP application profile to open the profile editor.

3 Within the General tab, select the X-Forwarded-For check box.

Note Optionally the header can be renamed using the XFF Alternate Name field.

4 Click Save.

The profile change affects any virtual services that use the same HTTP application profile.


When XFF header insertion is enabled, the SE checks the headers of client-server packets for
existing XFF headers. If XFF headers already exist, the SE first removes any pre-existing XFFs,
then inserts its own XFF header. This is done to prevent clients from spoofing their IP addresses.

Keeping Pre-existing XFF Headers


There are times when this behavior (removing pre-existing XFF headers) is not desired, such as
when multiple proxies may be SNATing and inserting XFF headers. In this case, to insert an XFF
header without removing pre-existing XFF headers, use either a DataScript or an HTTP Request
Policy.

Example:

avi.http.add_header("XFF", avi.vs.client_ip())

Resetting PCAP TX Ring for Non-DPDK Deployment


This section discusses the steps to enable and disable the PCAP tx_ring using NSX Advanced
Load Balancer shell prompt.

The tx_ring method is the default transmission mechanism in the current non-DPDK
environments for NSX Advanced Load Balancer SEs. The PCAP tx_ring method consumes more
memory compared to the PCAP socket mechanism. Due to the higher memory consumption, the
rest of the processes might run into memory allocation failures in SEs having limited resources.

Due to resource constraints on the system, the tx_ring mode can cause packet drop issues
in the transmission path in the non-DPDK deployment. Whenever the issue occurs with the
default tx_ring method, an alternative raw socket approach is used to transfer the packets in
the transmission path.

Enabling Raw Socket Effect


If the system is running with <= 2 GB RAM, the raw socket method is used to transfer the packets.
In other cases, tx_ring is the default transmission mechanism. The pcap_tx_mode knob
helps override the default behavior by forcing the SE to use the tx_ring method or the raw
socket method. The configuration is part of the SE group properties and takes effect once the SE is
restarted.

enable_pcap_tx_ring is the configuration parameter for the tx_ring transmission option. To
enable the raw socket effect, disable the enable_pcap_tx_ring flag using the NSX Advanced Load
Balancer CLI and restart all the respective SEs.

Note This applies to all non-DPDK environments.


Disabling PCAP_TX_Ring
Login to the NSX Advanced Load Balancer shell prompt and use the configure
serviceenginegroup mode to disable the enable_pcap_tx_ring transmission mode, as shown
below:

[admin:<controller-ip>]: > configure serviceenginegroup Default-Group


[admin:<controller-ip>]: serviceenginegroup> no enable_pcap_tx_ring
[admin:<controller-ip>]: serviceenginegroup> save
[admin:<controller-ip>]: >

Once the above command is executed, restart the affected SEs.

Enabling the pcap_tx_ring Option through the NSX Advanced Load Balancer CLI
Login to the NSX Advanced Load Balancer shell prompt using credentials. Use the configure
serviceenginegroup mode to enable pcap_tx_mode as shown below:

[admin:<controller-ip>]: > configure serviceenginegroup Default-Group


[admin:<controller-ip>]: serviceenginegroup>pcap_tx_mode pcap_tx_ring
[admin:<controller-ip>]: serviceenginegroup> save
[admin:<controller-ip>]: >

Enabling the pcap_tx_socket Option through the NSX Advanced Load Balancer CLI
[admin:<controller-ip>]: > configure serviceenginegroup Default-Group
[admin:<controller-ip>]: serviceenginegroup>pcap_tx_mode pcap_tx_socket
[admin:<controller-ip>]: serviceenginegroup> save
[admin:<controller-ip>]: >

Re-enabling PCAP_TX_Ring
To switch the transmission mode back to the tx_ring method, log into the NSX Advanced Load
Balancer CLI and re-enable the method as shown below:

[admin:<controller-ip>]: > configure serviceenginegroup Default-Group


[admin:<controller-ip>]: serviceenginegroup>enable_pcap_tx_ring
[admin:<controller-ip>]: serviceenginegroup> save
[admin:<controller-ip>]: >

Preserve Client IP
This section discusses the configuration and scope of preserve client IP address.


By default, NSX Advanced Load Balancer Service Engines (SEs) do source NAT-ing (SNAT) of
traffic destined to back-end servers. Due to SNAT, the application servers see the IP address
of the SE interfaces and are unaware of the original client’s IP address. Preserving a client’s IP
address is a desirable feature in many cases, for example, when servers have to apply security and
access-control policies. Two ways to solve this problem in NSX Advanced Load Balancer are:

n X-Forwarded-For Header Insertion — Limited to HTTP(S) application profiles only

n PROXY Protocol Support — Limited to TCP traffic on L4 application profiles only

Both of the above require the back-end servers to be capable of supporting the respective
capability.

A third and more generic approach is for the SE to use the client IP address as the source
IP address for load-balanced connections from the SE to the back-end servers. This capability is
called preserve client IP. It is one component of the NSX Advanced Load Balancer default gateway
feature and is a property that can be turned on or off in application profiles.

Note Enable IP Routing with Service Engine option is not mandatory to select Preserve Client IP
Address.

For more information on Enable IP routing, see Network Service Configuration.


Scope of Preserve Client IP


n Enabling IP routing is not a prerequisite for enabling the Preserve Client IP Address option.

n It is not mandatory for the HA mode to be legacy HA (active/standby).

However, return traffic must reach the SE. You can either use legacy HA, configure a floating
interface IP, and set it as the default gateway on the servers to attract return traffic, or

set up routing in the back end to ensure that return traffic for client-IP-preserved requests
sent to the back-end servers comes back to the SE as needed.

Mutual Exclusions With Other Features


n Preserving the client IP address is mutually exclusive with SNAT-ing the virtual services.

n Enabling connection multiplexing in an HTTP(S) application profile is incompatible with
selecting the Preserve Client IP Address option.

n NSX Advanced Load Balancer will always NAT the back-end connection in these cases:

n When client and server IPs are in the same subnet.

n When the back-end servers are not on networks directly-attached to the SE, i.e., they are
a hop or more away.


Example Use-Case

(Topology diagram: a legacy HA active/standby pair of SEs in a Linux server cloud or VMware
no-access cloud (no auto-creation of SEs). The pair shares a floating IP 10.10.40.11/24 on the
front-end network 10.10.40.0/24, where the front-end router (10.10.40.3/24) holds routes to the
back-end networks 10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24 with next hop 10.10.40.11.)

Enable IP routing on the SE group before enabling preserve client IP on an application profile
used to create virtual services on that SE group.

In addition,

n Configure static routes to the back-end server networks on the front-end router, with the next
hop as the front-end floating IP

n Configure the back-end servers’ default gateway as the SE and

n Configure the SE’s default gateway as the front-end router.

Configure Preserve Client IP


Consider a simple two-leg setup with the back-end server(s) in the 10.10.10.0/24 network (always
a directly-connected network) and the front-end router in the 10.10.40.0/24 network. Following
are the steps to configure the feature:


Create a virtual service using the advanced-mode wizard. Configure its application profile
to preserve client IPs as follows: Applications > Create Virtual Service > Advanced > Edit
Application Profile.

Note that this configuration needs to be done before enabling any virtual service in the chosen
application profile. Once an application profile is configured to preserve client IP, it preserves the
client IP for all virtual services using this application profile.

: > configure applicationprofile System-HTTP


: applicationprofile> preserve_client_ip
Overwriting the previously entered value for preserve_client_ip
: applicationprofile> save

For deploying preserve client IP in NSX-T overlay cloud, see Preserve Client IP for NSX-T Overlay.



High Availability and Redundancy
3
This section explains the Controller and Service Engine high availability.

The following are the two types of HA:

n NSX Advanced Load Balancer Controller HA: This provides node-level redundancy for the
Controllers. A single Controller is deployed as the leader node while the two additional
Controllers are added as the follower nodes.

n NSX Advanced Load Balancer Service Engine HA: This provides SE-level redundancy within
an SE group. If an SE within the group fails, HA heals the failure and compensates for the
reduced site capacity, which means it spins up a new SE in place of the one that has failed.

Note To ensure the highest level of uptime for a site, including NSX Advanced Load Balancer
software upgrades, you need to ensure the availability for both NSX Advanced Load Balancer
Controllers and NSX Advanced Load Balancer Service Engines.

HA for the Controllers and Service Engines are separate features which are configured separately.
HA for the Controllers is a system administration setting.

Note To ensure application availability in the presence of whole-site failures, NSX Advanced Load
Balancer recommends the use of NSX Advanced Load Balancer GSLB. For more details, see the
GSLB guide.

This chapter includes the following topics:

n Control Plane High Availability

n Data Plane High Availability

n Virtual Service Scaling

n Throughput

n Virtual Service Policies

n Controller Interface and Route Management

n Auto Scaling

n NSX Advanced Load Balancer SE Behavior on Gateway Monitor Failure


Control Plane High Availability


As a best practice, you can deploy a set of three NSX Advanced Load Balancer Controllers as a
high availability cluster. In a cluster deployment, one of the Controllers acts as the leader.
It performs load balancing and configuration management for the cluster. The other two
Controllers are followers; they collaborate with the leader to perform data collection from
SEs and to process analytics data.

High Availability for NSX Advanced Load Balancer Controllers


NSX Advanced Load Balancer can run with a single Controller (single-node deployment) or with
a three-node Controller cluster. In a deployment that uses a single Controller, that Controller
performs all the administrative functions and also gathers and processes all the analytics data.

You can create a three-node cluster by adding two additional nodes. This three-node cluster
provides node-level redundancy for the Controller and maximizes performance for CPU-
intensive analytics functions. In a single-node deployment, the single Controller performs all
administrative functions and collects and processes all the analytics data. In a three-node cluster,
these tasks are distributed across the nodes.

In a three-node Controller cluster, one node is the primary (leader) node and performs the
administrative functions. The other two nodes are followers (secondaries), and perform data
collection for analytics, in addition to standing by as backups for the leader.

Operation of NSX Advanced Load Balancer Controller High Availability
This section explains how high availability (HA) operates within NSX Advanced Load Balancer
Controller cluster.

Quorum
The Controller-level HA requires a quorum of NSX Advanced Load Balancer Controller nodes to
be up. In a three-node Controller cluster, quorum can be maintained if at least two of the three
Controller nodes are up. If one of the Controllers fails, the remaining two nodes continue service
and NSX Advanced Load Balancer continues to operate. However, if two of the three nodes go
down, the entire cluster goes down, and NSX Advanced Load Balancer stops working.

Failover
Each Controller node in a cluster periodically sends heartbeat messages to the other Controller
nodes in a cluster, through an encrypted SSH tunnel using TCP port 22 (port 5098 if running as
Docker containers).


(Diagram: the primary (leader) Controller node exchanges heartbeat messages with the two
follower nodes.)

The heartbeat interval is ten seconds. The maximum number of consecutive heartbeat messages
that can be missed is four. If one of the Controllers does not hear from another Controller for 40
seconds (four missed heartbeats), then the other Controller is assumed to be down.

If only one node is down, then the quorum is still maintained and the cluster can continue to
operate.

If a follower node goes down but the primary (leader) node remains up, then the access to virtual
services continues without any interruption.

(Diagram: a follower node is down and does not reply to heartbeats; the primary (leader) node
and the other follower remain up.)

n If the primary (leader) node goes down, the member nodes form a new quorum and elect a
cluster leader. The election process takes about 50-60 seconds and during this period, there
is no impact on the data plane. The SEs will continue to operate in the Headless mode, but
the control plane service will not be available. During this period, you cannot create a VIP
through LBaaS or use the NSX Advanced Load Balancer user interface, API, or CLI.


(Diagram: the primary (leader) node is down and does not reply to heartbeats; the remaining
nodes operate in headless mode during the election of a new primary (leader).)

Converting a Single-Node Deployment to a Three-node Cluster


This section explains the process to convert a single-node deployment to a three-node cluster.

In this procedure, the NSX Advanced Load Balancer Controller node that is already deployed
in the single-node deployment is referred to as the incumbent NSX Advanced Load Balancer
Controller.

The following are the steps to convert a single-node NSX Advanced Load Balancer Controller
deployment into a three-node deployment:

1 Install two new Controller nodes. During installation, configure only the following settings
for each node:

n Node management IP address

n Gateway address

2 Connect the management interface of each new Controller node to the same network as the
incumbent Controller. After the incumbent Controller detects the two new Controller nodes,
the incumbent Controller becomes the primary (leader) Controller node for the three-node
cluster.

3 Use a web browser to navigate to the management IP address of the primary (leader)
Controller node.

4 Navigate to Administration > Controller > Nodes and click Edit. The Edit Cluster window
appears.

5 In the Controller Cluster IP field, specify the shared IP address for the Controller cluster.

6 In the Public IP or Host Name field, specify the management IP address of each new Controller
node.

Note To configure cluster in AWS Cloud, each node of the cluster requires an admin account
password.


After execution of the above steps, the incumbent Controller becomes the primary (leader) for the
cluster and invites the other Controllers to the cluster as members.

The NSX Advanced Load Balancer then performs a warm reboot of the cluster. This process can
take two or three minutes. After the reboot, the configuration of the primary (leader) Controller is
synchronized to the new member nodes once the cluster appears online.

(Diagram: the incumbent Controller (10.30.163.68) becomes the primary (leader) with cluster IP
10.30.163.63 and invites the nodes at 10.30.163.64 and 10.30.163.65 to join the cluster.)

To know more about cluster HA, refer to the links below:

n Controller Cluster IP

n Clustering NSX Advanced Load Balancer Controllers of Different Networks

n Impact of NSX Advanced Load Balancer Controller Failure

n How to Enable Per-app SE mode for a Service Engine Group

Data Plane High Availability


This section explains the data plane high availability modes for the SEs: elastic HA and legacy HA.

NSX Advanced Load Balancer Service Engines groups support the following HA modes:

n Elastic HA: This provides fast recovery for individual virtual services following failure of the
Service Engine. Depending on the mode, the virtual service is either already running on
multiple SEs or is quickly placed on another SE. The following modes of cluster HA are
supported:

n Active/Active

n N+M

n Legacy HA for the Service Engines: This emulates the operation of a two-device hardware
active/standby HA configuration. The active SE carries all the traffic for a virtual service placed
on it. The other SE in the pair is the standby for the virtual service, and it does not carry traffic
for that virtual service while the active SE is healthy.


Elastic High Availability for NSX Advanced Load Balancer Service Engines
This section explains elastic HA for NSX Advanced Load Balancer Service Engines.

High Availability Modes


NSX Advanced Load Balancer supports the following two modes:

n Service Engine Elastic HA mode: This combines scale-out performance and high availability

n N+M mode (the default mode)

n Active/Active

n Legacy HA mode: This enables a smooth migration from legacy appliance-based load
balancers.

Elastic HA N+M Mode


The N+M mode is the default mode of Elastic HA. In this mode, each virtual service is placed on
only one SE.

The 'N' in N+M is the minimum number of SEs required to place virtual services in the SE group.
This calculation is performed by the NSX Advanced Load Balancer Controller based on Virtual
Services per Service Engine parameter. The 'N' varies over time as the virtual services are
placed on or removed from the group. The maximum number of Service Engines is labeled 'E'.

The 'M' in N+M is the number of additional SEs the NSX Advanced Load Balancer Controller spins
up in order to handle 'M' SE failures without reducing the capacity of the SE group. The
'M' appears in the Buffer Service Engines field.

The minimum scale per virtual service is labeled as 'B' and the maximum scale per virtual service is
labeled as 'C'.

Note The buffer SEs in N+M mode represent the number of SE failures that the system can tolerate
while keeping the virtual services up and operational (placed on at least one SE), though not at the
same capacity. In the SE group, if a minimum scale per virtual service is set and an additional SE is
required, increase the buffer SEs according to the calculations.

You can select N+M mode parameters by navigating to Infrastructure > Cloud Resources > Service
Engine Group. You can either create a new SE group or edit the existing one. Select N+M (buffer)
option in Elastic HA under High Availability Mode section in Basic Settings tab. Here, the two
parameters are set.

The Advanced HA & Placement section in Advanced tab shows that the three parameters are set.

Example: Elastic HA N+M Mode

The left-hand side of the image below shows twenty virtual service placements on an SE
group.


With virtual services per SE set to 8, N is 3 (20/8 = 2.5, which rounds to 3).

With M = 1, a total of N+M = 3 + 1 = 4 SEs are required in the group.
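The sizing arithmetic can be summarized in a short sketch. The following Python snippet is illustrative only (the function is hypothetical) and reproduces the N and N+M calculation described above:

import math

def n_plus_m_size(num_virtual_services, vs_per_se, buffer_se):
    # N: minimum number of SEs needed to place all virtual services
    n = math.ceil(num_virtual_services / vs_per_se)
    # Total SEs in the group: N plus the M buffer SEs
    return n, n + buffer_se

# 20 virtual services, 8 virtual services per SE, M = 1 -> (3, 4)
print(n_plus_m_size(20, 8, 1))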

Note that no single SE in the group is completely idle. The Controller places virtual services on all
available SEs. In N+M mode, NSX Advanced Load Balancer ensures enough buffer capacity exists
in aggregate to handle one (M=1) SE failure. In this example, each of the four SEs has five virtual
services placed. A total of 12 spare slots are still available for additional virtual service placements,
which is sufficient to handle one SE failure.

The right side of the below image shows the SE group just after SE2 has failed. The five virtual
services in SE2 have been placed onto spare slots found on the surviving SEs, namely SE1, SE3, and SE4.

(Diagram: virtual service placements on SE1 through SE4 prior to SE failure, and the new
placements just after SE2 fails.)

The imbalance in loading disappears over time if one or both of two things happens:

n New virtual services are placed on the group. As many as four virtual services can be placed
without compromising the M=1 condition. They will be placed on SE5 because NSX Advanced
Load Balancer chooses the least-loaded SE first.

n The Auto-Rebalance option is selected.

With 'M' set to 1, the SE group is single-SE fault tolerant. Customers desiring multiple-SE
fault tolerance can set 'M' higher. NSX Advanced Load Balancer permits 'M' to be dynamically
increased by the administrator without interrupting any services. Consequently, you can start with
M=1 (typical of most N+M deployments), and increase it if the conditions warrant.

If an N+M group is scaled out to maximum number of Service Engines and 'N' times virtual
services per SE is placed, then NSX Advanced Load Balancer will permit additional virtual service
placements (into the spare capacity represented by 'M'), but an HA_COMPROMISED event will be
logged.

For a Write Access cloud, the Controller will attempt to recover the failed SE after five minutes
by rebooting the virtual machine. After a further five minutes, the Controller will attempt to delete
the failed SE virtual machine after which a new SE will be spun up to restore the configured buffer
capacity.


(Diagram: back to the M=1 state, with virtual services spread across SE1 through SE5.)

As shown in the above image, with only four slots remaining just after the five re-placements, if NSX
Advanced Load Balancer’s orchestrator mode is set to write access, NSX Advanced Load Balancer
spins up SE5 to meet the M=1 condition, which in this case requires at least eight slots to be available
for re-placements.

Note To provide time to identify the cause of a failure, the first SE that fails in an SE group is not
automatically deleted even after five minutes. You can then perform troubleshooting on the failed
SE and delete the virtual machine manually if restoration is not possible. The Controller will delete
the SE virtual machine after three days if you have not manually deleted the same.

Elastic HA Active/Active
In active/active mode, NSX Advanced Load Balancer places each virtual service on more than one
SE, as specified by the Minimum Scale per Virtual Service parameter (the default minimum is two). If
an SE in the group fails, then:

n Virtual services that had been running are not interrupted. They continue to run on other SEs
with degraded capacity until they can be placed once again.

n If NSX Advanced Load Balancer’s orchestrator mode is set to write access, a new SE is
automatically deployed to bring the SE group back to its previous capacity. After waiting for
the new SE to spin up, the Controller places on it the virtual services that had been running on
the failed SE.

Example: Elastic HA Active/Active

The images below illustrate SE failure and full recovery. The first image shows an SE group with the
following specifications:

n Virtual Services per Service Engine = 3 (label A in the UI)

n Minimum Scale per Virtual Service = 2 (label B)

n Maximum Scale per Virtual Service = 4 (label C)

n Max Number of Services Engines = 6 (label E)


Over a span of time, five virtual services (VS1-VS5) are placed. VS3 is scaled from its initial two
placements to a third placement, illustrating NSX Advanced Load Balancer’s support for 'N-way active'
virtual services. The below image depicts five virtual services placed on an active/active SE group.

(Diagram: VS1 through VS5 placed across SE1 through SE6, with VS3 placed on three SEs.)

The below image shows that SE3 has failed. As a result, one of the two VS2 instances and
one of the three VS3 instances have failed. However, the other three virtual services (VS1, VS4, VS5)
are unaffected. Neither VS2 nor VS3 is interrupted, because instances of these virtual services were
previously placed on SE4, SE5, and SE6 and continue to work with degraded performance. In the
below image, you can also view a single SE failure in an active/active SE group.

(Diagram: the same placements as above, with SE3 failed.)

The NSX Advanced Load Balancer Controller deploys SE7 as a replacement for SE3 and places
VS2 and VS3 on it which brings both virtual services up to their prior level of performance. The
below image shows the recovery of a single SE in an active/active SE group.

Compact Placement
When Compact placement is ON, NSX Advanced Load Balancer uses the minimum number of SEs
required. When Distributed placement is ON, NSX Advanced Load Balancer uses as many SEs
as required within a limit allowed by maximum number of Service Engines. By default, Compact
placement is ON for Elastic HA, N+M (buffer) mode. And by default, Distributed placement is ON
for Elastic HA, Active/Active mode.

Example: Compact Placement Example


The below image shows the effect of compact placement on an Elastic HA, N+M mode SE group
where the maximum number of Service Engines is four. In both the compact placement and
distributed placement examples, you can observe the following:

n Eight virtual services are created in sequence.

n After VS1 is placed, SE2 is deployed because M=1 (handles one SE failure).


n When VS2 requires placement, NSX Advanced Load Balancer assigns it to the idle SE2 to make
the best use of all the running SEs.

At this point, placement behavior diverges, as described below:

n Compact Placement ON: Subsequent placements of VS3 through VS8 do not require
additional SEs to maintain HA (M=1 => one SE failure). With Compact placement ON, NSX
Advanced Load Balancer prefers to place virtual services on existing SEs.

n Distributed Placement ON: Subsequent placements of VS3 and VS4 result in scaling the SE
group out to its maximum of four SEs, illustrating NSX Advanced Load Balancer’s preference
for performance at the expense of its resources. After reaching four deployed SEs, which is
the maximum number of SEs for this group, NSX Advanced Load Balancer places virtual
services VS5 through VS8 on pre-existing, least-loaded SEs. The below image shows the
Elastic HA N+1 SE group with Compact placement ON and OFF, with eight successive virtual
service placements as shown.

(Diagram: with Compact placement ON, the eight virtual services are stacked on SE1 (VS1, VS3,
VS5, VS7) and SE2 (VS2, VS4, VS6, VS8); with Compact placement OFF, they are spread across
SE1 through SE4, two per SE.)

Interaction of Compact Placement with Elastic HA Modes


The compact placement interacts in a subtle way with the elastic HA modes with respect to the
timing.

n Elastic HA N+M mode: Because compact placement is ON by default in N+M mode, the
NSX Advanced Load Balancer Controller prefers to immediately place virtual services densely
onto existing SEs rather than deferring placement until additional spare capacity is deployed.

n Elastic HA active/active mode: Because the distributed placement option is ON by default
in active/active mode, the NSX Advanced Load Balancer Controller delays the placement of
VS2 and VS3 until the replacement SE7 spins up. The affected virtual services are not placed on the
four surviving SEs (SE1, SE2, SE4, SE5). Instead, both virtual services are placed on a fresh
SE so that all the virtual services perform as they did before the failure.


Configuring Auto-Rebalance
The Auto-Rebalance option applies only to the Elastic HA modes, and it is OFF by default. If
Auto-Rebalance is left in its default OFF state, an event is logged instead of automatically
performing migrations. To enable Auto-Rebalance, see How to Configure Auto-rebalance using
NSX Advanced Load Balancer CLI.

How to Configure Auto-rebalance using NSX Advanced Load Balancer CLI


The auto-rebalance feature helps in automatically migrating or scaling virtual services when the
load on the Service Engines (SE) goes beyond or falls below the configured threshold.

The following are the trigger types that aggregate at the NSX Advanced Load Balancer Service
Engine level:

n Packets per second (PPS)

n Throughput in Mbps

n Open connections

n CPU

The minimum and the maximum threshold is configured along with one of these options for the
trigger type. By default, auto-rebalance is based on CPU trigger type.

Instructions
The following are the steps to configure auto-rebalance feature on a NSX Advanced Load
Balancer Service Engine:

1 Login to the NSX Advanced Load Balancer Controller CLI and enter the shell mode by using
the shell command.

2 On prompt, specify the Username and Password.

3 Optionally, use the switchto command to switch to the respective tenant or cloud for which
auto_rebalance can be configured.

switchto tenant tenant-name


switchto cloud cloud-name

4 By default, the auto_rebalance option is set to False. To enable it, log in to the Controller, bring
up the shell, and set auto_rebalance to True.

[admin:1-Controller-2]: > configure serviceenginegroup Default-Group


[admin:1-Controller-2]: serviceenginegroup> auto_rebalance
Overwriting the previously entered value for auto_rebalance
[admin:1-Controller-2]: serviceenginegroup> save


5 Configure the auto-rebalance parameters.

configure serviceenginegroup Default-Group


auto_rebalance_interval interval-value
auto_rebalance_criteria option
auto_rebalance_capacity_per_se integer-value

The auto_rebalance_interval interval-value sets the interval at which auto-rebalance is
triggered on reaching the configured threshold. The interval-value is in seconds and the
recommended value is 300 seconds. For instance, auto_rebalance_interval 300.

The auto_rebalance_criteria option defines the auto-rebalance criteria. The available options
are as follows:

n se_auto_rebalance_cpu

n se_auto_rebalance_mbps

n se_auto_rebalance_open_conns

n se_auto_rebalance_pps

For instance, auto_rebalance_criteria se_auto_rebalance_cpu.

The auto_rebalance_capacity_per_se integer-value defines the maximum allowed value for the
specified criteria. For instance, the integer-value will be the maximum PPS in the case of
se_auto_rebalance_pps and the maximum Mbps in the case of se_auto_rebalance_mbps. For
instance, auto_rebalance_capacity_per_se 200000.

6 Enter Save to save the configuration.

Configuring Auto-rebalance Threshold


The auto-rebalance threshold can be configured based on the SE group. You can use the
following commands to configure the maximum and minimum threshold values per SE groups.

Note The object used to configure thresholds is the same for all the trigger types (max_cpu_usage,
min_cpu_usage) and is a part of the SE group configuration.

configure serviceenginegroup Default-Group


max_cpu_usage value
min_cpu_usage value
save

The max_cpu_usage value defines the maximum threshold value for CPU. The value is in
percentage. For instance, max_cpu_usage 70.

The min_cpu_usage value defines the minimum threshold value for CPU. The value is in
percentage. For instance, min_cpu_usage 30.


Use Case Scenario


This section covers two possible scenarios and the associated configuration.

Scenario 1: Auto-rebalance is based on the PPS trigger, with a scale-out threshold of 70% (of 200,000
PPS, that is, when it exceeds 140,000 PPS) and a scale-in threshold of 30% (of 200,000 PPS,
that is, when it reduces below 60,000 PPS).

switchto tenant Avi


switchto cloud azure
configure serviceenginegroup Default-Group
auto_rebalance_interval 300
auto_rebalance_criteria se_auto_rebalance_pps
auto_rebalance_capacity_per_se 200000
max_cpu_usage 70
min_cpu_usage 30
save

Scenario 2: Auto-rebalance is based on the open connections trigger, with a scale-out threshold of
60% (of 5,000 open connections, that is, when it exceeds 3,000 open connections) and a scale-in
threshold of 20% (of 5,000 open connections, that is, when it reduces below 1,000 open
connections).

switchto tenant Avi-azure


switchto cloud azure
configure serviceenginegroup Default-Group
auto_rebalance_interval 300
auto_rebalance_criteria se_auto_rebalance_open_conns
auto_rebalance_capacity_per_se 5000
max_cpu_usage 60
min_cpu_usage 20
save

Configuring Elastic HA
This sections explains the steps to configure elastic HA.

The following are the steps to configure elastic HA for an SE group.

1 Navigate to Infrastructure > Clouds.

2 Click the cloud name, for instance, Default-Cloud.

3 Click Service Engine Group.

4 Click the edit icon next to the SE group name, or click Create to create new one. Fill out the
requisite fields.

5 Click Save.


Notes and Recommendations


Virtual services are not disrupted during an SE upgrade, except in Elastic HA N+M buffer mode,
where virtual services are placed on just one SE (not scaled out). For more information on rolling
Service Engine upgrades, see Upgrading NSX Advanced Load Balancer Software.

Elastic HA N + M mode (the default) is applied to the applications with the following conditions:

1 The SE performance required by any application can be delivered by a fraction of one SE's
capacity. Hence, each virtual service is placed on a single SE.

2 Applications can tolerate brief outages, though not longer than it takes to place a virtual
service on an existing SE and plumb its network connections. This should take only a few
seconds.

The pre-existence of buffer SE capacity, coupled with the default setting of compact placement
ON, speeds up the replacement of virtual services that are affected by a failure. The NSX
Advanced Load Balancer does not wait for a substitute SE to spin up; it immediately places
affected virtual services on spare capacity.

Most applications' HA requirements are satisfied by M=1. However, in development or test
environments, 'M' can be set to 0, as developers or test engineers can wait for a new SE to
spin up before the virtual service is back online.

Elastic HA active/active mode is applied to mission-critical applications where the virtual services
must continue without interruption during the recovery period.

Additional Information
Difference between HA_MODE_SHARED and HA_MODE_SHARED_PAIR options available on
Avi CLI

Legacy HA for NSX Advanced Load Balancer Service Engines


Legacy active/standby high availability (HA) is available for NSX Advanced Load Balancer Service
Engine (SE) redundancy. Legacy active/standby is useful for migrating from hardware appliance-
based solutions.

NSX Advanced Load Balancer also provides elastic HA, including active/active and N+M modes. In
legacy HA mode, only two NSX Advanced Load Balancer SEs are configured. By default, active
virtual services are compacted onto one SE. In this mode, one SE carries all the traffic for a virtual
service placed on it and is thus the active SE for that virtual service. The other SE in the pair is the
standby for that virtual service that does not carry traffic for it while the other SE is healthy.

Upon failure of an SE, the surviving SE takes over traffic for all virtual services that were previously
active on the failed SE, by continuing to handle traffic for virtual services that are already assigned
to it. As part of the takeover process, the survivor also takes ownership of all floating IP addresses
such as VIPs, SNAT-IP and so on. The compacted and distributed options determine whether all
active virtual service placements are concentrated onto one SE in a healthy pair or not.


NSX Advanced Load Balancer supports rolling upgrades by the NSX Advanced Load Balancer
Controller of SEs in a legacy HA configuration. Virtual services running on a legacy HA SE group
are not disrupted during a rolling upgrade. The below image depicts legacy HA active/standby,
displaying the compacted and distributed load options.

(Diagram: legacy HA active/standby placement of virtual services VS_1 through VS_7 on SE1 and
SE2, showing the compacted (default) and distributed load options.)

Health Monitoring
By default, health checks are sent by both SEs to the back-end servers. You can also disable
health monitoring by an SE for virtual services for which it is standing by. However, you can enable
health checks for each SE’s next-hop gateways.

Note Gateway health checking is supported for both SEs.

Floating IP Address
You can assign one or more floating IP addresses to an SE group configured for legacy HA. The
floating IP address is applicable when the SE interfaces are not in the same subnet as the VIP or
source NAT (SNAT) IP addresses that use the SE group. One floating interface IP is required for
each attached subnet per SE group when configuring legacy HA mode.

The network service is used to configure the floating IP. For more details, see the Network Service
Configuration guide.

Disabling a Legacy-Mode SE
Disabling a legacy-mode SE involves a combination of factors and is different from disabling SEs
running in either active/active or N+M mode. For more information, see Deactivating SE Members
of a Legacy HA SE Group.

Configuring Legacy HA
This sections explains how to configure legacy HA.


The following are the steps to configure a pair of SEs for legacy HA.

1 Create an SE group for the pair of SEs. Legacy HA requires each pair of active/standby SEs to
be in its own SE group.

2 Within each SE group,

a Add two SEs.

b Change the SE group's HA mode to legacy HA.

c If applicable, add a floating IP interface.

Using the Web Interface


This section explains the steps to configure legacy HA using the web interface.
Create an SE Group for each Active/Standby Pair of SEs
The following are the steps to use the web interface:

1 Navigate to Infrastructure > Cloud Resources > Service Engine Group. Click CREATE.

2 Specify a name for the SE group.

3 Select Legacy HA Active/Standby.

4 Specify the floating IP address (optional). Configuration of the floating IP address is not supported
via the UI in the current release. You need to configure it using the CLI via the Network Service of
the corresponding SE group. For more details, see the Network Service Configuration page.

5 By default, NSX Advanced Load Balancer compacts all virtual services into one SE within
the active/standby pair. To distribute active virtual services across the pair, within the Virtual
Service Placement Policy section of the SE group editor, select Distribute Load option.

Note You can specify the second floating IP address. Assign virtual services on an individual
basis to one or the other SE in the legacy pair by navigating to the Advanced tab in the virtual
service editor.

You can configure the second floating IP address using CLI via Network Service of the
corresponding SE-Group. For more details, see Network Service Configuration page.

6 By default, virtual services that fail cannot be migrated to the SE that replaces the failed
SE. Instead, the load remains compacted on the failover SE. Choose Auto-redistribute Load
option to make failback automatic.

7 The Virtual Services per Service Engine field sets the maximum number of virtual services that
may be placed. Legacy HA is non-elastic: for any given virtual service, exactly one
placement (onto the virtual service's active SE) is performed.

8 Finally, uncheck the Health Monitoring on Standby Service Engine(s) option so that health
monitoring is performed only by active SEs.

9 Click Save.


Add a Pair of SEs to the SE Group


The following are the steps to add a pair of SEs to the SE group.

1 Navigate to Infrastructure > Cloud Resources > Service Engine.

2 Select the cloud from Select Cloud drop-down list.

3 Click the edit icon next to one of the SEs.

4 Select the SE group from the Service Engine Group drop-down list.

Note
n If NSX Advanced Load Balancer is deployed in full access mode, the other SE is added to
the same group automatically.

n If NSX Advanced Load Balancer is installed in no access mode, select the second SE to
add it to the group.

Placing a Virtual Service on the SE Group


After configuring the SE group for legacy HA, virtual services can be placed on the group. The
following are the steps to place a virtual service on the SE group.

1 Navigate to Applications > Virtual Services.

a If you are creating a new virtual service, select CREATE VIRTUAL SERVICE > Advanced
Setup. Specify a name and the VIP address, and then click the Advanced tab.

b If you are editing an existing virtual service, click the edit icon in the row for the virtual
service. Click the Advanced tab.

2 In the Other Settings section, select the SE group from the Service Engine Group drop-down list.

3 Click Save.

Using the CLI


This example configures a pair of SEs (10.10.22.80 and 10.10.22.123) for Legacy HA.

The following commands create a new SE group for the pair of SEs:

: > configure serviceenginegroup NewGroup3


: serviceenginegroup> ha_mode ha_mode_legacy_active_standby
: serviceenginegroup> floating_intf_ip 10.10.1.100
: serviceenginegroup>
: serviceenginegroup> save

The following commands create another SE group for the pair of SEs, this time without a floating interface IP:

: > configure serviceenginegroup NewGroup2


: serviceenginegroup> ha_mode ha_mode_legacy_active_standby
: serviceenginegroup> save


The following commands add the SEs to the new SE group:

: > configure serviceengine


10.10.22.123 10.10.22.80
: > configure serviceengine 10.10.22.123
: serviceengine> se_group_ref NewGroup2
: serviceengine>

Note
n If NSX Advanced Load Balancer is deployed in full access mode then, these commands add
both SEs to the group.

n If NSX Advanced Load Balancer is installed in no access mode then, additional commands are
needed to add the second SE to the group.

: > configure serviceengine


10.10.22.123 10.10.22.80
: > configure serviceengine 10.10.22.80
: serviceengine> se_group_ref NewGroup2
: serviceengine> save

The following commands configure a virtual service vs1 with VIP 10.10.1.99 on the SE group:

: > configure virtualservice vs1


: virtualservice> address 10.10.1.99
: virtualservice> se_group_ref NewGroup2
: virtualservice> save

Additional Information
n Default Gateway (IP Routing on NSX Advanced Load Balancer SE)

n Enable a Virtual Service VIP on All Interfaces

n MAC Masquerade

n Network Service Configuration

Deactivating SE Members of a Legacy HA SE Group


An SE in a legacy HA SE group cannot be “deactivated” per se. Instead, a special workflow is
required. This section describes the workflow in detail.

Procedure

1 Use a switchover serviceengine name-of-SE command to switch all active instantiations of


virtual services to the other SE in the legacy group. The switchover command determines
which virtual services running on the SE are active. Any standby instantiations running on the
SE are unaffected.

Note Switchover functionality is currently unavailable in the NSX Advanced Load Balancer UI.


2 The virtual service switchovers occur asynchronously, so wait until all of them have
completed. Poll the SE event log to verify all virtual services are in standby mode.

Note The standby virtual services should not be in the SE_STATE_DISABLED state.

3 The only way to prevent the standby SE from taking on any active virtual services is to remove
it from the legacy SE group. This can be done by moving it into a “maintenance” SE group
created for the purpose.

4 At this point, the legacy HA SE group comprises just one active SE. To return to a state of high
availability, there are two options:

n Option 1: If the Controller has write access to the cloud, it will automatically spin up a
replacement SE.

n Option 2: Otherwise, the user must manually add one to the group.

Note
n To speed up Option 2, the user can spin up the replacement SE in the maintenance group
before removing the standby SE from the legacy HA SE group.

n Switchovers can be accomplished either through the REST API or CLI.

n The SE moves mentioned above can be accomplished through the REST API, CLI, or UI.

Virtual Service Scaling


This section covers the virtual service optimization topics.

The following are the types of virtual service optimization:

n Scaling out a virtual service to an additional NSX Advanced Load Balancer Service Engine.

n Scaling in a virtual service back to fewer SEs.

n Migrating a virtual service from one SE to another SE.

NSX Advanced Load Balancer supports scaling virtual services, which distributes the virtual
service workload across multiple SEs to provide increased capacity on demand. This extends the
throughput capacity of the virtual service and increases the level of high availability.

n Scaling out a virtual service distributes that virtual service to an additional SE. By default, NSX
Advanced Load Balancer supports a maximum of four SEs per virtual service when native load
balancing of SEs is in play. In BGP environments, the maximum can be increased to 64.

n Scaling in a virtual service reduces the number of SEs over which its load is distributed. A
virtual service always requires at least one SE.


(Diagram: a virtual service scaled out across a primary SE and two secondary SEs.)

Scaling NSX Advanced Load Balancer Virtual Services in VMware/OpenStack with Nuage
For VMware deployments and OpenStack deployments with Nuage, the scaled out traffic behaves
as follows:

n The virtual service IP is GARPed by the primary SE. All inbound traffic from clients will arrive at
this SE.

n The primary SE will handle a percentage of traffic as expected.

n At Layer 2, excess traffic is forwarded to the MAC address of the additional secondary Service
Engine(s).

n The scaled-out traffic to the secondary SEs is processed as normal. The SEs will change the
source IP address of the connection to their own IP address within the server network.

n The servers will respond to the source IP address of the traffic, which can be the primary or
one of the secondary SEs.

n Secondary SEs will forward the response traffic back to the client, bypassing the primary SE.

Scaling NSX Advanced Load Balancer Virtual Services in OpenStack with Neutron
For OpenStack deployments with native Neutron, the server response traffic sent to the secondary
SEs will be forwarded through the primary SE before returning to the original client.

NSX Advanced Load Balancer will issue an alert if the average CPU utilization of an SE exceeds
the designated limit during a five-minute polling period. The alerts for additional thresholds can
be configured for a virtual service. The process of scaling in or scaling out must be initiated by
an administrator. The CPU Threshold field of the SE Group > High Availability tab defines the
minimum and maximum CPU percentages.


Scaling NSX Advanced Load Balancer Virtual Services in Amazon Web Services (AWS)

For deployments in AWS, the scaled-out traffic behavior is as follows:

n The virtual service IP is GARPed by the primary SE. All inbound traffic from clients will arrive at
this SE.

n The primary SE will handle a percentage of traffic as expected.

n At Layer 2, excess traffic is forwarded to the MAC address of the additional secondary Service
Engine(s).

n The scaled-out traffic to the secondary SEs is processed as normal. The SEs will change the
source IP address of the connection to their own IP address within the server network.

n The servers will respond to the source IP address of the traffic, which could be the primary or
one of the secondary SEs.

n Secondary SEs will forward the response traffic directly back to the client, bypassing the primary
SE.

Scaling NSX Advanced Load Balancer Virtual Services in Microsoft Azure Deployments
NSX Advanced Load Balancer deployments in Microsoft Azure leverage the Azure Load Balancer
to provide an ECMP-like, layer 3 scale-out architecture. In this case, the traffic flow is as follows:

n The virtual service IP resides on the Azure Load Balancer. All inbound traffic from clients will
arrive at the Azure LB.

n The Azure load balancer has a backend pool consisting of the NSX Advanced Load Balancer
Service Engines.

n The Azure load balancer balances the traffic to one of the NSX Advanced Load Balancer
Service Engines associated with the virtual service IP.

n The traffic to the SEs is processed. The SEs will change the source IP address of the
connection to their own IP address within the server network.

n The servers will respond to the source IP address of the traffic, which can be the primary or
one of the secondary SEs.

n The SEs forward their response traffic directly back to the origin client, bypassing the Azure
load balancer.


Scaling Process
The process used to scale out depends on the level of access ('write' access versus 'read
access/no access') that the NSX Advanced Load Balancer has to the hypervisor orchestrator.
The following is the scaling process:

n If NSX Advanced Load Balancer is in 'write' access mode with write privileges to the
virtualization orchestrator, then the NSX Advanced Load Balancer will automatically create
additional Service Engines when required to share the load. If the Controller runs into an issue
while creating a new Service Engine, it will wait for a few minutes and then retry on a
different host. With native load balancing of SEs in play, the original Service Engine (primary
SE) ARPs for the virtual service IP address and processes as much traffic as possible. Some
percentage of traffic arriving here will be forwarded via Layer 2 to the additional (secondary)
Service Engines. When traffic decreases, the virtual service automatically scales back in to the
original primary Service Engine.

n If NSX Advanced Load Balancer is in 'read access or no access' mode, an administrator must
manually create and configure new Service Engines in the virtualization orchestrator. The
virtual service can be scaled out only when the Service Engine is both configured for the
network and connected to the NSX Advanced Load Balancer Controller.

Note Existing Service Engines with spare capacity and appropriate network settings may be used
for the scale out. Otherwise, scaling out may require either modifying existing Service Engines or
creating new Service Engines.

Manual Scaling of Virtual Services


This section describes the manual scaling of virtual services.

Virtual services inherit the minimum and maximum number of SEs on which they can be
instantiated from their SE group. Between the virtual service minimum and maximum values,
you can manually scale the virtual service out or in from the UI, CLI, or REST API. Also, within the
same SE group, a virtual service's placements can be migrated from its current SEs to other SEs.

Note
n A virtual service’s maximum instantiation count can be below the maximum number of SEs in
its group.

n For more information on the SE group settings min_scaleout_per_vs and max_scaleout_per_vs,
see Impact of Changes to Min-Max Scaleout Per Virtual Service.

Automatic Scaling of Virtual Services


This section describes the automatic scaling of virtual services.


The NSX Advanced Load Balancer supports the automatic rebalancing of virtual services across
the SE group based on the load levels experienced by each SE. Auto-rebalance can migrate or
scale in/out virtual services to rebalance the load and, in a write-access cloud, this can also result
in SEs being provisioned or de-provisioned if required.

Note Auto-rebalancing applies only if elastic HA has been selected for the SE group.

To configure auto-rebalancing for an SE group, see How to Configure Auto-rebalance using NSX
Advanced Load Balancer CLI.
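
For orientation, the following CLI sketch shows the general shape of an auto-rebalance
configuration on an SE group. The group name and threshold values are illustrative, and the
field names and enum values (auto_rebalance, auto_rebalance_interval, auto_rebalance_criteria)
should be verified against the CLI reference for your release:

[admin:controller]: > configure serviceenginegroup Default-Group
[admin:controller]: serviceenginegroup> auto_rebalance
[admin:controller]: serviceenginegroup> auto_rebalance_interval 300
[admin:controller]: serviceenginegroup> auto_rebalance_criteria SE_AUTO_REBALANCE_CPU
[admin:controller]: serviceenginegroup> save

With a configuration along these lines, the Controller would evaluate SE CPU load on the
configured interval and rebalance virtual service placements when the SE group's CPU
thresholds are crossed.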

Scaling Out
The following are the steps to manually scale a virtual service out when NSX Advanced Load
Balancer is operating in 'write access' mode:

1 Open the Virtual Service window for the virtual service that you prefer to scale.

2 Hover the cursor over the name of the virtual service to open the Virtual Service Quick Info
popup message.

3 Click the Scale Out button to scale the virtual service out to one additional SE per click, up to
four SEs.

4 If available, NSX Advanced Load Balancer will attempt to use an existing Service Engine. If
none is available or matches reachability criteria, it may create a new SE.

5 In some environments, NSX Advanced Load Balancer may prompt for additional information to
create a new Service Engine, such as additional IP addresses.

The prompt Currently Scaling Out displays the progress while the operation is taking place.

Note
n If virtual service scales out across multiple SEs, then each SE will independently perform server
health monitoring to the pool’s servers.

n Scaling out does not interrupt existing client connections.

Scaling out a virtual service can take from a few seconds to a few minutes. The scale-out time
depends on whether a suitable additional SE already exists or a new SE must be created, which in
turn depends on the infrastructure's network and disk speeds.
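
Scale out can also be triggered programmatically. The following is a minimal sketch of the
virtual service scale-out action through the REST API; the exact request schema (including the
vip_id field shown here) should be verified against the API guide for your release:

POST https://<controller-ip>/api/virtualservice/<vs-uuid>/scaleout
{
    "vip_id": "0"
}

The Controller then follows the same placement logic described above, reusing an existing SE
where possible.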

Scaling In
The following are the steps to manually scale in a virtual service in when NSX Advanced Load
Balancer is operating in 'write access' mode:

1 Open the Virtual Service Details page for the virtual service that you prefer to scale.

2 Hover the cursor over the name of the virtual service to open the Virtual Service Quick Info
popup message.

3 Click Scale In button, to open the Scale In popup window.


4 Select Service Engine to scale in. In other words, select the Service Engine that should be
removed from supporting this virtual service.

5 Scale the virtual service by one Service Engine per SE selection, to a minimum of one Service
Engine.

The prompt Currently scaling in displays the progress while the operation is taking place.

Note While scaling in, existing connections are given thirty seconds to complete. Remaining
connections to the SE are then closed and must restart.

Migrating
The Migrate option allows smooth migration of a virtual service from one Service Engine to
another. During this process, the primary SE scales out to the new SE and begins to send it new
connections. After thirty seconds, the old SE is deprovisioned from supporting the virtual
service.

Note Existing connections to the migration's source Service Engine are given thirty seconds to
complete before that SE is deprovisioned for the virtual service. Remaining connections
to the SE are then closed and must restart.
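
Migration can also be initiated through the REST API. The following sketch assumes the virtual
service migrate action and its from_se_ref/to_se_ref parameters; treat the field names as
illustrative and confirm them against the API guide before use:

POST https://<controller-ip>/api/virtualservice/<vs-uuid>/migrate
{
    "vip_id": "0",
    "from_se_ref": "/api/serviceengine/<source-se-uuid>",
    "to_se_ref": "/api/serviceengine/<destination-se-uuid>"
}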

Additional Information
This section provides additional information for specific infrastructures.

How Different Scaling Methods Work


ARP tables are maintained for a scaled-out virtual service configuration, which is relevant for VIP
scale-out scenarios only, that is, a single VIP across multiple Service Engines. In L2 scale-out
mode, the primary SE always responds to the ARP for the VIP. It then sends out a part of the
traffic to the secondary SEs. The return traffic can go directly from the secondary SEs (Direct
Secondary Return mode) or through the primary SE (Tunnel mode). In the case of Tunnel mode,
the MAC-VIP mapping is always unique; the VIP is always mapped to the primary SE.

In the Direct Secondary Return mode, the return traffic uses the VIP as the source IP and the
secondary SE's MAC as the source MAC. ARP inspection must be disabled in the network, that is,
the network layer should not inspect, block, or learn the MAC of the VIP from these packets.
Otherwise, the MAC-IP mapping will flap. This is the case in a few environments, such as
OpenStack and Cisco ACI, and Tunnel mode is required in these environments.

In the L3 scale-out with BGP, this is not applicable since the ARP is done for the next hop, which
is the upstream router, which in turn does the ECMP to individual SEs. The return traffic uses
respective SE’s MAC as source MAC and VIP as source IP. The router handles this as expected.


Throughput
The term throughput appears throughout the NSX Advanced Load Balancer web interface and
documentation. Every vendor has a slightly different definition of throughput, which may even
change depending on context.

Throughput
In NSX Advanced Load Balancer, throughput is defined based on the traffic paths through virtual
services, pools, and SEs:

Client --A--> Service Engine --B--> Server
Client <--C-- Service Engine <--D-- Server

n A - Client request to SE

n B - SE request to server

n C - SE response to client

n D - Server response to SE

Throughput Calculations
NSX Advanced Load Balancer calculates throughput as follows:

n For virtual service traffic, throughput is calculated as: A + C

n For pool traffic, throughput is calculated as: B + D

n For SE traffic, throughput is calculated as: A + B + C + D
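
For example, using hypothetical numbers: if, over a measurement interval, A = 100 Mbps,
B = 95 Mbps, C = 400 Mbps, and D = 390 Mbps, then the virtual service throughput is
A + C = 500 Mbps, the pool throughput is B + D = 485 Mbps, and the SE throughput is
A + B + C + D = 985 Mbps, plus any management and health-monitoring traffic handled by the SE.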

Why Throughput Numbers May Differ


An SE’s throughput value may be higher than the combined throughput values of the virtual
service and pool. This can occur due to any of the following conditions:

n Management traffic load while communicating with Controllers.

n Health monitoring traffic to the pool servers.

n Multiple virtual services and pools hosted by the SE. (The throughput number includes all
virtual services and pools hosted by the SE.)

Throughput numbers may differ between a virtual service and its pool due to network or
application headers, SSL offload, compression, caching, multiplexing, or many other features.


Virtual Service Policies


Policies allow advanced customization of network layer security, HTTP security, HTTP requests,
and HTTP responses. A policy can be used to control security, client request attributes, or server
response attributes. Policies are comprised of matches and actions, similar to an if-then logic. If
the logic evaluates to true, it matches the policy, and therefore the NSX Advanced Load Balancer
performs the corresponding action.

Policies are comprised of one or more rules, which are match-action pairs. A rule can contain
many matches, or have many actions. Multiple policies can be configured for a virtual service.
Policies can alter the default behavior of the virtual service, or if the matching criteria are not met,
can stay benign for a particular connection, request, or response.

Policies are not shared. They are defined on a per-virtual-service basis and intended to be simple
point-and-click functionality.

For more advanced capabilities, see DataScripts.

Policies are configured within the Policies tab of the virtual service editor.

Prioritizing Policies
Policies can be used to recreate similar functionality found elsewhere within the NSX Advanced
Load Balancer. For instance, a policy can be configured to generate an HTTP redirect from HTTP
to HTTPS. The same functionality can be configured within the Secure-HTTP application profile.
Since a policy is more specific than a general purpose profile, the policy takes precedence.

If the profile is set to redirect HTTP to HTTPS via port 443, and the policy is set to redirect HTTP to
HTTPS on port 8443, the client will be sent to port 8443. (See Execution Priority for more on this
topic.)
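
As an illustration of how such a rule is expressed through the REST API, the following fragment
sketches an HTTP request policy rule that redirects plain-HTTP requests to HTTPS on port 8443.
The structure follows the HTTPPolicySet object, but the field names and enum values shown here
are abbreviated and illustrative, and the rule name is arbitrary; verify them against the API
reference before use:

{
    "http_request_policy": {
        "rules": [
            {
                "name": "redirect-to-8443",
                "index": 1,
                "enable": true,
                "match": {
                    "protocol": {
                        "match_criteria": "IS_IN",
                        "protocols": "HTTP"
                    }
                },
                "redirect_action": {
                    "protocol": "HTTPS",
                    "port": 8443,
                    "status_code": "HTTP_REDIRECT_STATUS_CODE_302"
                }
            }
        ]
    }
}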

A virtual service can have multiple policies defined, one for each type. Once defined, policies are
implemented in the following order of priority:

1 Network security policy

2 HTTP security policy

3 HTTP request policy

4 HTTP response policy

5 DataScripts policy

6 Access policy

For instance, a network policy that is set to discard traffic takes precedence over an HTTP request
policy set to modify a header. Since the connection is discarded, the HTTP request policy will not
execute. Each policy type can contain multiple rules, which in turn can be prioritized to process in
a specified order. This is done by moving the policies up or down in the ordered list within the NSX
Advanced Load Balancer UI.


Match - Action
All policies are made up of match and action rules, which are similar in concept to if-then logic.
Administrators set match criteria for connections, requests, or responses to the virtual service.
The NSX Advanced Load Balancer executes the configured actions for all traffic that meets the
match criteria.

A single match with multiple entries is treated as an "or" operation. For instance, if a single match
type has the criteria "marketing", "sales", and "engineering" set, then the match is true if the path
contains "marketing", or "sales", or "engineering".

If a rule has multiple matches configured, all match types must be true for the action to be
executed. For example, if a rule contains both a path match and an HTTP method match that lists
GET and HEAD, both match types must be true. Within each of these two match types, only one of
the entries must be true for that match type to be true; a client request using either GET or HEAD
satisfies the method match. Multiple rules can be configured for each policy, and they can be
configured to occur in a specific order. If no match is applied, the condition is automatically met
and the actions are executed for each connection as per the policy type.

Matches against HTTP content are case-insensitive. This is true for header names and values,
cookies, host names, paths, and queries. For HTTP policies, the NSX Advanced Load Balancer
compares Uniform Resource Identifier (URI) matches against the decoded HTTP URI. Many
browsers and web servers encode human-readable format content differently. For instance, a
browser’s URI encoding might translate the dollar character “$” to “%24”. The Service Engine (SE)
translates the “%24” back to “$” before evaluating it against the match criteria.

Create a Policy
The virtual service editor defines policies consisting of one or more rules that control the flow of
requests through the virtual service.

The following are the steps to create a policy:

1 Policy Type: First select the policy type to add by selecting one of the following categories:

a HTTP Security: HTTP security policies perform defined actions such as allow/deny,
redirect to HTTPS, or respond with a static page.

b Network Security: It is configured to explicitly allow or block traffic of user-defined types in
the network.

c HTTP Request: HTTP request policies allow manipulation of HTTP requests, content
switching and also allow customized actions based on client HTTP requests.

d HTTP Response: HTTP response policies evaluate responses from the server, and can be
used to modify the server’s response headers. HTTP response policies are most often used
in conjunction with HTTP request policies to provide an Apache Mod_ProxyPass capability
for rewriting a website’s name between the client and the server.

e DataScripts: DataScripts execute when various events are triggered by data plane traffic.


f Access: Access policies can be provided for SAML, PingAccess, JWT, or LDAP access.

2 Create Rule: Create a new rule by clicking the 'plus' icon and specify the following information
for the new rule:

a Enable or Disable: By default, the new rule is enabled. The green slider icon can be
toggled to change to gray, to disable the rule and make it have no effect on the traffic.

b Rule Name: Specify a unique name for the rule in the Rule Name field, or leave the default
system generated name in place.

c Logging: Select Logging checkbox if you want logging enabled for this rule. When
enabled, a log is generated for any connection or request that matches the rule’s match
criteria. If a virtual service is already set to log all connections or requests, this checkbox
will not create a duplicate log. Client logs are flagged with an entry for the policy type and
rule name that matched. When viewing the policy’s logs within the logs tab of the virtual
service, the logs will be part of the significant logs option unless the specific connection or
request is an error, in which case it can be displayed under the default non-significant logs
filter.

d Match: Add one or more matches using the Add New Match drop-down menu. The match
options vary depending on the context defined by the policy type to be created. If a rule is
not given a match, all connections or requests are considered true or matched.

e Action: Add one or more actions from the drop-down list to be taken when the matches
are true. The available options vary depending on the type of rule to be created.

f Save Rule: Click the Save Rule button to save the new rule.

3 Ordering: Rules are enforced in the order in which they appear in the list. For instance, if
you add a rule to close a connection based on a client IP address, followed by a rule that
redirects an HTTP request from that IP address to a secure HTTP (HTTPS) connection, the
NSX Advanced Load Balancer closes the connection without forwarding the request. Alter the
order in which rules are applied by clicking the up and down arrow icons until the rules are in
the desired order.

Network Security
The following table lists both the available network security match criteria and the configurable
actions that can occur when a match is made.

Note This feature is supported for IPv6 in NSX Advanced Load Balancer.

Match criteria:

n Client IP: Client IP address or a group of client addresses.
  n Use a "-" to specify a range: 10.0.0.0-10.1.255.255
  n Use a "/" to specify a netmask: 10.0.0.0/24

n Service Port: The ports on which the virtual service is listening.

n IP Reputation: The IP reputation service to identify or categorize IP addresses based on the
threats associated with them.

Actions:

n Logging: Selecting the checkbox causes the NSX Advanced Load Balancer to log when an action
has been invoked.

n Allow or Deny: Explicitly allow or deny any matched traffic. Denied traffic is issued a reset
(RST), unless the system is under a volumetric or denial of service attack, in which case the
connection can be silently discarded.

n Rate Limit: Restrict clients from opening greater than the specified number of connections per
second in the Maximum Rate field. Clients that exceed this number will have their excessive
connection attempts silently discarded. If burst size is enabled, clients can burst above the
maximum rate if they have not recently been opening connections. This feature can be applied
to TCP or UDP. All clients that match the match criteria are treated as one bucket. For instance,
if no match is defined, any and all IP addresses will increment the maximum rate counter, and
throttling occurs for all new connecting clients. To enable per-client throttling, see the Advanced
tab of the virtual service; the documentation for that page also contains a more detailed
description of connection throttling.
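
For reference, the following fragment sketches how a rule of this kind might look in the
NetworkSecurityPolicy API object, with a deny action for a client subnet. The field names and the
action enum are sketched from the API object model and should be confirmed against the API
reference; the rule and policy names are arbitrary:

{
    "name": "block-lab-subnet",
    "rules": [
        {
            "name": "deny-10-0-0-0-24",
            "index": 1,
            "enable": true,
            "log": true,
            "action": "NETWORK_SECURITY_POLICY_ACTION_TYPE_DENY",
            "match": {
                "client_ip": {
                    "match_criteria": "IS_IN",
                    "prefixes": [
                        {"ip_addr": {"addr": "10.0.0.0", "type": "V4"}, "mask": 24}
                    ]
                }
            }
        }
    ]
}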

HTTP Security Policy


The following table lists both the available HTTP security match criteria and the configurable
actions that can occur when a match is made.

Match criteria:

n Client IP: Client IP address or a group of client addresses.
  n Use a "-" to specify a range: 10.0.0.0-10.1.255.255
  n Use a "/" to specify a netmask: 10.0.0.0/24
  n Use a pre-defined IP group, which can be based on geo-location.

n Service Port: The ports on which the virtual service is listening. In SNI virtual hosting and
enhanced virtual hosting, the service port match criterion is matched against the parent virtual
service for policies under a child virtual service.

n Protocol Type: HTTP or HTTPS.
  Example: https://www.avinetworks.com/marketing/index.html?a=1&b=2

n HTTP Method: The method used by the client request. The match is true if any one of the
methods that an administrator specifies is true. The available options are GET, HEAD, POST, PUT,
DELETE, OPTIONS, TRACE, CONNECT, PATCH, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE,
LOCK, and UNLOCK.

n HTTP Version: True if the client version is 0.9, 1.0, or 1.1.

n Path: The path or a group of paths. Paths do not need to begin with a forward slash ( / ). For
comparison purposes, the NSX Advanced Load Balancer automatically omits any initial slash
specified in the match field.
  Example: https://www.avinetworks.com/marketing/index.html?a=1&b=2

n Query: A query or a group of queries. Do not add the leading '?' or '&' characters to a match.
  Example: https://www.avinetworks.com/marketing/index.html?a=1&b=2

n Headers: True if a header exists, or if it exists and contains a specified value.

n Cookie: True if a cookie exists, or if it exists and contains a specified value.


n Host Header: The host header of the request.
  Example: https://www.avinetworks.com/marketing/index.html?a=1&b=2

n IP Reputation: Select the IP reputation type.

n Bot Management: Select the option to configure the bot classification result.

n Source IP: Select the source IP address.

Actions:

n Logging: Selecting the checkbox causes the NSX Advanced Load Balancer to log when an action
has been invoked.

n Allow: Allows matched requests to continue on to further policies or to the destination pool
servers.

n Close Connection: Matched requests cause the NSX Advanced Load Balancer to close the TCP
connection that received the request through a FIN. Many browsers open multiple connections
that are not closed unless requests sent over those connections also trigger a close connection
action.

n Enable ICAP: ICAP is a lightweight HTTP-like protocol used to transport HTTP messages to
third-party services.

n Redirect To HTTPS: Respond to the request with a temporary redirect to the desired port for
SSL.

n Send Response: The NSX Advanced Load Balancer can serve an HTTP response using HTTP
status code 200 (success), 403 (unauthorized), or 404 (file not found). A default page is rendered
by the browser for each of these status codes. Instead, you can also upload a custom HTML file.
This file can have links to images or other files, but only the initial HTML file is stored and served
through the Send Response action.

  Note You can upload any type of file as a local response. It is recommended to configure a
  local file using the UI. To update the local file using the API, encode the file in base64 out of
  band and use the encoded format in the API.

n Rate Limit: Specify the maximum number of new connections, HTTP requests, bandwidth in
Mbps, and/or concurrent open connections from/for/by clients.

Note The HTTP security policy option is available on the NSX Advanced Load Balancer UI. To
create or edit an existing HTTP security policy, navigate to Applications > Virtual Services, select
the desired virtual service, and select the HTTP Security option.

HTTP Request
HTTP request policies allow manipulation of HTTP requests. These requests can be modified
before they are forwarded to the server, used as a basis for content switching, or discarded.

See HTTP Request Policy for more details.

HTTP Response
HTTP response policies evaluate responses from the server, and can be used to modify the
response headers of the server. These policies are generally used to rewrite redirects or in
conjunction with HTTP request policies, to provide a client to server name-rewriting capability
similar to Apache’s ProxyPass.


See HTTP Response Policy for more details.

DataScripts
DataScripts are executed when various events are triggered by data plane traffic. A single rule can
execute different code during different events.

See DataScript Events for more details.

Access
Access policy can be provided for SAML, PingAccess, JWT or LDAP access.

SAML

If you select SAML option, specify the following details:

n SSO Policy: Specify the SSO policy attached to the virtual service.

n Entity ID: Specify the SAML entity ID for this node.

n SSO URL: Specify the SAML single signon URL to be programmed on the IDP.

n Session Cookie Name: Specify the HTTP cookie name for the authenticated session.

n Session Cookie Timeout: Specify the cookie timeout in minutes.

n SSL Key: Select the SSL key from the drop-down list.

PingAccess

If you select PingAccess option, specify the following details:

n SSO Policy: Specify the SSO policy attached to the virtual service.

Create SSO policy by clicking Create SSO Policy button. Specify the following details:

n Name — Specify the name of the SSO policy.

n Type — Select the SSO policy type from the drop-down list. The following are the options
available in the drop-down list.

n JWT

n LDAP

n OAUTH/OIDC

n PingAccess

n SAML

n Default Auth Profile: Specify the auth profile to use for validating users.

n Authentication Rules: Click ADD button to add the authentication details.

JWT


If you select JWT option, specify the following details:

n SSO Policy: Select the SSO Policy attached to the virtual service.

n Audience: Specify the unique audience to identify a resource server.

n Token Location: Select the token location as Authorization Header or URL Query.

LDAP

If you select LDAP option, specify the following details:

n SSO Policy: Specify the SSO policy attached to the virtual service.

n Basic Realm: When a request to authenticate is presented to a client, the basic realm indicates
to the client which realm they are accessing.

n Connections Per Server: Specify the number of concurrent connections to LDAP server by a
single basic auth LDAP process.

n Cache Size: Specify the size of LDAP basic auth credentials cache used on the dataplane.

n Bind Timeout: Specify LDAP basic auth default bind timeout enforced on connections to LDAP
server.

n Request Timeout: Specify LDAP basic auth default login or group search request timeout
enforced on connections to LDAP server.

n Connect Timeout: Specify LDAP basic auth default connection timeout enforced on
connections to LDAP server.

n Reconnect Timeout: Specify LDAP basic auth default reconnect timeout enforced on
connections to LDAP server.

n Servers Failover Only: Check this box to indicate that LDAP basic auth uses multiple LDAP
servers in the event of a fail-over only.

After specifying all the necessary details, click Save button.

Policy Tokens
In more complex scenarios, an administrator can capture data from one location and apply it to
another location. The NSX Advanced Load Balancer supports the use of variables and tokens,
which can be used for this purpose.

Variables can be used to insert dynamic data into the modify header actions of HTTP request
and HTTP response policies. Two variables, namely $client_ip and $vs_port, are supported. For
instance, a new header called origin_ip can be added to an HTTP request, with its value set to
$client_ip, to insert the source address of the client as the value of the header.


Tokens can be used to find and reorder specific parts of the HTTP hostname or path. For
instance, it is possible to rewrite the original request http://support.avinetworks.com/docs/
index.htm to http://www.avinetworks.com/support/docs/index.htm. Tokens can be used
for HTTP host and HTTP path. The tokens are derived from the original URL. Token delimiter in
host header is “.” and in the URL path it is “/”.

Example: Example 1

Original request URL: support.avinetworks.com/docs/index.htm

Tokens: host[0] = support, host[1] = avinetworks, host[2] = com, path[0] = docs, path[1] = index.htm

In the example above, the client request is broken down into HTTP host and HTTP path. Each
section of the host and path are further broken down according to the “.” and “/” delimiters
for host and path. A host or path token can be used in an action to rewrite a header, a
host, or a path. In the example, a redirect of http://www.avinetworks.com/support/docs/
index.htm would send requests to docs.avinetworks.com/support/docs/index.htm

In addition to using the host[0], host[1], host[2] convention, a colon can be used to denote that
the system must continue to the end of the host or path. For instance, host[1:] means use
avinetworks followed by any further host fields; the result will be avinetworks.com. This is
especially useful in a path, which may contain many levels. Tokens can also specify a range, such
as path[2:5]. Host and path tokens can also be abbreviated as 'h' and 'p', such as h[1:] and p[2].

In the rewrite URL, redirect, and rewrite location header actions, the host component of the URL
can be specified in terms of tokens, the tokens can be constants strings or tokens from existing
host and path component of the URL.

Example: Example 2

New URL: region.avinetworks.com/france/paris/index.htm

Request URL: paris.france.avinetworks.com/region/index.htm

Tokens: host[0] = paris, host[1] = france, host[2] = avinetworks, host[3] = com, path[0] = region, path[1] = index.htm

New Host: path[0].host[2:]

New Path: /host[1]/host[0]/path[1]

Example: Example 3

Request URL: www1.avinetworks.com/sales/foo/index.htm?auth=true

Tokens: host[0] = www1, host[1] = avinetworks, host[2] = com, path[0] = sales, path[1] = foo, path[2] = index.htm, query = auth=true

New Host: www.host[1:]

New Path: /host[0]/path[0:]

Query: Keep Query enabled

New URL: www.avinetworks.com/www1/sales/foo/index.htm?auth=true

n If the host header contains an IPv4 address and not a FQDN, and the rewrite URL or redirect
action refers to a host token, for instance, host[0], host[1,2], and so on, the rule action is
skipped and the next rule is evaluated.

n If the host header or path contains fewer tokens than are referenced in the action, the rule
action is skipped. For instance, the host name www.avinetworks.com has only three tokens
(host[0] = www, host[1] = avinetworks, host[2] = com); if the action refers to host[4], the rule
action is skipped.

n If the location header in the HTTP response contains an IPv4 address and the response policy
action is rewrite location header which refers to host tokens, the rule action is skipped.

n The rule engine does not recognize octal or hexadecimal IPv4 addresses in the host address.
That is, the rule action is not skipped if the host header has an octal or hexadecimal IPv4 address
and the action references a host token such as host[1], and so on.

n If an HTTP request arrives with multiple host headers, the first host header will be used.

n Per RFC, HTTP 1.1 requests must have a non-empty host header. In case of encountering an
empty header, a 400 ‘Bad Request’ HTTP response is returned by the NSX Advanced Load
Balancer.

n The HTTP processing is performed against decoded URIs.

Regex Matching in Policies


The NSX Advanced Load Balancer supports the use of regex captures as tokens that can be
used in policy actions. These regex captures are obtained from the regex pattern strings that are
matched against the URI in the match rule configured in the policy.

The following are the steps to configure regex matching and tokens:

n Create a string group object with the list of regex patterns you want to use for URI matching.
Note the use of regex captures (the string pattern within the parentheses) which are needed to
generate the regex tokens.

n Navigate to Templates > Groups > String Groups. Click CREATE. Specify the string name and
type.

n Under Policies, create a matching rule with the Criteria field selected as Regex pattern
matches and attach the necessary string group(s).

n You can now use regex captures as tokens in the corresponding action rule. On the GUI, you
can use SG_RE[] to access these tokens. These tokens are obtained from the first string in the
string group list that matched with the request Path.


Example:

Regex String: ^/hello/(.*)/world/(.*)$

Request Path: /hello/foo/world/bar

Tokens: SG_RE[0] = foo, SG_RE[1] = bar

New Path (action): /SG_RE[0]/SG_RE[1]

New Path (result): /foo/bar
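
The string group holding the regex patterns can also be created from the CLI. The following is a
minimal sketch using the pattern from the example above; the group name is arbitrary, and the
type and kv field names should be verified against the CLI reference for your release:

[admin:controller]: > configure stringgroup regex-paths
[admin:controller]: stringgroup> type SG_TYPE_STRING
[admin:controller]: stringgroup> kv
New object being created
[admin:controller]: stringgroup:kv> key "^/hello/(.*)/world/(.*)$"
[admin:controller]: stringgroup:kv> save
[admin:controller]: stringgroup> save

The saved string group can then be attached to the match rule as described in the steps above.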

Controller Interface and Route Management


This section discusses the NSX Advanced Load Balancer Controller interface and route
management.

The NSX Advanced Load Balancer Controller has a single interface used for various control plane
related tasks such as:

n Operator access to the Controller via CLI, UI, and API.

n Communication between the Controller and the Service Engines.

n Communication between the Controller and third-party entities for automation, observability,
etc.

n Communication between the Controller and third-party Hardware Security Modules (HSMs).

Starting with the NSX Advanced Load Balancer version 21.1.3, an additional interface is available
on the Controller to allow the ability to isolate the communication for some of the above entities.

Additionally, any static routes to be added to the Controller interfaces should now leverage the
cluster configuration instead of /etc/network/interfaces subsystem. These configurations will
be persisted across the Controller reboot and upgrade.

Note This feature is supported only on the Controllers deployed in vCenter and enables the use
of the additional interface only for HSMs.

Classification
The following labels are available for classification:

n MGMT — This signifies general management communication for the Controller access, as well as
the Controller initiating communication, for instance, logging, third party API calls, and so on.

n SE_SECURE_CHANNEL — This label is used to classify secure communication between the


Service Engine and the Controller.


n HSM — This is used to classify communication between the Controller and an HSM device.

With this classification, the traffic can be moved from the default, main interface to the additional
interface, if configured.

Note
n MGMT and SE_SECURE_CHANNEL can only be performed by the primary (eth0) interface.

n HSM can be moved to the additional interface.

Operating Model
By default (prior to 21.1.3), the Controller is provisioned with one interface when being deployed in
vCenter (during installation).

The following are the steps to add an additional interface:

1 Shut down the Controller virtual machine and add the interface through vCenter UI.

2 On powering ON the Controller virtual machine, NSX Advanced Load Balancer will recognize
the additional interface, and additional configuration through the NSX Advanced Load
Balancer CLI can be performed.

Note Hotplug of interfaces (addition to the virtual machine without powering off the virtual
machine) is not supported.

For the interface to be recognized within the NSX Advanced Load Balancer Controller software
and for further classification via labels to be performed, the NSX Advanced Load Balancer
'cluster' configuration model should be used.

Configuration for a Single Node Controller


The following are the configuration steps:

1 Shut down the Controller and add the new interface via the vCenter.

2 Power on the Controller. The new interface will be visible as eth1, while the primary interface
will always be visible as eth0 in the Cluster configuration:

[admin:controller]: > show cluster


+-----------------+----------------------------------------------+
| Field | Value |
+-----------------+----------------------------------------------+
| uuid | cluster-83e1ebf5-2c63-4690-9aaf-b66e7a7b5f08 |
| name | cluster-0-1 |
| nodes[1] | |
| name | 10.102.64.201 |
| ip | 10.102.64.201 |
| vm_uuid | 00505681cb45 |
| vm_mor | vm-16431 |
| vm_hostname | node1.controller.local |
| interfaces[1] | |
| if_name | eth0 |


| mac_address | 00:50:56:81:cb:45 |
| mode | STATIC |
| ip | 10.102.64.201/22 |
| gateway | 10.102.67.254 |
| labels[1] | MGMT |
| labels[2] | SE_SECURE_CHANNEL |
| labels[3] | HSM |
| interfaces[2] | |
| if_name | eth1 |
| mac_address | 00:50:56:81:c0:89 |
+-----------------+----------------------------------------------+

In the above, the second interface (eth1) has been discovered.

Configure the mode and IP details on the additional interface:

[admin:controller]: > configure cluster


[admin:controller]: cluster> nodes index 1

[admin:controller]: cluster:nodes> interfaces index 2


[admin:controller]: cluster:nodes:interfaces> mode static
[admin:controller]: cluster:nodes:interfaces> ip 100.64.218.90/24
[admin:controller]: cluster:nodes:interfaces> labels HSM
[admin:controller]: cluster:nodes:interfaces> save
[admin:controller]: cluster:nodes> interfaces index 1
[admin:controller]: cluster:nodes:interfaces> no labels HSM
[admin:controller]: cluster:nodes:interfaces> save

In the above,

n For the second interface (index 2), the IP and label have been added.

n The label HSM has been removed from the primary interface (index 1).

Note Nodes that are already configured with additional interfaces and routes can be added
to a cluster.

For more information on configuring cluster, see API - Configuring the NSX Advanced Load
Balancer Controller Cluster.

Unconfiguring the Additional Interface for a Single Node Controller


The following are the steps to revert the configuration to use the primary interface:

1 Remove the configuration (mode, IP, labels) from the second interface (eth1).

2 Add the HSM label to the primary interface (eth0).

[admin:controller]: > configure cluster


[admin:controller]: cluster> nodes index 1
[admin:controller]: cluster:nodes> interfaces index 2
[admin:controller]: cluster:nodes:interfaces> no mode
[admin:controller]: cluster:nodes:interfaces> no ip
[admin:controller]: cluster:nodes:interfaces> no labels HSM
[admin:controller]: cluster:nodes:interfaces> save


[admin:controller]: cluster:nodes> interfaces index 1


[admin:controller]: cluster:nodes:interfaces> labels HSM
[admin:controller]: cluster:nodes:interfaces> save
[admin:controller]: cluster:nodes> save
[admin:controller]: cluster> save

Configuring a Static Route


A static route can be configured for the primary and secondary through the Cluster configuration.

Starting with NSX Advanced Load Balancer version 21.1.3, you should not edit the /etc/network/
interfaces file. All configurations (IP, static route) should be done via the cluster configuration.

[admin:controller]: > configure cluster


[admin:controller]: cluster> nodes index 1
[admin:controller]: cluster:nodes> static_routes
New object being created
[admin:controller]: cluster:nodes:static_routes> prefix 1.1.1.0/24
[admin:controller]: cluster:nodes:static_routes> next_hop 100.64.218.20
[admin:controller]: cluster:nodes:static_routes> route_id 1
[admin:controller]: cluster:nodes:static_routes> if_name eth1
[admin:controller]: cluster:nodes:static_routes> save
[admin:controller]: cluster:nodes> save
[admin:controller]: cluster> where
Tenant: admin
Cloud: Default-Cloud
+--------------------+----------------------------------------------+
| Field | Value |
+--------------------+----------------------------------------------+
| uuid | cluster-83e1ebf5-2c63-4690-9aaf-b66e7a7b5f08 |
| name | cluster-0-1 |
| nodes[1] | |
| name | 10.102.64.201 |
| ip | 10.102.64.201 |
| vm_uuid | 00505681cb45 |
| vm_mor | vm-16431 |
| vm_hostname | node1.controller.local |
| interfaces[1] | |
| if_name | eth0 |
| mac_address | 00:50:56:81:cb:45 |
| mode | STATIC |
| ip | 10.102.64.201/22 |
| gateway | 10.102.67.254 |
| labels[1] | MGMT |
| labels[2] | SE_SECURE_CHANNEL |
| interfaces[2] | |
| if_name | eth1 |
| mac_address | 00:50:56:81:c0:89 |
| mode | STATIC |
| ip | 100.64.218.90/24 |
| labels[1] | HSM |
| static_routes[1] | |
| prefix | 1.1.1.0/24 |
| next_hop | 100.64.218.20 |


| if_name | eth1 |
| route_id | 1 |
+--------------------+----------------------------------------------+
[admin:controller]: cluster> save

Configuration for a 3-node Cluster


In case of a 3-node Cluster, the following steps are required:

n For the discovery of the secondary interface, the Controller nodes need to be stand-alone, i.e.,
not part of a cluster. This is a one-time operation for NSX Advanced Load Balancer to discover
the additional interface.

n Once the secondary interfaces have been discovered, the Leader node can be used to form
the cluster, as detailed in Deploying an NSX Advanced Load Balancer Controller Cluster.

n After the cluster is fully formed, the secondary interface configuration for all the nodes can be
performed.

[admin:controller]: cluster> nodes index 1


[admin:controller]: cluster:nodes> interfaces index 2
[admin:controller]: cluster:nodes:interfaces> mode static
[admin:controller]: cluster:nodes:interfaces> ip 100.64.218.90/24
[admin:controller]: cluster:nodes:interfaces> labels HSM
[admin:controller]: cluster:nodes:interfaces> save
[admin:controller]: cluster:nodes> interfaces index 1
[admin:controller]: cluster:nodes:interfaces> no labels HSM
[admin:controller]: cluster:nodes:interfaces> save
[admin:controller]: cluster:nodes> save
[admin:controller]: cluster> nodes index 2
[admin:controller]: cluster:nodes> interfaces index 2
[admin:controller]: cluster:nodes:interfaces> mode static
[admin:controller]: cluster:nodes:interfaces> ip 100.64.218.100/24
[admin:controller]: cluster:nodes:interfaces> labels HSM
[admin:controller]: cluster:nodes:interfaces> save
[admin:controller]: cluster:nodes> interfaces index 1
[admin:controller]: cluster:nodes:interfaces> no labels HSM
[admin:controller]: cluster:nodes:interfaces> save
[admin:controller]: cluster:nodes> save
[admin:controller]: cluster> nodes index 3
[admin:controller]: cluster:nodes> interfaces index 2
[admin:controller]: cluster:nodes:interfaces> mode static
[admin:controller]: cluster:nodes:interfaces> ip 100.64.218.110/24
[admin:controller]: cluster:nodes:interfaces> labels HSM
[admin:controller]: cluster:nodes:interfaces> save
[admin:controller]: cluster:nodes> interfaces index 1
admin:controller]: cluster:nodes:interfaces> no labels HSM
[admin:controller]: cluster:nodes:interfaces> save


[admin:controller]: cluster:nodes> save


[admin:controller]: cluster> save

Note
n There is no requirement to log in to the node for the interface discovery to succeed. The only
requirement is for the interface to be in a connected state in the virtual machine and for the
Controller to have been powered on.

n The cluster formation and the secondary interface configuration should be performed as
separate steps.

Updating the Configuration Following the Controller IP Address Change
The management IP addresses of each Controller node must be static. This applies to single-node
deployments and three-node deployments.

The cluster configuration and runtime configuration contain the IP information for the cluster.
If the IP address of a leader or follower node changes for any reason (for instance, due to DHCP),
the change_ip.py script must be run to update the cluster configuration; the cluster will not
function properly until the cluster configuration is updated. This applies to single-node
deployments as well as cluster deployments.

The script is located at /opt/avi/python/bin/cluster_mgr/change_ip.py.

Note
n The change IP script only changes the NSX Advanced Load Balancer cluster configuration.
It does not change the IP address of the host or the virtual machine on which Controller
services are running. For instance, it does not update the /etc/network/interfaces file in
a VMware-hosted Controller. You should change the IP address for the virtual machine in the
vApp properties in VMware.

n Special consideration is required when changing the IP addresses of Controllers in a
bare-metal configuration. For more information, see How to change Controller IP addresses in a
bare-metal environment.

Script Options
Caution Before running the script, check to make sure new IPs are working on all nodes
and are reachable across nodes. If one or more IPs are not accessible, the script makes a best-
effort update, though there is no guarantee that the cluster will be back in sync upon restoring
connectivity.


The script can be run on the Controller node whose management IP address changed, or on
another Controller node within the same cluster.

The script must be run on one of the nodes that is in the cluster. If the script is run on a node that
is not in the cluster, the script fails.

-i ipaddr: Specifies the new IP address of the node on which the script is run.

-o ipaddr: Specifies the IP address of another node in the cluster.

-m subnet-mask: If the subnet also changed, use this option to specify the new subnet. Specify the
mask in the 255.255.255.0 format.

-g gateway-ipaddr: If the default gateway also changed, use this option to specify the new
gateway.

Note The -m and -g options apply to all IP addresses in the cluster.

Updating IP Information for a Single-node Deployment


To update the Controller IP information for a single-node deployment, use a command string such
as the following:

change_ip.py -i ipaddr

This command is run on node 10.10.25.81. Since no other nodes are specified, this is assumed to
be a single-node cluster (just this Controller).

username@avi:~$ change_ip.py -i 10.10.25.81

In the following example, the node’s default gateway also has changed.

username@avi:~$ change_ip.py -i 10.10.25.81 -g 10.10.10.1

Updating IP Information for a Controller Cluster


Note Before executing change_ip.py, ensure all new IPs are reachable from one another over
SSH ports (22 for regular, 5098 for containers).

To update Controller IP information for a cluster, use a command string such as:

change_ip.py -i ipaddr -o ipaddr -o ipaddr

Example:

username@avi:~$ change_ip.py -i 10.10.25.81 -o 10.10.25.82 -o 10.10.25.83

This command is run on node 10.10.25.81, which is a member of a 3-node cluster that also contains
nodes 10.10.25.82 and 10.10.25.83.


The script can be run on any of the nodes in the cluster. The following example is run on node
10.10.25.82:

username@avi:~$ change_ip.py -i 10.10.25.82 -o 10.10.25.81 -o 10.10.25.83

Note After executing change_ip.py, in case of failure, use recover.py to convert nodes to
single nodes and create the 3-node cluster again. For more information, see Recover a Non-
Operational Controller Cluster.

To verify if the system is functioning properly, go to the Controller Nodes page and ensure that all
nodes are CLUSTER_ACTIVE.

Steps to Change Controller IPs on Nutanix Cluster


The following are the steps to change the Controller IPs on Nutanix cluster:

1 Change the IP address of each Controller node within the cluster to the new IP by manually
editing the network scripts on the host and changing the interface configuration.

2 For instance, the /etc/network/interfaces file on the Controller virtual machine should be
modified as follows (if using a static IP):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address <ipv4 address>
netmask 24
gateway <ipv4 gw>

3 Ensure that the new Controller IP addresses are reachable in the network from the other
Controller nodes.

4 Run the /opt/avi/python/bin/cluster_mgr/change_ip.py script on the Controller to reflect the
above IP address change.

5 Reboot the Controller.

For a 3 node cluster deployment, you need to change the IPs on all the Controllers and then run
the command as shown below from any Controller node to update the Controller IP information
for a cluster.

username@avi:~$ change_ip.py -i ipaddr -o ipaddr -o ipaddr

where,

n -i ipaddr: Specifies the new IP address of the node on which the script is run.

n -o ipaddr: Specifies the IP address of another node in the cluster.


n -m subnet-mask: If the subnet is also changed, use this option to specify the new subnet.
Specify the mask in the following format: 255.255.255.0

n -g gateway-ipaddr: If the default gateway is also changed, use this option to specify the new
gateway.

Note The Controller cluster should come back up with the new IPs.

Considerations
The following considerations should be noted:

n The interface names, eth0, eth1, and so on, and discovered MAC addresses are static, and
cannot be modified.

n The primary (eth0) interface cannot be modified, apart from the labels.

n The default gateway cannot be configured for the additional interfaces.

n All labels need to be part of some interface, and a label cannot be repeated on more than
one interface.

n For the additional interface, only Static IP mode is supported. DHCP is not supported.

n The Access Controls are applied only to the primary interface. It is recommended to continue
to use external firewall settings to restrict access, for instance, inbound SSH to the additional
interface.

n You should not edit the /etc/network/interfaces file. All configurations, such as IP and static
route, should be via the cluster configuration.

n The secondary interfaces should remain in connected state within the virtual machine.
Disconnecting them may lead to the interface being removed, if the virtual machine is
rebooted.

Auto Scaling
This topic explains the various autoscaling capabilities provided by NSX Advanced Load Balancer
and their integration with public cloud ecosystems, such as Amazon Web Services (AWS) and
Microsoft Azure.

Autoscaling in Public Clouds


NSX Advanced Load Balancer is an elastic fabric architecture. Various resources, such as the SEs
and application servers, can be scaled up and down on-demand, based on load and capacity
requirements.

For public cloud ecosystems that provide elastic autoscaling capabilities for workloads, NSX
Advanced Load Balancer uses these capabilities and can even manage their behavior based on
the metrics collected by NSX Advanced Load Balancer.


NSX Advanced Load Balancer provides the following scaling functionality:

n Scaling the virtual service to more (or fewer) SEs, so that traffic can be serviced by more (or
fewer) load-balancing instances as the NSX Advanced Load Balancer SEs reach (underutilize)
capacity.

n Scaling the application server pool to more (or fewer) application instances, so that traffic can
be serviced by a right-sized back end pool.

Both types of scaling can be performed automatically through pre-set NSX Advanced Load
Balancer policies, based on load and capacity measurements done using NSX Advanced Load
Balancer.

Ecosystem Integration
NSX Advanced Load Balancer supports the above-mentioned autoscaling features in all
ecosystems. This section discusses integration considerations related to the below public clouds:

n Amazon Web Services

n Microsoft Azure

Virtual Service Scaling


Each SE has a maximum capacity for processing traffic, typically measured in terms of traffic
throughput or SSL transactions per second. The SE capacity is a function of various parameters,
such as the SE VM size (number of vCPUs or memory), the type of traffic, and the ecosystem in
which the SE is functioning.

In the default configuration, a virtual service is placed on a single SE. However, if the SE is not
sufficient to handle traffic for the virtual service, the virtual service can be scaled out to additional
SEs. In this case, more than one SE handles traffic for the virtual service.

Scaling out or scaling in virtual services can be performed manually or automatically.

In the case of automated scaling of virtual service placements, one of the following SE parameters
can be used to configure thresholds beyond which a virtual service should be scaled out to a new
SE, or scaled back into fewer SEs:

n CPU utilization of the SE

n Bandwidth, in Mbps, being served by the SE

n Connections per second (CPS) being served by the SE

n Packets per second (PPS)

For more information on virtual service scaling, see Virtual Service Scaling.

Application Server Scaling


Along with the virtual service load balancing, it is important to ensure enough capacity is available
at the application instance tier to handle traffic loads.


As public cloud infrastructure is charged based on usage or uptime, it is important to have enough
capacity based on usage, along with the ability to scale resources on-demand.

Public clouds provide autoscaling features. The templates for autoscaling servers can be used to
spawn virtual machines and configure them. The scale-out or scale-in can either be done manually
or based on certain load conditions.

The relevant features are:

n Amazon Web Services: Amazon EC2 Auto Scaling

n Microsoft Azure: Virtual Machine Scale Set

NSX Advanced Load Balancer Integration with Public Cloud


The following are the two variations of NSX Advanced Load Balancer support for autoscaling
groups:

n Autoscale decision managed by public cloud.

n Autoscale decision managed by NSX Advanced Load Balancer.

Autoscale Decision Managed by Public Cloud


In this method of autoscaling, the appropriate autoscaling group is added to the server pool on an
NSX Advanced Load Balancer Controller. The Controller tracks the autoscaling group. As virtual
machine instances are added or removed from the group, NSX Advanced Load Balancer adds or
removes the virtual machine from its pool member list.

In this manner, NSX Advanced Load Balancer distributes traffic requests to the requisite virtual
machine instances.

The scaling in or scaling out of the pool is controlled based on policies associated with the
autoscale group, and the Controller does not influence this operation.

Autoscale Decision Managed by NSX Advanced Load Balancer


In this method of autoscaling, NSX Advanced Load Balancer takes over the decision to scale the
virtual machine instances. Also, the public cloud autoscale group is added to the NSX Advanced
Load Balancer server pool.

An autoscale policy is created on the Controller and is associated with the pool. This autoscale
policy contains parameters and thresholds for triggering the scale-out and scale-in event, based
on a wide range of metrics and alerts that NSX Advanced Load Balancer supports.

When the threshold is crossed, the Controller communicates with the public cloud to initiate a
scale-out or a scale-in operation and also manages the pool membership.

A key advantage of this method is the ability to use a much richer set of metrics for performing
scaling decisions, as compared to the metrics available with the public cloud.

Server Garbage Collection Parameter


Whenever NSX Advanced Load Balancer decides on scale-in, any server which is already down
will be selected for scale-in.

Also, the down servers can be garbage collected by NSX Advanced Load Balancer after a
configured delay. To configure the delay, use the delay_for_server_garbage_collection option
under the autoscale_policy configuration.
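
For instance, the following is a minimal CLI sketch of setting this delay on a server autoscale
policy. The policy name and the delay value are illustrative; check the object help for the unit of
the delay:

[admin:controller]: > configure serverautoscalepolicy my-autoscale-policy
[admin:controller]: serverautoscalepolicy> delay_for_server_garbage_collection 30
[admin:controller]: serverautoscalepolicy> save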

AZ Aware Autoscaling
While scaling in, NSX Advanced Load Balancer autoscale will ensure the balance of servers across
different AWS availability zones. For instance, if there are four servers in a pool (two servers each
in AZ1 and AZ2), and scale-in happens for two servers, you will be left with two servers, one in
each AZ.

Note This feature is available only for AWS.

Configuring Autoscale Integration with NSX Advanced Load Balancer


For more information on configuring autoscale groups with public clouds, see the following
sections:

n Amazon Web Services (autoscaling managed by public cloud): NSX Advanced Load Balancer
Integration with AWS Auto Scaling Groups

n Amazon Web Services (autoscale managed by NSX Advanced Load Balancer): Configuration
and Metrics Collection on NSX Advanced Load Balancer for AWS Server Autoscaling

n Microsoft Azure: Virtual Machine Scale Set integration with NSX Advanced Load Balancer

Configuration and Metrics Collection on NSX Advanced Load Balancer for AWS Server Autoscaling
Cloud server autoscaling is a new feature in NSX Advanced Load Balancer platform that enables
infrastructure admins, operators, and application developers to use the NSX Advanced Load
Balancer server auto scaling solution in concert with cloud auto scaling groups.

Auto scaling groups in the AWS are referred to as external autoscaling groups in NSX Advanced
Load Balancer because they are an external entity to NSX Advanced Load Balancer. With this
feature, more fine-grained scaling policies can be applied, based on NSX Advanced Load Balancer
Controller collected metrics and AWS CloudWatch metrics.

Metrics used for Server Autoscaling


Metrics collected from SE, and AWS cloud enable NSX Advanced Load Balancer to take action
regarding scale-out or scale-in events. Metrics for services hosted on SE are collected by NSX
Advanced Load Balancer SEs. Server instance (virtual machine) level metrics are collected from
AWS CloudWatch.

The following metrics are collected by Service Engines:

n Level 4 metrics


n Level 7 metrics

n Service Engine metrics

n Insight metrics

For the complete list of metrics collected by NSX Advanced Load Balancer, see Metric List.

Metrics collected by AWS

Infrastructure-related metrics for server instances such as CPU usage, network usage, and so on,
are fetched from CloudWatch by NSX Advanced Load Balancer. The metrics collected from AWS
are as follows:

n vm_stats.avg_cpu_usage

n vm_stats.avg_disk_read

n vm_stats.avg_disk_write

n vm_stats.avg_disk_io

n vm_stats.avg_net_usage
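As a quick sanity check that these metrics are being collected for a pool, the Controller metrics API can be queried. The following is a hedged sketch only: the Controller address, credentials, and pool UUID are placeholders, and the query parameters shown (metric_id, step, limit) are the commonly used ones; consult the metrics API documentation for the exact options in your release.

# Placeholder Controller IP, credentials, and pool UUID
curl -k -u admin:'<password>' \
  "https://10.10.10.10/api/analytics/metrics/pool/pool-0123abcd?metric_id=vm_stats.avg_cpu_usage&step=300&limit=1"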

Configuring Autoscale Policy and Autoscale Launch


The metrics mentioned in the previous section are fetched for a pool when the pool is enabled for autoscale
orchestration. For server autoscaling to work, the pool must be enabled with an autoscale policy and an
autoscale launch configuration.

Autoscale Policy
An autoscale policy is a set of rules to configure and trigger an alert using the above-mentioned
metrics. To create or choose an existing autoscale policy, navigate to Applications > Pools and
click the edit icon for the desired pool. Select the AutoScale Policy option in the Settings tab to
add a new autoscale policy or to use an existing one.

Autoscale Launch Configuration


An autoscale launch configuration must be associated with the pool for server autoscaling to work.

Setting the value for use-external-asg to true instructs the Autoscale Manager to start
orchestrating scale-in or scale-out activities for the associated pool. The value for the use-
external-asg flag is set to true for the default autoscale launch configuration (default-
autoscalelaunchconfig).

To enable the checkbox for Use External ASG, navigate to Applications > Pools, and click the
edit icon for the desired pool. Select the desired name from the the drop-down list of Autoscale
Launch Config field in the Settings tab.
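The following is a minimal CLI sketch of creating a launch configuration with this flag enabled. The object name demo-asg-launchconfig is a placeholder; as noted above, the default-autoscalelaunchconfig already has the flag set, so a new object is only needed if a separate configuration is desired.

# Sketch only: placeholder object name
[admin:controller]: > configure autoscalelaunchconfig demo-asg-launchconfig
[admin:controller]: autoscalelaunchconfig> use_external_asg
[admin:controller]: autoscalelaunchconfig> save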


AZ Aware Autoscaling
While scaling in, NSX Advanced Load Balancer autoscale will ensure the balance of servers across
different AWS availability zones. For instance, if there are four servers in a pool (two servers each
in AZ1 and AZ2), and scale-in happens for two servers, you will be left with two servers (one in
each AZ).

Note This feature is available only for AWS.

Configuring Pool for Server Autoscaling


The following are the steps to configure pool for server autoscaling:

1 Navigate to Applications > Pools. Click Create Pool and select the required cloud.

2 Select Create Autoscale Policy option from Autoscale Policy drop-down list.

3 On the New AutoScale Policy page, provide the desired name, and the minimum and maximum
instances for the pool. You can also provide Server Garbage Collection Delay details.

a Check Intelligent (Machine Learning) and Use Predicted Load checkboxes.

4 Select the required alerts for server autoscaling from the Alerts drop-down list in the Scale-
Out section. Also specify the Cooldown Period, Adjustment Step, and Intelligent Margin.

5 Select the required alerts for server autoscaling from the Alerts drop-down list in the Scale-In
section. Also specify the Cooldown Period, Adjustment Step, and Intelligent Margin.

a Cooldown Period: During this period, no new scale-out event is triggered, to allow the
previous scale-out to complete.

b Adjustment Step: Maximum number of servers to scale in simultaneously. This number
is chosen such that the target number of servers is always more than or equal to the
min_size.

c Intelligent Margin: Minimum extra capacity as a percentage of load used by the intelligent
scheme. Scale-out is triggered when the available capacity is less than this margin,
whereas scale-in is triggered when the available capacity is more than this margin.

6 After specifying the necessary details, click Save.

7 Navigate to Applications > Pools and select the drop-down menu for AutoScale Launch
Config to create a new autoscale launch configuration. Specify the name for autoscale launch.

8 Click Save.

9 Create a virtual service for the configured pool with an autoscaling group.
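The following is a hedged CLI sketch of the equivalent pool configuration, shown only to illustrate how the policy and launch configuration come together on the pool. The pool, policy, and launch configuration names are placeholders, and the reference field names (autoscale_policy_ref, autoscale_launch_config_ref) are assumptions based on the pool object model; verify them with the CLI help for your release.

# Sketch only: placeholder names; field names are assumptions
[admin:controller]: > configure pool demo-asg-pool
[admin:controller]: pool> autoscale_policy_ref demo-autoscale-policy
[admin:controller]: pool> autoscale_launch_config_ref demo-asg-launchconfig
[admin:controller]: pool> save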

Autoscale Activities and Events


Autoscale alerts are generated and pool members scale-out when configured metrics exceed the
threshold values. Following are the events generated by the pool for a scale-in and scale-out
activity.


The following are the events generated for scale-out activity:

1 SERVER_AUTOSCALE_OUT: An autoscale scale-out alert is generated by the alerts manager.

2 SERVER_AUTOSCALE_OUT_TRIGGERED: The autoscale manager triggers scale-out activity on the pool.

3 CONFIG_UPDATE: The pool is updated with the new member.

4 AWS_ASG_NOTIFICATION_INSTANCE_ADDED: An instance is added to the AWS auto scaling group.

5 SERVER_AUTOSCALE_OUT_COMPLETE: The autoscale manager triggers this event when the scale-out activity is complete. Indicates scale-out is successfully complete.

6 SERVER_UP: Indicates the newly added server is ready to serve traffic.

Navigate to Templates > Events to check alerts generated for scale-out or scale-in events.

The following are the events generated for scale-in activity:

1 SERVER_AUTOSCALE_IN: An autoscale scale-in alert is generated by the alerts manager.

2 SERVER_AUTOSCALE_IN_TRIGGERED: The autoscale manager triggers the scale-in activity on the pool.

3 CONFIG_UPDATE: The pool was updated and the scaled-in pool member is deleted.

4 AWS_ASG_NOTIFICATION_INSTANCE_REMOVED: Indicates an instance has been removed from the AWS auto scaling group.

5 SERVER_AUTOSCALE_IN_COMPLETE: The autoscale manager triggers this event when the scale-in activity is complete. Indicates scale-in activity is successfully completed.

Note Burstable Performance Instance types are not supported for CPU utilization based
autoscaling. Burstable Performance Instance types are AWS instance types with names
starting with T2, such as T2.micro, T2.large, and so on.

NSX Advanced Load Balancer Integration with AWS Auto Scaling Groups
This section describes how the NSX Advanced Load Balancer platform integrates with AWS auto
scaling groups.

An NSX Advanced Load Balancer pool is a group of back end servers having similar
characteristics, or serving or hosting similar applications. In the NSX Advanced Load Balancer-
AWS integration, a pool is scaled in or out to reflect actions taken by AWS on the corresponding
AWS auto scaling group. These actions are governed by AWS preconfigured policies and criteria.

Scaling out is adding one or more instances to the auto scaling group and scaling in is removing
one or more instances from the auto scaling group.

For more information about auto scaling groups on AWS, see Auto Scaling groups.


Background
NSX Advanced Load Balancer supports AWS auto scaling groups for configuring pools for a
virtual service.

NSX Advanced Load Balancer AWS cloud connector periodically polls AWS auto scaling group
membership information and updates the corresponding pool server membership if the changes
are required.

For instance, if a new server (instance) is added to an AWS auto scaling group being used as an
NSX Advanced Load Balancer pool, NSX Advanced Load Balancer will automatically update the
pool membership to include the newly provisioned server. Conversely, upon deletion of a server
(instance) from the AWS auto scaling group, NSX Advanced Load Balancer will delete this server
from its pool membership. This enables seamless, elastic and automated management of back end
server resources without any operator intervention or configuration updates.

Note
n NSX Advanced Load Balancer supports SNS and SQS features for auto scaling groups. If SNS
and SQS are not in use, the default polling method is used. For more information, see Using
the SNS-SQS feature for Auto Scaling Groups.

n ASG with launch templates is supported.

Prerequisites
n The AWS user or IAM role needs read access to auto scaling groups and the instances therein.
For more information, see IAM Role Setup for Installation into AWS.

n The auto scaling group is already configured on AWS.

Configuring using the NSX Advanced Load Balancer UI


The following are the configuration steps using the UI:

1 Log in to the UI. Navigate to Applications > Pools. Click Create Pool. Select the cloud and
specify the pool name and accept the defaults for the remaining field options.

2 Click Next to view server options. Select the Auto Scaling Groups option from Select Servers.

3 Select auto scaling group instances already configured on AWS for that specific cloud from the
Auto Scaling Group drop-down list.

4 After selecting an instance or server from the list, NSX Advanced Load Balancer will fetch the
instance or server information from AWS.

5 Click the Save option, the UI will return to the Pools page to display the Auto Scaling group
members.
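The following is a hedged CLI sketch of the equivalent configuration. The pool name and the AWS auto scaling group name (web-asg-1) are placeholders, and the field name external_autoscale_groups is an assumption based on the pool object model; verify it with the CLI help for your release.

# Sketch only: placeholder names; field name is an assumption
[admin:controller]: > configure pool demo-asg-pool
[admin:controller]: pool> external_autoscale_groups web-asg-1
[admin:controller]: pool> save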


Using the SNS-SQS feature for Auto Scaling Groups


NSX Advanced Load Balancer can make use of the Simple Notification Service (SNS) and Simple
Queue Service (SQS) features of AWS. SNS is a push notification service used to update pool
member information of AWS auto scaling groups. SQS is a messaging queue service. For more
information about SNS and SQS, see the following links:

n Amazon Simple Notification Service

n Amazon Simple Queue Service

By default, the flag for using SNS or SQS option is set to false on the NSX Advanced Load
Balancer Controller. In the default polling method, the Controller polls every ten minutes to
synchronize information regarding ASG membership changes. If SNS and SQS features are not
enabled, set the polling interval to one minute. This value can be configured between 60 seconds
(1 minute) to 1800 seconds (30 minutes). When using the SNS-SQS feature, increase the polling
interval value from 1 minute to 10 minutes (recommended), as the cloud connector notifies the
Controller instantly when ASG membership changes.

Configuring SNS-SQS on NSX Advanced Load Balancer using CLI


Change the value of use_sns_sqs. Check asg_poll_interval value. It should be set to ten
minutes or more, based on the requirement. If the SNS and SQS features are not in use, change
the polling interval to one minute.

Log in to the Controller’s shell prompt and follow the steps as shown below.

[admin:10-1-1-1]: cloud> aws_configuration


[admin:10-1-1-1]: cloud:aws_configuration> asg_poll_interval 600
Overwriting the previously entered value for asg_poll_interval
[admin:10-1-1-1]: cloud:aws_configuration> use_sns_sqs
Overwriting the previously entered value for use_sns_sqs
+---------------------+-------------------+
| Field | Value |
+---------------------+-------------------+
| access_key_id | sensitive |
| secret_access_key | sensitive |
| region | us-west-2 |
| vpc | AVI-MISC-West-VPC |
| vpc_id | vpc-c8d6b5af |
| zones[1] | |
| availability_zone | us-west-2c |
| mgmt_network_name | 2C-nw-9 |
| route53_integration | False |
| free_elasticips | True |
| use_iam_roles | False |
| ttl | 60 sec |
| wildcard_access | True |
| use_sns_sqs | True |
| asg_poll_interval | 600 sec |
+---------------------+-------------------+


Set use_sns_sqs to false and change asg_poll_interval to 60 seconds when SNS/SQS is not
in use.

[admin:10-1-1-1]: cloud:aws_configuration> no use_sns_sqs


+---------------------+-------------------+
| Field | Value |
+---------------------+-------------------+
| access_key_id | sensitive |
| secret_access_key | sensitive |
| region | us-west-2 |
| vpc | AVI-MISC-West-VPC |
| vpc_id | vpc-c8d6b5af |
| zones[1] | |
| availability_zone | us-west-2c |
| mgmt_network_name | 2C-nw-9 |
| route53_integration | False |
| free_elasticips | True |
| use_iam_roles | False |
| ttl | 60 sec |
| wildcard_access | True |
| use_sns_sqs | False |
| asg_poll_interval | 600 sec |
+---------------------+-------------------+
[admin:10-1-1-1]: cloud:aws_configuration>
[admin:10-1-1-1]: cloud:aws_configuration> asg_poll_interval 60
Overwriting the previously entered value for asg_poll_interval
+---------------------+-------------------+
| Field | Value |
+---------------------+-------------------+
| access_key_id | sensitive |
| secret_access_key | sensitive |
| region | us-west-2 |
| vpc | AVI-MISC-West-VPC |
| vpc_id | vpc-c8d6b5af |
| zones[1] | |
| availability_zone | us-west-2c |
| mgmt_network_name | 2C-nw-9 |
| route53_integration | False |
| free_elasticips | True |
| use_iam_roles | False |
| ttl | 60 sec |
| wildcard_access | True |
| use_sns_sqs | False |
| asg_poll_interval | 60 sec |
+---------------------+-------------------+

Configuring on AWS
AWS users should have all the required privileges to perform various actions required to enable
and use SNS-SQS services. For the list of privileges provided, check the following JSON files:

n avicontroller-sns-policy.json

n avicontroller-sqs-policy.json


n avicontroller-asg-notification-policy.json

Follow the steps mentioned in IAM Role Setup for Installation into AWS to associate these policies to
AWS users.

Alerts
NSX Advanced Load Balancer synchronizes information of Auto Scaling groups configured on
AWS. If any of the Auto Scaling groups are deleted on the integrated AWS, a corresponding alert,
and an event is generated on NSX Advanced Load Balancer. For more information on this, see
Alerts when an Auto Scaling Group is deleted on AWS.

Alerts when an Auto Scaling Group is deleted on AWS


NSX Advanced Load Balancer synchronizes information of auto scaling groups configured on
AWS. If any of the auto scaling groups are deleted on the integrated AWS, a corresponding alert,
and an event is generated on NSX Advanced Load Balancer.

Alerts can be checked on the NSX Advanced Load Balancer user interface under the Pools tab. To
check the alert, navigate to Applications > Pools, and select the desired pool.

Navigate to the Alerts tab to check the alerts for the Auto Scaling group deletion.

The alert generated on the UI has the following information:

n Deleted ASG

n Pool information: Pool name and Pool ID

n Associated cloud: Cloud ID

Note
n If multiple auto scaling groups are deleted on AWS, there will be only one alert for the specific
pool (of which ASG is part).

n If the deleted auto scaling group is part of multiple pools, NSX Advanced Load Balancer
generates alerts for each pool.

n While reconfiguring the same auto scaling group on NSX Advanced Load Balancer, the
information regarding associated members is available to reuse.

NSX Advanced Load Balancer SE Behavior on Gateway Monitor Failure
This section describes the issues that can occur and the NSX Advanced Load Balancer behavior in
response to these issues when the gateway monitor is enabled using the UI or CLI.


Legacy high availability (HA) includes support for gateway monitoring. NSX Advanced Load
Balancer SEs taking on either active or standby roles for a virtual service in a legacy HA
deployment can perform gateway monitoring. By default, gateway monitoring is off until an IP
address to monitor is furnished for the cloud. When an IP address is furnished, all legacy HA SE
groups within the cloud perform gateway monitoring.

Note

n Gateway monitoring for legacy HA is not supported for IPv6.

n If the external GW monitor fails, then the SEs are removed for any placement.

n This is applicable for a single monitor in one VRF or other monitors that are succeeding.

Issue: Gateway is not reachable from the active NSX Advanced Load Balancer SE but is reachable from the standby SE.
Description: If only the standby NSX Advanced Load Balancer SE for a virtual service can reach the gateway, the active SE becomes standby, and the standby SE becomes active. When the gateway reachability is restored on the standby SE, it stays in the standby state.

Issue: Gateway is not reachable from the standby NSX Advanced Load Balancer SE but is reachable from the active SE.
Description: The active NSX Advanced Load Balancer SE for the virtual service remains active, and the standby NSX Advanced Load Balancer SE remains in the standby state. When gateway reachability is restored on the standby SE, the SE stays in the standby state.

Issue: The active NSX Advanced Load Balancer SE loses gateway connectivity after the standby SE has lost gateway connectivity.
Description: The active NSX Advanced Load Balancer SE for the virtual service remains active, and the standby SE remains in the standby state.

Issue: Both the active NSX Advanced Load Balancer SE and the standby SE simultaneously lose gateway reachability.
Description: The active NSX Advanced Load Balancer SE for the virtual service remains active and the standby SE remains in the standby state.

Note Even when the gateway monitor shows that gateway connectivity is down, the NSX Advanced Load Balancer SEs remain operationally up.

Issue: With multiple gateway monitors, at least one gateway is not reachable from the active NSX Advanced Load Balancer SE, but all gateways are reachable from the standby SE.
Description: This results in switching all the virtual services on the current active SE to the standby SE.

DNS
4
The NSX Advanced Load Balancer DNS virtual service is a generic DNS infrastructure that primarily
implements the following functionality:

n DNS Load Balancing

n Hosting Manual or Static DNS Entries

n Virtual Service IP Address DNS Hosting

n Hosting GSLB Service DNS Entries

NSX Advanced Load Balancer DNS as a Virtual Service


NSX Advanced Load Balancer DNS runs a virtual service with System-DNS application profile type
and a network profile using per-packet load balancing.

A DNS service is hosted on a Service Engine, as shown in the figure below. If a matching entry is
found, the DNS virtual service responds to the DNS query. If a matching entry is not found and
pool members are configured, the DNS virtual service forwards the request to the backend DNS
pool servers.

The DNS virtual service supports A/A, A/S, and N+M HA modes, with health monitoring support for DNS
virtual services configured in active/standby mode.

NSX Advanced Load Balancer can be configured with more than one DNS virtual service.


[Figure: An SE-local DNS virtual service responding to DNS queries and load balancing unresolved queries to backend DNS servers]

A NSX Advanced Load Balancer DNS virtual service acts as an authoritative DNS server for one or
more subdomains (zones), and all analytics and client logs are supported.

NSX Advanced Load Balancer Deployment Scenario


Authoritative Name Server for a Subdomain (Zone)

In this scenario, the corporate name server delegates one or more subdomains to the NSX
Advanced Load Balancer DNS service, which in turn acts as an authoritative DNS server for
them. In the example shown below, avi.acme.com and gslb.acme.com are the subdomains.
Typically, the corporate name server will have a NS record pointing to the NSX Advanced Load
Balancer DNS service (10.100.10.50). Client queries for these subdomains are sent directly to NSX
Advanced Load Balancer, whereas all DNS requests outside of acme.com are instead sent to the
external “.com” name server.


[Figure: The corporate DNS server holds NS records pointing avi.acme.com and gslb.acme.com to the NSX Advanced Load Balancer DNS service at 10.100.10.50. Queries for those supported zones (avi.acme.com, gslb.acme.com) are sent to the NSX Advanced Load Balancer DNS service; all DNS requests outside acme.com are sent to the external ".com" name server.]

Primary Name Server for a Domain

In this scenario, NSX Advanced Load Balancer DNS acts as the primary name server for a domain,
with pass-through to the corporate name server, and responds to any zone it has been configured to
support. DNS queries that do not match NSX Advanced Load Balancer DNS records pass through
(proxy) to corporate DNS servers via a virtual service pool created for that purpose. If members of
that pool receive DNS requests outside the corporate domain (acme.com in this case), they send
them to their external ".com" name server.


[Figure: The NSX Advanced Load Balancer DNS virtual service supports all zones ("*" - Any) and uses the corporate DNS server as a pool member. DNS requests that do not match NSX Advanced Load Balancer DNS service records pass through (proxy) to corporate DNS; requests outside acme.com are sent to the external ".com" name server.]

This chapter includes the following topics:

n NSX Advanced Load Balancer DNS Feature

n DNS Load Balancing

n DNS Policy

n Integration with External DNS Providers

n Custom IPAM Profile on NSX Advanced Load Balancer

n Support for Authoritative Domains, NXDOMAIN Responses, NS and SOA Records

n Adding Custom A Records to an NSX Advanced Load Balancer DNS Virtual Service

n Clickjacking Protection

n DNS Queries Over TCP

n Adding DNS Records Independent of Virtual Service State

n DNS TXT and MX Record

n Add Servers to Pool by DNS


NSX Advanced Load Balancer DNS Feature


This section explains the NSX Advanced Load Balancer DNS features.

Visibility and Analytics


n Navigate to Applications > Virtual Services and click on the name of a virtual service
configured for DNS. Click on the pencil icon to view analytics, logs, health, security, events,
alerts, and DNS records details.

n The Analytics tab displays the required metrics.

n The Logs tab provides detailed information about DNS queries from clients, including FQDN,
query-type, significant errors, responses such as IP addresses, CNAME, SRV, etc.

n Non-significant logs should be enabled with caution, since a large number of DNS queries
typically hit a DNS service, and this would result in too many log entries.

n Categorization of non-significant logs is also very important. If certain errors are typical in a
certain deployment, these errors should be excluded from significant logs.

n The DNS Records tab is unique to this kind of a virtual service.

n DNS health monitors in Health tab can be configured to monitor the health of DNS servers that
are configured as DNS service pool members. For complete information, refer to DNS Health
Monitor section.

Note
n Detailed analytics is not available for TCP.

n DNS requests can be filtered by sub-domain name.

n NO-DATA may occasionally appear when a metric tile is selected. This typically implies “Not
Applicable”. For instance, a GSLB service name may not be applicable for the DNS proxy or a
static entry.

Additional Features
The following are the additional features:

n Domain filtering drops requests for any domains that are not explicitly configured on the DNS
service (Default setting is to allow all domains).

n The time-to-live (TTL) can be customized (Default is 30 seconds).

n Network security policy can be based on client (source) IP and port.

n With full TCP proxy, client spoofing is prevented for TCP DNS queries. SYN flood attacks are
mitigated.

n You can respond to failed DNS requests by returning a DNS error code or dropping the
packets.
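Several of these options live in the DNS application profile. The following is a hedged CLI sketch only: the profile name is a placeholder, and the submode and field names (dns_service_profile, ttl, domain_names, error_response) as well as the enum value shown are assumptions that may differ by release; confirm them with show_schema before use.

# Sketch only: placeholder profile name; submode/field names are assumptions
[admin:controller]: > configure applicationprofile demo-dns-appprofile
[admin:controller]: applicationprofile> type APPLICATION_PROFILE_TYPE_DNS
[admin:controller]: applicationprofile> dns_service_profile
[admin:controller]: applicationprofile:dns_service_profile> ttl 60
[admin:controller]: applicationprofile:dns_service_profile> domain_names avi.acme.com
[admin:controller]: applicationprofile:dns_service_profile> error_response DNS_ERROR_RESPONSE_ERROR
[admin:controller]: applicationprofile:dns_service_profile> save
[admin:controller]: applicationprofile> save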


DNS Load Balancing


NSX Advanced Load Balancer Service Engines proxy DNS requests to a backend pool of DNS
servers. A virtual service with a System-DNS (or similar) application profile is defined as usual.
However, a pool of backend servers loaded with DNS software packages must be assigned.

Hosting Manual or Static DNS Entries


NSX Advanced Load Balancer DNS can host manual or static DNS entries. For a given FQDN, you
can configure an A, AAAA, SRV, CNAME, or NS record to be returned.

NSX Advanced Load Balancer supports text record (TXT) record and mail exchanger (MX) record.

n TXT record: This is used to store text-based information for the configured domain.

n MX record: This is used in mail delivery based on the configured domain.

Virtual Service IP Address DNS Hosting


NSX Advanced Load Balancer DNS can host the names and IP addresses of the virtual services
configured in NSX Advanced Load Balancer. NSX Advanced Load Balancer serves as DNS
provider for the hosted virtual services. For complete configuration details, see Chapter 5 Service
Discovery using NSX Advanced Load Balancer as IPAM and DNS Provider.

Hosting GSLB Service DNS Entries


The NSX Advanced Load Balancer DNS virtual service can host GSLB service DNS entries, and
automatically update its responses based on the application service health, service load and
proximity of clients to sites implementing the application service. NSX Advanced Load Balancer
GSLB automatically populates these DNS entries. For more information on NSX Advanced Load
Balancer GSLB, refer to:

n NSX Advanced Load Balancer GSLB Overview

n NSX Advanced Load Balancer GSLB Site Configuration and Operations

n NSX Advanced Load Balancer GSLB Service Health Monitoring

DNS for NSX Advanced Load Balancer hosted Virtual Services


NSX Advanced Load Balancer-SE-hosted DNS virtual services translate the FQDNs of NSX
Advanced Load Balancer-hosted virtual services into IP addresses. This configuration does not
require pool assignment, as the translation is completely done within the SE VMs.

n Navigate to Administration > Settings and select DNS Service.

n Under the DNS Virtual Services section, click the drop-down list to either choose a pre-
defined DNS virtual service or create a virtual service.

For more information on configuration steps for DNS virtual services, see Configuring a Local
DNS Virtual Service on All Active Sites that host DNS.


DNS for GSLB


For GSLB, the DNS is not defined on the DNS virtual service itself; it is configured as part of a
GSLB site object. As part of the GSLB site configuration, one or more pre-existing DNS virtual
services are designated to serve in that role.

The following are the steps to configure DNS for GSLB:

n Navigate to Infrastructure > GSLB > Site Configuration.

n Click Add New Site button in the Site Configuration tab.

n Specify relevant information for all fields in the editor. Enable the checkbox for the Active
Member option and click Save and Set DNS Virtual Services.

n Select from one or more DNS virtual services in the drop-down list and click Save to enable it
for the GSLB configuration.

If no DNS virtual services have been defined, the drop-down list is empty. An active GSLB site
does not require a DNS virtual service, though one may be preferred, as described in the next
section.

High Availability Recommendations for GSLB


For high availability, it is recommended to configure DNS for GSLB on an SE group that is scalable
to two or more Service Engines. It is also recommended to implement DNS for GSLB in more than
one location. This can be implemented in the following two ways:

1 You must have at least two geographically separated active GSLB sites. For each site,
configure DNS to a scalable SE group.


2 If only one active site is defined, ensure there is at least one geographically remote cloud. On
that remote cloud, configure DNS for GSLB on a scalable SE group. Also, define all virtual
services needed to support the mission-critical applications running in the original location.

Configuring DNS
This section explains how to configure DNS on NSX Advanced Load Balancer.

Custom DNS Application Profile


You can create a custom DNS profile that can be referenced when defining the DNS virtual service
optionally. Refer to the DNS Profile section of the Application Profile.

DNS Virtual Service


The DNS virtual service can be configured with IPv4 VIP, IPv6 VIP, or a dual VIP.

n Navigate to Applications > Virtual Services.

n Click Create Virtual Service (Advanced Setup).

The configuration tabs associated with DNS are as explained below:

Settings Tab

1 Under the Profiles section, select System DNS profile option in the Application Profile drop-
down list.

2 Choose a suitable profile for the network settings under TCP/UDP Profile, such as System-
UDP-Per-Pkt.

3 Under the Service Port section, enter 53 for the Services field.

4 Under the Pool section, choose a relevant IPv4, IPv6, or IPv4 + IPv6 pool from the drop-down
list or click Create Pool to configure a new pool. On creating a new pool, navigate to the
Servers tab to enter the IPv4, IPv6, or IPv4v6 member information.

Static DNS Records Tab

1 Click Create DNS Record to create a new DNS record. You can create the DNS record for both
IPv4 and IPv6 traffic.

2 Specify a qualified domain name under FQDN. For Type, choose the record type from the
drop-down list.

3 Under the A and AAAA Record section, enter the IP for the A record in the IPv4 Address field and
the IP for the AAAA record in the IPv6 Address field. You can choose to enter either one of them or both.
Multiple IP addresses (both IPv4 and IPv6) can be configured as well.
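A hedged CLI sketch of a comparable DNS virtual service is shown below. The virtual service and pool names are placeholders, VIP/VsVip configuration is omitted for brevity, and the exact syntax of the services submode can vary by release; the profile names used (System-DNS, System-UDP-Per-Pkt) are the system defaults mentioned above.

# Sketch only: placeholder names; VIP configuration omitted
[admin:controller]: > configure virtualservice demo-dns-vs
[admin:controller]: virtualservice> application_profile_ref System-DNS
[admin:controller]: virtualservice> network_profile_ref System-UDP-Per-Pkt
[admin:controller]: virtualservice> services
[admin:controller]: virtualservice:services> port 53
[admin:controller]: virtualservice:services> save
[admin:controller]: virtualservice> pool_ref demo-dns-pool
[admin:controller]: virtualservice> save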


DNS Resolution on Service Engine


NSX Advanced Load Balancer supports DNS resolution on the Controller by default. In cases
where the Controller does not have reachability to the DNS resolver and the configuration objects
need FQDN resolution, the DNS resolution on SE enables FQDN resolution through SE.

Note
n FQDN resolution of pool member objects is supported only through SE.

n It is currently supported on VMware and No access clouds.

To enable the DNS Resolution on SE, dns_resolution_on_se must be set in cloud configuration.

The Service Engine needs a DNS resolver configuration for resolving FQDNs from the Service
Engine. For this, a DNS resolver object must be configured in the cloud configuration. Only
one DNS resolver object is supported per cloud.

By default, the refresh of the records is based on TTL.

Configuring DNS Resolution on SE


The following is the CLI command for enabling the DNS resolution on SE:

[admin:Avi-Controller]: > configure cloud Default-Cloud


[admin:Avi-Controller]: cloud > dns_resolution_on_se
[admin:Avi-Controller]: cloud > save

The following is the CLI command for configuring the DNS resolver in cloud:

[admin:Avi-Controller]: > configure cloud Default-Cloud


[admin:Avi-Controller]: cloud> dns_resolvers
[admin:Avi-Controller]: cloud:dns_resolvers> resolver_name resolver1
[admin:Avi-Controller]: cloud:dns_resolvers> nameserver_ips 100.64.88.201
[admin:Avi-Controller]: cloud:dns_resolvers> nameserver_ips 100.64.89.202
[admin:Avi-Controller]: cloud:dns_resolvers> save
[admin:Avi-Controller]: cloud> save

The following are the configurable attributes in the DNS Resolver:

n resolver_name: Name of the resolver.

n nameserver_ips: The IPv4 addresses of DNS servers to be used for resolution.

n fixed_ttl: If configured, this value is used for refreshing the DNS entries. This overrides
both received_ttl and min_ttl. The entries are refreshed only on fixed_ttl, even when
received_ttl is less than fixed_ttl.

n min_ttl: If configured, this TTL overrides the TTL from responses when the received TTL is less
than min_ttl. Effectively, the TTL used is max(received_ttl, min_ttl).

n use_mgmt: If this is enabled, DNS resolution is performed through management network.


The output is as follows:

[admin:demo-cntrlr]: > show serviceengine demo-se2 resolverdb


+----------------------+-------------------------------------------+
| Field | Value |
+----------------------+-------------------------------------------+
| se_ref | demo-se2 |
| dns_resolution_on_se | True |
| fqdns[1] | |
| fqdn | ntest17.foo.avi.com |
| obj_uuids[1] | pool-da9e76ad-9bf3-4a8b-9dce-13bf7d36b96d |
| ips[1] | 1.1.1.17 |
| ttl | 300 |
| last_resolved_time | Mon Apr 12 06:54:12 2021 |
| | |
| last_updated_time | Mon Apr 12 05:03:35 2021 |
| | |
| fqdns[2] | |
| fqdn | ntest15.foo.avi.com |
| obj_uuids[1] | pool-f4e9743c-0585-4d67-897e-38328702813c |
| ttl | 0 |
| last_resolved_time | Mon Apr 12 06:53:53 2021 |
| | |
| last_updated_time | Thu Jan 1 00:00:00 1970 |
| | |
| err_response | ERROR |
| resolvers[1] | |
| resolver_name | resolver6 |
| nameserver_ips[1] | 100.64.88.201 |
| nameserver_ips[2] | 100.64.92.40 |
| total_fqdns | 2 |
| resolvers[2] | |
| resolver_name | Default-ResolvConf |
| total_fqdns | 0 |
+----------------------+-------------------------------------------+

n If the resolution needs to be done through the SE and the DNS resolvers are updated through
DHCP, you can enable only the dns_resolution_on_se flag and do not have to configure the
dns_resolvers object in the cloud.

n If a dns_resolvers object is configured, it is always used for FQDN resolution.

Limitations of Configuring DNS Resolution on SE


The following are the limitations of DNS resolution on SE:

n Only IPv4 transport is supported for FQDN resolution.

n DNS resolution is done over UDP only.

n Only A records are queried.

n Only pool members FQDN resolution is supported.


Configuring DNS Nameservers on Service Engine for Client Log Streaming and
for External Health Monitor
If the DNS resolver in the cloud is configured as per the steps in Configuring DNS Resolution on
SE, /etc/systemd/resolved.conf for the management network and /etc/netns/{namespace-
name}/resolv.conf for all VRFs on the SE virtual machine are updated.

Domain names configured in external_server under the Analytics Profile
client_log_streaming_config (used to stream NSX Advanced Load Balancer client logs), and domain
names present in the script code for an external health monitor, are resolved through the
configured nameservers.

DNS Policy
This section explains about DNS policy.

A DNS policy consists of rules, each of which has match targets and actions. The match targets are the
various attributes of a DNS request, such as the query type, query domain name, DNS transport
protocol used, and the client IP originating the request. The rule actions can vary from security
actions, such as closing the connection, to response actions, such as generating an empty
response.

A DNS policy can be referenced by a Layer-4 DNS virtual service (L4 DNS VS), a virtual service
which has an application profile type DNS. A single DNS virtual service can refer to a single DNS
policy.


The DNS rule engine is executed for a DNS request only when a DNS request has been received
and parsed successfully.

A DNS policy rule is said to be a hit for a DNS request if all the match targets of the rule evaluate
to TRUE. If any match target of the rule does not evaluate to TRUE, the rule is not considered a hit
and the subsequent rule of the current policy (or, if there are no more rules in current policy, then
the first rule of the next policy is applicable) is evaluated.

Note For a DNS query, prior to lookups into the database for GSLB and static DNS entries, the
DNS policy rules are applied first.


Matches
This section explains about rule matching in the DNS policy with match targets and actions.

Client IP
The match target matches the client IP address of the DNS query against a configured set of IP
addresses. The IP address match can be against an implicit set of IP addresses, IP address ranges
and IP prefixes, and/or a set of IP address group objects.

The client IP match operation supports the following match operations:

n IS IN evaluates to TRUE, if the client IP of the current DNS request is in the configured set of IP
addresses.

n IS NOT IN evaluates to TRUE, if the client IP of the current DNS request is not in the configured
set of IP addresses.

Use Case

A client IP match target can be used to block DNS queries emanating from a particular
geographical area hosting a bad bot. This is achieved by configuring a client IP rule match using
the IP addresses associated with the particular geographical area, and a rule action of drop.


[Figure: Example client IP match with operation IS IN, matching the addresses 202.192.0.1 and 202.192.0.2, the ranges 192.168.71.1-192.168.71.20 and 192.168.73.15-192.168.73.31, the prefix 192.0.31.1/16, and the IP groups ip_group_foo and ip_group_bar.]

Query Domain Name


This match target matches the query domain name in the DNS query request against the
configured set of strings. The query domain name match target supports an implicit set of domain
names as match targets and a set of string group objects.

The query name match operation supports the following match operations:

n Begins With evaluates to TRUE, if the query domain name of the current DNS request begins
with any of the strings in the configured set of strings.

n Does Not Begin With evaluates to TRUE, if the query domain name of the current DNS request
does not begin with any of the strings in the configured set of strings.

n Contains evaluates to TRUE, if the query domain name of the current DNS request contains any
of the strings in the configured set of strings.

n Does Not Contain evaluates to TRUE, if the query domain name of the current DNS request
contains none of the strings in the configured set of strings.

n Ends With evaluates to TRUE, if the query domain name of the current DNS request ends with any of
the strings in the configured set of strings.


n Does Not End With evaluates to TRUE, if the query domain name of the current DNS request
does not end with any of the strings in the configured set of strings.

n Equals evaluates to TRUE, if the query domain name of the current DNS request equals any of
the strings in the configured set of strings.

n Does Not Equal evaluates to TRUE, if the query domain name of the current DNS request
equals none of the strings in the configured set of strings.

Use case

A query domain name match target can be used to block DNS queries for certain domains that are
not served by the DNS virtual service. This is achieved by configuring a rule with query domain
name match using the desired unavailable domain names, and a rule action of drop.

[Figure: Example query domain name match with operation Begins With, matching the strings internal., dmz., and admin., and referencing string group objects.]

Query Type
This match target matches the type of the DNS query against a configured set of query types
(record types A, AAAA, CNAME, and so on). The query type match operation supports the
following match operations:

n Is In evaluates to TRUE if the query type of the current DNS request is in the configured set of
query types.

n Is Not In evaluates to TRUE if the query type of the current DNS request is not in the
configured set of query types.

Use case

A query type match target can be used to block DNS queries not served by the DNS virtual
service. This is achieved by configuring a rule query type match using the desired available query
types, and a rule action of drop. Thus, any query type not in the configured set will be dropped.


[Figure: Example query type match with operation Is Not In, listing the query types A, AAAA, CNAME, and SRV.]

DNS Transport Protocol


This match target matches the transport protocol carrying the DNS query against a configured set of
transport protocols. The transport protocol match operation supports the following match operations:

n Is In evaluates to TRUE, if the transport of the current DNS request is in the configured set of
transport protocols.

n Is Not In evaluates to TRUE, if the transport of the current DNS request is not in the configured
set of transport protocols.

Use Case

A transport protocol match target can be used to redirect DNS queries from UDP to TCP. This is
achieved by configuring a rule with a transport protocol match on the UDP protocol, and a rule
action of Empty Response with the truncation (TC) bit set. Thus, any query over UDP receives an
empty response with the truncation (TC) bit set, which causes the client to retransmit the query
over TCP.

[Figure: Example transport protocol match with operation Is In, matching the UDP protocol.]

Rate Limiting
It is possible to identify a match which specifies the maximum number of DNS requests allowed in
a period of time through the REST API or UI.


Actions

n Access Control: This rule action allows a UDP DNS query to be processed or dropped. If the
query arrives over TCP, it can be allowed or dropped, with the additional option of resetting
the connection.

Use Case: If a rule match is configured to block DNS queries of types other than A, AAAA, CNAME,
and SRV, the drop action is used in the rule.

n Custom Response: This action sends a custom response for a DNS query request. The response
can be controlled to set the response code (RCODE), and the Authority (AA) and Truncation (TC)
bits in the response. Through the REST API and CLI, resource record sets (RRsets) are supported,
permitting custom data to be inserted into the Answer, Authority, and Additional sections of the
DNS response body. For details on RRsets, see RFC 1034, Domain Names - Concepts and Facilities.

Use Case: If the DNS entries in the DNS virtual service do not support AAAA records for IPv6
addresses and the client should be hinted to request A records instead, a rule match is configured
to catch AAAA DNS queries, and the response action is used to generate an empty NOERROR response.
This causes the client to reissue the query for an A record. Custom A, CNAME, NS, and/or AAAA
record types can be returned.

n GSLB Site Selection : The policy of the DNS virtual service is configured so that a rule match
can override the usual GSLB-algorithm-based response. As a result of a match, one site is
chosen from a set of IP addresses (each homed at a different GSLB site) that share a common
site_name tag. If none of these are available, up to 16 fallback sites can be identified as an
alternative. If none of the fallback sites are healthy and the is_preferred_site Boolean is
TRUE, the DNS virtual service picks a site based on the configured GSLB algorithm. For more
information, see GSLB Site Selection with Fallback and Preferred-Site Options.

Use case: Imagine three GSLB sites, one each in Paris, Lyons, and Antwerp. With the geolocation
algorithm of NSX Advanced Load Balancer, a client situated in France, close to the French-Belgian
border is directed to Antwerp. However, since the client is in France, the GSLB-site-selection
action returns the VIP of a site having the site name “FRANCE”.

n Pool and Pool Group Selection: An NSX Advanced Load Balancer DNS virtual service is typically
configured with backend DNS servers. Routing requests to backend DNS servers that are not
members of the default pool requires definition of a pool or pool group selection action.
This feature is supported in the NSX Advanced Load Balancer REST API, CLI, and UI.

Note Pool selection is also referred to as pool switching.


Use Case: It might be necessary to resolve a subset of DNS queries using DNS infrastructure
residing in a remote cloud. NSX Advanced Load Balancer DNS virtual service can conditionally
load balance such queries to one of the DNS servers in the remote cloud.

n Rate Limiting : A NSX Advanced Load Balancer DNS virtual service can be configured to limit
the rate at which DNS requests are accepted. You can specify the number of requests that are
allowed in a given time period. The action can be configured as either DROP or Report Only.
If DROP is configured then, the traffic exceeding the rate limit is dropped by the virtual service.
If Report Only is configured then, such traffic is passed through but marked as significant
logs in the application logs.

Note The rate limiting is configured from the NSX Advanced Load Balancer REST API or CLI and
not the UI.

Use Cases: DNS request rate limiting can be used to ensure quality of service and improved
security.

Rule Configuration through the NSX Advanced Load Balancer UI


This section explains the rule configuration using NSX Advanced Load Balancer UI.

Procedure

1 Edit the DNS virtual service for which policy rules are to apply.

2 Click the green add button to create a rule. NSX Advanced Load Balancer displays Rule 1 as the
default name, which can be changed accordingly.

3 Select a relevant match from the Matches drop-down list.

4 Select a relevant action from the Actions drop-down list.

5 After selecting all the relevant options and parameters, click the Submit button.


NSX Advanced Load Balancer CLI Commands and Data Structures


This section discusses NSX Advanced Load Balancer CLI commands and data structures.

Note The following schema includes the type DNS_RECORD_AAAA:

[admin:10-10-23-1]: dnspolicy:rule:action> new

allow:
allow: '(true | false) # Field Type: Optional'
reset_conn: '(true | false) # Field Type: Optional'
gslb_site_selection:
fallback_site_names: <string>
is_site_preferred: '(true | false) # Field Type: Optional'
site_name: '<string> # Field Type: Optional'
pool_switching:
pool_group_uuid: '<string> # Field Type: Optional'
pool_uuid: '<string> # Field Type: Optional'
response:
authoritative: '(true | false) # Field Type: Optional'
rcode: '<choices: DNS_RCODE_NOERROR | DNS_RCODE_NXDOMAIN | NS_RCODE_YXDOMAIN |
DNS_RCODE_REFUSED | DNS_RCODE_FORMERR | DNS_RCODE_YXRRSET | DNS_RCODE_NOTIMP |
DNS_RCODE_NOTZONE | DNS_RCODE_SERVFAIL | DNS_RCODE_NXRRSET | DNS_RCODE_NOTAUTH>
# Field Type: Optional'
resource_record_sets:
- resource_record_set:
cname:
cname: '<string> # Field Type: Required'
fqdn: '<string> # Field Type: Optional'
ip_addresses:
- ip_address:
addr: '<string> # Field Type: Required'
type: '<choices: V4 | V6 | DNS> # Field Type: Required'
nses:
- ip_address:
addr: '<string> # Field Type: Required'
type: '<choices: V4 | V6 | DNS> # Field Type: Required'
nsname: '<string> # Field Type: Required'
ttl: '<integer> # Field Type: Optional'
type: '<choices: DNS_RECORD_DNSKEY | DNS_RECORD_AAAA | DNS_RECORD_A | DNS_RECORD_OTHER
| DNS_RECORD_AXFR | DNS_RECORD_SOA | DNS_RECORD_MX | DNS_RECORD_SRV |
DNS_RECORD_HINFO
| DNS_RECORD_RRSIG | DNS_RECORD_OPT | DNS_RECORD_ANY | DNS_RECORD_PTR | DNS_RECORD_RP
| DNS_RECORD_TXT | DNS_RECORD_CNAME | DNS_RECORD_NS>
Field Type: Optional'
section: '<choices: DNS_MESSAGE_SECTION_QUESTION | DNS_MESSAGE_SECTION_ADDITIONAL
| DNS_MESSAGE_SECTION_AUTHORITY | DNS_MESSAGE_SECTION_ANSWER>
Field Type: Optional'
truncation: '(true | false) # Field Type: Optional'

[admin:10-10-23-1]: dnspolicy:rule:action> cancel


Exited out of the submode without saving the result.
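As an illustration, the action editor shown above could be filled in to implement the UDP-to-TCP redirection use case described in the DNS Transport Protocol section: an empty response with the truncation bit set and a NOERROR response code. This is a sketch that uses only fields from the schema above; the values are illustrative.

response:
  authoritative: false                # illustrative value
  rcode: DNS_RCODE_NOERROR
  truncation: true                    # sets the TC bit so the client retries over TCP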


Processing DNS Request on both SE and Backend Server


A DNS policy needs to be set based on any of the existing match criteria types, with the match action
as either pool or pool group switching, so that when a match is found, the query is sent to the
backend server for the response.

For example, if there is a static record of type A for foo.com on the SE, and a DNS policy is
configured so that a query matching foo.com triggers a pool or pool group switching action, the
response comes from the pool or pool group switched server rather than from the record present
on the SE.

Another use case is supporting record types such as TXT and NS on a backend server, which are not
yet supported in GSLB services, and redirecting those queries to the backend server based on DNS
policies.

Custom DNS Profile on NSX Advanced Load Balancer


This section discusses Custom DNS Profile on NSX Advanced Load Balancer.

NSX Advanced Load Balancer supports custom DNS profiles to communicate with the DNS provider.
With this feature, you can use your own DNS provider, and NSX Advanced Load Balancer uses the
allowed usable domains as required.

Configuring Custom DNS using UI


This section discusses how to configure Custom DNS using UI.
Uploading Python Script
A python script is uploaded to NSX Advanced Load Balancer to use a custom DNS profile option.

n Navigate to Templates > Profiles > Custom IPAM/DNS and click Create to upload the script.

n Provide DNS name and upload the script as the code to handle DNS records, for instance,
update and delete the DNS records.

The script has the following methods used:

n Create and update record

n Delete record

In this example, the following parameters are used while uploading the script to NSX Advanced
Load Balancer :

n username: admin

n password: password (It is marked as sensitive)

n wapi version: v2.0

n Server: IP address of the DNS provider


These parameters (provider-specific information) are used to communicate with DNS providers.

Note The above parameters are provided for example purpose only. Based on the method used
in the script, the parameters are passed to the script.

Creating Custom DNS Profile


n Navigate to the Templates > IPAM/DNS Profiles and click Create button to begin. Name the
profile.

n Select Custom DNS from the Type drop-down list.

n Choose Custom DNS created in the previous step and provide the additional provider-specific
parameters, as shown below:

n network_view: In this case, it is the default network view.

n dns_view: In this case, it is the default DNS view.

The additional parameters provided above and the usable domains are optional fields, but they help
in provisioning the virtual service automatically with the required attributes.

Using the same script, multiple usable domains can be created.

While provisioning the virtual service, the option to choose among multiple domains is available
under Applicable Domain Name.

Using Custom DNS Profile for Cloud Deployment


To associate the custom DNS option for the cloud, navigate to Infrastructure > Cloud and use the
DNS profile created in the previous steps.


Creating Virtual Service


The following are the steps to create virtual service:

n Navigate to Applications > Virtual Service.

n Click Create to create a new virtual service which will use the Custom DNS profile for
registering domain automatically. Specify the following details for the virtual service:

n Name: Name of the virtual service.

n VIP address: IP address of the virtual service.

n Application Domain Name: Use the usable domain provided while creating the custom
DNS profile.

n Servers: IP address of the backend server.

n Once the virtual service creation is successful, the FQDN will be registered with the virtual
service.

n The same domain will be registered at the DNS provider site as well.

Configuring DNS Profile using CLI


This section explains how to configure a DNS profile using the CLI.
Uploading Python Script
A Python script is uploaded to NSX Advanced Load Balancer to use a custom DNS profile. The
following example script can be uploaded to the NSX Advanced Load Balancer Controller.

"
Custom DNS script
"""
import socket
import os
import getpass
import requests
import inspect
import urllib
import json
import time

def CreateOrUpdateRecord(record_info, params):


username = params.get('username')
passkey = params.get('password')
ip = record_info.get('f_ip_address', '') or record_info.get('ip_address', '')
cname = record_info.get('cname', '')
fqdn = record_info.get('fqdn')
ttl = record_info.get('ttl', 900)
record_type = record_info.get('type', 'DNS_RECORD_A')
dns_record_id = 0
metadata_j = record_info.get('metadata', None)
if metadata_j:
metadata = json.loads(metadata_j)


# Check if default of 0 as DNS record id is useful


dns_record_id = metadata.get('dns_record_id', 0)

if not fqdn:
print "Not valid FQDN found %s, returning"%record_info
return

# REST API
api = WebApiClient(username, passkey, domain)
api.disable_ssl_chain_verification()
param_dict = {
# DNS Record Information
"dns_record_id" : dns_record_id,
"fqdn" : fqdn,
"type" : "CNAME" if record_type == 'DNS_RECORD_CNAME' else "A",
"ttl" : str(ttl),
"content" : cname if record_type == 'DNS_RECORD_CNAME' else ip,
"site" : "ALL"
}

# Send request to register the FQDN, failures can be raised and the VS creation will fail
rsp = api.send_request("Update", param_dict)
if not rsp:
err_str = "ERROR:"
err_str += " STATUS: " + api.get_response_status()
err_str += " TYPE: " + str(api.get_error_type())
err_str += " MESSAGE: " + api.get_error_message()
print err_str
raise Exception("DNS record update failed with %s"%err_str)

def DeleteRecord(record_info, params):


username = params.get('username')
passkey = params.get('password')
ip = record_info.get('f_ip_address', '') or record_info.get('ip_address', '')
cname = record_info.get('cname', '')
fqdn = record_info.get('fqdn')
ttl = record_info.get('ttl', 900)
record_type = record_info.get('type', 'DNS_RECORD_A')
dns_record_id = 0
metadata_j = record_info.get('metadata', None)
if metadata_j:
metadata = json.loads(metadata_j)
# Check if default of 0 as DNS record id is useful
dns_record_id = metadata.get('dns_record_id', 0)

api = WebApiClient(username, passkey, domain)


api.disable_ssl_chain_verification()
param_dict = {
# DNS Record Information
"dns_record_id" : int(dns_record_id),
"delete_reason" : "Reason for deleting record",
"push_immediately" : True,
"update_serial" : True,
}


rsp = api.send_request("Delete", param_dict)


if not rsp:
print "ERROR:"
print " STATUS: " + api.get_response_status()
print " TYPE: " + str(api.get_error_type())
print " MESSAGE: " + api.get_error_message()
return ""

The following parameters can be used in the script:

n username: <username>

n password: <password>

n API version: <version number>


Creating Custom DNS Profile using CLI

[admin-cntrl1]: > configure customipamdnsprofile custom-dns-profile

[admin-cntrl1]: customipamdnsprofile>
cancel Exit the current submode without saving
do Execute a show command
name Name of the Custom IPAM DNS Profile.
new (Editor Mode) Create new object in editor mode
no Remove field
save Save and exit the current submode
script_params (submode)
script_uri Script URI of form controller://ipamdnsscripts/<file-name>
show_schema show object schema
tenant_ref Help string not found for argument
watch Watch a given show command
where Display the in-progress object
[admin-cntrl1]: customipamdnsprofile>

In the above configuration snippet, the custom_dns_script.py script is uploaded with the
following attributes.

n Name: custom-dns-profile

n Username: dnsuser

n Password: Password with the is_sensitive flag set to True

n URI for the script: controller://ipamdnsscripts/custom_dns_script.py

Use the following syntax for uploading your script. controller://ipamdnsscripts/<script name>
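For reference, the following is a minimal CLI sketch of setting these attributes. The prompts and the is_sensitive flag syntax are illustrative and can vary by release; the show output further below reflects the resulting object:

[admin-cntrl1]: customipamdnsprofile> script_uri controller://ipamdnsscripts/custom_dns_script.py
[admin-cntrl1]: customipamdnsprofile> script_params
New object being created
[admin-cntrl1]: customipamdnsprofile:script_params> name username
[admin-cntrl1]: customipamdnsprofile:script_params> value dnsuser
[admin-cntrl1]: customipamdnsprofile:script_params> save
[admin-cntrl1]: customipamdnsprofile> script_params
New object being created
[admin-cntrl1]: customipamdnsprofile:script_params> name password
[admin-cntrl1]: customipamdnsprofile:script_params> value <password>
[admin-cntrl1]: customipamdnsprofile:script_params> is_sensitive
[admin-cntrl1]: customipamdnsprofile:script_params> save
[admin-cntrl1]: customipamdnsprofile> save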

The following is the output of the show customipamdnsprofile custom-dns-profile command.

[admin:10-10-25-160]: > show customipamdnsprofile custom-dns-profile

+------------------+-----------------------------------------------------------+
| Field            | Value                                                     |
+------------------+-----------------------------------------------------------+
| uuid             | customipamdnsprofile-c12faa8a-f0eb-4128-a976-98d30391b9f2 |
| name             | custom-dns-profile                                        |
| script_uri       | controller://ipamdnsscripts/custom_dns_script.py          |
| script_params[1] |                                                           |
|   name           | username                                                  |
|   value          | dnsuser                                                   |
|   is_sensitive   | False                                                     |
|   is_dynamic     | False                                                     |
| script_params[2] |                                                           |
|   name           | password                                                  |
|   value          | <sensitive>                                               |
|   is_sensitive   | True                                                      |
|   is_dynamic     | False                                                     |
| tenant_ref       | admin                                                     |
+------------------+-----------------------------------------------------------+

Configuring IPAM DNS Provider profile


This section explains how to configure IPAM DNS provider profile.

Use the command configure ipamdnsproviderprofile <profile name> to create the IPAM DNS
provider profile.

Note Parameters used for the profile configuration depend on the environment.

[admin-cntrl1]: configure ipamdnsproviderprofile dns-profile


[admin-cntrl1]: ipamdnsproviderprofile>
allocate_ip_in_vrf If this flag is set, only allocate IP from networks in the Virtual
Service VRF. Applicable for Avi Vantage IPAM only
aws_profile (submode)
azure_profile (submode)
cancel Exit the current submode without saving
custom_profile (submode)
do Execute a show command
gcp_profile (submode)
infoblox_profile (submode)
internal_profile (submode)
name Name for the IPAM/DNS Provider profile
new (Editor Mode) Create new object in editor mode
no Remove field
openstack_profile (submode)
proxy_configuration (submode)


save Save and exit the current submode


show_schema show object schema
tenant_ref Help string not found for argument
type Provider Type for the IPAM/DNS Provider profile
watch Watch a given show command
where Display the in-progress object
[admin-cntrl1]: ipamdnsproviderprofile>

n Provide the desired name: dns-profile.

n Select Type as IPAMDNS_TYPE_CUSTOM.

n Provide the custom_ipam_dns_profile_ref value as custom-dns-profile (name of the custom
DNS profile created in the previous step).

The following additional parameter is passed to the script:

n Name: api_version

n Value: 2.2

[admin-cntrl1]: > show ipamdnsproviderprofile dns-profile

+-------------------------------+-------------------------------------------------------------+
| Field                         | Value                                                       |
+-------------------------------+-------------------------------------------------------------+
| uuid                          | ipamdnsproviderprofile-82ec8888-122e-4ca9-a1b3-0320c37e2d68 |
| name                          | dns-profile                                                 |
| type                          | IPAMDNS_TYPE_CUSTOM                                         |
| custom_profile                |                                                             |
|   custom_ipam_dns_profile_ref | custom-dns-profile                                          |
|   dynamic_params[1]           |                                                             |
|     name                      | api_version                                                 |
|     value                     | 2.2                                                         |
|     is_sensitive              | False                                                       |
|     is_dynamic                | False                                                       |
| allocate_ip_in_vrf            | False                                                       |
| tenant_ref                    | admin                                                       |
+-------------------------------+-------------------------------------------------------------+

Integration with External DNS Providers


NSX Advanced Load Balancer integrates with Amazon Web Services (AWS) to provide DNS
services to applications running on instances in AWS.

Note
n AWS Cloud in NSX Advanced Load Balancer supports AWS DNS by enabling
route53_integration in the cloud configuration and does not require this DNS profile
configuration.

n A separate DNS provider configuration (as described in the 'DNS Configuration' section below)
is required only for cases where AWS provides the infrastructure service for other clouds.

n AWS DNS is supported only for North-South DNS provider.


For more information refer to Service Discovery Using IPAM and DNS.

DNS Configuration
This section explains DNS configuration.

To use AWS as the DNS provider, one of the following types of credentials are required:

1 Identity and Access Management (IAM) roles: Set of policies that define access to resources
within AWS.

2 AWS customer account key: Unique authentication key associated with the AWS account.

If you prefer to use IAM roles, follow the steps below:

1 If you use the IAM role method to define access for an NSX Advanced Load Balancer
installation in AWS, then use the steps in IAM Role Setup for Installation into AWS article
to set up the IAM roles before you start to deploy the NSX Advanced Load Balancer Controller
EC2 instance.

2 In the Type field, select AWS Route 53 DNS and select Use IAM Roles button.

If you prefer to use access keys, follow the steps below:

n In the Type field, select AWS Route 53 DNS and select Use Access Keys and enter the
following information:

n Access Key ID: AWS customer key ID

n Secret Access Key: Customer key


n Select the AWS region into which the VIPs will be deployed.

n Select Access AWS through Proxy, if access to AWS endpoints requires a proxy server.

n Select Use Cross-Account AssumeRole, if the AWS credentials or role is leveraged to


access across accounts and click Next. For more information, see Configuring the NSX
Advanced Load Balancer Controller Cloud Connector.

A drop-down of available VPCs in that region is displayed.

n Select the appropriate VPC.

n A drop-down of available domain names associated with that VPC are displayed. Configure at
least one domain for virtual service’s FQDN registration with Route 53.

n Click Save.


Custom IPAM Profile on NSX Advanced Load Balancer


This section explains the steps to configure a custom IPAM profile.

NSX Advanced Load Balancer supports integration with third-party IPAM providers, such as NS1
and TCPWave, to provide automatic IP address allocation for virtual services.

Configuring Custom IPAM


The following are the steps to configure custom IPAM:

1 Upload Python Script

2 Create a Custom IPAM Profile

3 Attach a Custom IPAM Profile to the Cloud

4 Create a Virtual Service

Step 1: Upload Python Script


A Python script with the expected functions (explained in the Python Script section below) is
uploaded to the Controller. The functions defined in this script are invoked by NSX Advanced
Load Balancer for IP address management with the third-party providers.

Along with the script, you can add the following key-value parameters, which are used by the
functions in the script to communicate with the IPAM provider:

n username — <username>


n password — <password> with the is_sensitive flag set to True

n server — 1.2.3.4

These parameters (provider-specific information) are used to communicate with IPAM providers.

Note
n The above parameters are provided as an example only. The parameters that are passed to the
script depend on the methods used in the script.

n The file-name must have a .py extension and conform to PEP8 naming convention.

Configuring using UI
1 Navigate to Templates > Profiles > Custom IPAM/DNS, and click Create.

2 Specify the Name, upload the .py file in Script.

3 Click ADD SCRIPT PARAMS and enter the below details:

n username: <username>

n password: <password> with the Sensitive checkbox selected.

n server: 1.2.3.4

n wapi_version

n network_view: default

n dns_view: default

4 Click Save

Configuring using CLI


1 Copy the script to the /var/lib/avi/ipamdnsscripts/ location on the Controller.

2 Use configure customipamdnsprofile. For instance, the custom_ipam_script.py script


is uploaded with the following attributes as shown below:
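
The following is a hedged sketch of this step; the parameter values are placeholders and the exact prompts can vary by release:

[admin-cntrl1]: > configure customipamdnsprofile custom-ipam-script
[admin-cntrl1]: customipamdnsprofile> script_uri controller://ipamdnsscripts/custom_ipam_script.py
[admin-cntrl1]: customipamdnsprofile> script_params
New object being created
[admin-cntrl1]: customipamdnsprofile:script_params> name server
[admin-cntrl1]: customipamdnsprofile:script_params> value 1.2.3.4
[admin-cntrl1]: customipamdnsprofile:script_params> save
[admin-cntrl1]: customipamdnsprofile> save

The username and password parameters (with the sensitive flag set) are added in the same way as additional script_params entries.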


Step 2: Create a Custom IPAM Profile


Configuring using CLI

1 Use configure ipamdnsproviderprofile <profile name> command to create the IPAM


provider profile.

Note Parameters used for the profile configuration depend on the environment.

2 Provide the desired name, for instance, custom-ipam-profile.

3 Select Type as IPAMDNS_TYPE_CUSTOM.

4 Provide the custom_ipam_dns_profile_ref value as custom-ipam-script (name of the script


object created in the Step 1).


5 Add usable subnets if required. If this is set, the option to choose among multiple usable
subnets is available under Network for VIP Address Allocation while provisioning the virtual
service, as shown in the Step 4: Create a Virtual Service section. If it is not set, all the
available networks/subnets from the provider are listed.

Note This step cannot be configured using the UI in version 21.1.1.
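
A minimal CLI sketch of these steps follows. The submode and field names mirror the custom profile structure shown in the earlier DNS example; the usable subnets configuration is omitted here because its CLI syntax can vary by release:

[admin-cntrl1]: > configure ipamdnsproviderprofile custom-ipam-profile
[admin-cntrl1]: ipamdnsproviderprofile> type ipamdns_type_custom
[admin-cntrl1]: ipamdnsproviderprofile> custom_profile
[admin-cntrl1]: ipamdnsproviderprofile:custom_profile> custom_ipam_dns_profile_ref custom-ipam-script
[admin-cntrl1]: ipamdnsproviderprofile:custom_profile> save
[admin-cntrl1]: ipamdnsproviderprofile> save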

Step 3: Attach a Custom IPAM Profile to the Cloud


Configuring using UI

1 To associate the custom IPAM option for the cloud, navigate to Infrastructure > Cloud, and
use the custom IPAM profile created in the Step 2.

Configuring using CLI

1 Use the configure cloud <cloud name> to attach the IPAM profile to the cloud.

2 Provide the ipam_provider_ref value as custom-ipam-profile.
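
For example, a hedged sketch of these two steps, using Default-Cloud as a placeholder cloud name:

[admin:controller]: > configure cloud Default-Cloud
[admin:controller]: cloud> ipam_provider_ref custom-ipam-profile
[admin:controller]: cloud> save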


Step 4: Create a Virtual Service


1 Creating a new virtual service uses the custom IPAM profile and the script to create an
IPAM record with the provider automatically.

2 Provide the following mandatory attributes for the virtual service:

a Name: Name of the virtual service.

b Network for VIP Address Allocation: Select the network/ subnets for IP allocation
(mandatory only through UI).

c Servers: IP address of the back-end server.

Configuring using UI

1 Navigate to Applications > Virtual Service and click on Create button.

2 Once the virtual service is created successfully, the IP is allocated for the virtual service,
and an IPAM record is also created with the provider for the same.

Configuring using CLI

1 Use the configure vsvip <vsvip name> and configure virtualservice <vs name> commands to
create the VsVip and the virtual service respectively, as sketched below.
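
The following is a minimal, hedged sketch of these two commands. The object names (custom-vsvip, custom-vs, custom-pool) are placeholders, and the exact sub-option syntax for vip and services can differ by release:

[admin:controller]: > configure vsvip custom-vsvip
[admin:controller]: vsvip> vip
New object being created
[admin:controller]: vsvip:vip> auto_allocate_ip
[admin:controller]: vsvip:vip> save
[admin:controller]: vsvip> save
[admin:controller]: > configure virtualservice custom-vs
[admin:controller]: virtualservice> vsvip_ref custom-vsvip
[admin:controller]: virtualservice> services
New object being created
[admin:controller]: virtualservice:services> port 80
[admin:controller]: virtualservice:services> save
[admin:controller]: virtualservice> pool_ref custom-pool
[admin:controller]: virtualservice> save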

Python Script
1 The script should have all the required functions and exception classes defined, else
the system displays the following error message during IPAM profile creation: “Custom
IPAM profile script is missing required functions/exception classes
{function_or_exception_names}.”

2 The following are the required functions:

a TestLogin

b GetAvailableNetworksAndSubnets

c GetIpamRecord

d CreateIpamRecord

e DeleteIpamRecord

f UpdateIpamRecord

3 The following are the required exception classes:

a CustomIpamAuthenticationErrorException

b CustomIpamRecordNotFoundException

c CustomIpamNoFreeIpException

d CustomIpamNotImplementedException

e CustomIpamGeneralException


4 CustomIpamNotImplementedException can be raised when the function/ functionality is not


implemented in the script.

5 It is recommended to use logger_name (of auth_params) for script logging. Tenant-specific
debug log files (named custom_ipam_script_<tenant_name>.log) are created to save the
log statements from the script. For the admin tenant, log statements can be found in this
location: /var/lib/avi/log/custom_ipam_script.log

6 A separate Python script will also be provided to validate the provider script.

Note Example scripts for various IPAM providers are being developed and will be made available
once done.

The following is an example script template:

"""
This script allows the user to communicate with custom IPAM provider.

Required Functions
------------------
1. TestLogin: Function to verify provider credentials, used in the UI during IPAM profile
configuration.
2. GetAvailableNetworksAndSubnets: Function to return available networks/subnets from the
provider.
3. GetIpamRecord: Function to return the info of the given IPAM record.
4. CreateIpamRecord: Function to create an IPAM record with the provider.
5. DeleteIpamRecord: Function to delete a given IPAM record from the provider.
6. UpdateIpamRecord: Function to update a given IPAM record.

Required Exception Classes
--------------------------
1. CustomIpamAuthenticationErrorException: Raised when authentication fails.
2. CustomIpamRecordNotFoundException: Raised when given record not found.
3. CustomIpamNoFreeIpException: Raised when no free IP available in the given subnets/
networks.
4. CustomIpamNotImplementedException: Raised when the functionality is not implemented.
5. CustomIpamGeneralException: Raised for other types of exceptions.
"""

class CustomIpamAuthenticationErrorException(Exception):
"""
Raised when authentication fails.
"""
pass

class CustomIpamRecordNotFoundException(Exception):
"""
Raised when given record not found.
"""
pass

class CustomIpamNoFreeIpException(Exception):
"""

VMware, Inc. 563


VMware NSX Advanced Load Balancer Configuration Guide

Raised when no free IP available in the given subnets/networks.


"""
pass

class CustomIpamNotImplementedException(Exception):
"""
Raised when the functionality is not implemented.
"""
pass

class CustomIpamGeneralException(Exception):
"""
Raised for other types of exceptions.
"""
pass

def TestLogin(auth_params):
"""
Function to validate user credentials. This function is called from IPAM profile
configuration UI page.
Args
----
auth_params: (dict of str: str)
Parameters required for authentication. These are script parameters provided
while
creating a Custom IPAM profile.
Eg: auth_params can have following keys
server: Server ip address of the custom IPAM provider
username: self explanatory
password: self explanatory
logger_name: logger name
Returns
-------
Return True on success
Raises
------
CustomIpamNotImplementedException: if this function is not implemented.
CustomIpamAuthenticationErrorException: if authentication fails.
"""
1. Check all credentials params are given else raise an exception.
2. Raise an exception if test login fails.

def GetAvailableNetworksAndSubnets(auth_params, ip_type):


"""
Function to retrieve networks/subnets from the provider.
Called from the IPAM profile configuration to populate usable subnets on the UI.
Note: Subnets are represented in CIDR format.
Args
----
auth_params: (dict of str: str)
Parameters required for authentication.
ip_type: (str)
IP type supported by the networks/subnets. Allowed values: V4_ONLY, V6_ONLY and V4_V6.
Returns
-------
subnet_list: (list of dict of str : str)
network (str): network id/name
v4_subnet (str): V4 subnet
v6_subnet (str): V6 subnet
v4_available_ip_count (str): V4 free ips count of the network/v4_subnet
v6_available_ip_count (str): V6 free ips count of the network/v6_subnet
each dict has 5 keys: network, v4_subnet, v6_subnet, v4_available_ip_count,
v6_available_ip_count
v4_available_ip_count and v6_available_ip_count are optional; currently this function
returns the first 3 keys. Returning counts is TBD.
Raises
------
None
"""
1. return all the available networks and subnets.

def GetIpamRecord(auth_params, record_info):


"""
Function to return the info of the given IPAM record.
Args
----
auth_params: (dict of str: str)
Parameters required for authentication.
record_info: (dict of str: str)
id (str): uuid of vsvip.
fqdns (list of str): list of fqdn from dns_info in vsvip.
Returns
-------
alloc_info(dict of str: str):
Dictionary of following keys
v4_ip (str): IPv4 of the record
v4_subnet (str): IPv4 subnet
v6_ip (str): IPv6 of the record
v6_subnet (str): IPv6 subnet
network (str): network id/name
Raises
------
CustomIpamNotImplementedException: if this function is not implemented.
CustomIpamGeneralException: if the api request fails.
"""
1. Get the reference of the given IPAM record.
2. Raise a valid error message if the given record not found.
3. Return ipam record info like ipv4, ipv6, and its subnet/network name

def CreateIpamRecord(auth_params, record_info):


"""
Implements a Custom Rest API to create an IPAM record.
Args
----
auth_params: (dict of str: str)
Parameters required for authentication.
record_info: (dict of str: str)
New record information with following keys.
id (str): uuid of vsvip.
fqdns (list of str): list of fqdn from dns_info in vsvip.
preferred_ip (str): the vsvip IPv4 if it's already configured by the user.
preferred_ip6 (str): the vsvip IPv6 if it's already configured by the user.
allocation_type (str): IP allocation type. Allowed values: V4_ONLY, V6_ONLY and
V4_V6.
nw_and_subnet_list (list of tuples : str): List of networks and subnets to use
for new IPAM
record IP allocation. Each tuple has 3 values (network, v4_subnet, v6_subnet).
Returns
-------
alloc_info(dict of str: str):
Dictionary of following keys
v4_ip (str): allocated IPv4
v4_subnet (str): subnet used for IPv4 allocation.
v6_ip (str): allocated IPv6
v6_subnet (str): subnet used for IPv6 allocation.
network (str): network used for IPv4/IPv6 allocation.
Raises
------
CustomIpamNoFreeIpException: if no free ip available.
CustomIpamGeneralException: if create record fails for any other reason.
"""
1. Either id or fqdns can be used as the name/identifier to create a new IPAM record,
choose according to the requirements.
2. If the preferred_ip/preferred_ip6 is set, call specific rest URL to create an IPAM
record (according to the allocation_type).
3. If the nw_and_subnet_list is empty, call GetAvailableNetworksAndSubnets() to use any
available subnets or networks for IP allocation.
4. Based on the allocation_type, build payload data and call specific rest URL to create
an IPAM record.
5. If create IPAM record fails, raise an exception.

def DeleteIpamRecord(auth_params, record_info):


"""
Implements a Custom Rest API to delete an IPAM record.
Args
----
auth_params: (dict of str: str)
Parameters required for authentication.
record_info: (dict of str: str)
Record to be deleted. Has following keys.
id (str): uuid of vsvip.
fqdns (list of str): list of fqdn from dns_info in vsvip.
Returns
-------
True on successfully deleting the record.
Raises
------
CustomIpamRecordNotFoundException: if the given record not found
CustomIpamGeneralException: if delete record request fails.
"""
1. Get the reference of the given IPAM record.
2. Raise a valid error message if the given record not found.
3. Delete the record, if it fails, raise an exception.

def UpdateIpamRecord(auth_params, new_record_info, old_record_info):


"""
Function to handle update IPAM record requests. Eg: Change of allocation_type from
V4_ONLY to V6_ONLY.
Args
----
auth_params: (dict of str: str)
Parameters required for authentication.
new_record_info: (dict of str: str)
New record information with following keys.
id (str): uuid of vsvip.
fqdns (list of str): list of fqdn from dns_info in vsvip.
preferred_ip (str): the vsvip IPv4 if it's already configured by the user.
preferred_ip6 (str): the vsvip IPv6 if it's already configured by the user.
allocation_type (str): IP allocation type. Allowed values: V4_ONLY, V6_ONLY and
V4_V6.
nw_and_subnet_list (list of tuples : str): List of networks and subnets to use
for an
IPAM record IP allocation. Each tuple has 3 values (network, v4_subnet,
v6_subnet)
old_record_info: (dict of str: str)
Old record information with following keys.
id (str): uuid of vsvip.
fqdns (list of str): list of fqdn from dns_info in vsvip of an old record.
preferred_ip (str): old record's preferred IPv4.
preferred_ip6 (str): old record's preferred IPv6.
allocation_type (str): old record's IP allocation type. Allowed values: V4_ONLY,
V6_ONLY and V4_V6.
nw_and_subnet_list (list of tuples : str): List of networks and subnets used for
an old IPAM
record IP allocation. Each tuple has 3 values (network, v4_subnet, v6_subnet)
Returns
-------
alloc_info(dict of str: str):
Dictionary of following keys
v4_ip (str): allocated IPv4
v4_subnet (str): subnet used for IPv4 allocation.
v6_ip (str): allocated IPv6
v6_subnet (str): subnet used for IPv6 allocation.
network (str): network used for IPv4/IPv6 allocation.
Raises
------
CustomIpamNotImplementedException: if this function or specific update requests not
implemented.
CustomIpamRecordNotFoundException: if the given record not found
CustomIpamGeneralException: if the api request fails for any other reason.
"""
1. Raise CustomIpamNotImplementedException exception if the UpdateIpamRecord function or
specific update request is not implemented.
2. Get the reference of the given IPAM record.
3. Raise a valid error message if the given record not found.
4. Call specific rest URL based on the update type; if the update request fails, raise an
exception.

Support for Authoritative Domains, NXDOMAIN Responses, NS and SOA Records

This section discusses the support for authoritative domains of the NSX Advanced Load Balancer.

When an NSX Advanced Load Balancer DNS virtual service has a pass-through pool (of back-end
servers) configured and the FQDNs are not found in the DNS table, it proxies these requests to
the pool of servers. An exception is when the NSX Advanced Load Balancer is configured with an
authoritative domain, and the queried FQDN is within the authoritative domain, in which case an
NXDOMAIN is returned.

The NSX Advanced Load Balancer DNS virtual service includes a Start of Authority (SOA) record
with its NXDOMAIN (and other) replies.

Note Responses to SOA queries are not supported prior to NSX Advanced Load Balancer release
18.2.5. See Support for SOA rdata Queries.

Features
An SOA record accompanies an NXDOMAIN (non-existent domain) response if the incoming query’s
domain is a subdomain of one of the configured authoritative domains in the DNS application
profile.

Negative caching, that is, caching the fact that a record does not exist, is determined by name
servers authoritative for a zone, which must include the Start of Authority (SOA) record when
reporting that no data of the requested type exists. The minimum field value of the SOA record
and the TTL of the SOA itself are used to establish the TTL for the negative answer.

If the query’s FQDN matches an entry in the DNS table, but the query type is not supported by
default then, the NSX Advanced Load Balancer SE generates a NOERROR response, optionally with
an SOA record if the domain matches a configured authoritative domain.

Configuration Using the NSX Advanced Load Balancer UI


Queries for FQDNs that are subdomains of the authoritative domain names and do not have
any DNS record in the NSX Advanced Load Balancer are dropped or the NXDOMAIN response is
sent. The NSX Advanced Load Balancer System-DNS profile comes preconfigured to respond to
unhandled DNS requests. However, when creating a new DNS profile, it is necessary to change
(Options for) Invalid DNS Query processing field to respond to unhandled DNS requests to
ensure NXDOMAIN responses are sent when appropriate.


When an NXDOMAIN reply is appropriate for an FQDN that ends with one of the authoritative
domains, the value appearing in the Negative TTL field will be incorporated into the attached SOA
record. This value is 30 seconds by default; the allowed range is 1 to 86400 seconds.

An NSX Advanced Load Balancer DNS virtual service need not have a back-end DNS server pool.
If it does have a back-end pool, the NSX Advanced Load Balancer DNS Service Engines will only
load balance to it if the FQDN is not a subdomain of one of those configured in the Authoritative
Domain Names field. All are configured with Ends-With semantics.

The values in the Valid subdomains field are specified for validity checking and thus optional. If
not configured, all subdomains of acme.com will be processed and looked up in the DNS table.

Configuration Using the NSX Advanced Load Balancer CLI


In the below example, we see the before and after configurations of the System-DNS application
profile. Various applicationprofile:dns_service_profile subcommands are used to:

n Define the authoritative domain names.

n Enable NXDOMAIN responses. To do this, the value of error_response is changed


from DNS_ERROR_RESPONSE_NONE (the default) to DNS_ERROR_RESPONSE_ERROR. The
negative_caching_ttl is left unchanged from its 30-second default.

n Specify subdomains of the authoritative domain for which the DNS can provide an IP address
for validity checking (optional). These subdomains are for validity checking and thus optional.
If not configured, all subdomains will be processed and looked up in the DNS table.

[admin:10-10-25-20]: > configure applicationprofile System-DNS
Updating an existing object. Currently, the object is:
+-------------------------+---------------------------------------------------------+
| Field                   | Value                                                   |
+-------------------------+---------------------------------------------------------+
| uuid                    | applicationprofile-fdb6a5d6-bbf8-4f15-b851-f436b599992c |
| name                    | System-DNS                                              |
| type                    | APPLICATION_PROFILE_TYPE_DNS                            |
| dns_service_profile     |                                                         |
| num_dns_ip              | 1                                                       |
| ttl                     | 30 sec                                                  |
| error_response          | DNS_ERROR_RESPONSE_NONE                                 |
| edns                    | False                                                   |
| dns_over_tcp_enabled    | True                                                    |
| aaaa_empty_response     | True                                                    |
| negative_caching_ttl    | 30 sec                                                  |
| ecs_stripping_enabled   | True                                                    |
| preserve_client_ip      | False                                                   |
| tenant_ref              | admin                                                   |
+-------------------------+---------------------------------------------------------+
[admin:10-10-25-20]: applicationprofile> dns_service_profile
[admin:10-10-25-20]: applicationprofile:dns_service_profile> authoritative_domain_names acme.com
[admin:10-10-25-20]: applicationprofile:dns_service_profile> authoritative_domain_names coyote.com


[admin:10-10-25-20]: applicationprofile:dns_service_profile> error_response dns_error_response_error
Overwriting the previously entered value for error_response
[admin:10-10-25-20]: applicationprofile:dns_service_profile> domain_names sales.acme.com
[admin:10-10-25-20]: applicationprofile:dns_service_profile> domain_names docs.acme.com
[admin:10-10-25-20]: applicationprofile:dns_service_profile> domain_names support.acme.com
[admin:10-10-25-20]: applicationprofile:dns_service_profile> save
[admin:10-10-25-20]: applicationprofile> save
+---------------------------------+---------------------------------------------------------+
| Field                           | Value                                                   |
+---------------------------------+---------------------------------------------------------+
| uuid                            | applicationprofile-fdb6a5d6-bbf8-4f15-b851-f436b599992c |
| name                            | System-DNS                                              |
| type                            | APPLICATION_PROFILE_TYPE_DNS                            |
| dns_service_profile             |                                                         |
| num_dns_ip                      | 1                                                       |
| ttl                             | 30 sec                                                  |
| error_response                  | DNS_ERROR_RESPONSE_ERROR                                |
| domain_names[1]                 | sales.acme.com                                          |
| domain_names[2]                 | docs.acme.com                                           |
| domain_names[3]                 | support.acme.com                                        |
| edns                            | False                                                   |
| dns_over_tcp_enabled            | True                                                    |
| aaaa_empty_response             | True                                                    |
| authoritative_domain_names[1]   | acme.com                                                |
| authoritative_domain_names[2]   | coyote.com                                              |
| negative_caching_ttl            | 30 sec                                                  |
| ecs_stripping_enabled           | True                                                    |
| preserve_client_ip              | False                                                   |
| tenant_ref                      | admin                                                   |
+---------------------------------+---------------------------------------------------------+
[admin:10-10-25-20]: >

Support for SOA rdata Queries


NSX Advanced Load Balancer supports SOA queries for configured authoritative domains,
and the customization of SOA fields MNAME and RNAME (see RFC 1035), which are
configured using the NSX Advanced Load Balancer CLI configuration sub-command
applicationprofile>dns_service_profile to supply two corresponding parameters:

name_server: The <domain-name> of the name server that was the original or primary source
of data for this zone. This field is used in SOA records pertaining to all domain names specified
as authoritative domain names. If not configured, domain name is used as name server in SOA
response.

admin_email: Email address of the administrator responsible for this zone. This field is used
in SOA records pertaining to all domain names specified as authoritative domain names. If not
configured, the default value hostmaster is used in SOA responses.


CLI Example
[admin:10-10-25-20]: applicationprofile> dns_service_profile
[admin:10-10-25-20]: applicationprofile:dns_service_profile> admin_email john_doe@acme.com
[admin:10-10-25-20]: applicationprofile:dns_service_profile> name_server roadrunner.com
[admin:10-10-25-20]: applicationprofile:dns_service_profile> save
[admin:10-10-25-20]: applicationprofile> save
+---------------------------------+---------------------------------------------------------+
| Field                           | Value                                                   |
+---------------------------------+---------------------------------------------------------+
| uuid                            | applicationprofile-fdb6a5d6-bbf8-4f15-b851-f436b599992c |
| name                            | System-DNS                                              |
| type                            | APPLICATION_PROFILE_TYPE_DNS                            |
| dns_service_profile             |                                                         |
| num_dns_ip                      | 1                                                       |
| ttl                             | 30 sec                                                  |
| error_response                  | DNS_ERROR_RESPONSE_ERROR                                |
| domain_names[1]                 | sales.acme.com                                          |
| domain_names[2]                 | docs.acme.com                                           |
| domain_names[3]                 | support.acme.com                                        |
| edns                            | False                                                   |
| dns_over_tcp_enabled            | True                                                    |
| aaaa_empty_response             | True                                                    |
| authoritative_domain_names[1]   | acme.com                                                |
| authoritative_domain_names[2]   | coyote.com                                              |
| negative_caching_ttl            | 30 sec                                                  |
| name_server                     | roadrunner.com                                          |
| admin_email                     | john_doe@acme.com                                       |
| ecs_stripping_enabled           | True                                                    |
| preserve_client_ip              | False                                                   |
| tenant_ref                      | admin                                                   |
+---------------------------------+---------------------------------------------------------+
[admin:10-10-25-20]: >

When a SOA request is made, the SOA response is sent in the answer section. For non-existent
records of domains for which the NSX Advanced Load Balancer is the authority, the response is
sent in the authority section.

Adding Custom A Records to an NSX Advanced Load Balancer DNS Virtual Service

NSX Advanced Load Balancer DNS can host manual static DNS entries. For a given FQDN, the
user can configure an A, SRV, or CNAME record to be returned. This is accomplished through the
NSX Advanced Load Balancer CLI as documented in this article.

The configure virtualservice dns-vs command shows there is already an existing static
custom A record for FQDN ggg.avi.local.

: > configure virtualservice dns-vs


Updating an existing object. Currently, the object is:
+----------------------------------+--------------------------------------+
| Field | Value |


+----------------------------------+--------------------------------------+
| uuid | virtualservice-bc7c7fc6-583e-4335-8d33-ec4670771a85 |
| name | dns-vs |
| ip_address | 10.90.12.200 |
| enabled | True |
| services[1] | |
| port | 53 |
| enable_ssl | False |
| port_range_end | 53 |
| application_profile_ref | System-DNS |
| network_profile_ref | System-UDP-Per-Pkt |
| se_group_ref | Default-Group |
| east_west_placement | False |
| scaleout_ecmp | False |
| active_standby_se_tag | ACTIVE_STANDBY_SE_1 |
| flow_label_type | NO_LABEL |
| static_dns_records[1] | |
| fqdn[1] | ggg.avi.local |
| type | DNS_RECORD_A |
| ip_address[1] | |
| ip_address | 1.1.1.1 |
+----------------------------------+--------------------------------------+

Another similar custom A record can be added:

: virtualservice> static_dns_records
New object being created
: virtualservice:static_dns_records> fqdn abc.avi.local
: virtualservice:static_dns_records> ip_address
New object being created
: virtualservice:static_dns_records:ip_address> ip_address 11.11.11.11
: virtualservice:static_dns_records:ip_address> save
: virtualservice:static_dns_records> type dns_record_a
: virtualservice:static_dns_records> save
: virtualservice> save
+----------------------------------+--------------------------------------+
| Field | Value |
+----------------------------------+--------------------------------------+
| uuid | virtualservice-bc7c7fc6-583e-4335-8d33-ec4670771a85 |
| name | dns-vs |
| ip_address | 10.90.12.200 |
| enabled | True |
| services[1] | |
| port | 53 |
| enable_ssl | False |
| port_range_end | 53 |
| application_profile_ref | System-DNS |
| network_profile_ref | System-UDP-Per-Pkt |
| se_group_ref | Default-Group |
| east_west_placement | False |
| scaleout_ecmp | False |
| active_standby_se_tag | ACTIVE_STANDBY_SE_1 |
| flow_label_type | NO_LABEL |
| static_dns_records[1] | |
| fqdn[1] | ggg.avi.local |


| type | DNS_RECORD_A |
| ip_address[1] | |
| ip_address | 1.1.1.1 |
| static_dns_records[2] | |
| fqdn[1] | abc.avi.local |
| type | DNS_RECORD_A |
| ip_address[1] | |
| ip_address | 11.11.11.11 |
+----------------------------------+--------------------------------------+

The above command sequence will create a static entry for the FQDN abc.avi.local on virtual
service dns-vs. This can also be confirmed from the GUI under Applications > Virtual Services >
DNS Records as illustrated below.
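
Alternatively, a quick check from any client is a dig query against the DNS virtual service VIP (10.90.12.200 in this example); assuming the record above, it should return the configured address:

aviuser@host:~$ dig @10.90.12.200 abc.avi.local A +short
11.11.11.11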

Clickjacking Protection
Clickjacking is a malicious technique of tricking a user into clicking on something different from
what the user perceives, thus potentially revealing confidential information or allowing others
to take control of their computer while clicking on seemingly innocuous objects, including web
pages.


Clickjacking Protection in NSX Advanced Load Balancer


In NSX Advanced Load Balancer, clickjacking protection is enabled by default. Clickjacking
protection can be disabled if required. For example, the Horizon integration with iframes does
not work with the option enabled. You can disable the option by logging into the Controller CLI
and entering the commands shown below:

$> shell
Login: admin
Password:

: > configure systemconfiguration


: systemconfiguration> portal_configuration
: systemconfiguration:portal_configuration> no enable_clickjacking_protection
: systemconfiguration:portal_configuration> save
: systemconfiguration> save
: > exit
$>

Selective Disabling of Clickjacking Protection


Clickjacking comes in many forms. One example is when a site maliciously embeds an
unsuspecting site within an iframe, effectively showing the child site through its own. Preventing
this is straightforward with a few response headers on the server. However, more complex
environments sometimes require iframing to be allowed selectively, but not always.

The following DataScript selectively determines if the referring site, determined by the referer
header, is allowed to embed this site within an iframe. The list of allowed referers is maintained
within a separate string group, which allows for an extensive, REST API updatable list without
directly modifying the rule with every update.

The following example involves creating a string group, then creating the DataScript which
references the string group:

String Group: Allowed-Referer

http://www.avinetworks.com/

https://avinetworks.com/docs/

https://avinetworks.github.com

https://support.avinetworks.com

DataScript

-- Add to the HTTP Response event


var = avi.http.get_header("referer", avi.HTTP_REQUEST)
if var then
-- The following line strips off the path from the hostname
name = string.match(var, "[https?://]*[^/]+" )
val, match = avi.stringgroup.equals("Allowed-Referer", name)
end
if match then

-- The referring site is allowed to embed this site within an iframe
avi.http.replace_header("X-Frame-Options", "ALLOW-FROM "..name)
avi.http.replace_header("Content-Security-Policy", "frame-ancestors " .. name)
else
-- The site may not be iframed
avi.http.replace_header("X-Frame-Options", "DENY")
avi.http.replace_header("Content-Security-Policy", "frame-ancestors 'none'")
end
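
To spot-check the DataScript, a request whose Referer header matches an entry in the Allowed-Referer string group should return the permissive headers, while other requests should see DENY. The virtual service hostname below is a placeholder:

$ curl -sI -H "Referer: https://avinetworks.com/docs/" https://vs.example.com/ | grep -iE 'x-frame-options|content-security-policy'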

DNS Queries Over TCP


NSX Advanced Load Balancer supports DNS queries over both UDP and TCP protocols. DNS-
over-TCP implementation requirements are described in RFC 7766.

One DNS Query per TCP Connection


NSX Advanced Load Balancer processes only one DNS query per TCP connection. It does not
support DNS query pipelining as described in the RFC 7766. If multiple DNS queries are sent over
the same TCP connection, NSX Advanced Load Balancer will generate the response only for the
first DNS query and ignore the remaining queries. If the DNS queries were meant for pass through
to upstream DNS servers, then only the first DNS query in the TCP connection is passed to the
upstream server, and the remaining queries are ignored.

NSX Advanced Load Balancer Initiated TCP Connection Close


When NSX Advanced Load Balancer responds to a DNS query in a TCP connection, it generates
a FIN towards the client to close the TCP connection. This is done to release memory resources
immediately rather than wait for the client to timeout waiting on the responses for the multiple
potential queries it sent.

Note If the multiple queries were passthrough to the upstream DNS server, then the TCP
connection between the client and NSX Advanced Load Balancer follows the regular connection
close process.

Other than DNS query pipelining, DNS queries over TCP get the same treatment as DNS queries
over UDP as far as DNS behavior is concerned. Note that DNS over TCP is not limited to the
512-byte response size that applies to DNS over UDP.
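
For example, a single query can be forced over TCP with the dig +tcp option; the VIP below is the example DNS virtual service address used earlier:

$ dig @10.90.12.200 ggg.avi.local A +tcp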

Adding DNS Records Independent of Virtual Service State


In some instances, it might be helpful to have the NSX Advanced Load Balancer DNS respond with
a virtual service’s virtual IP address independent of the state of that virtual service. This behavior is
supported through state-based-dns-registration, a new option associated with the cloud object in
which the VS is defined. This option is available only to users of the NSX Advanced Load Balancer
CLI or NSX Advanced Load Balancer REST API.


For backward compatibility, by default, the option is TRUE for all clouds. However, if set to FALSE
for a given cloud, NSX Advanced Load Balancer DNS lookups of virtual services within the cloud
will return IP addresses as soon as the virtual services become operational. Virtual services enter
the operational state when one of the following conditions are true:

n No pool has been defined, but a return page has been defined. The classic use case for this is
the return of a static “under construction” page by a virtual service still in its infancy.

n The virtual service is an NSX Advanced Load Balancer DNS, whether or not it has a back-end
server pool defined for it.

n A server pool has been associated with the virtual service.

Changing the 'state-based-dns-registration' Option


Below is a sequence of NSX Advanced Load Balancer shell commands that configure the option to
FALSE. It has been edited to highlight the most relevant command output.

[admin:10-130-150-30]: > configure cloud Default-Cloud


Updating an existing object. Currently, the object is:
+----------------------------------------+--------------------------------------------+
| Field | Value |
+----------------------------------------+--------------------------------------------+
| uuid | cloud-3e33a415-49c9-414d-b71e-8ec79289ae98 |
| name | Default-Cloud |
| . | . |
| . | . |
| . | . |
| state_based_dns_registration | True |
| tenant_ref | admin |
+----------------------------------------+--------------------------------------------+
[admin:10-130-150-30]: cloud> no state_based_dns_registration
+----------------------------------------+--------------------------------------------+
| Field | Value |
+----------------------------------------+--------------------------------------------+
| uuid | cloud-3e33a415-49c9-414d-b71e-8ec79289ae98 |
| name | Default-Cloud |
| . | . |
| . | . |
| . | . |
| state_based_dns_registration | False |
| tenant_ref | admin |
+----------------------------------------+--------------------------------------------+

Note Toggling the state-based-dns-registration option impacts virtual services that are
defined thereafter. It does not have a retroactive effect on virtual services that have already been
defined.


DNS TXT and MX Record


NSX Advanced Load Balancer supports text record (TXT) record and mail exchanger (MX) record.
This section discusses the steps to configure them.

DNS virtual service on NSX Advanced Load Balancer primarily implements the following
functionality:

n DNS Load Balancing

n Hosting Manual or Static DNS Entries

n Virtual Service IP Address DNS Hosting

n Hosting GSLB Service DNS Entries

NSX Advanced Load Balancer DNS can host manual static DNS entries. For a given FQDN, you
can configure an A, AAAA, SRV, CNAME, or NS record to be returned.

n TXT Record: This is used to store text-based information of the outside domain for the
configured domain. This is useful in identifying ownership of a domain.

n MX Record: This is used in mail delivery based on the configured domain. This is useful in
redirecting email requests to the mail servers for a specified domain.

Configuring DNS TXT Record


Login to NSX Advanced Load Balancer CLI and use the static_dns_records option from the
configure virtualservice mode to add a TXT record for the desired domain, as shown below:

[admin:controller]: > configure virtualservice VS-DNS


[admin:controller]: virtualservice> static_dns_records
New object being created
[admin:controller]: virtualservice:static_dns_records> fqdn txtrec.acme.com
[admin:controller]: virtualservice:static_dns_records> type dns_record_txt
[admin:controller]: virtualservice:static_dns_records> txt_records
New object being created
[admin:controller]: virtualservice:static_dns_records:txt_records> text_str
"favorite_protocol=DNS"
[admin:controller]: virtualservice:static_dns_records:txt_records> save
[admin:controller: virtualservice:static_dns_records> save
[admin:controller]: virtualservice> save

In the above instance, the text favorite_protocol=DNS is used as the DNS TXT record for the
domain txtrec.acme.com.

Configuring DNS TXT Record with A or MX record


A TXT record can be configured along with any other existing record (for example, an A record
or MX record) with the same FQDN.

[admin:controller]: > configure virtualservice VS-DNS


[admin:controller]: virtualservice> static_dns_records index 1

[admin:controller]: virtualservice:static_dns_records> txt_records
New object being created
[admin:controller]: virtualservice:static_dns_records:txt_records> text_str
"favorite_protocol=DNS"
[admin:controller]: virtualservice:static_dns_records:txt_records> save
[admin:controller]: virtualservice:static_dns_records> save
[admin:controller]: virtualservice> save

The configured TXT record data is now returned in response to the corresponding DNS query.
Use the following dig command to verify the output.

aviuser@controller:~$ dig @10.140.135.22 txtrec.acme.com TXT


; <<>> DiG 9.10.3-P4-Ubuntu <<>> @10.140.135.22 txtrec.acme.com TXT
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3327
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;txtrec.acme.com. IN TXT
;; ANSWER SECTION:
txtrec.acme.com. 30 IN TXT "favorite_protocol=DNS"
;; Query time: 2 msec
;; SERVER: 10.140.135.22#53(10.140.135.22)
;; WHEN: Tue Feb 25 10:42:59 UTC 2020
;; MSG SIZE rcvd: 66

Configuring DNS MX Record


For the MX record, a static DNS entry of type mx_records is added to redirect email requests to
the designated mail server. The host (m1.acme.com) used in the below example is the FQDN of
the designated mail server.

[admin:controller]: > configure virtualservice VS-DNS


[admin:controller]: virtualservice> static_dns_records
New object being created
[admin:controller]: virtualservice:static_dns_records> fqdn txtrec.acme.com
[admin:controller]: virtualservice:static_dns_records> type dns_record_mx
[admin:controller]: virtualservice:static_dns_records> mx_records
New object being created
[admin:controller]: virtualservice:static_dns_records:mx_records> host m1.acme.com
[admin:controller]: virtualservice:static_dns_records:mx_records> priority 10
[admin:controller]: virtualservice:static_dns_records:mx_records> save


[admin:controller]: virtualservice:static_dns_records> save


[admin:controller]: virtualservice> save

Note The value for the priority field can vary from 0-65535.

Configuring MX record with Other Existing Record


Use the following configuration to add an MX record to an existing A record. In the below
example, the MX record for m.acme.com is added to the existing A record (acme.com).

[admin:controller]: > configure virtualservice VS-DNS


(INTEGER) Index of the Object (use where command to see index)
[admin:controller]: virtualservice> static_dns_records index 2
[admin:controller]: virtualservice:static_dns_records> where
Tenant: admin
Cloud: Default-Cloud
+-------------------------+---------------------------------+
| Field | Value |
+-------------------------+---------------------------------+
| fqdn[1] | acme.com |
| type | DNS_RECORD_A |
| ip_address[1] | |
| ip_address | 1.1.1.1 |
| num_records_in_response | 0 |
| algorithm | DNS_RECORD_RESPONSE_ROUND_ROBIN |
| wildcard_match | False |
| delegated | False |
+-------------------------+---------------------------------+
[admin:controller]: virtualservice:static_dns_records>
[admin:controller]: virtualservice:static_dns_records> mx_records
New object being created
[admin:controller]: virtualservice:static_dns_records:mx_records> host m.acme.com
[admin:controller]: virtualservice:static_dns_records:mx_records> priority 12
[admin:controller]: virtualservice:static_dns_records:mx_records> save
[admin:controller]: virtualservice:static_dns_records> save
[admin:controller]: virtualservice> save

DNS queries to the VIP are now served with the configured record data.

aviuser@controller:~$ dig @10.140.135.22 txtrec.acme.com MX

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @10.140.135.22 txtrec.acme.com MX


; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6518
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;txtrec.acme.com. IN MX


;; ANSWER SECTION:
txtrec.acme.com. 30 IN MX 10 m1.acme.com.

;; Query time: 1 msec


;; SERVER: 10.140.135.22#53(10.140.135.22)
;; WHEN: Tue Feb 25 09:40:59 UTC 2020
;; MSG SIZE rcvd: 72

aviuser@controller:~$

Add Servers to Pool by DNS


This section explains the steps to add servers to a pool based on the DNS domain name.

Servers can be added to a pool in the following ways:

n By IP address or IP address ranges

n By a list retrieved from the cloud orchestrator (select by Network)

n IP group

n DNS domain name

To add servers by domain name, follow the below:

n Configure valid DNS servers on the NSX Advanced Load Balancer Controller.

n In the web interface, navigate to Administration > Settings > DNS / NTP.

n Create or edit an existing pool, or create a new virtual service in basic mode. From the Servers
tab, select servers using the IP address, IP address range, or DNS name option. In the Server
IP address field, enter a valid domain name.

n If DNS cannot resolve the name, it is displayed in red. If DNS resolves the name to an IP
address, it will be listed below the field.

n If DNS resolves to multiple IP addresses, the list will be shown below though it is
potentially truncated.

n Click Add Server to add the server(s) to the pool.
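
For reference, a hedged CLI sketch of adding a server by its DNS name follows. The lookup_server_by_name and resolve_server_by_dns fields are assumptions based on the pool and server objects and can differ by release; web-pool and app1.example.com are placeholders:

[admin:controller]: > configure pool web-pool
[admin:controller]: pool> lookup_server_by_name
[admin:controller]: pool> servers
New object being created
[admin:controller]: pool:servers> hostname app1.example.com
[admin:controller]: pool:servers> resolve_server_by_dns
[admin:controller]: pool:servers> save
[admin:controller]: pool> save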

DNS Overrides Manual IP Changes


For servers added by domain name, manual changes to the resolved server’s IP addresses are
overwritten automatically.

Periodic Address Verification and Refresh


Because the IP addresses behind a hostname can change (and a single name may return multiple
IP addresses), the NSX Advanced Load Balancer Controller periodically refreshes the server IP
information by rechecking with DNS.


If the DNS server returns the IP address which is already assigned to the server then, there is no
change. However, the pool is updated in the following cases:

n If DNS resolution of a server hostname results in a different set of IP addresses than the set
received previously, the pool members corresponding to this hostname are updated with the
new set of IP addresses, and the older IP addresses are removed.

n If the DNS resolution times out or fails due to a temporary outage of the DNS server, the old
set of IP addresses is preserved.

n If DNS resolution results in an error (for example, non-existent domain or no answer from the
server), then the hostname is mapped to IP address “0.0.0.0.”

If a timeout or an error occurs, NSX Advanced Load Balancer attempts to resolve the hostname
again in the next resolution interval.

Changing the DNS Refresh Interval


The default DNS refresh time is 60 minutes. This can be changed using the CLI:

: > configure controllerproperties
: controllerproperties> dns_refresh_period 50
: controllerproperties> save



Service Discovery using NSX Advanced Load Balancer as IPAM and DNS Provider

This section explains the configuration of NSX Advanced Load Balancer's native IPAM and DNS
solution for providing service discovery.

The NSX Advanced Load Balancer IPAM/DNS profile consists of both IPAM and DNS related
configuration in a single bundle. It is recommended to have both IPAM and DNS configuration in
a single profile for ease of management. However, configuration of one can exclude the other if
different profiles for IPAM and DNS are preferred.

For instance, vantage-ipam can be created without configuring any DNS domains and vantage-
dns can be created by using only domain names and without any networks/subnets.

IPAM/DNS Support for Cloud Infrastructure


NSX Advanced Load Balancer can be configured to provide automatic IP address allocation for
virtual services and to provide authoritative DNS resolution for their virtual IP addresses.

                            Infoblox       NSX Advanced Load            Cloud-native
Cloud                                      Balancer (Internal)
Infrastructure              IPAM    DNS    IPAM    DNS                  IPAM             DNS

VMware vCenter              Yes     Yes    Yes     Yes                  NA               NA (Not Used)
OpenStack                   No      No     No      Yes                  Yes (Default)    NA (Not Used)
Amazon Web Services         No      No     No      Yes                  Yes (Default)    Yes (Default)
Google Cloud Platform       No      No     No      Yes                  Yes              No
Azure (as of 18.2.5)        No      No     No      Yes                  Yes (Default)    Yes (Default)
Linux Server (bare metal)   Yes     Yes    Yes     Yes                  Yes              No
No access cloud             Yes     Yes    Yes     Yes                  Yes              No

Note
n When creating virtual services in OpenStack or AWS cloud, a separate configuration for IPAM
is not needed/allowed, since the cloud configuration has support for IPAM natively in NSX
Advanced Load Balancer.

n Default means NSX Advanced Load Balancer accepts the cloud’s IPAM/DNS support
without additional action on the part of the NSX Advanced Load Balancer admin.

n NSX Advanced Load Balancer supports Route 53 when AWS is the cloud provider
configuration in NSX Advanced Load Balancer.

n Not used means, although the cloud supports DNS, NSX Advanced Load Balancer does
not use it.

n When creating a virtual service in Linux Server cloud in AWS/ GCP environment, you can use
the cloud-native IPAM solution of AWS/ GCP.

n NSX Advanced Load Balancer DNS service can be used with all these clouds.

General Configuration Workflow


Initial configuration is common for both IPAM and DNS. The configuration fields differ among the
infrastructure types and the provider (NSX Advanced Load Balancer, Infoblox, AWS, GCP, and
OpenStack). To configure IPAM and DNS support, follow the steps listed below:

1 Navigate to Templates > Profiles.

2 Click IPAM/DNS Profile.

3 Click Create and select the provider.

4 Fill in the displayed fields (detailed steps are provided in the sections below).

5 Click Save. The profile appears in the list.

6 Navigate to Infrastructure > Clouds and edit the cloud setting.

7 Select the IPAM and DNS providers from the drop-down list. Either one or both must be
selected based on the provider(s) required. For instance, in versions prior to 18.2.5, if Infoblox
is the IPAM provider, it must be the DNS provider as well.


8 For east-west virtual services in this cloud, you need to additionally select east-west IPAM and
DNS providers from the pull-down list. Either one or both can be selected. This is an optional
step.

9 Click Save.

This chapter includes the following topics:

n IPAM Configuration

n DNS Configuration

n Configuring the IPAM/DNS Profiles by Provider Type

IPAM Configuration
This section explains the steps to configure IPAM.

Prerequisites

NSX Advanced Load Balancer allocates IP addresses from a pool of IP addresses within the
subnets configured as described in the following procedure.

Procedure

1 Navigate to Infrastructure > Clouds, and click on the cloud name.

2 Select Network and click Create.

3 Specify a name for the network.

4 Under IP Address Management, click on the required option for DHCP Enabled and IPv6
Auto Configuration.

5 Add IPv4, IPv6 networks for IP address allocation:

a Click Add Subnet.

b Enter the subnet address in IP Subnet field, in the following format: 9.9.9.0/24

c Click Add Static IP Address Pool to specify the pool of IP addresses. Specify the range of
the pool under IP Address Pool. NSX Advanced Load Balancer will allocate IP addresses
from this pool. For instance, 9.9.9.100-9.9.9.200.

d Click Save.

e Repeat steps a through d for each network to be used for IP address allocation.


6 Click Save.

Note
n Virtual service creation will fail if the static IP address pool is empty or exhausted.

n For East West IPAM (applicable to container-based clouds, i.e., Mesos, OpenShift,
Kubernetes, Docker UCP, and Rancher), create another network with the appropriate
link-local subnet and a separate IPAM/DNS Profile.

Usable Networks
This feature enables assigning one or more of the networks created above to be default
usable networks, if no specific network/subnet is provided in the virtual service configuration.
An administrator can configure these networks, thus eliminating the need for a developer to
provide a specific network/subnet while creating a virtual service for the application.


DNS Configuration
This section explains how to configure DNS.

The following are the steps to configure DNS:

1 Navigate to Templates > IPAM/DNS Profiles and create a DNS profile by selecting the DNS
type in the Type drop-down list.

2 Add one or more DNS Service Domain names. NSX Advanced Load Balancer will be the
authoritative DNS server for these domains.

3 Configure a TTL value for all records for a particular domain, or leave the Default Record TTL
for all Domains field blank to accept the default TTL of 30 seconds.

4 Click Save.
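
The same profile can also be sketched via the CLI. The internal_profile and dns_service_domain field names below are assumptions based on the internal provider type and can differ by release; test.avi is the example domain used later in this section:

[admin:controller]: > configure ipamdnsproviderprofile vantage-dns
[admin:controller]: ipamdnsproviderprofile> type ipamdns_type_internal_dns
[admin:controller]: ipamdnsproviderprofile> internal_profile
[admin:controller]: ipamdnsproviderprofile:internal_profile> dns_service_domain
New object being created
[admin:controller]: ipamdnsproviderprofile:internal_profile:dns_service_domain> domain_name test.avi
[admin:controller]: ipamdnsproviderprofile:internal_profile:dns_service_domain> record_ttl 30
[admin:controller]: ipamdnsproviderprofile:internal_profile:dns_service_domain> save
[admin:controller]: ipamdnsproviderprofile:internal_profile> save
[admin:controller]: ipamdnsproviderprofile> save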

Using NSX Advanced Load Balancer DNS


After configuring a DNS profile (above) with a set of domains for which NSX Advanced Load
Balancer DNS will serve records, configure a DNS virtual service in NSX Advanced Load Balancer
for applications to discover each other. This serves two purposes: DNS high availability and
interoperability with other DNS providers in the same cluster (for instance, Mesos-DNS).
Setting up DNS Virtual Service


1 Create a DNS Pool with back-end servers consisting of all Controller IPs in the cluster, with the
server port as 53.

Note If the Controllers are running on Mesos nodes with Mesos DNS enabled, use port 8053.

2 Create a virtual service with the following attributes:

a Publicly-accessible virtual IP address.

b System-DNS as the Application Profile.

c Check Ignore network reachability constraints for the server pool.

d If the Controller is on an external network (requires routing for SE data traffic to reach the
Controller), then add a static route to the Controller network as shown below.

3 To add a static route (when the Controller is in an external network), navigate to Infrastructure > Cloud Resources > Routing. Click Create and add a Default-Gateway IP address for the cluster.

4 There are two ways to enable the NSX Advanced Load Balancer DNS service in your data center.

n Add the DNS VIP (“10.160.160.100” as configured above) to the nameservers list in /etc/resolv.conf on all nodes requiring service discovery. Create applications and verify that resolution works for the application’s FQDN by issuing dig app-name.domain anywhere in the cluster, as shown in the example after this list.

n Add DNS VIP in the corporate DNS server as the nameserver for serving domain names
configured in the DNS profile above. Any requests to mycompany-cloud will be redirected
to and serviced by the NSX Advanced Load Balancer DNS service.
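
The following is a minimal sketch of the first option on a Linux node. The VIP 10.160.160.100, the application name app1, and the domain test.avi are placeholder values; substitute the VIP and domains configured in your DNS profile.

# /etc/resolv.conf on each node that needs service discovery
nameserver 10.160.160.100

# Verify that the DNS virtual service answers for an application FQDN
dig app1.test.avi
dig app1.test.avi @10.160.160.100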

Configuring the IPAM/DNS Profiles by Provider Type


This section explains how IPAM and/or DNS profiles can be configured.

Using IPAM/DNS in a Virtual Service Configuration


The following examples are cloud-independent:

IPAM only: With IPAM in play, selecting the Auto Allocate checkbox causes the Network for VIP Address Allocation selection box to appear. A choice can be made from the list of displayed networks and subnets; for example, either ipam-nw1 or ipam-nw2 can be selected. An address for the VIP will be auto-allocated from the selected network (for example, ipam-nw1).


DNS only: With DNS in play, no list of networks is offered. Instead, one of several domains is
offered. By selecting .test.avi from the list and accepting the default prefix (vs) in the Fully
Qualified Domain Name field, the user is specifying vs.test.avi as the final FQDN.


IPAM and DNS: With both IPAM and DNS available, you can both specify a network from which to
auto-allocate a VIP address and the FQDN (vs.test.avi) to which it will be associated.


Note
n If a DNS profile is configured under a cloud where the virtual service is being created, then
the virtual service's IP cannot be determined from a fully qualified domain name; however, you
can enter an IP address or select the Auto Allocate checkbox.

n In the case of Infoblox, if there is a list of usable_subnets/usable_domains configured, then the drop-down will consist only of those entries. If no such configuration is found, NSX Advanced Load Balancer will display the entire list of available subnets/domains from Infoblox.

n VIP allocation through DHCP is not supported.

IPAM Provider (OpenStack)
This section explains IPAM Provider (OpenStack).

NSX Advanced Load Balancer communicates with OpenStack Neutron via APIs to provide IPAM
functionality. Currently, DNS services from OpenStack are not supported in this configuration.

This function provides support for cloud providers who host their virtual machines/instances on OpenStack (for example, Mesos nodes running on OpenStack instances). This configuration is not relevant if you are using an OpenStack cloud in NSX Advanced Load Balancer.

Configuring IPAM
The following are the steps to configure IPAM:

1 Navigate to Templates > Profiles > IPAM/DNS Profiles.

2 Click on Create to view the New IPAM/DNS Profile window.

3 Specify the Profile name.

4 Select the Type as OpenStack IPAM.

5 Specify OpenStack profile configuration details.

6 Click Save.

Security
This chapter includes the following topics:

n Overview of NSX Advanced Load Balancer Security

n SSL Certificates

n Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer System

n Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer

n OCSP Stapling in NSX Advanced Load Balancer

n Client SSL Certificate Validation

n Client-IP-based SSL Profiles

n SSL/TLS Profile

n SSL Client Cipher in Application Logs on NSX Advanced Load Balancer

n Server Name Indication

n True Client IP in L7 Security Features

n App Transport Security

Overview of NSX Advanced Load Balancer Security


This section is focused on the security of NSX Advanced Load Balancer Service Engines and
Controllers.

VMware strives to ensure the highest level of security, adhering to rigorous testing and validation
standards. NSX Advanced Load Balancer includes numerous security-related features to ensure
the integrity of the NSX Advanced Load Balancer system as well as the applications and services it
protects.


Industry Validation
Many of the largest and most trusted brands on the Internet have subjected NSX Advanced Load
Balancer to their own testing or testing by third-party companies such as Qualys and Rapid7.
This continuous testing ensures that, in addition to the proven success of NSX Advanced Load
Balancer in public and private networks, it has been thoroughly vetted by known industry security
leaders.

The following are a few examples of web UI and other attack vectors tested through external
penetration testing:

n SQL injection

n Cross site request forgery (CSRF)

n Cross site scripting (XSS)

n Arbitrary code execution

n Credential disclosure

n Clickjacking

n Improper cookie settings

n Password protection via PBKDF2

n Encryption of SSL certificate’s private keys

n Role based access control

n Strong output validation to guard against disclosure of sensitive fields such as passwords,
export of keys

Patching Security Issues

Despite the best attempts to proactively resolve any potential threat before the code release, it
is essential to ensure a solid plan of action if a security hole is discovered in customer deployed
software.

VMware strongly recommends that key administrators subscribe to the NSX Advanced Load Balancer mailing list. Security alerts are proactively sent to customers to notify them if an issue has been found, along with the potential mitigation required. Subscribe through the VMware customer portal. VMware also publishes responses to Common Vulnerabilities and Exposures (CVEs) of note, which include known vulnerabilities in NSX Advanced Load Balancer or software used by it, such as SSL and Linux. VMware may also publish CVE responses to issues that do not impact NSX Advanced Load Balancer to inform customers that they are protected. These CVEs are posted to the NSX Advanced Load Balancer Knowledge Base site but are not sent proactively through email alerts.

VMware Security Advisories document remediation for security vulnerabilities that are reported in VMware products. Sign up on the VMware Security Advisories page to receive new and updated advisories by email.


See also:

n Support Terms & Conditions

n CVEs

n Upgrade NSX Advanced Load Balancer Software

Hardening NSX Advanced Load Balancer


With a basic deployment of NSX Advanced Load Balancer, the system is secured and reasonably
locked down. However, many administrators may wish to customize the security posture or
tighten policies regarding who can access NSX Advanced Load Balancer. VMware strongly
recommends thoroughly reviewing the choices for securing NSX Advanced Load Balancer, which
is essential to guarantee its security in production environments where the potential exposure to
malicious attacks is more severe.

n User Account Management

n Protocol Ports Used by NSX Advanced Load Balancer for Management Communication

n NSX Advanced Load Balancer Service Engine to Controller Communication

n Clickjacking Protection

n Securing Management IP Access

SSL Certificates
NSX Advanced Load Balancer supports terminating client SSL and TLS connections at the virtual service, which requires it to send a certificate to clients that authenticates the site and establishes secure communications.

A virtual service that handles secure connections requires the following:

n SSL/TLS profile

n This determines the supported ciphers and versions.

n For example, a cipher string such as HIGH:!aNULL:!MD5:+SHA1 together with DHE key sizes such as 1024 or 2048 bits defines the supported ciphers and cipher strengths.

n See SSL/TLS Profile for more details on SSL/TLS profiles.

n SSL Certificate

n This is presented to clients connecting to the site.

n SSL certificates can be used to present to administrators connecting to the NSX Advanced
Load Balancer web interface or API, and also for the NSX Advanced Load Balancer
SE to present to servers when SE-to-server encryption is required with client (the SE)
authentication.


The SSL/TLS Certificates page on Templates > Security > SSL/TLS Certificates allows the import, export, and generation of new SSL certificates or certificate requests. Newly-created certificates may be either self-signed by NSX Advanced Load Balancer or created as a Certificate Signing Request (CSR) that must be sent to a trusted Certificate Authority (CA), which then generates a trusted certificate.

n Creating a self-signed certificate generates both the certificate and a corresponding private
key.

n Imported existing certificates are not valid until a matching key has been supplied.

n NSX Advanced Load Balancer supports PEM and PKCS12 formats for certificates.

SSL/TLS Certificates Page


Select Templates > Security > SSL/TLS Certificates to open the SSL/TLS Certificates page. This
tab includes the following functionalities:

n Search: Search across the list of objects.

n Create: Opens a drop-down list of certificate types from which to create a certificate.

n Edit: Opens the Edit Certificate window. Only incomplete certificates that do not have a corresponding key can be edited.

n Export: The down arrow icon exports a certificate and corresponding private key.

The table on this tab contains the following information for each certificate:

n Name: This displays the name of the certificate. Mousing over the name of the certificate displays any intermediate certificate that has been automatically associated with the certificate.

n Status: This shows the status of the certificate. Green indicates the certificate is good; yellow, orange, or red indicates the certificate is expiring soon or has already expired; gray indicates the certificate is incomplete.

n Common Name: This displays the fully qualified name of the site to which the certificate
applies. This entry must match the hostname the client will enter in their browser in order for
the site to be considered trusted.

n Issuer Name: This displays the name of the certificate authority.

n Algorithm: This displays the algorithm as either EC (Elliptic Curve) or RSA.

n Self Signed: This displays whether the certificate is self-signed by NSX Advanced Load
Balancer or signed by a Certificate Authority.

n Valid Until: This displays the date and time when the certificate expires.

Create Certificate
Navigate to Templates > Security > SSL/TLS Certificates. Click Create to open the New Certificate (SSL/TLS) window.


When creating a new certificate, you can select any of the following certificates:

n Root/Intermediate CA Certificate: This certificate is used to automatically create the certificate chain for application certificates. There are no configuration options other than importing the certificate through a file or pasting the text. The root/intermediate certificate will show up in a separate table at the bottom of the SSL Certificates page. It is recommended to import the root/intermediate certificate prior to importing an application certificate that relies on the intermediate for the chain.

n Application Certificate: This certificate is used for normal SSL termination and decryption on
NSX Advanced Load Balancer. This option is also used to import or create a client certificate
for NSX Advanced Load Balancer to present to a backend server when it needs to authenticate
itself.

n Controller Certificate: This certificate is used for the GUI and API for the Controller cluster. Once uploaded, select the certificate through Administration > Settings > Access Settings.

To create a new certificate, follow the steps below:

n Name: Specify the name of the certificate.

n Type: Select the type of certificate to create from the drop-down list. The following are the
options:

n Self Signed: Quickly create a test certificate that is signed by NSX Advanced Load
Balancer. Client browsers will display an error that the certificate is not trusted. If the HTTP
application profile has HTTP Strict Transport Security (HSTS) enabled, clients will not be
able to access a site with a self signed certificate.

n CSR: Create a valid certificate by first creating the certificate request. This request must be
sent to a certificate authority, which will send back a valid certificate that must be imported
back into NSX Advanced Load Balancer.

n Import: Import a completed certificate that was either received from a certificate authority
or exported from another server.

n Common Name: Specify the fully qualified name of the site, such as www.vmware.com. This
entry must match the hostname the client entered in the browser in order for the site to be
considered trusted.

n Input the information required for the type of certificate you are creating:

n Self-Signed Certificates

n CSR Certificates

n Importing Certificates

Note OCSP stapling can be enabled and configured using the UI. For more information, see Using OCSP Stapling through the UI.


Self-Signed Certificates
NSX Advanced Load Balancer can generate self-signed certificates. Client browsers do not trust
these certificates and will warn the user that the virtual service’s certificate is not part of a trust
chain.

Self-signed certificates are good for testing or environments where administrators control the
clients and can safely bypass the browser’s security alerts. Public websites should never use
self-signed certificates.

If you have selected the Self Signed option from the Type drop-down list in the New Certificate window, then specify the following details:

n Organization: Company or entity registering the certificate, such as NSX Advanced Load
Balancer Networks, Inc. (optional).

n Organization Unit: Group within the organization that is responsible for the certificate, such as
Development (optional).

n Country: Country in which the organization is located (optional).

n State Name or Province: State in which the organization is located (optional).

n Locality or City: City of the organization (optional).

n Email: The email contact for the certificate (optional).

n Algorithm: Select either EC (Elliptic Curve) or RSA. RSA is older and considered less secure
than EC, but is more compatible with a broader array of older browsers. EC is new, less
expensive computationally, and generally more secure; however, it is not yet accepted by
all clients. NSX Advanced Load Balancer allows a virtual service to be configured with two
certificates at a time, one each of RSA and EC. This will enable it to negotiate the optimal
algorithm with the client. If the client supports EC, then the NSX Advanced Load Balancer will
prefer this algorithm, which gives the benefit of natively supporting Perfect Forward Secrecy
for better security.

n Key Size: Select the level of encryption to be used for handshakes, as follows:

n 2048 Bit is recommended for RSA certificates.

n SECP256R1 is recommended for EC certificates.

Higher values may provide better encryption but increase the CPU resources required by both
NSX Advanced Load Balancer and the client.

n After specifying the necessary details, click Save.

CSR Certificates
The Certificate Signing Request (CSR) is the first of three steps involved in creating a valid SSL/TLS
certificate. The request contains the same parameters as a Self-Signed Certificate; however, NSX
Advanced Load Balancer does not sign the completed certificate. Rather, it must be signed by a
Certificate Authority that is trusted by client browsers.


If you have selected the CSR option from the Type drop-down list in the New Certificate window, then specify the following details:

n Organization: Company or entity registering the certificate, such as NSX Advanced Load
Balancer Networks.

n Organization Unit: Group within the organization that is responsible for the certificate, such as
Development.

n Country: Country in which the organization is located.

n State Name or Province: State in which the organization is located.

n Locality or City: City of the organization.

n Email: The email contact for the certificate.

n Algorithm: Select either EC (Elliptic Curve) or RSA. RSA is older and considered less secure
than EC, but is more compatible with a broader array of older browsers. EC is new, less
expensive computationally, and generally more secure; however, it is not yet accepted by
all clients. NSX Advanced Load Balancer allows a Virtual Service to be configured with two
certificates at a time, one each of RSA and EC. This allows NSX Advanced Load Balancer to
negotiate the optimal algorithm with the client. If the client supports EC, then NSX Advanced
Load Balancer prefers this algorithm, which gives the added benefit of natively supporting
Perfect Forward Secrecy for better security.

n Key Size: Select the level of encryption to be used for handshakes, as follows:

n 2048 Bit is recommended for RSA certificates.

n SECP256R1 is recommended for EC certificates.

Higher values provide better encryption but increase the CPU resources required by both NSX
Advanced Load Balancer and the client.

n After specifying the necessary details, click Save to generate the CSR.

n Forward the completed CSR to any trusted Certificate Authority (CA), such as Thawte or
Verisign, by selecting the Certificate Signing Request at the bottom left of the New Certificate
popup and then either copying and pasting it directly to the CA’s website or saving it to a file
for later use.

n Once the CA issues the completed certificate, you may either paste it or upload it into the Certificate field at the bottom right of the New Certificate window.

Note It can take several days for the CA to return the finished certificate. Meanwhile, you can close the New Certificate window to return to the SSL/TLS Certificates page. The new certificate will appear in the table with the notation Awaiting Certificate in the Valid Until column.

When you receive the completed certificate, click the Edit icon for the certificate to open the Edit Certificate window, then paste the certificate and click Save to complete the CSR certificate. NSX Advanced Load Balancer automatically pairs the completed certificate with the key that was generated along with the original CSR.
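
Optionally, the CSR and the certificate returned by the CA can be sanity-checked with OpenSSL from any workstation before the certificate is pasted back in. This is only a verification sketch; the file names app.csr and app.crt are placeholders.

# Confirm the subject and key details of the CSR that was sent to the CA
openssl req -in app.csr -noout -subject -text

# Confirm the subject, issuer, and validity dates of the certificate returned by the CA
openssl x509 -in app.crt -noout -subject -issuer -dates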


Import Certificates
You may directly import an existing PEM or PKCS12 SSL/TLS certificate into NSX Advanced Load
Balancer (such as from another server or load balancer). A certificate will have a corresponding
private key, which must also be imported.

Note NSX Advanced Load Balancer generates the key for self-signed or CSR certificates
automatically.

The following are the steps to import the certificate:

1 Navigate to Templates > Security > SSL/TLS Certificates.

2 Click CREATE and select the certificate type such as Application Certificate.

3 Click Type and select Import. Certificate or Private Key can be imported by copying-pasting or
uploading a file.

n PEM File – PEM files contain certificate or private key in plain-text Base64 encoded format.
Certificate and private key can be provided in separate PEM files or combined in a single PEM
file.

n If the certificate and private key are provided in a single PEM file, navigate to the Paste Key text box and add the private key by following any one of the methods listed below:

n Upload File: Click the Upload File button, select the PEM or PKCS12 file, then click the Validate button to parse the file. If the upload is successful, the Key field will be populated.

n Paste: Copy and paste a PEM key into the Key field. Be careful not to introduce extra characters in the text, which can occur when using some email clients or rich text editors. If you copy and paste the key and certificate together as one file, click the Validate button to parse the text and populate the Certificate field.

n If the certificate and private key are provided in two separate PEM files, follow the steps below to import each individually:

n Certificate - Add the certificate in the Paste Certificate text box by copying-pasting or file
upload, as described above.

n Key – Add the private key in the Paste Key field by copying-pasting or file upload.

n PKCS 12 File: PKCS12 files contain both the certificate and the key. PKCS12 is a binary format that cannot be copied and pasted, so it can only be uploaded. Navigate to the Paste Key field and follow the step below to import the PKCS #12 file (see the OpenSSL sketch after this list for converting a PKCS12 bundle into PEM files).

n Upload File - Click the Import File button, select the PKCS12 file, click the Validate button to
parse the file. If the upload is successful, both the Key and Certificate fields will be populated.

n Key Passphrase: You can also add and validate a key passphrase to encrypt the private key.

n Import: Select Import to finish adding the new certificate and key. The key will be embedded
with the certificate and treated as one object within the NSX Advanced Load Balancer UI.
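
If you prefer to import PEM files, or want to confirm that a certificate and private key belong together before importing them, the following OpenSSL sketch can be used outside NSX Advanced Load Balancer. The file names bundle.p12, cert.pem, and key.pem are placeholders.

# Extract the certificate and the unencrypted private key from a PKCS12 bundle
openssl pkcs12 -in bundle.p12 -clcerts -nokeys -out cert.pem
openssl pkcs12 -in bundle.p12 -nocerts -nodes -out key.pem

# The two digests match only if the certificate and the RSA key are a pair
openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in key.pem | openssl md5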


Certificate Authority
Certificates require a trusted chain of authority to be considered valid. If the certificate used is directly generated by a certificate authority that is known to all client browsers, then no certificate chain is required. However, if there are multiple levels required, an intermediate certificate may be necessary. Clients will often traverse the path indicated by the certificate to validate it on their own if no chain certificate is presented by a site, but this adds additional DNS lookups and time for the initial site load. The ideal scenario is to present the chain certificates along with the site certificate.

If a chain certificate, or rather a certificate for a certificate authority, is uploaded via Certificate > Import on the certificates page, it will be added to the Certificate Authority section. NSX Advanced Load Balancer will automatically build the certificate chain if it detects that a next link in the chain exists.

To validate a certificate that has been attached to a chain certificate, hover the cursor over the
certificate’s name in the SSL Certificates table at the top of the page. NSX Advanced Load
Balancer supports multiple chain paths. Each may share the same CA issuer, or they may be
chained to different issuers.
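
The same chain can be checked offline with OpenSSL before uploading the certificates, if desired. This is an optional sketch with placeholder file names root-ca.pem, intermediate-ca.pem, and site-cert.pem.

# Verify that the site certificate chains through the intermediate to the root CA
openssl verify -CAfile root-ca.pem -untrusted intermediate-ca.pem site-cert.pem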

SSL Profile
NSX Advanced Load Balancer supports the ability to terminate SSL connections between the
client and the virtual service, and to enable encryption between NSX Advanced Load Balancer
and the back-end servers. The SSL/TLS profile contains the list of accepted SSL versions and the
prioritized list of SSL ciphers.

Both an SSL/TLS profile and an SSL certificate must be assigned to the virtual service when configuring it to terminate client SSL/TLS connections. If you prefer to encrypt traffic between NSX Advanced Load Balancer and the servers, an SSL/TLS profile must be assigned to the pool. While creating a new virtual service via the basic mode, the default system SSL/TLS profile is used automatically.

SSL termination can be performed on any service port. However, browsers assume the default port of 443. The best practice is to configure a virtual service to accept both HTTP and HTTPS by creating a service on port 80, selecting the + icon to add an additional service port, and then setting the new service port to 443 with SSL enabled. A redirect from HTTP to HTTPS is generally preferable, which can be done through a policy or by using the System-HTTP-Secure application profile.

Each SSL/TLS profile contains default groupings of supported SSL ciphers and versions that may
be used with RSA or an Elliptic Curve certificate, or both. Ensure that any new SSL/TLS profile
you create, includes ciphers that are appropriate for the certificate type that will be used later. The
default SSL/TLS profiles included with NSX Advanced Load Balancer provides a broad range of
security. For instance, the Standard Profile works for typical deployments.

Creating a new SSL/TLS profile or using an existing profile entails various trade-offs between security, compatibility, and computational expense. For instance, increasing the list of accepted ciphers and SSL versions increases compatibility with clients while potentially lowering security.


SSL Profile Settings


Navigate to Templates > Profiles > SSL/TLS Profile.

n Search: Searches across the list of objects.

n Create: Opens the new application or system profile.

n Edit: Opens the existing profile to edit.

n Delete: An SSL/TLS profile can only be deleted if it is not currently assigned to a virtual service. An error message will indicate the virtual service referencing the profile. The default system profiles can be modified, but not deleted.

The table on this tab provides the following information for each SSL/TLS profile:

n Name: Name of the profile.

n Accepted Versions: SSL and TLS versions accepted by the profile.

Create an SSL Profile


The following are the steps to create or edit an SSL profile:

n Name: Specify a unique name for the SSL/TLS Profile.

n Type: Select the type of profile from the drop-down list.

n Accepted Versions: Select one or more SSL/TLS versions from the drop-down list to add to this profile. Chronologically, TLS v1.0 is the oldest supported and TLS v1.2 is the newest. SSL v3.0 is no longer supported as of NSX Advanced Load Balancer v15.2. In general with SSL, older versions have many known vulnerabilities while newer versions have many undiscovered vulnerabilities. As with any security, NSX Advanced Load Balancer recommends diligence to understand the dynamic nature of security and to ensure that NSX Advanced Load Balancer is always up to date. Some SSL ciphers depend on the specific versions of SSL or TLS supported. For more information, refer to OpenSSL.

n Accepted Ciphers: Enter the list of accepted ciphers in the Accepted Ciphers field. Each
cipher entered must conform to the cipher suite names listed at OpenSSL. Separate each
cipher with a colon. For example, AES:3DES means that this Profile will accept the AES and
3DES ciphers. When negotiating ciphers with the client, NSX Advanced Load Balancer will
prefer ciphers in the order listed. You may use an SSL/TLS profile with both an RSA and an
Elliptic Curve certificate. These two types of certificates can use different types of ciphers, so it
is important to incorporate ciphers for both types. Selecting only the most secure ciphers may
incur higher CPU load on NSX Advanced Load Balancer and may also reduce compatibility
with older browsers.
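
As an illustration only, and not a recommendation, a profile intended to serve both an RSA and an EC certificate could use a colon-separated cipher list such as the following; adjust the list to your own security and compatibility requirements.

ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256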


PKI Profile
The Public Key Infrastructure (PKI) profile allows configuration of Certificate Revocation Lists (CRLs) and the process for updating the lists. The PKI profile may be used to validate client and server certificates.

n Client Certificate Validation: NSX Advanced Load Balancer supports the ability to validate client access to an HTTPS site via client SSL certificates. Clients present their certificate when accessing a virtual service, and it is matched against a CRL. If the certificate is valid and the client is not on the list of revoked certificates, the client is allowed access to the HTTPS site.

Client certificate validation is enabled via the HTTP profile’s Authentication tab. The HTTP profile refers to the PKI profile for specifics on the Certificate Authority (CA) and the CRL. A PKI profile may be referenced by multiple HTTP profiles.

n Server Certificate Validation: Similar to validating a client certificate, NSX Advanced Load
Balancer can validate the certificate presented by a server, such as when an HTTPS health
monitor is sent to a server.

Server certificate validation uses the same PKI profile to validate the certificate presented.
Server certificate validation can be configured by enabling SSL within the desired pool, and then
specifying the PKI Profile.

PKI Profile Settings


Select Templates > Security > PKI Profile to open the PKI tab. This tab includes the following
functions:

n Search: Search across the list of objects.

n Create: Opens the New PKI Profile popup window.

n Edit: Opens the Edit PKI Profile popup window.

n Delete: A PKI profile may only be deleted if it is not currently assigned to an HTTP profile. An
error message will indicate the HTTP profile referencing the PKI profile.

The table on this tab provides the following information for each PKI Profile:

n Name: Name of the Profile.

n Certificate Authority: Denotes if a CA has been attached to the PKI Profile.

n Certificate Revocation List: Revocation lists (CRLs) that have been attached to the PKI Profile.

Create a PKI Profile


To create or edit a PKI Profile, do the following:


n Name: Enter a unique name for the PKI profile.

n Ignore Peer Chain: When set to true, the certificate validation will ignore any intermediate
certificates that might be presented. The presented certificate is only checked against the
final root certificate for revocation. When this option is disabled (by default), the certificate
must present a full chain which is traversed and validated, starting from the client or server
presented cert to the terminal root cert. Each intermediate cert must be validated and
matched against a CA cert included in the PKI profile.

n Certificate Authority: Add a new certificate from a trusted Certificate Authority. If more than one CA is included in the PKI profile, a client’s certificate needs to match only one of them to be valid. A client’s certificate must match a CA as the root of its chain. If the presented certificate has an intermediate chain, then each link in the chain must be included here. See Ignore Peer Chain (above) to ignore intermediate validation checking.

n Client Revocation List: The CRL allows invalidation of certificates, or more specifically the
certificate’s serial number. The revocation list may be updated by manually uploading a new
CRL, or by periodically downloading from a CRL server. If a client or server certificate is found
to be in the CRL, the SSL handshake will fail, with a resulting log created to provide further
information about the handshake.

n Server URL: Specify a server to download CRL updates. Access to this server will be done
from the Controller IP addresses, which means they will require firewall access to this
destination. The server may be an IP address, or an FQDN along with an HTTP path, such
as www.avinetworks.com/crl.


n Refresh Time: After the elapsed period of time, NSX Advanced Load Balancer will automatically download an updated version of the CRL. If a time is not specified, NSX Advanced Load Balancer will download a new CRL when the current CRL’s lifetime expires.

n Upload CRL File: Upload a CRL manually. Subsequent CRL updates can be done by
manually uploading new lists, or configuring the Server URL and Refresh Time to automate
the process.
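
The CRL server configured above can be checked from a workstation before it is referenced in the PKI profile. This is an optional sketch; the URL is a placeholder, and CRLs may be distributed in DER or PEM form, so adjust the -inform value accordingly.

# Download the CRL and display its issuer and next update time
curl -o example.crl http://crl.example.com/example.crl
openssl crl -inform DER -in example.crl -noout -issuer -nextupdate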

Certificate Management
To create a new certificate, follow the steps below:

1 From the NSX Advanced Load Balancer UI, navigate to Templates > Security > Certificate Management.

2 Click Create.

3 In the New Certificate Management screen, enter the Name of the profile.

4 In the Control Script field, select the required alert script configuration, as required.

Note Click the Create button in the drop-down to create a new Control Script, if required.

5 If the profile needs to pass some parameter values to the script, select Enable Custom
Parameters.

6 Enter the Name and Value for the parameters.

Note Re-upload the Control Script if the file has been modified after uploading, for the changes to take effect.

7 Click Save.


Authentication Profile
The Authentication profile (“auth profile”) allows clients to be authenticated into a virtual service via HTTP basic authentication.

The authentication profile is enabled via the HTTP basic authentication setting of a virtual service’s
Advanced Properties tab.

NSX Advanced Load Balancer also supports client authentication via SSL client certificates, which is configured in the HTTP profile’s Authentication section.

Auth Profile Settings


Select Templates > Security > Auth Profile to open the Auth tab. This tab includes the following
functions:

n Search: Search across the list of objects.

n Create: Opens the Create/Edit window.

n Edit: Opens the Create/Edit window.

n Delete: An Auth profile may only be deleted if it is not currently assigned to a virtual service or
in use by NSX Advanced Load Balancer for administrative authentication.

The table on this tab provides the following information for each auth profile:

n Name: Name of the profile.

n Type: The Type will be LDAP.

Create an Authentication Profile


To create or edit an authentication profile, do the following:


n Name: Enter a unique name.

n LDAP Servers: Configure one or more LDAP servers by adding their IP addresses.

n LDAP Port: The service port to use when communicating with the LDAP servers. This is
typically 389 for LDAP or 636 for LDAPS (SSL).

n Secure LDAP using TLS: Enable startTLS for secure communications with the LDAP servers.
This may require a service port change.

n Base DN: LDAP Directory Base Distinguished Name. This is used as the default for settings where a DN is required but not populated, such as the User or Group Search DN.


n Anonymous Bind: Minimal LDAP settings that are required to verify user authentication credentials by binding to the LDAP server. This option is useful when you do not have access to an administrator account on the LDAP server.

n User DN Pattern: The LDAP user DN pattern is used to bind an LDAP user after replacing the user token with the real username. The pattern should match the user record path in the LDAP server. For example, cn=-user-,ou=People,dc=myorg,dc=com is a pattern where all user records are expected to be found under the ou “People”. When searching LDAP for a specific user, the token is replaced with the username.

n User Token: An LDAP token that is replaced with the real user name in the user DN pattern. For example, if the User DN Pattern is configured as “cn=-user-,ou=People,dc=myorg,dc=com”, the token value should be -user-.

n User ID Attribute: LDAP user ID attribute is the login attribute that uniquely identifies a
single user record. The value of this attribute should match the username used at the login
prompt.

n User Attributes: LDAP user attributes to fetch on a successful user bind. These attributes
are used only for debugging purpose.

n Administrator Bind: The LDAP administrator credentials configured under the LDAP Directory Settings below are used to bind NSX Advanced Load Balancer as an administrator when querying LDAP for users or groups.

n Admin Bind DN: Full DN of LDAP administrator. Admin bind DN is used to bind to an
LDAP server. Administrators should have sufficient privileges to search for users under
user search DN or groups under group search DN.

n Admin Bind Password: Administrator password. Password expiration or change is not handled. The password is hidden from the REST API and CLI.

n User Search DN: LDAP user search DN is the root of search for a given user in the
LDAP directory. Only user records present in this LDAP directory sub-tree are allowed for
authentication. Base DN value is used if this value is not configured.

n Group Search DN: LDAP group search DN is the root of search for a given group in the
LDAP directory. Only matching groups present in this LDAP directory sub-tree will be
checked for user membership. Base DN value is used if this value is not configured.

n User Search Scope: LDAP user search scope defines how deep to search for the user
starting from user search DN. The options are search at base, search one level below or
search the entire subtree. The default option is to search one-level deep under user search
DN.

n Group Search Scope: LDAP group search scope defines how deep to search for the group
starting from the group search DN. The default value is the entire subtree.

n User ID Attribute: LDAP user ID attribute is the login attribute that uniquely identifies a
single user record. The value of this attribute should match the username used at the login
prompt.


n Group Member Attribute: LDAP group attribute that identifies each of the group
members. For example, member and memberUid are commonly used attributes.

n User Attributes: LDAP user attributes to fetch on a successful user bind. These attributes
are only for debugging.

n Insert HTTP Header for Client UserID: Insert an HTTP header into the client request before it is sent to the destination server. This field is used to name the header. The value will be the client’s User ID. This same User ID value is also used to populate the User ID field in the virtual service’s logs.

n Required User Group Membership: The user should be a member of these groups. Each group is identified by its DN, for example, cn=testgroup,ou=groups,dc=LDAP,dc=example,dc=com.

n Auth Credentials Cache Expiration: The max allowed length of time a client’s authentication is
cached.

n Group Member Attribute Is Full DN: Group member entries contain full DNs instead of only
User ID attribute values.
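
The LDAP server, bind credentials, search DN, and user ID attribute configured above can be sanity-checked with a standard ldapsearch query from any host that can reach the LDAP server. This is an optional sketch; the server address, bind DN, search base, and sample user are placeholders.

# Bind with the admin DN and confirm that a user record is found under the user search DN
ldapsearch -H ldap://10.10.10.10:389 -D "cn=admin,dc=myorg,dc=com" -W -b "ou=People,dc=myorg,dc=com" -s sub "(uid=jdoe)"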

Additional Information
Changing the NSX Advanced Load Balancer Controller’s Default Certificate

Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer System
Let’s Encrypt is a free, automated (automates both issuing and renewing the certificate) and
open certificate authority. This section elaborates the configuration summary for the Let’s Encrypt
integration with the NSX Advanced Load Balancer.

The SSL/TLS protocol helps keep an internet connection secure. It safeguards any sensitive data sent between two machines, systems, or devices, preventing intruders from reading and modifying the information transferred between them. An SSL/TLS certificate facilitates secure, encrypted connections between the two endpoints. However, there are some challenges around SSL/TLS certificates:

n Manually getting a certificate

n The cost associated with a certificate signed by CA

Let’s Encrypt resolves the above challenges. For more information, see Let’s Encrypt.

Working with Let's Encrypt


Before issuing a certificate, Let’s Encrypt servers validate that the requester controls the domain
names in that certificate using “challenges,” as defined by the ACME standard. Let’s Encrypt uses
the ACME protocol to verify that you control a given domain name and issue you a certificate.
There are different ways that the agent/client can prove control of the domain:

n Provisioning a DNS record under the domain (as per CSR’s common name)


n Provisioning an HTTP resource under a well-known URI

Note NSX Advanced Load Balancer supports HTTP-01 challenge for domain validation.

n Let’s Encrypt gives a token to the ACME client, and the ACME client puts a file on the web server at http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN>. That file contains the token, plus a thumbprint of the account key.

n Once the ACME client tells Let’s Encrypt that the file is ready, Let’s Encrypt tries retrieving it
(potentially multiple times from multiple vantage points).

n If validation checks get the right responses from the web server, the validation is considered
successful, and certificate will be issued.

Note Because Let’s Encrypt CA communicates on port 80 for the HTTP-01 challenge, port 80 must be opened on the firewall and Let’s Encrypt CA must be able to reach the user’s network (the network where the NSX Advanced Load Balancer system is deployed). Let’s Encrypt CA connects through the public network to the user’s NSX Advanced Load Balancer system on port 80.

If there is already a virtual service listening on port 80 on NSX Advanced Load Balancer, the script does not create a virtual service. Otherwise, the script automatically creates a virtual service listening on port 80 for the respective virtual service listening on port 443 or a custom SSL port.

For more information regarding domain validation, see the following links:
n Challenge Types

n Let’s Encrypt - How It Works

Configuring Let’s Encrypt


The following is the configuration summary for the Let’s Encrypt integration with NSX Advanced Load Balancer:

n Get the script that would assist in getting and renewing the certificate.

n Add the script as controller script on NSX Advanced Load Balancer System.

n Add a user account with a custom role (limited access only).

n Create certificate management profile on NSX Advanced Load Balancer System.

n Add virtual service on NSX Advanced Load Balancer System.

n Ensure that the FQDN resolves to a public IP and port 80 is open on the firewall.

n Create CSR and select the configured certificate management profile.

n Review the list of certificates; Let’s Encrypt CA pushes the signed certificate.

n Associate the certificate to the configured virtual service.


Configuring NSX Advanced Load Balancer System


Follow the steps below to configure Let’s Encrypt for the NSX Advanced Load Balancer:

1 Download the script available at letsencrypt_mgmt_profile. To download the file, click the Raw
option.

2 Copy the code.

3 Access the NSX Advanced Load Balancer Controller and navigate to Templates > Scripts >
ControlScripts and click CREATE.

4 Enter the Name and paste the script in the Import or Paste Control Script. Click Save.

5 Configure a custom role. Make sure that read and write access is enabled for Virtual Service, Application Profile, SSL/TLS Certificates, and Certificate Management Profile.

6 Add a user, enter all the required details and select the configured custom role.

7 Navigate to Templates > Security > Certificate Management and click CREATE.

8 Enter the Name and select the configured control script and select Enable Custom Script
Parameters and add custom parameters by clicking ADD.

Note It is recommended not to use the admin account. Always add a user account that has a custom role (with limited access).

9 Navigate to Templates > Security > SSL/TLS Certificates, click CREATE and select
Application Certificate.

10 Enter the Name and select the configured Certificate Management Profile and add all relevant
details. Click Save.

Note Make sure that a virtual service is configured with the Application Domain Name set to the Common Name (CN) of the certificate; the CN of the certificate must match the Application Domain Name of the virtual service. The FQDN (the CN of the certificate / the Application Domain Name of the virtual service) must resolve to an IP address and must be reachable.

After a few minutes, review the list of certificates. You can see the certificate pushed by Let’s Encrypt CA.

Associate the certificate with the configured virtual service.

Logs
To view the logs, enable non-significant logs on the configured virtual service and attempt to generate the certificate.


Automation of Certificate Renewal


The Controller properties include the configuration for ssl_certificate_expiry_warning_days. The default configuration is 30, 7, and 1 days, and it can be modified if required. As soon as certificate renewal is required per this configuration, the script is activated and takes care of the certificate renewal itself (it is completely automatic).
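
The warning thresholds can be inspected, and if needed adjusted, from the Controller CLI. The following is a rough sketch only; the exact syntax for modifying the repeated ssl_certificate_expiry_warning_days field can vary by release, so verify it against your version before use.

[admin:user-ctrl]: > show controllerproperties
[admin:user-ctrl]: > configure controllerproperties
[admin:user-ctrl]: controllerproperties> ssl_certificate_expiry_warning_days 45
[admin:user-ctrl]: controllerproperties> save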

Note Let’s Encrypt CA imposes a rate limit, so make sure that the renewal of the certificate does not hit the rate limit.

Integrating Let's Encrypt Certificate Authority with NSX Advanced Load Balancer
Let’s Encrypt is a free, automated (automates both issuing and renewing the certificate) and
open certificate authority. The following section describes integration of Let's Encrypt Certificate
Authority with NSX Advanced Load Balancer.

The SSL/TLS protocol helps keep an internet connection secure and safeguards any sensitive data sent between two machines, systems, or devices, preventing intruders from reading and modifying any information transferred between them. Though an SSL/TLS certificate ensures secure, encrypted connections between systems, the following are some challenges around it.

n Manually getting a certificate.

n The cost associated with a certificate signed by CA.

Let’s Encrypt resolves the above challenges. For more information, see Let’s Encrypt.

Working with Let’s Encrypt


Before issuing a certificate, Let’s Encrypt servers validate that the requester controls the domain
names in that certificate using challenges as defined by the ACME standard. Let’s Encrypt uses the
ACME protocol to verify that you control a given domain name and to issue you a certificate. There
are different ways that the agent or client can prove control of the domain.

n Provisioning a DNS record under the domain (as per CSR’s common name).

n Provisioning an HTTP resource under a well-known URI.

The NSX Advanced Load Balancer supports HTTP-01 challenge for domain validation.

HTTP-01 Challenge

n Let’s Encrypt gives a token to the ACME client that puts a file on the web server at http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN>. This file contains the token and a thumbprint of the account key.

n Once the ACME client tells Let’s Encrypt that the file is ready, Let’s Encrypt tries retrieving it
(potentially multiple times from multiple vantage points).


n If validation checks get the right responses from the web server, the validation is considered
successful, and a certificate is issued.
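
Once the ACME client has placed the challenge file, its reachability from the public internet can be checked with a plain HTTP request. This is an optional sketch; www.example.com and <TOKEN> are placeholders for your domain and the token issued by Let’s Encrypt.

# The challenge URL must be reachable on port 80 and return the token plus the key thumbprint
curl -i http://www.example.com/.well-known/acme-challenge/<TOKEN>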

Note
n As Let’s Encrypt CA communicates on port 80 for the HTTP-01 challenge, port 80 must be opened on the firewall and Let’s Encrypt CA must be able to reach the user network (the network where the NSX Advanced Load Balancer system is deployed). Let’s Encrypt CA connects through the public network to the user’s NSX Advanced Load Balancer system on port 80.

n The script automatically creates a virtual service on port 80 for the respective virtual service
listening on port 443/custom SSL Port, only if there is no virtual service already listening on
port 80.

For more information on domain validation, see the following links.

n Challenge Types

n Let’s Encrypt - How It Works

Configuring Let’s Encrypt


Following is the configuration summary for the Let’s Encrypt integration with the NSX Advanced
Load Balancer.

1 Get the script which would assist in getting and renewing the certificate.

2 Add the script as controller script on the NSX Advanced Load Balancer System.

3 Add a user account with a custom role (limited access only).

4 Create certificate management profile on NSX Advanced Load Balancer System.

5 Add virtual service on NSX Advanced Load Balancer System.

6 Make sure that the FQDN resolves to a public IP and port 80 is open on the firewall.

7 Create CSR and select the configured certificate management profile.

8 Review the list of certificates. Let’s Encrypt CA pushes the signed certificate.

9 Associate the certificate to the configured virtual service.

Configuring the NSX Advanced Load Balancer System


To configure Let’s Encrypt for NSX Advanced Load Balancer, follow the steps below.

1 Download the script available at letsencrypt_mgmt_profile. To download the file, click Raw
option. Copy the code available.

2 In the NSX Advanced Load Balancer, navigate to Templates > Scripts > ControlScripts and
click Create.

3 Add a meaningful name and paste the code copied in step 1 in the Import or Paste Control
Script field. Save the configuration.


4 Configure a custom role by navigating to Administration > Roles. Ensure that read and write
access is enabled for Virtual Service, Application Profile, SSL/TLS Certificates and Certificate
Management Profile, for this role.

5 Add a user, enter all the required details and select the configured custom role.


6 Navigate to Templates > Security > Certificate Management and click Create.

7 Enter a meaningful name, select the configured control script, enable custom parameters, and add the required custom parameters.


Note It is recommended not to use the admin account. Always add a user account that has a custom role (with limited access).

8 Navigate to Templates > Security > SSL/TLS Certificates, click Create and select Application Certificate.

9 Enter a meaningful name and common name, select the configured certificate management profile, add all relevant details, and save the configuration.


Ensure that a virtual service is configured with the Application Domain Name set to the Common Name (CN) of the certificate; the CN of the certificate must match the Application Domain Name of the virtual service. The FQDN (the CN of the certificate or the Application Domain Name of the virtual service) must resolve to an IP address and the domain must be reachable.

After a few minutes, review the list of certificates. You can see the certificate pushed by Let’s Encrypt CA. Associate the certificate with the configured virtual service.

Logs
To view the logs, enable non-significant logs for the configured virtual service and generate the certificate.


Automation of certificate renewal


The Controller properties include the configuration for ssl_certificate_expiry_warning_days. In the default configuration, the value for ssl_certificate_expiry_warning_days is 30 days, 7 days, and 1 day. This setting can be modified if required. When certificate renewal is required (based on the configured settings), the script is activated and automatically takes care of the certificate renewal.

Note Let’s Encrypt CA imposes a rate limit. So ensure that the renewal of the certificate does not
hit the rate limit.

Additional Information
n For more details regarding rate limit, see Rate Limits.

n For more details regarding SSL/TLS Certificate details, see SSL Certificates.

OCSP Stapling in NSX Advanced Load Balancer


Online certificate status protocol (OCSP) stapling is an extension of the OCSP protocol. The validity
of SSL/TLS certificates can be checked using OCSP stapling. This section discusses OCSP Stapling
in detail.

An SSL certificate can be revoked by the certificate authority (CA) before the scheduled expiration date. This implies that the certificate can no longer be trusted. This process of invalidating an issued SSL certificate before the expiry of its validity is called certificate revocation.

It is critical for browsers and clients to detect if a certificate has been revoked and display a security warning. Certificate revocations are checked either using the Certificate Revocation List (CRL) or Online Certificate Status Protocol (OCSP).
(CRL) or Online Certificate Status Protocol (OCSP).

A CRL is a large list of certificates that have been revoked by the CA. When a client sends a
request for an SSL connection to a virtual service, the NSX Advanced Load Balancer checks the
CAs and CRL(s) in the PKI profile of the virtual service to verify whether the client certificate is still
valid. To know more, see Full-chain CRL Checking for Client Certificate Validation.


Downloading and updating the long list of serial numbers that have been revoked can be cumbersome. In the OCSP method, the client queries the status of a single certificate instead of downloading and parsing an entire list. This results in less overhead on the client and the network. However, since OCSP requests are sent for each certificate, they can impose an overhead on the OCSP responder in the case of high traffic.

OCSP Stapling
OCSP itself is described in RFC 2560, and OCSP stapling is a newer method for checking revoked certificates that builds on it. In plain OCSP, when a certificate has to be verified, the browser issues an OCSP request with the serial number of the certificate to the OCSP responder. The OCSP responder looks up the CA database using the serial number, fetches the revocation status of the certificate corresponding to the serial number, and returns it through a signed OCSP response.

With OCSP stapling, the client does not have to communicate with the CA server each time to get the certificate status. NSX Advanced Load Balancer retrieves the information and serves it to the client on receiving a request.

In NSX Advanced Load Balancer, OCSP stapling can be enabled only on Application certificates and Root/Intermediate certificates. In the response, only the OCSP response of the Application certificate is stapled to the certificate in the TLS/SSL handshake. OCSP stapling can be enabled and configured through the NSX Advanced Load Balancer UI and CLI.

Parameters in OCSP Stapling


This section introduces the parameters that can be configured to use the OCSP stapling feature in NSX Advanced Load Balancer.

Time Parameters

n Frequency Interval (CLI: ocsp_req_interval): Defines the time interval between OCSP requests.

n Response Timeout (CLI: ocsp_resp_timeout): Defines the time interval for which the Controller waits for the response from the CA. If there is no response from the responder, the failover mechanism is initiated.

n Fail Job Interval (CLI: ocsp_job_fail_interval): Use this option to schedule the OCSP job at smaller intervals when there is no response received within the Response Timeout.

n Max Tries (CLI: max_tries): Maximum number of times the failed OCSP jobs can be scheduled.


Responder URL Action

In the absence of a response from the OCSP server, a failover mechanism is initiated. One of the following URL actions can be selected:

n Failover: Select this method to first try reaching the URLs found in the AIA extension of the certificate (ocsp_responder_url_list_from_certs). If the system is unable to fetch a response from those URLs, it falls back to the user-configured responder URL list (responder_url_lists).

n Override: Select this method to strictly use the user-configured URLs (responder_url_lists) instead of the CA-configured URLs found in the AIA extension of the certificate.

Note If, for any reason, the OCSP request cannot be processed, OCSPErrorStatus tracks the error status, including failures in the OCSP workflow.

Using OCSP Stapling through the UI


n Configuring OCSP Stapling

OCSP stapling can be enabled through the NSX Advanced Load Balancer UI for Root/Intermediate
CA Certificates and Application Certificates.

Note
n OCSP stapling can be enabled only on Root/Intermediate certificates and Application
certificates, and not on controller certificates.

n In case of application certificates, OCSP stapling is currently supported in the CSR and import
modes. OCSP stapling cannot be enabled for self-signed certificates.

To enable OCSP stapling,

n From the NSX Advanced Load Balancer UI, navigate to Templates > Security > SSL/TLS
Certificates.

n Click on Create > Root/Intermediate CA Certificate.

n Enter a Name for the certificate.

n Import File or paste the details in the Upload or Paste Certificate File field.

n Select Enable OCSP stapling check box to enable the option.

n Enter a value between 60 and 31536000 as the Frequency Interval.

n Enter a value in seconds as the Response Timeout.

n Enter a value between 60-86400 seconds as the Fail Job Interval.


To understand more about the time parameters, see Time Parameters.

n Enter Max Tries to define the number of times the failed job gets rescheduled (at the Fail Job
Interval). After the maximum number of tries is exhausted, the job is scheduled at the regular
OCSP job interval (Frequency Interval).

n Under OCSP Responder URL List, enter the responder URL.

n Choose Failover or Override for Responder URL Action, to either failover or override the AIA
extension contained in the SSL/TLS certificate of the OCSP responder.

n Click Add under OCSP Responder URL List.

n Click Validate.
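
Once stapling is enabled, you can optionally verify from a client that the virtual service returns a
stapled OCSP response during the handshake. The following is a minimal sketch that wraps the
openssl s_client status request; the VIP address is a placeholder for your environment.

# ocsp_staple_check.py -- minimal sketch: ask the virtual service for a stapled OCSP response.
# The address below is a placeholder; replace it with your virtual service VIP or FQDN.
import subprocess

VIP = "vs1.example.com:443"

# "-status" makes openssl s_client send the TLS status_request extension and
# print any stapled OCSP response it receives.
result = subprocess.run(
    ["openssl", "s_client", "-connect", VIP, "-status"],
    input=b"", capture_output=True, timeout=30
)

output = result.stdout.decode(errors="replace")
if "OCSP Response Status: successful" in output:
    print("Stapled OCSP response received")
else:
    print("No stapled OCSP response found; check the certificate's OCSP configuration")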

Handling Revoked Certificates


n Events or alerts are raised whenever the SSL certificate status changes to Revoked or Issuer
Revoked.

n If a certificate is revoked, the status of the certificate is marked as Revoked in the NSX
Advanced Load Balancer UI.

n If a root/intermediate certificate is revoked, all certificates issued by the revoked root/


intermediate certificate are marked as Issuer Revoked. The Controller stops requesting OCSP
certificate status for these certificates.

n SSL score of all the certificates with status as Revoked or Issuer Revoked are marked 0.

n Virtual service faults are added to alert the users when the certificate is either Revoked or
Issuer Revoked.

OCSP Stapling During SSL Handshake


When the OCSP response is sent to the client, the stapling information is exchanged as part of the
SSL handshake.

Using OCSP Stapling through CLI - Enabling OCSP Stapling


OCSP stapling can be configured through the CLI using the enable_ocsp_stapling flag within the
certificate object, as shown in the following example.

[admin:user-ctrl]: > configure sslkeyandcertificate test-cert


[admin:user-ctrl]: sslkeyandcertificate> enable_ocsp_stapling
Overwriting the previously entered value for enable_ocsp_stapling
[admin:user-ctrl]: sslkeyandcertificate> ocsp_config
[admin:user-ctrl]: sslkeyandcertificate:ocsp_config> ocsp_req_interval 21600
Overwriting the previously entered value for ocsp_req_interval
[admin:user-ctrl]: sslkeyandcertificate:ocsp_config> ocsp_resp_timeout 60
[admin:user-ctrl]: sslkeyandcertificate:ocsp_config> url_action ocsp_responder_url_
ocsp_responder_url_failover   Used as a Failover URL to the AIA extension contained in the certificate.
ocsp_responder_url_override   URL configured is used instead of the URL contained in the AIA extension of the certificate.


[admin:user-ctrl]: sslkeyandcertificate:ocsp_config> url_action ocsp_responder_url_failover
[admin:user-ctrl]: sslkeyandcertificate:ocsp_config> responder_url_lists
[admin:user-ctrl]: sslkeyandcertificate:ocsp_config> responder_url_lists http://
ocsp2.example.com:8080/
[admin:user-ctrl]: sslkeyandcertificate:ocsp_config> failed_ocsp_jobs_retry_interval 30
Overwriting the previously entered value for failed_ocsp_jobs_retry_interval
[admin:user-ctrl]: sslkeyandcertificate:ocsp_config> save
[admin:user-ctrl]: sslkeyandcertificate> save

The following are the configuration details.

+----------------------------------+------------------------------------------------------------
| Field                            | Value
+----------------------------------+------------------------------------------------------------
| uuid                             | sslkeyandcertificate-380d9e69-4f04-4519-8151-c89ff2d7bb6f
| name                             | test-cert
| type                             | SSL_CERTIFICATE_TYPE_VIRTUALSERVICE
| certificate                      |
| version                          | 2
| serial_number                    | 15597070261980010830
| self_signed                      | True
| issuer                           |
| common_name                      | test.example.com
| email_address                    | usera@abc.com
| organization_unit                | L7
| organization                     | abc
| locality                         | Bangalore
| state                            | Karnataka
| country                          | IN
| distinguished_name               | C=IN, ST=Karnataka, L=Bangalore, O=VMware, OU=L7,
|                                  | CN=test.example.com, emailAddress=user@abc.com
| enable_ocsp_stapling             | True
| ocsp_config                      |
| ocsp_req_interval                | 21600 sec
| ocsp_resp_timeout                | 60 sec
| responder_url_lists[1]           | http://ocsp.example.com/
| url_action                       | OCSP_RESPONDER_URL_FAILOVER
| failed_ocsp_jobs_retry_interval  | 30 sec
| tenant_ref                       | admin
+----------------------------------+------------------------------------------------------------

Note If a successful OCSP response is received, the next_update value and the
ocsp_req_interval value are compared and the lesser value of the two is used to schedule the
next OCSP Request.
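
The scheduling behavior described above can be summarized with a small sketch. This is
illustrative pseudologic only, not the Controller's actual implementation; the parameter names
mirror the CLI fields described earlier.

# Illustrative sketch of how the next OCSP request could be scheduled,
# based on the behavior described above (not the actual Controller code).

def next_ocsp_run(now, ocsp_req_interval, failed_ocsp_jobs_retry_interval,
                  max_tries, response=None, failed_tries=0):
    if response is not None and response.get("status") == "successful":
        # On success, the sooner of next_update and the configured frequency interval wins.
        delay = min(response["next_update"] - now, ocsp_req_interval)
        return now + delay
    if failed_tries < max_tries:
        # Failed jobs are retried at the shorter fail-job interval.
        return now + failed_ocsp_jobs_retry_interval
    # After max_tries failures, fall back to the regular frequency interval.
    return now + ocsp_req_interval

# Example: a successful response whose next_update is 4 hours away, with a 6-hour interval.
print(next_ocsp_run(now=0, ocsp_req_interval=21600,
                    failed_ocsp_jobs_retry_interval=30, max_tries=3,
                    response={"status": "successful", "next_update": 14400}))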

Verifying the Certificate Status


If OCSP stapling is enabled for a certificate, the Controller looks for the OCSP URL under the
Authority Information Access (AIA) extension, in the certificate, and sends a request to the
identified URL. Both GET and POST HTTP methods are supported. The OCSP requests are first
sent using the POST method. In case of no response or occurrence of error, the GET method is
used.


On receiving the OCSP requests, the CA servers or responders respond with the certificate status.
The OCSP responses cannot be forged, as they are directly signed by the CA. The NSX Advanced
Load Balancer Controller verifies the signature of the OCSP response. If the response verification
fails, the response is dropped and failover mechanisms are triggered to send further requests.

The CA responds with one of the following certificate statuses.

n Good

A positive response is received to the status inquiry. So the certificate with the requested
certificate serial number is not revoked within the validity interval.

n Revoked

The certificate has been revoked, either temporarily or permanently.

n Unknown

The responder does not recognize the certificate being requested. This can be because the
request indicates an unrecognized issuer, that is not served by this responder.

Navigate to Templates > Security > SSL/TLS Certificates to view the status of SSL/TLS
certificates.
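
For troubleshooting, the same kind of status inquiry can be reproduced manually from any host
with OpenSSL. The sketch below assumes you have the server certificate and its issuer certificate
as local PEM files and that the responder URL is known; all names are placeholders.

# manual_ocsp_query.py -- reproduce an OCSP status inquiry by hand (illustrative only).
# cert.pem, issuer.pem and the responder URL are placeholders for your environment.
import subprocess

result = subprocess.run(
    [
        "openssl", "ocsp",
        "-issuer", "issuer.pem",        # CA certificate that issued the server certificate
        "-cert", "cert.pem",            # certificate whose revocation status is queried
        "-url", "http://ocsp.example.com",
        "-resp_text",                   # print the full, signed OCSP response
    ],
    capture_output=True, text=True, timeout=30
)

# openssl prints one of: "cert.pem: good", "cert.pem: revoked" or "cert.pem: unknown",
# matching the three statuses described above.
print(result.stdout)
print(result.stderr)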

Application Logs
Application logs are generated as significant logs in the following cases.

n Certificate Status is Revoked.

n Certificate status is Issuer Revoked.

n Certificate Status is Unavailable.

n OCSP response is stale.

To control the significant logs for the above scenarios, configure analytics profile as shown in the
following example.

[admin:controller-vmdc2]: > configure analyticsprofile System-Analytics-Profile
Updating an existing object. Currently, the object is:
+-------------------------------------------------+-------------------------------------------------------
| Field                                           | Value
+-------------------------------------------------+-------------------------------------------------------
| uuid                                            | analyticsprofile-1775513e-bbf5-47ce-a067-42237c91315d
| name                                            | System-Analytics-Profile
| tenant_ref                                      | admin
| exclude_revoked_ocsp_responses_as_error         | True
| exclude_stale_ocsp_responses_as_error           | True
| exclude_issuer_revoked_ocsp_responses_as_error  | True
| exclude_unavailable_ocsp_responses_as_error     | True
| hs_security_ocsp_revoked_score                  | 0.0
| enable_adaptive_config                          | True
+-------------------------------------------------+-------------------------------------------------------

[admin:controller-vmdc2]: analyticsprofile> no exclude_revoked_ocsp_responses_as_error
+-------------------------------------------------+-------------------------------------------------------
| Field                                           | Value
+-------------------------------------------------+-------------------------------------------------------
| uuid                                            | analyticsprofile-1775513e-bbf5-47ce-a067-42237c91315d
| name                                            | System-Analytics-Profile
| tenant_ref                                      | admin
| exclude_revoked_ocsp_responses_as_error         | False
| exclude_stale_ocsp_responses_as_error           | True
| exclude_issuer_revoked_ocsp_responses_as_error  | True
| exclude_unavailable_ocsp_responses_as_error     | True
| hs_security_ocsp_revoked_score                  | 0.0
| enable_adaptive_config                          | True
+-------------------------------------------------+-------------------------------------------------------

[admin:controller-vmdc2]: analyticsprofile> hs_security_ocsp_revoked_score 3.0
+-------------------------------------------------+-------------------------------------------------------
| Field                                           | Value
+-------------------------------------------------+-------------------------------------------------------
| uuid                                            | analyticsprofile-1775513e-bbf5-47ce-a067-42237c91315d
| name                                            | System-Analytics-Profile
| tenant_ref                                      | admin
| exclude_revoked_ocsp_responses_as_error         | False
| exclude_stale_ocsp_responses_as_error           | True
| exclude_issuer_revoked_ocsp_responses_as_error  | True
| exclude_unavailable_ocsp_responses_as_error     | True
| hs_security_ocsp_revoked_score                  | 3.0
| enable_adaptive_config                          | True
+-------------------------------------------------+-------------------------------------------------------

The following fields are configurable.

n exclude_revoked_ocsp_responses_as_error
n exclude_stale_ocsp_responses_as_error
n exclude_issuer_revoked_ocsp_responses_as_error
n exclude_unavailable_ocsp_responses_as_error
These fields are enabled by default. When set to True, the corresponding logs are excluded from
significant logs. To include the logs in significant logs, set the fields to False.

Also, hs_security_ocsp_revoked_score can be configured. By default, the score is set to 0.0


when the certificate or issuer certificate is revoked.
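
The same fields can also be updated through the REST API. The following is a rough sketch using
Python requests; the Controller address, credentials, and object UUID are placeholders, and the
exact authentication flow and PATCH payload format should be confirmed against your Controller's
API documentation.

# analytics_profile_update.py -- hedged sketch: toggle OCSP-related log fields over the REST API.
# Controller address, credentials and UUID below are placeholders.
import requests

CONTROLLER = "https://controller.example.com"
session = requests.Session()
session.verify = False  # lab only; use proper CA verification in production

# Authenticate and pick up the CSRF token used by the Controller API.
session.post(f"{CONTROLLER}/login", json={"username": "admin", "password": "password"})
session.headers.update({
    "X-CSRFToken": session.cookies.get("csrftoken", ""),
    "Referer": CONTROLLER,
})

uuid = "analyticsprofile-1775513e-bbf5-47ce-a067-42237c91315d"
payload = {
    "exclude_revoked_ocsp_responses_as_error": False,  # include revoked-OCSP logs as significant
    "hs_security_ocsp_revoked_score": 3.0,
}
resp = session.patch(f"{CONTROLLER}/api/analyticsprofile/{uuid}", json={"replace": payload})
print(resp.status_code)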

Risk and Risk Mitigation


OCSP stapling is effective because it offloads OCSP requests from the browser to the server.
However, it is optional. Browsers do not know whether a response is expected or not, and so they
use a soft-fail behavior. This can lead to a security compromise. To avoid this possibility, the
certificate extension called OCSP Must-Staple is used. With this label, the server communicates to
the browser that the certificate has to be served with a valid OCSP response, in the absence of
which the certificate is not accepted.

In the event of a security compromise, even if the attacker has the key, they must supply an OCSP
staple when using the certificate. If not, the browser rejects the certificate. If an OCSP staple is
included, the response would identify the certificate as revoked, and the browser will reject the
certificate. This mitigates the security issues of OCSP stapling.
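
Whether a certificate carries the OCSP Must-Staple marker can be checked by looking for the TLS
Feature (status_request) extension in the certificate. A minimal sketch, assuming the certificate is
available locally as cert.pem (a placeholder file name):

# must_staple_check.py -- look for the TLS Feature (status_request) extension in a certificate.
# cert.pem is a placeholder for a locally available PEM certificate.
import subprocess

text = subprocess.run(
    ["openssl", "x509", "-in", "cert.pem", "-noout", "-text"],
    capture_output=True, text=True
).stdout

# OpenSSL prints the must-staple marker as a "TLS Feature" extension containing "status_request".
if "TLS Feature" in text and "status_request" in text:
    print("Certificate requests OCSP Must-Staple")
else:
    print("No OCSP Must-Staple extension found")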


Caveat
OCSP Stapling v2, described in RFC 6961, defines a new extension, status_request_v2, that
enables the client to request the status of all certificates in the chain. Currently, in NSX Advanced
Load Balancer, multiple certificate status requests are not supported. When a client sends the
client hello with the “status_request_v2” extension, the NSX Advanced Load Balancer returns the
certificate status of only the application certificate directly attached to the virtual service.

Client SSL Certificate Validation


This article explains the application profiles and PKI profile configurations.

NSX Advanced Load Balancer can validate SSL certificates presented by clients against a trusted
certificate authority (CA) and a configured certificate revocation list (CRL). Certificate information
is passed to the server through various headers through additional options. For certificate
authentication, an HTTP application profile and an associated public key infrastructure (PKI) profile
have to be configured.

Starting with NSX Advanced Load Balancer release 18.2.3, this has been extended to L4 SSL/TLS
applications (via the NSX Advanced Load Balancer CLI).

HTTP Application Profile


To configure an HTTP application profile, follow the steps below:

1 Navigate to the Templates > Profiles > Application.

2 Click Create to create a new HTTP application profile with type as HTTP. For more
information, refer to Configuring HTTP Profile.

HTTP Headers
NSX Advanced Load Balancer optionally inserts the client’s certificate, or parts of it, into a new
HTTP header to be sent to the server. To insert multiple headers, the plus icon is used. These
inserted headers are in addition to any headers added or manipulated by the more granular HTTP
policies or DataScripts.

n HTTP Header Name : Name of the headers to be inserted into the client request that is sent to
the server.

n HTTP Header Value : Used with the HTTP Header Name field, this field is used to determine
the field of the client certificate to insert into the HTTP header sent to the server. Several
options are more general, such as the SSL Cipher, which lists the ciphers negotiated between
the client and NSX Advanced Load Balancer. These generic headers may be used for non-
client certificate connections by setting the Validation Type to Request.
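
On the server side, the inserted values arrive as ordinary request headers. The sketch below is a
minimal back end that prints them; the header names used (X-SSL-Client-Subject, X-SSL-Cipher)
are hypothetical examples of names an administrator might configure in the profile, not fixed
names.

# header_echo_backend.py -- minimal back end that prints client-certificate headers
# inserted by the load balancer. The header names used here are hypothetical; use the
# names configured in your HTTP application profile.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        subject = self.headers.get("X-SSL-Client-Subject", "<not present>")
        cipher = self.headers.get("X-SSL-Cipher", "<not present>")
        body = f"client cert subject: {subject}\nnegotiated cipher: {cipher}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()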


L4 SSL/TLS Application Profile


Starting with NSX Advanced Load Balancer version 18.2.3 release, client certificate verification on
L4 SSL/TLS applications are supported. Refer to the Configuring L4 SSL/TLS Profile section of the
How to Enable Client Certificate Authentication on NSX Advanced Load Balancer article.

Disabling Chunk Merging in HTTP Application Profile


To support clients requiring chunk boundaries to be maintained the way the server sent them,
a new Boolean parameter, enable_chunk_merge, has been introduced for use within the HTTP
application profile. It can be used to disable the chunk body merge when response buffer mode is
not configured.

Parameter Settings
The enable_chunk_merge parameter takes on one of two values:

n When set to True (the default), if the back-end server sends chunked HTTP responses, the
NSX Advanced Load Balancer SE merges chunks that are received together into a single
chunk before forwarding its response to the client. If the server is slow, the SE will not wait for
the server to send all the chunks.

n For example, if the server has seven chunks, but the SE only receives the first three chunks
when it is scheduled to process the response, it will merge them into one big chunk and
forward it to the client. Next time, if the SE has received all four of the remaining chunks,
it will merge them into one and forward it to the client. Chunk merging has been the
behavior of NSX Advanced Load Balancer from the beginning.

n When set to False, in case of a chunked HTTP response, if response buffer mode is not
configured, the NSX Advanced Load Balancer SE forwards the chunks received from the
server as is. In addition, a response body that is in chunked mode will not get cached. If
the cache is configured, the saved cache entry needs to be cleared. The effect on the
chunked framing can be observed from a client, as shown in the sketch below.
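
A simple way to see the difference is to read the raw response from the virtual service and look at
the chunk-size lines. The sketch below uses a plain socket so that the chunked framing is not
hidden by an HTTP library; the VIP and Host values are placeholders, and a plain HTTP (non-SSL)
service port is assumed.

# chunk_observer.py -- print the raw chunked framing returned by a virtual service.
# VIP and HOST are placeholders; this sketch assumes a plain HTTP (non-SSL) service port.
import socket

VIP, PORT, HOST = "10.0.0.10", 80, "app.example.com"

with socket.create_connection((VIP, PORT), timeout=10) as s:
    s.sendall(f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode())
    raw = b""
    while True:
        data = s.recv(4096)
        if not data:
            break
        raw += data

# With chunk merging enabled, chunks received together are coalesced into fewer, larger chunks;
# with it disabled, the server's original chunk boundaries are preserved.
print(raw.decode(errors="replace"))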

UI Configuration
The Enable Chunk Merge option appears under the General tab of the Application Profile editor,
as shown below.


CLI Data Structure for an HTTP Application Profile

[admin:controller]: > show applicationprofile applicationprofile-1


+--------------------------------------+---------------------------------------------------------
| Field                                | Value
+--------------------------------------+---------------------------------------------------------
| uuid                                 | applicationprofile-d9016ba3-cb99-474a-bcd2-3f459984002d
| name                                 | applicationprofile-1
| type                                 | APPLICATION_PROFILE_TYPE_HTTP
| http_profile                         |
| connection_multiplexing_enabled      | True
| xff_enabled                          | True
| xff_alternate_name                   | X-Forwarded-For
| ssl_everywhere_enabled               | False
| hsts_enabled                         | False
| hsts_max_age                         | 365
| secure_cookie_enabled                | False
| httponly_enabled                     | False
| http_to_https                        | False
| server_side_redirect_to_https        | False
| x_forwarded_proto_enabled            | False
| spdy_enabled                         | False
| spdy_fwd_proxy_mode                  | False
| post_accept_timeout                  | 30000 milliseconds
| client_header_timeout                | 10000 milliseconds
| client_body_timeout                  | 30000 milliseconds
| keepalive_timeout                    | 30000 milliseconds
| client_max_header_size               | 12 kb
| client_max_request_size              | 48 kb
| client_max_body_size                 | 0 kb
| max_rps_unknown_uri                  | 0
| max_rps_cip                          | 0
| max_rps_uri                          | 0
| max_rps_cip_uri                      | 0
| ssl_client_certificate_mode          | SSL_CLIENT_CERTIFICATE_NONE
| websockets_enabled                   | True
| max_rps_unknown_cip                  | 0
| max_bad_rps_cip                      | 0
| max_bad_rps_uri                      | 0
| max_bad_rps_cip_uri                  | 0
| keepalive_header                     | False
| use_app_keepalive_timeout            | False
| allow_dots_in_header_name            | False
| disable_keepalive_posts_msie6        | True
| enable_request_body_buffering        | False
| enable_fire_and_forget               | False
| max_response_headers_size            | 48 kb
| respond_with_100_continue            | True
| hsts_subdomains_enabled              | True
| enable_request_body_metrics          | False
| fwd_close_hdr_for_bound_connections  | True
| max_keepalive_requests               | 100
| disable_sni_hostname_check           | False
| reset_conn_http_on_ssl_port          | False
| http_upstream_buffer_size            | 0 kb
| enable_chunk_merge                   | False
| preserve_client_ip                   | False
| preserve_client_port                 | False
| tenant_ref                           | admin
+--------------------------------------+---------------------------------------------------------

PKI Profile
The PKI profile contains the configured certificate authorities and CRL. A PKI profile is necessary if
the Validation Type is set to Request or Required.

The PKI profile supports configuring and updating the client certificate revocation lists. The PKI
profile is used to validate client or server certificates.

1 Navigate to the Applications > Templates.

2 Select Security tab and click PKI Profile option.


For more information, refer to Create a PKI Profile .

n Client Certificate Validation: NSX Advanced Load Balancer validates client access to an
HTTPS virtual service via client SSL certificates. Clients present their certificate while
accessing the virtual service, and it is matched against the CRL. If the certificate is valid and
the client is not on the list of revoked certificates, the client is allowed to access the HTTPS
virtual service. Client certificate validation is enabled via the HTTP profile’s Authentication tab.
The HTTP profile references the PKI profile for specifics on the CA and the CRL. A single
PKI profile may be referenced by multiple profiles.

n Server Certificate Validation: NSX Advanced Load Balancer can validate the certificate
presented by a server, such as when a HTTPS health check is sent to a server. Server
certificate validation also uses a PKI profile to validate the certificate presented. Server
certificate validation can be configured by enabling SSL within the desired pool, and then
specifying the PKI Profile.

PKI Profile Settings


The PKI profile settings are explained below.

n Name : The unique name for the profile.

n Ignore Peer Chain : This option is disabled by default. When disabled, the certificate must
present a full chain which is traversed and validated, starting from the client or server
presented certificate to the terminal root certificate. If this option is enabled, NSX Advanced
Load Balancer will ignore any cert chain the peer/client is presenting. Instead, the root and
intermediate certs configured in the Certificate Authority section of the PKI profile are used to
verify trust of the client’s cert. Each intermediate certificate must be validated and matched
against a CA certificate included in the PKI profile.

n Host Header Check : If enabled, this option ensures that the virtual service’s VIP field, when
resolved using DNS, matches the domain name field of the certificate presented by a server
to NSX Advanced Load Balancer when back-end SSL is enabled. If the server’s certificate does
not match, the server is considered insecure and marked down.

n Enable CRL Check : If this option is selected, the client’s certificate is verified against the
certificate revocation list.

For more information, refer to Create a PKI Profile

Certificate Authority
Add a new certificate from a trusted Certificate Authority. If more than one CA is included in the
PKI profile, a client’s certificate must match one of them to be considered valid.

A client’s certificate must match the CA as the root of the chain. If the presented certificate has an
intermediate chain, each link in the chain must be included here. Enable Ignore Peer Chain to
ignore intermediate validation checking.
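
The trust evaluation described here is standard X.509 chain validation. For troubleshooting, an
equivalent check can be run by hand before loading certificates into a PKI profile; the file names
below are placeholders.

# chain_check.py -- verify a client certificate against a CA bundle, similar in spirit
# to the PKI profile validation described above (illustrative only; file names are placeholders).
import subprocess

result = subprocess.run(
    [
        "openssl", "verify",
        "-CAfile", "ca_bundle.pem",        # root plus intermediate CAs from the PKI profile
        "-untrusted", "client_chain.pem",  # intermediates presented by the client, if any
        "client_cert.pem",
    ],
    capture_output=True, text=True
)
print(result.stdout.strip())   # "client_cert.pem: OK" when the chain validates
print(result.stderr.strip())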


Certificate Revocation List


The CRL allows invalidation of certificates (serial number). The revocation list may be updated by
uploading a new CRL manually, or by downloading from a CRL server periodically. If a client or
server certificate is found to be in the CRL, the SSL handshake will fail, with a resulting log created
to provide further information about the handshake.

n Leaf Certificate CRL validation only : When enabled, NSX Advanced Load Balancer will only
validate the leaf certificate against the CRL. The leaf is the next certificate in the chain up from
the client certificate. A chain may consist of multiple certificates. To validate all certificates
against the CRL, disable this option. Disabling this option means you need to upload all the
CRLs issued by each certificate in the chain. Even if one CRL is missing, the validation process
will fail.

n Server URL: Specify a server from which CRL updates can be downloaded. Access to this
server will be done from the NSX Advanced Load Balancer Controller IP addresses, which
means they will require firewall access to this destination. The CRL server may be identified
by an IP address or a fully qualified domain name (FQDN) along with an HTTP path, such as
https://www.avinetworks.com/crl.

n Refresh Time : After the elapsed period of time, NSX Advanced Load Balancer will download
an updated version of the CRL automatically. If no time is specified, NSX Advanced Load
Balancer will download a new CRL at the current CRL’s lifetime expiration.

n Upload Certificate Revocation List File : Navigate to the CRL file to upload. Subsequent CRL
updates can be done by manually uploading newer lists, or configuring the Server URL and
Refresh Time to automate the process.

Physical Security for SSL Keys


A key component of security is ensuring data integrity at rest, or in this case, for stored SSL keys.

Locally Stored Keys


Private keys are never stored on an NSX Advanced Load Balancer Service Engine’s file system.
They are pushed down to the SEs from the NSX Advanced Load Balancer Controller and kept in
memory for establishing the SSL session with clients. If an SE is compromised or rebooted, all
configurations, including the private key and public certificate, are wiped. When the SE comes
back online, a Controller might repurpose the SE with a new (or the same) config or delete the SE,
depending on the circumstances.

The Controllers store the keys locally in a database in which sensitive information is encrypted.
The keys are also encrypted during backups, provided a passphrase is included during the backup
process. All sensitive fields (such as passwords and private keys) are encrypted before being
stored in the database using the following scheme:

n Encryption Algorithm : AES_256_CBC

n IV: 16-byte random data

n Key: random 32 bytes


User passwords are hashed using the PBKDF2 (Password-Based Key Derivation Function 2)
algorithm with a SHA256 hash. All other passwords (for example, cloud credentials) are also
encrypted using this method.
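
The scheme described above can be illustrated with a small sketch using standard Python
cryptography primitives. This is only an illustration of AES-256-CBC encryption with a random IV
and PBKDF2-SHA256 password hashing; it is not the Controller's actual code.

# at_rest_sketch.py -- illustration of the at-rest protections described above:
# AES-256-CBC with a random 16-byte IV for sensitive fields, PBKDF2-SHA256 for passwords.
# This is not the Controller implementation, only a sketch of the same primitives.
import os, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

def encrypt_field(plaintext: bytes, key: bytes) -> bytes:
    iv = os.urandom(16)                       # 16-byte random IV
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)

key = os.urandom(32)                          # random 32-byte key
print(encrypt_field(b"-----BEGIN PRIVATE KEY----- example", key).hex()[:64])
print(hash_password("example-password", os.urandom(16)).hex())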

As the Controllers store the system configuration, including the private SSL keys, it is critical to
ensure proper security. Numerous options exist to lock down the access levels of administrators,
ensure strong passwords, and limit administrative source IP address ranges.

For administrators having full access to the certificates and keys, an attempt to export a private
key will be noted in the Operations > Events > Config Audit log. Using role-based access, export
ability should be restricted to the fewest number of administrators possible.

Thales Luna (formerly SafeNet Luna) HSM & Externally Stored Keys
NSX Advanced Load Balancer supports external hardware security modules and certificate stores
to guarantee a higher level of physical security. The original key is stored on the external system,
with the public key available to NSX Advanced Load Balancer. It supports the following types of
external key stores:

n Thales nShield

n Gemalto (formerly SafeNet)

Layer 4 SSL Support


The NSX Advanced Load Balancer supports layer 4 SSL virtual services. Client-facing ports can be
configured either for SSL-termination or in-the-clear communication. For SSL termination of HTTP
protocol, use HTTP/HTTPs application profile. Though server side communication can be clear or
encrypted, it has to be encrypted if the front-end is clear.

Note The UI or the CLI can be used when client-facing ports are SSL-terminated. To make
client-ports communicate in the clear while server-side ports are SSL-encrypted, the CLI mode
must be used.

Client-Facing Ports are SSL-terminated


To apply and tune the client-facing port feature in the NSX Advanced Load Balancer UI:

n Navigate to the Virtual Service Basic or Advanced Setup wizards. Select type
SSL application. As shown in the following figure, click SSL for Application Type. Default
value for Port is 443 and can be changed. The required certificate can be self-signed or be one
of the other certs visible in the drop-down menu.


n As shown in the following figure, the default application profile - System-SSL-Application,


appears under the Application tab of Templates. The NSX Advanced Load Balancer
automatically associates it with SSL type applications, unless a change is made to the settings
of the virtual service.

n Edit the settings for the virtual service if the system-standard defaults for the application, i.e.
TCP/UDP, and SSL profiles need to be changed. See the following example.


n To enable the PROXY protocol for your layer 4 SSL VS, or to tune the TCP connection rate
limiter settings, use the application profile editor as shown in the following example.

Note You have the option to enable either version 1 or version 2 of the PROXY protocol.


Client-Facing Ports are In-the-Clear


Support for this feature is accessible through the NSX Advanced Load Balancer CLI only.

[admin:Ctrl-01]: virtualservice> services


New object being created
[admin:Ctrl-01]: virtualservice:services> port 9000
[admin:Ctrl-01]: virtualservice:services> no enable_ssl
+------------+-------+
| Field      | Value |
+------------+-------+
| port       | 9000  |
| enable_ssl | False |
+------------+-------+
[admin:Ctrl-01]: virtualservice:services> save
[admin:Ctrl-01]: virtualservice> save

EC versus RSA Certificate Priority


A virtual service may be configured with Elliptic Curve (EC) and RSA certificates to support clients
of each type.

When a virtual service is configured with both EC and RSA certificates, NSX Advanced Load
Balancer will prioritize the EC certificates.

n If a client supports ciphers from only one certificate type, NSX Advanced Load Balancer uses
that certificate type.

n If the client supports ciphers for both certificates and the virtual service is configured with both
certificates, then the EC certificate will be chosen.


The priority of ECC over RSA is not configurable. NSX Advanced Load Balancer prefers EC
over RSA due to EC’s significantly faster performance with handshake negotiation. On average,
processing for ECC is about four times less CPU-intensive than RSA.

EC also tends to provide significantly higher security. A 256-bit EC certificate (the minimum length
supported) is roughly equivalent to a 3k RSA cert. Additionally, EC cryptography enables Perfect
Forward Secrecy (PFS) with significantly less overhead.
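
When both certificate types are installed, you can see which one is served by restricting the
ciphers the client offers. The following is a minimal sketch around openssl s_client; the VIP
hostname is a placeholder.

# cert_priority_check.py -- see which certificate a dual-cert virtual service serves
# when the client offers only ECDSA or only RSA ciphers (illustrative; VIP is a placeholder).
import subprocess

VIP = "vs1.example.com:443"

def served_subject(cipher: str) -> str:
    out = subprocess.run(
        ["openssl", "s_client", "-connect", VIP, "-cipher", cipher],
        input=b"", capture_output=True, timeout=30
    ).stdout.decode(errors="replace")
    # The "subject=" line of the presented certificate appears in the s_client output.
    return next((l for l in out.splitlines() if l.startswith("subject=")), "no certificate")

print("ECDSA-only client:", served_subject("ECDHE-ECDSA-AES128-GCM-SHA256"))
print("RSA-only client:  ", served_subject("ECDHE-RSA-AES128-GCM-SHA256"))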

Client-IP-based SSL Profiles


To terminate the client SSL connections, both the SSL profile and SSL certificate must be assigned
to the virtual service. The NSX Advanced Load Balancer can accommodate a broader set of
security needs within a client community by associating multiple SSL profiles with a single virtual
service, and it can allow the Service Engines to choose based on the client’s IP address.

For more information about the basics of setting up an SSL/TLS profile, refer to the SSL/TLS
Profile article.

How It Works
At its simplest, an SSL/TLS virtual service must be configured with some base SSL profile. That
profile might be identical to the system default profile shipped with every NSX Advanced Load
Balancer release image, or it might be a custom-defined profile. However, the key point is that it must exist.
Optionally, to treat some of the client community in customized fashion, an authorized user may
define and associate one or more profile selectors with the virtual service. Their presence triggers
an algorithm within NSX Advanced Load Balancer that is based on the client’s IP address and may
cause the Service Engine to obey profile parameters other than those defined in the base SSL
profile.

[Figure: SSL/TLS clients connecting to a virtual service that is configured with a base SSL profile
and profile selectors 1 through n.]

Profile Selector Anatomy


A given virtual service may have several profile selectors. However, the below figure depicts only
one profile selector.

1 A client IP list containing:

a An IP group reference : points at one or more IP groups and collectively identifies all the
clients to which the SSL profile selector applies.

b A match criterion : governs whether presence in or absence from the list causes a client
to take on the selector’s SSL profile parameters.

2 An SSL profile reference (exactly one per selector) is an SSL profile with parameters such as
SSL/TLS version, SSL timeout, ciphers, and so on.

SSL profile selector

IP group reference IP group 1 IP group n


Client IP list
Match criterion “is in” or “is not in”

Profile name
SSL/TLS version
SSL profile reference SSL timeout
Cliphers
...

Algorithm
n If one or more profile selectors are associated with the virtual service, NSX Advanced Load
Balancer checks each of them and attempts to match the client’s IP address. Since the
selector list is ordered, it may yield different results depending on the sequence.

n If, after checking all the selectors, no SSL profile has been assigned to the client, the base SSL
profile is applied.

Configuration Using the NSX Advanced Load Balancer CLI


The below example adds an SSL profile selector to the pre-existing VS named vs-1.

The client IP list is the conjunction of pre-existing IP groups named Internal and Ip-grp-2. These
two and the ssl_profile_ref (named sslprofile-2 in this example) should be pre-configured
earlier according to the requirements of the traffic flow and SSL algorithms.

Note Some output lines have been removed for the sake of brevity.

[admin:10-160-3-76]: > configure virtualservice vs-1


Updating an existing object. Currently, the object is:

+------------------------------------+------------------------------------+
| Field | Value |
+------------------------------------+-----------------------------------+
| uuid | virtualservice-08ba76c3-faab-430d-86db-a4d9703effa4 |
| name | vs-1 |
| enabled | True |
| services[1] | |
| port | 80 |
| enable_ssl | False |
| port_range_end | 80 |
| services[2] | |
| port | 443 |
| enable_ssl | True |
| port_range_end | 443 |
| application_profile_ref | System-HTTP |
| network_profile_ref | System-TCP-Proxy |
| pool_ref | vs-1-pool |
| se_group_ref | Default-Group |
| network_security_policy_ref | vs-vs-1-Default-Cloud-ns |
| http_policies[1] | |
| index | 11 |
| http_policy_set_ref | vs-1-Default-Cloud-HTTP-Policy-Set-0 |
| ssl_key_and_certificate_refs[1] | System-Default-Cert |
| ssl_profile_ref | System-Standard |
.
.
.
| vip[1] | |
| vip_id | 1 |
| ip_address | 10.160.221.250 |
| enabled | True |
| auto_allocate_ip | False |
| auto_allocate_floating_ip | False |
| avi_allocated_vip | False |
| avi_allocated_fip | False |
| auto_allocate_ip_type | V4_ONLY |
| vsvip_ref | vsvip-vs-1-Default-Cloud |
| use_vip_as_snat | False |
| traffic_enabled | True |
| allow_invalid_client_cert | False |
+------------------------------------+-----------------------------------------------------+

[admin:10-160-3-76]: virtualservice> ssl_profile_selectors


New object being created
[admin:10-160-3-76]: virtualservice:ssl_profile_selectors> client_ip_list
[admin:10-160-3-76]: virtualservice:ssl_profile_selectors:client_ip_list> match_criteria is_in
[admin:10-160-3-76]: virtualservice:ssl_profile_selectors:client_ip_list> group_refs Internal
[admin:10-160-3-76]: virtualservice:ssl_profile_selectors:client_ip_list> group_refs Ip-grp-2
[admin:10-160-3-76]: virtualservice:ssl_profile_selectors:client_ip_list> save
[admin:10-160-3-76]: virtualservice:ssl_profile_selectors> ssl_profile_ref sslprofile-2
[admin:10-160-3-76]: virtualservice:ssl_profile_selectors> save
[admin:10-160-3-76]: virtualservice> save


+------------------------------------+-----------------------------------+
| Field | Value
+------------------------------------+---------------------------------+
| uuid | virtualservice-08ba76c3-faab-430d-86db-a4d9703effa4 |
| name | vs-1
| enabled | True
| services[1] |
| port | 80
| enable_ssl | False
| port_range_end | 80
| services[2] |
| port | 443
| enable_ssl | True
| port_range_end | 443
| application_profile_ref | System-HTTP
| network_profile_ref | System-TCP-Proxy
| pool_ref | vs-1-pool
| se_group_ref | Default-Group
| network_security_policy_ref | vs-vs-1-Default-Cloud-ns
| http_policies[1] |
| index | 11
| http_policy_set_ref | vs-1-Default-Cloud-HTTP-Policy-Set-0
| ssl_key_and_certificate_refs[1] | System-Default-Cert
| ssl_profile_ref | System-Standard
.
.
.
| vip[1] |
| vip_id | 1
| ip_address | 10.160.221.250
| enabled | True
| auto_allocate_ip | False
| auto_allocate_floating_ip | False
| avi_allocated_vip | False
| avi_allocated_fip | False
| auto_allocate_ip_type | V4_ONLY
| vsvip_ref | vsvip-vs-1-Default-Cloud
| use_vip_as_snat | False
| traffic_enabled | True
| allow_invalid_client_cert | False
| ssl_profile_selectors[1] |
| client_ip_list |
| match_criteria | IS_IN
| group_refs[1] | Internal
| group_refs[2] | Ip-grp-2
| ssl_profile_ref | sslprofile-2


+------------------------------------+------------------------------------+
[admin:10-160-3-76]: >

Note
1 A virtual service’s SSL profile selector client IP list does not (yet) support implicit IP
configurations. Please use group UUIDs.

2 An SSL profile selector configuration requires the virtual service to have at least one SSL-
enabled service port. Otherwise, it should be a child virtual service.

3 A child VS will not inherit its parent virtual service’s SSL profile selectors; just the parent’s
default SSL profile.

Additional Information
n DataScript: NSX Advanced Load Balancer SSL Client Cert Validation

SSL/TLS Profile
The NSX Advanced Load Balancer supports the ability to terminate SSL connections between the
client and the virtual service, and to enable encryption between NSX Advanced Load Balancer and
the back-end servers.

The Templates > Security > SSL/TLS Profile contains the list of accepted SSL versions and the
prioritized list of SSL ciphers. To terminate client SSL connections, both an SSL profile and an SSL
certificate must be assigned to the virtual service. To also encrypt traffic between NSX Advanced
Load Balancer and the servers, an SSL profile must be assigned to the pool. When creating a new
virtual service via the basic mode, the default system SSL profile is automatically used.

Each SSL profile contains default groupings of supported SSL ciphers and versions that may
be used with RSA or an elliptic curve certificates, or both. Ensure that any new profile created
includes ciphers that are appropriate for the certificate type that will be used. The default SSL
profile included with NSX Advanced Load Balancer is optimized for security, rather than just
prioritizing the fastest ciphers.


Creating a new SSL/TLS profile or using an existing profile entails various trade-offs between
security, compatibility, and computational expense. For example, increasing the list of accepted
ciphers and SSL versions increases the compatibility with clients, while also potentially lowering
security.

Note
n NSX Advanced Load Balancer can accommodate a broader set of security needs within a
client community by associating multiple SSL profiles with a single virtual service, and have the
Service Engines choose which to use based on the client’s IP address. For more information,
refer to the Client-IP-based SSL Profiles article.

n A virtual service created without an SSL profile defaults to the System-Standard-PFS SSL
profile. Selecting unsafe ciphers displays an error message.

SSL Profile Templates


To view the currently defined SSL and TLS profiles follow the below:

1 Navigate to Templates > Security.

2 Click SSL/TLS Profile tab.

The table provides the following information for each SSL/TLS profile:

n Name : Name of the profile.

n Accepted Ciphers : List of ciphers accepted by the profile, including the prioritized order.

n Accepted Versions : SSL and TLS versions accepted by the profile.

Create an SSL/TLS Profile


To create or edit an SSL profile follow the below:

n Click Create to see a window (as shown in the below screenshots). In this window, TLS 1.3 is
unchecked by default.

n Checking the TLS 1.3 option causes the Early Data option to appear.

n Checking the TLS 1.2 option enables the following ciphers.


UI Fields
This section explains UI Fields.

n SSL/TLS Name : Specify a unique name for the SSL/TLS profile.

n Type : Choose Application if the profile is to be associated with a virtual service, System if the
profile is to be associated with the Controller.

n Cipher : Ciphers may be chosen from the default List view or a String. The String view is for
compatibility with OpenSSL-formatted cipher strings. When using String view, NSX Advanced
Load Balancer does not provide an SSL rating, nor a score for the selected ciphers.

n SSL Rating : This is a simple rollup of the security, compatibility, and performance of the
ciphers chosen from the list. Often ciphers may have great performance but very low security.
The SSL rating attempts to provide some insight into the outcome of the selected ciphers.
NSX Advanced Load Balancer Networks may change the score of certain ciphers from time
to time, as new vulnerabilities are discovered. This does not impact or change an existing
NSX Advanced Load Balancer deployment, but it does mean the score for the profile, and
potentially the security penalty of a virtual service, may change to reflect the new information.

n Version : NSX Advanced Load Balancer supports versions SSL 3.0, TLS 1.0 and newer. The
older SSL 2.0 protocol is no longer supported. Starting with release 18.2.6, TLS 1.3 protocol is
supported. Users must select one or more of the three supported TLS 1.3 ciphers in the list of
ciphers or configure them in the Ciphersuites option under the String view.

n Send “close notify” alert : Gracefully inform the client of the closure of an SSL session. This is
similar to TCP doing a FIN/ACK rather than an RST.

n Prefer client cipher ordering : Off by default, set this to On if you prefer the client’s ordering.

n Enable SSL Session Reuse : On by default, this option persists a client’s SSL session across
TCP connections after the first occurs.

n SSL Session Expiration : Set the length of time in seconds before an SSL session expires.

n Ciphers : When negotiating ciphers with the client, NSX Advanced Load Balancer will give
preference to ciphers in the order listed. The default cipher list prioritizes elliptic curve with
PFS, followed by less secure, non-PFS and slow RSA-based ciphers. Enable, disable, and
reorder the ciphers via the List view. In the String view, manually enter the cipher strings
via the OpenSSL format, which is documented on the OpenSSL.org website. You may use
an SSL/TLS profile with both an RSA and an elliptic curve certificate. These two types of
certificates can use different types of ciphers, so it is important to incorporate ciphers for both
types in the profile if both types of certs may be used. As with all security, NSX Advanced Load
Balancer Networks recommends diligence to understand the dynamic nature of security and to
ensure that NSX Advanced Load Balancer is always up to date.

n Ciphersuites : This option exclusively configures TLS 1.3 protocol ciphers. Currently, NSX
Advanced Load Balancer supports the below:

n TLS_AES_128_GCM_SHA256

n TLS_AES_256_GCM_SHA384


n TLS_CHACHA20_POLY1305_SHA256

Note These ciphers will only work with the TLS 1.3 protocol. The old ciphersuites cannot be used
with the TLS 1.3 protocol.

n Early Data : This option enables TLS-v1.3-terminated applications to send application data
(referred to here as early data or 0-RTT data) without having to first wait for the TLS
handshake to complete. This saves one full round-trip time between the client and server
before the client requests can be processed. SSL session reuse must be enabled to use the
Early Data option.

Note Starting with NSX Advanced Load Balancer 21.1, NSX Advanced Load Balancer supports
configuring Elliptic Curve Cryptography(ECC) Cipher Suites in SSL profile.
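
After enabling TLS 1.3 and selecting ciphersuites in the profile, a quick client-side check confirms
what is negotiated. The following is a minimal sketch; the VIP is a placeholder, and the client needs
OpenSSL 1.1.1 or later for TLS 1.3 support.

# tls13_check.py -- confirm the TLS 1.3 ciphersuite negotiated by a virtual service.
# The VIP below is a placeholder; the client needs OpenSSL 1.1.1+ for TLS 1.3 support.
import subprocess

VIP = "vs1.example.com:443"

out = subprocess.run(
    ["openssl", "s_client", "-connect", VIP, "-tls1_3",
     "-ciphersuites", "TLS_AES_128_GCM_SHA256"],
    input=b"", capture_output=True, timeout=30
).stdout.decode(errors="replace")

# s_client reports the negotiated protocol and cipher, for example:
#   New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256
for line in out.splitlines():
    if "Cipher is" in line or line.strip().startswith("Protocol"):
        print(line.strip())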

Elliptic Curve Cryptography


This section explains the steps to configure the EC named curve.

Elliptic Curve Cryptography is a public-key cryptosystem that offers equivalent security with
a smaller key size than currently prevalent cryptosystems. This conserves power, memory, and
bandwidth, and reduces the resultant computational cost.

Configuring EC Named Curve


The following named curves or groups are supported for virtual services:

n secp256r1 (23)

n secp384r1 (24)

n secp521r1 (25)

n x25519(29)

n x448(30)

To configure the EC named curve (TLS Supported Groups) in the SSL profile
configuration, the field ec_named_curve is introduced.

By default, this field is set to auto, as shown below:

show sslprofile System-Standard


This implies that the secp256r1 (23), secp384r1 (24), and secp521r1 (25) curve groups are supported by
default.

Configure x25519 and x448 as shown below:

configure sslprofile System-Standard

sslprofile> ec_named_curve P-256:X25519:X448


Overwriting the previously entered value for ec_named_curve

sslprofile>save

Signature Algorithms
This section describes the steps to configure signature algorithm.

The SSL client uses the “signature_algorithms” extension to indicate to the server which
signature/hash algorithm pairs should be used in digital signatures.


The extension_data field of this extension contains a supported_signature_algorithms value.

Supported Hash Algorithms:

n md5(1)

n sha1(2)

n sha224(3)

n sha256(4)

n sha384(5)

n sha512(6)

Supported Signature Algorithms:

n rsa

n dsa

n ecdsa

In NSX Advanced Load Balancer, the signature algorithms set by a client are used directly in the
supported signature algorithm in the client hello message.

The supported signature algorithms set by a server are not sent to the client but are used to
determine the set of shared signature algorithms and their order.

The client authentication signature algorithms set by a server are sent in a certificate request
message if client authentication is enabled. Otherwise, they are unused. Similarly, client
authentication signature algorithms set by a client are used to determine the set of client
authentication shared signature algorithms.

Signature algorithms will neither be advertised nor used if the security level prohibits them.

Configuring Signature Algorithm


The field signature_algorithm is introduced in the SSL profile configuration. By default, this field
is set to auto.

show sslprofile System-Standard


[admin]: > show sslprofile System-Standard
+-------------------------------+----------------------------------------------------------------------------------
| Field                         | Value
+-------------------------------+----------------------------------------------------------------------------------
| uuid                          | sslprofile-9052601e-0203-4702-81fd-221d0f4a3c5a
| name                          | System-Standard
| accepted_versions[1]          |
|   type                        | SSL_VERSION_TLS1
| accepted_versions[2]          |
|   type                        | SSL_VERSION_TLS1_1
| accepted_versions[3]          |
|   type                        | SSL_VERSION_TLS1_2
| accepted_ciphers              | ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:
|                               | ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:
|                               | ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA256:
|                               | ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:
|                               | AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
| --------------------Truncated Output----------------------      |
| prefer_client_cipher_ordering | False
| enable_ssl_session_reuse      | True
| ssl_session_timeout           | 86400 sec
| type                          | SSL_PROFILE_TYPE_APPLICATION
| ciphersuites                  | TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
| enable_early_data             | False
| ec_named_curve                | auto
| signature_algorithm           | auto
| tenant_ref                    | admin
+-------------------------------+----------------------------------------------------------------------------------

By default, NSX Advanced Load Balancer supports ECDSA+SHA256:RSA+SHA256 (when
signature_algorithm is set to auto).

Modify the signature algorithm as shown below:

> configure sslprofile System-Standard

sslprofile> signature_algorithm ECDSA+SHA256:RSA+SHA256:RSA-PSS+SHA256


Overwriting the previously entered value for signature_algorithm


sslprofile> save

SSL Client Cipher in Application Logs on NSX Advanced Load Balancer

NSX Advanced Load Balancer supports capturing the SSL client’s cipher details in the application
logs. NSX Advanced Load Balancer records the ciphers sent by a client in the client hello SSL
packet. The cipher details used to establish an SSL connection with a virtual service are available
in the application log.

No Shared Ciphers Error


When a client uses a cipher that is not supported, the virtual service closes the connection with
the error No Shared Cipher in the application log. The following are the reasons for the No Shared
Cipher error:

n The client sends a cipher(s) that is not configured in the virtual service’s SSL profile.

n The client sends a cipher(s) that does not match the certificate’s authentication type on the
virtual service.

n For example, the client sends ECDSA ciphers when the virtual service has only an RSA
certificate configured.

n The client sends a cipher(s) that does not match the SSL/TLS protocol.

n For example, the client sends the AES256-GCM-SHA384 TLS 1.2 cipher when the virtual service
does not have the TLS 1.2 protocol enabled (even though the SSL profile has this cipher
enabled).

When any one of these issues occurs, it is beneficial to see which ciphers the client sent as part
of the client hello. The necessary changes can then be made to the virtual service or the client
configuration to fix the problem.

A client sends anywhere between 180-200 ciphers in a client hello, and the server picks one of
them.

The cipher selection depends on various factors such as the ciphers and protocols enabled, the
type of certificate configured, and so on, on the virtual service. When the virtual service is unable
to select a single cipher, the SSL connection fails with the error SSL Error: No Shared Cipher. In
such a case, the NSX Advanced Load Balancer records all the ciphers that the client has sent in
the application log.

Accessing Client’s Cipher List


The client’s cipher list is accessible through a REST API request for the application log. The
identified and unidentified ciphers can be checked using the field client_cipher_list within the
application log.


A no shared ciphers SSL error can be fixed by making the necessary changes to the virtual
service or client configuration as per the ciphers sent by the client.
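
As noted above, the client’s cipher list is available through the application log REST API. The
following is a rough sketch of such a query using Python requests; the Controller address,
credentials, virtual service UUID, API path, and query parameters are placeholders or assumptions
and should be checked against your Controller’s analytics API documentation.

# cipher_log_query.py -- rough sketch: pull recent application logs for a virtual service and
# print the client_cipher_list field. Addresses, credentials, UUID and query parameters are
# placeholders/assumptions; verify them against your Controller's analytics API documentation.
import requests

CONTROLLER = "https://controller.example.com"
VS_UUID = "virtualservice-xxxxxxxx"   # placeholder virtual service UUID

session = requests.Session()
session.verify = False  # lab only
session.post(f"{CONTROLLER}/login", json={"username": "admin", "password": "password"})
session.headers.update({"X-CSRFToken": session.cookies.get("csrftoken", ""),
                        "Referer": CONTROLLER})

resp = session.get(f"{CONTROLLER}/api/analytics/logs",
                   params={"virtualservice": VS_UUID, "type": 1, "pagesize": 10})
for entry in resp.json().get("results", []):
    print(entry.get("ssl_cipher"), entry.get("client_cipher_list"))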

Configure Stronger SSL Cipher Strength


This section explains how to configure stronger SSL cipher strength.
SSL ciphers are defined by the Templates > Security > SSL/TLS Profile. Within a profile, there are
two modes for configuring ciphers, List view and String view.

For more information refer to Apple’s App Transport Security.

SSL Rating
Modifying or reordering the list will alter the associated SSL Rating in the top right corner of the
SSL / TLS Profile edit window. This provides insight into the encryption performance, security, and
client compatibility of the selected ciphers. This ranking is only made against the validated ciphers
from the List View mode.

List View
The default cipher list view shows common ciphers in order of priority. Enable or disable ciphers
via the checkbox, and reorder them via the up/down arrows or drag and drop. List view provides
a static list of validated ciphers. If alternate ciphers not listed are required, consider using String
View. The ciphers included in this list are considered reasonably strong. If a cipher is later deemed
to be insecure or less secure, its security score rating will drop to indicate it has fallen out of favor.


String View
The second cipher configuration mode allows accepted ciphers to be added as a string, similar to
the OpenSSL syntax for viewing and setting ciphers. For this mode, NSX Advanced Load Balancer
accepts all TLS 1.0 - 1.2, and Elliptic Curve ciphers from https://www.openssl.org/docs/man1.0.2/
apps/ciphers.html. In this mode, the administrator must determine if the enabled ciphers are
secure. Consider setting strong security by employing a known cipher suite, such as “HIGH”.

ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256

Server Name Indication


Server Name Indication, or SNI, is a method of virtual hosting multiple domain names for an SSL
enabled virtual IP. A single VIP is advertised for multiple virtual services. When a client connects
to the VIP, the NSX Advanced Load Balancer begins the SSL/TLS negotiation, and chooses a
virtual service or an SSL certificate, only when the client has requested the site by name through
the domain field of the TLS hello packet. If the requested domain name is configured on the virtual
IP, the appropriate certificate is returned to the client and the connection is bound to the proper
virtual service.

For additional references on SNI, see:

n Wildcard SNI Matching for Virtual Hosting

n Support for SNI Extension in TLS Handshakes to Pool Servers


n SNI Name-based Pool Selection in L4 Proxy

Configuration
NSX Advanced Load Balancer uses the concept of parent and child virtual services for SNI virtual
hosting. When the option for virtual hosting virtual service is selected on the create (through
advanced mode) or edit action, the virtual service participates in the virtual hosting. The virtual
hosting virtual service must be configured as either a parent or a child virtual service.

Parent Virtual Service

[Figure: A parent virtual service listening on VIP 10.1.1.1:443 hosts child virtual services for
aaa.avi.com, bbb.avi.com, and ccc.avi.com.]

The parent virtual service governs the networking properties used to negotiate TCP and SSL with
the client. It can also be a catch-all, if the domain name requested by the client does not exist or
does not match with one of the configured child virtual services.

Configure the following properties on the parent virtual service:

Network: The listener IP address, service port, network profile, and SSL profile. No networking
properties are configured on the child virtual services.

Pool: Optionally specify a pool for the parent virtual service. The pool will only be used if no child
virtual service matches a client’s requested domain name.

SSL Certificate: An SSL certificate may be configured which could either be a wildcard certificate
or a specific domain name. The parent’s SSL certificate is used if the client does not send an SNI
hostname TLS extension or if the TLS SNI hostname of the client does not match any of the child
VSs virtual service domain names. If an SSL certificate with specific domain name is returned to
the client, for example, in the case of sending a friendly error message, the client will receive an
SSL name mismatch message. So, it is advisable to use a wildcard on the parent.

The parent virtual service receives all new client TCP connection handshakes, which are reflected
in the statistics. Once a child virtual service is selected, the connection is internally handed off to a
child virtual service. So subsequent metrics such as packets, concurrent connections, throughput,
requests, logs and other statistics will only be recorded on the child virtual service. Similarly, the
child virtual service will not have logs for the initial TCP or SSL handshakes, such as the SSL
version mismatch errors, which are recorded at the parent virtual service.


The parent delegates to the child during the SNI phase of the TLS handshake.

If there is an SNI message received from the client and the SNI hostname matches the configured
hostnames for any of the child virtual services, the connection switches to the child virtual service
at that point. Also, all the SSL (certificate etc.) and L7 state (policies, DataScripts etc.) of the child
virtual service is applied to the HTTP request. Subsequently, the log ends up on the child virtual
service.

If the switch to the child virtual service did not happen, the connection/request is handled on the
parent virtual service. So the SSL and L7 state of the parent gets applied. The default certificate on
the parent is presented to the client. Once the request is received and parsed, you can close the
client-side TCP connection through no pool, pool with close action, or security policy. If you have
a wildcard certificate on the parent that covers all the subdomains of the child virtual services, you
can serve that from the parent and then close the connection as mentioned above.

Selection of a child virtual service is solely based on the FQDNs (Fully Qualified Domain Name)
configured on the SNI child. Ensure that there are no duplicates or overlaps among the child
FQDNs. Common Name or Subject Alternate Name in the virtual service certificate has no role to
play in the selection of children for SNI traffic.

The vh_domain_name of the SNI child virtual service has to be explicitly added to the parent virtual
service VIP’s list for DNS records to be populated correctly.

Once a child is selected (using the server name TLS extension of the client hello), its certificate is
served on the connection, and the Host header of subsequent HTTP requests must match the FQDN of
one of the children; otherwise, the connection fails with a virtual host error in the application log.

If the connection fails to select a child, it is served by the parent virtual service.

Child Virtual Service


The child virtual service does not have an IP address or service port. It instead points to a parent
virtual service, which must be created first. The domain name field is a fully qualified name
requested by the SNI-enabled client within the SSL handshake. The parent matches the client
request with the child’s configured domain name. It does not match against the configured SSL
certificate. The child can use a wildcard or domain specific certificate.
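As an illustration, the parent and child virtual services can also be created from the CLI. The following is a minimal sketch, assuming a parent named parent-vs (with its VIP, port, SSL profile, and certificate already configured) and a child named child-vs for aaa.avi.com. The type, vh_parent_vs_ref, and vh_domain_name fields are the relevant settings, but the exact enum spellings and prompts may differ by release, so confirm them with the CLI help.

[admin:controller]: > configure virtualservice parent-vs
[admin:controller]: virtualservice> type vs_type_vh_parent
[admin:controller]: virtualservice> save
[admin:controller]: > configure virtualservice child-vs
[admin:controller]: virtualservice> type vs_type_vh_child
[admin:controller]: virtualservice> vh_parent_vs_ref parent-vs
[admin:controller]: virtualservice> vh_domain_name aaa.avi.com
[admin:controller]: virtualservice> save

Remember to also add the child's vh_domain_name to the parent virtual service VIP's domain name list so that DNS records are populated correctly, as noted below.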

If no child matches the client request, the parent’s SSL certificate and pool are used. In cases
where you have a TLS SNI parent with a TLS/SSL profile that supports TLS versions 1, 1.1, and 1.2,
and a TLS child which has only TLS 1.2 configured, the child will continue to use TLS 1.2.

In such a setup where the parent and child virtual services use different SSL profiles, the flow for
SSL handshake is as follows:

1 TCP handshake -> Parent virtual service

2 Client Hello -> Parent virtual service. The client Hello contains the SNI. So the NSX Advanced
Load Balancer selects the child virtual service.

3 The SSL profile of the child virtual service is used to allow or deny the connection based on
the SSL/TLS version and to select a cipher.


4 Child virtual service responds with a server Hello that includes the cipher and the child
certificate.

Logs
The application logs option on the user interface displays SNI hostname information along with
other SSL-related information. The SNI information in the application logs provides more insight
about the incoming requests and also helps in troubleshooting various issues. When the child
virtual service sees an SSL connection with SNI header, the hostname in the SNI header is
recorded in the application log along with the SSL version, PFS, and cipher related information.
To check for SNI-enabled virtual service related logs, navigate to Applications > Virtual Service,
select the desired virtual service, and navigate to Logs.

Figure 7-1. [Screenshot: application log entry for an SNI-enabled virtual service, showing the SNI hostname along with the SSL version, PFS, and cipher information.]

Note When the Host header of a client request does not match the FQDN configured on the child
virtual service, the request would fail with an application log on the child instead of being proxied
using parent virtual service’s default Pool.

True Client IP in L7 Security Features


This section discusses the advantages of using True Client IP and its configuration.


A proxy identifies the client IP from the L3 header of the incoming connection. However, this is not
always the actual client IP address. When there are proxies between the actual client and NSX
Advanced Load Balancer, each intermediary proxy appends the source IP address of the incoming
connection to the “X-Forwarded-For” header and uses its own IP address as the source IP in the L3
header while forwarding the request to the actual destination.


The true client IP feature fetches the actual client IP address from the “X-Forwarded-For” header or
a user-defined header, records it in the logs, and allows policies such as HTTP Security and HTTP
Request policies to be configured based on the true client IP address.

Advantages of Using True Client IP


n You can log actual client IP address in the application logs at NSX Advanced Load Balancer.

n The actual client IP address can be shared with the actual server (NSX Advanced Load
Balancer can add the identified actual client IP as X-Forwarded-For, and the server can be
configured to parse it).

n You can configure HTTP policy, SSO policy, etc., based on the actual client IP address.

True Client IP in NSX Advanced Load Balancer


With the implementation of true client IP, the following are supported:

n Source IP is always the IP address from the IP header of the downstream connection
(incoming).

n Client IP is derived based on user configuration. It could be derived from the X-Forwarded-
For or a user specified header, or it could be the same as Source IP.

With true client IP, the behavior is as shown below:

n True Client IP disabled (default): Header = X-Forwarded-For (default), Direction = Left (default),
Index Count = 1 (default). Behaviour: Client IP = Source IP.

n True Client IP enabled with a user-defined header (for example, True-User-IP): Direction = Left
(default), Index Count = 1 (default). Behaviour: the Client IP is the IP fetched from the user-defined
header “True-User-IP”; if the header is not found in the request or has a formatting error, the Client
IP falls back to the layer 3 header. The Source IP is always taken from the layer 3 header.


For L4 applications, the Source IP and Client IP are always the same. For HTTP applications, they
can differ. By default, the feature is disabled. After enabling true client IP,
specify the header from which the client IP should be fetched.

If no header is defined, the client IP is fetched from the X-Forwarded-For header. The
specified header must contain a comma-separated list of IP addresses as its
value. If the header does not follow this format, it is ignored.

For example, the format (header value format) is X-Forwarded-For:


1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4

Currently, only one header can be configured to fetch the client IP.

Configure True Client IP


This section discusses the steps to configure True Client IP in NSX Advanced Load Balancer.

Enabling True Client IP


Enable the use_true_client_ip field for the desired custom HTTP profile.

1 Access the CLI by logging into the NSX Advanced Load Balancer Shell.

2 Configure the custom HTTP profile by using the following command:

configure applicationprofile <name of the custom http profile>

3 Enable True Client IP using http_profile use_true_client_ip.

Configuring the Parameters


Use the following parameters with the true_client_ip parameter:

n Headers (optional): the HTTP header from which the client IP is fetched. If not specified,
“X-Forwarded-For” is used by default.

n Direction (optional): the direction in which to count the IPs in the specified header value. By
default, the value is Left.

n Index_in_header (optional): the position, counted in the configured direction, within the specified
header's value. By default, the value is 1.

Define the parameters for True_Client_IP (header name, direction, and index in the header) as
shown below:

true_client_ip headers <name of the header> <direction> <index in the header>

Note The valid range for true client IP index is 1-1000.

After configuring the parameters as required, save the configuration.
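Putting the commands above together, the following is a minimal sketch of a complete CLI session. The profile name my-http-profile and the header name X-Real-Client-IP are illustrative only, and the exact sub-mode prompts and enum spellings (for example, for the direction value) may vary by release, so confirm them with the CLI help.

[admin:controller]: > configure applicationprofile my-http-profile
[admin:controller]: applicationprofile> http_profile
[admin:controller]: applicationprofile:http_profile> use_true_client_ip
[admin:controller]: applicationprofile:http_profile> true_client_ip
[admin:controller]: applicationprofile:http_profile:true_client_ip> headers X-Real-Client-IP
[admin:controller]: applicationprofile:http_profile:true_client_ip> direction left
[admin:controller]: applicationprofile:http_profile:true_client_ip> index_in_header 2
[admin:controller]: applicationprofile:http_profile:true_client_ip> save
[admin:controller]: applicationprofile:http_profile> save
[admin:controller]: applicationprofile> save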


Use cases
The following features can be configured to use actual client IP:

n HTTP Policies

n HTTP Security/Request/Response policy match based on client IP can be configured.

n DataScripts – Client IP based API, Rate limiting API

The following features are affected after enabling True Client IP:

n Application Logs

n Client IP (v4 and v6) in Application Log

n Analytics Policy

n Client Log filter match for Client IP

n RUM/ Client Insights Sampling – Client IP address to check when inserting RUM script

n Rate Limit based on client IP

n Compression Filter based on client IP

n Match based on client IP in SSO policy

n Allow list based on client IP in WAF policy

n WAF – Modsec Rules

n Allow list based on client IP in Bot Management Policy

n IP Reputation

n Geo Location-based Features

n True Client IP in DOS Analytics Reports

Upgrade
By default, True Client IP is disabled. Hence while upgrading the NSX Advanced Load Balancer,
all instances where client IP is referred to will refer to Source IP, and no change in behavior is
evident.

If True Client IP is enabled later, all the instances that refer to client IP will refer to True Client IP.
To use Source IP specifically in any such places, explicitly change the configuration.


Examples
The following examples show how the client IP is derived for different configurations and requests.
In every case, the Source IP is taken from the layer-3 header.

n True Client IP enabled, header X-Forwarded-For, direction Left, index 3. Request:
X-Forwarded-For: 1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4. Result: Client IP = 3.3.3.3.

n True Client IP enabled, header X-Forwarded-For, direction Left, index 4. Request:
X-Forwarded-For: 1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4. Result: Client IP = 4.4.4.4.

n True Client IP enabled, header X-Forwarded-For, direction Left, index 5. Request:
X-Forwarded-For: 1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4. Result: Client IP = 4.4.4.4.

n True Client IP enabled, header X-Forwarded-For, direction Left, index 4. Request:
X-Forwarded-For: 1.1.1.1, 2.2.2.2, 3.3.3.3, 4.4.4.4 and X-Forwarded-For: 10.10.10.10, 172.16.1.1,
192.168.1.1. Result: Client IP = 4.4.4.4.

n True Client IP enabled, header True-Client-IP, direction Left, index 4. Request:
X-Forwarded-For: 1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4 (the configured header is not present).
Result: Client IP = Source IP.

n True Client IP enabled, header True-Client-IP, direction Left, index 4. Request:
X-Forwarded-For: 1.1.1.1, 2.2.2.2, 3.3.3.3, 4.4.4.4 and True-Client-IP: 10.10.10.10, 172.16.1.1,
192.168.1.1. Result: Client IP = 192.168.1.1.

n True Client IP enabled, header not configured (default X-Forwarded-For), direction Left, index 3.
Request: X-Forwarded-For: 1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4. Result: Client IP = 3.3.3.3.

n True Client IP enabled, header X-Forwarded-For, direction not configured (default Left), index 3.
Request: X-Forwarded-For: 1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4. Result: Client IP = 3.3.3.3.

n True Client IP enabled, header X-Forwarded-For, direction Left, index 2. Request:
X-Forwarded-For: 1.1.1,2-2,3.3.3.3,4.4.4.4 (malformed header value, so it is ignored).
Result: Client IP = Source IP.

n True Client IP enabled, header X-Forwarded-For, direction Left, index 2. Request:
X-Forwarded-For: 1.1.1.1, 2.2.2.2, 3.3.3.3, 4.4.4.4. Result: Client IP = 2.2.2.2.

n True Client IP enabled, header True-Client-IP, direction Left, index 2. Request:
X-Forwarded-For: 1.1.1.1, 2.2.2.2, 3.3.3.3, 4.4.4.4 (the configured header is not present).
Result: Client IP = Source IP.

App Transport Security


With iOS 9 and later, Apple has mandated minimum security settings to comply with their App
Transport Security (ATS) standard. To enable this level of SSL security for applications proxied by
NSX Advanced Load Balancer, use the following settings for SSL/TLS Certificates and SSL/TLS
Profiles.

Certificates
The certificate must be issued by a Certificate Authority that is publicly trusted (included with the
operating system), or the CA’s root cert has been installed in the client device.

n RSA 2k or higher

n ECC 256 or higher

The issuer must create the cert with SHA-256 or greater.
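As an illustration of these requirements, the following OpenSSL commands are a sketch for generating an ATS-compliant key and a SHA-256 signed CSR to submit to a publicly trusted CA. The file names and the common name www.example.com are placeholders only.

openssl ecparam -name prime256v1 -genkey -noout -out ats-server.key
openssl req -new -sha256 -key ats-server.key -out ats-server.csr -subj "/CN=www.example.com"

For an RSA certificate, generate the key with openssl genrsa -out ats-server.key 2048 instead and create the CSR the same way.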

SSL / TLS Version


Only TLS 1.2 is supported. Disable earlier versions of SSL / TLS.

Cipher Support
All enabled ciphers must support PFS. Disable all but the following ciphers in the cipher list
view. If only an EC or only an RSA certificate is in use, you can enable just the compatible ciphers. If
both an EC and an RSA certificate will be used (the best practice), leave all of the following ciphers
enabled.

ECC Ciphers

n TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

n TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

n TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384

n TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA

n TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256


n TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA

RSA Ciphers

n TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

n TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

n TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384

n TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

n TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
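An ATS-oriented SSL/TLS profile can also be created from the CLI. The following is a minimal sketch, assuming a profile named ats-ssl-profile. The accepted_versions and accepted_ciphers fields belong to the SSL profile object, but verify the exact enum name and the full cipher string for your release; only a few PFS ciphers are shown here for brevity.

[admin:controller]: > configure sslprofile ats-ssl-profile
[admin:controller]: sslprofile> accepted_versions type ssl_version_tls1_2
[admin:controller]: sslprofile> accepted_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256
[admin:controller]: sslprofile> save

Attach the resulting profile to the virtual service together with the certificates described above.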

Chapter 8 Certificate Management Integration for CSR Automation
NSX Advanced Load Balancer supports automation of the process for requesting and installing a
certificate signed by a certificate authority (CA). This feature handles initial certificate registration
as well as renewal of certificates based on certificate expiration. You can manage certificates by
navigating to Templates > Security > Certificate Management and using the certificate management profile object.

You can create an instance of this object, an individual certificate management profile, which
provides a way to configure a path to a certificate script, along with the set of parameters the
script needs (CSR, common name, and others) to integrate with a certificate management service
within the customer's internal network. The script itself is left opaque by design to accommodate
the various certificate management services different customers may have.

For SSL certificate configuration, you need to select CSR and fill in the necessary fields for the
certificate, and select the certificate management profile to which this certificate is bound. The
NSX Advanced Load Balancer Controller will then use the CSR and the script to obtain the
certificate and also renew the certificate upon expiration. As a part of the renewal process, a
new key pair is generated and a certificate corresponding to this is obtained from the certificate
management service.


Without the addition of this automation, the process for sending the CSR to the external CA, then
installing the signed certificate and keys, must be performed by the NSX Advanced Load Balancer
user.

This chapter includes the following topics:

n Configuring Certificate Management Integration

n Renewing Default (Self-Signed) Certificates on NSX Advanced Load Balancer

n Customizing Notification of Certificate Expiration

n Enabling Client Certificate Authentication on NSX Advanced Load Balancer

n Full-chain CRL Checking for Client Certificate Validation


n Updating SSL Key and Certificate

n Customizing Notification of Certificate Expiration

Configuring Certificate Management Integration


This section explains how to configure certificate management integration.

The following are the steps to configure certificate management integration:

1 Prepare a Python script that defines a certificate_request() method. The method must
accept the following input as a dictionary:

a CSR

b Hostname for the Common Name field.

c Parameters defined in the certificate management profile.

2 Create a certificate management profile that calls the script.


Prepare the Script


The script must define the certificate_request function. For instance:

def certificate_request(csr, common_name, args_dict):
    """
    Check if a token exists that can be used.
    If not, authenticate against the service with the provided credentials.
    Invoke the certificate request and get back a valid certificate.
    Inputs:
    @csr: Certificate signing request string. This is a multi-line string output like what
          you get from openssl.
    @common_name: Common name of the subject.
    @args_dict: Dictionary of the key value pairs from the certificate management profile.
    """

The specific parameter values to be passed to the script are specified within the certificate
management profile.

Sensitive Parameters
For parameters that are sensitive, for instance, passwords, the values can be hidden. Marking a
parameter sensitive prevents its value from being displayed in the web interface or being passed
by the API.


Dynamic Parameter
The value for a certificate management parameter can be assigned within the profile or within
individual CSRs.

n If the parameter value is assigned within the profile, the value applies to all CSRs generated
using this profile.

n To dynamically assign a parameter’s value, indicate that the parameter is dynamic within the
certificate management profile. This leaves the parameter’s value unassigned. In this case, the
dynamic parameter’s value is assigned when creating an individual CSR using the profile. The
parameter value applies only to that CSR.

Create the Certificate Management Profile


This section explains how to create the certificate management profile.

Procedure

1 Navigate to Templates > Security > Certificate Management and click Create.

2 Specify the name for the profile.

3 Select an alert script configuration object type for the certificate management profile from the
drop-down list.

4 If the profile needs to pass parameter values to the script, check the Enable Custom
Parameters box and specify their names and values.

In this example, the location (URL) of the CA service and the login credentials for the service
are passed to the script. For parameters that are sensitive, such as passwords, select the
Sensitive checkbox. Marking a parameter sensitive prevents its value from being displayed in
the web interface or being passed by the API. For parameters that are to be dynamically
assigned during CSR creation, select the Dynamic checkbox. This leaves the parameter
unassigned within the profile.

5 Click Save.
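The same profile can also be created from the CLI. The following is a minimal sketch, assuming a profile named My-CA-Profile and a previously uploaded control script object named My-CA-ControlScript. The run_script_ref field and the script_params sub-object fields (name, value, is_sensitive) are assumptions based on the certificate management profile object and may differ slightly by release, so confirm them with the CLI help.

[admin:controller]: > configure certificatemanagementprofile My-CA-Profile
[admin:controller]: certificatemanagementprofile> run_script_ref My-CA-ControlScript
[admin:controller]: certificatemanagementprofile> script_params
New object being created
[admin:controller]: certificatemanagementprofile:script_params> name ca_url
[admin:controller]: certificatemanagementprofile:script_params> value https://ca.example.com/api
[admin:controller]: certificatemanagementprofile:script_params> save
[admin:controller]: certificatemanagementprofile> script_params
New object being created
[admin:controller]: certificatemanagementprofile:script_params> name password
[admin:controller]: certificatemanagementprofile:script_params> is_sensitive
[admin:controller]: certificatemanagementprofile:script_params> save
[admin:controller]: certificatemanagementprofile> save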

Use the Certificate Management Profile to get Signed Certificates


After adding the script and creating the certificate management profile, the profile can be used to
easily obtain and install CA-signed certificates.

Procedure

1 Navigate to Templates > Security > SSL/TLS Certificates. Select Application Certificate
option from CREATE drop-down list.

2 Specify the certificate name and select the type as CSR in the Type field.

3 Select the profile configured in the previous section from the Certificate Management Profile
drop-down list.


4 Specify the certificate details and click Save.

The NSX Advanced Load Balancer Controller generates a key pair and CSR, executes the
script to request the CA-signed certificate from the NSX Advanced Load Balancer PKI service,
and saves the signed certificate in persistent storage.
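For automation, the key binding can also be set from the CLI. The following is a minimal sketch, assuming a certificate object named app-cert; the certificate_management_profile_ref field is the relevant setting, while the CSR subject fields themselves are easiest to fill in through the UI as described above.

[admin:controller]: > configure sslkeyandcertificate app-cert
[admin:controller]: sslkeyandcertificate> certificate_management_profile_ref My-CA-Profile
[admin:controller]: sslkeyandcertificate> save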

Automatic Certificate Renewal


This section explains about automatic certificate renewal.

You can customize when certificate expiry notifications are sent; see the Customizing Notification
of Certificate Expiration section. If a certificate management profile is configured for a certificate,
a renewal is attempted in the last-but-one notification interval. By default, the NSX Advanced Load
Balancer Controller generates events 30 days, seven days, and one day before expiry. With this
setting, certificate renewal is attempted seven days before expiry.

If the certificate management profile is configured for automatic certificate renewal, a renewal is
attempted just prior to the penultimate notification (in the above example, that will be just prior
to the seven-day notification). If the renewal succeeds, the last two notifications are not sent. If
the renewal fails, the penultimate notification is sent. Thereafter, if a manual renewal succeeds
prior to the last notification, it is skipped. Otherwise, the final notification will be sent (with no
accompanying final attempt to renew).

When a certificate renewal occurs, a new expiration date is set and yet another notification
schedule is established per the values within the ssl_certificate_expiry_warning_days array in
force at the time.

Renewing Default (Self-Signed) Certificates on NSX


Advanced Load Balancer
This section explains how to replace the default certificate when the certificate has expired or if it
is going to expire. The steps mentioned in this section can also be used to replace the self-signed
certificate with the third party signed certificate.

Note The default certificate on NSX Advanced Load Balancer is self-signed.

Prerequisites
OpenSSL 1.1.x or later.

Changes required using NSX Advanced Load Balancer User Interface


The following are the changes required to renew the default certificate using the user interface:

n In NSX Advanced Load Balancer, navigate to Templates > Security > SSL/TLS Certificates and
click the Export icon at the right of the System-Default-Cert entry.

n Copy the data from the Key and Certificate fields to two new files using the COPY TO CLIPBOARD
option. Name the new files system-default.key and system-default.cer, respectively.


Changes Required using OpenSSL


The following are the changes required to renew the default certificate using OpenSSL:

n Use OpenSSL to run the following command to verify the expiration date of the certificate:

openssl x509 -in system-default.cer -noout -enddate

n Run the following command to generate a new CSR with the system-default.key.

openssl req -new -key system-default.key -out system-default.csr

n Run the following command to generate a new certificate with the new expiration date. In this
example, the new certificate is named as system-default2.cer.

openssl x509 -req -days 365 -in system-default.csr -signkey system-default.key -out system-
default2.cer

n Verify the expiration date on the new certificate (system-default2.cer)

openssl x509 -in system-default2.cer -noout -enddate


Changes Required using NSX Advanced Load Balancer CLI and NSX
Advanced Load Balancer UI
n Copy the system-default2.cer and the system-default.key to the NSX Advanced Load
Balancer Controller.

Optional Step: Before performing the next steps, you may disable any virtual services that are
configured to use the System-Default-Cert.

n Login to the NSX Advanced Load Balancer CLI, and execute the following command to
perform the changes for the default certificate on NSX Advanced Load Balancer (System-
Default-Cert).

[admin:cntrl1]: > configure sslkeyandcertificate System-Default-Cert

n Execute the certificate command, then press Enter. Run certificate file:<path to system-
default2.cer>/system-default2.cer. Enter the save command to save the changes.

[admin-cntrl1]: sslkeyandcertificate> certificate


[admin-cntrl1]: sslkeyandcertificate:certificate> certificate file:<path to system-
default2.cer>/system-default2.cer
[admin-cntrl1]: sslkeyandcertificate> save

n Enter the key file:<path to system-default.key>/system-default.key. Enter the save


command again.

[admin-cntrl1]: sslkeyandcertificate> key file:<path to system-default.key>/system-


default.key
[admin-cntrl1]: sslkeyandcertificate> save

n Enable the virtual services if they were disabled before the changes (optional).

n Login to the NSX Advanced Load Balancer user interface, navigate to Templates > Security >
SSL/ TLS Certificates and check the expiry date for the renewed certificate.

Customizing Notification of Certificate Expiration


NSX Advanced Load Balancer enables users to customize when SSL certificate expiry notification
is triggered. The system expects a minimum of three notification days. By default, the alerts are
triggered 30 days, seven days and one day before expiry.

Example:
In the below sequence,

1 The Controller's properties are first displayed.

2 Two notification periods (45 days and 14 days) are specified

3 Saved into the configuration.


4 The revised Controller properties are displayed as confirmation.

Note The two dates are automatically inserted and displayed in sequence.

[admin:10-10-26-52]: > configure controller properties


Updating an existing object. Currently, the object is:

+-----------------------------------------+---------+
| Field | Value |
+-----------------------------------------+---------+
| uuid | global |
| unresponsive_se_reboot | 300 |
| crashed_se_reboot | 900 |
| se_offline_del | 172000 |
| vs_se_create_fail | 1500 |
| vs_se_vnic_fail | 300 |
| vs_se_bootup_fail | 300 |
| se_vnic_cooldown | 120 |
| vs_se_vnic_ip_fail | 120 |
| fatal_error_lease_time | 120 |
| upgrade_lease_time | 360 |
| query_host_fail | 180 |
| vnic_op_fail_time | 180 |
| dns_refresh_period | 60 |
| se_create_timeout | 900 |
| max_dead_se_in_grp | 1 |
| dead_se_detection_timer | 360 |
| api_idle_timeout | 15 |
| allow_unauthenticated_nodes | False |
| cluster_ip_gratuitous_arp_period | 60 |
| vs_key_rotate_period | 60 |
| secure_channel_controller_token_timeout | 60 |
| secure_channel_se_token_timeout | 60 |
| max_seq_vnic_failures | 3 |
| vs_awaiting_se_timeout | 60 |
| vs_apic_scaleout_timeout | 360 |
| secure_channel_cleanup_timeout | 60 |
| attach_ip_retry_interval | 360 |
| attach_ip_retry_limit | 4 |
| persistence_key_rotate_period | 60 |
| allow_unauthenticated_apis | False |
| warmstart_se_reconnect_wait_time | 300 |
| vs_se_ping_fail | 60 |
| se_failover_attempt_interval | 300 |
| max_pcap_per_tenant | 4 |
| ssl_certificate_expiry_warning_days[1] | 30 days |
| ssl_certificate_expiry_warning_days[2] | 7 days |
| ssl_certificate_expiry_warning_days[3] | 1 days |
| seupgrade_fabric_pool_size | 20 |
| seupgrade_segroup_min_dead_timeout | 360 |
+-----------------------------------------+---------+

[admin:10-10-26-52]: controllerproperties> ssl_certificate_expiry_warning_days 45


[admin:10-10-26-52]: controllerproperties> ssl_certificate_expiry_warning_days 14


[admin:10-10-26-52]: controllerproperties> save

+-----------------------------------------+---------+
| Field | Value |
+-----------------------------------------+---------+
| uuid | global |
| unresponsive_se_reboot | 300 |
| crashed_se_reboot | 900 |
| se_offline_del | 172000 |
| vs_se_create_fail | 1500 |
| vs_se_vnic_fail | 300 |
| vs_se_bootup_fail | 300 |
| se_vnic_cooldown | 120 |
| vs_se_vnic_ip_fail | 120 |
| fatal_error_lease_time | 120 |
| upgrade_lease_time | 360 |
| query_host_fail | 180 |
| vnic_op_fail_time | 180 |
| dns_refresh_period | 60 |
| se_create_timeout | 900 |
| max_dead_se_in_grp | 1 |
| dead_se_detection_timer | 360 |
| api_idle_timeout | 15 |
| allow_unauthenticated_nodes | False |
| cluster_ip_gratuitous_arp_period | 60 |
| vs_key_rotate_period | 60 |
| secure_channel_controller_token_timeout | 60 |
| secure_channel_se_token_timeout | 60 |
| max_seq_vnic_failures | 3 |
| vs_awaiting_se_timeout | 60 |
| vs_apic_scaleout_timeout | 360 |
| secure_channel_cleanup_timeout | 60 |
| attach_ip_retry_interval | 360 |
| attach_ip_retry_limit | 4 |
| persistence_key_rotate_period | 60 |
| allow_unauthenticated_apis | False |
| warmstart_se_reconnect_wait_time | 300 |
| vs_se_ping_fail | 60 |
| se_failover_attempt_interval | 300 |
| max_pcap_per_tenant | 4 |
| ssl_certificate_expiry_warning_days[1] | 45 days |
| ssl_certificate_expiry_warning_days[2] | 30 days |
| ssl_certificate_expiry_warning_days[3] | 14 days |
| ssl_certificate_expiry_warning_days[4] | 7 days |
| ssl_certificate_expiry_warning_days[5] | 1 days |
| seupgrade_fabric_pool_size | 20 |
| seupgrade_segroup_min_dead_timeout | 360 |


To remove any of the warning_days entries, execute a sequence as follows within the configure
command:

[admin:10-10-26-52]: controllerproperties> no ssl_certificate_expiry_warning_days 14


[admin:10-10-26-52]: controllerproperties> no ssl_certificate_expiry_warning_days 1
[admin:10-10-26-52]: controllerproperties> save

Note Add as many warning_days entries as you like. However, while removing them NSX
Advanced Load Balancer will reject any attempt to reduce the number of entries below three.

Enabling Client Certificate Authentication on NSX Advanced


Load Balancer
This section explains how to enable client certificate authentication on NSX Advanced Load
Balancer. When client certificate authentication is enabled, NSX Advanced Load Balancer validates
SSL certificates presented by a client against a trusted certificate authority and a configured
certificate revocation list (CRL).

Prerequisites
Knowledge of OpenSSL

Generating Keys and Certificates


Creating Directories for Keys and Certificates

The following are the steps to create directories for the keys and certificates:

n Log in to the NSX Advanced Load Balancer CLI.

n Use the mkdir command to create a directory to store the keys and certificates required for
client authentication.

n Use the cd command to access the directory.

$ mkdir client-cert-auth-demo
$ cd client-cert-auth-demo
[client-cert-auth-demo] $

Generating Certificate Authority (CA) Key and Certificate

Use the openssl genrsa -out CA.key 2048 command to generate a 2048-bit RSA private key for the
CA, and then generate the self-signed CA certificate as shown below.

[client-cert-auth-demo] $ openssl genrsa -out CA.key 2048


Generating RSA private key, 2048 bit long modulus
......................................................................+++

e is 65537 (0x10001)
Generate self-signed CA Cert:


[client-cert-auth-demo] $ openssl req -x509 -new -nodes -key CA.key -sha256 -days 1024 -out
CA.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:California
Locality Name (eg, city) [Default City]:Santa Clara
Organization Name (eg, company) [Default Company Ltd]:Avi Networks
Organizational Unit Name (eg, section) []:Engineering
Common Name (eg, your name or your server's hostname) []:demo.avi.com
Email Address []:

Note Leave the email address empty.

Generating Client Certificate Signing Request (CSR)

The following are the steps to generate client certificate signing request:

1 Generate a client.key using the openssl genrsa -out client.key 2048 command.

2 Use the openssl req -new -key client.key -out client.csr command to create a client
CSR.

3 Specify all the details as per the requirement.

Note
n The Common Name should match the hostname or FQDN of your client machine.

n Leave the email address, the challenge password, and the optional company name empty.

Generate client CSR:


[client-cert-auth-demo] $ openssl req -new -key client.key -out client.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:California
Locality Name (eg, city) [Default City]:Santa Clara
Organization Name (eg, company) [Default Company Ltd]:Avi Networks
Organizational Unit Name (eg, section) []:Engineering
Common Name (eg, your name or your server's hostname) []:client.avi.com
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request


A challenge password []:


An optional company name []:

Creating Signed Client Certificate

Use the following OpenSSL command to create a signed client certificate.

[client-cert-auth-demo] $ openssl x509 -req -in client.csr -CA CA.pem -CAkey CA.key
-CAcreateserial -
out client.pem -days 1024 -sha256
Signature ok
subject=/C=US/ST=California/L=Santa Clara/O=Avi Networks/OU=Engineering/CN=client.avi.com
Getting CA Private Key

Converting Client Key from PEM to PKCS12 (PFX)

Use the following OpenSSL command to convert the client key format from PEM to PKCS12.
Provide an export password.

[client-cert-auth-demo] $ openssl pkcs12 -export -out client.pfx -inkey client.key -in


client.pem -certfile
CA.pem
Enter Export Password:
Verifying - Enter Export Password:

Configuring CRL
This section explains how to configure the CRL: first by generating it, and, if errors occur, by
re-generating it.

Generating CRL
By default, if client certificate validation is enabled in an HTTP profile, the PKI profile used by
the virtual service must contain at least one CRL. This CRL is issued by the CA that signed the
client certificate. Use the following OpenSSL command to generate the CRL using the key and the
certificate created in the previous steps.

[client-cert-auth-demo] $ openssl ca -gencrl -keyfile CA.key -cert CA.pem -out crl.pem


Using configuration from /etc/pki/tls/openssl.cnf
/etc/pki/CA/index.txt: No such file or directory
unable to open '/etc/pki/CA/index.txt'
139687578113952:error:02001002:system library:fopen:No such file or
directory:bss_file.c:398:fopen('/etc/pki/CA/index.txt','r')
139687578113952:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:

This command may exhibit a few errors. Take action as required. For instance, the following
commands create the file /etc/pki/CA/index.txt and the file /etc/pki/CA/crlnumber with the content 01:

[client-cert-auth-demo] $ touch /etc/pki/CA/index.txt
[client-cert-auth-demo] $ echo 01 > /etc/pki/CA/crlnumber


Re-generating the CRL


Once action is taken as per the error in the previous step, re-run the openssl ca -gencrl
-keyfile CA.key -cert CA.pem -out crl.pem command to generate the CRL once again.

[client-cert-auth-demo] $ openssl ca -gencrl -keyfile CA.key -cert CA.pem -out crl.pem


Using configuration from /etc/pki/tls/openssl.cnf

Exporting PFX Client Key to the Keychain of the Local Workstation


This section explains the steps to export PFX client key to the keychain of the local workstation.

n Copy the client.pfx to your workstation (in this example, a MAC workstation is used), and
open it in the keychain.

n Specify the export password to add the client PFX key to your local keychain store as shown
below.

Note Use the export password provided while converting PEM key to PFX key.

Creating PKI Application Profile


This section explains the steps to create PKI application profile using NSX Advanced Load
Balancer UI and NSX Advanced Load Balancer CLI.

Creating PKI Application Profile Using NSX Advanced Load Balancer UI


1 Navigate to Templates > Security > PKI Profile. Click Create.

2 In this example, a new PKI profile is created. Provide the desired name and check the Enable CRL
Check box.

3 In Certificate Authority (CA) tab, select Add and click Upload Certificate Authority File (CA)
to upload a file.


4 Navigate to the Certificate Revocation List (CRL) tab and select Add. You can add the details
either by providing the server URL or by uploading the file saved on your local workstation.

5 Click Save. As shown below, the CA file and the CRL file have been added to the PKI profile
(My-PKI-Profile). The PKI profile should contain a CRL for each of the intermediate CAs
in the chain of trust.


Creating PKI Application Profile using the NSX Advanced Load Balancer CLI
[admin:My-Avi-Controller-17.2.10]: > configure pkiprofile
test
[admin:My-Avi-Controller-17.2.10]: pkiprofile> ca_certs
New object being created
[admin:My-Avi-Controller-17.2.10]: pkiprofile:ca_certs> certificate --
Please input the value for field certificate (Enter END to terminate input):-----BEGIN
CERTIFICATE----- <————————— Paste cert here
MIIFAzCCA+ugAwIBAgIEUdNg7jANBgkqhkiG9w0BAQsFADCBvjELMAkGA1UEBhMC
VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50
cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3Qs
r2RsCAwEAAaOCAQkwggEFMA4GA1UdDwEB/wQEAwIBBjAP
jbEnmUK+xJPrSFdDcSPE5U6trkNvknbFGe/KvG9CTBaahqkEOMdl8PUM4ErfovrO
GhGonGkvG9/q4jLzzky8RgzAiYDRh2uiz2vUf/31YFJnV6Bt0WRBFG00Yu0GbCTy
BrwoAq8DLcIzBfvLqhboZRBD9Wlc44FYmc1r07jHexlVyUDOeVW4c4npXEBmQxJ/
B7hlVtWNw6f1sbZlnsCDNn8WRTx0S5OKPPEr9TVwc3vnggSxGJgO1JxvGvz8pzOl
u7sY82t6XTKH920l5OJ2hiEeEUbNdg5vT6QhcQqEpy02qUgiUX6C
-----END CERTIFICATE----- <————————— Press Enter key after pasting cert
END <————————— Type END and press Enter key
[admin:My-Avi-Controller-17.2.10]: pkiprofile:ca_certs> save
[admin:My-Avi-Controller-17.2.10]: pkiprofile> no crl_check <————————— Optional for
testing
[admin:My-Avi-Controller-17.2.10]: pkiprofile> save

Configuring HTTP Profile


This section explains the steps to configure the HTTP profile.


Procedure

1 Navigate to Templates > Profiles > Application and select HTTP from Create drop-down list
to create a new HTTP application profile. Provide the desired name, and set the type to HTTP.

2 Select the Security tab, and choose the Require tab under the Client SSL Certificate
Validation.

3 Select the PKI profile created in the previous step, and add the desired HTTP headers that you
want to see in the application logs.
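The equivalent settings can also be applied from the CLI. The following is a minimal sketch, assuming an HTTP application profile named my-http-app-profile and the PKI profile My-PKI-Profile created above; the field names follow the http_profile sub-object, but confirm them with the CLI help for your release.

[admin:controller]: > configure applicationprofile my-http-app-profile
[admin:controller]: applicationprofile> http_profile
[admin:controller]: applicationprofile:http_profile> ssl_client_certificate_mode ssl_client_certificate_require
[admin:controller]: applicationprofile:http_profile> pki_profile_ref My-PKI-Profile
[admin:controller]: applicationprofile:http_profile> save
[admin:controller]: applicationprofile> save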

Configuring L4 SSL/ TLS Profile


The NSX Advanced Load Balancer CLI interface can be used to configure L4 SSL/TLS application
profiles for client SSL certificate validation.

Procedure

1 Login to the NSX Advanced Load Balancer CLI (shell).

2 Edit or create the application profile for your L4 SSL/TLS application. For instance, my-L4-app-
profile.

> [admin:our-controller]: > configure applicationprofile my-L4-app-profile

3 Declare the profile to be type L4.

> [admin:our-controller]: applicationprofile> type application_profile_type_l4

4 Enter tcp_app_profile submode.

> [admin:our-controller]: applicationprofile> tcp_app_profile

5 Enter the ssl_client_certificate_mode. If you key in just a portion of the keyword, followed
by two TAB key clicks, three choices will appear.

> [admin:our-controller]: applicationprofile:tcp_app_profile> ssl_client_certificate_mode


ssl_client_certificate_
ssl_client_certificate_none Enum option does not have an e_description option
ssl_client_certificate_request Enum option does not have an e_description option
ssl_client_certificate_require Enum option does not have an e_description option

6 Pick the desired validation type, which is explained in a subsequent section of this article.

> [admin:our-controller]: applicationprofile:tcp_app_profile> ssl_client_certificate_mode


ssl_client_certificate_require

7 For either ssl_client_certificate_request or ssl_client_certificate_require mode, a


PKI profile is required and must exist prior to saving the application profile.

> [admin:our-controller]: applicationprofile:tcp_app_profile> pki_profile_ref my-L4-pki


8 Save the configuration.

> [admin:our-controller]: applicationprofile:tcp_app_profile> save


> [admin:our-controller]: applicationprofile> save
> [admin:our-controller]:

Associating Application Profile with Virtual Service


This section explains how to associate the application profile with a virtual service.

The following are the steps to associate application profile with virtual service:

1 Navigate to the Applications > Virtual Services.

2 Select the desired virtual service.

3 Click the edit icon and select the HTTP application profile created in the previous step.
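The association can also be made from the CLI. A minimal sketch, assuming a virtual service named my-secure-vs and the application profile created above:

[admin:controller]: > configure virtualservice my-secure-vs
[admin:controller]: virtualservice> application_profile_ref my-http-app-profile
[admin:controller]: virtualservice> save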

Testing Client Certificate Authentication against Virtual Service


Execute the following curl command using the certificates generated in the previous section to
test the connection to the virtual service. 10.10.27.101 is the IP address of the virtual service.

$ curl -k -v --cacert ./CA.pem --key ./client.key --cert ./client.pem https://10.10.27.101/

Full-chain CRL Checking for Client Certificate Validation


NSX Advanced Load Balancer supports use of Certificate Revocation Lists (CRLs). A CRL is a file
issued by a certificate authority (CA) that lists certificates that were issued by the CA but have
been revoked. When a client sends a request for an SSL connection to a virtual service, NSX
Advanced Load Balancer can check the CAs and CRL(s) in the virtual service’s PKI profile to verify
whether the client certificate is still valid.

The PKI profile has an option for full-chain CRL checking. You can enable this option by checking
the Enable CRL Check box.

n Full-chain CRL checking disabled: By default, if client certificate validation is enabled in the
HTTP profile used by the virtual service, the PKI profile used by the virtual service must contain
at least one CRL, a CRL issued by the CA that signed the client’s certificate.

For a client to pass certificate validation, the CRL in the profile must be from the same CA that
signed the certificate presented by the client, and the certificate must not be listed in the CRL
as revoked.

n Full-chain CRL checking enabled: For more rigorous certificate validation, full-chain CRL checking
can be enabled in the PKI profile. In this case, NSX Advanced Load Balancer requires the PKI profile
to contain a CRL for every intermediate certificate within the chain of trust for the client.


For a client to pass certificate validation, the profile must contain a CRL from each intermediate
CA in the chain of trust, and the certificate cannot be listed in any of the CRLs as revoked.
If the profile is missing a CRL for any of the intermediate CAs, or the certificate is listed as
revoked in any of those CRLs, the client’s request for an SSL session to the virtual service is
denied.

Note Another option in the PKI profile (Ignore Peer Chain) controls how NSX Advanced Load
Balancer assembles the chain of trust for a client, specifically whether the intermediate certificates
presented by the client are allowed to be used. If full-chain CRL checking is enabled, the PKI
profile must contain CRLs from the signing CAs for every certificate that is used to build a given
client’s chain of trust, whether the intermediate certificates are from the client or from the PKI
profile.

Here is an example of a PKI profile with CRL checking enabled. This profile also contains the
intermediate and root certificates that form the chain of trust for the server certificate. The profile
also contains the CRLs from the issuing authorities for the server and intermediate certificates.
The www.root.client.com CRL is used to verify whether the certificate www.intermediate.client.com is
valid. Likewise, the www.intermediate.client.com CRL is used to verify whether the “client” (leaf)
certificate www.client.client.com is valid.

Enabling Full-chain CRL Checking


The following are the steps to enable full-chain CRL checking:

1 Navigate to Templates > Security > PKI Profile.

2 Click Create.

3 Check Enable CRL Check box.

4 If creating a new profile, specify a name and add the key, certificate, and CRL files. Ensure that
the profile contains a CRL for each intermediate CA in the chain of trust.

5 Click Save.
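The same flag can also be toggled from the CLI. A minimal sketch, assuming an existing PKI profile named My-PKI-Profile (crl_check is the same field shown earlier with the no crl_check command):

[admin:controller]: > configure pkiprofile My-PKI-Profile
[admin:controller]: pkiprofile> crl_check
[admin:controller]: pkiprofile> save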

Updating SSL Key and Certificate


NSX Advanced Load Balancer supports the update of non-self-signed certificates.

Use Case
If a certificate expires or needs to be replaced, multiple virtual services can be impacted. Manually
updating each virtual service, one by one, to use a replacement certificate presents an
administrative burden. By updating the certificate in place, NSX Advanced Load Balancer lifts that
burden. Updating the pre-existing named certificate is automatically followed by a push to all
affected SEs, which in turn causes all affected virtual services to continue without interruption.


UI Interface
1 Navigate to Templates > Security > SSL/TLS Certificates.

2 Click the pencil icon at the extreme right of the row to open the certificate editor.

Note Any row listing a self-signed certificate will present no such option.

3 If the NSX Advanced Load Balancer SSLKeyAndCertificate object was created via a certificate


signing request (CSR), you can take the CSR and upload the new certificate by importing the
file.

4 On the other hand, if the NSX Advanced Load Balancer SSLKeyAndCertificate object was


created by importing the private key and certificate, you can edit and upload a new key-cert
pair.
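An in-place update can also be performed from the CLI, using the same pattern shown earlier for System-Default-Cert. A minimal sketch, assuming the certificate object is named app-cert and the replacement files have already been copied to the Controller:

[admin:controller]: > configure sslkeyandcertificate app-cert
[admin:controller]: sslkeyandcertificate> certificate
[admin:controller]: sslkeyandcertificate:certificate> certificate file:/tmp/app-cert-new.cer
[admin:controller]: sslkeyandcertificate:certificate> save
[admin:controller]: sslkeyandcertificate> key file:/tmp/app-cert-new.key
[admin:controller]: sslkeyandcertificate> save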

Customizing Notification of Certificate Expiration


NSX Advanced Load Balancer enables you to customize when SSL certificate expiry notification
is triggered. The system expects a minimum of three notification days. By default, the alerts are
triggered 30 days, seven days and one day before expiry.

For instance, in the below sequence:

1 The Controller's properties are first displayed.

2 Two notification periods (45 days and 14 days) are specified and saved into the configuration.

3 The revised Controller properties are displayed as confirmation.

Note The two dates are automatically inserted and displayed in sequence.

[admin:10-10-26-52]: > configure controller properties


Updating an existing object. Currently, the object is:
+-----------------------------------------+---------+
| Field | Value |
+-----------------------------------------+---------+
| uuid | global |
| unresponsive_se_reboot | 300 |
| crashed_se_reboot | 900 |
| se_offline_del | 172000 |
| vs_se_create_fail | 1500 |
| vs_se_vnic_fail | 300 |
| vs_se_bootup_fail | 300 |
| se_vnic_cooldown | 120 |
| vs_se_vnic_ip_fail | 120 |
| fatal_error_lease_time | 120 |
| upgrade_lease_time | 360 |
| query_host_fail | 180 |
| vnic_op_fail_time | 180 |
| dns_refresh_period | 60 |
| se_create_timeout | 900 |
| max_dead_se_in_grp | 1 |


| dead_se_detection_timer | 360 |
| api_idle_timeout | 15 |
| allow_unauthenticated_nodes | False |
| cluster_ip_gratuitous_arp_period | 60 |
| vs_key_rotate_period | 60 |
| secure_channel_controller_token_timeout | 60 |
| secure_channel_se_token_timeout | 60 |
| max_seq_vnic_failures | 3 |
| vs_awaiting_se_timeout | 60 |
| vs_apic_scaleout_timeout | 360 |
| secure_channel_cleanup_timeout | 60 |
| attach_ip_retry_interval | 360 |
| attach_ip_retry_limit | 4 |
| persistence_key_rotate_period | 60 |
| allow_unauthenticated_apis | False |
| warmstart_se_reconnect_wait_time | 300 |
| vs_se_ping_fail | 60 |
| se_failover_attempt_interval | 300 |
| max_pcap_per_tenant | 4 |
| ssl_certificate_expiry_warning_days[1] | 30 days |
| ssl_certificate_expiry_warning_days[2] | 7 days |
| ssl_certificate_expiry_warning_days[3] | 1 days |
| seupgrade_fabric_pool_size | 20 |
| seupgrade_segroup_min_dead_timeout | 360 |
+-----------------------------------------+---------+

[admin:10-10-26-52]: controllerproperties> ssl_certificate_expiry_warning_days 45


[admin:10-10-26-52]: controllerproperties> ssl_certificate_expiry_warning_days 14
[admin:10-10-26-52]: controllerproperties> save

+-----------------------------------------+---------+
| Field | Value |
+-----------------------------------------+---------+
| uuid | global |
| unresponsive_se_reboot | 300 |
| crashed_se_reboot | 900 |
| se_offline_del | 172000 |
| vs_se_create_fail | 1500 |
| vs_se_vnic_fail | 300 |
| vs_se_bootup_fail | 300 |
| se_vnic_cooldown | 120 |
| vs_se_vnic_ip_fail | 120 |
| fatal_error_lease_time | 120 |
| upgrade_lease_time | 360 |
| query_host_fail | 180 |
| vnic_op_fail_time | 180 |
| dns_refresh_period | 60 |
| se_create_timeout | 900 |
| max_dead_se_in_grp | 1 |
| dead_se_detection_timer | 360 |
| api_idle_timeout | 15 |
| allow_unauthenticated_nodes | False |
| cluster_ip_gratuitous_arp_period | 60 |
| vs_key_rotate_period | 60 |
| secure_channel_controller_token_timeout | 60 |


| secure_channel_se_token_timeout | 60 |
| max_seq_vnic_failures | 3 |
| vs_awaiting_se_timeout | 60 |
| vs_apic_scaleout_timeout | 360 |
| secure_channel_cleanup_timeout | 60 |
| attach_ip_retry_interval | 360 |
| attach_ip_retry_limit | 4 |
| persistence_key_rotate_period | 60 |
| allow_unauthenticated_apis | False |
| warmstart_se_reconnect_wait_time | 300 |
| vs_se_ping_fail | 60 |
| se_failover_attempt_interval | 300 |
| max_pcap_per_tenant | 4 |
| ssl_certificate_expiry_warning_days[1] | 45 days |
| ssl_certificate_expiry_warning_days[2] | 30 days |
| ssl_certificate_expiry_warning_days[3] | 14 days |
| ssl_certificate_expiry_warning_days[4] | 7 days |
| ssl_certificate_expiry_warning_days[5] | 1 days |
| seupgrade_fabric_pool_size | 20 |
| seupgrade_segroup_min_dead_timeout | 360 |
+-----------------------------------------+---------+

To remove any of the warning_days entries, execute a sequence within the configure command.
For instance,

[admin:10-10-26-52]: controllerproperties> no ssl_certificate_expiry_warning_days 14


[admin:10-10-26-52]: controllerproperties> no ssl_certificate_expiry_warning_days 1
[admin:10-10-26-52]: controllerproperties> save

Note Add as many warning_days entries as you like. However, when removing them, NSX
Advanced Load Balancer will reject any attempt to reduce the number of entries below three.

Chapter 9 Hardware Security Module (HSM)
A Hardware security module (HSM) is a physical computing device that safeguards and
manages digital keys for strong authentication and provides cryptoprocessing. NSX Advanced
Load Balancer supports configuration of dedicated interfaces on NSX Advanced Load Balancer
Controller and Service Engines for hardware security module (HSM) and sideband (ASM)
communication on Cisco Cloud Services Platform (CSP). HSM and ASM communication are
supported for both an existing setup and a new NSX Advanced Load Balancer setup.

The support for HSM and ASM communication on NSX Advanced Load Balancer is as follows:

n NSX Advanced Load Balancer supports dedicated interfaces for HSM communication on new
Service Engines.

n NSX Advanced Load Balancer supports dedicated interfaces for HSM communication on
existing Service Engines.

n NSX Advanced Load Balancer supports dedicated interfaces for ASM (sideband) communication
on new and existing Service Engines.

n NSX Advanced Load Balancer supports dedicated interfaces for HSM communication on new
and existing NSX Advanced Load Balancer Controllers.

For more information, see the Additional Deployment Options section in the Cisco CSP Installation
guide.

This chapter includes the following topics:

n Thales Luna (formerly SafeNet Luna) HSM

Thales Luna (formerly SafeNet Luna) HSM


This article describes how to configure NSX Advanced Load Balancer to use the key generation
and encryption/decryption services provided by Thales Luna Network HSM. This enables use of
Thales Luna Network HSM to store keys associated with SSL/TLS resources configured on a virtual
service.

NSX Advanced Load Balancer includes integration support for networked Thales Luna HSM
products (formerly SafeNet Luna Network HSM) and AWS CloudHSM V2.

This article covers the Thales Luna Network HSM (formerly SafeNet Luna Network HSM)
integration.


Integration Support
NSX Advanced Load Balancer can be configured to support a cluster of HSM devices in high
availability (HA) mode. NSX Advanced Load Balancer support of HSM devices requires installation
of the user’s Thales Luna Client Software bundle, which can be downloaded from the Thales
website.

By default, NSX Advanced Load Balancer Controller and Service Engines use their respective
management interfaces for HSM communication. On CSP, NSX Advanced Load Balancer supports
the use of a dedicated Service Engine data interface for HSM interaction. Also, on the CSP
platform, you can use dedicated Controller interface for HSM communication.

You can choose to create the HSM group in the admin tenant with all the Service Engines spread
across multiple tenants. This way, HSM can be enabled on a per-SE-group basis by attaching the
HSM group to the corresponding SE group. In this mode, the configuration to choose between
a dedicated interface and a management interface for HSM communication is done in the admin
tenant; all other tenants are forced to use that configuration.

Alternatively, you can create HSM groups in their respective tenants. The configuration choice of
a dedicated or management interface for HSM communication is determined at the tenant level.
In this mode, Controller IPs can overlap in every HSM group. Internally, the certificate for these
overlapping clients is created once and reused for any subsequent HSM group creation.

Prerequisites
n Thales Luna devices are installed on your network.

n Thales Luna devices are reachable from the NSX Advanced Load Balancer Controller and
Service Engines.

n Thales Luna devices must have a virtual HSM partition defined before installing the client
software. Clients are associated with a unique partition on the HSM. These partitions should
be pre-created on all the HSMs that will be configured in HA/non-HA mode. Also note that
the password to access these partitions should be the same across the partitions on all HSM
devices.

n Server certificates for Thales Luna devices are available for creating the HSM Group in NSX
Advanced Load Balancer Controller for mutual authentication.

n Each NSX Advanced Load Balancer Controller and Service Engine must:

n Have the client license from Thales Luna to access the HSM.

n Be able to reach the HSM at ports 22 and 1792 through Controller management
or Controller dedicated and Service Engine management or Service Engine dedicated
management interface.

You need to download the following:

n Thales Luna Network HSM client software

n Thales Luna Network HSM customer documentation


HSM Group Updates


After creation, update or deletion of an HSM group requires reloading of a new Thales Luna
configuration, which can only be achieved by restarting the Service Engines. Restart of Service
Engines temporarily disrupts traffic.

Thales Luna Software Import


This section explains the procedure to install the Thales Luna software bundle onto the NSX
Advanced Load Balancer Controller.

To enable support for Thales Luna Network HSM, the downloaded Thales Luna client software
bundle must be uploaded to the NSX Advanced Load Balancer Controller. It must be named
safenet.tar and can be prepared as follows:

n Copy files from the downloaded software into any given directory, for instance, safenet_pkg.

n Change directory (cd) to that directory, and enter the cp commands as follows:

Note This example uses HSM version 7.3.3.

cp 610-012382-008_revC/linux/64/configurator-5.4.1-2.x86_64.rpm
configurator-5.4.1-2.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/configurator-7.3.0-165.x86_64.rpm
configurator-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/libcryptoki-7.3.0-165.x86_64.rpm
libcryptoki-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/vtl-7.3.0-165.x86_64.rpm vtl-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/lunacmu-7.3.0-165.x86_64.rpm lunacmu-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/cklog-7.3.0-165.x86_64.rpm cklog-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/multitoken-7.3.0-165.x86_64.rpm
multitoken-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/ckdemo-7.3.0-165.x86_64.rpm ckdemo-7.3.0-165.x86_64.rpm
cp LunaClient_7.3.0-165_Linux/64/lunacm-7.3.0-165.x86_64.rpm lunacm-7.3.0-165.x86_64.rpm
tar -cvf safenet.tar configurator-7.3.0-165.x86_64.rpm libcryptoki-7.3.0-165.x86_64.rpm
vtl-7.3.0-165.x86_64.rpm lunacmu-7.3.0-165.x86_64.rpm cklog-7.3.0-165.x86_64.rpm
multitoken-7.3.0-165.x86_64.rpm ckdemo-7.3.0-165.x86_64.rpm lunacm-7.3.0-165.x86_64.rpm

n HSM package can be uploaded in the web interface at Administration > Settings > Upload
HSM Packages.

n HSM package upload is also supported through the CLI. You can use the following command
in the NSX Advanced Load Balancer Controller CLI shell to upload the HSM package:

upload hsmpackage filename /tmp/safenet_pkg/safenet.tar


This command uploads the packages and installs them on the NSX Advanced Load Balancer
Controller or, if clustered, on all NSX Advanced Load Balancer Controllers. If the Controller is
deployed as a three-node cluster, the command installs the packages on all three nodes. Upon
completion of the command, the system displays the HSM Package uploaded successfully message.

n NSX Advanced Load Balancer Service Engines in an SE group that refers to an HSM group need
a one-time reboot for auto-installation of the HSM packages. To reboot an NSX Advanced Load
Balancer SE, issue the following CLI shell command:

reboot serviceengine Avi-se-ksueq

n To allow NSX Advanced Load Balancer Controllers to communicate with the Thales Luna HSM,
the Thales Luna client software bundle distributed with the product must be uploaded to NSX
Advanced Load Balancer. The software bundle preparation and upload are described above. In
this example, the NSX Advanced Load Balancer SE name is Avi-se-ksueq.

Enabling HSM Support in NSX Advanced Load Balancer


After using the above steps to install the Thales Luna software bundle onto the NSX Advanced
Load Balancer Controller, the Controller may be configured to secure virtual services with HSM
certificates. Follow the detailed steps below:

Step 1: Create the HSM Group and add the HSM devices to it
To begin, use the following commands on the Controller bash shell to fetch the certificates of the
HSM servers. The example below fetches certificates from two servers, 1.1.1.11 and 1.1.1.13.

username@avi:~$ sudo scp admin@1.1.1.11:server.pem hsmserver11.pem


username@avi:~$ sudo scp admin@1.1.1.13:server.pem hsmserver13.pem
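If several Thales Luna appliances are configured, the same fetch can be scripted. The following is a
small optional sketch; the appliance IP addresses and output file names are simply the examples
used above:

# Fetch server.pem from each appliance and save it as hsmserver<last-octet>.pem
for ip in 1.1.1.11 1.1.1.13; do
    sudo scp admin@${ip}:server.pem hsmserver${ip##*.}.pem
done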

The contents of these certificates are used while creating the HSM Group. NSX Advanced Load
Balancer supports trusted authentication for all nodes in the system. This is done by providing the
IP addresses of the Controllers and Service Engines that will interact with the HSM, using the HSM
Group editor options described below. The Thales Luna server certificates can also be provided
by the security team managing the Thales Luna appliances. In either case, having access to these
certificates is a prerequisite for creating any HSM configuration in NSX Advanced Load Balancer.

By default, SEs use the management network to interact with the HSM. On CSP, NSX Advanced
Load Balancer also supports the use of a dedicated network for HSM interaction. Also, on the CSP
platform, you can use a dedicated interface on the Controllers for HSM communication.

The following are the steps to create the HSM group from the GUI:

n Switch to the desired tenant and navigate to Templates > Security > HSM Groups.

n Click Create and provide a suitable name in the Name field.

n Select Luna HSM option from the Type drop-down list.

n Select either Dedicated Network or Management Network for the HSM to communicate with.


n Specify the client IP addresses of the desired Thales Luna appliances and the respective server
certificates obtained previously. Multiple HSMs can be included in the group using the green
Add Additional HSM button.

The Password and partition Serial Number fields can be populated if the respective HSM partition
passwords are available at this stage. Otherwise, populate them after the client registration step
below.

Note
n If any dedicated SE or Controller interfaces have been configured for HSM communication,
select the Dedicated Interface check box and verify that the IPs listed are those of the desired
dedicated interfaces on the Service Engines and/or Controllers. If they are not, the UI allows
the IP addresses to be changed.

n All NSX Advanced Load Balancer Controllers and all Service Engines associated with the SE
group must have at least one IP address in the list to ensure access to the HSMs. This step
is extremely important because Thales Luna appliances do not allow communication from
unregistered client IP addresses. Click Save once all client IP addresses have been verified.

Step 2: Register the Client with HSM Devices for Mutual Authentication
The clients in this case are the NSX Advanced Load Balancer Controllers and Service Engines, and
the generated client certificates must be registered with the Thales Luna appliances for mutual
authentication. This can be done directly per steps 3 and 4 below, or by sending the client
certificates to the security team managing the HSM appliances.

The following are the steps to register the client with HSM devices:

1 Navigate to Templates > Security > HSM Groups. Click edit icon to download generated
certificates.

2 After download, save the certificate as <client-IP>.pem. In this example, the certificate needs to
be saved as 10.160.100.220.pem before copying it to the HSM with scp.

scp 10.160.100.220.pem admin@1.1.1.11:


3 Register the client on the HSM.

username@avi:~$ ssh admin@1.1.1.11


admin@1.1.1.11's password:
Last login: Thu May 12 19:52:00 2016 from 12.97.16.194
Luna SA 7.3.3-7 Command Line Shell - Copyright (c) 2001-2014 SafeNet, Inc. All rights
reserved.
[1.1.1.11] lunash: client register -c 10.160.100.220 -i 10.160.100.220
'client register' successful. Command Result : 0 (Success)
[1.1.1.11] lunash: client assignPartition -c 10.160.100.220 -p par43
'client assignPartition' successful. Command Result : 0 (Success)
[1.1.1.11] lunash: exit

4 Perform steps (1) and (2) above for all HSM devices. Perform the next steps only after all
client certificates are registered on all the HSM appliances configured above, to verify the
registration. First, ensure the partition password is populated in the HSM group by editing the
HSM group.

5 On the NSX Advanced Load Balancer Controller bash shell, the application ID must be opened
before the SE can communicate with the HSM. This can be done using the following command,
which is automatically replicated to each NSX Advanced Load Balancer Controller in the
cluster. If HSM groups were created in different tenants, the safenet.py script takes an
optional -t argument; alternatively, the default admin tenant can be provided as the argument
value. Verify that the application ID is opened successfully per the output below.

username@avi:~$ /opt/avi/scripts/safenet.py -p [HSM-GROUP] -i [CLIENT IP OF CONTROLLER
REGISTERED WITH HSM] -t [TENANT_NAME] -c "/etc/luna/bin/sautil -v -s 1 -i 1792:1793 -o
-p my_partition_password"
Copyright (C) 2009 SafeNet, Inc. All rights reserved.
sautil is the property of SafeNet, Inc. and is provided to our customers for the purpose of
diagnostic and development only. Any re-distribution of this program in whole or in part
is a violation of the license agreement.
Config file: /etc/Chrystoki.conf.
Will use application ID [1792:1793].
Application ID [1792:1793] opened.
Open ok.
Session opened. Handle 1
HSM Slot Number is 1.


HSM Label is "ha1".


WARNING: Application Id 1792:1793 has been opened for access. Thus access will remain open
until all sessions associated with this Application Id are closed or until the access is
explicitly closed.

Note In the step above, if an error message appears stating that the application is already open,
you can close it using the following command. After closing it, reopen the application.

username@avi:~$ /opt/avi/scripts/safenet.py -p [HSM-GROUP] -i [CLIENT IP OF CONTROLLER
REGISTERED WITH HSM] -t [TENANT_NAME] -c "/etc/luna/bin/sautil -v -s 1 -i 1792:1793 -c -p
my_partition_password"

Copyright (C) 2009 SafeNet, Inc. All rights reserved.

sautil is the property of SafeNet, Inc. and is provided to our customers for the purpose of
diagnostic and development only. Any re-distribution of this program in whole or in part is
a violation of the license agreement.
Config file: /etc/Chrystoki.conf.
Close ok.

Step 3: Setting Up HA across HSM Devices (Optional)


NSX Advanced Load Balancer automates the configuration of HA across HSM devices. Before
configuring HA, ensure that the clients are registered with the HSM by using the listSlots command.
This command provides details about the HSM devices to be set up. The serial numbers provided in
the output of this command are needed to set up HA across these devices.

Verify that the partition serial numbers listed below match the ones set up on the Thales
Luna appliances or the ones provided by the security team. This should also match with the
configuration in the HSM group object. Internally, the serial number is used to configure HA if the
client is registered on more than one partition on the HSM.

username@avi:~$ /opt/avi/scripts/safenet.py -p [HSM-GROUP] -i [CLIENT IP OF CONTROLLER
REGISTERED WITH HSM] -t [TENANT_NAME] -c "/usr/safenet/lunaclient/bin/vtl listSlots"

Number of slots: 5

The following slots were found:

Slot # Description Label Serial # Status


========= =================== ================== ========== ============
slot #1 LunaNet Slot par43 156908040 Present
slot #2 LunaNet Slot par40 156936072 Present
slot #3 - - - Not present
slot #4 - - - Not present
slot #5 - - - Not present

You can enable HA from the CLI as follows after switching to the appropriate tenant if required.

[username:avi]: > switchto tenant [TENANT_NAME]


[username:avi]: > configure hardwaresecuritymodulegroup safenet-network-hsm-1
[username:avi]: hardwaresecuritymodulegroup> hsm type hsm_type_safenet_luna


[username:avi]: hardwaresecuritymodulegroup:hsm> sluna


[username:avi]: hardwaresecuritymodulegroup:hsm:sluna> is_ha
[username:avi]: hardwaresecuritymodulegroup:hsm:sluna> save
[username:avi]: hardwaresecuritymodulegroup:hsm:sluna> save
[username:avi]: hardwaresecuritymodulegroup> save

Alternatively, this can also be done in the web interface by selecting the HSM group and editing it
to select the Enable HA check box. This option is available only while editing the HSM group with
more than one server.

Once HA is set up, verify the output of the listSlots command to ensure the avi_group virtual
card slot is configured.

[username:avi]: /opt/avi/scripts/safenet.py -p [HSM-GROUP] -i [CLIENT IP OF CONTROLLER
REGISTERED WITH HSM] -t [TENANT_NAME] -c "/usr/safenet/lunaclient/bin/vtl listSlots"

Number of slots: 1

The following slots were found:

Slot # Description Label Serial # Status


========= ================= ==================== ========== ============
slot #1 HA Virtual Card Slot avi_group 1529532014 Present

Step 4: Associate the HSM Group with an SE Group


The HSM group must be added to the SE group that will be used by the virtual service.

n Switch to the appropriate tenant and navigate to Infrastructure > Cloud Resources > Service
Engine Group.

n Bring up the Service Engine group editor for the desired Service Engine group.

n Click Advanced tab.

n Select the desired HSM group from the drop-down list.

n Click Save.

This also can be configured using the CLI shell:

[username:avi]: > switchto tenant [TENANT_NAME]


[username:avi]: > configure serviceenginegroup [SE-GROUP]
[username:avi]: serviceenginegroup> hardwaresecuritymodulegroup_ref [HSM-GROUP]
[username:avi]: serviceenginegroup> save
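
The same association can also be made, in principle, through the Controller REST API by patching
the Service Engine group's hardwaresecuritymodulegroup_ref field. The following sketch is
illustrative only and is not taken from this guide; the authentication method, the X-Avi-Version
header value, and the UUID placeholders are assumptions that must be adapted to your deployment:

# Hypothetical REST call; replace the UUIDs, credentials, and version header as appropriate
curl -k -u admin:<password> \
  -H "Content-Type: application/json" \
  -H "X-Avi-Version: 21.1.4" \
  -X PATCH "https://<controller-ip>/api/serviceenginegroup/<se-group-uuid>" \
  -d '{"replace": {"hardwaresecuritymodulegroup_ref": "/api/hardwaresecuritymodulegroup/<hsm-group-uuid>"}}'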

Step 5: Add the Application Certificates and Keys


Create Application Certificate and Keys

The Controller is set up as a client of the HSM and can be used to create keys and certificates on
the HSM. Both RSA and EC key/certificate creation are supported.


Use a browser to navigate to the Controller’s management IP address. If NSX Advanced Load
Balancer is deployed as a three-node Controller cluster, navigate to the management IP address
of the cluster. Use this procedure to create keys and certificates. The creation process is similar to
any other key/certificate creation. For a key/certificate bound to the HSM, select the HSM group
while creating the object. The following steps illustrate the creation of a self-signed certificate
bound to an HSM group.

n Navigate to Templates > Security > SSL/TLS Certificates .

n Click Create > Application Certificate.

Note In this example, the HSM Group t2-avihsm2 is selected. This is the HSM group that was
created earlier. Click Save to create the self-signed EC certificate on the HSM provided in
t2-avihsm2.

Import Application Certificate and Keys

Use a browser to navigate to the NSX Advanced Load Balancer Controller’s management IP
address. If NSX Advanced Load Balancer is deployed as a three-node Controller cluster, navigate
to the management IP address of the cluster. Use this procedure to import the private keys
created using the Thales Luna cmu/sautil utilities, and the associated certificates.

n Navigate to Templates > Security > SSL/TLS Certificates.

n Click Create > Application Certificate.

n Specify the name for the certificate definition.

n Select Import option from the Type drop-down list.

n Prepare to import the private key for the server certificate.

n Upload the certificate file in Upload or Paste Certificate File field in the Certificate
Information section. You can select Paste text (to copy-and-paste the certificate text
directly in the web interface) or Upload File.

n If the key file is secured by a passphrase, enter it in the Key Passphrase field.

n Paste the key file (if copy-and-pasting) or navigate to the location of the file (if uploading).

n Prepare to import the server certificate:

n Above the Certificate field, select Paste text or Upload File.

n Paste the key file (if copy-and-pasting) or navigate to the location of the file (if uploading).

n Click Validate. NSX Advanced Load Balancer checks the key and certificate files to ensure they
are valid.
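
Independently of this validation, for keys that exist as PEM files (that is, keys that were not
generated on the HSM itself), you can confirm that a certificate and key belong together before
importing them. One common way, using standard openssl commands with placeholder file names, is:

# The two digests must match if the certificate and the private key form a pair
openssl x509 -noout -pubkey -in server.crt | openssl sha256
openssl pkey -pubout -in server.key | openssl sha256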

Step 6: Enable HSM Support on a Virtual Service


n In the Controller web management interface, navigate to Applications > Virtual Services

n Click New or Edit.

n If configuring a new virtual service, specify the name of the VIP.


n Select the HSM certificate from the SSL Certificate drop-down list.

n Specify the virtual service name and VIP address.

n In the Service Port section, enable SSL.

n Click Advanced. On the Advanced page, select the SE group to which the HSM group was
added.

n Click Save.

The virtual service is now ready to handle SSL/TLS traffic using the encryption/decryption services
of the Thales Luna Network HSM device.

Configuring Dedicated Interfaces for HSM Communication on New NSX Advanced Load Balancer
Service Engines
NSX Advanced Load Balancer supports dedicated interface on Service Engines for HSM
communication in the following environments:

n Cisco CSP

n vCenter No Orchestrator Mode

Note Starting with NSX Advanced Load Balancer version 20.1.5, dedicated interfaces for Service
Engines deployed in vCenter No Orchestrator environments are supported.

Dedicated hardware security module (HSM) interfaces on NSX Advanced Load Balancer Service
Engines use the following configuration parameters:

n avi.hsm-ip.SE

n avi.hsm-static-routes.SE

n avi.hsm-vnic-id.SE

Parameters
avi.hsm-ip.SE

n Description : This is the IP address of the dedicated HSM vNIC on the SE (this is NOT the IP
address of the HSM).

n Format: IP-address/subnet-mask

n Example : avi.hsm-ip.SE: 10.160.103.227/24


avi.hsm-static-routes.SE

n Description : These are comma-separated, static routes to reach HSM devices. Even /32
routes can be provided.

Note If there is a single static route, provide the same and ensure the square brackets are
matched. Also, if HSM devices are in the same subnet as the dedicated interfaces, provide the
gateway as the default gateway for the subnet.

n Format : [ hsm network1/mask1 via gateway1, hsm network2/mask2 via gateway2 ] OR [ hsm
network1/mask1 via gateway1 ]

n Example : avi.hsm-static-routes.SE:[ 10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via


10.160.103.2]

avi.hsm-vnic-id.SE

n Description : For CSP, this is the ID of the dedicated HSM vNIC and is typically 3 (vNIC0 is
the management interface, vNIC1 is the data-in interface, and vNIC2 is the data-out interface).
For vCenter No Orchestrator, this is the vNIC ID, for instance, "3" for "Eth3".

n Format : ‘numeric vNIC ID’

n Example : avi.hsm-vnic-id.SE: ‘3’

YAML Parameter: avi.hsm-ip.SE
Description: IP address of the dedicated HSM vNIC on the SE (this is NOT the IP address of the HSM)
Format: IP-address/subnet-mask
Example: avi.hsm-ip.SE: 10.160.103.227/24

YAML Parameter: avi.hsm-static-routes.SE
Description: Comma-separated, static routes to reach the HSM devices. Even /32 routes can be provided
Format: [ hsm network1/mask1 via gateway1, hsm network2/mask2 via gateway2 ] OR [ hsm network1/mask1 via gateway1 ]
Example: avi.hsm-static-routes.SE: [ 10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via 10.160.103.2]

YAML Parameter: avi.hsm-vnic-id.SE
Description: ID of the dedicated HSM vNIC
Format: numeric vNIC ID
Example: avi.hsm-vnic-id.SE: '3'

Instructions
Cisco CSP

A sample YAML file for the Day Zero configuration on the CSP is shown below:

bash# cat avi_meta_data_dedicated_hsm_SE.yml


avi.mgmt-ip.SE: "10.128.2.18"
avi.mgmt-mask.SE: "255.255.255.0"
avi.default-gw.SE: "10.128.2.1"
AVICNTRL: "10.10.22.50"
AVICNTRL_AUTHTOKEN: "febab55d-995a-4523-8492-f798520d4515"
avi.hsm-ip.SE: 10.160.103.227/24


avi.hsm-static-routes.SE:[ 10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via 10.160.103.2]


avi.hsm-vnic-id.SE: '3'

Once an NSX Advanced Load Balancer Service Engine is created with the Day Zero configuration
file and the appropriate virtual NIC interfaces are added to the SE service instance on Cisco CSP,
verify that the dedicated vNIC configuration is applied successfully and that the HSM devices are
reachable via this interface. In this case, interface eth3 (the dedicated HSM interface) is configured
with IP 10.160.103.227/24.

Log in to the bash prompt of the NSX Advanced Load Balancer SE, use the ip route command, and
run a ping test to check reachability from the dedicated interface.

bash# ssh admin@<SE-MGMT-IP>


bash# ifconfig eth3
eth3 Link encap:Ethernet HWaddr 02:6a:80:02:11:05
inet addr:10.160.103.227 Bcast:10.160.103.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4454601 errors:0 dropped:1987 overruns:0 frame:0
TX packets:4510346 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000

RX bytes:672683711 (672.6 MB) TX bytes:875329395 (875.3 MB)


bash# ip route
default via 10.128.2.1 dev eth0
10.128.1.0/24 via 10.160.103.1 dev eth3
10.128.2.0/24 via 10.160.103.2 dev eth3
10.128.2.0/24 dev eth0 proto kernel scope link src 10.128.2.27
10.160.103.0/24 dev eth3 proto kernel scope link src 10.160.103.227
bash# ping -I eth3 <HSM-IP>
ping -I eth3 10.128.1.51
PING 10.128.1.51 (10.128.1.51) from 10.160.103.227 eth3: 56(84) bytes of data.
64 bytes from 10.128.1.51: icmp_seq=1 ttl=62 time=0.229 ms

vCenter No-Orchestrator
When the Service Engine is being deployed, add the OVF properties listed above to the virtual
machine. For existing Service Engines, the SE virtual machine can be powered off, the OVF
properties added, and the VM powered on.
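One possible way to supply these OVF properties at deployment time (not prescribed by this guide)
is with VMware ovftool, which accepts --prop:key=value arguments. In the hypothetical invocation
below, the OVA file, the vi:// locator, and the VM name are placeholders, while the property keys and
values follow the examples above:

# Hypothetical ovftool invocation; adjust the locator, OVA, and values for your environment
ovftool --name=Avi-se-hsm \
  --prop:"avi.hsm-ip.SE"="10.160.103.227/24" \
  --prop:"avi.hsm-static-routes.SE"="[ 10.128.1.0/24 via 10.160.103.1 ]" \
  --prop:"avi.hsm-vnic-id.SE"="3" \
  se.ova "vi://administrator@vcenter.example.com/Datacenter/host/Cluster"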

Additional Information
For different types of supported configuration for HSM and ASM communication on NSX
Advanced Load Balancer, refer to How to configure dedicated interfaces for HSM and ASM
communication on Cisco CSP.

Configuring Dedicated Interfaces for HSM Communication on an Existing NSX Advanced Load
Balancer Service Engine
NSX Advanced Load Balancer supports dedicated interface on Service Engines for HSM
communication in the following environments:

n Cisco CSP


n vCenter No Orchestrator Mode

Background
Dedicated hardware security module (HSM) interfaces on NSX Advanced Load Balancer Service
Engines use the following configuration parameters:

n avi.hsm-ip.SE

n avi.hsm-static-routes.SE

n avi.hsm-vnic-id.SE

For existing SEs, these parameters can be populated in the /etc/ovf_config file.

Note All parameters in this file are comma-separated and the file format is slightly different
from the YML file used for spinning up new Service Engines. However, the parameters and their
respective formats are exactly the same as they are for new Service Engines.

YAML parameters
avi.hsm-ip.SE

n Description : This is the IP address of the dedicated HSM vNIC on the SE (this is NOT the IP
address of the HSM).

n Format : IP-address/subnet-mask.

n Example : avi.hsm-ip.SE: 10.160.103.227/24

avi.hsm-static-routes.SE

n Description : These are comma-separated, static routes to reach HSM devices. Even /32
routes can be provided.

Note If there is a single static route, provide the same and ensure the square brackets are
matched. Also, if HSM devices are in the same subnet as the dedicated interfaces, provide the
gateway as the default gateway for the subnet.

n Format : [ hsm network1/mask1 via gateway1, hsm network2/mask2 via gateway2 ] OR [ hsm
network1/mask1 via gateway1 ]

n Example : avi.hsm-static-routes.SE:[ 10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via


10.160.103.2]

avi.hsm-vnic-id.SE

n Description : This is the ID of the dedicated HSM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface).

n Format : ‘numeric vNIC ID’.

n Example : avi.hsm-vnic-id.SE: ‘3’


YAML Parameter: avi.hsm-ip.SE
Description: IP address of the dedicated HSM vNIC on the SE (this is NOT the IP address of the HSM)
Format: IP-address/subnet-mask
Example: avi.hsm-ip.SE: 10.160.103.227/24

YAML Parameter: avi.hsm-static-routes.SE
Description: Comma-separated, static routes to reach the HSM devices. Even /32 routes can be provided
Format: [ hsm network1/mask1 via gateway1, hsm network2/mask2 via gateway2 ] OR [ hsm network1/mask1 via gateway1 ]
Example: avi.hsm-static-routes.SE: [ 10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via 10.160.103.2]

YAML Parameter: avi.hsm-vnic-id.SE
Description: ID of the dedicated HSM vNIC; typically 3 on CSP (vNIC0 is the management interface, vNIC1 is the data-in interface, and vNIC2 is the data-out interface)
Format: numeric vNIC ID
Example: avi.hsm-vnic-id.SE: '3'

Instructions
CSP Configuration

To add a dedicated HSM vNIC on an existing SE CSP service, perform the following steps:

Note In the sample configuration provided below, vNIC3 is used, which is actually the fourth NIC
on the CSP service.

1 In the CSP user interface, navigate to Configuration > Service > Action > Power Off to power
off the NSX Advanced Load Balancer SE service.

2 Navigate to Configuration > Service > Action > Service Edit > Add vnic to add a new vNIC to
the SE with the desired parameters. Provide the VLAN id, VLAN type, VLAN tagged, Network
Name, Model, and so on, and click Submit.

3 Navigate to Configuration > Service > Action > Power ON to power on the SE service in the
CSP UI.


NSX Advanced Load Balancer Service Engine Configuration

1 Perform the following steps using NSX Advanced Load Balancer Service Engine bash shell.

ssh admin@<SE-MGMT-IP>
bash#
bash# sudo su
bash# /opt/avi/scripts/stop_se.sh
bash# mv /var/run/avi/ovf_properties.saved /home/admin

Note Perform a move operation; do not copy this file. Edit it to provide the three comma-
separated, HSM-dedicated NIC related parameters. The file looks like the following:

bash# cat /home/admin/ovf_properties.saved


AVICNTRL: 10.128.2.18, AVICNTRL_AUTHTOKEN: 1403771c- fc59-4d76-89b2-b3c35682b342,
avi.default-gw.SE: 10.128.2.1,
avi.hsm-ip.SE: 10.160.103.227/24,
avi.hsm-static-routes.SE:[10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via
10.160.103.2],
avi.hsm-vnic-id.SE: '3',
avi.mgmt-ip.SE: 10.128.2.27, ovf_source: CSP,
uuid: FCE9B12D-A1B0-4EF3-B922-BDC2A5F8AA11

bash# cp /home/admin/ovf_properties.saved /etc/ovf_config


bash# /opt/avi/scripts/start_se.sh

2 Verify that the dedicated vNIC information is applied correctly and the HSM devices are
reachable via this interface. In this sample configuration, the eth3 dedicated HSM interface
is configured with IP 10.160.103.227/24.

bash# ssh admin@<SE-MGMT-IP>


bash# ifconfig eth3
eth3 Link encap:Ethernet HWaddr 02:6a:80:02:11:05
inet addr:10.160.103.227 Bcast:10.160.103.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4454601 errors:0 dropped:1987 overruns:0 frame:0
TX packets:4510346 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:672683711 (672.6 MB) TX bytes:875329395 (875.3 MB)
bash# ip route
default via 10.128.2.1 dev eth0
10.128.1.0/24 via 10.160.103.1 dev eth3
10.128.2.0/24 via 10.160.103.2 dev eth3
10.128.2.0/24 dev eth0 proto kernel scope link src 10.128.2.27
10.160.103.0/24 dev eth3 proto kernel scope link src 10.160.103.227
bash# ping -I eth3 <HSM-IP>
ping -I eth3 10.128.1.51
PING 10.128.1.51 (10.128.1.51) from 10.160.103.227 eth3: 56(84) bytes of data.
64 bytes from 10.128.1.51: icmp_seq=1 ttl=62 time=0.229 ms


Configuring Dedicated Interfaces for ASM Communication on a New NSX Advanced Load Balancer
Service Engine
Dedicated sideband interfaces on NSX Advanced Load Balancer Service Engines use the following
configuration parameters. For new SEs, these parameters can be provided in the day-zero YAML
file.

YAML Parameters
avi.asm-ip.SE

n Description : This is the IP address of the dedicated sideband interface on the SE (this is NOT
the self IP or virtual service IP of the ASM device).

n Format : IP-address/subnet-mask.

n Example : avi.asm-ip.SE: 10.160.103.227/24

avi.asm-static-routes.SE

n Description : These are comma-separated, static routes to reach the sideband ASM virtual
service IP. Even /32 routes can be provided. The gateway will be the self IP of the ASM device.

Note If there is a single static route, provide the same and ensure the square brackets
are matched. Also, if the ASM virtual service IPs are in the same subnet as the dedicated
interfaces, provide the gateway as the default gateway for the subnet.

n Format : [ asm-vip-network1/mask1 via gateway1, asm-vip-network2/mask2 via gateway2 ] or


[ asm-vip-network1/mask1 via gateway1 ]

n Example : avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via


10.160.102.2]

avi.asm-vnic-id.SE

n Description : This is the ID of the dedicated ASM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface)

n Format : ‘numeric vNIC ID’.

n Example : avi.asm-vnic-id.SE: ‘3’


YAML Parameter: avi.asm-ip.SE
Description: IP address of the dedicated ASM vNIC on the SE (this is NOT the IP address of the ASM device)
Format: IP-address/subnet-mask
Example: avi.asm-ip.SE: 10.160.103.227/24

YAML Parameter: avi.asm-static-routes.SE
Description: Comma-separated, static routes to reach the ASM devices. Even /32 routes can be provided
Format: [ asm-vip-network1/mask1 via gateway1, asm-vip-network2/mask2 via gateway2 ] or [ asm-vip-network1/mask1 via gateway1 ]
Example: avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2]

YAML Parameter: avi.asm-vnic-id.SE
Description: ID of the dedicated ASM vNIC; typically 3 on CSP (vNIC0 is the management interface, vNIC1 is the data-in interface, and vNIC2 is the data-out interface)
Format: numeric vNIC ID
Example: avi.asm-vnic-id.SE: '3'

Instructions
A sample SE YAML file for the Day Zero configuration on the CSP will look as follows:

bash# cat avi_meta_data_dedicated_asm_SE.yml

avi.mgmt-ip.SE: "10.128.2.18"
avi.mgmt-mask.SE: "255.255.255.0"
avi.default-gw.SE: "10.128.2.1"
AVICNTRL: "10.10.22.50"
AVICNTRL_AUTHTOKEN: "febab55d-995a-4523-8492-f798520d4515"
avi.asm-vnic-id.SE: '3'
avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2]
avi.asm-ip.SE: 10.160.102.227/24

Once the SE is created with this Day Zero configuration and appropriate virtual NIC interfaces are
added to the SE service instance on CSP, verify that the dedicated vNIC configuration is applied
successfully and the ASM virtual service IPs are reachable via this interface. In this case, the
interface eth3 is the dedicated sideband ASM interface and is configured with IP 10.160.102.227/24.

bash# ssh admin@<SE-MGMT-IP>


bash# ifconfig eth3
eth3 Link encap:Ethernet HWaddr 02:6a:80:02:11:05
inet addr:10.160.102.227 Bcast:10.160.102.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4454601 errors:0 dropped:1987 overruns:0 frame:0
TX packets:4510346 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:672683711 (672.6 MB) TX bytes:875329395 (875.3 MB)
bash# ip route
default via 10.128.2.1 dev eth0
10.128.2.0/24 dev eth0 proto kernel scope link src 10.128.2.27
10.160.102.0/24 dev eth4 proto kernel scope link src 10.160.102.227


169.254.1.0/24 via 10.160.102.1 dev eth3


169.254.2.0/24 via 10.160.102.2 dev eth3
bash# ping -I eth3 <ASM-VIP>
ping -I eth3 169.254.1.10
PING 169.254.1.10 (169.254.1.10) from 10.160.102.227 eth3: 56(84) bytes of data.
64 bytes from 169.254.1.10: icmp_seq=1 ttl=62 time=0.229 ms

Configuring Dedicated Interfaces for ASM Communication on an Existing NSX Advanced Load
Balancer Service Engine
Dedicated sideband interfaces on NSX Advanced Load Balancer Service Engines use the following
configuration parameters. For existing SEs, these parameters can be populated in the /etc/
ovf_config file.

Note All parameters in this file are comma-separated and the file format is slightly different
from the YML file used for spinning up new Service Engines. However, the parameters and their
respective formats are exactly the same as they are for new Service Engines.

YAML parameters
avi.asm-ip.SE

n Description : This is the IP address of the dedicated sideband interface on the SE (this is NOT
the self IP or virtual service IP of the ASM device).

n Format : IP-address/subnet-mask.

n Example : avi.asm-ip.SE: 10.160.103.227/24

avi.asm-static-routes.SE

n Description : These are comma-separated, static routes to reach the sideband ASM virtual
service IPs. Even /32 routes can be provided. The gateway will be the self IP of the ASM
device.

Note If there is a single static route, provide the same and ensure the square brackets are
matched. Also, if the ASM virtual service IPs are in the same subnet as the dedicated interfaces,
provide the gateway as the default gateway for the subnet.

n Format : [ asm-vip-network1/mask1 via gateway1, asm-vip-network2/mask2 via gateway2 ] or


[ asm-vip-network1/mask1 via gateway1 ]

n Example : avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via


10.160.102.2]

avi.asm-vnic-id.SE

n Description : This is the ID of the dedicated ASM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface).

n Format : ‘numeric vNIC ID’.

n Example : avi.asm-vnic-id.SE: ‘3’


YAML Parameter: avi.asm-ip.SE
Description: IP address of the dedicated ASM vNIC on the SE (this is NOT the IP address of the ASM)
Format: IP-address/subnet-mask
Example: avi.asm-ip.SE: 10.160.103.227/24

YAML Parameter: avi.asm-static-routes.SE
Description: Comma-separated, static routes to reach the ASM devices. Even /32 routes can be provided
Format: [ asm-vip-network1/mask1 via gateway1, asm-vip-network2/mask2 via gateway2 ] or [ asm-vip-network1/mask1 via gateway1 ]
Example: avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2]

YAML Parameter: avi.asm-vnic-id.SE
Description: ID of the dedicated ASM vNIC; typically 3 on CSP (vNIC0 is the management interface, vNIC1 is the data-in interface, and vNIC2 is the data-out interface)
Format: numeric vNIC ID
Example: avi.asm-vnic-id.SE: '3'

Instructions
Follow the steps below to add a dedicated ASM vNIC on an existing SE CSP service. In
this example, vNIC 3 is used, which is actually the fourth NIC on the CSP service.

Configuration on Cisco CSP

n Navigate to Configuration > Services > Action > Power Off to power off the SE service on
Cisco CSP.

n To add a new vNIC to the SE with desired parameters, navigate to Configuration > Services
> Action > Service Edit and click Add vNIC and provide VLAN id, VLAN type, VLAN tagged,
network Name, Model etc., and click Submit.

n Navigate to Configuration > Services > Action and select Power On to power on the SE
service on Cisco CSP.

Configuration on NSX Advanced Load Balancer Service Engine


Perform the following steps on the Service Engine using bash shell.

n SSH to NSX Advanced Load Balancer SE IP and perform the following steps.

ssh admin@<SE-MGMT-IP>
bash#
bash# sudo su
bash# /opt/avi/scripts/stop_se.sh
bash# mv /var/run/avi/ovf_properties.saved /home/admin

Note Move the file; do not copy it. Edit it to provide the three comma-separated, ASM-dedicated
NIC related parameters. The file looks like the following:

bash# cat /home/admin/ovf_properties.saved
AVICNTRL: 10.128.2.18, AVICNTRL_AUTHTOKEN: 1403771c- fc59-4d76-89b2-b3c35682b342,
avi.default-gw.SE: 10.128.2.1,
avi.asm-ip.SE: 10.160.102.227/24,
avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2],
avi.asm-vnic-id.SE: '3',
avi.mgmt-ip.SE: 10.128.2.27, ovf_source: CSP,
uuid: FCE9B12D-A1B0-4EF3-B922-BDC2A5F8AA11}

bash# cp /home/admin/ovf_properties.saved /etc/ovf_config
bash# /opt/avi/scripts/start_se.sh

n Verify that the dedicated vNIC information is applied correctly and the ASM virtual service IPs
are reachable via this interface. In this case, the interface eth3 is the dedicated ASM interface
and is configured with IP 10.160.102.227/24. A quick check is shown below.
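
A minimal verification sketch, mirroring the checks shown for the other dedicated-interface
configurations in this section (the ASM VIP below is a placeholder):

bash# ifconfig eth3
bash# ip route
bash# ping -I eth3 <ASM-VIP>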

Configuring Dedicated Interfaces for HSM and Sideband Communication on a New NSX Advanced
Load Balancer Service Engine
This section explains how to configure dedicated interfaces for hardware security module (HSM)
and sideband (ASM) communication on a new NSX Advanced Load Balancer Service Engine.
Dedicated HSM and sideband interfaces on NSX Advanced Load Balancer Service Engines use
the following configuration parameters. For new SEs, these parameters can be provided in the
day-zero YAML file.

YAML parameters
HSM parameters

1 avi.hsm-ip.SE

a Description : This is the IP address of the dedicated HSM vNIC on the SE (this is NOT the
IP address of the HSM device).

b Format : IP-address/subnet-mask.

c Example : avi.hsm-ip.SE: 10.160.103.227/24


2 avi.hsm-static-routes.SE

a Description: These are comma-separated, static routes to reach HSM devices. Even /32
routes can be provided.

Note If there is a single static route, provide the same and ensure the square brackets are
matched. Also, if HSM devices are in the same subnet as the dedicated interfaces, provide
the gateway as the default gateway for the subnet.

b Format: [ hsm network1/mask1 via gateway1, hsm network2/mask2 via gateway2 ] OR


[ hsm network1/mask1 via gateway1 ]

c Example: avi.hsm-static-routes.SE:[ 10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via


10.160.103.2]

3 avi.hsm-vnic-id.SE

a Description: This is the ID of the dedicated HSM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface).

b Format: ‘numeric vNIC ID’.

c Example: avi.hsm-vnic-id.SE: ‘3’

YAML Parameter: avi.hsm-ip.SE
Description: IP address of the dedicated HSM vNIC on the SE (this is NOT the IP address of the HSM device)
Format: IP-address/subnet-mask
Example: avi.hsm-ip.SE: 10.160.103.227/24

YAML Parameter: avi.hsm-static-routes.SE
Description: Comma-separated, static routes to reach the HSM devices. Even /32 routes can be provided
Format: [ hsm network1/mask1 via gateway1, hsm network2/mask2 via gateway2 ] OR [ hsm network1/mask1 via gateway1 ]
Example: avi.hsm-static-routes.SE: [ 10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via 10.160.103.2]

YAML Parameter: avi.hsm-vnic-id.SE
Description: ID of the dedicated HSM vNIC; typically 3 on CSP (vNIC0 is the management interface, vNIC1 is the data-in interface, and vNIC2 is the data-out interface)
Format: numeric vNIC ID
Example: avi.hsm-vnic-id.SE: '3'

ASM parameters

1 avi.asm-ip.SE

a Description: This is the IP address of the dedicated sideband interface on the SE (this is
NOT the self IP or virtual service IP of the ASM device).

b Format: IP-address/subnet-mask.

c Example: avi.asm-ip.SE: 10.160.103.227/24


2 avi.asm-static-routes.SE

a Description: These are comma-separated, static routes to reach the sideband ASM virtual
service IPs. Even /32 routes can be provided. The gateway will be the self IP of the ASM device.

Note If there is a single static route, provide the same and ensure the square brackets
are matched. Also, if the ASM virtual service IPs are in the same subnet as the dedicated
interfaces, provide the gateway as the default gateway for the subnet.

b Format: [ asm-vip-network1/mask1 via gateway1, asm-vip-network2/mask2 via gateway2 ]


or [ asm-vip-network1/mask1 via gateway1 ]

c Example: avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via


10.160.102.2]

3 avi.asm-vnic-id.SE

a Description: This is the ID of the dedicated ASM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface)

b Format: ‘numeric vNIC ID’.

c Example: avi.asm-vnic-id.SE: ‘3’

YAML Parameter: avi.asm-ip.SE
Description: IP address of the dedicated ASM vNIC on the SE (this is NOT the IP address of the ASM)
Format: IP-address/subnet-mask
Example: avi.asm-ip.SE: 10.160.103.227/24

YAML Parameter: avi.asm-static-routes.SE
Description: Comma-separated, static routes to reach the ASM devices. Even /32 routes can be provided
Format: [ asm-vip-network1/mask1 via gateway1, asm-vip-network2/mask2 via gateway2 ] or [ asm-vip-network1/mask1 via gateway1 ]
Example: avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2]

YAML Parameter: avi.asm-vnic-id.SE
Description: ID of the dedicated ASM vNIC; typically 3 on CSP (vNIC0 is the management interface, vNIC1 is the data-in interface, and vNIC2 is the data-out interface)
Format: numeric vNIC ID
Example: avi.asm-vnic-id.SE: '3'

Instructions
A sample Service Engine YAML file for the Day Zero configuration on Cisco CSP looks as
follows:

bash# cat avi_meta_data_dedicated_asm_hsm_SE.yml


avi.mgmt-ip.SE: "10.128.2.18"
avi.mgmt-mask.SE: "255.255.255.0"
avi.default-gw.SE: "10.128.2.1"
AVICNTRL: "10.10.22.50"
AVICNTRL_AUTHTOKEN: "febab55d-995a-4523-8492-f798520d4515"


avi.hsm-ip.SE: 10.160.103.227/24
avi.hsm-static-routes.SE:[ 10.128.1.0/24 via 10.160.103.1, 10.128.2.0/24 via 10.160.103.2]
avi.hsm-vnic-id.SE: '3'
avi.asm-vnic-id.SE: '4'
avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2]
avi.asm-ip.SE: 10.160.102.227/24

Once the SE is created with this Day Zero configuration and appropriate virtual NIC interfaces are
added to the SE service instance in CSP, verify that the dedicated vNIC configuration is applied
successfully and the HSM devices and ASM virtual service IPs are reachable via the dedicated
interfaces. In this sample configuration, the interface eth3 is configured as the dedicated HSM
interface with IP 10.160.103.227/24 and the interface eth4 is configured as the sideband ASM
interface with IP 10.160.102.227/24.

Note NSX Advanced Load Balancer Service Engine requires five interfaces for this configuration.
n vNIC0: Management interface

n vNIC1: Data in interface

n vNIC2: Data out interface

n vNIC3: Dedicated HSM interface

n vNIC4: Dedicated sideband interface

To verify the configuration of both dedicated interfaces, SSH to the NSX Advanced Load Balancer
SE IP, run the ip route command, and perform a ping test.

bash# ssh admin@10.10.2.18


bash# ifconfig eth3
eth3 Link encap:Ethernet HWaddr 02:6a:80:02:11:05
inet addr:10.160.103.227 Bcast:10.160.103.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4454601 errors:0 dropped:1987 overruns:0 frame:0
TX packets:4510346 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:672683711 (672.6 MB) TX bytes:875329395 (875.3 MB)

bash# ip route
default via 10.10.2.1 dev eth0
10.10.1.0/24 via 10.160.103.1 dev eth3
10.10.2.0/24 via 10.160.103.2 dev eth3
10.10.2.0/24 dev eth0 proto kernel scope link src 10.128.2.27
10.160.103.0/24 dev eth3 proto kernel scope link src 10.160.103.227
bash# ping -I eth3 <HSM-IP>
ping -I eth3 10.10.1.51
PING 10.10.1.51 (10.128.1.51) from 10.160.103.227 eth3: 56(84) bytes of data.
64 bytes from 10.10.1.51: icmp_seq=1 ttl=62 time=0.229 ms


Configuring Dedicated Interfaces for ASM Communication on an Existing NSX Advanced Load
Balancer Service Engine
Dedicated sideband interfaces on NSX Advanced Load Balancer Service Engines use the following
configuration parameters. For existing SEs, these parameters can be populated in the /etc/
ovf_config file.

Note All parameters in this file are comma-separated and the file format is slightly different
from the YML file used for spinning up new Service Engines. However, the parameters and their
respective formats are exactly the same as they are for new Service Engines.

YAML parameters
avi.asm-ip.SE

n Description: This is the IP address of the dedicated sideband interface on the SE (this is NOT
the self IP or virtual service IP of the ASM device).

n Format : IP-address/subnet-mask.

n Example: avi.asm-ip.SE: 10.160.103.227/24

avi.asm-static-routes.SE

n Description: These are comma-separated, static routes to reach the sideband ASM virtual
service IPs. Even /32 routes can be provided. The gateway will be the self IP of the ASM
device.

Note If there is a single static route, provide the same and ensure the square brackets
are matched. Also, if the ASM virtual service IPs are in the same subnet as the dedicated
interfaces, provide the gateway as the default gateway for the subnet.

n Format: [ asm-vip-network1/mask1 via gateway1, asm-vip-network2/mask2 via gateway2 ] or


[ asm-vip-network1/mask1 via gateway1 ]

n Example: avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via


10.160.102.2]

avi.asm-vnic-id.SE

n Description: This is the ID of the dedicated ASM vNIC and is typically 3 on CSP (vNIC0 is
management interface, vNIC1 is data-in interface, and vNIC2 is data-out interface)

n Format: ‘numeric vNIC ID’.

n Example: avi.asm-vnic-id.SE: ‘3’

YAML Parameter: avi.asm-ip.SE
Description: IP address of the dedicated ASM vNIC on the SE (this is NOT the IP address of the ASM)
Format: IP-address/subnet-mask
Example: avi.asm-ip.SE: 10.160.103.227/24

YAML Parameter: avi.asm-static-routes.SE
Description: Comma-separated, static routes to reach the ASM devices. Even /32 routes can be provided
Format: [ asm-vip-network1/mask1 via gateway1, asm-vip-network2/mask2 via gateway2 ] or [ asm-vip-network1/mask1 via gateway1 ]
Example: avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2]

YAML Parameter: avi.asm-vnic-id.SE
Description: ID of the dedicated ASM vNIC; typically 3 on CSP (vNIC0 is the management interface, vNIC1 is the data-in interface, and vNIC2 is the data-out interface)
Format: numeric vNIC ID
Example: avi.asm-vnic-id.SE: '3'

Instructions
Follow the steps below to add a dedicated ASM vNIC on an existing SE CSP service. In
this example, vNIC 3 is used, which is actually the fourth NIC on the CSP service.

Configuration on Cisco CSP

n Navigate to Configuration > Services > Action > Power Off to power off the SE service on
Cisco CSP.

n To add a new vNIC to the SE with desired parameters, navigate to Configuration > Services >
Action > Service Edit and click Add vNIC then provide VLAN id, VLAN type, VLAN tagged,
network Name, Model etc. Click Submit.

n Navigate to Configuration > Services > Action and select Power On to power on the SE
service on Cisco CSP.

Configuration on NSX Advanced Load Balancer Service Engine

Perform the following steps on the Service Engine using bash shell.

n SSH to NSX Advanced Load Balancer SE IP and perform the following steps.

ssh admin@<SE-MGMT-IP>
bash#
bash# sudo su


bash# /opt/avi/scripts/stop_se.sh
bash# mv /var/run/avi/ovf_properties.saved /home/admin

Note Move the file; do not copy it. Edit the file to provide the three comma-separated, ASM-
dedicated NIC related parameters. The file looks like the following:

bash# cat /home/admin/ovf_properties.saved

AVICNTRL: 10.128.2.18, AVICNTRL_AUTHTOKEN: 1403771c- fc59-4d76-89b2-b3c35682b342,


avi.default-gw.SE: 10.128.2.1,
avi.asm-ip.SE: 10.160.102.227/24,
avi.asm-static-routes.SE: [169.254.1.0/24 via 10.160.102.1, 169.254.2.0/24 via 10.160.102.2],
avi.asm-vnic-id.SE: '3',
avi.mgmt-ip.SE: 10.128.2.27, ovf_source: CSP,
uuid: FCE9B12D-A1B0-4EF3-B922-BDC2A5F8AA11}

bash# cp /home/admin/ovf_properties.saved /etc/ovf_config


bash# /opt/avi/scripts/start_se.sh

n Verify that the dedicated vNIC information is applied correctly and the ASM virtual service IPs
are reachable via this interface. In this case, the interface eth3 is the dedicated ASM interface
and is configured with IP 10.160.102.227/24.

bash# ssh admin@<SE-MGMT-IP>


bash# ifconfig eth3
eth3 Link encap:Ethernet HWaddr 02:6a:80:02:11:05
inet addr:10.160.102.227 Bcast:10.160.102.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4454601 errors:0 dropped:1987 overruns:0 frame:0
TX packets:4510346 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:672683711 (672.6 MB) TX bytes:875329395 (875.3 MB)
bash# ip route
default via 10.128.2.1 dev eth0
10.128.2.0/24 dev eth0 proto kernel scope link src 10.128.2.27
10.160.102.0/24 dev eth4 proto kernel scope link src 10.160.102.227
169.254.1.0/24 via 10.160.102.1 dev eth3
169.254.2.0/24 via 10.160.102.2 dev eth3
bash# ping -I eth3 <ASM-VIP>
ping -I eth3 169.254.1.10
PING 169.254.1.10 (169.254.1.10) from 10.160.102.227 eth3: 56(84) bytes of data.
64 bytes from 169.254.1.10: icmp_seq=1 ttl=62 time=0.229 ms

Configuring Dedicated Interfaces for HSM Communication on New NSX Advanced Load Balancer
Controller
Dedicated HSM interfaces on an NSX Advanced Load Balancer Controller use the following YAML
parameters:

n avi.hsm-ip.Controller

n avi.hsm-static-routes.Controller


n avi.hsm-vnic-id.Controller

YAML parameters
For configuration on a new NSX Advanced Load Balancer Controller, these parameters can be
provided in the day-zero YAML file.

avi.hsm-ip.Controller

n Description: This is the IP address of the dedicated HSM vNIC on the Controller (this is not the
IP address of the HSM).

n Format: IP-address/subnet-mask

n Example: avi.hsm-ip.Controller: 10.160.103.230/24

avi.hsm-static-routes.Controller

n Description: These are comma-separated, static routes to reach the HSM devices from the
respective NSX Advanced Load Balancer Controller. Even /32 routes can be provided.

Note If there is a single static route, provide the same and ensure the square brackets are
matched. Also, if the HSM devices are in the same subnet as the dedicated interfaces, provide
the gateway as the default gateway for the subnet.

n Format: [ hsm-network1/mask1 via gateway1, hsm-network2/mask2 via gateway2 ] or [ hsm-


network1/mask1 via gateway1 ]

n Example: avi.hsm-static-routes.Controller: [10.128.1.0/24 via 10.160.103.1, 10.130.1.0/24 via


10.160.103.1]

avi.hsm-vnic-id.Controller

n Description: This is the ID of the dedicated HSM vNIC and is typically 1 on CSP. vNIC0 is the
management interface, which is the only interface on NSX Advanced Load Balancer Controller
by default.

n Format: ‘numeric-vnic-id’

n Example: avi.hsm-vnic-id.Controller: ‘1’

YAML Parameter: avi.hsm-ip.Controller
Description: IP address of the dedicated HSM vNIC on the Avi Controller (this is not the IP address of the HSM device)
Format: IP-address/subnet-mask
Example: avi.hsm-ip.Controller: 10.160.103.230/24

YAML Parameter: avi.hsm-static-routes.Controller
Description: Comma-separated, static routes to reach the HSM devices from the respective Avi Controllers. Even /32 routes can be provided
Format: [ hsm-network1/mask1 via gateway1, hsm-network2/mask2 via gateway2 ] or [ hsm-network1/mask1 via gateway1 ]
Example: avi.hsm-static-routes.Controller: [10.128.1.0/24 via 10.160.103.1, 10.130.1.0/24 via 10.160.103.1]

YAML Parameter: avi.hsm-vnic-id.Controller
Description: ID of the dedicated HSM vNIC; typically 1 on CSP
Format: numeric-vnic-id
Example: avi.hsm-vnic-id.Controller: '1'


Instructions
A sample NSX Advanced Load Balancer Controller service YAML file for the Day Zero
configuration on the CSP looks as follows:

bash# cat avi_meta_data_ctlr-dedicated-hsm.yml

avi.default-gw.Controller: 10.128.2.1
avi.mgmt-ip.Controller: 10.128.2.30
avi.mgmt-mask.Controller: 255.255.255.0
avi.hsm-ip.Controller: 10.160.103.230/24
avi.hsm-static-routes.Controller: [10.128.1.0/24 via 10.160.103.1, 10.130.1.0/24 via
10.160.103.1]
avi.hsm-vnic-id.Controller: '1'

Once the NSX Advanced Load Balancer Controller is created with this Day Zero configuration and
the additional virtual NIC interface is added to the Controller service instance on CSP, verify that
the dedicated vNIC configuration is applied successfully and the HSM devices are reachable via
the dedicated interface. In this case, eth1 is configured as the dedicated HSM interface with IP
10.160.103.230/24.

bash# ssh admin@<CONTROLLER-MGMT-IP>


bash# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 02:4a:80:02:11:04
inet addr:10.160.103.230 Bcast:10.160.103.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:342620 errors:0 dropped:2855 overruns:0 frame:0
TX packets:78 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:29201376 (29.2 MB) TX bytes:11230 (11.2 KB)
bash# ip route
default via 10.128.2.1 dev eth0
10.128.1.0/24 via 10.160.103.1 dev eth1
10.128.2.0/24 dev eth0 proto kernel scope link src 10.128.2.18
10.130.1.0/24 via 10.160.103.1 dev eth1
10.160.103.0/24 dev eth1 proto kernel scope link src 10.160.103.218
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
bash# ping -I eth1 <HSM-IP>
ping -I eth1 10.130.1.10
PING 10.130.1.10 (10.130.1.10) from 10.160.103.230 eth1: 56(84) bytes of data.
64 bytes from 10.130.1.10: icmp_seq=1 ttl=62 time=0.229 ms

10 FIPS Compliance in NSX Advanced Load Balancer
The Federal Information Processing Standard (FIPS) 140-2 is a U.S. and Canadian government
standard developed by the National Institute of Standards and Technology (NIST) that defines the
security standards for cryptographic modules. The FIPS 140-2 standard specifies and validates
the cryptographic and operational requirements for the modules within security systems that
protect sensitive information. These modules employ NIST-approved security functions such as
cryptographic algorithms, key sizes, key management and authentication techniques.

For a list of FIPS 140-2 compliant algorithms, see:

n Annexure A: Approved Security Functions for FIPS PUB 140-2, Security Requirements for
Cryptographic Modules.

n Annexure C: Approved Random Number Generators for FIPS PUB 140-2, Security
Requirements for Cryptographic Modules.

There are four levels of security in the FIPS 140-2 standard; each level addresses different
areas of the design and implementation of a tool's cryptographic modules. The following are
the levels of security:

n Level-1: This defines the standards for basic security in a cryptographic module and enables
FIPS approved cipher suites.

n Level-2: This defines the standards for tamper-evidence physical security and role-based
authentication of cryptographic modules. Tamper-evidence physical security includes tamper-
evident coatings, seals, or pick-resistant locks.

n Level-3: This defines standards for tamper-resistance physical security and identity-based
authentication. Hardware devices must have internal HSMs with tamper-resistant features such
as a sealed epoxy cover, which when removed, must render the device useless and make the
keys inaccessible.

n Level-4: This requires tamper detection circuits to detect any device penetration, and erase
the contents of the device in the event of tampering.

VMware has specifically obtained FIPS 140-2 validation of its OpenSSL FIPS Object Module
v2.0.20-vmw that is used in NSX Advanced Load Balancer components.


The OpenSSL FIPS Object Module v2.0.20-vmw is a general-purpose cryptographic module
that provides FIPS-approved cryptographic functions and services to products and components
of VMware. The module has been validated at FIPS 140-2 security Level 1 and awarded
Certificate #3550 by CMVP.

Note Security Levels 2–4 are specific to various levels of physical security, such as:
n Tamper-evidence physical security: This includes tamper-evident coatings, seals, or pick-
resistant locks.

n Tamper-resistance physical security: This includes features such as a sealed epoxy cover to
protect the hardware device.

These security levels do not apply to software solutions, where hardware is used to run the
software solution.

For more information, see FIPS documentation in VMware.

FIPS Compliance for NSX Advanced Load Balancer


NSX Advanced Load Balancer supports FIPS mode for the entire system, that is:

n The control plane, which consists of the Controller or Controller cluster.

n The data plane, which consists of the Service Engines (SEs).

The NSX Advanced Load Balancer uses the FIPS canister 2.0.20-vmw, which is compliant with
FIPS 140-2 Level 1 cryptography.

Supported Environments
FIPS is supported when:

n The Controller cluster is deployed in a VMware vSphere environment.

n The SEs are deployed in a VMware vSphere environment, specifically with the following cloud
connectors:

n VMware vCenter and NSX-T Cloud.

n No-Orchestrator cloud running on VMware vSphere.

FIPS is supported for both single-Controller and Controller cluster-based deployments.

Enabling FIPS Mode - Considerations


When enabling FIPS mode for NSX Advanced Load Balancer, consider the following:

n FIPS mode can be enabled only on deployments where no Service Engines are present.

n FIPS mode is enabled on the entire system, either on the Controller or on all nodes in case of a
cluster. FIPS is also enabled on all the SEs.


n There is no option to selectively enable FIPS for specific components, only for Controller, SEs,
or specific SE Groups.

n Once the NSX Advanced Load Balancer system is in FIPS mode, you cannot disable the FIPS
mode.

Enabling FIPS Mode for a Single Controller Deployment


The following are the steps to enable FIPS mode for a single Controller deployment:

1 Ensure that the Controller does not have any SEs deployed. It is recommended to disable all
virtual services and delete any existing SEs.

2 Create the Controller cluster before enabling FIPS.

3 Upload the controller.pkg file (i.e., the upgrade package) for the same Controller base
version, to the Controller node. For instance, if the Controller being used is on version 20.1.5,
upload the 20.1.5 controller.pkg to the Controller.

For step-by-step instructions on how to upload, see Flexible Upgrades for NSX Advanced
Load Balancer.

4 Enable FIPS mode through the CLI:

[admin:avi-cntrl]: > system compliancemode fips_mode


+----------------------+------------------------------------------+
| Field | Value |
+----------------------+------------------------------------------+
| fips_mode | True |
| common_criteria_mode | False |
| force | False |
| details[1] | 'Compliance mode transition started. Use 'show upgrade status'
to check the stat |
| | us.' |
+----------------------+------------------------------------------+

The Controller reboots and returns online in FIPS mode.
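
The transition runs as a system operation. As suggested in the command output above, its progress
can be tracked from the same CLI shell; the exact output varies by release and is not reproduced
here:

[admin:avi-cntrl]: > show upgrade status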

Enabling FIPS Mode for a Controller Cluster Deployment


1 Ensure that the Controller does not have any SEs deployed. It is recommended to disable all
virtual services and delete any available SEs.

2 Create the Controller cluster before enabling FIPS.

3 Upload the controller.pkg file, i.e., the upgrade package, for the same Controller base
version, to the leader node. For instance, if the version of the Controller being used is 20.1.5,
upload the 20.1.5 version of controller.pkg to the leader.

For step-by-step instructions on how to upload, see Flexible Upgrades for NSX Advanced
Load Balancer.


4 Enable FIPS mode through the CLI:

> system compliancemode fips_mode


+----------------------+-------------------------------------------+
| Field | Value |
+----------------------+-------------------------------------------+
| fips_mode | True |
| common_criteria_mode | False |
| force | False |
| details[1] | 'Compliance mode transition started. Use 'show upgrade status' to check the status.' |
+----------------------+-------------------------------------------+

The Controller nodes reboot and return online in FIPS mode.

Verifying FIPS Mode


You can verify whether FIPS mode has been successfully enabled by using the following commands:

[admin:avi-cntrl]: > show version controller


+-----------------+--------------------------------------+-------+------+
| Controller Name | Version | Patch | Fips |
+-----------------+--------------------------------------+-------+------+
| 100.65.32.101 | 20.1.5(5000) 2021-04-15 09:36:00 UTC | - | True |
+-----------------+--------------------------------------+-------+------+

[admin:admin-ctrl-write]: > show version serviceengine


No results.
[admin:avi-cntrl]: > show version serviceengine
+--------------+--------------------------------------+-------+------+
| SE Name | Version | Patch | Fips |
+--------------+--------------------------------------+-------+------+
| Avi-se-rencf | 20.1.5(5000) 2021-04-15 09:36:00 UTC | - | True |
| Avi-se-nvlwj | 20.1.5(5000) 2021-04-15 09:36:00 UTC | - | True |
+--------------+--------------------------------------+-------+------+

Disaster Recovery Considerations


n Restoring the Configuration to a new Controller Cluster: Restoring the NSX Advanced
Load Balancer configuration from a FIPS enabled deployment can be performed only on a
Controller which has FIPS mode enabled. Ensure that the destination Controller or Controller
cluster has FIPS enabled before performing a configuration import.

n Adding a new Controller node to a Cluster: A Controller cluster requires all the nodes to be
FIPS enabled. If a Controller node needs to be replaced with a new Controller node, ensure
that the new node has FIPS enabled, before adding it to the Controller cluster.


n Upgrading a Deployment with FIPS Mode Enabled: Upgrade and Patch Upgrade in the FIPS
mode follow the same process as the non-FIPS deployments. No special considerations are
required for FIPS deployments.

n Disabling FIPS Mode: Once enabled, disabling of FIPS compliance mode is not supported.

Features Unavailable in the FIPS-Compliant Mode


On enabling FIPS compliance in NSX Advanced Load Balancer, only cryptographic algorithms that are FIPS-compliant are used. The following non-compliant modules are unavailable in order to adhere to the FIPS 140-2 standards:

n RADIUS health monitor.

Note RADIUS as an L4 application is supported.

n In BGP, the setting of md5_secret for peers.

n TLS v1.3 and 0-RTT (the enable_early_data option under the SSL Profile).

n Hardware Security Modules (HSM devices) such as Safenet and CloudHSM.

n 1024 RSA Key

n The set of elliptic curves (EC) which are not supported as per OpenSSL FIPS Object Module of
VMware.

n Async SSL (This is a feature under the SE Group that goes in tandem with the HSM
configuration. This feature is not relevant when HSM is not allowed).

n L7 Sideband

n HTTP(S) Health Monitor with NTML authentication.

n HTTP cookie persistence key rotation.

n Use of flushdb.sh for Controller recovery scenarios is not supported. It is recommended to use clean_cluster.py instead. Both these scripts should be used under the supervision of the NSX Advanced Load Balancer Support team.



11 CIS Compliance for NSX Advanced Load Balancer
The Center for Internet Security (CIS) identifies, develops, validates, promotes, and sustains best
practice solutions for cyber defense and helps communities to enable an environment of trust in
cyberspace.

CIS employs a closed crowdsourcing model to identify and refine effective security measures,
where individual recommendations are shared with the community for evaluation through a
consensus decision-making process. At a national and international level, CIS plays an essential
role in forming security policies and decisions by maintaining CIS Controls and CIS Benchmarks
and hosting the Multi-State Information Sharing and Analysis Center (MS-ISAC).

CIS Controls
CIS Controls and CIS Benchmarks provide global standards for internet security. The CIS
Benchmark is categorized as Controls, and each Control is a collection of standard security
tests. The CIS Controls include the popular 20 security controls, which map to many compliance
standards. The CIS Controls advocate a defense-in-depth model to prevent and detect malware. For instance, Control 1.1 covers Filesystem Configurations, a collection of tests like 1.1.1 - Disable unused filesystems, which in turn comprises sub-tests such as 1.1.1.1 - Ensure mounting of cramfs filesystems is disabled, 1.1.1.2 - Ensure mounting of freevxfs filesystems is disabled, and others.

For more information on the relevant Controls and tests for the Distribution Independent Linux Benchmark, see the CIS Ubuntu Linux LTS Benchmark, available for download at https://learn.cisecurity.org/benchmarks.

The individual tests are marked at either Level 1 or Level 2.

Level 1 tests are part of the CIS 1.0 profile. As per CIS, the Level 1 - Server profile tests are practical and prudent and are intended to provide a clear security benefit without inhibiting the utility of the technology beyond acceptable means.

Level 2 tests are part of the CIS 2.0 profile. This profile is an extension of the Level 1 - Server
profile and includes both Level 1 and Level 2 tests. As per CIS, the tests are intended for
environments or use cases where security is paramount for a deep defense mechanism. These
tests may negatively inhibit the utility or performance of the technology.

Note The Benchmark declares a Control as failed, even if one test within the Control fails.


Enabling CIS Mode


Enable the CIS mode on NSX Advanced Load Balancer to successfully run the tests associated
with specific Control. Configure the CIS mode command under system configuration as shown
below:

[admin:10-1-1-1]: > configure systemconfiguration


[admin:10-1-1-1]: systemconfiguration> linux_configuration
[admin:10-1-1-1]: systemconfiguration:linux_configuration> cis_mode
Overwriting the previously entered value for cis_mode

[admin:10-1-1-1]: systemconfiguration:linux_configuration> where


Tenant: admin
+----------+-------+
| Field | Value |
+----------+-------+
| motd | |
| banner | |
| cis_mode | True |
+----------+-------+

Configuring CIS mode enables iptables, which covers the entire 3.6.X set of Controls.

This configuration applies only to Controllers and Service Engines created after the change. If CIS mode needs to be enabled for existing SEs, follow one of the suggested approaches below:

n No downtime: Scale out all Service Engines so that the services fall onto the newly spun SEs.
CIS mode will be enabled on the newly created SEs. Scale in to fall back to the former setup,
but with CIS mode SEs.

n With downtime: Reboot the SEs. When the SEs come back online, the CIS mode will be
enabled.

Non-Applicable Benchmark tests for NSX Advanced Load Balancer Service Engine
The NSX Advanced Load Balancer Controller and SEs are purpose-built to provide elastic distributed load balancing functionality and run only the services that are required to provide this functionality. Only the admin user can log in to the SE VM for troubleshooting and recovery. NSX Advanced Load Balancer conforms to the CIS Benchmark tests with a few exceptions, as listed below. The text following each exception states the reason for it.

Note The list below only indicates the Benchmark denomination. For more information on the
Benchmarks tests, see CIS Benchmarks Landing Page.

1.1 File System Configurations

n UDF File System – 1.1.1.7: Requires the UDF kernel not to load, leading to Service Engine
boot-up issues and failure to connect to the Controller.


n Separate Partitions – 1.1.2 to 1.1.17: Requires a separate partition for /tmp, /var, /var/log, /var/log/audit, and /home. This does not comply with the two separate logical partitions designed to allow NSX Advanced Load Balancer version rollback.

1.3 File System Integrity Checking

n File System Integrity Checking – 1.3.2: Requires installation of the aide tool, which is CPU-intensive and leads to prolonged runs.

1.4 Secure Boot Settings

n 1.4.1 to 1.4.4: Requires a password-based grub bootloader menu that will interfere with the
NSX Advanced Load Balancer single-click upgrade functionality.

3.4 TCP Wrappers.

n 3.4.3: Requires adding default deny in /etc/hosts.deny, which would impact Service Engine
connectivity with the NSX Advanced Load Balancer Controller.

5.4.1 Set Shadow Password Suite Parameters

n 5.4.1.1 to 5.4.1.4: Requires enforcing password policy at the Service Engine level.

NSX Advanced Load Balancer supports only admin users. The Controller manages the password
policy, and when the admin user password is changed, it synchronizes this password across the
fleet of SEs. So, no password enforcement is required at the SE level.

Additional Information
For more information on executing Benchmarks using the Inspec tool, see Executing Benchmarks
using Inspec.



12 DDoS Attack Mitigation
In most deployments, NSX Advanced Load Balancer is directly exposed to public, untrusted
networks. To protect application traffic, Service Engines (SEs) are able to detect and mitigate a
wide range of Layer 4-7 network attacks.

The following is a list of common denial of service (DoS) attacks and distributed DoS (DDoS) attacks mitigated by NSX Advanced Load Balancer.

Layer 3

SMURF
Description: ICMP packets with the destination IP set as the broadcast IP and the source IP spoofed to the victim's IP.
Mitigation: Packets are dropped at the dispatcher layer if the source or destination IP is a broadcast IP or a class D/E IP address.

ICMP Flood
Description: Excessive ICMP echo requests to the victim.
Mitigation: ICMP packets are rate limited.

Unknown protocol
Description: Packets with an unrecognized IP protocol.
Mitigation: Packets are dropped at the dispatcher layer.

Tear drop
Description: Exploits the reassembly of fragmented IP packets.
Mitigation: Packets are dropped in the protocol stack in the SE if fragment offsets are deemed bad.

IP fragmentation
Description: Bad fragmented packets.
Mitigation: Packets are dropped in the protocol stack in the SE.

Layer 4

SYN flood
Description: Send TCP SYNs without acknowledging SYN-ACKs; the victim's TCP table grows rapidly.
Mitigation: If the TCP table is being filled with half connections, such as uncompleted TCP 3-way handshakes, the SE begins using SYN cookies.

LAND
Description: Same as SYN flood, except the source and destination IP addresses are identical.
Mitigation: Packets are dropped at the dispatcher layer.

Port scan
Description: TCP/UDP packets on various ports to find listening ports for the next level of attacks; most of those ports are non-listening ports.
Mitigation: Packets are dropped at the dispatcher layer.

X-mas tree
Description: TCP packets with all the flags set to various values to overwhelm the victim's TCP stack.
Mitigation: Packets are dropped in the protocol stack of the SE.

Bad RST flood
Description: Send TCP RST packets with bad sequence numbers.
Mitigation: Packets are dropped in the protocol stack in the SE if the packet sequence numbers are outside the TCP window.

Fake session
Description: Guess TCP sequence numbers to hijack connections.
Mitigation: To reduce the chance of success for a fake session attack, the SE uses random numbers for the initial sequence numbers.

Bad sequence numbers
Description: TCP packets with bad sequence numbers.
Mitigation: Packets with sequence numbers outside the TCP window are dropped in the protocol stack in the SE.

Malformed/Unexpected flood
Description: Unrelated TCP packets after a TCP FIN has been sent.
Mitigation: Unexpected packets after the FIN are dropped in the protocol stack in the SE.

Zero/small window
Description: The attacker advertises a zero or very small window (<100) after the TCP 3-way handshake.
Mitigation: If the first TCP packet from the client, after a SYN, is received with a zero or small window, the SE drops the packet and sends a RST.

Rate limiting CPS per IP
Description: Connection flood.
Mitigation: The rate limits configured in the application profile are applied (Application Profile > HTTP > DDoS > Rate Limit HTTP and TCP Settings).

SSL errors
Description: Inject SSL handshake errors.
Mitigation: The SE closes the connection after an error.

SSL renegotiation
Description: Request for renegotiation after establishing an SSL connection.
Mitigation: Client-triggered renegotiation is disabled.

Layer 7 (HTTP)

Request idle timeout
Description: Establishing a connection without sending an HTTP request.
Mitigation: The timeout configured in the application profile is used (Application Profile > HTTP > DDoS > HTTP Limit Settings > Post Accept Timeout).

Size limit for header and request
Description: Resource consumption via long request time.
Mitigation: The header-size limits configured in the application profile are used (Application Profile > HTTP > DDoS > HTTP Limit Settings > HTTP Size Settings).

Slow POST
Description: Resource consumption via long request time.
Mitigation: The body-size limits configured in the application profile are used (Application Profile > HTTP > DDoS > HTTP Limit Settings > HTTP Size Settings).

SlowLoris / SlowPost
Description: Opening multiple connections to the victim by sending partial HTTP requests.
Mitigation: The header and body timeouts configured in the application profile are used.

Invalid requests
Description: Invalid header, body, or entity in an HTTP request.
Mitigation: The URI length, header length, and body length limits configured in the application profile are used.

Rate limiting RPS per client IP
Description: Request flood.
Mitigation: The limit configured in the application profile is used (Application Profile > HTTP > DDoS > Rate Limit HTTP and TCP Settings).

Rate limiting RPS per URL
Description: Request flood.
Mitigation: The limit configured in the application profile is used (Application Profile > HTTP > DDoS > Rate Limit HTTP and TCP Settings).

Layer 7 (DNS)

DNS Amplification Egress
Description: The DNS virtual service is targeted by sending very short queries which solicit very large responses, spanning multiple UDP packets. The DNS virtual services can be made to participate in a reflection attack. The attacker spoofs the DNS query's source IP and source port to be that of a well-known service port on a victim server.
Mitigation: Any requests coming from a defined range of source ports (well-known ports) will be denied. The range of ports to be denied is configured in the Security Policy. To learn how to configure a security policy for DNS Amplification Egress DDoS protection, see the Configure Security Policy for DNS Amplification Egress DDoS Protection section.

DNS Reflection Ingress
Description: Sending DNS queries with the spoofed IP address of the victim, resulting in swamping the victim with unsolicited traffic through the DNS server responses.
Mitigation: Early dropping of unwanted packets at the dispatcher.

DNS NXDOMAIN Attack
Description: Attackers send a flood of queries to resolve domains that do not exist. Usually, randomly generated, unlikely domain names are used for the attack.
Detection: Events are raised for the domains/sub-domains that are under attack. The event also mentions the clients causing the attack.
Mitigation (with manual configuration):
n Configure valid sub-domains as described in the Support for Authoritative Domains, NXDOMAIN Responses, NS and SOA Records guide.
n Add a DNS Policy for early dropping or rate-limiting of DNS queries to a domain.
n Add a Network Security Policy for early dropping or rate-limiting of DNS queries from suspected clients.

DDoS Insights
The DDoS section on the right of the default security page breaks down distributed denial of
service data for the virtual service into the most relevant layer 4 and layer 7 attack data.


n L4 Attacks: The number of network attacks per second, such as IP fragmentation attacks
or TCP SYN flood. For the example shown here, each unacknowledged SYN is counted as
an attack. (This is the classic signature of the TCP SYN flood attack, a large volume of SYN
requests that are not followed by the expected ACKs to complete session setup.)

n L7 Attacks: The number of application attacks per second, such as HTTP SlowLoris attacks
or request floods. For the example shown here, every request that exceeded the configured
request throttle limit is counted as an attack. (See the application profile’s DDoS tab for
configuring custom layer seven attack limits.)

n Attack Duration: The length of time during which an attack occurred.

n Blocked Connections: If an attack was blocked, this is the number of connection attempts
blocked.

n Attack Count: Shows attacks plotted in a graph over time.

This chapter includes the following topics:

n Rate Limiters

n Configure Security Policy for DNS Amplification Egress DDoS Protection

Rate Limiters
Rate limiters are used to control the rate (count/period) of requests or connections sent or received from a network. For instance, if a virtual service is configured to allow 1000 connections per second and the number of connections exceeds that limit, a rate-limiting action is triggered. You can configure this rate-limiting action. Rate limits allow a better flow of data and increase security by mitigating attacks such as DDoS.

Controlling Rate Limiter


The following parameters control the rate limiter (together they describe a token bucket; see the sketch after this list):

n Count: The rate at which tokens are generated. A token is consumed every time a connection/request lands on the virtual service. If no token is available, the rate-limiting action is triggered.

n Burst size: The maximum number of tokens that can be held by the virtual service at any given time.

n Period: The time period over which rate limiting is performed. In the above example, it is 1000 connections per second. You can configure the period to a value other than one second.
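
The following Python sketch is purely illustrative of how count, period, and burst size interact; the class and variable names are invented for this example and are not NSX Advanced Load Balancer code.

import time


class TokenBucket:
    """Illustrative token bucket: 'count' tokens accrue every 'period' seconds,
    up to a maximum of 'burst_sz' tokens held at any time."""

    def __init__(self, count, period=1.0, burst_sz=None):
        self.rate = count / period                      # tokens generated per second
        self.capacity = burst_sz if burst_sz is not None else count
        self.tokens = float(self.capacity)              # start with a full bucket
        self.last = time.monotonic()

    def allow(self, consume=1):
        now = time.monotonic()
        # Refill tokens accrued since the last check, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= consume:
            self.tokens -= consume                      # each connection/request consumes a token
            return True
        return False                                    # no token left: trigger the rate-limiting action


# Example: 1000 connections per second with a burst size of 1000.
bucket = TokenBucket(count=1000, period=1.0, burst_sz=1000)
if not bucket.allow():
    print("Rate limit exceeded - apply the configured action")

With these semantics, the burst size allows short spikes above the steady-state rate, while the count/period pair bounds the sustained rate.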

Classifying the Rate Limiter


The following are the two types of rate limiters, based on the use case:

n Static Rate Limiter

   n Virtual Service Connection Rate Limiter

   n Network Security Rate Limiter

   n DNS Policy Rate Limiter

n Dynamic Rate Limiter

   n Application Profile Rate Limiter

   n DataScript Rate Limiter

Static Rate Limiter


The static rate limiter is used to rate limit the total number of connections/requests on a virtual service. For instance, if the virtual service rate limit is configured for 1000 connections per second, the 1001st connection/request in that period is denied.

Virtual Service Connection Rate Limiter


This is configured on virtual service by the attribute name connections_rate_limit. This rate
limiter rate limits the number of incoming connections to the virtual service.

The following are the rate limiting action options:

n Drop Syn Packets

n Send TCP Reset

n Report Only


The following is the CLI for virtual service connection rate limiter:

[admin]: configure virtualservice vs1


[admin]: virtualservice> connections_rate_limit
[admin]: virtualservice:connections_rate_limit> rate_limiter
[admin]: virtualservice:connections_rate_limit:rate_limiter> count 1000
Overwriting the previously entered value for count
[admin]: virtualservice:connections_rate_limit:rate_limiter> period 1
Overwriting the previously entered value for count
[admin]: virtualservice:connections_rate_limit:rate_limiter> burst_sz 1000
Overwriting the previously entered value for burst_sz
[admin]: virtualservice:connections_rate_limit> action type rl_action_reset_conn

You can check the Performance Limits box in the Advanced tab of the Applications > Virtual Service window.

Network Security Rate Limiter


This rate limiter is configured on the network security policy. It is a policy-based rate limiter, where
rules can be selectively applied to the rate limit.

The following are the rate limiting action options:

n Default Action

Note For this type of rate limiter, the default period is configured to one second.

For instance, assume that you want to rate limit users in the IP subnet 172.100.200.0/24 to 1000 connections per second. The following is the CLI to execute the above request:

[admin:ctrl]: > configure networksecuritypolicy vs-vs1-Default-Cloud-ns


Updating an existing object. Currently, the object is:
+----------------------+-----------------------------------------------+
| Field | Value
-----------+------------------------------------------------------------+
| uuid | networksecuritypolicy-fbe7ec92-15bf-4ec8-
a8bb-7145b03e3dba |
| name | vs-vs1-Default-Cloud-ns |
| rules[1] | |
| name | Rule 1 |
| index | 1 |
| enable | True |
| match | |
| client_ip | |
| match_criteria | IS_IN |
| prefixes[1] | 172.100.200.0/24 |
| action | NETWORK_SECURITY_POLICY_ACTION_TYPE_RATE_LIMIT |
| log | False |
| rl_param | |
| max_rate | 1000 |
| burst_size | 1000 |
| age | 0 min |
| tenant_ref | admin |
+----------------------+------------------------------------------------+


[admin]: networksecuritypolicy> rules index 1


[admin]: networksecuritypolicy:rules> rl_param
[admin]: networksecuritypolicy:rules:rl_param> max_rate 1000
No change in field value
[admin]: networksecuritypolicy:rules:rl_param> burst_size 1000
No change in field value
[admin]: networksecuritypolicy:rules:rl_param> save
[admin]: networksecuritypolicy:rules> save
[admin]: networksecuritypolicy> save

You can update this value in the IP Address field in Policies tab of Applications > Virtual Service
window.

HTTP Security Rate Limiter


It rate limits the total number of incoming requests based on the HTTP security policy configuration. The HTTP security policy now supports rate limiting per client IP address, per URI path, or both, for a given rate limit action.

You can configure rate limiters to control the policy evaluation based on different parameters. The rate limit objects are the same as for the other rate limiters mentioned above:

n Count

n Period

n Burst

You can configure rate profiles under the action attributes of the HTTP policy. Rate limiters are
configured for the following:

n per_client_ip

n per_uri_path

The corresponding actions can be any one of the following:

n Drop the connection

n Send reset code

n Log the information in virtual service logs

The following are the steps to configure the HTTP security rate limiter:

n Log in to the NSX Advanced Load Balancer CLI and use the configure httppolicyset <policy name> command to start configuring the HTTP security policy for rate limiting.

[admin]: > configure httppolicyset example_rl_policy
[admin]: httppolicyset> http_security_policy
[admin]: httppolicyset:http_security_policy> rules index 1

n Configure the rate profiles under the action attributes of the HTTP policy as shown below. In the example below, the rate profile is chosen as per_uri_path and the rate limiter count as 10.

[admin]: httppolicyset:http_security_policy:rules:action> rate_profile


[admin]: httppolicyset:http_security_policy:rules:action:rate_profile> per_uri_path


[admin]: httppolicyset:http_security_policy:rules:action:rate_profile> rate_limiter


[admin]: httppolicyset:http_security_policy:rules:action:rate_profile:rate_limiter> count 10
Overwriting the previously entered value for count
[admin]: httppolicyset:http_security_policy:rules:action:rate_profile:rate_limiter> save

n Configure the required action once the rate limit is reached, as per the policies configured above. The following configuration sets the action type to rl_action_local_rsp with the response code http_local_response_status_code_403.

[admin]: httppolicyset:http_security_policy:rules:action:rate_profile> action


[admin]: httppolicyset:http_security_policy:rules:action:rate_profile:action>
[admin]: httppolicyset:http_security_policy:rules:action:rate_profile:action> type
rl_action_local_rsp
Overwriting the previously entered value for type
[admin]: httppolicyset:http_security_policy:rules:action:rate_profile:action> status_code
http_local_response_status_code_403
Overwriting the previously entered value for status_code
[admin]: httppolicyset:http_security_policy:rules:action:rate_profile:action> save
[admin]: httppolicyset:http_security_policy:rules:action:rate_profile> save
[admin]: httppolicyset:http_security_policy:rules:action> save
[admin]: httppolicyset:http_security_policy:rules> save
[admin]: httppolicyset:http_security_policy> save
[admin]: httppolicyset> save

n The final configuration output is shown below. It exhibits the action of sending response code 403 when the incoming requests cross the limit of 10 requests per 10 seconds for the associated HTTP security policy and virtual service.

+-------------------------+--------------------------------------------+
| Field | Value
+--------------------------+--------------------------------------------+
| uuid |
httppolicyset-91f02717-7dc6-42ff-9b00-1f411d3723df
|
| name | example_rl_policy |
| http_security_policy | |
| rules[1] | |
| name | rl_rule_1 |
| index | 1 |
| enable | True |
| match | |
| client_ip | |
| match_criteria | IS_NOT_IN |
| prefixes[1] | 192.168.100.0/24 |
| action | |
| action | HTTP_SECURITY_ACTION_RATE_LIMIT |
| rate_profile | |
| rate_limiter | |
| count | 10 |
| period | 10 sec |
| burst_sz | 0 |
| action | |
| type | RL_ACTION_LOCAL_RSP |


| status_code | HTTP_LOCAL_RESPONSE_STATUS_CODE_403 |
| per_client_ip | True |
| per_uri_path | True |
| is_internal_policy | False |
| tenant_ref | admin |
+------------------------+--------------------------------------------+

DNS Policy Rate Limiter


The DNS policy rate limiter is a policy-based rate limiter where you can apply rules specific to the DNS attributes of the request. For instance, you can rate limit DNS requests to freesale.com to prevent the server from being overwhelmed by a request surge.

The following is the CLI to execute the above request:

[admin]: > configure dnspolicy dns1-Policy


[admin]: dnspolicy> rule index 1
[admin]: dnspolicy:rule> action
[admin]: dnspolicy:rule:action> dns_rate_limiter
[admin]: dnspolicy:rule:action:dns_rate_limiter> rate_limiter_object
[admin]: dnspolicy:rule:action:dns_rate_limiter:rate_limiter_object> count 1000
Overwriting the previously entered value for count
[admin]: dnspolicy:rule:action:dns_rate_limiter:rate_limiter_object> burst_sz 1000
Overwriting the previously entered value for burst_sz
[admin]: dnspolicy:rule:action:dns_rate_limiter:rate_limiter_object> period 1
Overwriting the previously entered value for period
[admin]: dnspolicy:rule:action:dns_rate_limiter:rate_limiter_object> save

You can check the Enable box in the DNS Policy tab under the Policies tab of the Applications > Virtual Services window.

For more information, refer to the DNS Policy guide.

Dynamic Rate Limiter


The dynamic rate limiter is used to rate limit the number of connections/requests on a virtual service per user. For instance, if the dynamic rate limiter is configured for 1000 connections/requests per second, it allows 1000 requests from user A, 1000 requests from user B, and so on, as sketched below.
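
To contrast this with the static limiter, the following sketch builds on the TokenBucket class from the earlier sketch (again, the names are illustrative only and not product code) and keeps one bucket per client, so each user is limited independently.

from collections import defaultdict

# One independent bucket per client IP; each allows 1000 connections per second.
client_buckets = defaultdict(lambda: TokenBucket(count=1000, period=1.0, burst_sz=1000))

def allow_connection(client_ip):
    # User A and user B draw from different buckets, so each can use
    # the full configured rate on its own.
    return client_buckets[client_ip].allow()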

Application Profile Rate Limiter


This rate limiter is used to create dynamic rate limiters. It is configured on the application profile attached to the virtual service.

The following are the rate limiting action options:

n Drop Syn Packets

n Send TCP Reset

n Report Only


The following is the CLI to configure application profile:

[admin]: applicationprofile> dos_rl_profile


[admin]: applicationprofile:dos_rl_profile> rl_profile
[admin]: applicationprofile:dos_rl_profile:rl_profile> client_ip_connections_rate_limit
[admin]: applicationprofile:dos_rl_profile:rl_profile:client_ip_connections_rate_limit>
rate_limiter
[admin]:
applicationprofile:dos_rl_profile:rl_profile:client_ip_connections_rate_limit:rate_limiter>
count 1000
No change in field value
[admin]:
applicationprofile:dos_rl_profile:rl_profile:client_ip_connections_rate_limit:rate_limiter>
period 1
No change in field value
[admin]:
applicationprofile:dos_rl_profile:rl_profile:client_ip_connections_rate_limit:rate_limiter>
burst_sz 1000
No change in field value
[admin]:
applicationprofile:dos_rl_profile:rl_profile:client_ip_connections_rate_limit:rate_limiter>
save

You can edit the Rate Limit HTTP and TCP Settings section in the DDoS tab of the Application Profile window.

DataScript Rate Limiter


The rate limit is applied using a DataScript. Arbitrary characteristics are defined and evaluated to decide which requests count against the rate limit, which gives maximum flexibility. All the actions supported through DataScript can be applied when the limit is hit.

The new DataScript API for the rate limit is avi.vs.ratelimit.exceed.

New DataScript API – avi.vs.ratelimit.exceed(rl_name, request_key, [consume])

The following are the parameters used in the DataScript rate limiter:

n rl_name – This refers to the rate limit object name.

n request_key – This is an arbitrary string used to allow any property, or combination of properties, to be used to identify requests.

n consume – This is the number of tokens that this API consumes from the rate limiter bucket. The default value is 1. The function indicates whether the request is above the threshold or not.


Configuring DataScript Rate Limiter


This section explains how to configure DataScript Rate Limiter.

n Log in to the NSX Advanced Load Balancer CLI and use the configure vsdatascriptset <policy name> command to configure rate limiters. Provide the policy name and assign the desired values for the rate limiter (count, period, and burst size) as shown below.

[admin]: > configure vsdatascriptset rate_limiter_test


[admin]: vsdatascriptset> rate_limiters
[admin]: vsdatascriptset:rate_limiters> count 1
[admin]: vsdatascriptset:rate_limiters> period 15
[admin]: vsdatascriptset:rate_limiters> burst_sz 0
[admin]: vsdatascriptset:rate_limiters> name rl1
[admin]: vsdatascriptset:rate_limiters> save
[admin]: vsdatascriptset> save

n Use the avi.vs.ratelimit.exceed function in a DataScript with the desired action.

result = avi.vs.rate_limit.exceed("test", "key1")

if result == true then
    avi.vs.log("rl exceeds")
else
    avi.vs.log("rl does not exceed")
end

Metrics Retention Period


At regular intervals, NSX Advanced Load Balancer Service Engines collect values for a wide range of metrics and send them to the NSX Advanced Load Balancer Controller. The Controller then aggregates these metric values into several buckets; a conceptual sketch of this kind of rollup follows the table below.

The NSX Advanced Load Balancer updates these metrics periodically as defined in the Metric
Update Frequency in the virtual service configuration. If a DDoS event is detected by an SE, the
SE immediately sends information about the attack to the Controller, instead of locally storing the
data until the next polling interval.

The following table lists the increments in which metrics data can be displayed in the web
interface. The data granularity per increment and the retention period also are listed.

Metric Increments    Data Granularity    Retention Period
Real time*           5 seconds           1 hour
Past 6 hours         5 minutes           1 day
Past day             5 minutes           1 day
Past week            1 per hour          1 week
Past month           1 per day           1 year
Past quarter         1 per day           1 year
Past year            1 per day           1 year

Note Real-time metrics are enabled by default for the first 30 minutes of a virtual service’s life.
After these initial 30 minutes, real-time metrics are disabled to conserve resources. Real-time
metrics can be re-enabled manually at any time.
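
As a purely conceptual illustration of this kind of time-based aggregation (it is not the Controller's actual implementation, and all names are invented for the example), the following Python sketch rolls 5-second samples up into 5-minute buckets.

from statistics import mean

def rollup(samples, bucket_seconds):
    """Aggregate (timestamp, value) samples into fixed-width time buckets."""
    buckets = {}
    for ts, value in samples:
        buckets.setdefault(ts - ts % bucket_seconds, []).append(value)
    return {start: mean(values) for start, values in sorted(buckets.items())}

# One hour of 5-second samples rolled up into 5-minute (300-second) granularity.
five_second_samples = [(t, 100 + t % 7) for t in range(0, 3600, 5)]
five_minute_series = rollup(five_second_samples, 300)
print(len(five_minute_series))   # 12 buckets for the hour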

Configure Security Policy for DNS Amplification Egress DDoS Protection
This section explains the steps to create and configure a new security policy and use it to protect
the virtual service against a DNS Amplification Egress DDoS attack.

The DNS virtual service is targeted by sending concise queries that solicit expansive responses
(spanning multiple UDP packets). The DNS virtual services can participate in a reflection attack.
The attacker spoofs the DNS query’s source IP and source port to be that of a well-known service
port on a victim server.

Any requests from a defined range of source ports (well-known ports) will be denied. The range of
ports to be denied is configured in the Security Policy.

Use Security Policy to Protect the Virtual Service


Creating a New Security Policy

Log in to the NSX Advanced Load Balancer shell and create a new security policy as shown below:

configure securitypolicy test-secpolicy1 dns_policy_index 0


save
configure securitypolicy test-secpolicy1 oper_mode mitigation
save
configure securitypolicy test-secpolicy1
dns_attacks
attacks attack_vector dns_amplification_egress
mitigation_action deny
save
save
save
exit

The new security policy test-secpolicy1 with DNS Amplification Egress DDoS protection is
displayed as follows:

shell> show securitypolicy test-secpolicy1


+-------------------------------+---------------------------------------+
| Field | Value |
+-------------------------------+---------------------------------------+


| uuid | securitypolicy-9f5149f2-ab88-4ea3-9944-
cc6ed6aea77a |
| name | test-secpolicy1 |
| oper_mode | MITIGATION |
| dns_attacks | |
| attacks[1] | |
| attack_vector | DNS_AMPLIFICATION_EGRESS |
| mitigation_action | |
| deny | True |
| enabled | True |
| max_mitigation_age | 60 min |
| network_security_policy_index | 0 |
| dns_policy_index | 0 |
| dns_amplification_denyports | |
| match_criteria | IS_IN |
| ranges[1] | |
| start | 1 |
| end | 52 |
| ranges[2] | |
| start | 54 |
| end | 2048 |
| tenant_ref | admin |
+-------------------------------+---------------------------------------+

The dns_amplification_denyports is automatically created to block well-known ports 1-52 and 54-2048, inclusive, for DNS Amplification Egress DDoS attacks. These ports are usually used as spoofed source ports in the attacks. Port 53 is excluded, however, since source IP addresses may initiate legitimate DNS queries to external DNS servers.

Attaching the Security Policy to a Virtual Service


If you have a DNS virtual service and want to protect the virtual service from DNS Amplification
Egress DDoS attack, you can attach a security policy to the virtual service.

Note A security policy configured for DNS Amplification Egress mitigation cannot be attached
to a non-DNS virtual service, for instance, an HTTP virtual service. When attached to a non-DNS
virtual service, an error will be displayed, and the security policy will not be attached to the virtual
service.

For instance, consider a virtual service dns-vs-1. The steps to attach the network policy to the
virtual service are shown below:

shell>
configure virtualservice dns-vs-1
security_policy_ref test-secpolicy1
save
exit


Now the virtual service dns-vs-1 is armed with the DDoS protection security policy. Any such attacks are detected and mitigated by the SE. The security manager creates network security rules and DNS rules for the SE to use to block the attacker's IP address, source port, and DNS record request types. On significant attacks, the metrics manager raises DDoS events, which are displayed on the Controller UI.



13 Load Balancing Workspace ONE UEM Components
This guide explains the deployment mode in which all the Workspace ONE UEM components or services are deployed on different servers and a separate load balancer VIP is configured for each component. The NSX Advanced Load Balancer is used to load balance the following Workspace ONE UEM components:

n Workspace ONE UEM Admin Console

n Workspace ONE UEM Admin API

n Workspace ONE UEM Device Services

n AirWatch Cloud Messaging

n VMware Tunnel - (Tunnel Proxy)

n VMware Tunnel (Per-App VPN)

For details on various Workspace ONE UEM application modules, see Workspace One UEM.

Recommended Configuration Settings


Workspace ONE UEM Admin Console
Type (L4 or L7): L7
Virtual Service Ports: SSL 443
Algorithm: Least connections
Virtual Service Name: VIP1
Persistence and Persistence Timeout: HTTP Cookie / 60 minutes
Back-end Servers Port: 443

Workspace ONE UEM Admin API
Type (L4 or L7): L7
Virtual Service Ports: SSL 443
Algorithm: Least connections
Virtual Service Name: VIP2
Persistence and Persistence Timeout: Source IP
Back-end Servers Port: 443

Workspace ONE UEM Device Services
Type (L4 or L7): L7
Virtual Service Ports: SSL 443
Algorithm: Least connections
Virtual Service Name: VIP3
Persistence and Persistence Timeout: Source IP Address / 20 minutes
Back-end Servers Port: 443

AWCM
Type (L4 or L7): L7
Virtual Service Ports: SSL 443/2001
Algorithm: Consistent Hash with custom string
Virtual Service Name: VIP4
Persistence and Persistence Timeout: DataScript for persistence
Back-end Servers Port: 2001

Tunnel Proxy
Type (L4 or L7): L4
Virtual Service Ports: 8443 (TCP and UDP), 2020 (TCP); fast-path is recommended
Algorithm: Least Connections
Virtual Service Name: VIP5
Persistence and Persistence Timeout: Source IP / 30 minutes
Back-end Servers Port: 8443/2020

Tunnel Per-App VPN
Type (L4 or L7): L4
Virtual Service Ports: 443 (TCP and UDP); fast-path is recommended
Algorithm: Least Connections
Virtual Service Name: VIP6
Persistence and Persistence Timeout: Source IP
Back-end Servers Port: 443

Note
n All components run on different servers, and a separate load balancer VIP is configured for each component.

n The timeout value must be less than the policy retrieval interval for some services (for instance, Secure Email Gateway).

n Persistence is not required when all the users are coming through a NAT, as they have the same source IP address.

Health Monitor Recommendations


Workspace ONE UEM Admin Console
Method: GET to https://<host>/airwatch/awhealth/v1
Response Code: 200 OK
Monitoring Interval/Timeout: Default

Workspace ONE UEM Admin API
Method: GET to https://<host>/api/help/#!/apis
Response Code: 200 OK
Monitoring Interval/Timeout: Default

Workspace ONE UEM Device Services
Method: GET to https://<host>/deviceservices/awhealth/v1
Response Code: 200 OK
Monitoring Interval/Timeout: Default

AWCM
Method: GET to https://<host>/awcm/status
Response Code: 200 OK
Monitoring Interval/Timeout: Default

Tunnel (Proxy)
Method: https://<host>:2020/ and TCP:8443
Response Code: 407
Monitoring Interval/Timeout: Default

Tunnel (Per-App VPN)
Method: TCP:443
Response Code: NA
Monitoring Interval/Timeout: Default

Note Change the monitoring interval as per the deployment requirement.

This chapter includes the following topics:

n Load Balancing Workspace ONE UEM Admin Console

n Load Balancing Workspace ONE UEM Admin API

n Load Balancing Workspace ONE UEM Device Services

n Load Balancing AirWatch Cloud Messaging

n Load Balancing VMware Tunnel (Tunnel Proxy)

n Load Balancing VMware Tunnel (Per-App VPN)

Load Balancing Workspace ONE UEM Admin Console


The steps and navigation paths mentioned for the various configuration parameters are the same for the configuration of the other Workspace ONE UEM applications. A few of the attributes differ, as mentioned in the tables in the previous section.

Creating a Custom Health Monitor


The following are the steps to create a custom health monitor:

1 Navigate to Templates > Profiles > Health Monitors. Click Create.

2 Specify the name and description for the health monitor.

3 Select the HTTPS option from the Type drop-down list.

4 Specify the successful and failure check details, and the Send Interval and Receive Timeout details.

5 The Is Federated field describes the object's replication scope. If this field is unchecked, the object is visible within the Controller cluster and its associated Service Engines. If checked, the object is replicated across the federation.

6 Specify the health monitor port.

7 Set the Authentication Type to either NTLM or Basic from the drop-down list.

8 Specify the Client Request details (both header and body).

9 Select the Response Code option as 2XX from the drop-down list.

10 Select the SSL Attributes and Use Exact Request check boxes.


11 Specify the server maintenance mode and Role-Based Access Control (RBAC) details.

12 Click Save and proceed to the next step of creating a persistence profile.
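
Optionally, before binding the monitor to a pool, you can confirm from a test client that the health endpoint answers as expected. The following Python sketch is illustrative only; the hostname is a placeholder, and the path and expected response code are taken from the health monitor recommendations table earlier in this chapter.

import requests

HOST = "uem-console.example.com"   # placeholder; replace with an Admin Console server

# Health endpoint and expected status from the recommendations table.
# verify=False only if the server still uses a self-signed certificate.
resp = requests.get(f"https://{HOST}/airwatch/awhealth/v1", timeout=5, verify=False)
print(resp.status_code)            # a healthy server is expected to return 200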

Creating a Persistence Profile


For the Workspace ONE UEM Admin Console, Source IP persistence or cookie-based persistence is preferred, with the timeout value set to 60 minutes.

The following are the steps to create the persistence profile:

1 Navigate to Templates > Profiles > Persistence and click Create.

2 Add the required details to the new persistence profile.

3 Click Save and proceed to creating a pool.

Creating Pool
The following are the steps to create a pool:

1 Navigate to Applications > Pools. Click CREATE POOL.

2 Select the cloud from the Select Cloud sub-screen and click Next.

3 Select Least Connections from the Load Balance drop-down menu.

4 Select the persistence profile created in the previous step from the Analytics Profile drop-
down menu.

5 To bind the monitor, click Add Active Monitor and select the custom HTTPS monitor that was
created in the previous section.

6 For SSL offload, the Enable SSL option at the pool level is not required, because traffic goes to the back-end servers in plain text. If the back-end servers listen only on SSL, the traffic must be sent in encrypted form, so SSL needs to be enabled at the pool level: select the Enable SSL check box, select the appropriate SSL profile, and click Next.

7 In the Servers tab, add the IP addresses of the servers, and click Next.

8 Navigate through the Step 3: Advanced tab and the Step 4: Review tab by clicking Next, and then click Save.

Creating Application Profile


As a best practice, all HTTP requests should be redirected to HTTPS. Load Balancers for UEM
must be configured to set the XFF header with the Client’s Source IP. Other options are not
mandatory and depend on the requirement. The default System-Secure-HTTP profile can also be
used instead of creating a new application profile.

The following are the steps to create application profile:

1 Navigate to Templates > Profiles > Application.

2 Select HTTP from the Create drop-down menu.


3 Specify the name and description of the application profile, and retain the default values in the HTTP Settings section.

4 Check the X-Forwarded-For box.

5 In the Security tab, select the SSL Everywhere check box.

6 Click Save to proceed further to install the SSL certificate. If not required, some of these
options can be disabled.

7 Some services like Device Service and Admin Console might require HTTP Strict Transport
Security. Select the HTTP Strict Transport Security (HSTS) check box if required.

Installing SSL Certificate for L7 Virtual Service


The SSL connections are terminated at virtual service level. So the SSL certificate must be
assigned to the virtual service. It is advised to install a certificate signed by a valid certificate
authority instead of using self-signed certificates. Install the certificate in NSX Advanced Load
Balancer and ensure that the CA certificate is imported and linked. For instructions, see Import
Certificates.

Creating an L7 Virtual Service


The following are the steps to create a Layer 7 virtual service for Workspace ONE UEM Admin
console:

1 Navigate to Applications > Virtual Services.

2 Select Advanced Setup from the Create Virtual Service drop-down menu. Select a cloud from
the Select Cloud drop-down menu.

a Application Profile: Select the application profile created in the previous section.

b Service Port: Specify the values as 80 and 443 (SSL).

c Pool: Specify the pool created in the previous section.

3 For SSL profile, use the default SSL profile, or create a new one as per the requirement.

4 For SSL certificate, install the certificate and bind it to the virtual service as shown above.

5 Click Next and retain the default settings for the remaining fields.

6 Click Next and then click Save.

Load Balancing Workspace ONE UEM Admin API


This section explains how to configure Workspace ONE UEM using the Admin API.

Creating a Custom Health Monitor


Follow the same navigation steps mentioned in the Creating a Custom Health Monitor section in Workspace ONE UEM Admin Console. Provide the API request data (header and body as required) and click Save.


Creating a Persistence Profile


1 Follow the same navigation steps mentioned in the Creating a Persistence Profile section in Workspace ONE UEM Admin Console.

2 The recommended persistence method is Source IP persistence or cookie-based persistence, with the Persistence Timeout value set to less than the policy retrieval interval for some services (for example, Secure Email Gateway). Click Save.

Creating Pool
n Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM Admin Console.

Creating an Application Profile


Follow the same navigation steps mentioned in the Creating Application Profile section in Workspace ONE UEM Admin Console.

Installing SSL Certificate for L7 Virtual Service


Follow the same navigation steps mentioned in the Installing SSL Certificate for L7 Virtual Service section in Workspace ONE UEM Admin Console.

Creating an L7 Virtual Service


Follow the same navigation steps mentioned in the Creating an L7 Virtual Service section in Workspace ONE UEM Admin Console.

Load Balancing Workspace ONE UEM Device Services


This section explains how to configure Workspace ONE UEM Device Services.

Creating a Custom Health Monitor


Follow the same navigation steps mentioned in the Creating a Custom Health Monitor section in Workspace ONE UEM Admin Console. Provide the Device Service request data (header and body as required) and click Save.

Creating a Persistence Profile


n Follow the same navigation steps mentioned in the Creating a Persistence Profile section in Workspace ONE UEM Admin Console.

n Set the preferred persistence method to Source IP persistence, with the timeout value set to 20 minutes.


Creating a Pool
n Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM Admin Console.

Creating an Application Profile


Follow the same navigation steps mentioned in the Creating Application Profile section in Workspace ONE UEM Admin Console.

Installing SSL Certificate for L7 Virtual Service


Follow the same navigation steps mentioned in the Installing SSL Certificate for L7 Virtual Service section in Workspace ONE UEM Admin Console.

Creating an L7 Virtual Service


Follow the same navigation steps mentioned in the Creating an L7 Virtual Service section in Workspace ONE UEM Admin Console.

Load Balancing AirWatch Cloud Messaging


For load balancing AirWatch Cloud Messaging (AWCM), the requirement is to persist the
connections based on awcmsessionid present in the cookie, URI or HTTP header.

This can be done using the following methods:

n Consistent Hash.

n Using DataScript to maintain persistence tables.

Creating a Custom Health Monitor


Follow the same navigation steps mentioned in the Creating a Custom Health Monitor section in Workspace ONE UEM Admin Console. Provide the AWCM request data (header and body as required) and click Save.

Creating Pool
Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM Admin Console.

Since AWCM needs persistence based on the awcmsessionid parameter in either the URI or the header, Consistent Hash based on a custom string can be used. The custom string is defined in the following steps using a DataScript.
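
To illustrate why hashing the session ID yields persistence, the following Python sketch shows a simplified hash-based server selection (true consistent hashing additionally minimizes remapping when the server set changes). The server names and session ID are hypothetical; this is not the Service Engine's actual implementation.

import hashlib

servers = ["awcm-1", "awcm-2", "awcm-3"]      # hypothetical AWCM back-end servers

def pick_server(awcmsessionid):
    # The same custom string always hashes to the same server, which is
    # what keeps a given AWCM session pinned to one back end.
    digest = hashlib.sha256(awcmsessionid.encode()).digest()
    return servers[int.from_bytes(digest[:8], "big") % len(servers)]

print(pick_server("abc123"))                  # repeated calls return the same server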

Creating an Application Profile


Follow the same navigation steps mentioned in the Creating Application Profile section in Workspace ONE UEM Admin Console.


For AWCM, it is required to keep the front-end connection open for 2 minutes. Navigate to the DDoS tab and change the HTTP Keep-Alive Timeout to 120000 ms (120 seconds).

Creating a DataScript
The following are the steps to create a DataScript and associate it with the AWCM pool:

1 Navigate to Templates > Scripts > DataScripts and click Create.

2 Specify the name for the DataScript.

3 Select the AWCM pool from the Pools drop-down menu and specify the other details.

4 Click the Events tab and click Add under the Events sub-section.

5 Add the following DataScript to bind the AWCM Pool to the Datascript.

<br<default_pool = "AWCM-Pool"
query = avi.http.get_query("awcmsessionid")
header = avi.http.get_header("awcmsessionid")
cookie = avi.http.get_cookie("awcmsessionid")
if query ~= nil and query ~= "true" then
avi.vs.log('QUERY HASH: '.. query)
avi.pool.select("AWCM-Pool")
avi.pool.chash(query)
elseif header ~= nil then
avi.vs.log('HEADER HASH: '.. header)
avi.pool.select("AWCM-Pool")
avi.pool.chash(header)
else if cookie ~= nil then
avi.vs.log('COOKIE HASH: '..cookie)
avi.pool.select("AWCM-Pool")
avi.pool.chash(cookie)
else
avi.vs.log('NIL HASH')
avi.pool.select("AWCM-Pool")
end
end

Creating an L7 Virtual Service


Follow the same navigation steps mentioned inCreating an L7 Virtual Servicesection in Workspace
ONE UEM Admin Console.

Select the following for creating a virtual service for AWCM:

1 Application Profile: AWCM Application Profile created in the previous section.

2 Service Port: 443 and 2001 (SSL).

3 Pool: AWCM pool created in the previous section.

4 Click Next to navigate to the DataScript tab.

5 Click Add DataScript.


6 Select the AWCM DataScript created in the previous section from the Script To Execute
drop-down list.

7 Click Save DataScript.

Load Balancing VMware Tunnel (Tunnel Proxy)


This section explains how to configure load balancing for VMware Tunnel through the Tunnel Proxy.

Creating Health Monitor


Follow the same navigation steps mentioned in the Creating a Custom Health Monitor section in Workspace ONE UEM Admin Console.

Create the following health monitors:

1 An HTTPS monitor on port 2020:

a Select HTTPS from the Type drop-down list.

b Select 2020 from the Health Monitor Port drop-down list.

c Provide the Tunnel HTTPS request data (header and body as required).

d Set the Server Response Data to 407 and the Response Code to 4XX.

e Click Save.

2 A TCP monitor on port 8443:

a Select TCP from the Type drop-down list.

b Select 8443 from the Health Monitor Port drop-down list.

c Click Save.

Creating Persistence Profile


Follow the same navigation steps mentioned in the Creating a Persistence Profile section in Workspace ONE UEM Admin Console.

For VMware Tunnel (Tunnel Proxy), the Client IP Address persistence type is recommended, with the Persistence Timeout value set to 30 minutes.

Click Save and proceed to the next step of creating a pool for servers.

Creating Pool
Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM Admin Console.

1 Choose the following options to create a pool:

a Load Balance: Least Connections

b Analytics Profile: The Tunnel Persistence Profile created in the previous step.


c Enable SSL: Not required for the pool.

d Add Health Monitor: Tunnel HTTPS monitor created in the previous section.

2 Click Next and navigate to Step 3: Advanced Tab. Select the Disable Port Translation check
box.

3 Click Save.

Creating Application profile


For tunnel service, SSL pass-through is required. Create an L4 application profile or use the
default System-L4-Application profile.

Creating L4 Virtual Service


The following are the steps to create a new L4 virtual service:

1 Navigate to Applications > Virtual Services and select the Advanced Setup.

2 Select System-L4-Application from the Application Profile drop-down list and configure the virtual service with the following options:

a TCP/UDP Profile: System-TCP-Fast-Path.

b Service Port: 8443 (select Override TCP/UDP) and 2020 (UDP).

c Pool: The Tunnel service pool created in the previous step.

Load Balancing VMware Tunnel (Per-App VPN)


This section explains how to configure load balancing for VMware Tunnel through Per-App VPN.

Creating Health Monitor


Follow the same navigation steps mentioned in the Creating a Custom Health Monitor section in Workspace ONE UEM Admin Console, with Type selected as TCP and the Health Monitor Port as 443.

Creating Persistence Profile


Follow the same navigation steps mentioned in the Creating a Persistence Profile section in Workspace ONE UEM Admin Console.

Client IP Address persistence is recommended, with the Persistence Timeout value set to 30 minutes.

Creating Pool
Follow the same navigation steps mentioned in the Creating Pool section in Workspace ONE UEM Admin Console.

1 Load balancing algorithm: Least Connections

2 Persistence profile: Tunnel-Persistence-Profile (created in the previous step).


3 Click Add Active Monitor and select the TCP Monitor as Tunnel-TCP.

Creating Application profile


For the tunnel service, SSL pass-through is required. Create an L4 application profile or use the default System-L4-Application profile.

Creating L4 Virtual Service


To create a new L4 virtual service:

1 Navigate to Applications > Virtual Services and select the Advanced Setup.

2 Select System-L4-Application from the Application Profile drop-down list and configure the virtual service with the following options:

a TCP/UDP Profile: System-TCP-Fast-Path.

b Service Port: 443, select Override TCP/UDP and choose System-UDP.

c Pool: The Tunnel PerAppVPN Pool created in the previous step.



14 Configuring TSO GRO RSS
This section explains the TSO, GRO and RSS configuration process.

Enabling TSO and GRO on an NSX Advanced Load Balancer SE
The TSO feature is enabled by default on an NSX Advanced Load Balancer SE while the GRO
feature is disabled by default.

Note Enabling TSO/ GRO is non-disruptive and does not require an SE restart.

The following are the steps to enable TSO and GRO on SE:

1 Log in to the CLI and use the configure serviceenginegroup command to enable the TSO and GRO features.

[admin:cntrl]: > configure serviceenginegroup Default-Group


Updating an existing object. Currently, the object is:
| disable_gro | True |
| disable_tso | True |
[admin:cntrl]: serviceenginegroup> no disable_gro
Overwriting the previously entered value for disable_gro
[admin:cntrl]: serviceenginegroup> no disable_tso
Overwriting the previously entered value for disable_tso
[admin:cntrl]: serviceenginegroup> save
| disable_gro | False |
| disable_tso | False |

2 To verify if the features are correctly turned ON in the SE, you can check the following statistics
on the Controller CLI:

a GRO statistics are part of interface statistics. For GRO, check the statistics for the following
parameters:

1 gro_mbufs_coalesced

b TSO statistics are part of mbuf statistics. For TSO, check the statistics for the following
parameters:

1 num_tso_bytes


2 num_tso_chain

3 Execute the show serviceengine <interface IP address> interface command and filter
the output using the grep command shown as follows:

[admin:cntrl]: > show serviceengine 10.1.1.1 interface | grep gro


| gro_mbufs_coalesced | 1157967 |
| gro_mbufs_coalesced | 1157967 |

Note The sample output mentioned above is for 1-queue (No RSS).

See the following output for RSS-enabled, a 4-queue RSS:

Note In the case of a port-channel interface, provide the relevant physical interface name as
the filter in the intfname option. For reference, refer to the output mentioned below for the
Ethernet 4 interface.

show serviceengine 10.1.1.1 interface filter intfname eth4 | grep gro


| gro_mbufs_coalesced | 320370 |
| gro_mbufs_coalesced | 283307 |
| gro_mbufs_coalesced | 343143 |
| gro_mbufs_coalesced | 217442 |
| gro_mbufs_coalesced | 1164262 |

Note The statistics for a NIC are the sum of the statistics of each queue for the specific
interface.

[admin:cntrl]: > show serviceengine 10.1.1.1 mbufstats | grep tso


| num_tso_bytes | 4262518516 |
| num_tso_chains | 959426 |

If the features are enabled, the statistics in the output mentioned above will reflect non-zero
values for TSO parameters.

Enabling RSS on an NSX Advanced Load Balancer SE


The distribute_queues knob in the SE group properties enables and disables RSS. Log in to
the CLI and use the distribute_queues command to enable the RSS feature.

Note Any change in the distribute_queues parameters requires an SE restart.

| distribute_queues | False |
[admin:cntrl]: serviceenginegroup> distribute_queues
Overwriting the previously entered value for distribute_queues
[admin:cntrl]: serviceenginegroup> save
| distribute_queues | True |


When RSS is turned ON, all the NICs in the SE configure and use an optimum number of queue
pairs as calculated by the SE. The calculation of this optimum number is described in the section
on configurable dispatchers.

For instance, the output of a four-queue RSS-supported interface is as follows:

[admin:cntrl]: > show serviceengine 10.1.1.1 interface filter intfname bond1 | grep ifq
| ifq_stats[1] |
| ifq_stats[2] |
| ifq_stats[3] |
| ifq_stats[4] |

The value of counters for ipackets (input packets) and opackets (output packets) per interface
queue is a non-zero value as shown below:

[admin:cntrl]: > show serviceengine 10.1.1.1 interface filter intfname bond1 | grep pack
| ipackets | 40424864 |
| opackets | 42002516 |
| ipackets | 10108559 |
| opackets | 11017612 |
| ipackets | 10191660 |
| opackets | 10503881 |
| ipackets | 9873611 |
| opackets | 10272103 |
| ipackets | 10251034 |
| opackets | 10208920 |

Note The output includes statistics of each queue and one combined statistics overall for the NIC.

Configuration Samples
The following example exhibits the configuration on a bare-metal machine with 24 vCPUs,
two 10G NICs, one bond of two 10G NICs, and distribute_queues enabled.

n Set the value of the configure num_dispatcher_cores parameter to 8.

[admin:cntrl]: serviceenginegroup> num_dispatcher_cores 8


Overwriting the previously entered value for num_dispatcher_cores
[admin-ctrlr]: serviceenginegroup> save

[admin:cntrl]:> show serviceengine 10.1.1.1 seagent | grep -E "dispatcher|queues"


|num_dispatcher_cpu | 8
|num_queues | 8


n Set the value of the configure num_dispatcher_cores parameter to 0 (the default value).
After restarting the SE, though the configured value for dispatchers is set to 0, the number of
queues and the number of dispatchers is changed to 4, as shown in the following output:

[admin:cntrl]:> show serviceengine 10.1.1.1 seagent | grep -E "dispatcher|queues"


|num_dispatcher_cpu | 4
|num_queues | 4

Configuring Maximum Queues per vNIC


The max_queues_per_vnic parameter supports the following values:

n Zero (Reserved) — Auto (deduces optimal number of queues per dispatcher based on the NIC
and operating environment).

n One (Reserved) — One Queue per NIC (Default).

n Integer Value — Power of 2; the maximum limit is 16.

The max_queues_per_vnic parameter deprecates the distribute_queues parameter, which is used
to enable the RSS mode of operation wherein the number of queues is equal to the number of
dispatchers.

The migration routine ensures that the max_queues_per_vnic parameter is set to
num_dispatcher_cores if distribute_queues is enabled, and to 1 otherwise.

You can use the following command to configure max_queues_per_vnic:

[admin:admin-controller-1]: serviceenginegroup> max_queues_per_vnic

INTEGER 0,1,2,4,8,16 Maximum number of queues per vnic Setting to '0' utilises all queues
that are distributed across dispatcher cores.

[admin:admin-controller-1]: > configure serviceenginegroup Default-Group


Updating an existing object. Currently, the object is:
+-----------------------------------------+----------------------------+
| Field | Value |
+-----------------------------------------+----------------------------+
[output truncated]
| se_rum_sampling_nav_percent | 1 |
| se_rum_sampling_res_percent | 100 |
| se_rum_sampling_nav_interval | 1 sec |
| se_rum_sampling_res_interval | 2 sec |
| se_kni_burst_factor | 2 |
| max_queues_per_vnic | 1 |
| core_shm_app_learning | False |
| core_shm_app_cache | False |
| pcap_tx_mode | PCAP_TX_AUTO |
+-----------------------------------------+----------------------------+
[admin:admin-controller-1]: serviceenginegroup> max_queues_per_vnic 2
Overwriting the previously entered value for max_queues_per_vnic
[admin:admin-controller-1]: serviceenginegroup> save


The show serviceengine [se] seagent command displays the number of queues per dispatcher and
the total number of queues per interface.

show serviceengine [se] seagent


| num_dp_heartbeat_miss | 0 |
| se_registration_count | 2 |
| se_registration_fail_count | 0 |
| num_dispatcher_cpu | 1 |
| ------------------------- truncated output---------------------------|
| num_flow_cpu | 1 |
| num_queues | 1 |
| num_queues_per_dispatcher | 1 |

Configuring Hybrid RSS


The hybrid_rss_mode is a configurable SE group property. It toggles the SE hybrid-only mode of
operation in DPDK mode with RSS configured, wherein each SE data path instance operates as
an independent standalone hybrid instance performing both the dispatcher and proxy functions.
Changing this property requires a reboot.

The following is the configuration command:

configure serviceenginegroup <se-group> > hybrid_rss_mode


[admin:10-102-66-36]: serviceenginegroup> max_queues_per_vnic 0
Overwriting the previously entered value for max_queues_per_vnic
[admin:10-102-66-36]: serviceenginegroup> hybrid_rss_mode
Overwriting the previously entered value for hybrid_rss_mode
[admin:10-102-66-36]: serviceenginegroup> save

Note The hybrid_rss_mode is protected by a must check that requires RSS to be enabled
before toggling this property to True.

n The property is also displayed per SE, as shown in the following output:

----------------------------------------+
| num_dispatcher_cpu | 4 |
| num_flow_cpu | 4 |
| num_queues | 4 |
| num_queues_per_dispatcher | 1 |
| hybrid_rss_mode | True |
+---------------------------------------+

This chapter includes the following topics:

n TSO GRO RSS Features

n Recommendation for Better Performance of Service Engines

n Certificate Management Integration for Trust Anchor


TSO GRO RSS Features


This section explains the features of SE groups, such as TSO, GRO, RSS, and multiple dispatchers
and queues.

TCP Segmentation Offload (TSO)


TCP segmentation offload is used to reduce the CPU overhead of TCP/IP on fast networks. A
host with TSO-enabled hardware sends TCP data to the Network Interface Card (NIC) without
segmenting the data in software. This type of offload relies on the NIC to segment the data and
add the TCP, IP, and data link layer headers to each segment.

TSO Support in Routing


When routing support is enabled in the SE, the Generic Receive Offload (GRO) feature normally
cannot be used, because routing is stateless and the SE cannot segment the large GRO-coalesced
packet if the packets are not allowed to be IP fragmented. With the support for this feature, GRO
can be used for routed traffic, allowing the SE to segment larger packets into smaller TCP
segments, either through TSO if the interface supports it, or through the routing layer in the SE.

During the three-way handshake, both the client and server advertise their respective MSS so that
the peers do not send TCP segments larger than the MSS. This feature is enabled by default.

Generic Receive Offload


GRO is a software technique for increasing the inbound throughput of high-bandwidth network
connections by reducing CPU overhead. It works by aggregating multiple incoming packets from a
single flow into a larger packet chain before they are passed higher up the networking stack, as a
result reducing the number of packets that have to be processed.

The benefits of GRO are only seen if multiple packets for the same flow are received in a short
period. If the incoming packets belong to different flows, then the benefits of having GRO enabled
might not be seen.

Multi-Queue Support
The dispatcher on NSX Advanced Load Balancer is responsible for fetching the incoming packets
from a NIC, sending them to the appropriate core for proxy work, and sending the outgoing
packets back to the NIC. A 40G NIC, or even a 10G NIC, receiving traffic at a high packets per
second (PPS) rate, for instance, in the case of small UDP packets, might not be efficiently
processed by a single-core dispatcher.

This problem can be solved by distributing traffic from a single physical NIC across multiple
queues where each queue gets processed by a dispatcher on a different core. Receive Side
Scaling (RSS) enables the use of multiple queues on a single physical NIC.


Receive Side Scaling (RSS)


When RSS is enabled on NSX Advanced Load Balancer, NICs make use of multiple queues in
the receive path. The NIC pins flows to queues, placing packets that belong to the same flow in
the same queue. This helps the driver spread packet processing across multiple CPUs, thereby
improving efficiency.

On the NSX Advanced Load Balancer SE, the multi-queue feature is also enabled on the transmit
side; that is, different flows are pinned to different queues (packets belonging to the same flow in
the same queue) to distribute the packet processing among CPUs.

Note The multi-queue feature (RSS) is not supported along with IPv6 addresses. If RSS is
enabled, then the IPv6 address cannot be configured for NSX Advanced Load Balancer SE
interfaces. Similarly, if the IPv6 address is already configured on NSX Advanced Load Balancer
SE interfaces, the multi-queue feature (RSS) cannot be enabled on those interfaces.

Multiple Dispatchers and Queues per NIC


Depending on the traffic processed by NSX Advanced Load Balancer SE, the number of
dispatchers can be configured with one or more than one core. Systems with high PPS load are
configured with a high number of dispatchers whereas proxy heavy loads such as SSL workloads
might not need the high number of dispatchers.

Also, queues per NIC can be set for each dispatcher core for better performance. NSX Advanced
Load Balancer SE tries to detect the best settings automatically for each environment.

Service Engine Datapath Isolation mode


NSX Advanced Load Balancer SEs can dedicate one or more Service Engine cores to non-se-dp
tasks. This configuration particularly helps when SEs are hosting latency-sensitive applications.
However, it carries a penalty on overall SE performance, because one or more cores are dedicated
to non-se-dp tasks.

Hybrid RSS Mode


The SE hybrid RSS mode works only in DPDK mode with RSS configured, and allows each SE
vCPU to function as an independent unit, with every core handling both the dispatch and proxy
jobs simultaneously. Cross-core punting of packets is not allowed; that is, for a 2-core SE with the
cores tagged as (dispatcher-0, proxy-0) on vCPU0 and (dispatcher-1, proxy-1) on vCPU1, any
ingress flow on dispatcher-0 is egressed using proxy-0 and is not punted to proxy-1, and vice
versa.

The hybrid mode is a configurable property aimed at achieving higher performance on low-core
SEs, especially one-core and two-core SEs on vCenter and NSX-T clouds.


Recommendation for Better Performance of Service Engines


Depending on the real-life workloads, the NSX Advanced Load Balancer SE settings can be
tweaked by using configurations at the SE Group level.

The following are the guidelines to follow while planning capacity for SEs:

General Guidelines
The following are the general guidelines:

n CPU and memory reservations are recommended for NSX Advanced Load Balancer SE virtual
machines for consistent and deterministic performance.

n Use compact mode in NSX Advanced Load Balancer SE Group settings for virtual service
placements on SEs. This ensures NSX Advanced Load Balancer uses the minimum number of
SEs required for virtual service placement. It helps in saving the cost in the case of public cloud
use cases.

Dispatcher Configurations
The following are the dispatcher configurations:

n The dedicated_dispatcher is set to False by default at the SE group level. This configuration is
optimal for SEs with smaller compute capacities, such as one and two cores.

n NSX Advanced Load Balancer recommends dedicated_dispatcher set to True for SE size
greater than two cores.
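
The dedicated_dispatcher property can be toggled at the SE group level from the CLI in the same
way as the other boolean SE group properties shown in this chapter. The following is a minimal
sketch that assumes the Default-Group SE group:

[admin:cntrl]: > configure serviceenginegroup Default-Group
[admin:cntrl]: serviceenginegroup> dedicated_dispatcher
Overwriting the previously entered value for dedicated_dispatcher
[admin:cntrl]: serviceenginegroup> save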

GRO and TSO Configurations


The following are the GRO and TSO configurations:

n The default settings for GRO is disabled, and TSO is enabled. This configuration works
normally for most of the workloads.

n GRO can be enabled whenever there are enough dispatchers (>= 4) and their utilization is low.

Receive Side Scaling Configurations


The following are the Receive Side Scaling (RSS) configurations:

n You can enable RSS for better performance. RSS can realize better PPS with more dispatchers
and queues per NIC.

n The number of dispatchers can only be set in the power of two, that is, the number of
dispatchers can be one, two, four, eight and so on.

n Default value of max_queues_per_vnic is one. Setting the value to zero automatically decides
the number of queues based on the dispatcher count configured. You can set this value as per
the requirements.


n If the number of queues available per NIC is less than the number of dispatchers, the number
of dispatchers is floored to the number of queues. It is therefore not recommended to configure
more dispatchers than the number of available queues.

Datapath Isolation
You can enable SE datapath isolation for latency and jitter-sensitive applications. The feature
creates two independent CPU sets for datapath and control plane SE functions.

Recommendations for Different Workloads


The following are the recommendations for different workloads:

n High PPS load such as high connections per second with small file GETs must have more
dispatchers to do higher PPS.

n Workloads with high SSL transactions are proxy heavy and benefit from a high count of proxy
cores.

n Default settings are recommended for one and two core SEs.

The following examples explain the configuration recommendation for a six-core SE running on
the vCenter full access cloud.

Example: 1 – PPS heavy traffic profile


Let us assume 100 Layer 4 virtual services with TCP, doing an average of 1000 new TCP
connections per second, with each connection lasting three seconds and downloading a single
small file over a single GET request.

Considering 18 to 20 packets for each TCP transaction across the front end and back end, this
requirement translates to nearly one million packets per second for new TCP connections. Given
the volume of packets, the NSX Advanced Load Balancer SE should be configured as follows:

n Dedicated dispatcher: True

n Number of dispatchers: 2

n Number of proxy cores: 4

n Number of queues per NIC: 2
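
The following is a minimal CLI sketch of applying these settings at the SE group level, assuming
the Default-Group SE group. The number of proxy cores is not set directly; it typically follows
from the cores that remain after the dispatchers are assigned. As with distribute_queues, changing
the queue settings might require an SE restart to take effect.

[admin:cntrl]: > configure serviceenginegroup Default-Group
[admin:cntrl]: serviceenginegroup> dedicated_dispatcher
[admin:cntrl]: serviceenginegroup> num_dispatcher_cores 2
[admin:cntrl]: serviceenginegroup> max_queues_per_vnic 2
[admin:cntrl]: serviceenginegroup> save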

Example: 2 – SSL throughput and TPS heavy traffic profile


Let us assume multiple SSL applications doing a total of 2000 ECC transactions per second and 2
Gbps of SSL GET throughput.

For the above requirements, the dispatcher cores will not be busy because the packets per second
will not be very high, and SSL processing will consume proxy cores for the ECC transactions and
throughput. RSS will not help in this use case, and the recommendations for this workload are:

n Dedicated dispatcher: True


n Number of dispatchers: 1

n Number of proxy cores: 5

n Number of queues per NIC: 1

Example: 3 – HTTP workloads with 50% of IP routing traffic


Let us assume multiple L7 applications doing nearly 5 – 6 Gbps and 1.5 million packets per second
on a single SE, with 50% of the traffic being IP routed. The applications are latency and jitter
sensitive.

To achieve the above requirements, NSX Advanced Load Balancer recommends dedicating one of
the SE cores for non-data-path tasks. It can be achieved with the following configuration:

n Dedicated Dispatcher: True

n se_dp isolation mode: True

n Number of non-dp cores: 1

n Number of dispatcher cores: 2

n Number of queues per NIC: 2

n Number of proxy cores: 3
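
The datapath isolation settings in this example map to SE group properties. The following is a
minimal CLI sketch; the property names se_dp_isolation and se_dp_isolation_num_non_dp_cpus
used below are assumptions based on the Service Engine Datapath Isolation section, so verify
them for your release before applying:

[admin:cntrl]: > configure serviceenginegroup Default-Group
[admin:cntrl]: serviceenginegroup> dedicated_dispatcher
[admin:cntrl]: serviceenginegroup> se_dp_isolation
[admin:cntrl]: serviceenginegroup> se_dp_isolation_num_non_dp_cpus 1
[admin:cntrl]: serviceenginegroup> num_dispatcher_cores 2
[admin:cntrl]: serviceenginegroup> max_queues_per_vnic 2
[admin:cntrl]: serviceenginegroup> save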

Certificate Management Integration for Trust Anchor


This section deals with certificate management integration for trust anchor.

Installing Trust Anchor Signed Certificate


The NSX Advanced Load Balancer supports automation of the process for requesting and
installing a certificate signed by a trust anchor. This feature handles initial certificate registration
and the renewal of certificates based on certificate expiration.

To establish this, a Certificate Management Profile object is used. This object is created by
navigating to Templates > Security > Certificate Management. The Certificate Management
object provides a way for configuring a path to a certificate script, and a set of parameters that the
script needs (CSR, Common Name, and others) to integrate with a certificate management service
within the customer’s internal network. The script itself is left opaque by design to accommodate
the various certificate management services of different customers.

As a part of the SSL certificate configuration in NSX Advanced Load Balancer, you can select
CSR, fill in the necessary fields for the certificate, and select the certificate management profile
to which this certificate is bound. The Controller then uses the CSR and the script to obtain the
certificate, and renews the certificate upon expiration. As part of the renewal process, a new
public-private key pair is generated and a corresponding certificate is obtained from the
certificate management service.


Without this automation, the process of sending the CSR to the external trust anchor and
installation of the signed certificate and keys, must be performed by the NSX Advanced Load
Balancer user.

Note Python scripts are supported for this feature. The automated CSR workflow for SafeNet
HSM is also supported.

Configuring Certificate Management Integration


The following are the steps to configure certificate management integration:

1 Prepare a Python script that defines a certificate_request() method. The method must
accept the following inputs as a dictionary:

n CSR.

n Hostname for the Common Name field.

n Parameters defined in the certificate management profile.

2 Create a certificate management profile that calls the script.

Preparing the Script


The script must use the def certificate_request command as shown in the example below:

def certificate_request(csr, common_name, args_dict):


"""
Check if a token exists that can be used:
If not, authenticate against the service with the provided credentials.
Invoke the certificate request and get back a valid certificate.
Inputs:
@csr : Certificate signing request string. This is a multi-line string output like what
you get from openssl.
@common_name: Common name of the subject.
@args_dict: Dictionary of the key value pairs from the certificate management profile.
"

Note The specific parameter values to be passed to the script are specified within the certificate
management profile.
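
For illustration only, the following is a minimal sketch of such a script. It assumes a hypothetical
internal CA that accepts the CSR over a simple REST endpoint; the ca_url and api_token parameter
names are example keys that you might define in the certificate management profile, not required
names, and the exact return format expected by the Controller is not covered here.

import json
import urllib.request

def certificate_request(csr, common_name, args_dict):
    """Request a signed certificate for the given CSR from a hypothetical internal CA.

    csr         : PEM-encoded certificate signing request (multi-line string).
    common_name : Common Name of the certificate subject.
    args_dict   : Key-value pairs from the certificate management profile, for
                  example 'ca_url' and 'api_token' (illustrative names).
    Returns the signed certificate as a PEM string.
    """
    request = urllib.request.Request(
        url=args_dict['ca_url'],                 # hypothetical CA endpoint
        data=json.dumps({'csr': csr, 'cn': common_name}).encode(),
        headers={
            'Content-Type': 'application/json',
            # Illustrative credential passed through the profile parameters.
            'Authorization': 'Bearer ' + args_dict['api_token'],
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())['certificate']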

Hiding Sensitive Parameters


For parameters that are sensitive, for instance, passwords, the values can be hidden. Marking a
parameter sensitive prevents its value from being displayed in the web interface or passed by the
API.


Assigning Dynamic Parameter Values during CSR Creation


The value for a certificate management parameter can be assigned within the profile or within
individual CSRs.

n If the parameter value is assigned within the profile, the value applies to all CSRs generated
using the profile.

n To dynamically assign a parameter’s value, indicate within the certificate management profile
that the parameter is dynamic. This leaves the parameter’s value unassigned. The dynamic
parameter’s value is assigned when an individual CSR is created using the profile. The
parameter value applies only to the created CSR.

Creating the Certificate Management Profile


The following are the steps to create a certificate management profile:

1 Navigate to Templates > Security > Certificate Management and click Create.

2 Specify the name for the profile.

3 Select the control script for certificate management profile from the drop-down list.

4 If the profile must pass some parameter values to the script, select the Enable Custom
Parameters checkbox, and specify the parameter names and values.

a For parameters that are sensitive (for instance, passwords), select the Is Sensitive checkbox.
Marking a parameter sensitive prevents its value from being displayed in the web interface or
being passed by the API.

b For parameters that are to be dynamically assigned during CSR creation, select the Dynamic
checkbox. This leaves the parameter unassigned within the profile.

5 Click Save.

Using the Certificate Management Profile To Get Signed Certificates


After adding the script and creating the certificate management profile, the profile can be used to
easily obtain and install Trust Anchor-signed certificates as follows:

1 Navigate to Templates > Security > SSL/TLS Certificates, and click Create.

2 Select the Application Certificate option from the Create drop-down list.

3 Specify the name and select the CSR option in the Type drop-down list.

4 Select the certificate management profile configured in the previous section from the
Certificate Management Profile drop-down list.

5 Click Save.


The Controller generates a public-private key pair and CSR. It executes the script to request
the Trust Anchor-signed certificate from the PKI service, and saves the signed certificate in
persistent storage.



Migration of Service Engine Properties
15
This section lists some of the updates done to the CLI and API structure.

NSX Advanced Load Balancer allows you to tweak configuration parameters based on intended
use cases. Most of the commonly used parameters are available while using the UI. In addition, a
few advanced configuration parameters are available through the CLI and API alone.

This chapter includes the following topics:

n Service Engine Bootup Properties

n Changes in 20.1.3

Service Engine Bootup Properties


This section explains the SE bootup properties.

The SE bootup properties are available in the serviceengineproperties ->
se_bootup_properties CLI command.

The serviceengineproperties hierarchy is available at the Controller level, and the parameters are
applicable to all SEs across all clouds at the time of bootup.

Unsupported Service Engine Bootup Properties


The following parameters available under se_bootup_properties are no longer supported.

n ssl_sess_cache_timeout: The timeout can be configured using session_timeout in the
SSL profile.

Migration to Service Engine Group Properties


The following parameters are moved to the SE Group level:

n l7_conns_per_core

n ssl_sess_cache_per_vs

n l7_resvd_listen_conns_per_core

You can configure these parameters differently for each SE Group if required.


Migration of Compression Properties


The following parameters are moved to the Application Profile, under the compression_profile
level:

n buf_num

n buf_size

n level_normal

n level_aggressive

n window_size

n hash_size

You can configure these parameters differently for each Application Profile.

For more information on compression profile, see Compression.
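
The following is a minimal CLI sketch of adjusting one of these values. It assumes that the
compression profile is reached through the http_profile sub-mode of the application profile and
uses an illustrative value; verify the exact nesting and value ranges for your release:

[admin:controller]: > configure applicationprofile ap-1
[admin:controller]: applicationprofile> http_profile
[admin:controller]: applicationprofile:http_profile> compression_profile
[admin:controller]: applicationprofile:http_profile:compression_profile> buf_num 128
[admin:controller]: applicationprofile:http_profile:compression_profile> save
[admin:controller]: applicationprofile:http_profile> save
[admin:controller]: applicationprofile> save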

Updates to Service Engine Runtime Properties


The SE runtime properties are available under the serviceengineproperties ->
se_runtime_properties CLI command.

The serviceengineproperties hierarchy is available at the Controller level, and the parameters
apply to all SEs across all clouds. You can modify these parameters while SEs are running, and
the changes apply to all SEs, including those already running.

Unsupported Service Engine Runtime Properties


The following parameters available under se_runtime_properties are no longer supported:

n upstream_connpool_strategy: The alternative is to use connection_multiplexing_enabled
under Application Profile.

n spdy_fw_proxy_parse_enable

The following caching related properties under se_runtime_properties are no longer
supported:

n mcache_enabled: Alternate: Application Profile -> cache_config -> enabled

n mcache_store_in_min_size: Alternate: Application Profile -> cache_config -> min_object_size

n mcache_store_in_max_size: Alternate: Application Profile -> cache_config -> max_object_size

n mcache_store_se_max_size: Alternate: Service Engine Group -> app_cache_percent

n mcache_fetch_enabled

n mcache_store_in_enabled


n mcache_store_out_enabled

Migration to Service Engine Group Properties


The following parameters are moved to the SE Group level:

n upstream_connpool_enable

n upstream_connect_timeout

n upstream_send_timeout

n upstream_read_timeout

n downstream_send_timeout

n lbaction_num_requests_to_dispatch

n lbaction_rq_per_request_max_retries

n user_defined_metric_age

n enable_hsm_log

n ngx_free_connection_stack

n http_rum_console_log

n http_rum_min_content_length

You can configure these parameters differently for each SE Group, if required.

Migration of Compression Properties


The following parameters are moved to the Application Profile, under the compression_profile
level.

n min_length

n max_low_rtt

n min_high_rtt

n mobile_strs

You can configure these parameters differently for each Application Profile.

For more information on compression profile, see Compression.

Migration of LDAP/Basic Authentication Properties


The following parameters are moved to the Virtual Service, under ldap_vs_config:

n se_auth_ldap_cache_size

n se_auth_ldap_conns_per_server

n se_auth_ldap_reconnect_timeout


n se_auth_ldap_bind_timeout

n se_auth_ldap_request_timeout

n se_auth_ldap_servers_failover_only

You can configure these parameters differently for each virtual service.

For more information on LDAP/Basic Authentication, see Basic Authentication.
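
The following is a minimal CLI sketch of overriding one of these values on a virtual service. It
assumes a virtual service named auth-vs and uses an illustrative timeout value; confirm the units
for your release:

[admin:controller]: > configure virtualservice auth-vs
[admin:controller]: virtualservice> ldap_vs_config
[admin:controller]: virtualservice:ldap_vs_config> se_auth_ldap_bind_timeout 5000
[admin:controller]: virtualservice:ldap_vs_config> save
[admin:controller]: virtualservice> save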

Changes in 20.1.3
This section contains a list of changes made to the CLI and API structure.

Updates to Service Engine Bootup Properties


The following parameters are moved to the Service Engine Group level:

n se_ip_encap_ipc

n se_l3_encap_ipc

Updates to Service Engine Runtime Properties


The following parameters are moved to the Service Engine Group level.

n dp_hb_frequency

n db_hb_timeout_count

n dp_aggressive_hb_frequency

n dp_aggressive_hb_timeout_count

You can configure these parameters differently for each SE Group if required.

Upgrade Considerations
The seproperties-based APIs for these configuration knobs work only for NSX Advanced Load
Balancer versions released before the above changes were introduced.

The properties will be automatically migrated to the relevant equivalent configuration as part of
the upgrades.

API Considerations
Ensure that any automation using these properties is modified to the new API schema. Note that
the previous API schema remains available with an older “X-Avi-Version”.



HTTP/2 Support on NSX Advanced Load Balancer
16
This section discusses HTTP/2 support in NSX Advanced Load Balancer. HTTP/2 (originally
named HTTP/2.0) is the latest version of HTTP, developed over HTTP 1.1. HTTP/2 is a binary
protocol, while HTTP 1.1 is a text protocol.

The following are the benefits of HTTP/2 over HTTP/1.1:

n Request and Response Multiplexing

With HTTP/1.1, for multiple parallel requests, multiple TCP connections are opened. With HTTP/2,
multiple requests can be broken into frames that can be interleaved; the remote end is capable of
reassembling them. Multiple connections can still be opened, but the number of connections is not
as high as in HTTP/1.1.

n Server Push

It allows a server to send multiple resources in response to a client request without the client
explicitly sending a request for each of these resources. This reduces latency otherwise introduced
by waiting for each request to serve the resource. In HTTP/1.1, applications try to work around this
by inlining the resource. HTTP/2 enables the client to cache the resource, reuse it across pages,
and use multiplexing along with other resources.

n Flow control

HTTP/2 provides flow control at the application layer level by not allowing either end to
overwhelm the other side, by using window sizes.

n Header Compression

In HTTP/1.1, each header in the request is sent as text. In HTTP/2, the header compresses request
and response header metadata using the HPACK compression format, reducing the transmitted
header size.

n Stream Prioritization

Since the HTTP messages are sent as frames and the frames from different streams can be
interleaved, HTTP/2 can specify priorities for streams i.e. all frames received can be prioritized
based on their stream priorities.


Use Cases
n Workaround techniques used for HTTP 1.1 to make browsers compatible with HTTP/2, are no
longer required.

n All browsers that use HTTP/2 can be deployed on NSX Advanced Load Balancer.

n Support for the application using reverse proxy for gRPC.

Supported methods and modes for HTTP/2 on NSX Advanced Load Balancer

n HTTP over TLS or HTTP over SSL

The NSX Advanced Load Balancer supports the HTTP over TLS, or HTTP over SSL, method for all
HTTP/2 requests. This method uses TLS version 1.2 or later.

n All settings and options available for HTTP are also available for HTTP/2-enabled
virtual services. HTTP features such as HTTP policies, DataScripts, and HTTP timeout settings
are also supported for HTTP/2 requests.

HTTP/2 Support for Virtual Service, Pool and Pool Groups


HTTP/2 is supported for front-end and back-end (server-side) connections. The following
enhancements are available for HTTP/2:

n For front-end traffic, HTTP/2 is supported on both non-SSL and SSL enabled ports.

n For back-end traffic, HTTP/2 is enabled at the pool level.

Enabling HTTP/2 on Virtual Service and Pools


The HTTP/2 configuration option is not available through the application profile. It can be
accessed through virtual service configuration.

The following configuration changes support HTTP/2 on the front-end and the back-end:

n The http2_enabled flag is deprecated from the application profile.

n An enable_http2 flag is available for the virtual service.

n The enable_http2 flag is available for the pool and pool group level to indicate that all the
servers configured under this pool are HTTP/2.0 servers.

Configuring HTTP/2 using NSX Advanced Load Balancer


1 Navigate to Applications > Virtual Services.

2 Edit the existing virtual service or create a new one.

3 Select the check box for HTTP2 available under Settings > Service Ports.


4 HTTP2 can be enabled for SSL and non-SSL ports as required.

To enable HTTP2 for the pools and the pool groups associated with the virtual service:

5 Navigate to Applications > Pools and click Create Pool or use an existing one. Select the
Enable HTTP2 check box available under Servers.

6 Navigate to Applications > Pool Groups and click Create Pool Group or use an existing one.
To enable HTTP2, select the Enable HTTP2 check box available under Pool Servers.


Configuring HTTP/2 using NSX Advanced Load Balancer CLI


Use the enable_http2 flag from the configuration virtual service mode.

n Configuring Virtual Service

[admin:controller]: > configure virtualservice http2_vs


[admin:controller]: virtualservice> services index 1
[admin:controller]: virtualservice:services> enable_http2

Similarly, use the enable_http2 flag for the associated pool and pool groups for the virtual service.

n Configuring Pool

[admin:controller]: > configure pool v2-pool


[admin:controller]: pool> enable_http2

n Configuring Pool group


Use the following steps to enable HTTP/2 for a pool group.

[admin:controller]: > configure poolgroup v2-pg


[admin:controller]: poolgroup> enable_http2

n Enabling HTTP/2 for Existing Pool Groups: To enable HTTP/2 for an existing pool group:

n Remove all pool members from the pool groups.

n Configure enable_http2 for all the pools using the steps mentioned above.

n Configure enable_http2 for the pool group.

n Add all the pools to the pool group.

n Checking Status

The show virtualservice <virtual service name> command exhibits the flag value set as true,
as shown in the following code output:

[admin:controller]: > show virtualservice http2-vs


+------------------------------------+-----------------------------------------------------+
| Field | Value |
+------------------------------------+-----------------------------------------------------+
| uuid | virtualservice-2d67f7ed-eeee-4b81-af01-78ded84f4352 |
| name | http2-vs |
| enabled | True |
| services[1] | |
| port | 80 |
| enable_ssl | False |
| port_range_end | 80 |
| enable_http2 | True |
<----
| services[2] | |
| port | 443 |
| enable_ssl | True |
| port_range_end | 443 |
<----
| enable_http2 | True |
| application_profile_ref | applicationprofile-22 |
| network_profile_ref | System-TCP-Proxy |
| pool_ref | v2-pool |
| se_group_ref | Default-Group |
| http_policies[1] | |
| index | 11 |
| http_policy_set_ref | http_request_policy_1 |
+------------------------------------+-----------------------------------------------------+

[admin:controller]: > show pool v2-pool


+---------------------------------------+------------------------------------------------+
| Field | Value |
+---------------------------------------+------------------------------------------------+
| uuid | pool-5f38c27f-ff10-48e1-88e2-a9a2b39ad198 |
| name | v2-pool |


| default_server_port | 80 |
| graceful_disable_timeout | 1 min |
| connection_ramp_duration | 10 min |
| max_concurrent_connections_per_server | 0 |
| servers[1] | |
| ip | 10.90.103.72 |
| port | 80 |
| hostname | 10.90.103.72 |
| enabled | True |
| ratio | 1 |
| verify_network | False |
| resolve_server_by_dns | False |
| static | False |
| rewrite_host_header | False |
| lb_algorithm | LB_ALGORITHM_LEAST_CONNECTIONS |
| lb_algorithm_hash | LB_ALGORITHM_CONSISTENT_HASH_SOURCE_IP_ADDRESS |
| inline_health_monitor | True |
| use_service_port | False |
| capacity_estimation | False |
| capacity_estimation_ttfb_thresh | 0 milliseconds |
| vrf_ref | global |
| fewest_tasks_feedback_delay | 10 sec |
| enabled | True |
| request_queue_enabled | False |
| request_queue_depth | 128 |
| host_check_enabled | False |
| sni_enabled | True |
| rewrite_host_header_to_sni | False |
| rewrite_host_header_to_server_name | False |
| lb_algorithm_core_nonaffinity | 2 |
| lookup_server_by_name | False |
| analytics_profile_ref | System-Analytics-Profile |
| tenant_ref | admin |
| cloud_ref | Default-Cloud |
| server_timeout | 0 milliseconds |
| delete_server_on_dns_refresh | True |
| enable_http2 |
True | <-----
+---------------------------------------+------------------------------------------------+

Limitations
n Caching is not supported for HTTP/2 pool.

n HTTP/2 Connection multiplexing on upstream is not supported.

n HTTP/2 health monitor support is not available. If the back-end server only supports HTTP/2,
HTTP(S) health monitor will not work for this pool. Only TCP or PING health monitor must be
configured for this pool.

n If the back-end server is SSL-enabled and supports both HTTP/1 and HTTP/2, HTTPS health
monitor must be configured with its own attribute.

n For non-SSL enabled port, HTTP/1.1 to HTTP/2.0 upgrade is not supported.


n The client must be aware that the non-SSL port can only support HTTP/2.

n The back-end pool assumes that servers listening on the configured port only support HTTP/2
and will not send an HTTP1.1 to HTTP/2.0 upgrade.

n For the pool group, the HTTP version must be matched between the pool group and all the
associated pools.

n The HTTP/1.1 chunked transfer-encoding mechanism on one side (the front end) and the HTTP/2
chunked mechanism on the other side (the back end) are not supported together. If chunking is
present, that is, if stream mode or partial buffer mode is required with a chunk on one side and a
V2 chunk on the other side, this method is not supported.

Upgrade
If HTTP/2 is enabled in the application profile for a virtual service listening on ports 80 and 443,
after an upgrade, HTTP/2 will be automatically enabled on the virtual service on port 443.
HTTP/2 will not be enabled on port 80, which is a non-SSL-enabled port.

Additional Configuration Options for HTTP/2 Profile


The HTTP/2 configuration fields have been moved to a sub-field under http_profile called
http2_profile. The following fields have been added to the http2_profile:
n max_http2_requests_per_connection – This value controls the maximum number of requests
that can be sent over a client-side HTTP/2 connection. If the value is set to 0, an unlimited
number of requests can be sent over an HTTP/2 client-side connection.

n max_http2_header_field – This field controls the maximum size (in bytes) of the compressed
request header field. The limit applies equally to both name and value of the request header. It
can be between 1 and 8192 bytes. The default value is 4096 bytes.

n http2_initial_window_size – This field controls the window size of the initial flow control in
HTTP/2 streams. The value for this field ranges from 64 to 32768 KB. The default value is
64KB.

n max_http2_control_frames_per_connection – This field controls the number of control
frames that the client can send over an HTTP/2 connection. The value for this field ranges from
0 to 10000, and the default value is 1000. Set the value to zero to allow an unlimited number of
control frames to be sent on a client-side HTTP/2 connection.

n max_http2_queued_frames_to_client_per_connection – This field controls the number of
frames that can be queued waiting to be sent over a client-side HTTP/2 connection at any
given time. The value for this field ranges from 0 to 10000, and the default value is 1000. The
zero value for this parameter indicates that unlimited frames can be queued on a client-side
HTTP/2 connection.


n max_http2_empty_data_frames_per_connection – This field controls the number of empty
data frames that the client can send over an HTTP/2 connection. The value for this field ranges
from 0 to 10000, and the default value is 1000. The zero value for this parameter indicates that
unlimited empty data frames can be sent over a client-side HTTP/2 connection.

n max_http2_concurrent_streams_per_connection – This field configures the maximum
number of concurrent streams over a client-side HTTP/2 connection. The value for this field
ranges from 1 to 256, and the default value is 128.

The following CLI snippets exhibit configuration samples for the options mentioned above.
Log in to the NSX Advanced Load Balancer CLI, and use the applicationprofile mode and the
http2_profile option to change the values of these parameters.

[admin:controller]: > configure applicationprofile ap-1


Updating an existing object. Currently, the object is:
+------------------------------------------------------
+---------------------------------------------------------+
| Field |
Value |
+------------------------------------------------------
+---------------------------------------------------------+
| uuid |
applicationprofile-1d264f41-d19c-445e-a4df-740dca5957f0 |
| name |
ap-1 |
| type |
APPLICATION_PROFILE_TYPE_HTTP |
| http_profile
| |
| connection_multiplexing_enabled |
True |
| xff_enabled |
True |
| xff_alternate_name |
X-Forwarded-For |
| hsts_enabled |
False |
| hsts_max_age |
365 |
| secure_cookie_enabled |
False |
| httponly_enabled |
False |
| http_to_https |
False |
| server_side_redirect_to_https |
False |
| x_forwarded_proto_enabled |
False |
| post_accept_timeout |
30000 milliseconds |
| client_header_timeout |
10000 milliseconds |


| client_body_timeout |
30000 milliseconds |
| keepalive_timeout |
30000 milliseconds |
| client_max_header_size |
12 kb |
| client_max_request_size |
48 kb |
| client_max_body_size |
0 kb |
| max_rps_unknown_uri |
0 |
| max_rps_cip |
0 |
| max_rps_uri |
0 |
| max_rps_cip_uri |
0 |
| ssl_client_certificate_mode |
SSL_CLIENT_CERTIFICATE_NONE |
| websockets_enabled |
True |
| max_rps_unknown_cip |
0 |
| max_bad_rps_cip |
0 |
| max_bad_rps_uri |
0 |
| max_bad_rps_cip_uri |
0 |
| keepalive_header |
False |
| use_app_keepalive_timeout |
False |
| allow_dots_in_header_name |
False |
| disable_keepalive_posts_msie6 |
True |
| enable_request_body_buffering |
False |
| enable_fire_and_forget |
False |
| max_response_headers_size |
48 kb |
| respond_with_100_continue |
True |
| hsts_subdomains_enabled |
True |
| enable_request_body_metrics |
False |
| fwd_close_hdr_for_bound_connections |
True |
| max_keepalive_requests |
100 |
| disable_sni_hostname_check |


False |
| reset_conn_http_on_ssl_port |
False |
| http_upstream_buffer_size |
0 kb |
| enable_chunk_merge |
True |
| http2_profile
| |
| max_http2_control_frames_per_connection |
1000 |
| max_http2_queued_frames_to_client_per_connection |
1000 |
| max_http2_empty_data_frames_per_connection |
1000 |
| max_http2_concurrent_streams_per_connection |
128 |
| max_http2_requests_per_connection |
1000 |
| max_http2_header_field_size |
4096 bytes |
| http2_initial_window_size |
64 kb |
| preserve_client_ip |
False |
| preserve_client_port |
False |
| preserve_dest_ip_port |
False |
| tenant_ref |
admin |
+------------------------------------------------------
+---------------------------------------------------------+
[admin:controller]: applicationprofile> http_profile
[admin:controller]: applicationprofile:http_profile> http2_profile
[admin:controller]: applicationprofile:http_profile:http2_profile>
max_http2_control_frames_per_connection 2000
Overwriting the previously entered value for max_http2_control_frames_per_connection
[admin:controller]: applicationprofile:http_profile:http2_profile>
max_http2_queued_frames_per_connection 2000
No command or arguments found in 'max_http2_queued_frames_per_connection 2000'.
[admin:controller]: applicationprofile:http_profile:http2_profile>
max_http2_queued_frames_to_client_per_connection 2000
Overwriting the previously entered value for max_http2_queued_frames_to_client_per_connection
[admin:controller]: applicationprofile:http_profile:http2_profile>
max_http2_concurrent_streams_per_connection 256
Overwriting the previously entered value for max_http2_concurrent_streams_per_connection
[admin:controller]: applicationprofile:http_profile:http2_profile>
max_http2_requests_per_connection 2500
Overwriting the previously entered value for max_http2_requests_per_connection
[admin:controller]: applicationprofile:http_profile:http2_profile>
http2_initial_window_size 256
Overwriting the previously entered value for http2_initial_window_size
[admin:controller]: applicationprofile:http_profile:http2_profile>
max_http2_header_field_size 8192


Overwriting the previously entered value for max_http2_header_field_size


[admin:controller]: applicationprofile:http_profile:http2_profile> save
[admin:controller]: applicationprofile:http_profile> save
[admin:controller]: applicationprofile> save
+------------------------------------------------------
+---------------------------------------------------------+
| Field
| Value |
+------------------------------------------------------
+---------------------------------------------------------+
| uuid
| applicationprofile-1d264f41-d19c-445e-a4df-740dca5957f0 |
| name
| ap-1 |
| type
| APPLICATION_PROFILE_TYPE_HTTP |
| http_profile
| |
| connection_multiplexing_enabled
| True |
| xff_enabled
| True |
| xff_alternate_name
| X-Forwarded-For |
| hsts_enabled
| False |
| hsts_max_age
| 365 |
| secure_cookie_enabled
| False |
| httponly_enabled
| False |
| http_to_https
| False |
| server_side_redirect_to_https
| False |
| x_forwarded_proto_enabled
| False |
| post_accept_timeout
| 30000 milliseconds |
| client_header_timeout
| 10000 milliseconds |
| client_body_timeout
| 30000 milliseconds |
| keepalive_timeout
| 30000 milliseconds |
| client_max_header_size
| 12 kb |
| client_max_request_size
| 48 kb |
| client_max_body_size
| 0 kb |
| max_rps_unknown_uri
| 0 |
| max_rps_cip


| 0 |
| max_rps_uri
| 0 |
| max_rps_cip_uri
| 0 |
| ssl_client_certificate_mode
| SSL_CLIENT_CERTIFICATE_NONE |
| websockets_enabled
| True |
| max_rps_unknown_cip
| 0 |
| max_bad_rps_cip
| 0 |
| max_bad_rps_uri
| 0 |
| max_bad_rps_cip_uri
| 0 |
| keepalive_header
| False |
| use_app_keepalive_timeout
| False |
| allow_dots_in_header_name
| False |
| disable_keepalive_posts_msie6
| True |
| enable_request_body_buffering
| False |
| enable_fire_and_forget
| False |
| max_response_headers_size
| 48 kb |
| respond_with_100_continue
| True |
| hsts_subdomains_enabled
| True |
| enable_request_body_metrics
| False |
| fwd_close_hdr_for_bound_connections
| True |
| max_keepalive_requests
| 100 |
| disable_sni_hostname_check
| False |
| reset_conn_http_on_ssl_port
| False |
| http_upstream_buffer_size
| 0 kb |
| enable_chunk_merge
| True |
| http2_profile
| |
| max_http2_control_frames_per_connection
| 2000 |
| max_http2_queued_frames_to_client_per_connection
| 2000 |


| max_http2_empty_data_frames_per_connection
| 1000 |
| max_http2_concurrent_streams_per_connection
| 256 |
| max_http2_requests_per_connection
| 2500 |
| max_http2_header_field_size
| 8192 bytes |
| http2_initial_window_size
| 256 kb |
| preserve_client_ip
| False |
| preserve_client_port
| False |
| preserve_dest_ip_port
| False |
| tenant_ref
| admin |
+------------------------------------------------------
+---------------------------------------------------------+
[admin:controller]: >

[admin:controller]: > show applicationprofile ap-1


+------------------------------------------------------
+---------------------------------------------------------+
| Field |
Value |
+------------------------------------------------------
+---------------------------------------------------------+
| uuid |
applicationprofile-1d264f41-d19c-445e-a4df-740dca5957f0 |
| name |
ap-1 |
| type |
APPLICATION_PROFILE_TYPE_HTTP |
| http_profile
| |
| connection_multiplexing_enabled |
True |
| xff_enabled |
True |
| xff_alternate_name |
X-Forwarded-For |
| hsts_enabled |
False |
| hsts_max_age |
365 |
| secure_cookie_enabled |
False |
| httponly_enabled |
False |
| http_to_https |
False |
| server_side_redirect_to_https |
False |


| x_forwarded_proto_enabled |
False |
| post_accept_timeout |
30000 milliseconds |
| client_header_timeout |
10000 milliseconds |
| client_body_timeout |
30000 milliseconds |
| keepalive_timeout |
30000 milliseconds |
| client_max_header_size |
12 kb |
| client_max_request_size |
48 kb |
| client_max_body_size |
0 kb |
| max_rps_unknown_uri |
0 |
| max_rps_cip |
0 |
| max_rps_uri |
0 |
| max_rps_cip_uri |
0 |
| ssl_client_certificate_mode |
SSL_CLIENT_CERTIFICATE_NONE |
| websockets_enabled |
True |
| max_rps_unknown_cip |
0 |
| max_bad_rps_cip |
0 |
| max_bad_rps_uri |
0 |
| max_bad_rps_cip_uri |
0 |
| keepalive_header |
False |
| use_app_keepalive_timeout |
False |
| allow_dots_in_header_name |
False |
| disable_keepalive_posts_msie6 |
True |
| enable_request_body_buffering |
False |
| enable_fire_and_forget |
False |
| max_response_headers_size |
48 kb |
| respond_with_100_continue |
True |
| hsts_subdomains_enabled |
True |
| enable_request_body_metrics |


False |
| fwd_close_hdr_for_bound_connections |
True |
| max_keepalive_requests |
100 |
| disable_sni_hostname_check |
False |
| reset_conn_http_on_ssl_port |
False |
| http_upstream_buffer_size |
0 kb |
| enable_chunk_merge |
True |
| http2_profile
| |
| max_http2_control_frames_per_connection |
1000 |
| max_http2_queued_frames_to_client_per_connection |
1000 |
| max_http2_empty_data_frames_per_connection |
1000 |
| max_http2_concurrent_streams_per_connection |
128 |
| max_http2_requests_per_connection |
1000 |
| max_http2_header_field_size |
4096 bytes |
| http2_initial_window_size |
64 kb |
| preserve_client_ip |
False |
| preserve_client_port |
False |
| preserve_dest_ip_port |
False |
| tenant_ref |
admin |
+------------------------------------------------------
+---------------------------------------------------------+

Logs
Logs can be checked using one of the following modes:

n Using NSX Advanced Load Balancer UI

The application logs of NSX Advanced Load Balancer display HTTP/2.0 as the HTTP version in
the request. Navigate to Applications > Virtual Services, select the desired virtual service, and
navigate to the Logs tab to check the logs.


Errors related to HTTP/2 requests and responses can be checked under Significant logs.

n Using NSX Advanced Load Balancer CLI

The following counters are available for the HTTP/2 feature and can be used during
troubleshooting.

n Request handled error

n Response codes (2xx, 3xx, 4xx, and 5xx)

n Protocol errors

n Flow-control error

n Frame size errors

n Compression errors

n Refused Stream errors

Use the show virtualservice <virtual service name> detail command to check for available
counters for the HTTP/2 method.

[admin:controller]: > show virtualservice <virtual service name> detail

| cache_bytes | 0 |
| http2_requests_handled | 2 |
| http2_response_2xx | 2 |
| http2_response_3xx | 0 |
| http2_response_4xx | 0 |
| http2_response_5xx | 0 |
| http2_protocol_errors | 0 |
| http2_flow_control_errors | 0 |
| http2_frame_size_errors | 0 |
| http2_compression_errors | 0 |


| http2_refused_stream_errors | 0 |
| http2_enhance_your_calm | 0 |
| http2_miscellaneous_errors | 0

Use the show pool <pool name> detail command to check HTTP/2-related errors in pool status.

[admin:controller]: > show pool v2-pool detail | grep http2


| http2_protocol_header_errors | 0 |
| http2_protocol_other_errors | 0 |
| http2_flow_control_errors | 0 |
| http2_frame_size_errors | 0 |
| http2_compression_errors | 0 |
| http2_refused_stream_errors | 0 |
| http2_enhance_your_calm | 0 |
| http2_misc_errors | 0 |
[admin:controller]:

This chapter includes the following topics:

n IP Group

IP Group
IP groups are comma-separated lists of IP addresses that can be referenced by profiles, policies,
and logs. Each entry in this list can be an IPv4 address, an IP range, an IP mask, or a country code.
IP groups are reusable objects that can be referenced by any number of features attached to any
number of virtual services. IP groups are commonly used for service classification, whitelisting,
or blacklisting, and can be automatically updated through external API calls. When an IP group
is updated, the update is pushed from the Controller to any Service Engine hosting virtual
services that leverage the IP group.

IP Group Usage
The following are a few examples of IP groups used within NSX Advanced Load Balancer.
Generally, an IP group can be used in (or assigned to) any object that accepts an IP address. The
following are the objects in NSX Advanced Load Balancer that can use IP groups.

n Policies

A network security or HTTP security policy can be configured to drop any clients coming from a
blacklist of IP addresses. Instead of maintaining a long list within the policy, the NSX Advanced
Load Balancer maintains the rule logic of that policy separately from the list of addresses kept in
the IP group. A user can be granted a role that allows them to update the list of IP addresses
without being able to change the policy itself.

n Logs


Logs classify clients by their IP address and match them against an included geographic country
location database. Override this database by using a custom IP group to create specific mappings
such as internal IP addresses. For example, LA_Office can contain 10.1.0.0/16, while NY_Office
contains 10.2.0.0/16. Logs show these clients as originating from these locations. Log searches
can also be performed on the group name, such as LA_Office.

n DataScript

Custom decisions can be made based on a client’s inclusion or exclusion in an IP group. For
examples and syntax, see the DataScript function avi.ipgroup.contains.

n Pool Servers

If multiple pools are needed with different configurations but with the same list of servers, the
server IP address can be placed into the IP group. Each subscribing pool automatically inherits
the change in membership if an IP is added or removed from the group.

The table on the Templates > Groups > IP Group page contains the following information for each
IP group:

Name - Name of the IP address group.

IP Address or Ranges - Number of IP addresses, networks, or address ranges.

Country Codes or EPG - Any configured country codes that are listed.

Creating an IP Group
To create or edit an IP Group:

n Name - Specify a unique name for the IP group.

n Type - Select one of the following from the Type drop-down menu.

n IP Address.

n Country Code.

n Import IP Address From File - Click IMPORT FILE to upload a comma-separated-value
(CSV) file that contains any combination of IP addresses, ranges, or masks.

n ADD - Click ADD to add single IP addresses, comma-separated lists of addresses, or ranges of
IP addresses.

n Country Code - Populate the IP address ranges from the geo database for this country.

n Select by Country Code — Select one or more countries, or type the country name into the
search field to filter. Countries may not be combined within an IP group with individual IP
addresses. An IP group that contains countries may not be used as a list of servers for pool
membership.
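
IP groups can also be created from the CLI. The following is a minimal sketch; the ipaddrgroup
object name and the addrs field are assumptions based on the API object model, and
blocked-clients is an example group name:

[admin:controller]: > configure ipaddrgroup blocked-clients
[admin:controller]: ipaddrgroup> addrs 10.1.1.1
[admin:controller]: ipaddrgroup> addrs 10.1.1.2
[admin:controller]: ipaddrgroup> save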

