Deployment Guide
Release 3.3
Modified: 2018-05-14
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the
year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks
software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at
https://www.juniper.net/support/eula/. By downloading, installing or using such software, you agree to the terms and conditions of that
EULA.
Chapter 3 Installing and Configuring the Network Devices and Servers for a Centralized Deployment (page 55)
    Cabling the Hardware for the Centralized Deployment (page 55)
    Configuring the EX Series Ethernet Switch for the Contrail Cloud Implementation in a Centralized Deployment (page 58)
    Configuring the QFX Series Switch for the Contrail Cloud Implementation in a Centralized Deployment (page 59)
    Configuring the MX Series Router in the Contrail Cloud Implementation for a Centralized Deployment (page 61)
    Configuring the Physical Servers and Nodes for the Contrail Cloud Implementation in a Centralized Deployment (page 63)
Chapter 4 Installing and Configuring the Network Devices and Servers for a Distributed Deployment or SD-WAN Solution (page 65)
    Configuring the Physical Servers in a Distributed Deployment (page 65)
    Configuring the MX Series Router in a Distributed Deployment (page 66)
    Installing and Setting Up CPE Devices (page 70)
        Preparing for CPE Device Activation (page 70)
        Installing and Configuring an NFX250 Device (page 70)
        Installing and Configuring an SRX Series Services Gateway or vSRX Instance as a CPE Device (page 70)
Chapter 5 Installing and Configuring Contrail Service Orchestration (page 73)
    Removing a Previous Deployment (page 73)
    Provisioning VMs on Contrail Service Orchestration Nodes or Servers (page 74)
        Before You Begin (page 75)
        Downloading the Installer (page 75)
        Creating a Bridge Interface for KVM (page 76)
        Creating a Data Interface for a Distributed Deployment (page 78)
        Customizing the Configuration File for the Provisioning Tool (page 79)
        Provisioning VMs with the Provisioning Tool for the KVM Hypervisor (page 104)
        Provisioning VMware ESXi VMs Using the Provisioning Tool (page 104)
        Manually Provisioning VRR VMs on the Contrail Service Orchestration Node or Server (page 107)
        Verifying Connectivity of the VMs (page 107)
    Setting Up the Installation Package and Library Access (page 107)
        Copying the Installer Package to the Installer VM (page 108)
        Creating a Private Repository on an External Server (page 108)
    Installing and Configuring Contrail Service Orchestration (page 109)
        Before You Begin (page 109)
        Creating the Configuration Files (page 112)
        Deploying Infrastructure Services (page 117)
        Deploying Microservices (page 117)
        Checking the Status of the Microservices (page 118)
        Loading Data (page 119)
        Performing a Health Check of Infrastructure Components (page 120)
    Generating and Encrypting Passwords for Infrastructure Components (page 123)
If the information in the latest release notes differs from the information in the
documentation, follow the product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject
matter experts. These books go beyond the technical documentation to explore the
nuances of network architecture, deployment, and administration. The current list can
be viewed at https://www.juniper.net/books.
Documentation Conventions
Caution Indicates a situation that might result in loss of data or hardware damage.
Laser warning Alerts you to the risk of personal injury from a laser.
Table 2 on page xii defines the text and syntax conventions used in this guide.
Bold text like this: Represents text that you type. Example: To enter configuration mode, type the configure command: user@host> configure
Fixed-width text like this: Represents output that appears on the terminal screen. Example: user@host> show chassis alarms returns No alarms currently active
Italic text like this: Introduces or emphasizes important new terms, identifies guide names, and identifies RFC and Internet draft titles. Examples: A policy term is a named structure that defines match conditions and actions; Junos OS CLI User Guide; RFC 1997, BGP Communities Attribute
Italic text like this: Represents variables (options for which you substitute a value) in commands or configuration statements. Example: Configure the machine's domain name: [edit] root@# set system domain-name domain-name
Text like this: Represents names of configuration statements, commands, files, and directories; configuration hierarchy levels; or labels on routing platform components. Examples: To configure a stub area, include the stub statement at the [edit protocols ospf area area-id] hierarchy level. The console port is labeled CONSOLE.
< > (angle brackets): Encloses optional keywords or variables. Example: stub <default-metric metric>;
# (pound sign): Indicates a comment specified on the same line as the configuration statement to which it applies. Example: rsvp { # Required for dynamic MPLS only
[ ] (square brackets): Encloses a variable for which you can substitute one or more values. Example: community name members [ community-ids ]
GUI Conventions
Bold text like this: Represents graphical user interface (GUI) items you click or select. Examples: In the Logical Interfaces box, select All Interfaces. To cancel the configuration, click Cancel.
> (bold right angle bracket): Separates levels in a hierarchy of menu selections. Example: In the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
• Online feedback rating system—On any page of the Juniper Networks TechLibrary site
at https://www.juniper.net/documentation/index.html, simply click the stars to rate the
content, and use the pop-up form to provide us with information about your experience.
Alternatively, you can use the online feedback form at
https://www.juniper.net/documentation/feedback/.
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active J-Care or Partner Support Service
support contract, or are covered under warranty, and need post-sales technical support,
you can access our tools and resources online or open a case with JTAC.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day,
7 days a week, 365 days a year.
• Find solutions and answer questions using our Knowledge Base: https://kb.juniper.net/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://entitlementsearch.juniper.net/entitlementsearch/
The Juniper Networks Cloud Customer Premises Equipment (CPE) and SD-WAN solutions use Contrail Service Orchestration (CSO) to transform traditional branch networks, offering greater network flexibility, rapid introduction of new services, automated network administration, and cost savings. The solutions can be implemented by service providers for their customers or by enterprise IT departments in a campus and branch environment. In this documentation, service providers and enterprise IT departments are called service providers, and the consumers of their services are called customers.
The Cloud CPE solution supports both Juniper Networks and third-party virtualized
network functions (VNFs) that network providers use to create the network services. The
following deployment models are available:
• Centralized deployment, in which network services reside on servers in the service provider's cloud (cloud sites).
• Distributed deployment, in which network services reside on a CPE device at a customer's site (on-premise sites).
• Combined deployment, in which the network contains both service edge sites and on-premise sites.
A customer can have both cloud sites and tenant sites; however, you cannot share a
network service between the centralized and distributed deployments. If you require
the same network service for the centralized deployment and the distributed
deployment, you must create two identical network services with different names.
You must consider several issues when choosing whether to employ one or both types
of deployment. The centralized deployment offers a fast migration route and this
deployment is the recommended model for sites that can accommodate network
services—particularly security services—in the cloud. In contrast, the distributed
deployment supports private hosting of network services on a CPE device at a customer’s
site, and can be extended to offer software defined wide area networking (SD-WAN)
capabilities. Implementing a combination network in which some sites use the centralized
deployment and some sites use the distributed deployment provides appropriate access
for different sites.
The SD-WAN solution offers a flexible and automated way to route traffic through the
cloud. Similar to a distributed deployment, this implementation uses CPE devices located
at on-premise sites to connect to the LAN segments. Hub-and-spoke and full mesh
topologies are supported. The CSO software uses SD-WAN policies and service-level
agreement measurements to differentiate and route traffic for different applications.
One CSO installation can support a combined centralized and distributed deployment
and an SD-WAN solution simultaneously.
You can either use the solutions as turnkey implementations or connect to other
operational support and business support systems (OSS/BSS) through northbound
Representational State Transfer (REST) APIs.
The Cloud CPE solution uses the following components for the NFV environment:
• Network Service Orchestrator, which manages the life cycle of network services. This application includes RESTful APIs that you can use to create and manage network service catalogs.
Other CSO components connect to Network Service Orchestrator through its RESTful
API:
• Administration Portal, which you use to set up and manage your virtual network and
customers through a graphical user interface (GUI).
Administration Portal offers role-based access control for administrators and operators.
Administrators can make changes; however, operators can only view the portal.
• Customer Portal, a GUI that your customers use to manage sites, CPE devices, and
network services for their organizations.
Customer Portal offers role-based access control for administrators and operators.
Administrators can make changes; however, operators can only view the portal.
• Designer Tools:
• Configuration Designer, which you use to create configuration templates for virtualized
network functions (VNFs). When you publish a configuration template, it is available
for use in Resource Designer.
• Resource Designer, which you use to create VNF packages. A VNF package consists
of a configuration template and specifications for resources. You use configuration
templates that you create with Configuration Designer to design VNF packages.
When you publish a VNF package, it is available for use in Network Service Designer.
• Network Service Designer, which you use to create a network service package. The
package offers a specified performance and provides one or more specific network
functions, such as a firewall or NAT, through one or more specific VNFs.
• Service and Infrastructure Monitor, which works with Icinga, an open source enterprise monitoring system, to provide real-time data about the Cloud CPE solution, such as the status of virtualized network functions (VNFs), virtual machines (VMs), and physical servers; information about physical servers' resources; components of a network service (the VNFs and the VMs hosting each VNF); and counters and other information for VNFs.
The Cloud CPE solution extends the NFV model through the support of physical network
elements (PNEs). A PNE is a networking device in the deployment that you can configure
through CSO, but not use in a service chain. Configuration of the PNE through CSO as
opposed to other software, such as Contrail or Junos OS, simplifies provisioning of the
physical device through automation. Combining provisioning and configuration for PNEs
and VNFs provides end-to-end automation in network configuration workflows. An
example of a PNE is the MX Series router that acts as an SDN gateway in a centralized
deployment.
In the distributed deployment, VNFs reside on a CPE device located at a customer site. The NFX250 Network Services Platform hosts the vSRX application to enable routing and IPsec VPN access to the service provider's POP. MX Series routers, configured as provider edge (PE) routers, provide managed Layer 1 and Layer 2 access and managed MPLS Layer 3 access to the POP. Network Service Controller provides the VIM, NFVI, and device management for the NFX250. Network Service Controller includes Network Activator, which enables remote activation of the NFX Series device when the site administrator connects the device and switches it on.
Figure 1 on page 20 illustrates how the components in the Cloud CPE solution interact
and how they comply with the ETSI NFV MANO model.
[Figure 1: Architecture of the Cloud CPE solution mapped to the ETSI NFV MANO model. Administration Portal, Customer Portal, and Network Service Designer connect through API connections to the NFV MANO components: the network service and VNF catalogs, the NFV instances and NFVI resources repositories, and PNE/VNF Manager. The EMS provides element management for the VNFs, and PNE/VNF Manager manages the PNE and VNFs. The NFVI and VIM are provided by COTS servers with Ubuntu and Contrail OpenStack for the centralized deployment, and by the NFX Series platform and Network Service Controller for the distributed deployment.]
The following process describes the interactions of the components when a customer
requests the activation of a network service:
1. Customers send requests for activation of network services through Customer Portal or OSS/BSS applications.
2. Network Service Orchestrator receives the requests through its northbound RESTful API and:
   • For the centralized deployment, accesses information about the network service and associated VNFs from their respective catalogs, and communicates this information to the VIM, which is provided by Contrail OpenStack.
   • For the distributed deployment, accesses information about the network service and associated VNFs from their respective catalogs, and communicates this information to Network Service Controller.
3. For the centralized deployment, the VIM creates the service chains and associated VMs in the NFVI, which is provided by the servers and Ubuntu. Contrail OpenStack creates one VM for each VNF in the service chain. For the distributed deployment, Network Service Controller creates the service chains and associated VMs in the NFVI, which is provided by the CPE device.
4. VNF Manager starts managing the VNF instances while the element management system (EMS) performs element management for the VNFs.
The PNE fits into the NFV model in a similar, though not identical, way to the VNFs.
1. Network Service Orchestrator receives the request through its northbound RESTful
API and sends information about the PNE to PNE/VNF Manager.
3. VNF Manager starts managing the VNF instances and the EMS starts element
management for the VNFs.
1. Network Service Orchestrator receives the request through its northbound RESTful
API.
Figure 2 on page 22 shows the topology of the Cloud Customer Premises Equipment (CPE) and SD-WAN solutions. You can use one Contrail Service Orchestration (CSO) installation for all or any of the supported solutions and deployments:
• Centralized deployment
• Distributed deployment
• SD-WAN solution
Different sites for an enterprise might connect to different regional POPs, depending on
the geographical location of the sites. Within an enterprise, traffic from a site that connects
to one regional POP travels to a site that connects to another regional POP through the
central POP. A site can connect to the Internet and other external links through either
the regional POP or the central POP.
Service providers use the central server to set up the Cloud CPE solution through
Administration Portal. Similarly, customers activate and manage network services through
their own dedicated view of Customer Portal on the central server.
Centralized Deployment
[Figure: Topology of the centralized deployment. Enterprise sites connect to regional POPs (Region One, Region Two, and Region Three), which connect through the IP/MPLS core to a central POP that hosts the Contrail Cloud Reference Architecture (CCRA) and Network Service Orchestrator (NSO). Sites reach the Internet and public cloud through either a regional POP or the central POP.]
The central and regional POPs contain one or more Contrail Cloud implementations.
VNFs reside on Contrail compute nodes and service chains are created in Contrail. You
can choose whether to use the CSO OpenStack Keystone on the central infrastructure
server or the OpenStack Keystone on the Contrail controller node in the central POP to
authenticate CSO operations. The Contrail Cloud implementation provides Contrail
Analytics for this deployment.
The MX Series router in the Contrail Cloud implementation is an SDN gateway and
provides a Layer 3 routing service to customer sites through use of virtual routing and
forwarding (VRF) instances, known in Junos OS as Layer 3 VPN routing instances. A
unique routing table for each VRF instance separates each customer’s traffic from other
customers’ traffic. The MX Series router is a PNE.
Sites can access the Internet directly, through the central POP, or both. Data traveling
from one site to another passes through the central POP.
Distributed Deployment
[Figure: Topology of the distributed deployment. CPE devices at enterprise sites connect to PE routers and IPsec concentrators in regional POPs (POP One, POP Two, and POP Three in Region One). Regional servers host Network Service Controller (NSC) and Contrail Analytics, and central servers in the central POP host Network Service Orchestrator (NSO) and Contrail Analytics. The POPs interconnect over the IP/MPLS core.]
Each site in a distributed deployment hosts a CPE device on which the vSRX application
is installed to provide security and routing services. The Cloud CPE solution supports the
following CPE devices:
• NFX250 Network Services Platform
• SRX Series Services Gateway
• vSRX
The vSRX CPE device can reside at a customer site or in the service provider cloud. In
both cases, you configure the site in CSO as an on-premise site. Authentication of the
vSRX as a CPE device takes place through SSH.
An MX Series router in each regional POP acts as an IPsec concentrator and provider
edge (PE) router for the CPE device. An IPsec tunnel, with endpoints on the CPE device
and MX Series router, enables Internet access from the CPE device. Data flows from one
site to another through a GRE tunnel with endpoints on the PE routers for the sites. The
distributed deployment also supports SD-WAN functionality for traffic steering, based
on 5-tuple (source IP address, source TCP/UDP port, destination IP address, destination
TCP/UDP port and IP protocol) criteria.
Network administrators can configure the MX Series router, the GRE tunnel, and the IPsec
tunnel through Administration Portal. Similar to the centralized deployment, the MX
Series router in the distributed deployment is a PNE.
The CPE device provides the NFVI, which supports the VNFs and service chains. Customers
can configure sites, CPE devices, and network services with Customer Portal.
The OpenStack Keystone resides on the central infrastructure server and Contrail Analytics
resides on a dedicated VM or server.
SD-WAN Solution
The SD-WAN solution supports hub-and-spoke and full mesh VPN topologies.
Figure 5 on page 25 shows the topology of the SD-WAN Solution with a hub and spoke
implementation.
[Figure 5: Topology of the SD-WAN solution with a hub-and-spoke implementation. Spoke sites host CPE devices that connect over multiple WAN links with GRE or IPsec tunnels to hub devices and Internet gateways in the POPs. Regional servers host Network Service Controller (NSC), Contrail Analytics, and the VRR; central servers host Network Service Orchestrator (NSO) and Contrail Analytics. Spoke-to-spoke and spoke-to-Internet traffic flows through the hub devices.]
A virtual route reflector (VRR) provides route reflection for the network at low cost and removes the need for hardware-based route reflectors that require space in a data center and ongoing maintenance.
For VRR redundancy, you need to create at least two VRRs for a region. We recommend that you create VRRs in even numbers and assign these VRRs equally to different redundancy groups. Each hub or spoke device establishes a BGP peering session with two VRRs that are in different redundancy groups. If the primary VRR fails or connectivity is lost, the BGP peering session remains active because the secondary VRR continues to receive and advertise LAN routes to the device, thereby providing redundancy.
• Physical server affinity: VRRs that reside on the same physical server should not belong to different redundancy groups.
• Network affinity: VRRs that reside on the same network should not belong to different redundancy groups.
There can be only two redundancy groups—group 0 and group 1. If you do not specify the
redundancy group for VRRs, all VRRs are placed in the default redundancy group—group
0—and hub or spoke devices establish a BGP session with only one VRR.
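In practice, each hub or spoke device therefore ends up with one BGP peering session to a VRR in each redundancy group. The following sketch is for illustration only; CSO provisions this peering automatically, and the group names and VRR addresses shown here are assumptions:
protocols {
    bgp {
        group cso-vrr-rg0 {      # peering to a VRR in redundancy group 0
            type internal;
            neighbor 192.0.2.11; # assumed address of the group 0 VRR
        }
        group cso-vrr-rg1 {      # peering to a VRR in redundancy group 1
            type internal;
            neighbor 192.0.2.21; # assumed address of the group 1 VRR
        }
    }
}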
The Cloud CPE and SD-WAN solutions offer robust implementations with resiliency for
the following features:
The Contrail OpenStack instance includes three Contrail controller nodes in the Contrail
Cloud Platform, and provides resiliency for virtualized infrastructure managers (VIMs),
virtualized network functions (VNFs), and network services.
• CSO provides additional resiliency for virtualized network functions (VNFs) and network
services in the Cloud CPE solution. You can enable or disable automatic recovery of a
network service in a centralized deployment. If a network service becomes unavailable
due to a connectivity issue with a VNF, Network Service Orchestrator maintains existing
instances of the network service in end users’ networks and initiates recreation of the
VNFs. During this recovery process, the end user cannot activate the network service
on additional network links. When the problem is resolved, normal operation resumes
and end users can activate the network service on additional network links.
The Cloud CPE and SD-WAN solutions use OpenStack Keystone to authenticate and
authorize Contrail Service Orchestration (CSO) operations. You can implement the
Keystone in several different ways, and you specify which method you use when you
install CSO:
• A CSO Keystone, which is integrated with CSO and resides on the central CSO server.
This option offers enhanced security because the Keystone is dedicated to CSO and
is not shared with any other applications. Consequently, this option is generally
recommended.
In this case, customers and Cloud CPE infrastructure components use the same
Keystone token.
• You can also use an external Keystone that is specific to your network.
See Table 3 on page 28 for guidelines about using the Keystone options with different
types of deployments.
The CSO Keystone (recommended):
• Centralized deployment: Installation of the Keystone occurs with the CSO installation. After installation, you must use Administration Portal or the API to configure a service profile for each virtualized infrastructure monitor (VIM).
• Distributed deployment and SD-WAN implementation: Installation occurs with the CSO installation. You do not need to perform any configuration after installation.
• Combined deployment: Installation occurs with the CSO installation. You do not need to perform any configuration after installation for the distributed portion of the deployment. After installation, you must configure service profiles for VIMs in the centralized portion of the deployment.
The Contrail OpenStack Keystone on the Contrail Cloud Platform (external Keystone):
• Centralized deployment: Installation occurs with Contrail OpenStack. You specify the IP address and access details for the Contrail OpenStack Keystone when you install CSO.
• Distributed deployment and SD-WAN implementation: Not available.
• Combined deployment: Available for the centralized portion of the deployment. Installation occurs with Contrail OpenStack. You specify the IP address and access details for the Contrail OpenStack Keystone when you install CSO.
An external Keystone that is specific to your network: For all deployment types, you specify the IP address and access details for your Keystone when you install CSO.
This section describes the architecture of the components in the Contrail Cloud
implementation used in the centralized deployment.
[Figure: Architecture of the Contrail Cloud implementation in the centralized deployment. An MX Series router connects the implementation to the service provider's cloud. Server 1 and Server 2 are required; Servers 3 and 4 (and additional servers) are optional. The servers connect to a 1-Gbps management network and a 10-Gbps IP fabric.]
• The MX Series router provides the gateway to the service provider’s cloud.
• The number of servers depends on the scale of the deployment and the high availability
configuration. You must use at least two servers and you can use up to five servers.
• Each server supports four nodes. The function of the nodes depends on the high
availability configuration and the type of POP.
• Contrail Service Orchestration node, which hosts the Contrail Service Orchestration
software.
• Contrail controller node, which hosts the Contrail controller and Contrail Analytics.
• Contrail compute node, which hosts the Contrail OpenStack software and the virtualized network functions (VNFs).
The Contrail Cloud implementation in a central POP contains all three types of node.
Figure 7 on page 30 shows the configuration of the nodes in the Contrail Cloud
implementation in the central POP for a deployment that offers neither Contrail nor
Contrail Service Orchestration high availability:
• Server 1 supports one Contrail controller node, two Contrail compute nodes, and one
Contrail Service Orchestration node.
• Server 2 and optional servers 3 through 5 each support four Contrail compute nodes.
Figure 8 on page 30 shows the configuration of the nodes in the Contrail Cloud
implementation in the central POP for a deployment that offers both Contrail and Contrail
Service Orchestration high availability:
• Servers 1, 2, and 3 each support one Contrail controller node for Contrail redundancy.
• Servers 1 and 2 each support one Contrail Service Orchestration node for Contrail
Service Orchestration redundancy.
• Other nodes on servers 1, 2, and 3 are Contrail compute nodes. Optional servers 4
through 7 also support Contrail compute nodes.
The Contrail Cloud implementation in a regional POP contains only Contrail nodes and
not Contrail Service Orchestration nodes. In a deployment that does not offer Contrail
high availability, the regional Contrail Cloud implementations support:
• One Contrail controller node and three Contrail compute nodes on server 1.
In a deployment that offers Contrail high availability, the regional Contrail Cloud
implementations support:
A Contrail compute node hosts Contrail OpenStack, and the VNFs. Contrail OpenStack
resides on the physical server and cannot be deployed in a VM. Each VNF resides in its
own VM. Figure 10 on page 31 shows the logical representation of the Contrail compute
nodes.
Traditional branch networks use many dedicated network devices with proprietary
software and require extensive equipment refreshes every 3-5 years to accommodate
advances in technology. Both configuration of standard services for multiple sites and
customization of services for specific sites are labor-intensive activities. As branch offices
rarely employ experienced IT staff on site, companies must carefully plan network
modifications and analyze the return on investment of changes to network services.
In contrast, the Cloud CPE solution enables a branch site to access network services
based on Juniper Networks and third-party virtualized network functions (VNFs) that run
on commercial off-the-shelf (COTS) servers located in a central office or on a CPE device
located at the site. This approach maximizes the flexibility of the network, enabling use
of standard services and policies across sites and enabling dynamic updates to existing
services. Customization of network services is fast and easy, offering opportunities for
new revenue and quick time to market.
Use of generic servers and CPE devices with VNFs leads to capital expenditure (CAPEX)
savings compared to purchasing dedicated network devices. Setup and ongoing support of the equipment require minimal work at the branch site: for the centralized deployment,
the equipment resides in a central office, and for the distributed deployment, the CPE
device uses remote activation to initialize, become operational, and obtain configuration
updates. The reduced setup and maintenance requirements, in addition to automated
configuration, orchestration, monitoring, and recovery of network services, result in lower
operating expenses (OPEX).
Specifications
The Cloud CPE solution supports two environment types: a trial environment and a
production environment. You can deploy the environments with or without high availability
(HA). Table 4 on page 35 shows the number of sites and VNFs supported for each
environment.
• Trial environment without HA: centralized deployment, 10 VNFs; distributed deployment, 25 sites with 2 VNFs per site; SD-WAN, 25 sites with up to 5 full mesh sites.
• Trial environment with HA: centralized deployment, 100 VNFs (20 VNFs per Contrail compute node); distributed deployment, 200 sites with 2 VNFs per site; SD-WAN, 200 sites with up to 50 full mesh sites.
• Production environment without HA: centralized deployment, 500 VNFs (20 VNFs per Contrail compute node); distributed deployment, 200 sites with 2 VNFs per site; SD-WAN, 200 sites with up to 50 full mesh sites.
• Production environment with HA: centralized deployment, 500 VNFs (20 VNFs per Contrail compute node); distributed deployment, 3000 sites with 2 VNFs per site; SD-WAN, up to 500 full mesh sites or up to 3000 hub and spoke sites.
• The number and specification of node servers and servers. See "Minimum Requirements for Servers and VMs" on page 40.
• The number and specification of virtual machines (VMs). See "Provisioning VMs on Contrail Service Orchestration Nodes or Servers" on page 74.
Table 5 on page 36 lists the node servers and servers that have been tested for these
functions.
Table 5: COTS Node Servers and Servers Tested in the Cloud CPE and SD-WAN Solutions
Table 6 on page 37 shows the software that has been tested for COTS servers used in
the Cloud CPE solution. You must use these specific versions of the software when you
implement the Cloud CPE and SD-WAN solutions.
• Operating system for all COTS nodes and servers: Ubuntu 14.04.5 LTS. NOTE: Ensure that you perform a fresh install of Ubuntu 14.04.5 LTS on the CSO servers in your deployment, because upgrading from a previous version to Ubuntu 14.04.5 LTS might cause issues with the installation.
• Operating system for VMs on CSO servers: Ubuntu 14.04.5 LTS for VMs that you configure manually and not with the provisioning tool. (The provisioning tool installs Ubuntu 14.04.5 LTS in all VMs.)
• Hypervisor on CSO servers: KVM provided by the Ubuntu operating system on the server, or VMware ESXi Version 5.5.0
• Additional software for CSO servers: Secure File Transfer Protocol (SFTP)
• Software-defined networking (SDN) for a centralized deployment: Contrail Cloud Platform Release 3.2.5 with Heat v2 APIs
• Data switch: one Juniper Networks QFX Series QFX5100-48S-AFI switch with 48 SFP+ transceiver interfaces and 6 QSFP+ transceiver interfaces
Table 8 on page 38 shows the software tested for the centralized deployment. You must
use these specific versions of the software when you implement a centralized deployment.
• Hypervisor on CSO servers: KVM provided by the Ubuntu operating system on the server, or VMware ESXi Version 5.5.0
• Software-defined networking (SDN), including Contrail Analytics, for a centralized deployment: Contrail Release 3.2.5 with OpenStack Mitaka
Network Devices and Software Tested in the Hybrid WAN Distributed Deployment and the
SD-WAN Implementation
Table 9 on page 38 shows the network devices that have been tested for the distributed
deployment and the SD-WAN implementation.
Table 9: Network Devices Tested for the Distributed Deployment and SD-WAN Implementation
• PE router and IPsec concentrator (Hybrid WAN distributed deployment only): Juniper Networks MX Series 3D Universal Edge Router
  • MX960, MX480, or MX240 router with a Multiservices MPC line card
  • MX80 or MX104 router with a Multiservices MIC line card
  • Other MX Series routers with a Multiservices MPC or Multiservices MIC line card
  See MPCs Supported by MX Series Routers and MICs Supported by MX Series Routers for information about MX Series routers that support Multiservices MPC and MIC line cards.
• Cloud hub device (SD-WAN implementation only): Juniper Networks MX Series 3D Universal Edge Router or Juniper Networks SRX Series Services Gateway
  • MX104, MX240, MX480, or MX960 router with a Multiservices MIC line card (see MPCs Supported by MX Series Routers and MICs Supported by MX Series Routers for information about MX Series routers that support Multiservices MPC and MIC line cards)
  • SRX1500 Services Gateway
  • SRX4100 Services Gateway
  • SRX4200 Services Gateway
• On-premise hub device (SD-WAN implementation only): Juniper Networks SRX Series Services Gateway
  • SRX1500 Services Gateway
  • SRX4100 Services Gateway
  • SRX4200 Services Gateway
Table 10 on page 39 shows the software tested for the distributed deployment. You must
use these specific versions of the software when you implement a distributed deployment.
Table 10: Software Tested in the Distributed Deployment and SD-WAN Solution
• Hypervisor on CSO servers: KVM provided by the Ubuntu operating system on the server, or VMware ESXi Version 5.5.0
• Routing and security for the NFX250 device: vSRX KVM Appliance 15.1X49-D133
• Operating system for the vSRX used as a CPE device on an x86 server: vSRX KVM Appliance 15.1X49-D133
• Operating system for an SRX Series Services Gateway used as a CPE device or spoke device: Junos OS Release 15.1X49-D133
• Operating system for the MX Series router used as a PE router: Junos OS Release 16.1R3.00
• Operating system for the MX Series router used as a hub device for an SD-WAN implementation: Junos OS Release 16.1R5.7
• Operating system for an SRX Series Services Gateway used as a hub device for an SD-WAN implementation: Junos OS Release 15.1X49-D133
• Ensure that you have active support contracts for servers so that you can upgrade to
the latest firmware and BIOS versions.
Table 11 on page 40 shows the specification for the nodes and servers for the Cloud CPE
or SD-WAN solution.
CPU: One 64-bit dual processor, type Intel Sandy Bridge, such as Intel Xeon E5-2670 v3 @ 2.4 GHz or higher specification
The number of node servers and servers that you require depends on whether you are
installing a trial or a production environment, and whether you require high availability
(HA).
Table 12 on page 41 shows the required hardware specifications for node servers and
servers in the supported environments. The server specifications are slightly higher than
the sum of the virtual machine (VM) specifications listed in “Minimum Requirements for
VMs on CSO Node Servers or Servers” on page 42, because some additional resources
are required for the system software.
NOTE: If you use a trial environment without HA and with virtualized network functions (VNFs) that require Junos Space as
the Element Management System (EMS), you must install Junos Space on a VM on another server. This server specification
for a trial environment without HA does not accommodate Junos Space. For information on Junos Space VM requirements, see
Table 13 on page 42.
For information about the ports that must be open on all VMs for all deployment
environments, see Table 17 on page 50.
Table 13 on page 42 shows complete details about the VMs for a trial environment without
HA.
csp-installer-vm: 4 vCPUs, 32 GB RAM, 300 GB hard disk storage
Table 14 on page 43 shows complete details about the VMs for a trial environment with
HA.
csp-installer-vm: 4 vCPUs, 48 GB RAM, 300 GB hard disk storage
Table 15 on page 46 shows complete details about the VMs required for a production
environment without HA.
csp-installer-vm: 4 vCPUs, 64 GB RAM, 300 GB hard disk storage
Table 16 on page 47 shows complete details about the VMs for a production environment
with HA.
csp-installer-vm: 4 vCPUs, 32 GB RAM, 300 GB hard disk storage
Table 17 on page 50 shows the ports that must be open on all CSO VMs to enable the
following types of CSO communications:
The provisioning tool opens these ports on each VM; however, if you provision the VMs
manually, you must manually open the ports on each VM.
• Port 80: internal; HAProxy
• Port 443: external and internal; HTTPS, including Administration Portal and Customer Portal
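If you provision the VMs manually, one way to open these ports is with iptables on each VM. The commands below are a sketch that covers only the two ports listed above; apply the same pattern to the remaining ports in Table 17 and persist the rules with your preferred mechanism:
root@host:~/# iptables -A INPUT -p tcp --dport 80 -j ACCEPT
root@host:~/# iptables -A INPUT -p tcp --dport 443 -j ACCEPT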
Related Documentation
• Hardware and Software Required for Contrail Service Orchestration on page 36
• Provisioning VMs on Contrail Service Orchestration Nodes or Servers on page 74
The Cloud CPE solution supports the Juniper Networks and third-party VNFs listed in
Table 18 on page 52.
• Juniper Networks vSRX (vSRX KVM Appliance 15.1X49-D133)
  • Network functions: Network Address Translation (NAT), demonstration version of Deep Packet Inspection (DPI), firewall, and unified threat management (UTM)
  • Deployments: centralized deployment and distributed deployment (the distributed deployment supports NAT, firewall, and UTM)
  • Element management: Element Management System (EMS) microservice, which is included with CSO
• Cisco Cloud Services Router 1000V Series (CSR-1000V), Release 3.15.0
  • Network functions: firewall
  • Deployments: centralized deployment
  • Element management: Junos Space Network Management Platform
You must upload VNFs to the Contrail Cloud Platform for the centralized deployment
after you install the Cloud CPE solution. You upload the VNF images for the distributed
deployment through Administration Portal or API calls.
You can use these VNFs in service chains and configure some settings for VNFs for a
service chain in Network Service Designer. You can then view those configuration settings
for a network service in Administration Portal. Customers can also configure some settings
for the VNFs in their network services through Customer Portal. VNF configurations that
customers specify in Customer Portal override VNF configurations that the person who
designs network services specifies in Network Service Designer.
Related Documentation
• Uploading the vSRX VNF Image for a Centralized Deployment on page 130
• Uploading the LxCIPtable VNF Image for a Centralized Deployment on page 131
• Uploading the Cisco CSR-1000V VNF Image for a Centralized Deployment on page 133
This section describes how to connect cables among the network devices and servers
in the Contrail Cloud implementation. See Architecture of the Contrail Cloud Implementation
in the Centralized Deployment for more information.
1. Connect cables from the EX Series switch to the other devices in the network.
See Table 19 on page 56 for information about the connections for the EX Series
switch.
2. Connect cables from the QFX Series switch to the other devices in the network.
See Table 20 on page 56 for information about the connections for the QFX Series
switch.
3. Connect cables from the MX Series router to the other devices in the network.
See Table 21 on page 57 for information about the connections for the MX Series
router.
Interface on MX Series Router: ge-1/0/0 and ge-1/0/1, or xe-0/0/2 and xe-0/0/3, depending on the network
Destination Device: service provider's device at the cloud
Interface on Destination Device: –
• Configuring the QFX Series Switch for the Contrail Cloud Implementation in a
Centralized Deployment on page 59
• Configuring the MX Series Router in the Contrail Cloud Implementation for a Centralized
Deployment on page 61
• Configuring the Physical Servers and Nodes for the Contrail Cloud Implementation in
a Centralized Deployment on page 63
Configuring the EX Series Ethernet Switch for the Contrail Cloud Implementation in a
Centralized Deployment
Before you configure the EX Series switch, complete any basic setup procedures and
install the correct Junos OS software release on the switch.
Related Documentation
• Hardware and Software Required for Contrail Service Orchestration on page 36
• Configuring the QFX Series Switch for the Contrail Cloud Implementation in a
Centralized Deployment on page 59
• Configuring the MX Series Router in the Contrail Cloud Implementation for a Centralized
Deployment on page 61
Configuring the QFX Series Switch for the Contrail Cloud Implementation in a
Centralized Deployment
Before you configure the QFX Series switch, complete any basic setup procedures and
install the correct Junos OS software release on the switch.
3. Configure a link aggregation group (LAG) for each pair of server ports. For example:
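A representative LAG configuration for one pair of server ports resembles the following; the member interfaces, the ae0 bundle number, and the device count are assumptions that you should match to your cabling, and the family configuration on the bundle depends on your fabric design:
user@switch# set chassis aggregated-devices ethernet device-count 4
user@switch# set interfaces xe-0/0/0 ether-options 802.3ad ae0
user@switch# set interfaces xe-0/0/1 ether-options 802.3ad ae0
user@switch# set interfaces ae0 aggregated-ether-options lacp active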
6. Configure the interface that connects to the MX Series router. For example:
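A representative configuration for the uplink to the MX Series router, where the interface name and addressing are assumptions:
user@switch# set interfaces xe-0/0/47 description "Uplink to MX Series router"
user@switch# set interfaces xe-0/0/47 unit 0 family inet address 10.10.10.2/30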
Related Documentation
• Hardware and Software Required for Contrail Service Orchestration on page 36
• Configuring the EX Series Ethernet Switch for the Contrail Cloud Implementation in a
Centralized Deployment on page 58
• Configuring the MX Series Router in the Contrail Cloud Implementation for a Centralized
Deployment on page 61
Configuring the MX Series Router in the Contrail Cloud Implementation for a Centralized
Deployment
Before you configure the MX Series router, complete any basic setup procedures and install the correct Junos OS software release on the router.
user@router# set interfaces ge-1/0/0 unit 0 family inet service input service-set s1
service-filter ingress-1
user@router# set interfaces ge-1/0/0 unit 0 family inet service output service-set s1
service-filter ingress-1
2. Configure the interfaces that connect to the QFX Series switch. For example:
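A representative configuration for the interfaces that face the QFX Series switch; the interface name and addressing are assumptions:
user@router# set interfaces xe-0/0/0 description "Link to QFX Series switch"
user@router# set interfaces xe-0/0/0 unit 0 family inet address 10.10.10.1/30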
3. Configure BGP and tunneling for the service provider’s cloud. For example:
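A representative sketch of BGP peering and dynamic GRE tunneling toward the Contrail controller; the autonomous system number and addresses are assumptions (the 172.16.80.0/24 network is taken from the filter examples that follow):
user@router# set routing-options autonomous-system 64512
user@router# set routing-options dynamic-tunnels contrail source-address 172.16.80.1
user@router# set routing-options dynamic-tunnels contrail gre
user@router# set routing-options dynamic-tunnels contrail destination-networks 172.16.80.0/24
user@router# set protocols bgp group contrail type internal
user@router# set protocols bgp group contrail local-address 172.16.80.1
user@router# set protocols bgp group contrail family inet-vpn unicast
user@router# set protocols bgp group contrail neighbor 172.16.80.2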
user@router# set services nat rule rule-napt-zone term t2 then translated source-pool
contrailui
user@router# set services nat rule rule-napt-zone term t2 then translated
translation-type basic-nat44
user@router# set services nat rule rule-napt-zone term t3 from source-address
172.16.70.1/32
user@router# set services nat rule rule-napt-zone term t3 then translated source-pool
jumphost
user@router# set services nat rule rule-napt-zone term t3 then translated
translation-type basic-nat44
user@router# set firewall family inet service-filter ingress-1 term t1 from source-address
172.16.80.2/32
user@router# set firewall family inet service-filter ingress-1 term t1 from protocol tcp
user@router# set firewall family inet service-filter ingress-1 term t1 from
destination-port-except 179
user@router# set firewall family inet service-filter ingress-1 term t1 then service
user@router# set firewall family inet service-filter ingress-1 term t2 from source-address
172.16.80.4/32
user@router# set firewall family inet service-filter ingress-1 term t2 then service
user@router# set firewall family inet service-filter ingress-1 term t3 from source-address
172.16.70.1/32
user@router# set firewall family inet service-filter ingress-1 term t3 then service
user@router# set firewall family inet service-filter ingress-1 term end then skip
Related Documentation
• Hardware and Software Required for Contrail Service Orchestration on page 36
• Configuring the EX Series Ethernet Switch for the Contrail Cloud Implementation in a
Centralized Deployment on page 58
• Configuring the QFX Series Switch for the Contrail Cloud Implementation in a
Centralized Deployment on page 59
Configuring the Physical Servers and Nodes for the Contrail Cloud Implementation in
a Centralized Deployment
For a centralized deployment, you must configure the physical servers and nodes in the
Contrail Cloud implementation and install Contrail OpenStack on the server cluster before
you run the installer.
2. Configure IP addresses for the Ethernet management ports of the physical servers
and nodes.
3. Configure DNS on the physical servers and nodes, and ensure that DNS is working
correctly.
5. From each server and node, verify that you can ping the IP addresses and hostnames
of all the other servers and nodes in the Contrail Cloud implementation.
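For example, assuming a peer server with address 192.168.1.3 and hostname server2.example.net (substitute your own addresses and hostnames):
root@host:~/# ping -c 3 192.168.1.3
root@host:~/# ping -c 3 server2.example.net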
6. Using Contrail Server Manager, install Contrail OpenStack on the server cluster and
set up the roles of the Contrail nodes in the cluster.
You configure an OpenStack Keystone on the primary Contrail controller node in the
central Contrail Cloud implementation, and also use this Keystone for:
• Redundant configure and control nodes in the central Contrail Cloud implementation
7. For each node, use the ETCD keys to specify the same username and password for
Contrail.
For a distributed deployment, you must configure the Contrail Service Orchestration
(CSO) and Contrail Analytics servers (or nodes, if you are using a node server) before
you run the installer.
2. Configure IP addresses for the Ethernet management ports of the physical servers.
3. Configure DNS on the physical servers, and ensure that DNS is working correctly.
5. From each server and node, verify that you can ping the IP addresses and hostnames
of all the other servers and nodes in the distributed deployment.
Related Documentation
• Hardware and Software Required for Contrail Service Orchestration on page 36
You need to configure interfaces, virtual routing and forwarding instances (VRFs), and DHCP on the MX Series router with Junos OS. You can, however, use Administration Portal to specify configuration settings for both endpoints of the required IPsec tunnel between the MX Series router and the NFX250. When the NFX250 becomes operational, Contrail Service Orchestration (CSO) components set up the tunnel.
1. Configure the interfaces. For example:
ge-0/3/7 {
    vlan-tagging;
    unit 10 {
        vlan-id 10;
        family inet {
            address 195.195.195.1/24;
        }
    }
    unit 20 {
        vlan-id 20;
        family inet {
            address 196.196.196.254/24;
        }
    }
}
ge-0/3/8 {
    unit 0 {
        family inet {
            address 198.198.198.1/24;
        }
    }
}
2. Configure a VRF for Operation, Administration, and Maintenance (OAM) traffic between
Contrail Service Orchestration and the NFX250.
For example:
nfx-oam {
    instance-type vrf;
    interface ge-0/0/0.220;
    vrf-target target:64512:10000;
    vrf-table-label;
    routing-options {
        static {
            # static routes omitted from this example
        }
    }
}
3. Configure a VRF for data traffic that travels over the wide area network (WAN).
Data that travels through the IPSec tunnel also uses this VRF. When you configure
the MX endpoint of the IPSec tunnel in Administration Portal, you specify these VRF
settings.
For example:
nfx-data {
    instance-type vrf;
    interface ge-0/3/7.10;
    vrf-target target:64512:10001;
    vrf-table-label;
    protocols {
        bgp {
            group nfx-gwr-bgp-grp {
                type external;
                family inet {
                    unicast;
                }
                export send-direct;
                peer-as 65000;
                neighbor 195.195.195.2;
            }
        }
    }
}
4. Configure DHCP for the subnet on interface ge-0/3/8.0. For example:
system {
    services {
        dhcp-local-server {
            group 8-csp-gpr {
                interface ge-0/3/8.0;
            }
        }
    }
}
access {
    address-assignment {
        pool 8-csp-gpr-pool {
            family inet {
                network 198.198.198.0/24;
                range valid {
                    low 198.198.198.5;
                    high 198.198.198.250;
                }
                dhcp-attributes {
                    domain-name juniper.net;
                    name-server {
                        8.8.8.8;
                    }
                }
            }
        }
    }
}
Related Documentation
• Hardware and Software Required for Contrail Service Orchestration on page 36
• Topology of the Cloud CPE and SD-WAN Solutions on page 22
• Specify activation data with Administration Portal or the API for each CPE device,
such as:
When the administrator completes the initial configuration process, the NFX250 device
obtains a boot image and configuration image from its regional server and becomes
operational.
Installing and Configuring an SRX Series Services Gateway or vSRX Instance as a CPE Device
An administrator at the customer's site installs and configures an SRX Series Services Gateway or a vSRX instance as a CPE device using the following workflow:
• vSRX documentation
You can remove the existing virtual machines (VMs) and perform a completely new
installation. This approach makes sense if the architecture of the VMs on the Contrail
Service Orchestration node or server has changed significantly between releases.
For example:
Id Name State
2 csp-ui-vm running
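To remove a VM that appears in this list, you can use standard virsh commands; the VM name below comes from the sample output above:
root@host:~/# virsh destroy csp-ui-vm
root@host:~/# virsh undefine csp-ui-vm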
For example, to delete all the keys registered with the Salt master:
root@host:~/# salt-key -D
Virtual Machines (VMs) on the central and regional Contrail Service Orchestration (CSO)
nodes or servers host the infrastructure services and some other components. All servers
and VMs for the solution should be in the same subnet. To set up the VMs, you can:
• Use the provisioning tool to create and configure the VMs if you use the KVM hypervisor or VMware ESXi on a CSO node or server.
• Manually configure Virtual Route Reflector (VRR) VMs on a CSO node or server if you use VMware ESXi.
The VMs required on a CSO node or server depend on the environment that you configure.
See “Minimum Requirements for Servers and VMs” on page 40 for details of the VMs and
associated resources required for each environment.
The following sections describe the procedures for provisioning the VMs:
• The operating system for physical servers must be Ubuntu 14.04.5 LTS.
• For a centralized deployment, configure the Contrail Cloud Platform and install Contrail
OpenStack.
• Use the Contrail Service Orchestration installer if you purchased licenses for a
centralized deployment or both Network Service Orchestrator and Network Service
Controller licenses for a distributed deployment.
This option includes all the Contrail Service Orchestration graphical user interfaces
(GUIs).
• Use the Network Service Controller installer if you purchased only Network Service
Controller licenses for a distributed deployment or SD-WAN implementation.
This option includes Administration Portal and Service and Infrastructure Monitor,
but not the Designer Tools.
3. Expand the installer package, which has a name specific to its contents and the release.
For example, if the name of the installer package is csoVersion.tar.gz:
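You can expand it with a standard tar command:
root@host:~/# tar -xvzf csoVersion.tar.gz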
The expanded package is a directory that has the same name as the installer package
and contains the installation files.
2. Update the index files of the software packages installed on the server to reference
the latest versions.
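On Ubuntu, this is the standard package index update command:
root@host:~/# apt-get update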
3. View the network interfaces configured on the server to obtain the name of the primary
interface on the server.
root@host:~/# ifconfig
5. View the list of network interfaces, which now includes the virtual interface virbr0.
root@host:~/# ifconfig
6. Open the file /etc/network/interfaces and modify it to map the primary network
interface to the virtual interface virbr0.
For example, use the following configuration to map the primary interface eth0 to the
virtual interface virbr0:
auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search example.net
a. Customize the IP address and subnet mask to match the values for the virbr0
interface in the file /etc/network/interfaces
For example:
<network>
<name>default</name>
<uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0'/>
<ip address='192.168.1.2' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.1.1' end='192.168.1.254'/>
</dhcp>
</ip>
</network>
After modification:
<network>
<name>default</name>
<uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
<bridge name='virbr0' stp='off' delay='0'/>
<ip address='192.168.1.2' netmask='255.255.255.0'>
</ip>
</network>
9. Verify that the primary network interface is mapped to the virbr0 interface.
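One way to check the mapping is with the bridge utilities, which list the interfaces attached to each bridge:
root@host:~/# brctl show virbr0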
3. Create an xml file with the name virbr1.xml in the directory /var/lib/libvirt/network.
4. Paste the following content into the virbr1.xml file, and edit the file to match the actual
settings for your interface.
For example:
<network>
<name>default</name>
<uuid>0f04ffd0-a27c-4120-8873-854bbfb02074</uuid>
<bridge name='virbr1' stp='off' delay='0'/>
<ip address='192.0.2.1' netmask='255.255.255.0'>
</ip>
</network>
5. Open the /etc/network/interfaces file and add the details for the second interface.
For example:
auto eth1
iface eth1 inet manual
up ifconfig eth1 0.0.0.0 up
auto virbr0
iface virbr0 inet static
bridge_ports eth0
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search example.net
auto virbr1
iface virbr1 inet static
bridge_ports eth1
address 192.0.2.1
netmask 255.255.255.0
7. Verify that the secondary network interface, eth1, is mapped to the second interface.
You do not specify an IP address for the data interface when you create it.
2. Access the confs directory that contains the example configuration files. For example,
if the name of the installer directory is csoVersion
root@host:~/# cd csoVersion/confs
3. Access the directory for the environment that you want to configure.
Table 22 on page 79 shows the directories that contain the example configuration
file.
4. Make a copy of the example configuration file in the /confs directory and name it
provision_vm.conf.
For example:
root@host:~/cspVersion/confs# cp
/cso3.3/trial/nonha/provisionvm/provision_vm_example.conf provision_vm.conf
6. In the [TARGETS] section, specify the following values for the network on which CSO
resides.
7. Specify the following configuration values for each CSO node or server that you
specified in Step 6.
8. Except for the Junos Space Virtual Appliance and VRR VMs, specify configuration
values for each VM that you specified in Step 6.
9. For the Junos Space VM, specify configuration values for each VM that you specified
in Step 6.
• gateway—IP address of the gateway for the host. If you do not specify a value, the
value defaults to the gateway defined for the CSO node or server that hosts the
VM.
• newpassword—Password that you provide when you configure the Junos Space
appliance.
root@host:~/# ./provision_vm.sh
The following examples show customized configuration files for the different
deployments:
• Trial environment without HA (see Sample Configuration File for Provisioning VMs
in a Trial Environment without HA on page 82).
• Trial environment with HA (see Sample Configuration File for Provisioning VMs in a
Trial Environment with HA on page 88).
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net
username = root
password = passw0rd
data_interface =
# VM Details
[csp-central-infravm]
management_address = 192.168.1.4/24
hostname = centralinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 49152
vcpu = 8
enable_data_interface = false
[csp-central-msvm]
management_address = 192.168.1.5/24
hostname = centralmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 49152
vcpu = 8
enable_data_interface = false
[csp-central-k8mastervm]
management_address = 192.168.1.14/24
hostname = centralk8mastervm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 8192
vcpu = 4
enable_data_interface = false
[csp-regional-infravm]
management_address = 192.168.1.6/24
hostname = regionalinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 24576
vcpu = 4
enable_data_interface = false
[csp-regional-msvm]
management_address = 192.168.1.7/24
hostname = regionalmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 24576
vcpu = 4
enable_data_interface = false
[csp-regional-k8mastervm]
management_address = 192.168.1.15/24
hostname = regionalk8mastervm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 8192
vcpu = 4
enable_data_interface = false
[csp-installer-vm]
management_address = 192.168.1.10/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 24576
vcpu = 4
enable_data_interface = false
[csp-contrailanalytics-1]
management_address = 192.168.1.11/24
hostname = canvm.example.net
username = root
password = passw0rd
local_user = canvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 49152
vcpu = 8
enable_data_interface = false
[csp-regional-sblb]
management_address = 192.168.1.12/24
hostname = regional-sblb.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host
memory = 8192
vcpu = 4
enable_data_interface = true
[csp-vrr-vm]
management_address = 192.168.1.13/24
hostname = vrr.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-host
memory = 8192
vcpu = 4
[csp-space-vm]
management_address = 192.168.1.14/24
web_address = 192.168.1.15/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host
memory = 16384
vcpu = 4
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net
# Note: The central and regional physical servers are used as the "csp-central-ms" and "csp-regional-ms" servers.
# List the servers to be provisioned; include the Contrail Analytics servers in the "server" list.
server = csp-central-infravm, csp-regional-infravm, csp-installer-vm, csp-space-vm,
csp-contrailanalytics-1, csp-central-elkvm, csp-regional-elkvm, csp-central-msvm,
csp-regional-msvm, csp-vrr-vm, csp-regional-sblb
[cso-central-host]
dns_servers = 192.168.10.1
hostname = cso-central-host
username = root
password = passw0rd
data_interface =
[cso-regional-host]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host
username = root
password = passw0rd
data_interface =
[csp-contrailanalytics-1]
management_address = 192.168.1.9/24
management_interface =
hostname = canvm.example.net
username = root
password = passw0rd
vm = false
# VM Details
[csp-central-infravm]
management_address = 192.168.1.4/24
hostname = centralinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-infravm]
management_address = 192.168.1.5/24
hostname = regionalinfravm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-space-vm]
management_address = 192.168.1.6/24
web_address = 192.168.1.7/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-regional-host
memory = 32768
vcpu = 4
[csp-installer-vm]
management_address = 192.168.1.8/24
hostname = installer.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 65536
vcpu = 4
enable_data_interface = false
[csp-central-elkvm]
management_address = 192.168.1.10/24
hostname = centralelkvm.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-elkvm]
management_address = 192.168.1.11/24
hostname = regionalelkvm.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-msvm]
management_address = 192.168.1.12/24
hostname = centralmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-msvm]
management_address = 192.168.1.13/24
hostname = regionalmsvm.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-sblb]
management_address = 192.168.1.14/24
hostname = regional-sblb.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host
memory = 32768
vcpu = 4
enable_data_interface = true
[csp-vrr-vm]
management_address = 192.168.1.15/24
hostname = vrr.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-regional-host
memory = 8192
vcpu = 4
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net
[cso-host2]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host2
username = root
password = passw0rd
data_interface =
[cso-host3]
management_address = 192.168.1.4/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-host3
username = root
password = passw0rd
data_interface =
# VM Details
[csp-central-infravm1]
management_address = 192.168.1.5/24
hostname = centralinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-infravm2]
management_address = 192.168.1.6/24
hostname = centralinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-infravm3]
management_address = 192.168.1.7/24
hostname = centralinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-msvm1]
management_address = 192.168.1.8/24
hostname = centralmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 65536
vcpu = 8
enable_data_interface = false
[csp-central-msvm2]
management_address = 192.168.1.9/24
hostname = centralmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 65536
vcpu = 8
enable_data_interface = false
[csp-central-msvm3]
management_address = 192.168.1.9/24
hostname = centralmsvm3.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 65536
vcpu = 8
enable_data_interface = false
[csp-regional-infravm1]
management_address = 192.168.1.10/24
hostname = regionalinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-infravm2]
management_address = 192.168.1.11/24
hostname = regionalinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-infravm3]
management_address = 192.168.1.12/24
hostname = regionalinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-msvm1]
management_address = 192.168.1.13/24
hostname = regionalmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 32768
vcpu = 8
enable_data_interface = false
[csp-regional-msvm2]
management_address = 192.168.1.14/24
hostname = regionalmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 32768
vcpu = 8
enable_data_interface = false
[csp-regional-msvm3]
management_address = 192.168.1.14/24
hostname = regionalmsvm3.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 32768
vcpu = 8
enable_data_interface = false
[csp-space-vm]
management_address = 192.168.1.15/24
web_address = 192.168.1.16/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-host3
memory = 16384
vcpu = 4
[csp-installer-vm]
management_address = 192.168.1.17/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 49152
vcpu = 4
enable_data_interface = false
[csp-contrailanalytics-1]
management_address = 192.168.1.18/24
hostname = can1.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 49152
vcpu = 16
enable_data_interface = false
[csp-central-lbvm1]
management_address = 192.168.1.19/24
hostname = centrallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 16384
vcpu = 4
enable_data_interface = false
[csp-central-lbvm2]
management_address = 192.168.1.20/24
hostname = centrallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 16384
vcpu = 4
enable_data_interface = false
[csp-central-lbvm3]
management_address = 192.168.1.20/24
hostname = centrallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 16384
vcpu = 4
enable_data_interface = false
[csp-regional-lbvm1]
management_address = 192.168.1.21/24
hostname = regionallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 16384
vcpu = 4
enable_data_interface = false
[csp-regional-lbvm2]
management_address = 192.168.1.22/24
hostname = regionallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 16384
vcpu = 4
enable_data_interface = false
[csp-regional-lbvm3]
management_address = 192.168.1.22/24
hostname = regionallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host3
memory = 16384
vcpu = 4
enable_data_interface = false
[csp-vrr-vm1]
management_address = 192.168.1.23/24
hostname = vrr1.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-host3
memory = 8192
vcpu = 4
[csp-vrr-vm2]
management_address = 192.168.1.24/24
hostname = vrr2.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-host3
memory = 8192
vcpu = 4
[csp-regional-sblb1]
management_address = 192.168.1.25/24
hostname = regional-sblb1.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host1
memory = 24576
vcpu = 4
enable_data_interface = true
[csp-regional-sblb2]
management_address = 192.168.1.26/24
hostname = regional-sblb2.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-host2
memory = 24576
vcpu = 4
enable_data_interface = true
[TARGETS]
# Mention primary host (installer host) management_ip
installer_ip =
ntp_servers = ntp.juniper.net
# List the servers to be provisioned; include the Contrail Analytics servers in the "server" list.
server = csp-central-infravm1, csp-central-infravm2, csp-central-infravm3,
csp-regional-infravm1, csp-regional-infravm2, csp-regional-infravm3,
csp-central-lbvm1, csp-central-lbvm2, csp-central-lbvm3, csp-regional-lbvm1,
csp-regional-lbvm2, csp-regional-lbvm3, csp-space-vm, csp-installer-vm,
csp-contrailanalytics-1, csp-contrailanalytics-2, csp-contrailanalytics-3,
csp-central-elkvm1, csp-central-elkvm2, csp-central-elkvm3, csp-regional-elkvm1,
csp-regional-elkvm2, csp-regional-elkvm3, csp-central-msvm1, csp-central-msvm2,
csp-central-msvm3, csp-regional-msvm1, csp-regional-msvm2, csp-regional-msvm3,
csp-vrr-vm1, csp-vrr-vm2, csp-regional-sblb1, csp-regional-sblb2,
csp-regional-sblb3
[cso-central-host2]
management_address = 192.168.1.3/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host2
username = root
password = passw0rd
data_interface =
[cso-central-host3]
management_address = 192.168.1.4/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-central-host3
username = root
password = passw0rd
data_interface =
[cso-regional-host1]
management_address = 192.168.1.5/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host1
username = root
password = passw0rd
data_interface =
[cso-regional-host2]
management_address = 192.168.1.6/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host2
username = root
password = passw0rd
data_interface =
[cso-regional-host3]
management_address = 192.168.1.7/24
management_interface = virbr0
gateway = 192.168.1.1
dns_search = example.net
dns_servers = 192.168.10.1
hostname = cso-regional-host3
username = root
password = passw0rd
data_interface =
[csp-contrailanalytics-1]
management_address = 192.168.1.17/24
management_interface =
hostname = can1.example.net
username = root
password = passw0rd
vm = false
[csp-contrailanalytics-2]
management_address = 192.168.1.18/24
management_interface =
hostname = can2.example.net
username = root
password = passw0rd
vm = false
[csp-contrailanalytics-3]
management_address = 192.168.1.19/24
management_interface =
hostname = can3.example.net
username = root
password = passw0rd
vm = false
# VM Details
[csp-central-infravm1]
management_address = 192.168.1.8/24
hostname = centralinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-central-infravm2]
management_address = 192.168.1.9/24
hostname = centralinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-central-infravm3]
management_address = 192.168.1.10/24
hostname = centralinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-infravm1]
management_address = 192.168.1.11/24
hostname = regionalinfravm1.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-infravm2]
management_address = 192.168.1.12/24
hostname = regionalinfravm2.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-infravm3]
management_address = 192.168.1.13/24
hostname = regionalinfravm3.example.net
username = root
password = passw0rd
local_user = infravm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-space-vm]
management_address = 192.168.1.14/24
web_address = 192.168.1.15/24
gateway = 192.168.1.1
nameserver_address = 192.168.1.254
hostname = spacevm.example.net
username = admin
password = abc123
newpassword = jnpr123!
guest_os = space
host_server = cso-central-host2
memory = 32768
vcpu = 4
[csp-installer-vm]
management_address = 192.168.1.16/24
hostname = installervm.example.net
username = root
password = passw0rd
local_user = installervm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-lbvm1]
management_address = 192.168.1.20/24
hostname = centrallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-lbvm2]
management_address = 192.168.1.21/24
hostname = centrallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-lbvm3]
management_address = 192.168.1.22/24
hostname = centrallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-lbvm1]
management_address = 192.168.1.23/24
hostname = regionallbvm1.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-lbvm2]
management_address = 192.168.1.24/24
hostname = regionallbvm2.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-lbvm3]
management_address = 192.168.1.25/24
hostname = regionallbvm3.example.net
username = root
password = passw0rd
local_user = lbvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-elkvm1]
management_address = 192.168.1.26/24
hostname = centralelkvm1.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-elkvm2]
management_address = 192.168.1.27/24
hostname = centralelkvm2.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-elkvm3]
management_address = 192.168.1.28/24
hostname = centralelkvm3.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-elkvm1]
management_address = 192.168.1.29/24
hostname = regionalelkvm1.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-elkvm2]
management_address = 192.168.1.30/24
hostname = regionalelkvm2.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-regional-elkvm3]
management_address = 192.168.1.31/24
hostname = regionalelkvm3.example.net
username = root
password = passw0rd
local_user = elkvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 32768
vcpu = 4
enable_data_interface = false
[csp-central-msvm1]
management_address = 192.168.1.32/24
hostname = centralmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host1
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-central-msvm2]
management_address = 192.168.1.33/24
hostname = centralmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host2
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-central-msvm3]
management_address = 192.168.1.34/24
hostname = centralmsvm3.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-central-host3
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-msvm1]
management_address = 192.168.1.35/24
hostname = regionalmsvm1.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-msvm2]
management_address = 192.168.1.36/24
hostname = regionalmsvm2.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-msvm3]
management_address = 192.168.1.37/24
hostname = regionalmsvm3.example.net
username = root
password = passw0rd
local_user = msvm
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 65536
vcpu = 16
enable_data_interface = false
[csp-regional-sblb1]
management_address = 192.168.1.38/24
hostname = regional-sblb1.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host1
memory = 32768
vcpu = 4
enable_data_interface = true
[csp-regional-sblb2]
management_address = 192.168.1.39/24
hostname = regional-sblb2.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host2
memory = 32768
vcpu = 4
enable_data_interface = true
[csp-regional-sblb3]
management_address = 192.168.1.40/24
hostname = regional-sblb3.example.net
username = root
password = passw0rd
local_user = sblb
local_password = passw0rd
guest_os = ubuntu
host_server = cso-regional-host3
memory = 32768
vcpu = 4
enable_data_interface = true
[csp-vrr-vm1]
management_address = 192.168.1.41/24
hostname = vrr1.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-regional-host3
memory = 32768
vcpu = 4
[csp-vrr-vm2]
management_address = 192.168.1.42/24
hostname = vrr2.example.net
gateway = 192.168.1.1
newpassword = passw0rd
guest_os = vrr
host_server = cso-regional-host2
memory = 32768
vcpu = 4
Provisioning VMs with the Provisioning Tool for the KVM Hypervisor
If you use the KVM hypervisor on the CSO node or server, you can use the provisioning
tool to:
• Create and configure the VMs for the CSO and Junos Space components.
2. Access the directory for the installer. For example, if the name of the installer directory is csoVersion:
root@host:~/# cd ~/csoVersion
3. Run the provisioning tool:
root@host:~/csoVersion/# ./provision_vm.sh
4. During installation, observe detailed messages in the log files about the provisioning
of the VMs.
For example:
root@host:~/csoVersion/# cd logs
root@host:~/csoVersion/logs# tail -f LOGNAME
NOTE: You cannot provision a Virtual Route Reflector (VRR) VM using the
provisioning tool. You must provision the VRR VM manually.
Before you begin, ensure that the maximum supported file size for the datastore on the VMware ESXi host is greater than 512 MB. To view the maximum supported file size of the datastore, you can establish an SSH session to the ESXi host and run the vmkfstools -P datastorePath command.
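For example, assuming the datastore is mounted at /vmfs/volumes/datastore1 (a placeholder path):
root@host:~/# vmkfstools -P /vmfs/volumes/datastore1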
1. Download the CSO Release 3.3 installer package from the Software Downloads page
to the local drive.
2. Log in as root to an Ubuntu VM that has kernel version 4.4.0-31-generic and access to the Internet. The VM must have the following specifications:
• 8 GB RAM
• 2 vCPUs
3. Copy the installer package from your local drive to the VM.
The contents of the installer package are extracted in a directory with the same name
as the installer package.
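A hedged sketch of this step, assuming the package file is named Contrail_Service_Orchestration_3.3.tar.gz (the name used in the upgrade section of this guide) and the VM has a placeholder address of 192.0.2.10:
root@host:~/# scp Contrail_Service_Orchestration_3.3.tar.gz root@192.0.2.10:/root/
root@host:~/# tar -xvzf Contrail_Service_Orchestration_3.3.tar.gz
Run the tar command on the VM after the copy completes.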
For example:
root@host:~/# cd Contrail_Service_Orchestration_3.3/confs
root@host:~/Contrail_Service_Orchestration_3.3/confs#
For example:
root@host:~/Contrail_Service_Orchestration_3.3/confs# cp
/cso3.3/trial/nonha/provisionvm/provision_vm_example_ESXI.conf provision_vm.conf
8. In the [TARGETS] section, specify the following values for the network on which CSO
resides.
9. Specify the following configuration values for each ESXI host on the CSO node or
server.
• vmnetwork—Labels for each virtual network adapter. This label identifies the physical network that is associated with a virtual network adapter.
The vmnetwork data for each VM is available in the Summary tab of the VM in the vSphere Client. Do not enclose the vmnetwork value in double quotation marks.
• datastore—Name of the datastore that stores the files for the VM.
The datastore data for each VM is available in the Summary tab of the VM in the vSphere Client. Do not enclose the datastore value in double quotation marks.
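For illustration, these lines might appear in a host section of provision_vm.conf; the port group label VM Network and the datastore name datastore1 are hypothetical values, shown without quotation marks:
vmnetwork = VM Network
datastore = datastore1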
root@host:~/Contrail_Service_Orchestration_3.3/# ./provision_vm_ESXI.sh
For example:
root@host:~/Contrail_Service_Orchestration_3.3/# scp confs/provision_vm.conf
root@installer_VM_IP:/root/Contrail_Service_Orchestration_3.3/confs
This action brings up VMware ESXi VMs with the configuration provided in the files.
Manually Provisioning VRR VMs on the Contrail Service Orchestration Node or Server
You cannot use the provision tool—provision_vm_ESXI.sh—to provision the Virtual Route
Reflector (VRR) VM. You must manually provision the VRR VM.
1. Download the VRR Release 15.1F6-S7 software package (.ova format) for VMware
from the Virtual Route Reflector page, to a location accessible to the server.
2. Launch the VRR using vSphere or vCenter Client for your ESXi server and log in to the
server with your credentials.
3. Configure basic settings on the VRR:
root@vrr> configure
root@vrr# delete groups global system services ssh root-login deny-password
root@vrr# set system root-authentication plain-text-password
root@vrr# set system services ssh
root@vrr# set system services netconf ssh
root@vrr# set routing-options rib inet.3 static route 0.0.0.0/0 discard
root@vrr# commit
root@vrr# exit
CAUTION: If the VMs cannot communicate with all the other hosts in the
deployment, the installation can fail.
1. Copy the installer package file from the central CSO server to the installer VM.
The contents of the installer package are placed in a directory with the same name
as the installer package. In this example, the name of the directory is csoVersion.
4. If you have created an installer VM using the provisioning tool, you must copy the /csoVersion/confs/provision_vm.conf file from the Ubuntu VM to the /csoVersion/confs/ directory on the installer VM.
6. For installer_ip in the [TARGETS] section, specify the IP address of the installer VM.
You can use a private repository either on the installer VM (the default choice) or on an
external server.
• If you use the installer VM for the private repository, it is created when you install the
solution, and you can skip this procedure.
• If you use an external server for the private repository, use the following procedure to
create it.
1. Install the required Ubuntu release on the server that you use for the private repository.
The contents of the installer package are placed in a directory with the same name
as the installer package. In this example, the name of the directory is csoVersion.
For example:
root@host:~/# cd csoVersion
root@host:~/csoVersion# ./create_private_repo.sh
6. When you run the setup_assist script to create configuration files, specify that you use
an external private repository. See “Installing and Configuring Contrail Service
Orchestration” on page 109
You use the same installation process for both Contrail Service Orchestration (CSO) and
Network Service Controller and for both KVM and ESXi environments.
• Provision the virtual machines (VMs) for the CSO node or server. (See “Provisioning
VMs on Contrail Service Orchestration Nodes or Servers” on page 74).
• Copy the installer package to the installer VM and expand it. (See “Setting up the
Installation Package and Library Access” on page 107)
• If you have created an installer VM using the provisioning tool, you must copy the /Contrail_Service_Orchestration_3.3/confs/provision_vm.conf file from the Ubuntu VM to the /csoVersion/confs/ directory on the installer VM.
• If you use an external server rather than the installer VM for the private repository that
contains the libraries for the installation, create the repository on the server. (See
“Setting up the Installation Package and Library Access” on page 107).
The installation process uses a private repository so that you do not need Internet
access during the installation.
• The name of each region if you use more than one region. The default specifies one region, called regional.
• The timezone for the servers in the deployment, based on the Ubuntu timezone
guidelines.
The default value for this setting is the current timezone of the installer host.
• The fully qualified domain name (FQDN) of each Network Time Protocol (NTP)
server that the solution uses. For networks within firewalls, use NTP servers specific
to your network.
• If you want to access Administration Portal with the single sign-on method, the name of the public domain in which the CSO servers reside. Alternatively, if you want to access Administration Portal with local authentication, you need a dummy domain name.
• For a distributed deployment, whether you use transport layer security (TLS) to
encrypt data that passes between the CPE device and CSO.
You should use TLS unless you have an explicit reason for not encrypting data
between the CPE device and CSO.
• Whether you use the CSO Keystone or an external Keystone for authentication of
CSO operations.
• A CSO Keystone is installed with CSO and resides on the central CSO server.
This default option is recommended for all deployments, and is required for a
distributed deployment unless you provide your own external Keystone. Use of a
CSO Keystone offers enhanced security because the Keystone is dedicated to
CSO and is not shared with any other applications.
• An external Keystone resides on a different server from the CSO server and is not installed with CSO.
You specify the IP address and access details for the Keystone during the
installation.
• The Contrail OpenStack Keystone in the Contrail Cloud Platform for a centralized
deployment is an example of an external Keystone.
In this case, customers and Cloud CPE infrastructure components use the same
Keystone token.
• You can also use your own external Keystone that is not part of the CSO or
Contrail OpenStack installation.
• The IP address of the Contrail controller node for a centralized deployment. For a
centralized deployment, you specify this external server for Contrail Analytics.
• Whether you use a common password for all VMs or a different password for each VM, and the value of each password.
If you use the same password for all the VMs, you can enter the password once. Otherwise, you must provide the password for each VM.
• The CIDR address of the subnet on which the CSO VMs reside.
• If you use NAT with your CSO installation, the public IP addresses used for NAT for the central and regional regions.
• The IP address of the Kubernetes overlay network, in CIDR notation, that the microservices use.
The default value is 172.16.0.0/16. If this value is close to your network range, use a similar address with a /16 subnet.
• The range of the Kubernetes service overlay network addresses, in CIDR notation.
• The IP address of the Kubernetes service API server, which is on the service overlay network.
This IP address must be in the range you specify for the Kubernetes service overlay network. The default value is 192.168.3.1.
• The tunnel interface unit range that CSO uses for an SD-WAN implementation
with an MX Series hub device.
You must choose values that are different from those that you configured for the MX Series router. The possible range of values is 0–16385, and the default range is 4000–6000.
• The FQDN that the load balancer uses to access the installation.
• For a non-HA deployment, the IP address and the FQDN of the VM that hosts
the HAproxy.
• For an HA deployment, the virtual IP address and the associated hostname that
you configure for the HAproxy.
2. Access the directory for the installer. For example, if the name of the installer directory is csoVersion:
root@host:~/# cd ~/csoVersion
3. Run the setup tool:
root@host:~/csoVersion/# ./setup_assist.sh
The script starts, sets up the installer, and requests that you enter information about
the installation.
• trial—Trial environment
• production—Production environment
• y—CSO is behind NAT. After you deploy CSO, you must apply NAT rules. For
information about NAT rules, see “Applying NAT Rules if CSO is Deployed Behind
NAT” on page 134.
7. Accept the default timezone or specify the Ubuntu timezone for the servers in the
topology.
• y—Deployment uses HA
10. Press Enter if you use only one region, or specify a comma-separated list of regions if you use multiple regions. You can configure a maximum of three regions. The default region is regional.
12. Specify whether you need a separate regional southbound load balancer.
13. For a distributed deployment, specify whether you use TLS to enable secure
communication between the CPE device and CSO.
Accept the default unless you have an explicit reason for not using encryption for
communications between the CPE device and CSO.
14. Specify whether you want separate VMs for the Kubernetes master.
16. Specify a domain name to determine how you access Administration Portal, the main
CSO GUI:
• If you want to access Administration Portal with the single sign-on method, specify
the name of the public domain in which the CSO servers reside.
• If you want to use local authentication for Administration Portal, specify a dummy name.
17. Specify whether you use an external Keystone to authenticate CSO operations, and
if so, specify the OpenStack Keystone service token.
• n—Specifies use of the CSO Keystone which is installed with and dedicated to CSO.
This default option is recommended unless you have a specific requirement for an
external Keystone.
19. Specify whether you use a common password for all CSO VMs, and if so, specify the
password.
20. Specify the following information for the virtual route reflector (VRR) that you create:
• y—VRR is behind NAT. If you are deploying a VRR in a private network, the NAT
instance translates all requests (BGP traffic) to a VRR from a public IP address
to a private IP address.
c. Specify the public IP address for each VRR that you create. For example,
192.0.20.118/24.
d. Specify the redundancy group for each VRR that you have created.
• For non-HA deployments, specify the redundancy group of the VRR as zero.
21. Starting with the central region, specify the following information for each server in
the deployment of each region.
The script prompts you for each set of information that you must enter.
• Password for the root user (only required if you use different passwords for each
VM)
• The IP address of the Kubernetes overlay network address, in CIDR notation, that
the microservices use.
The default value is 172.16.0.0/16. If this value is close to your network range, use a
similar address with a /16 subnet.
• The range of the Kubernetes service overlay network addresses, in CIDR notation.
The default value is 192.168.3.0/24. It is unlikely that there will be a conflict between
this default and your network, so you can usually accept the default. If, however,
there is a conflict with your network, use a similar address with a /24 subnet.
• The IP address of the Kubernetes service API server, which is on the service overlay
network.
This IP address must be in the range you specify for the Kubernetes Service overlay
network. The default value is 192.168.3.1.
• Specify the range of tunnel interface units that CSO uses for an SD-WAN implementation with an MX Series hub device.
The default setting is 4000–6000. You specify values in the range 0–16385 that are different from those that you configured on the MX Series router.
• The IP address and FQDN of the host for the load balancer:
• For non-HA deployments, the IP address and FQDN of the VM that hosts the
HAproxy.
• For HA deployments, the virtual IP address and associated FQDN that you
configure for the HAproxy.
The tool uses the input data to configure each region and indicates when the
configuration stage is complete.
• Specify the IP address and prefix of the Kubernetes overlay network that the
microservices use.
• Specify the fully-qualified domain names of the host for the load balancer.
• For a non-HA deployment, the IP address or FQDN of the VM that hosts the
HAproxy.
• For an HA deployment, the virtual IP address that you configure for the HAproxy.
• Specify a unique virtual router identifier in the range 0–255 for the HA Proxy VM in
each region.
The tool uses the input data to configure each region and indicates when the
configuration stage is complete.
23. Specify the subnet in CIDR notation on which the CSO VMs reside.
The script requires this input, but uses the value only for distributed deployments and
not for centralized deployments.
The default is eth0. Accept this value unless you have explicitly changed the primary interface on your hosts or VMs.
26. When all regions are configured, the tool starts displaying the deployment commands.
2. Deploy the central infrastructure services and wait at least ten minutes before you
execute the next command.
CAUTION: Wait at least ten minutes before executing the next command.
Otherwise, the microservices may not be deployed correctly.
3. Deploy the regional infrastructure services and wait for the process to complete.
If you have configured multiple regions, then you can deploy the infrastructure services
on the regions in any order after deploying the central infrastructure.
Deploying Microservices
To deploy the microservices:
2. Deploy the central microservices and wait at least ten minutes before you execute
the next command.
CAUTION: Wait at least ten minutes before executing the next command.
Otherwise, the microservices may not be deployed correctly.
3. Deploy the regional microservices and wait for the process to complete:
1. Log in as root to the VM or server that hosts the central microservices.
If the result is an empty display, as shown below, the microservices are running and
you can proceed to the next section.
The first item in the display shows the microservice and the second item shows its
pod.
4. Wait a couple of minutes, then check the status of the microservice and its pod.
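One way to perform this check, assuming the kubectl client is configured on the VM that hosts the microservices (a sketch; your deployment may use a different command):
root@host:~/# kubectl get pods | grep -v Running
An empty result indicates that all pods, and therefore the microservices, are in the Running state.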
Loading Data
After you check that the microservices are running, you must load data to import plug-ins
and data design tools.
To load data:
1. Ensure that all the microservices are up and running on the central and each regional
microservices host.
root@host:~/# ./load_services_data.sh
NOTE: You must not execute load_services_data.sh more than once after a
new deployment.
You can check the status of the following infrastructure components:
• Cassandra
• Elasticsearch
• Etcd
• MariaDB
• RabbitMQ
• ZooKeeper
• Redis
• ArangoDb
• SimCluster
• ELK Logstash
• ELK Kibana
• Contrail Analytics
• Keystone
• Swift
• Kubernetes
For example:
root@host:~/# cd Contrail_Service_Orchestration_3.3
root@host:~/Contrail_Service_Orchestration_3.3#
To check the status of infrastructure components of the central environment, run the
following command:
root@host:~/Contrail_Service_Orchestration_3.3# ./components_health.sh central
To check the health of the components of the regional environment, run the following command:
root@host:~/Contrail_Service_Orchestration_3.3# ./components_health.sh regional
To check the health of the components of both the central and regional environments, run the following command:
root@host:~/Contrail_Service_Orchestration_3.3# ./components_health.sh
After a couple of minutes, the status of each infrastructure component for the central and regional environments is displayed.
• Uploading the vSRX VNF Image for a Centralized Deployment on page 130
• Uploading the LxCIPtable VNF Image for a Centralized Deployment on page 131
• Uploading the Cisco CSR-1000V VNF Image for a Centralized Deployment on page 133
From Contrail Service Orchestration (CSO) Release 3.3 onwards, CSO uses an algorithm
to automatically generate a dynamic password for the following infrastructure
components:
• Cassandra
• Keystone
• MariaDB
• RabbitMQ
• Icinga
• Prometheus
• ArangoDB
The auto-generated passwords for each infrastructure component and the cspadmin password for Administration Portal are displayed on the console after you finish answering the Setup Assistance questions.
NOTE: You must note the auto-generated passwords that are displayed on the console because they are not saved in the system.
To enhance password security, the length and pattern of each password are different and the passwords are encrypted. The passwords in the log files are masked.
After you have installed Contrail Service Orchestration (CSO) and uploaded virtualized
network functions (VNFs) for a centralized deployment, you must complete the following
tasks in Contrail OpenStack.
2. Execute the following command for each VNF image that you uploaded.
Where:
For example:
1. Copy the endpoint_replace.py script from the CSO installer VM to the Contrail controller
node.
Where:
For example:
NOTE: This procedure must be performed on all the Contrail Controller nodes
in your CSO installation.
2. To check whether the JSM Heat resource is available, execute the heat
resource-type-list | grep JSM command.
If the search returns the text OS::JSM::Get Flavor, the file is available in Contrail
OpenStack.
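For example, on the Contrail controller node (output formatting may differ):
root@host:~/# heat resource-type-list | grep JSM
OS::JSM::Get Flavor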
a. Use Secure Copy Protocol (SCP) to copy the jsm_contrail_3.py file as follows:
c. Restart the heat services by executing the service heat-api restart && service
heat-api-cfn restart && service heat-engine restart command.
d. After the services restart successfully, verify that the JSM heat resource is available
as explained in Step 2. If it is not available, repeat Step 3.
If you create the virtual networks in Administration Portal, CSO automatically sets up
the required routing and sharing attributes for the networks. If, however, you create the
virtual networks in Contrail, you must:
• Configure routing from the Contrail Service Orchestration (CSO) regional server to
both virtual networks.
This action ensures that the multiple tenants (customers) can access the network.
2. If you want to execute Keystone commands, set the source path, using the path that
you configured during the installation.
For example:
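The exact file depends on your installation; a hypothetical example, assuming the OpenStack RC file created during installation is /etc/contrail/openstackrc:
root@host:~/# source /etc/contrail/openstackrc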
• admin
• cspadmin
• neutron
For example:
ID                     Name
0a3615846a4d689bedf8   admin
20a61f33a15453f21682   cspadmin
41a71e35a152a7c39e69   neutron
10. Obtain the Keystone service token from the /etc/contrail/keystone file.
• admin
• member
• operator
where
For example:
ID                     Name
7df60593f801df3cad04   _member_
5be423fdf76a5d4f8964   admin
3bc8235fd643ae814c3d   operator
13. Use the following command to add the admin and cspadmin users to the admin and
_member_ groups.
where
For example:
14. Use the following command to assign the system_user property to the admin,
cspadmin, and neutron users.
where
For example:
3. Assign the admin role to user admin for the project that you created.
4. Create a user, and assign the user to the project that you created.
For example:
Related Documentation
• Configuring the Physical Servers and Nodes for the Contrail Cloud Implementation in a Centralized Deployment on page 63
• Authentication and Authorization in the Cloud CPE and SD-WAN Solutions on page 27
• Uploading the vSRX VNF Image for a Centralized Deployment on page 130
• Uploading the LxCIPtable VNF Image for a Centralized Deployment on page 131
• Uploading the Cisco CSR-1000V VNF Image for a Centralized Deployment on page 133
The Contrail Service Orchestration (CSO) installer places the vSRX image in the
/var/www/html/csp_components directory on the installer virtual machine (VM) during
the installation process. You must copy this image from the installer VM to the Contrail
controller node and upload it to make the vSRX virtualized network function (VNF)
available in a centralized deployment.
3. Copy the vSRX-img file from the installer VM to any directory on the Contrail controller
node.
For example, if the IP address of the Contrail controller node is 192.0.2.1, and you want
to copy the file to the root directory:
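For example, assuming the image is still in its default location on the installer VM and you copy it to the root directory of the Contrail controller node:
root@host:~/# scp /var/www/html/csp_components/vSRX-img root@192.0.2.1:/root/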
4. Check whether you have an OpenStack flavor with the following specification on the
Contrail controller node.
• 2 vCPUs
• 4 GB RAM
For example:
If you do not have a flavor with the required specification, create one.
For example:
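A sketch of these checks using the OpenStack nova CLI, assumed to be available on the controller node; the flavor name vsrx-flavor and the 40-GB disk size are illustrative values, not requirements from this guide:
root@host:/# nova flavor-list
root@host:/# nova flavor-create vsrx-flavor auto 4096 40 2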
5. Access the directory where you copied the image on the Contrail controller node, and
upload it into the Glance software.
For example:
root@host:/# cd root
root@host:/root# glance image-create --name vSRX-img --is-public True --container-format
bare --disk-format qcow2 < vSRX-img
NOTE: You must name the image vSRX-img to ensure that the virtual
infrastructure manager (VIM) can instantiate the VNF.
The status of the instance should be spawning or running. You can click the instance
to see its console.
If you need to investigate the image further, the default username for the vSRX-img
package is root and the password is passw0rd.
• Uploading the Cisco CSR-1000V VNF Image for a Centralized Deployment on page 133
You use this process to make the LxCIPtable VNF available in a centralized deployment.
2. Download the appropriate Ubuntu cloud image to the Contrail controller node.
For example:
root@host:/# cd /tmp
root@host:/tmp# wget
http://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-amd64-disk1.img
3. On the Contrail controller node, upload the Ubuntu image into the Glance software.
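For example, following the same glance image-create pattern shown in the preceding vSRX procedure; the image name ubuntu-img is an assumption:
root@host:/tmp# glance image-create --name ubuntu-img --is-public True --container-format bare --disk-format qcow2 < ubuntu-14.04-server-cloudimg-amd64-disk1.img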
4. In a local directory on the Contrail OpenStack node, create a metadata file for the
image. For example:
6. From the OpenStack GUI, log in to the instance with the username ubuntu and the
password specified in the user-data file.
CAUTION: You must use the password passw0rd for the LxCIPtable VNF to operate correctly.
PermitRootLogin = yes
d. In the file /etc/network/interfaces, modify the eth0, eth1, and eth2 settings as
follows:
auto eth0
iface eth0 inet dhcp
metric 1
auto eth1
iface eth1 inet dhcp
metric 100
auto eth2
iface eth2 inet dhcp
metric 100
b. From the OpenStack Instances page, select Create Snapshot for this instance, and
specify the Name as LxcImg.
• Uploading the Cisco CSR-1000V VNF Image for a Centralized Deployment on page 133
You use this process to make the Cisco CSR-1000V VNF available in a centralized
deployment.
5. From the OpenStack GUI, log in to the instance using the management IP address as
the username and without a password.
For example:
b. From the OpenStack Instances page, select Create Snapshot for this instance, and
specify the name of the image as csr1000v-img.
• Uploading the LxCIPtable VNF Image for a Centralized Deployment on page 131
If you have deployed Contrail Service Orchestration (CSO) behind NAT, you must apply NAT rules after you run the setup_assist.sh script on the central and regional hosts. The NAT rule set determines the direction of the traffic to be processed.
NOTE: If you do not apply NAT rules after you install or upgrade CSO, you cannot access Administration Portal, the Kibana UI, or the RabbitMQ console.
a. Copy the following commands and paste them into a text file.
a. Copy the following commands and paste them into a text file.
The NAT rules are applied for the central and regional NAT servers, and you can access Administration Portal, the Kibana UI, and the RabbitMQ console.
If your installed version is Contrail Service Orchestration (CSO) Release 3.2.1, you can
use a script to directly upgrade to CSO Release 3.3.
NOTE: You can upgrade to CSO Release 3.3 only from CSO Release 3.2.1. If
your installed version of CSO is not Release 3.2.1, then you must perform a
fresh installation of CSO 3.3.
You can roll back to CSO Release 3.2.1, if the upgrade is unsuccessful.
To upgrade to CSO Release 3.3, you must run the scripts that are available in the
Contrail_Service_Orchestration_3.3.tar.gz file in the following order:
1. upgrade.sh—This script upgrades the CSO software from Release 3.2.1 to Release 3.3.
The upgrade.sh script puts CSO in maintenance mode, takes a snapshot of all VMs so that you can roll back to the previous release if the upgrade fails (optional), upgrades all microservices and infrastructure components if required, performs health checks at various levels, validates that all VMs, infrastructure components, and microservices are up and running, and then puts CSO back in live mode.
NOTE: Before you upgrade, ensure that all ongoing jobs are completed or stopped; otherwise, those jobs are stopped during the upgrade. During the upgrade, you experience downtime because CSO goes into maintenance mode.
2. revert.sh—Run this script only if the upgrade fails and if you have taken a snapshot of
all VMs. This script reverts to the previously installed version.
Upgrade to CSO Release 3.3 is independent of the deployment type (HA or non-HA), the environment type (trial or production), the infrastructure components and microservices used, and the hypervisor type (KVM or VMware ESXi).
To ensure a smooth upgrade, the scripts perform a number of health checks before and after the upgrade. Health checks determine the operational condition of all components, the host, and the VMs. If an error occurs during a health check, the upgrade process is paused; after you rectify the error, rerun the script.
• System Health Checks—Checks the following parameters of VMs and the host machine.
Limitations
Upgrade to CSO Release 3.3 has the following limitations:
• The upgrade is applicable only to CSO software and is not applicable to the existing
devices and sites in CSO. After a successful upgrade, the existing sites and devices
continue to have the same functionality of the previously installed version, that is, CSO
Release 3.2.1.
Security Management
• Release 3.3 security management-related features are supported on devices that are onboarded in Release 3.2.1.
SD-WAN
• For the Application Visibility feature, the trend data is reset after the upgrade. You can access the Release 3.2.1 trend data through the REST APIs.
• The Application Quality of Experience (AppQoE) feature works only for the tenants that you create in Release 3.3. For more information about AppQoE, see Application Quality of Experience (AppQoE) Overview in the Contrail Service Orchestration User Guide.
• Device Management functions work for Release 3.2.1 sites.
Cloud CPE
• All functionalities of centralized and distributed deployments continue to work on Release 3.2.1 sites or devices that are onboarded in Release 3.2.1.
• Multi-region support for centralized deployments is not supported on Release 3.2.1 sites or devices that are onboarded in Release 3.2.1.
• Device Management functions work for Release 3.2.1 sites.
• High availability (HA) for VRRs is not supported for sites that are created in Release 3.2.1.
From Contrail Service Orchestration (CSO) Release 3.3, you can directly upgrade the
CSO software from Release 3.2.1 by running scripts.
This upgrade procedure is independent of the deployment type (non-HA or HA), the environment type (trial or production), the infrastructure components and microservices used, and the hypervisor type (KVM or VMware ESXi).
• Ensure that you are running Contrail Service Orchestration (CSO) Release 3.2.1.
• If you are using VMware ESXi VMs, you must create the provision_vm.conf file in the
Contrail_Service_Orchestration_3.3/confs/ directory.
For example, for a trial environment with HA, you can refer to the provision_vm.conf file that is available in the Contrail_Service_Orchestration_3.3/confs/trial/ha/provisionvm/ directory.
1. Download the CSO Release 3.3 installer package from the Software Downloads page
to the local drive.
3. Copy the installer package from your local folder to the installer VM.
The contents of the installer package are extracted in a directory with the same name
as the installer package.
root@host:~/# cd Contrail_Service_Orchestration_3.3
root@host:~/Contrail_Service_Orchestration_3.3#
root@host:~/Contrail_Service_Orchestration_3.3# ls
• upgrade.sh
• revert.sh
This script upgrades the CSO software from Release 3.2.1 to Release 3.3. The upgrade.sh script puts CSO in maintenance mode, takes a snapshot of the running state of all VMs (optional), upgrades all microservices and infrastructure components if required, performs health checks at various levels, validates that all VMs, infrastructure components, and microservices are up and running, and then puts CSO back in live mode.
If the environment type is production, the upgrade.sh script takes a snapshot of all VMs by default. For a trial environment, you are prompted to confirm whether you want to take a snapshot.
NOTE: The script does not take a snapshot of the installer VM or the Virtual Route Reflector (VRR) VM.
root@host:~/Contrail_Service_Orchestration_3.3# ./upgrade.sh
INFO ===============================================
INFO Overall Upgrade Summary
INFO ===============================================
INFO Configuration Upgrade : success
INFO System Health Check : success
INFO CSO Health-Check before Upgrade : success
INFO CSO Maintenance Mode Enabled : success
INFO VM Snapshot : success
INFO Central Infra Upgrade : success
INFO Regional Infra Upgrade : success
INFO Microservices pre-deploy scripts execution : success
INFO Central Microservices Upgrade : success
INFO Regional Microservices upgrade : success
INFO Microservices post-deploy scripts execution : success
INFO CSO Health-Check after Upgrade : success
INFO Enable CSO Services : success
INFO Load Microservices Data : success
INFO Overall Upgrade Status : success
INFO ============================================
INFO System got upgraded to 3.3 Successfully.
INFO =============================================
The time taken to complete the upgrade process depends on the hypervisor type and the environment type. If you use KVM as the hypervisor, all VMs are shut down while the snapshot is taken. If you use VMware ESXi as the hypervisor, all VMs remain up and running while the snapshot is taken.
If an error occurs, you must fix the error and rerun the upgrade.sh script. When you
rerun the upgrade.sh script, the script continues to execute from the previously failed
step.
You can view the following log files, which are available in the
/root/Contrail_Service_Orchestration_3.3/logs directory:
• upgrade_console.log
• upgrade_error.log
• upgrade.log
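For example, to follow the console log while the upgrade runs, you might use tail
(assuming the log directory path shown above):
root@host:~/# tail -f /root/Contrail_Service_Orchestration_3.3/logs/upgrade_console.log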
7. (Optional) If you are unable to troubleshoot the error, you can roll back to the previous
release by running the revert.sh script.
root@host:~/Contrail_Service_Orchestration_3.3# ./revert.sh
INFO ===============================================
INFO Overall Revert Summary
INFO ===============================================
After a successful upgrade, CSO is functional and you can log in to the Administration Portal
and Customer Portal.
NOTE: After you successfully upgrade from CSO Release 3.2.1 to CSO Release
3.3, ensure that you download the application signatures before installing
signatures on the device. This is a one-time operation after the upgrade.
Adding Virtual Route Reflectors (VRRs) After Upgrading to CSO Release 3.3
To support high availability (HA) for Virtual Route Reflectors (VRRs), you must add VRRs
and create redundancy groups after you upgrade to Contrail Service Orchestration (CSO)
Release 3.3.
To add VRRs:
root@host:~/# cd Contrail_Service_Orchestration_3.3
root@host:~/Contrail_Service_Orchestration_3.3#
root@host:~/Contrail_Service_Orchestration_3.3# ./add_vrr.sh
The existing VRR details are displayed.
host-name | redundancy-group
vrr-192.204.243.28 0
=========================================
NOTE: By default, VRRs that were created in Release 3.2.1 belong to
redundancy group 0.
• y—VRR is behind NAT. If you are deploying a VRR in a private network, the NAT
instance translates all requests (BGP traffic) to a VRR from a public IP address
to a private IP address.
• If you want to use a common password for all VRRs, enter y and specify the
common password.
• If you want to use a different password for each VRR, enter n and specify the
password for each VRR.
• Specify the public IP address for each VRR that you create. For example,
192.110.20.118/24.
• Specify the redundancy group for each VRR that you have created.
• For non-HA deployments, specify the redundancy group of the VRR as zero.
If you have chosen a common password for all VRRs, you are prompted to specify
the common password only for the first VRR instance.
You can view the newly added VRRs through the following APIs: routing-manager
(GET https://IP-address-of-Administration-Portal/routing-manager/vrr-instance) or
ems-central (GET https://IP-address-of-Administration-Portal/ems-central/device).
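As an illustration only, assuming that the API accepts a Keystone authentication token in
an X-Auth-Token header (an assumption about the deployment) and that the Administration
Portal is reachable at 192.0.2.10, such a query might look like this with curl:
root@host:~/# curl -k -H "X-Auth-Token: $TOKEN" https://192.0.2.10/routing-manager/vrr-instance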
Each hub or spoke device establishes a BGP peering session with VRRs that you have
created and assigned to different redundancy groups, thereby providing redundancy.
This topic describes the possible errors that you might encounter while you are upgrading
Contrail Service Orchestration (CSO).
Problem Description: While you are upgrading CSO to Release 3.3 or reverting to the
previously installed release, the upgrade or revert status is displayed as Going to sync
salt... for a considerable time.
The Salt Master on the installer VM might be unable to reach all Salt Minions on the other
VMs, and a Salt timeout exception might occur.
Solution: Based on the output of the salt '*' test.ping command, restart either the Salt
Master or the Salt Minion.
2. Run the salt '*' test.ping command to check whether the Salt Master on the installer VM
can reach the other VMs.
• If the following error occurs, you must restart the Salt Master.
Salt request timed out. The master is not responding. If this error persists after verifying
the master is up, worker_threads may need to be increased
csp-regional-sblb.DB7RFF.regional:
True
csp-contrailanalytics-1.8V1O2D.central:
True
csp-central-msvm.8V1O2D.central:
True
csp-regional-k8mastervm.DB7RFF.regional:
True
csp-central-infravm.8V1O2D.central:
False
csp-regional-msvm.DB7RFF.regional:
False
csp-regional-infravm.DB7RFF.regional:
True
csp-central-k8mastervm.8V1O2D.central:
True
If the status of a VM is False, you must log in to that VM and restart the Salt Minion
(example restart commands follow this procedure).
3. Rerun the salt '*' test.ping command to verify that the status for all VMs is True.
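The exact restart commands are not reproduced in this extract. As an illustration, on
Ubuntu-based VMs running the standard Salt packages (an assumption about the
installation), the restarts might look like the following; the host names are illustrative:
root@installer-vm:~/# service salt-master restart
root@failed-vm:~/# service salt-minion restart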
Problem Description: While you are upgrading CSO to Release 3.3, the following error might occur:
Could not free cache on host server ServerName
Problem Description: While you are upgrading CSO to Release 3.3, the following error might occur:
One or more kube-system pods are not running
Solution: Check the status of the kube-system pods, and restart kube-proxy if required.
2. To view the status of the kube-system pods, run the following command:
Check the status of kube-proxy. You must restart kube-proxy if its status is Error,
CrashLoopBackOff, or MatchNodeSelector.
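The commands themselves are not shown in this extract. As a sketch, on the Kubernetes
master VM you could list the kube-system pods and, if kube-proxy is in a failed state,
delete its pod so that Kubernetes re-creates it; the pod name shown here is illustrative:
root@k8-master:~/# kubectl get pods --namespace=kube-system
root@k8-master:~/# kubectl delete pod kube-proxy-abc12 --namespace=kube-system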
Problem Description: While you are upgrading CSO to Release 3.3, the following error might occur:
One or more nodes down
Solution: Check the status of the kube-master or kube-minion nodes, and restart the nodes
if required.
Identify any node that is in the Not Ready status; you must restart a node that is in this
state.
3. To restart the node that is in the Not Ready status, log in to the node through SSH and
run the following command:
4. Rerun the following command to check the status of the node that you restarted.
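As an illustration of these steps, you could check the node status with kubectl on the
Kubernetes master VM, reboot the affected node over SSH, and then check the status
again; the host names are illustrative:
root@k8-master:~/# kubectl get nodes
root@affected-node:~/# reboot
root@k8-master:~/# kubectl get nodes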
Snapshot Error
Problem Description: The upgrade.sh script sets CSO to maintenance mode and takes a
snapshot of all VMs so that you can roll back to the previous release if the upgrade fails.
While you are upgrading to CSO Release 3.3, the snapshot process might fail because of
the following reasons:
• Unable to shut down one or more VMs—You must manually shut down the VM.
• Unable to take a snapshot of one or more VMs—You must manually restart the VMs,
start the Kubernetes pods, and set CSO to active mode.
Id Name State
----------------------------------------------------
10 vrr1 running
11 vrr2 running
40 canvm shut off
41 centralinfravm shut off
43 centralk8mastervm running
44 centralmsvm shut off
45 installervm running
46 regional-sblb shut off
47 regionalinfravm running
48 regionalk8mastervm shut off
49 regionalmsvm shut off
3. Execute the following command to shut down the VMs that are in the running state:
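The command is not reproduced in this extract. For KVM-hosted VMs, one typical approach
is to shut down each running VM with virsh, using the VM names from the virsh list output
shown above; the VM name here is an example:
root@host:~/# virsh shutdown centralk8mastervm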
If you want to proceed with the upgrade process, you can rerun the upgrade.sh script.
• If you are unable to take the snapshot for one or more VMs, you must:
Id Name State
----------------------------------------------------
10 vrr1 running
11 vrr2 running
40 canvm running
41 centralinfravm running
43 centralk8mastervm running
44 centralmsvm running
45 installervm running
46 regional-sblb shut off
47 regionalinfravm running
48 regionalk8mastervm shut off
49 regionalmsvm running
3. Execute the following command to restart the VMs that are in the shut off state, in the
following order (an example follows the list):
a. Infrastructure VM
b. Load balancer VM
d. Contrail Analytics VM
e. K8 Master VM
f. Microservices VM
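For KVM, as an illustration, each shut-off VM can be started with virsh in the order given
above; the VM names here are taken from the example virsh list output and may differ in
your deployment:
root@host:~/# virsh start regional-sblb
root@host:~/# virsh start regionalk8mastervm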
4. On the installer VM, run the following commands to start the Kubernetes pods:
5. Log in to the central infrastructure VM through SSH, and run the following command
to set CSO to active mode.
If you want to proceed with the upgrade process, you can rerun the upgrade.sh script.
You can use the license tool to upload and install licenses for the following products:
Using this license tool is a quick and convenient way to upload and install licenses
simultaneously. You can also use the API to upload and install licenses or to incorporate
this functionality into your custom interface.
Contrail Service Orchestration uses the following workflow for uploading and installing
licenses:
1. You run the license tool on the installer VM, which communicates with the central
microservices host.
3. The regional microservices host executes installation instructions on the CPE device
or the Contrail Controller node.
The license tool enables you to install licenses and retrieve license information through a
command-line interface (CLI).
Table 24 on page 152 describes the arguments and variables for the tool.
-i license-id: Specifies the identifier of the license. Mandatory for license installation.
-p license-path: Specifies the path to the license file. Mandatory for license installation.
-t tenant-name: Specifies the name of the customer in Contrail Service Orchestration.
Use for operations concerning all sites for a single customer; do not use for operations
concerning multiple customers.
--sitefile site-list-path: Specifies the path to a text file that contains a list of comma- or
newline-separated sites in Contrail Service Orchestration. Use for operations concerning
multiple customers or a subset of sites for a single customer.
--get_license_info: Extracts license information. Requires either the -t or the --sitefile option.
--service firewall | utm | nat: Specifies the network function for the license. Mandatory if
the site hosts multiple VNFs.
2. Access the directory that contains the installer. For example, if the name of the installer
directory is csoVersion:
root@host:~/# cd csoVersion
For example:
root@host:~/# export OS_AUTH_URL=http://192.0.2.0:35357/v2.0
root@host:~/# export OS_USERNAME=cspadmin
root@host:~/# export OS_PASSWORD=passw0rd
root@host:~/# export OS_TENANT_NAME=admin
root@host:~/# export TSSM_IP=192.0.2.1
root@host:~/# export REGION_IP=192.0.2.2
1. Run the tool with the following options (see Table 24 on page 152).
For example:
Response:SUCCESS
Site: jd8-site-1
vSRX IP: 10.102.82.36
License Info: "license": [
{
"license_id": "JUNOS000001",
"install_status": success
}
]
Response:SUCCESS
Site: jd8-site-2
vSRX IP: 10.102.82.2
License Info: "license": [
{
"license_id": "JUNOS000001",
"install_status": success
}
]
3. If there is a problem with the license installation, review the license_install.log file for
troubleshooting information.
Installing a License for a Specific Service on All Sites for One Customer
If you use more than one VNF at a site, you must specify the service when you install the
license.
1. Run the tool with the following options (see Table 24 on page 152).
For example:
3. If there is a problem with the license installation, review the license_install.log file for
debugging information.
2. Run the tool with the following options (see Table 24 on page 152).
For example:
4. If there is a problem with the license installation, review the license_install.log file for
debugging information.
Installing a License for a Specific Service on One or More Sites for Multiple Tenants
To install a license on one or more sites:
2. Run the tool with the following options (see Table 24 on page 152).
For example:
4. If there is a problem with the license installation, review the license_install.log file for
debugging information.
1. Run the tool with the following options (see Table 24 on page 152).
For example:
3. If there is a problem with the operation, review the license_install.log file for debugging
information.
2. Run the tool with the following options (see Table 24 on page 152).
For example:
4. If there is a problem with the operation, review the license_install.log file for debugging
information.
See Table 25 on page 159 for information about logging in to the Contrail Service
Orchestration GUIs.
Customer Portal
URL: Same as the URL used to access the Administration Portal.
Login credentials: Specify the credentials when you create the customer, either in
Administration Portal or with API calls.
Kibana
URL, for example: http://192.0.2.2:5601
Grafana and Prometheus
These tools provide monitoring and troubleshooting for the infrastructure services in
CSO. You use Prometheus to create queries for the infrastructure services and Grafana
to view the results of the queries in a visual format.
URLs:
• Prometheus—ha-proxy-IP-Address:30900
• Grafana—ha-proxy-IP-Address:3000
Where:
ha-proxy-IP-Address—IP address of the HA proxy
• For a deployment without HA, use the IP address of the VM that hosts the
microservices for the central POP.
• For an HA deployment, use the virtual IP address that you provide for the HA proxy
when you install CSO.
For example: http://192.0.2.2:30900
Login credentials: For Grafana, specify the username and password. The default username
is admin and the default password is admin. For Prometheus, login credentials are not
needed.
After the upgrade, to log in to the Administration Portal, you must specify the cspadmin
password of the previously installed version.
There are three tools that you use together to design and publish network services for
centralized and distributed deployments in a hybrid WAN deployment:
• First, you use Configuration Designer to create configuration templates for virtualized
network functions (VNFs). The configuration templates specify the parameters that
the customer can configure for a network service.
• Next, you use Resource Designer to create VNF packages. A VNF package specifies
the network functions, function chains, performance, and a configuration template
that you created in Configuration Designer.
• Finally, you use Network Service Designer to design service chains for network services,
using the VNF packages that you created with Resource Designer.
You use the same process to create network services for centralized and distributed
deployments. You cannot, however, share network services between a centralized
deployment and a distributed deployment that are managed by one Contrail Service
Orchestration installation. In this case, you must create two identical services, one for
the centralized deployment and one for the distributed deployment.
You can also use Configuration Designer to create workflows for device templates.
For detailed information about using the Designer Tools, see the Contrail Service
Orchestration User Guide.
• Configure network devices and servers for the deployment. See the following topics:
• Configuring the EX Series Ethernet Switch for the Contrail Cloud Implementation in
a Centralized Deployment on page 58
• Configuring the QFX Series Switch for the Contrail Cloud Implementation in a
Centralized Deployment on page 59
• Configuring the Physical Servers and Nodes for the Contrail Cloud Implementation
in a Centralized Deployment on page 63
• Uploading the vSRX VNF Image for a Centralized Deployment on page 130
• Uploading the LxCIPtable VNF Image for a Centralized Deployment on page 131
• Uploading the Cisco CSR-1000V VNF Image for a Centralized Deployment on page 133
You can use the license tool to install vSRX licenses. See “Installing Licenses with the
License Tool” on page 152.
• You must create a Virtualized Infrastructure Manager (VIM) for each POP.
• You can add an MX Series router as a physical network element (PNE) to provide
a Layer 3 routing service to customer sites through use of virtual routing and
forwarding (VRF) instances.
• You add the Junos Space element management system (EMS) if you use a VNF
that requires this EMS.
4. Access Contrail and add the following rule to the default security group in the Contrail
project.
a. Create a regional service edge site for each branch site in the customer’s network.
b. Create a local service edge site if customers access the Internet through the
corporate VPN.
9. If you configured a PNE, then associate the PNE with the site and configure a VRF for
each customer site.
For detailed information about using Administration Portal, see the Contrail Service
Orchestration User Guide.
NOTE: You must send an activation code to the customer for each NFX250
device. The customer’s administrative user must provide this code during the
NFX250 installation and configuration process. The Juniper Networks Redirect
Service uses this code to authenticate the device.
After you have installed Contrail Service Orchestration and published network services
with Network Service Designer, you use Administration Portal to set up the distributed
deployment. The following workflow describes the process:
3. Add an on-premise spoke site for each site in the customer’s network.
6. Add data for the POPs and provider edge (PE) router.
7. Upload images for devices used in the deployment, such as the vSRX gateway and
the NFX250 device, to the central activation server.
1. Upload licenses for vSRX and SRX devices and VNFs with the installer tool (see
“Installing Licenses with the License Tool” on page 152).
When an administrator installs and configures the NFX250 devices at a customer site,
the device automatically interacts with the Redirect Service. The Redirect Service
authenticates the device and sends information about its assigned regional server.
The device then obtains a boot image and configuration image from the regional
server and uses the images to become operational.
Customers activate SRX Series Services Gateways and vSRX instances acting as CPE
devices through Customer Portal.
For detailed information about using Administration Portal, see the Contrail Service
Orchestration User Guide.
BEST PRACTICE: Create different POPs for Hybrid WAN and SD-WAN
deployments so that it is clear which physical device (in this case, the hub
device) to select when you configure the spoke sites.
3. Access the POP that contains the hub device for the SD-WAN deployment.
Multiple tenants can share the hub. You typically use one hub for each POP.
The device should have the status Provisioned and an Activate Device link in the
Management Status column on the POPs page in Administration Portal.
a. Copy the Stage 1 configuration from the Routers page in Administration Portal
to the SRX Series device console.
b. Click Activate next to the hub device in the Routers page of Administration Portal.
7. Add a cloud site to specify which hub site the tenant uses.
8. Create an on-premise spoke site for the customer and specify the LAN segments that
connect to the CPE device.
If an SLA violation occurs, CSO automatically switches the traffic from one WAN link
to another on the CPE device. You can track these occurrences and view associated
alarms in the Monitor Pages in both the All Tenants and specific tenant views.
After you have set up the network for a customer with Administration Portal, that customer
can view, configure, and manage their network through Customer Portal. Customer Portal
is a customer-specific view of Administration Portal. Customers have their own
login credentials, which provide role-based access control to the information for their
networks. Customers see only their own networks, and cannot view other customers’
networks. You can also view and manage each customer’s network from Administration
Portal, by accessing the view for a specific customer.
• Deploy and manage available network services for a hybrid WAN deployment.
For detailed information about using Customer Portal, see the Contrail Service
Orchestration User Guide.
• Cassandra
• Kubernetes
• RabbitMQ
• Host metrics
• VM metrics
Refer to the documentation for Prometheus and Grafana for information about using
these products.
Monitoring Microservices
Service and Infrastructure Monitor (SIM) provides continuous and comprehensive
monitoring of Contrail Service Orchestration. The application provides both a visual
display of the state of the deployment and the ability to view detailed event messages.
• Network services
• Microservices
• Virtual machines
• Physical servers
For detailed information about using Service and Infrastructure Monitor, see the Contrail
Service Orchestration User Guide.
You can also use Kibana to view and analyze log files in a visual format. See
"Setting Up the Visual Presentation of Microservice Log Files" on page 171.
Related Documentation
• Viewing and Creating Dashboards for Infrastructure Services on page 170
• Setting Up the Visual Presentation of Microservice Log Files on page 171
• Grafana—http://ha-proxy-IP-Address:3000
• Prometheus—http://ha-proxy-IP-Address:30900
Where:
ha-proxy-IP-Address—IP address of the HA proxy
Refer to the documentation for Prometheus and Grafana for more information about
using these products. You can also refer to the documentation for the different
infrastructure services to determine what type of information to include in your custom
dashboards.
Contrail Service Orchestration includes Kibana and Logstash to enable viewing of logged
data for microservices in a visual format.
1. Access Kibana using the URL for the server that you require (see “Accessing the Contrail
Services Orchestration GUIs” on page 159).
3. Click Create.
4. Log in as root to the installer host and access the installer directory.
5. Copy the deploy_manager/export.json file to a location from which you can import it
to the Kibana GUI.
NOTE: Do not change the format of the JSON file. The file must have the
correct format to enable visualization of the logs.
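For example, assuming you access the Kibana GUI from a browser on your local workstation,
you might copy the file off the installer host with scp; the host name and paths here are
illustrative:
user@workstation:~$ scp root@installer-host:/root/Contrail_Service_Orchestration_3.3/deploy_manager/export.json .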
7. Click Import.
8. Navigate to the location of the export.json file that you made available in Step 5.
9. Click Open.
Refer to the Kibana documentation for information about viewing files in a visual format.
When you log in to Kibana, you see the Discover page, which displays a chart of the number
of logs for a specific time period and a list of events for the deployment. You can filter
this data to view subsets of logs and add fields to the table to find the specific information
that you need. You can also change the time period for which you view events.
1. Specify a high-level query in the search field to view a subset of the logs.
You can use keywords from the list of fields in the navigation bar, and specific values
for parameters that you configure in Contrail Service Orchestration (CSO), such as a
specific customer name or a specific network service.
For example, specify the following query to view logs concerning requests made for
the customer test-customer.
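The query text itself is not reproduced in this extract. Purely as an illustration, a Kibana
search using Lucene syntax might look like the following, where the field name tenant_name
is an assumption about the log schema rather than a documented field:
tenant_name:"test-customer"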
For example, select request to show details about the request made for this customer.
Troubleshooting Microservices
You can use the troubleshooting dashboard to investigate issues for the microservices.
This widget shows the number of logs for each alert level.
This widget shows the number of logs for each HTTP status code.
This widget shows a visual representation of the number of logs for each microservice
analyzed by HTTP status code.
2. Click on an option, such as an alert level, in a widget to filter the data and drill down
to a specific issue.
Analyzing Performance
You can use the performance dashboard to analyze the performance of the microservices.
This widget shows how long an API associated with a microservice has been in use.
You can view minimum, maximum, or average durations.
• Request ID Vs Timestamp
• API Vs Count
This widget shows the number of times an API has been called.
• Application Vs API
This widget shows the level of microservice use analyzed by the type of API call.
2. Click on an option, such as a request identifier, in a widget to filter the data and drill
down to a specific issue.
After you deploy the microservices, you can manage the containers with the
deploy_micro_services.sh script.