NACC SG 2of2
Contrail Cloud
2.a
Student Guide
Volume 2 of 2
Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents
This two-day course is designed to provide students with the knowledge required to work with the Juniper Contrail
software-defined networking (SDN) solution. Students will gain in-depth knowledge of how to use the OpenStack and
Contrail Web UIs and APIs to perform the required tasks. Through demonstrations and hands-on labs, students will gain
experience with the features of Contrail. This course is based on Contrail Release 2.21.
Course Level
Network Automation Using Contrail Cloud is an intermediate-level course.
Intended Audience
This course benefits individuals responsible for working with software-defined networking solutions in data center,
service provider, and enterprise network environments.
Prerequisites
The prerequisites for this course are as follows:
• Basic TCP/IP skills;
• General understanding of data center virtualization;
• Basic understanding of the Junos operating system;
• Attendance of the Introduction to the Junos Operating System (IJOS) and Juniper Networks SDN
Fundamentals (JSDNF) courses prior to attending this class; and
• Basic knowledge of object-oriented programming and Python scripting is recommended.
Objectives
After successfully completing this course, you should be able to:
• Define basic SDN principles and functionality.
• Define basic OpenStack principles and functionality.
• Define basic Contrail principles and how they relate to OpenStack.
• List and define the components that make up the Contrail solution.
• Explain where Contrail fits into NFV and SDN.
• Describe the functionality of the Contrail control and data planes.
• Describe Nova Docker support in Contrail.
• Describe extending a Contrail cluster with physical routers.
• Describe support for TOR switches and OVSDB.
• Describe the OpenStack and Contrail WebUIs.
• Create a tenant project.
• Create and manage virtual networks.
• Create and manage policies.
• Create and assign floating IP addresses.
• Add an image and launch an instance from it.
• Describe how a tenant is created internally.
• Use Contrail's API to configure OpenStack and Contrail.
• Describe service chaining within Contrail.
• Set up a service chain.
• Explain the use of Heat Templates with Contrail.
Day 1
Chapter 1: Course Introduction
Chapter 2: Contrail Overview
Chapter 3: Architecture
Chapter 4: Basic Configuration
Lab: Tenant Implementation and Management
Day 2
Chapter 5: Service Chaining
Lab: Service Chains
Chapter 6: Contrail Analytics
Chapter 7: Troubleshooting
Lab: Performing Analysis and Troubleshooting in Contrail
Appendix A: Installation
Lab: Installation of the Contrail Cloud (Optional)
• Franklin Gothic Normal text: Most of what you read in the Lab Guide and Student Guide.
• CLI Input: Text that you must enter. Example: lab@San_Jose> show route
• GUI Input: Text that you must enter. Example: Select File > Save, and type config.ini in the Filename field.
• CLI Undefined: Text where the variable's value is the user's discretion, or where the value shown in the lab guide might differ from the value the user must input according to the lab topology. Examples: Type set policy policy-name. ping 10.0.x.y
• GUI Undefined: Same as CLI Undefined, but in the GUI. Example: Select File > Save, and type filename in the Filename field.
We Will Discuss:
• Service chaining within Contrail;
• Configuring service chains;
• Configuring Source NAT; and
• Using Heat templates.
Deployment Locations
Services can be deployed in multiple locations, including the hypervisor, VMs, physical devices, access control lists (ACLs) on the physical access switch, and a service card within a router or switch.
The option most typical for SDN/NFV solutions is deploying services inside VMs.
Configuration Elements
Setting up a service chain requires the configuration of several elements. You can think of these elements as building blocks
as they fit together to allow you to easily create varied and dynamic service chains. All service chains require the following
elements:
• Virtual network(s): Defined VNs to use.
• Service template: Includes parameters such as service mode, service type, and VM image to use.
• Service instance: Includes parameters such as the service template to use, number of instances to launch, and
which VNs to assign to the service VM interfaces.
• Service policy: Includes policy rules (source and destination VNs) and other policy match conditions (such as
source and destination ports).
In the next section, we go through some use cases and discuss these elements in further detail.
Service Templates
Service templates are always configured within the scope of a domain, and the templates can be used on all projects within
a domain. Furthermore, a template can be used to launch multiple service instances in different projects within a domain.
The following are parameters configured for a service template:
• Service template name: The name of the template.
• Mode of service: Select from Transparent, In-Network, or In-Network-NAT.
• Type of service: Select between Firewall or Analyzer. Use Firewall for both firewall and NAT services.
• Image name: Select the appropriate image from a list of available images (added through the OpenStack
Web UI).
• Interface types: Select the interface type or types for the service.
– For firewall or NAT services, both Left and Right interfaces are required.
– For an analyzer service, only a Left interface is required.
– For Juniper Networks virtual images, a Management interface is required, in addition to any left or right
requirement.
Continued on the next page.
Service Instance
A service instance is always maintained within the scope of a project. A service instance is launched using a specified
service template from the domain to which the project belongs. The following parameters are configured for a service instance:
• Service Instance name: Enter a name for the service instance.
• Service Template name: Select the appropriate template from a list of available service templates.
• Management VN: Select the management VN from a list of available VNs. If you are using the Management
interface, select Auto Configured. The software will use an internally-created virtual network.
• Left VN: Select the Left VN from a list of available VNs.
• Right VN: Select the Right VN from a list of available VNs.
Service Policy
The final piece of a service chain is the service policy. The following are the parameters to be configured for a service policy:
• Policy name: Provide a name for the service policy.
• Source network name: Select the appropriate network name for the source network.
• Destination network name: Select the appropriate network name for the destination network.
• Services: Select the Services check box, and then choose the appropriate service instance name from the drop-down
list.
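To make the relationships among these elements concrete, the following is a minimal plain-Python sketch, not the Contrail API; all class, field, and object names here are invented for illustration. It shows how a service policy references a service instance, which in turn references a service template and the VNs:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of the four building blocks of a service chain.
@dataclass
class ServiceTemplate:
    name: str
    service_mode: str           # "transparent", "in-network", or "in-network-nat"
    service_type: str           # "firewall" or "analyzer"
    image_name: str
    interface_types: List[str]  # e.g. ["management", "left", "right"]

@dataclass
class ServiceInstance:
    name: str
    template: ServiceTemplate   # launched from a template in the domain
    left_vn: str
    right_vn: str

@dataclass
class ServicePolicy:
    name: str
    source_vn: str
    dest_vn: str
    services: List[ServiceInstance] = field(default_factory=list)

# Assemble a simple left-to-right firewall/NAT chain between VN-A and VN-B.
template = ServiceTemplate("nat-template", "in-network-nat", "firewall",
                           "vsrx-image", ["management", "left", "right"])
instance = ServiceInstance("nat-instance", template, left_vn="VN-A", right_vn="VN-B")
policy = ServicePolicy("A-to-B", source_vn="VN-A", dest_vn="VN-B",
                       services=[instance])
```

The point of the sketch is the dependency order: the template must exist before the instance, and the instance before the policy, which mirrors the order in which you create these objects in the Contrail Web UI.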
The Result
The result of running the Heat template is shown on the slide. Virtual network HEAT_NET is created with the required subnet, gateway, and route targets.
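A template along the following lines could produce such a result. This is only a sketch: the OS::Contrail::VirtualNetwork resource type comes from the contrail-heat plugin, and the property names and values shown here are illustrative and may differ between releases.

```yaml
heat_template_version: 2013-05-23

description: Sketch of a template creating HEAT_NET with a subnet and route target

resources:
  heat_net:
    type: OS::Contrail::VirtualNetwork
    properties:
      name: HEAT_NET
      route_targets: [ "target:64512:10000" ]

  heat_net_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: HEAT_NET_subnet
      network_id: { get_resource: heat_net }
      cidr: 192.168.10.0/24
      gateway_ip: 192.168.10.1
```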
? "Fn::Split"
:
  - ","
  - Ref: service_interface_type_list
This example shows different ways lists and dictionaries can be presented in YAML.
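For comparison, the same data can be written either in block style with an explicit ("?") key, as on the slide, or in a single-line flow style; the two documents below are equivalent YAML:

```yaml
# Block style, with an explicit ("?") key:
? "Fn::Split"
:
  - ","
  - Ref: service_interface_type_list
---
# Equivalent flow style on a single line:
{ "Fn::Split": [ ",", { Ref: service_interface_type_list } ] }
```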
The Result
The results of running the template, as seen in the Contrail GUI, are shown on the slide. You can see that
service_instance_A_B and policy_A_B have been created, the correct service template is used, and the policy is
applied to the VNs.
Many more examples of using Heat with Contrail can be found in the contrail-heat repository at the following URL:
https://github.com/Juniper/contrail-heat.
We Discussed:
• Service chaining within Contrail;
• Configuration of service chains;
• Configuration of Source NAT; and
• Using Heat templates.
Review Questions
We Will Discuss:
• How to work with the Monitor workspace;
• How to analyze live traffic using service instances and the Debug workspace;
• How to run flow queries and examine system logs;
• How to use Underlay Overlay mapping; and
• How to work with Contrail Analytics API.
Monitoring vRouters
By navigating to the Monitor > Infrastructure > Virtual Routers workspace, you are taken to the vRouters
summary screen, where you see the familiar bubble chart. Remember that with the vRouter
bubbles, blue means the vRouter is working, but it has no associated instances. A green bubble means the vRouter is
working and has associated instances. A red bubble means that the vRouter is down and unreachable.
An explanation of the fields shown in the Virtual Routers table follows.
• Host name: The name of the vRouter. Click the name of any vRouter to reveal more details.
• IP Address: The IP address of the vRouter.
• Version: The version of software installed on the system.
• Status: The current operational status of the vRouter — Up or Down.
• CPU (%): The CPU percentage currently in use by the selected vRouter.
• Memory (MB): The memory currently in use and the total memory available for this vRouter.
• Networks: The total number of networks for this vRouter.
• Instances: The total number of instances for this vRouter.
• Interfaces: The total number and status of interfaces for this vRouter.
Instances Workspace
The Monitor > Networking > Instances workspace provides information on the VM instances that are active and
managed by Contrail.
You can click on the particular instance to see more details about it.
Instance Details
The slide shows the Instance Details workspace. It includes the network topology graph; the Details tab, which shows the instance UUID, used and total memory, and CPU utilization; the Interfaces tab; the Traffic Statistics tab; and the Port Map tab.
There are two main ways to launch the analyzer VM instance, through the Monitor > Debug > Packet Capture and the
Configure > Services workspaces. Note that if you use the methods under the Monitor > Debug > Packet
Capture workspace, a VM instance that uses the predefined m1.medium instance flavor, which uses 4 GB of RAM and 40 GB of
disk space, is launched. However, if you use the method under the Configure > Services workspace, you can select
which instance flavor to use. This instance flavor selection can result in a much smaller VM instance. Alternatively, you can
manually adjust the resource usage for the m1.medium instance flavor to use fewer resources through the OpenStack Web UI.
Note that if you delete the m1.medium instance flavor in the OpenStack Web UI, you will not be able to launch an analyzer VM
through the Monitor > Debug > Packet Capture workspace.
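For example, a smaller analyzer flavor could be created with the legacy nova CLI before launching the service; the flavor name and sizes below are arbitrary examples, not values prescribed by Contrail:

```shell
# Create a hypothetical smaller flavor: 512 MB RAM, 5 GB disk, 1 vCPU.
nova flavor-create m1.analyzer auto 512 5 1

# Or inspect the existing m1.medium flavor before resizing it in the Web UI.
nova flavor-show m1.medium
```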
Continued on the next page.
Service Templates
Begin creating a service template by navigating to the Services > Service Templates workspace and clicking the
Create button.
Querying Flows
Use the Query > Flows workspace to perform rich and complex SQL-like queries on flows in the Contrail Controller. You
can use the query results for such things as gaining insight into the operation of applications in a virtual network, performing
historical analysis of flow issues, and pinpointing problem areas with flows. We discuss this workspace over the next slides.
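As a sketch of what such a query looks like on the wire, the following builds a JSON payload of the kind POSTed to the analytics API's /analytics/query endpoint. The field names follow the FlowRecordTable schema, but the VN name, time range, and operator encoding shown here are example values; check your release's analytics API documentation before relying on them.

```python
import json

# Illustrative flow query payload for POST http://<analytics-ip>:8081/analytics/query
query = {
    "table": "FlowRecordTable",
    "start_time": "now-10m",
    "end_time": "now",
    "select_fields": ["sourcevn", "sourceip", "destvn", "destip",
                      "protocol", "sum(bytes)"],
    # Match flows whose source VN equals the given fully qualified name;
    # "op": 1 denotes EQUAL in the analytics query language.
    "where": [[{"name": "sourcevn",
                "value": "default-domain:demo:VN-A",
                "op": 1}]],
}

payload = json.dumps(query)
```

The Query > Flows workspace builds payloads of this shape for you; composing them directly is useful for scripting historical analysis.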
Traffic Statistics
Click any physical link in the Physical Topology workspace to enable the Traffic Statistics tab, which shows graphs of traffic for that link as a function of time. Use the sliders in the lower graph to zoom in to an interval of interest; the graph for that time interval is shown in the upper panel at a different scale.
Search Flows
Contrail can list historical overlay flow records according to the flow parameters used for Flow Record queries, that is,
Source-VN/Source-IP, Dest-VN/Dest-IP, vRouter, Protocol/Source-Port, and Protocol/Dest-Port.
Then it can map these flows onto the Physical Nodes to see what path they took. The underlay flow parameters used for a
given overlay flow depend on the type of encapsulation being used. MPLS-over-UDP and VXLAN encapsulations try to add
entropy (for better load-balancing between paths) by varying the UDP source-port based on overlay flow parameters. Multiple
overlay flows are expected to hash onto the same underlay flow. When looking at sFlow or IPFIX information of underlay flows
to infer the path of a given overlay flow, Contrail can exploit all samples of the underlay flow, even if they were actually due to
other overlay flows.
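The entropy mechanism can be illustrated with a short sketch. The hash below is purely illustrative; the real vRouter uses its own implementation-specific hash, but the principle is the same: the underlay UDP source port is derived from the overlay flow's 5-tuple, so packets of one flow always hash to the same port while different flows spread across ports (and thus across ECMP paths).

```python
import hashlib

def underlay_sport(src_ip, dst_ip, proto, sport, dport):
    """Derive an underlay UDP source port from an overlay 5-tuple
    (illustrative stand-in for the vRouter's internal hash)."""
    key = f"{src_ip}:{dst_ip}:{proto}:{sport}:{dport}".encode()
    digest = hashlib.md5(key).digest()
    # Fold the digest into the dynamic port range 49152-65535.
    return 49152 + int.from_bytes(digest[:2], "big") % 16384

# Packets of the same overlay flow map to one underlay port; a second
# flow (different source port) generally maps elsewhere.
p1 = underlay_sport("10.0.1.3", "10.0.2.5", 6, 33000, 80)
p2 = underlay_sport("10.0.1.3", "10.0.2.5", 6, 33001, 80)
```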
Trace Flows
Click the Trace Flows tab to see a list of active flows. To see the path of a flow, click a flow in the active flows list, then
click the Trace Flow button. The path taken through the underlay by the selected flow is displayed.
Because the Trace Flows feature uses IP traceroute to determine the path between the two vRouters involved in the flow,
it has the same limitations as IP traceroute, including that Layer 2 devices in the path are not listed, and therefore do not
appear in the topology.
Analytics API
The slide highlights the topic we discuss next.
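The analytics API is a REST interface on port 8081 of the analytics node, and User Visible Entities (UVEs) are exposed under the /analytics/uves/ path. As a small illustration, a UVE query URL can be composed as follows; the host address and node name are examples:

```python
def uve_url(analytics_ip, uve_type, name="*", port=8081):
    """Build a UVE query URL for the Contrail analytics REST API,
    e.g. for all vRouters (name='*') or for one vRouter by host name."""
    return f"http://{analytics_ip}:{port}/analytics/uves/{uve_type}/{name}"

url = uve_url("10.10.10.230", "vrouters", "compute-1")
# -> http://10.10.10.230:8081/analytics/uves/vrouters/compute-1
```

Fetching such a URL with any HTTP client returns the aggregated state of that entity as JSON, which is what the Monitor workspaces render graphically.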
We Discussed:
• How to work with the Monitor workspace;
• How to analyze live traffic using service instances and the Debug workspace;
• How to run flow queries and examine system logs;
• How to use Underlay Overlay mapping; and
• How to work with Contrail Analytics API.
Review Questions
2. Records of all packets, bytes, and flows are sent to the analytics node. Also, infrastructure statistics are sent to the
analytics nodes.
3. A User Visible Entity (UVE) is defined as an object that may span multiple controller components and may require aggregation
before its information is presented.
Chapter 7: Troubleshooting
Network Automation Using Contrail Cloud
We Will Discuss:
• The use of Contrail command-line interface (CLI) commands;
• The use of Fabric scripts;
• The use of OpenStack CLI commands;
• The use of vRouter commands; and
• The use of Contrail Introspect.
Troubleshooting Checklist
The slide presents a list of the most important Contrail system health checks to perform before doing any more
detailed troubleshooting: checking for available disk space on all nodes and partitions, checking that NTP is
correctly configured and working, and running the contrail-status utility. The contrail-status command is described on
the next slide.
optional arguments:
-h, --help show this help message and exit
--analytics-api-ip ANALYTICS_API_IP
IP address of Analytics API Server (default: 127.0.0.1)
--analytics-api-port ANALYTICS_API_PORT
Port of Analytics API Server (default: 8081)
--start-time START_TIME
Logs start time (format now-10m, now-1h) (default: None)
--end-time END_TIME Logs end time (default: None)
--last LAST Logs from last time period (format 10m, 1d) (default: None)
--source SOURCE Logs from source address (default: None)
--node-type
{Invalid,Config,Control,Analytics,Compute,WebUI,Database,OpenStack,ServerMgr}
Logs from node type (default: None)
--module
{contrail-control,contrail-vrouter-agent,contrail-api,contrail-schema,contrail-analytics-api,contrail-collector,contrail-query-engine,contrail-svc-monitor,DeviceManager,contrail-dns,contrail-discovery,IfmapServer,XmppServer,contrail-analytics-nodemgr,contrail-control-nodemgr,contrail-config-nodemgr,contrail-database-nodemgr,Contrail-WebUI-Nodemgr,contrail-vrouter-nodemgr,Storage-Stats-mgr,Ipmi-Stats-mgr,contrail-snmp-collector,contrail-topology,InventoryAgent,contrail-alarm-gen,contrail-tor-agent}
Logs from module (default: None)
--instance-id INSTANCE_ID
Logs from module instance (default: None)
--category CATEGORY Logs of category (default: None)
--level LEVEL Logs of level (default: None)
--message-type MESSAGE_TYPE
Logs of message type (default: None)
--reverse Show logs in reverse chronological order (default: False)
--verbose Show internal information (default: False)
--all Show all logs (default: False)
--raw Show raw XML messages (default: False)
--object-type
{service-chain,database-node,routing-instance,xmpp-connection,analytics-query,virtual-machine-interface,config-user,analytics-query-id,None,logical-interface,xmpp-peer,generator,virtual-network,analytics-node,prouter,bgp-peer,config,dns-node,control-node,physical-interface,None,virtual-machine,vrouter,None,None,service-instance,config-node}
Logs of object type (default: None)
--object-values Display list of object names (default: False)
--object-id OBJECT_ID
Logs of object name (default: None)
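Building on the options listed above, here are a couple of illustrative contrail-logs invocations; the module, object name, and time ranges are example values:

```shell
# Show the last ten minutes of logs from the contrail-control module.
contrail-logs --last 10m --module contrail-control

# Show the last hour of logs for a specific virtual network object.
contrail-logs --last 1h --object-type virtual-network \
    --object-id default-domain:demo:VN-A
```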
Continued on the next page.
Managing Services
All Contrail services are managed by the process supervisord, which is open source software written in Python. Each
Contrail node type, such as compute, control, and so on, has an instance of supervisord that, when running, launches
Contrail services as child processes. All supervisord instances display in contrail-status output with the prefix
supervisor. If the supervisord instance of a particular node type is not running, none of the services for that node
type are running. For more details about the open source supervisord process, see http://www.supervisord.org.
The standard Linux command, service, is used to start, stop, reset, or view the status of a given service running in
Contrail. There are several options for the service command as shown below:
• start: Start a named service.
• stop: Stop a named service.
• restart: Restart a named service.
• status: Display the status of a named service.
The example demonstrates how to restart the supervisor-webui service.
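A sketch of that example, run on the node hosting the Web UI (the service name matches the supervisor prefix shown in contrail-status output):

```shell
# Restart the Web UI supervisor, then confirm its services come back up.
service supervisor-webui restart
contrail-status
```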
Traffic on compute-1
Now you dump the traffic on physical interface eth0 of the compute-1 node using tcpdump. The encapsulated (MPLS-over-GRE)
packets are seen in the output.
The flow -l command is used to show the flows seen by the vRouter. The ping traffic has produced two flows, corresponding
to the two session directions.
Traffic on compute-2
You issue the same commands on the other compute node, compute-2, which has VN-B_VM-1 instantiated on it. The
command output is similar. Notice the MPLS label for the incoming packets, which is 17 in our case.
You can display the HTTP introspect of a Contrail daemon directly by accessing the Introspect ports, as listed in the slide.
Another way to launch the Introspect page is by browsing to a particular node page using the Contrail Web user interface (the
link is at the bottom of the page, when you navigate to a particular node in the Monitor > Infrastructure workspaces).
vRouter Introspect
The Contrail vRouter Agent Introspect on a compute node can be accessed from the Infrastructure > Virtual
Routers workspace or directly via HTTP on port 8085. Various links accessible from here can be used to see the current
operational data in the vRouter. In particular:
• agent.xml shows agent operational data: the list of interfaces, VMs, VNs, VRFs, next hops,
MPLS labels, security groups, ACLs, and mirror configurations.
• controller.xml shows the current XMPP connections with the control node (XMPP server), along with the status
and statistics for these connections.
• cpuinfo.xml gives the CPU load and memory usage on the compute node.
• ifmap_agent.xml shows the current configuration data received from IF-MAP.
• kstate.xml gives data configured in the vRouter (kernel) module.
• pkt.xml gives the current flow data in the agent.
• sandesh_trace.xml gives the module traces.
• services.xml provides stats and packet dumps for control packets like DHCP, DNS, ARP, ICMP.
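Because introspect pages return Sandesh-formatted XML, their output is easy to post-process with standard tools. The following sketch parses a hand-made sample resembling an interface listing; the sample XML and element names are invented for illustration, and real agent output is considerably richer, but it can be walked the same way:

```python
import xml.etree.ElementTree as ET

# Invented sample payload standing in for an introspect interface listing.
sample = """
<ItfResp>
  <itf_list>
    <ItfSandeshData><name>tap123</name><active>Active</active></ItfSandeshData>
    <ItfSandeshData><name>tap456</name><active>Inactive</active></ItfSandeshData>
  </itf_list>
</ItfResp>
"""

root = ET.fromstring(sample)
# Collect the interface names from every ItfSandeshData record.
names = [itf.findtext("name") for itf in root.iter("ItfSandeshData")]
```

In practice you would fetch the XML from http://<compute-ip>:8085 with any HTTP client and feed the response body to the same parser.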
Support URLs
The slide shows some helpful URLs to visit if you experience trouble with your Contrail setup.
We Discussed:
• The use of Contrail command-line interface (CLI) commands;
• The use of Fabric scripts;
• The use of OpenStack CLI commands;
• The use of vRouter commands; and
• The use of Contrail Introspect.
Review Questions
www.juniper.net
Appendix A: Installation
We Will Discuss:
• Pre-installation tasks and roles;
• Contrail installation using Fabric scripts;
• Additional Contrail settings and operations;
• Contrail high availability solutions;
• Configuring simple virtual gateway; and
• Installation process using Server Manager.
Supported Platforms
Contrail Release 2.21 is supported on the OpenStack Juno and Icehouse releases. Juno is supported on Ubuntu 14.04.2
and CentOS 7.1.
Contrail networking is supported on Red Hat RHOSP 5.0, which is supported only on OpenStack Icehouse.
Contrail Release 2.21 supports VMware vCenter 5.5. vCenter is limited to Ubuntu 14.04.2 (Linux kernel version:
3.13.0-40-generic).
Other supported platforms include:
• CentOS 6.5 (Linux kernel version: 2.6.32-358.el6.x86_64)
• CentOS 7.1 (Linux kernel version: 3.10.0-229.el7)
• Redhat 7/RHOSP 5.0 (Linux kernel version: 3.10.0-123.el7.x86_64)
• Ubuntu 12.04.04 (Ubuntu kernel version: 3.13.0-34-generic)
• Ubuntu 14.04.2 (Linux kernel version: 3.13.0-40-generic)
The current list of OpenStack releases is available at https://wiki.openstack.org/wiki/Releases
At the time of this writing, the Icehouse release is considered end-of-life (EOL).
Server Requirements
The minimum requirement for a proof-of-concept (POC) system is 3 servers, either physical or virtual machines. All
non-compute roles can be configured in each controller node. For scalability and availability reasons, it is highly
recommended to use physical servers. Each server must have a minimum of:
• 64 GB memory
• 300 GB hard drive
• 4 CPU cores
• At least one Ethernet port
For a production environment, each server must have a minimum of:
• 256 GB memory
• 500 GB hard drive
• 16 CPU cores
The following are additional minimum hardware specifications needed for implementing the Contrail storage solution:
• Two 500 GB, 7200 RPM drives in the server 4 and server 5 cluster positions (those with the compute storage
role) in the Contrail installation. This configuration provides 1 TB of clustered, replicated storage.
Continued on the next page.
Installation Software
All components necessary for installing the Contrail Controller are available as:
• An RPM file (contrail-install-packages-1.xx-xxx.el6.noarch.rpm) that can be used to install
the Contrail system on an appropriate CentOS operating system.
• A Debian file (contrail-install-packages-1.xx-xxx~xxxxxx_all.deb) that can be used to install
the Contrail system on an appropriate Ubuntu operating system.
Versions are available for each Contrail release, for the supported Linux operating systems and versions, and for the
supported versions of OpenStack.
All installation images can be downloaded from http://www.juniper.net/support/downloads/?p=contrail#sw.
The Contrail image includes the following software:
• All dependent software packages needed to support installation and operation of OpenStack and Contrail
• Contrail Controller software – all components
• OpenStack release currently in use for Contrail
As noted on the slide, Server Manager is a separate package file. Contrail Storage is a separate package as well, and the
NFS VM file must also be downloaded from the same URL to support the storage functionality.
Installation Stages
Install the stock CentOS or Ubuntu operating system image appropriate for your version of Contrail onto the server, then
install Contrail packages separately.
The following are general guidelines for installing the operating system and preparing to install Contrail.
1. Install a CentOS or Ubuntu minimal distribution as desired on all servers. Follow the published operating system
installation procedure for the selected operating system; refer to the website for the operating system.
2. After rebooting all of the servers, verify that you can log in to each of them using the root
password defined during installation.
3. After the initial installations on all servers, configure some items specific to your systems, then begin the first
part of the Contrail installation. These steps are discussed on the following slides.
Root login over SSH using a password is disabled by default in modern Ubuntu distributions. To enable it, edit the
/etc/ssh/sshd_config file and replace the line
PermitRootLogin without-password with PermitRootLogin yes
This task can be accomplished with a text editor such as vi or nano, or by using the sed utility as follows:
sudo sed -i.bak 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/
sshd_config
Finally, restart the SSH daemon using the sudo service ssh restart command.
2. Provide host strings for the nodes in the cluster. Replace the addresses shown in the example with the actual IP
addresses of the hosts in your system:
host1 = 'root@10.10.10.230'
host2 = 'root@10.10.10.231'
host3 = 'root@10.10.10.233'
host4 = 'root@10.10.10.234'
3. Define external routers (such as Juniper MX series routers) to which the Control Nodes will be peered. For
example,
ext_routers = [('mx1', '10.10.10.240'), ('mx2', '10.10.10.242')]
If there are no external routers, you need to define:
ext_routers = []
Continued on the next page.
3. Run the setup.sh script. This step will create the Contrail packages repository as well as the Fabric utilities
(located in /opt/contrail/utils) needed for provisioning:
cd /opt/contrail/contrail_packages
./setup.sh
4. Create the testbed.py file as described previously and put it into /opt/contrail/utils/fabfile/
testbeds directory.
Before the next step, and only if your installation has multiple interfaces, run setup_interface:
fab setup_interface
See Contrail documentation for more details on multiple interface support and testbed.py example
configurations.
Install Procedure
The slide presents details on the initial installation and configuration of an ESXi host as a compute node in a Contrail system
using Fabric scripts. On the right, an example testbed.py configuration (only the part relevant to ESXi) is shown.
More details on setup and performance of the solution can be found in Contrail technical documentation at www.juniper.net
and in OpenContrail Blog at http://www.opencontrail.org/integrating-vmware-esxi-with-openstack-opencontrail/.
3. Another alternative is to use the web user interface. Connect to the node’s IP address at port 8080 and select
Configure > Infrastructure > BGP Routers. The list of BGP peers is displayed. For a BGP peer,
click on the gear icon on the right hand side of the peer entry. Then click Edit. This displays the Edit BGP
Router dialog box. Scroll down the window and select Advanced Options. Configure the MD5
authentication by selecting MD5 Authentication Mode and entering the Authentication Key value.
When upgrading to Contrail Release 2.10, add the following steps if you have live migration configured.
Upgrades to Release 2.0 do not require these steps. Select the command that matches your live migration
configuration:
fab setup_nfs_livem
or
fab setup_nfs_livem_global
High Availability
The slide highlights the topic we discuss next.
HA Deployment Example
This slide presents an example of how OpenStack and Contrail can be deployed on the same set of HA servers.
OpenStack and Contrail services can be deployed in the same set of highly available nodes by setting the internal_vip
parameter in the env.ha dictionary of the testbed.py. Because the highly available nodes are shared by both
OpenStack and Contrail services, it is sufficient to specify only internal_vip. However, if the nodes have multiple
interfaces with management and data control traffic separated by provisioning multiple interfaces, then the
external_vip also needs to be set in the testbed.py. For example:
env.ha = {
    'internal_vip' : 'an-ip-in-control-data-network',
    'external_vip' : 'an-ip-in-management-network',
}
Below are some details on the software modules used in the setup:
• Apache ZooKeeper is a centralized service for maintaining configuration information, naming, providing
distributed synchronization, and providing group services.
• RabbitMQ is an open source message broker software (sometimes called message-oriented middleware) that
implements the Advanced Message Queuing Protocol (AMQP).
• Galera Cluster for MySQL is a multimaster cluster based on synchronous replication.
Here, the configured domain is englab.juniper.net and the hostname is server-manager. You can check
the host and domain name with the hostname -f command.
• Copy the Server Manager package to the local host.
• Ensure Internet access is present during installation, as it is required to get dependent packages.
Continued on the next page.
• The Server Manager service does not start automatically upon successful installation. The user must finish the
provisioning by modifying the following templates to match the actual environment. Refer to the Contrail
documentation for details on configuring these files:
/etc/cobbler/dhcp.template
/etc/cobbler/named.template
/etc/bind/named.conf.options
/etc/cobbler/settings
/etc/cobbler/modules.conf
/etc/sendmail.cf
Clusters Workspace
The server manager user interface can be accessed using:
http://<server-manager-user-interface-ip>:8080
From the Contrail user interface, select Setting > Server Manager to access the Server Manager home page. From
this page you can manage server manager settings for clusters, servers, images, and packages.
From the Clusters page, which appears by selecting Setting > Server Manager > Clusters, click the plus icon in the
upper right of the page. The Add Cluster window appears, where you can add a new cluster ID and the domain
email address of the cluster.
OS Images Workspace
To add a new OS image, on the OS Images Workspace page, click the plus (+) icon in the upper right header to open the
Add OS Image dialog box. Enter the information for the new image and click Save to add the new item to the list of
configured items.
Packages Workspace
To add a new package, on the Packages Workspace page, click the plus (+) icon in the upper right header to open the Add
Package dialog box. Enter the information for the new package and click Save to add the new item to the list of configured
items.
Servers Workspace
Click on the Servers link in the left sidebar at Setting > Server Manager to view a list of all servers.
To add a new server, from Setting > Server Manager > Servers, click the plus (+) icon at the upper right side in
the header line. The Add Server window pops up.
We Discussed:
• Pre-installation tasks and roles;
• Contrail installation using Fabric scripts;
• Additional settings and operations;
• Contrail high availability solutions;
• Configuring simple virtual gateway; and
• Installation process using Server Manager.
Review Questions