LF Networking Whitepaper: Presented by The LFN Technical Advisory Council (TAC)
This document does not intend to prescribe the "right" solution for building a network.
There is more than one way of doing that and it all depends on the network designer’s
preferences and available resources. Instead, we try to introduce the capabilities
of each LFN project and suggest potential ways they can be used in harmony. One
of the goals of this whitepaper is to solicit engagement from potential users and
contributors to the LFN projects. You are strongly encouraged to share your insights
and thoughts with the LFN community on this document as well as on any of the
projects themselves. The LFN Technical Advisory Council (TAC) mailing list is one
place to start such engagement. Please see the details in chapter 5.
Background
Just over two decades ago, the network was mainly a fixed voice network in widespread
use in mature markets but with limited reach in emerging economies. Cellular
infrastructure and the internet were only just starting to appear. Each regional
network was built and run by a Communications Services Provider (CSP) who would
acquire the underlying proprietary technology from Network Equipment Providers
(NEPs) and charge subscribers to use the network. The resulting networks were
largely homogeneous, with most of the equipment typically coming from a single vendor.
In this traditional model, the technology and product roadmap of the CSP was
the technology and product roadmap of their NEP which was driven by jointly
developed standards. Standards-led product development led to decentralized
yet globally compatible service offerings, enabling worldwide roaming and an
unprecedented level of compatibility over defined reference points and across
many vendors. Development costs for the NEPs were high, ultimately resulting in
Situation
Move forward a short twenty years or so and the industry has transformed. Mobile
and internet are booming worldwide. Traffic has moved from circuit-switched voice
to packet-switched data. The network has far greater reach: hundreds of millions
of people in mature and emerging economies worldwide now stay connected to
the network to regularly access valued consumer services such as streaming, and
business services such as video conferencing. Capacity has significantly increased
and demand continues to grow as more devices connect to the network and
services consume more bandwidth. Markets today are far more competitive and
communications services are increasingly commoditized. As consumers, we pay less
and get more. The network itself has become the foundation for the new, global
digital economy of the 21st century.
Despite these advances, if we scratch the surface of the industry a little, we see
that business models and ways of cooperating around technology remain largely
unchanged from twenty or even one hundred years ago.
The industry challenge is that the traditional networks that are the foundation of
the CSP business can, in fact, be slowing the business. With consumers paying less
to get more each year, the CSP must continuously create new services and provide
more bandwidth at a lower cost each year just to remain viable as a business. The
underlying network technologies and closed supplier ecosystems prevent the CSP
from leveraging the open market to introduce new capabilities to reduce costs or
innovate to create a new service. The tipping point has already been reached in
highly competitive markets such as India, where CSPs are disappearing from the
market or are merging but still losing customers to competition. Of the once
flourishing NEP ecosystem, fewer than a handful of vendors remain today. Despite
the network itself becoming the foundation for the new global digital economy, the
industry that provides the network is facing significant challenges.
“Standards and open source, better together” means that open source software
can accelerate and simplify the process as the open source implementation of
the standards provides an immediate feedback loop to standards creation, as well as a
reference implementation for equipment providers and operators.
LF Networking Arrives
In recognizing both the importance of communications to the emerging global digital
economy and to improving lives of people everywhere, and the challenges facing the
communications industry, the Linux Foundation established LF Networking (LFN) as
the umbrella organization to provide platforms and building blocks for network
infrastructure and services across service providers, cloud providers, enterprises, vendors,
and system integrators that enable rapid interoperability, deployment, and adoption.
LFN increases the availability and adoption of quality open source software to
reduce the cost of building and managing networks, thus giving CSPs, cloud
providers, enterprises and others the means to:
• reduce capital and operational costs, for example by increasing the number
of functions that can be remotely deployed and maintained, through
automating operations and through increased use of commodity hardware
The value of open source is not lost on the NEPs, many of whom use Linux as the
operating system for their network equipment. Increased adoption of open source
in other areas of their products will help NEPs improve quality and output while
reducing development and maintenance costs.
China Mobile, AT&T, and Rakuten are examples of organizations using open source.
US military research agency DARPA has stated its intention of establishing an open
source program for 5G and the US Congress is legislating to provide funding. It
is expected that open source will significantly displace proprietary systems from
networks in the coming years, and LFN has a significant contribution to make.
As this transition proceeds, in coming years when you scratch the surface of the
network, you will see a markedly transformed network underpinning the modern,
digital economy.
The traditional CSP has a relatively simple business model, a long network
construction and service introduction cycle, and business operations organized
around user access and around basic network planning, construction, and
maintenance within each physical network technology domain. The result is a
partially standardized Communication Technology (CT) industry chain, with highly
standardized network functions (NFs) but highly customized operation and
maintenance management.
To break open the closed business and equipment R&D ecosystem of the
communications industry, the industry's leading CSPs joined forces with vendors to
create LF Networking (LFN for short), a vehicle to unite industry forces such as
standards and open source in the hope of building a truly open next-generation
network innovation ecosystem.
Internal Landscape
LFN projects address the touch points mentioned above and offer functionality
related to the different layers required for building a modern network.
It’s worth looking at the projects within the LFN umbrella in the context of the
network itself.
This starts with the Transport Layer (also referred to as the ‘Datapath’), where user data
is moved from one point to another and speed and reliability are key. The FD.io project
focuses on fast packet processing, with the promise to move data up to one hundred
times faster. FD.io's work applies to multiple layers of the network, including Layer 2
Data Link, Layer 3 Network, Layer 4 Transport, Layer 5 Session, and Layer 7 Application.
The next layer is the Network Operating System (NOS), where the essential software
components required for building a network device are integrated and packaged
together. OpenSwitch (OPX) is a NOS that abstracts the complexity and hardware
implementation details of network devices, and exposes a unified interface towards
the higher network layers.
The Network control layer is where end-to-end complex network services are
designed and executed. It relies heavily on network modeling that allows network
designers to create the desired services. ONAP, OpenDaylight, and Tungsten Fabric
take network service definitions as input, break them into their more basic building
blocks, and then interface with the lower layers of the network to instantiate
and control the service components. The network control layer also provides the
interface to Operational and Business Support Systems (OSS/BSS) where ONAP
provides the management and orchestration functions that ensure OSS/BSS can
manage modern dynamic networks.
The top layer of network functionality includes the components which provide
visibility into the state of the network as well as automated network management.
The OPNFV project and the Common NFVi Telco Taskforce (CNTT) initiative focus on
the integration of the different layers and provide tools and reference architectures
for building networks. In addition, OPNFV provides a verification program for
network infrastructure and virtual network functions to ensure that the different
components of the network are fully compatible with each other and provide the
expected functionality and performance.
3.1 FD.IO
FD.io (Fast Data – Input/Output) is a collection of several projects that support flexible,
programmable packet processing services on a generic hardware platform. FD.io
offers a home for multiple projects fostering innovations in software-based packet
processing towards the creation of high-throughput, low-latency and resource-
efficient IO services suitable to many architectures (x86, ARM, and PowerPC) and
deployment environments (bare metal, VM, container). FD.io provides “universal”
dataplane functionality and acceleration at scale, within the LFN ecosystem.
The core component is the highly modular Vector Packet Processing (VPP) library
(details below) which allows new graph nodes to be easily “plugged in” without
changes to the underlying codebase. This gives developers the potential to easily
build any number of packet processing solutions.
• When the code path length exceeds the size of the microprocessor's
instruction cache (I-cache), thrashing occurs as the microprocessor is
continually loading new instructions. In this model, each packet incurs an
identical set of I-cache misses.
• The associated deep call stack will also add load-store-unit pressure as stack-
locals fall out of the microprocessor’s Layer 1 Data Cache (D-cache).
FD.io VPP addresses the inefficiencies associated with the deep call stack by
receiving vectors of up to 256 packets at a time from the network interface and
processing them using a directed graph of nodes. The graph scheduler invokes one
node dispatch function at a time, restricting stack depth to a few stack frames.
This approach also enables further optimizations such as pipelining and prefetching,
which minimize read latency on table data and parallelize the packet loads needed
to process packets.
At runtime, the FD.io VPP platform assembles a vector of packets from RX rings,
typically up to 256 packets in a single vector. The packet processing graph is then
applied, node by node (including plugins) to the entire packet vector. The received
packets typically traverse the packet processing graph nodes in the vector, with the
network processing represented by each graph node applied to each packet in
turn. Graph nodes are small, modular, and loosely coupled. This makes it easy
to introduce new graph nodes and rewire existing graph nodes.
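To make the dispatch model concrete, the minimal sketch below mimics the idea of applying a graph of small dispatch functions to an entire vector of packets, node by node. It is written in Python purely for illustration; real VPP graph nodes are written in C against the VPP plugin APIs, and the packet fields and classification logic here are invented.

```python
# Conceptual sketch of VPP-style vector dispatch (illustration only, not the real VPP API).
# Instead of pushing each packet through a deep call stack, a vector of packets is
# passed through a directed graph of small dispatch functions, one node at a time.
from collections import defaultdict

VECTOR_SIZE = 256  # VPP typically assembles up to 256 packets per vector


def ethernet_input(packets):
    """First graph node: classify each packet to its next node (hypothetical logic)."""
    next_nodes = defaultdict(list)
    for pkt in packets:
        next_nodes["ip4-input" if pkt["ethertype"] == 0x0800 else "error-drop"].append(pkt)
    return next_nodes


def ip4_input(packets):
    """Second graph node: validate headers, then hand the whole vector on."""
    return {"ip4-lookup": [p for p in packets if p["ttl"] > 0]}


GRAPH = {"ethernet-input": ethernet_input, "ip4-input": ip4_input}


def run_graph(rx_ring):
    """Apply the graph node by node to the entire vector, mirroring VPP's scheduler."""
    pending = {"ethernet-input": rx_ring[:VECTOR_SIZE]}
    while pending:
        node, vector = pending.popitem()
        if node not in GRAPH:  # terminal nodes (lookup, drop, tx) are not modeled here
            print(f"{node}: {len(vector)} packets")
            continue
        for next_node, pkts in GRAPH[node](vector).items():
            pending.setdefault(next_node, []).extend(pkts)


run_graph([{"ethertype": 0x0800, "ttl": 64}] * 300)
```

The point of the sketch is the scheduling shape: each dispatch function touches the whole vector before the next one runs, which is what keeps the instruction cache warm in the real implementation.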
IPSec
As of VPP release 20.01, IPSec can be processed in a single solution
instance (whether appliance, VM, or cloud instance) across multiple cores. Running
multi-core is safe because the Security Associations (SAs) are bound to the core on
which they were first seen. Learn more here.
Plugins
Plugins are shared libraries that FD.io VPP loads at runtime. VPP finds plugins
by searching the plugin path for libraries and then dynamically loads each one in turn.
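A minimal sketch of that plugin-loading pattern is shown below, in Python for illustration only; VPP itself does this in C with its own plugin registration machinery, the directory path shown is an assumption that varies by installation, and real VPP plugins expose VPP-specific entry points rather than being usable from Python.

```python
# Illustration of the "scan a plugin path and dynamically load each shared library"
# pattern described above (not VPP's actual loader).
import ctypes
import glob
import os

PLUGIN_PATH = "/usr/lib/x86_64-linux-gnu/vpp_plugins"  # assumed location; varies by install


def load_plugins(path: str):
    """Load every shared library found in the plugin path."""
    handles = {}
    for lib in sorted(glob.glob(os.path.join(path, "*.so"))):
        try:
            handles[os.path.basename(lib)] = ctypes.CDLL(lib)
            print(f"loaded {lib}")
        except OSError as err:
            print(f"skipping {lib}: {err}")
    return handles


if __name__ == "__main__":
    load_plugins(PLUGIN_PATH)
```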
Features
Most FD.io VPP features are written as plugins. The features include everything from
layer 2 switching to a TCP/IP host stack. For a complete list of features please visit
FD.io VPP features.
Drivers
FD.io VPP supports and has tested most DPDK drivers (some have not been
completely tested). FD.io VPP also has some native drivers, most notably VMXNET3
(ESXi), AVF (Intel), vhost-user (QEMU), virtio, tapv2, host-interface, and Mellanox.
Use Cases
ROUTERS, UNIVERSAL CPE ETC.
FD.io VPP supports entry hardware options from a number of hardware vendors for
building Customer Premise Equipment devices. FD.io VPP based commercial options
are available from vendors such as Netgate with TNSR, Cisco with the ASR 9000,
Carrier Grade Services Engine and many more.
These implementations are accelerated with DPDK Cryptodev for whole platform crypto.
LOAD BALANCER
FD.io VPP has a rich set of plugins to enhance its capabilities. Cloud load balancing
is just one of a number of feature-enhancing plugins available to the end user.
Examples include a Google Maglev implementation, consistent hashing, stateful and
stateless load balancing, and kube-proxy integration.
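To give a flavor of the underlying technique, the sketch below shows a bare-bones consistent-hash ring in Python. It is a generic illustration of consistent hashing only, not the Maglev algorithm or the actual FD.io VPP load-balancer plugin, and the backend names are made up.

```python
# Minimal consistent-hash ring (generic illustration, not VPP's load-balancer plugin).
import bisect
import hashlib


def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class ConsistentHashRing:
    def __init__(self, backends, vnodes=100):
        # Place several virtual nodes per backend to smooth the distribution.
        self._ring = sorted(
            (_hash(f"{b}#{i}"), b) for b in backends for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    def backend_for(self, flow: str) -> str:
        """Pick the first backend clockwise from the flow's hash."""
        idx = bisect.bisect(self._keys, _hash(flow)) % len(self._ring)
        return self._ring[idx][1]


ring = ConsistentHashRing(["vnf-a", "vnf-b", "vnf-c"])   # hypothetical backends
print(ring.backend_for("10.0.0.1:443->192.0.2.7:8080"))  # same flow always maps the same way
```

The practical property this buys a load balancer is that adding or removing a backend remaps only a small fraction of flows, which is why consistent hashing appears in this plugin family.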
INTRUSION PREVENTION
FD.io VPP has four different Access Control List technologies, ranging from
simple IP-address whitelisting (called COP) to the sophisticated FD.io VPP Classifiers.
More Information
For more information on FD.io VPP please visit FD.io VPP.
• Work with both user space and kernel space network stacks
Use and engage or adopt a new protocol stack dynamically as applicable. For more
information on DMM please visit the DMM wiki page.
SWEETCOMB
Sweetcomb is a management agent that runs on the same host as a VPP instance,
and exposes YANG models via NETCONF, RESTCONF, and gNMI to allow the management
of VPP instances. Sweetcomb works as a plugin (ELF shared library) for the sysrepo
datastore. For more information on Sweetcomb please see the Sweetcomb wiki page.
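For illustration, a management client could retrieve the running configuration of a Sweetcomb-managed VPP instance over NETCONF roughly as follows. This is a hedged sketch: it assumes the agent is listening on the standard NETCONF-over-SSH port 830, the host address and credentials are placeholders, and the YANG models actually advertised depend on what has been installed into sysrepo.

```python
# Sketch of a NETCONF client pulling config from a Sweetcomb-managed VPP host
# (host, credentials, and available YANG models are assumptions).
from ncclient import manager

with manager.connect(
    host="192.0.2.10",        # placeholder address of the host running VPP + Sweetcomb
    port=830,                 # standard NETCONF-over-SSH port
    username="admin",         # placeholder credentials
    password="admin",
    hostkey_verify=False,
) as m:
    # List any VPP-related YANG models the agent advertises, then fetch the running datastore.
    for cap in m.server_capabilities:
        if "vpp" in cap.lower():
            print(cap)
    print(m.get_config(source="running").data_xml)
```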
Scope of ONAP
ONAP enables end user organizations and their network or cloud providers to
collaboratively instantiate network elements and services in a dynamic, closed
control loop process, with real-time response to actionable events.
In order to design, deploy and operate services and assure these dynamic services,
ONAP activities are built up as follows:
1. Planning VNF onboarding – checking which VNFs will be necessary for the
required environment and features
3. Distributing services:
Service Operations
1. Closed Loop design and deployment
Use Cases
As part of each release, the ONAP community also defines blueprints for key use
cases, which the user community expects to pursue immediately. Testing these
blueprints with a variety of open source and commercial network elements during
the development process provides the ONAP platform developers with real-time feedback.
Benefits of ONAP
Open Network Automation Platform provides the following benefits:
• the model-driven approach enables ONAP to support services that use
different VNFs as a common service block
ONAP Releases
ONAP is enhanced with numerous features from release to release. Each release is
named after a global city. A list of past and current releases may be found here.
• breaks down the use-case into simple operations and functions required
Note: Ideally VNFs will be open source; however, proprietary VNFs may also be used
as needed.
The project will develop test suites that cover detailed functional test cases, test
methodologies and platform configurations which will be documented and maintained
in a repository for use by other OPNFV testing projects and the community in general.
Developing test suites will also help lay the foundation for a test automation
framework that in future can be used by the continuous integration (CI) project
(Octopus). Certain VNF deployment use cases could be automatically tested as an
optional step of the CI process. The project targets testing of the OPNFV platform in
a hosted test-bed environment (i.e. using the OPNFV community test labs worldwide).
Many test projects are integrated into a single, lightweight framework for automation
(x-testing) that leverages the OPNFV test-api and testdb frameworks for publishing results.
Lab as a Service (LaaS): LaaS is a “bare-metal cloud” hosting resource for the LFN
community. This comprises compute and network resources that are installed and
configured on demand for the developers through an online web portal. The highly
configurable nature of LaaS means that users can reserve a Pharos compliant
or CNTT compliant POD. Resources are booked and scheduled in blocks of time,
ensuring individual projects and users do not monopolize resources. By providing a
lab environment to developers, LaaS enables more testing, faster development, and
better collaboration between LFN projects.
Note: OPNFV Feature projects are working towards closing feature gaps in
upstream open source communities providing the components for building full NFVI
stacks, and OPNFV Deployment tools include Airship and Fuel / MCP.
OpenDaylight Architecture
MODEL-DRIVEN
The core of the OpenDaylight platform is the Model-Driven Service Abstraction Layer
(MD-SAL). In OpenDaylight, underlying network devices and network applications
are all represented as objects, or models, whose interactions are processed within
the SAL.
The SAL is a data exchange and adaptation mechanism between YANG models
representing network devices and applications. The YANG models provide
generalized descriptions of a device or application’s capabilities without requiring
either to know the specific implementation details of the other. Within the SAL,
producers and consumers are matched through its data stores and exchange
information. A consumer can find a provider that it's interested in. A producer can
generate notifications; a consumer can receive notifications and issue RPCs to get
data from providers. A producer can insert data into SAL’s storage; a consumer can
read data from SAL’s storage. A producer implements an API and provides the API’s
data; a consumer uses the API and consumes the API’s data.
Each of these components is isolated as a Karaf feature, to ensure that new work
doesn’t interfere with mature, tested code. OpenDaylight uses OSGi and Maven to
build a package that manages these Karaf features and their interactions.
Use Cases
The OpenDaylight platform (ODL) provides a flexible common platform
underpinning a wide variety of applications and use cases. Some of the most
common use cases are mentioned here.
ONAP
Leveraging the common code base provided by Common Controller Software
Development Kit (CCSDK), ONAP provides two application level configuration and
lifecycle management controller modules called ONAP SDN-C and ONAP App-C.
These controllers manage the state of a single Resource (Network or Application).
Both provide similar services (application level configuration using NetConf, Chef,
Ansible, RestConf, etc.) and life cycle management functions (e.g. stop, resume,
health check, etc.). ONAP SDN-C has been used mainly for Layer 1-3 network
elements, while ONAP App-C is used for Layer 4-7 network functions. The
ONAP SDN-C and ONAP App-C components are both extended from the OpenDaylight
controller framework.
The components used to provide network virtualization are shown in the diagram below:
NETWORK ABSTRACTION
OpenDaylight can expose a Network Services API to northbound applications for
network automation in a multi-vendor network.
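As a simple illustration of that northbound interface, an application can read the controller's view of the network over RESTCONF. The sketch below assumes a local controller with the default RESTCONF port (8181) and default test credentials, and uses the pre-RFC 8040 operational topology URL; the exact URL and the data returned depend on the OpenDaylight version and the features installed.

```python
# Sketch of a northbound application reading network topology from OpenDaylight
# over RESTCONF (controller address, credentials, and URL style are assumptions).
import requests

ODL = "http://127.0.0.1:8181"   # assumed controller address and RESTCONF port
AUTH = ("admin", "admin")       # default credentials in many test setups

resp = requests.get(
    f"{ODL}/restconf/operational/network-topology:network-topology",
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

# Print a one-line summary of each topology the controller knows about.
for topo in resp.json()["network-topology"]["topology"]:
    nodes = topo.get("node", [])
    print(f"topology {topo['topology-id']}: {len(nodes)} node(s)")
```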
These are just a few of the common use cases for OpenDaylight. The platform can be,
and continues to be, tailored to several other industry use cases.
• ONIE installer
• Puppet
In addition, a set of OPX specific commands are available and can be invoked from a
Linux shell (e.g. display the current software version, hardware inventory etc.).
OPX Architecture
OPX BASE
The key components of OPX Base are:
• NAS manages the middleware that associates physical ports with Linux interfaces
and adapts Linux native API calls (e.g. netlink) to the switching ASIC
OPX APPLICATIONS
A variety of open source or vendor specific applications have been tested and can
be deployed with OPX:
• FRR - BGP
• NetSNMP
• Puppet
• Chef
It should be noted that these applications are not pre-installed with OPX. In a
"disaggregated" model, users select which applications to install based on the
requirements of a given network deployment.
In general, since OPX is based on the Debian Linux distribution with an unmodified kernel,
any Debian binary application can be installed and executed on OpenSwitch devices.
Hardware Simulation
OPX software supports hardware virtualization (or simulation). Software simulation
of basic hardware functionality is provided through simulation-specific SAI and SDI
components, so the higher-layer software functionality can be developed and
tested on generic PC/server hardware. OPX hardware simulation can be executed
under VirtualBox, GNS3/QEMU, etc.
The PNDA project aims to deliver a fully cloud native PNDA data platform on
Kubernetes. The current focus has been migrating to a containerized and Helm-
orchestrated set of components, which has simplified PNDA development and
deployment as well as lowered project maintenance costs. The goal of the Cloud-
native PNDA project is to deliver the PNDA big data experience on Kubernetes in the
first half of 2020.
SNAS extracts data from BGP routers using a BGP Monitoring Protocol (BMP)
interface. The data is parsed and made available to consumers through a Kafka
message bus. Consumer applications in turn can perform further analytics and
visualization of the topology data.
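A consumer application might subscribe to one of the parsed message topics as in the sketch below. The Kafka broker address is a placeholder, the topic name follows the OpenBMP/SNAS naming convention and should be treated as an assumption, and the message payload format varies by SNAS version.

```python
# Sketch of a SNAS consumer reading parsed BGP data from Kafka
# (broker address is a placeholder; topic naming follows the OpenBMP/SNAS convention).
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "openbmp.parsed.unicast_prefix",      # parsed prefix updates from the collector (assumed topic)
    bootstrap_servers="192.0.2.20:9092",  # placeholder Kafka broker
    group_id="topology-analytics",
    auto_offset_reset="latest",
)

for msg in consumer:
    # Each message carries parsed BMP/BGP data; downstream analytics or
    # visualization would decode and aggregate it here.
    print(msg.topic, msg.partition, msg.offset, len(msg.value), "bytes")
```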
Tungsten Fabric enables usage of the same controller and forwarding components
for every deployment, providing a consistent interface for managing connectivity
in all the environments it supports. It is able to provide seamless connectivity
between workloads managed by different orchestrators, whether virtual machines
or containers, and to destinations in external networks.
ARCHITECTURE OVERVIEW
The Tungsten Fabric controller integrates with cloud management systems such as
OpenStack or Kubernetes. Its function is to ensure that when a virtual machine (VM)
or container is created, it is provided with network connectivity according to the
network and security policies specified in the controller or orchestrator.
• Tungsten Fabric vRouter – installed in each host that runs workloads (virtual
machines or containers), the vRouter performs packet forwarding and
enforces network and security policies
Key Features
Tungsten Fabric manages and implements virtual networking in cloud environments
using OpenStack and Kubernetes orchestrators, where it uses overlay networks
between vRouters that run on each host. It is built on proven, standards-based
networking technologies that today support the wide-area networks of the world’s
major service providers, but repurposed to work with virtualized workloads and
cloud automation in data centers that can range from large scale enterprise data
centers to much smaller telco POPs. It provides many enhanced features over the
native networking implementations of orchestrators, including:
This section aims to present an end-to-end use case example where the LFN
projects work in harmony to deliver a "service" that includes VNFs, connectivity and
analytics-powered assurance as shown in the following picture:
[Figure: an end-to-end service spanning multiple hosts, with VNFs (VNF1, VNF2, meeting the VES VNF requirements) connected over an overlay network on top of the physical underlay network, network connectivity acceleration, external network connectivity, CI/CD verification and certification, and real-time analytics, rolled out in Phases 0-2.]
Using the 8 LFN projects, an end user (e.g., a carrier) can realize the above as follows:
Several LFN projects may be used as infrastructure building blocks for addressing
the needs of network functions, such as high throughput/low latency networking:
• OpenDaylight and Tungsten Fabric can be used as 3rd party SDN solutions
to provide network connectivity.
• OpenSwitch (OPX) can be used to configure the physical (underlay) network
that connects the physical hosts used to deploy OpenStack. The network
topology may follow a leaf-and-spine design, as recommended in the physical
infrastructure requirements of the CNTT Reference Architecture.
• FD.io provides data plane network acceleration through its Vector Packet
Processor (VPP).
• An NFV vendor pre-validates and certifies a couple of VNFs (i.e., VNF1 and
VNF2) through the OPNFV Verification Program (OVP).
• The NFV vendor ensures that the VNF complies with the ONAP VNF
requirements. This will enable ONAP to properly control the lifecycle of the
VNF as part of a network service.
At runtime, ONAP orchestrates the deployment of the whole service either through
ONAP internal functions/components or leveraging the capability to interwork with
3rd party components.
In particular, ONAP Service Orchestrator (ONAP SO) instructs the underlying ONAP
functions in order to deploy all of the elements that compose the end-to-end service.
ONAP deploys the VNFs in the available NFVI and the overlay network connecting
them using ONAP SDN-C. SDN-C uses its OpenDaylight-based architecture to
model and deploy the L1-L3 network. Next, the ONAP App-C is used to configure
the network functions and their L4-L7 functionality. This is also done leveraging the
OpenDaylight architecture.
OpenDaylight may be used to stitch together the physical switch fabric of the
infrastructure with the virtual networking in the NFVI (e.g. OpenStack Neutron).
Through the OpenDaylight northbound interface, ONAP SDN-C is able to instruct
the OpenDaylight SDN controller for underlay network management. The
southbound interfaces (e.g. NETCONF) support interactions with OpenSwitch
running on the leaf and spine fabric switches in the NFVI.
The LFN community is eager to learn about new use cases that might stem from
reading this document. We encourage readers who come up with ideas for
use cases to share them with the community using the LFN Technical Advisory
Council (TAC) mailing list at:
https://lists.lfnetworking.org/g/lfn-tac
The best way to learn about an open source community is to participate and
contribute. Learn about getting started on the LFN website. For further reading,
please refer to the Wikis and documentation links below:
LF Networking
Wiki: https://wiki.lfnetworking.org/
ONAP
Wiki: https://wiki.onap.org/
Docs: https://docs.onap.org/
FD.io
Wiki: https://wiki.fd.io/view/Main_Page
Docs: https://fd.io/documentation/
OpenDaylight
Docs: https://docs.opendaylight.org
OPNFV/CNTT
OPNFV Wiki: https://wiki.opnfv.org/
OpenSwitch (OPX)
Wiki / Docs: https://github.com/open-switch/opx-docs/wiki
PNDA
Wiki: https://wiki.pnda.io/
Docs: http://pnda.io/guide
SNAS
Docs: https://www.snas.io/docs
Tungsten Fabric
Wiki: https://wiki.tungsten.io/
Docs: https://tungstenfabric.github.io/website/
Credits
Al Morton, AT&T