CORE: A REAL-TIME NETWORK EMULATOR

Jeff Ahrenholz, Claudiu Danilov, Thomas R. Henderson, Jae H. Kim


Boeing Phantom Works
P.O. Box 3707, MC 7L-49, Seattle, WA 98124-2207
{jeffrey.m.ahrenholz; claudiu.b.danilov; thomas.r.henderson; jae.h.kim}@boeing.com

ABSTRACT

We present CORE (Common Open Research Emulator), a real-time network emulator that allows rapid instantiation of hybrid topologies composed of both real hardware and virtual network nodes. CORE uses FreeBSD network stack virtualization to extend physical networks for planning, testing and development, without the need for expensive hardware deployments.

We evaluate CORE in wired and wireless settings, and compare performance results with those obtained on physical network deployments. We show that CORE scales to network topologies consisting of over a hundred virtual nodes emulated on a typical server computer, sending and receiving traffic totaling over 300,000 packets per second. We demonstrate the practical usability of CORE in a hybrid wired-wireless scenario composed of both physical and emulated nodes, carrying live audio and video streams.

Keywords: Network emulation, virtualization, routing, wireless, MANET

1. INTRODUCTION

The Common Open Research Emulator, or CORE, is a framework for emulating networks on one or more PCs. CORE emulates routers, PCs, and other hosts and simulates the network links between them. Because it is a live-running emulation, these networks can be connected in real time to physical networks and routers. The acronym stems from the initial use of this emulator to study open source routing protocols, but as we describe below, we've extended the capabilities of CORE to support wireless networks.

CORE is based on the open source Integrated Multi-protocol Network Emulator/Simulator (IMUNES) from the University of Zagreb [1]. IMUNES provides a patch to the FreeBSD 4.11 or 7.0 operating system kernel to allow multiple, lightweight virtual network stack instances [2][3][4]. These virtual stacks are interconnected using FreeBSD's Netgraph kernel subsystem. The emulation is controlled by an easy-to-use Tcl/Tk GUI. CORE forked from IMUNES in 2004. Certain pieces were contributed back in 2006, and the entire system will soon be released under an open source license. In addition to the IMUNES basic network emulation features, CORE adds support for wireless networks, mobility scripting, IPsec, distributed emulation over multiple machines, control of external Linux routers, a remote API, graphical widgets, and several other improvements. In this paper we present and evaluate some of these features that make CORE a practical tool for realistic network emulation and experimentation.

The remainder of this paper is organized as follows: we present related work in Section 2. Then we provide an overview of CORE's features in Section 3, and highlight the implementation of wireless networking in Section 4 and distributed emulation in Section 5. We then examine the performance of the CORE emulator for both wired and wireless networks in Section 6 and present a typical hybrid emulation scenario in Section 7, and end the paper with our conclusions.

2. RELATED WORK

In surveying the available software that allows users to run real applications over emulated networks, we believe that CORE stands out in the following areas: scalability, ease of use, application support, and network emulation features.

Simulation tools, such as ns-2 [6], ns-3 [7], OPNET [8], and QualNet [9], typically run on a single computer and abstract the operating system and protocols into a simulation model for producing statistical analysis of a network system. In contrast, network emulation tools, such as PlanetLab [10], NetBed [11], and MNE [12], often involve a dedicated testbed or connecting real systems under test to specialized hardware devices. CORE is a hybrid of the two types of tools, emulating the network stack of routers or hosts through virtualization, and simulating the links that connect them together. This way it can provide the realism of running live applications on an emulated network while requiring relatively inexpensive hardware.

Machine virtualization tools, such as VMware [13], Virtual PC [14], or Parallels [15], have become increasingly popular, mainly due to the availability of hardware that can drive multiple operating systems at the same time with reasonable performance. Operating system virtualization tools, such as Xen [16], UML [17], KVM [18], and OpenVZ [19], are mainly used for isolating multiple Linux server environments driven by the same hardware machine. CORE belongs to the class of paravirtualization techniques, where only part of the operating system is made virtual. In this case, only the isolation of processes and network stacks is employed, resulting in virtual machine instances that are as lightweight as possible. Machine hardware such as disks, video cards, timers, and other devices are not emulated, but shared between these nodes. This lightweight virtualization allows CORE to scale to over a hundred virtual machines running on a single emulation server.

From a network layering perspective, CORE provides high-fidelity emulation for the network layer and above, but uses a simplified simulation of the link and physical layers. The actual operating system code implements the TCP/IP network stack, and user or system applications that run in real environments can run inside the emulated machine. This is in contrast to simulation techniques, where abstract models represent the network stack, and protocols and applications need to be ported to the simulation environment.

Because CORE emulation runs in real time, real machines and network equipment can connect and interact with the virtual networks. Unlike some network emulations, CORE runs on commodity PCs.

3. CORE OVERVIEW

A complete CORE system consists of a Tcl/Tk GUI, FreeBSD 4.11 or 7.0 with a patched kernel, custom kernel modules, and a pair of user-space daemons. See Figure 2 for an overview of the different components.

3.1. CORE GUI

The graphical user interface is scripted in the Tcl/Tk language, which allows for rapid development of X11 user interfaces. The user is presented with an empty drawing canvas where nodes of various types can easily be placed and linked together. An example of a running CORE GUI is shown in Figure 1. Routers, PCs, hubs, switches, INEs (inline network encryptors) and other nodes are available directly from the GUI. Effects such as bandwidth limits, delay, loss, and packet duplication can be dynamically assigned to links. Addressing and routing protocols can be configured, and the entire setup can be saved to a text-based configuration file. A start button allows the user to enter an "Execute" mode which instantiates the topology in the kernel. Once running, the user may double-click on any node icon to get a standard Unix shell on that virtual node for invoking commands in real time. In addition, several other tools and widgets can be used to interact with and inspect the live-running emulation.

Figure 1. CORE Graphical User Interface

Figure 2. Overview of CORE Components: the Tcl/Tk GUI, the core_span and core_wlan daemons, and the CORE API reside in user space; the virtual images (vimages), the Netgraph system, and the ng_wlan module reside in the patched FreeBSD kernel, above the NIC.

3.2. Network stack virtualization

CORE uses the FreeBSD network stack virtualization provided by the VirtNet project [4], which allows multiple virtual instances of the OS network stack to run concurrently. The existing networking algorithms and code paths in FreeBSD are intact, but operate on this virtualized state. All global network variables such as counters, protocol state, socket information, etc. have their own private instance [5].

Each virtual network stack is assigned its own process space, using the FreeBSD jail mechanism, to form a lightweight virtual machine. These are named virtual images (or vimages) by the VirtNet project and are created using a new vimage command. Unlike traditional virtual machines, vimages do not feature an entire operating system running on emulated hardware. All vimages run the same kernel and share the same file system, processor, memory, clock, and other resources. Network packets can be passed between virtual images simply by reference through the in-kernel Netgraph system, without the need for a memory copy of the payload. Because of this lightweight emulation support, a single host system can accommodate numerous (over 100) vimage instances, and the maximum throughput supported by the emulation system does not depend on the size of the packet payload, as we will demonstrate in Section 6.

3.3. Network link simulation

Netgraph is a modular networking system provided by the FreeBSD kernel, and a Netgraph instantiation consists of a number of nodes arranged into graphs. Nodes can implement protocols or devices, or may process data. CORE utilizes this system at the kernel level to connect multiple vimages to each other, or to other Netgraph nodes such as hubs, switches, or RJ45 jacks connecting to the outside world. Each wired link in CORE is implemented as an underlying Netgraph pipe node. The pipe was originally introduced by IMUNES as a means to apply link effects such as bandwidth traffic shaping, delay, loss, and duplicates. One could create a link between two routers, for example, having 512 kbps bandwidth, 37 ms propagation delay, and a bit error rate of 1/1,000,000. These parameters can be adjusted on the fly, as the emulation runs. CORE modifies this pipe node slightly for implementing wireless networks and also adds a random jitter delay option.
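
As an illustration of how such link parameters can be changed programmatically, the sketch below uses FreeBSD's libnetgraph to send a control message to a pipe node. The node path and the setcfg message layout shown here are illustrative assumptions only; the modified CORE/IMUNES pipe node defines its own control interface, which may differ.

    /* Sketch: adjust link effects on a Netgraph pipe node from user space.
     * Assumes a pipe node reachable at the hypothetical path "link0_pipe:";
     * the "setcfg" message name and field names are illustrative only. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netgraph.h>
    #include <err.h>

    int
    main(void)
    {
        int cs, ds;

        /* Create a temporary Netgraph socket node for sending control messages. */
        if (NgMkSockNode(NULL, &cs, &ds) < 0)
            err(1, "NgMkSockNode");

        /* Ask the pipe node to apply 512 kbps bandwidth, 37 ms delay,
         * and a 1/1,000,000 bit error rate. */
        if (NgSendAsciiMsg(cs, "link0_pipe:",
            "setcfg { bandwidth=512000 delay=37000 ber=1000000 }") < 0)
            err(1, "NgSendAsciiMsg");

        return (0);
    }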

3.4 External connectivity

CORE provides users with an RJ45 node that directly maps to an Ethernet interface on the host machine, allowing direct connectivity between the virtual images inside a running emulation and external physical networks. Each RJ45 node is assigned to one of the Ethernet interfaces on the FreeBSD host, and CORE takes over the settings of that interface, such as its IP address, and also transfers all traffic passing through that physical port to the emulation environment. This way, the user may physically attach any network device to that port and packets will travel between the real and emulated worlds in real time.

4. WIRELESS NETWORKS

CORE provides two modes of wireless network emulation: a simple, on-off mode where links are instantiated and break abruptly based on the distance between nodes, and a more advanced model that allows for custom wireless link effects. Nodes, each corresponding to separate vimages, may be manually moved around on the GUI canvas while the emulation is running, or mobility patterns may be scripted. In the current version, CORE wireless emulation does not perform detailed layer 1 and 2 modeling of a wireless medium, such as 802.11, and does not model channel contention and interference. Instead, it focuses on realistic emulation of layers 3 and above, while relying on the adoption of external RF models by providing a standard link model API.

The implementation of the on-off wireless mode is based on the Netgraph hub node native to FreeBSD, which simply forwards all incoming packets to every node that is connected to it (Figure 3, left). We added a hash table to the Netgraph hub and created a new wlan node, where a hash of the source and destination node IDs determines connectivity between any two nodes connected to the wlan. The hash table is controlled by the position of the nodes on the CORE GUI. We represent this wlan node as a small wireless cloud on the CORE canvas. Vimage nodes can be joined to a wireless network by drawing a link between the vimage and this cloud. Nodes that are moved a certain distance away from each other fall out of range and can no longer communicate through the wlan node (Figure 3, center).

Figure 3. WLAN Kernel Module: a hub node forwards each packet to all connected nodes (left); the wlan node performs an on/off hash lookup per source-destination pair (center); the advanced mode tags each packet with per-pair link effects such as 55 ms, 37 ms, or 29 ms delays (right).

The advanced wireless model allows CORE to apply different link effects between each pair of wireless nodes (Figure 3, right). Each wireless vimage is connected to the wlan node with a pipe that is capable of applying different per-packet effects depending on the source and destination of each packet. The wlan kernel module hash table stores, in addition to node connectivity, the parameters that should be applied between each pair of nodes. A tag is added to the packet as it passes through, being read by the pipe. The pipe then applies the link effects contained in the tag instead of its globally-configured effects.
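
The C fragment below sketches the idea behind the wlan node's hash table: connectivity and per-pair link effects are looked up with a key derived from the source and destination node IDs. It is a simplified user-space illustration of the data structure, not the actual ng_wlan kernel code; the structure and function names are invented for this example.

    /* Simplified illustration of per-pair lookup in a wlan-style node.
     * Not the actual ng_wlan kernel module; names are invented. */
    #include <stdint.h>
    #include <stdbool.h>

    #define WLAN_HASH_SIZE 256

    struct pair_effects {
        bool     linked;        /* on-off connectivity between the two nodes */
        uint32_t delay_us;      /* per-pair delay, e.g. 37000 for 37 ms */
        uint32_t loss_per_mil;  /* packet loss, per thousand */
        bool     valid;
    };

    static struct pair_effects table[WLAN_HASH_SIZE];

    /* Hash the (source, destination) node ID pair into a table slot. */
    static unsigned
    pair_hash(uint16_t src, uint16_t dst)
    {
        return ((src * 31u) ^ dst) % WLAN_HASH_SIZE;
    }

    /* Called by the GUI or daemon as nodes move: record effects for a pair. */
    void
    wlan_set_effects(uint16_t src, uint16_t dst, struct pair_effects fx)
    {
        fx.valid = true;
        table[pair_hash(src, dst)] = fx;
    }

    /* Called per packet: decide whether to forward, and with which effects.
     * In the kernel module, the chosen effects are carried in a tag attached
     * to the packet and read by the downstream pipe node. */
    bool
    wlan_lookup(uint16_t src, uint16_t dst, struct pair_effects *out)
    {
        struct pair_effects fx = table[pair_hash(src, dst)];

        if (!fx.valid || !fx.linked)
            return false;       /* drop: nodes are out of range */
        *out = fx;
        return true;
    }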

To determine more complex link effects between nodes, we use a modular C daemon to perform the distance and link effects calculations, instead of using the Tcl/Tk GUI. This allows for swapping out different wireless link models, depending on the configuration. Wireless link effects models can set the statistical link parameters of bandwidth, delay, loss, duplicates, and jitter. The CORE GUI and the link effects daemon communicate through an API. When the topology is executed, the GUI sends node information to the daemon, which then calculates link effects depending on the configured model. For example, a simple link effects model available in the default CORE setup increases delay and loss as the distance between two nodes increases. Different link models can use the same API to interact with the CORE GUI for emulating various layer 1 and 2 wireless settings.
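
As a concrete illustration of a distance-based model of this kind, the sketch below computes delay and loss that grow with the distance between two nodes and cuts the link entirely beyond a maximum range. The thresholds and formulas are invented for this example; the parameters of CORE's actual default model may differ.

    /* Illustrative distance-based link effects model; values are examples only. */
    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct link_effects {
        bool     linked;
        uint32_t delay_us;
        uint32_t loss_percent;
    };

    /* Map the distance between two node positions to link effects: beyond
     * max_range the link is down; within range, delay and loss increase
     * linearly with distance. */
    struct link_effects
    distance_model(double x1, double y1, double x2, double y2, double max_range)
    {
        struct link_effects fx = { false, 0, 0 };
        double d = hypot(x2 - x1, y2 - y1);

        if (d > max_range)
            return fx;                                   /* out of range: no link */

        fx.linked = true;
        fx.delay_us = (uint32_t)(1000.0 * d / max_range);    /* up to 1 ms */
        fx.loss_percent = (uint32_t)(20.0 * d / max_range);  /* up to 20% */
        return fx;
    }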

Once the link effects wireless daemon computes the appropriate statistical parameters, it configures the wlan kernel module directly through the libnetgraph C library available in FreeBSD, and informs the GUI of links between nodes and their parameters for display purposes.

5. DISTRIBUTED EMULATION

The in-kernel emulation mechanism ensures that each CORE virtual image is very lightweight and efficient; however, the applications that are potentially running on the virtual nodes, such as routing daemons, traffic generators, or traffic loggers, even though running independently, need to share the memory and CPU of the host computer. For example, in an emulation environment with all nodes running the OSPF routing daemon available in the Quagga open source package, we were able to instantiate 120 emulated routers on a regular computer. To increase the scalability of the system, we developed the capability to distribute an emulation scenario across multiple FreeBSD host systems, each of them independently emulating part of the larger topology. When using a distributed emulation, each emulated node needs to be configured with the physical machine that will be used to emulate that node. The controller GUI uses this information to compute partial topologies composed of nodes running at individual emulation hosts. The controller GUI then distributes these partial topologies to the emulation hosts, which in turn emulate the partial topologies independently. When a link connects two nodes that are emulated on different FreeBSD hosts, a tunnel is created between the two physical machines to allow data packets to flow between the two emulated nodes. We use a separate C daemon, named Span, to instantiate these tunnels.

5.1 Connecting emulation hosts

The CORE Span tool uses the Netgraph socket facility to bridge emulations running on different machines using a physical network. One way to connect two CORE emulations would be using the RJ45 jack described earlier in this paper. However, this limits the number of connections to the number of Ethernet devices available on the FreeBSD machine, and requires the emulation hosts to be physically collocated in order to be directly connected. Span allows any number of Netgraph sockets to be created and tunnels data using normal TCP/IP sockets between machines. Each Netgraph socket appears as a node in the Netgraph system that can be connected to any emulated virtual image, with a user-space socket on the other end. Span operates by managing the mapping between these various sockets.
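
A minimal sketch of this socket mapping idea follows: data read from a Netgraph data socket is relayed over an ordinary TCP connection to the peer emulation host, where it would be written back into a remote Netgraph socket. This illustrates the mechanism only and is not the actual Span source; the socket node name and the assumption of an already-connected TCP socket are ours.

    /* Sketch: relay frames from a local Netgraph socket node to a remote
     * emulation host over TCP. Illustrative only; not the Span daemon itself. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netgraph.h>
    #include <err.h>
    #include <unistd.h>

    #define FRAME_MAX 2048

    void
    relay_netgraph_to_tcp(int tcp_fd)
    {
        int cs, ds;                 /* control and data sockets */
        u_char buf[FRAME_MAX];
        char hook[128];             /* large enough for a Netgraph hook name */

        /* Create a Netgraph socket node; its data socket (ds) delivers raw
         * frames from whatever emulated node it is connected to. */
        if (NgMkSockNode("span0", &cs, &ds) < 0)
            err(1, "NgMkSockNode");

        for (;;) {
            int n = NgRecvData(ds, buf, sizeof(buf), hook);
            if (n < 0)
                err(1, "NgRecvData");
            /* Forward the frame to the peer host; the remote side would
             * inject it into its own Netgraph socket with NgSendData(). */
            if (write(tcp_fd, buf, n) != n)
                err(1, "write");
        }
    }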

Span also runs on Linux and Windows systems, and sets up a TAP virtual interface as the tunnel endpoint. This allows Linux or Windows machines to participate in the emulated network, as any data sent out the virtual interface goes across a tunnel and into the CORE emulated network.

A different way to connect CORE machines together is by using the Netgraph kernel socket, or ksocket. This allows opening a socket in the kernel that connects directly to another machine's kernel. The sockets appear as Netgraph nodes that can be connected to any emulated node. CORE uses the ksocket to connect together WLAN nodes that belong to the same wireless network, but are emulated on different machines. The WLAN node forwards all data to a connected ksocket without performing the hash table lookup. It also prepends the packet with the source ID of the originating Netgraph node. When receiving data from a ksocket, the remote wlan node uses the source ID tag from the packet for the hash table lookup. This allows emulation of a large wireless network with some wireless nodes distributed over multiple emulation machines.

6. PERFORMANCE

The performance of CORE is largely hardware and scenario dependent. Most questions concern the number of nodes that it can handle. This depends on what processes each of the nodes is running and how many packets are sent around the virtual networks. The processor speed appears to be the principal bottleneck.

Here we consider a typical single-CPU Intel Xeon 3.0 GHz server with 2.5 GB RAM running CORE 3.1 for FreeBSD 4.11. We have found it reasonable to run 30-40 nodes each running Quagga with OSPFv2 and OSPFv3 routing. On this hardware CORE can instantiate 100 or more nodes, but at that point it becomes critical as to what each of the nodes is doing.

Because this software is primarily a network emulator, the more appropriate question is how much network traffic it can handle. In order to test the scalability of the system, we created CORE scenarios consisting of an increasing number of routers linked together, one after the other, to form a chain. This represents a worst-case routing scenario where each packet traverses every hop. At each end of the chain of routers we connected CORE to a Linux machine using an RJ45 node. One of the Linux machines ran the iperf benchmarking utility in server mode, and the other ran the iperf client that connects through the chain of emulated routers. TCP packets are sent as fast as possible to measure the maximum throughput available for a TCP application.

For this test, the links between routers were configured with no bandwidth, delay, or other link restrictions, so these tests did not exercise the packet queuing of the system. The two Ethernet interfaces connected the Linux machines at 100 Mbps full-duplex. Only emulated wired links were used inside of CORE, and by default each emulated router was running the Quagga 0.99.9 routing suite configured with OSPFv2 and OSPFv3 routing.

The iperf utility transmitted data for 10 seconds and printed the throughput measured for each test run. We changed the TCP maximum segment size (MSS) value, which governs the size of the packets transmitted, for four different MSS values: 1500, 1000, 500, and 50. The number of router hops in the emulated network was increased from 1 to 120. The resulting iperf measurements are shown in Figure 4. In Figure 5, we plot the total number of packets per second handled by the entire system. This is the measured end-to-end throughput multiplied by the number of hops and divided by the packet size. This value represents the number of times the CORE system as a whole needed to deal with sending or receiving packets.

Figure 4. iperf Measured Throughput: throughput (Mbps) versus number of hops (1 to 120) for MSS values of 1500, 1000, 500, and 50.

Figure 5. Total Packets per Second: total throughput (packets/sec) versus number of hops (1 to 120) for MSS values of 1500, 1000, 500, and 50.

The measured throughput in Figure 4 shows that the CORE system can sustain maximum transfer rates (for the 100 Mbps link) up to about 30 nodes. At this point the CPU usage reaches its maximum of 100%. Even when emulating 120 nodes, the network was able to forward about 30 Mbps of data.

Figure 5 shows a linear increase of the number of packets per second with the number of hops as the link is saturated. Then the packets-per-second rate levels off at about 300,000 pps. This is where the CPU usage hits 100%. This suggests that the performance of the CORE system is bounded by the number of packet operations per second; other factors, such as the size of the packets and the number of emulated hops, are not the limiting performance factor, as send or receive operations are implemented in the kernel simply as reference transfers.
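
As a rough consistency check between the two figures: at 120 hops with a 1500-byte MSS, the roughly 30 Mbps of end-to-end throughput corresponds to about 30,000,000 / (1500 x 8) ≈ 2,500 packets per second per hop, and 2,500 x 120 hops ≈ 300,000 total packet operations per second, which matches the plateau seen in Figure 5.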

These tests consider only the performance of a single system. The FreeBSD 7.0 version of CORE supports symmetric multiprocessing (SMP) systems, and with CPU usage being the main bottleneck, a multiprocessor system should perform even better. The current version has somewhat limited SMP support, but development of the kernel virtualization continues with the focus on adopting virtual network stacks in the -CURRENT FreeBSD development branch, so a separate patch will not be required. As described in Section 5, we have also added support to distribute the emulation across multiple physical machines, allowing for greater performance; but this introduces a new performance bottleneck: the available resources of the physical networks that tunnel data between emulation hosts.

7. HYBRID SCENARIO

CORE has been used for demonstrations, research and experimentation. One frequent use of CORE is to extend a network of physical nodes when a limited amount of hardware is available. In this section we show a typical use case of CORE. The scenario includes eight Linux routers communicating with 802.11a wireless radios, shown as rectangular black systems in Figure 6. Each Linux router also features an Ethernet port used by CORE as a control channel. CORE has been extended to remotely control these Linux routers, and can govern the actual connectivity of the wireless interfaces by inserting and removing iptables firewall rules in Linux as links are created and broken from the GUI. The identical Quagga OSPFv3-MANET [20] routing protocol code is run on the Linux routers and on the FreeBSD emulated routers. The network is expanded by including six emulated wired routers and ten additional emulated wireless routers, for a total of 24 routing nodes. The wired routers are shown near the top left of Figure 6, and the wireless routers appear in the bottom left of Figure 6. Span is used in this scenario to link together the two physical CORE servers (CORE 1 and CORE 2), each responsible for emulating portions of the network.

Figure 6. Hybrid Scenario (showing the CORE 1 and CORE 2 emulation servers, the Video Server, and the Video Client).

A laptop, labeled "Video Client" in Figure 6, is used to display a video stream transmitted by one of the Linux routers, labeled "Video Server". The video stream first traverses the physical 802.11 network and then enters a Span tunnel that sends the data into one of the CORE machines. The packets are forwarded through the emulated network, first in an OSPFv2 wired network and then into an OSPFv3-MANET wireless network. Finally, the video stream enters another Span tunnel that connects to the virtual interface of the Windows laptop where the video client displays the video. This path is depicted with a green line in Figure 6.

Performance of the video stream can be viewed on the laptop screen as the wireless nodes are moved around, in either the real wireless network or the emulated one. In this scenario we observed that the OSPFv3-MANET routing protocol behaves similarly between the real Linux systems and the emulated FreeBSD nodes, as we would expect from the same code running on both platforms.

CONCLUSION AND FUTURE DIRECTIONS

The CORE network emulator was introduced and briefly compared with other emulation, simulation, and virtualization tools. The CORE GUI and FreeBSD kernel components were described, along with two modes of wireless network emulation and the distribution of an emulation across multiple FreeBSD systems. The performance of the system was characterized with a series of throughput tests. Finally, the practical usability of the system was demonstrated by presenting a hybrid wired-wireless scenario that combined physical and emulated nodes.

The key features of CORE include scalability, ease of use, the potential for running real applications on a real TCP/IP network stack, and the ability to connect the live-running emulation with physical systems.

Future work continues on the CORE tool to make it more modular. The wireless daemon is being improved with better support for pluggable wireless models. Experiments are being performed to merge CORE emulation with existing, validated simulation models for layers 1 and 2. Management of instantiating and running the emulation is being moved to a daemon, away from the monolithic Tcl/Tk GUI. Components of this daemon are being developed to take advantage of Linux virtualization techniques in addition to the existing FreeBSD vimages. The CORE system will be released as open source in the near future.

REFERENCES

[1] "Integrated Multi-Protocol Emulator/Simulator", http://www.tel.fer.hr/imunes/
[2] M. Zec, "Implementing a Clonable Network Stack in the FreeBSD Kernel", USENIX 2003 Proceedings, November 2003.
[3] M. Zec and M. Mikuc, "Operating System Support for Integrated Network Emulation in IMUNES", ACM ASPLOS XI, October 2004.
[4] "The FreeBSD Network Stack Virtualization Project", http://imunes.net/virtnet/
[5] M. Zec, "Network Stack Virtualization", EuroBSDCon 2007, September 2007.
[6] "The Network Simulator - ns-2", http://www.isi.edu/nsnam/ns/
[7] "ns-3 Project", http://www.nsnam.org/
[8] "OPNET Modeler: Scalable Network Simulation", http://www.opnet.com/solutions/network_rd/modeler.html
[9] "Scalable Network Technologies: QualNet Developer", http://www.scalable-networks.com/products/developer.php
[10] "PlanetLab: an open platform for deploying…", http://www.planet-lab.org/
[11] "Emulab - Network Emulation Testbed Home", http://boss.netbed.icics.ubc.ca/
[12] "Mobile Network Emulator (MNE)", http://cs.itd.nrl.navy.mil/work/proteantools/mne.php
[13] "VMware Server", http://www.vmware.com/products/server/
[14] "Microsoft Virtual PC", http://www.microsoft.com/windows/products/winfamily/virtualpc/default.mspx
[15] "Parallels Workstation", http://www.parallels.com/en/workstation/
[16] "Xen Hypervisor", http://www.xen.org/xen/
[17] "The User-Mode Linux Kernel", http://user-mode-linux.sourceforge.net/
[18] "Kernel Based Virtual Machine", http://kvm.qumranet.com/kvmwiki
[19] "OpenVZ Wiki", http://wiki.openvz.org/Main_Page
[20] P. Spagnolo and T. Henderson, "Comparison of Proposed OSPF MANET Extensions", in Proceedings of the IEEE Military Communications Conference (MILCOM), vol. 2, IEEE, October 2006.
