CORE: A Real-Time Network Emulator
4 authors, including Jae H. Kim, The Boeing Company
We evaluate CORE in wired and wireless settings, and compare performance results with those obtained on physical network deployments. We show that CORE scales to network topologies consisting of over a hundred virtual nodes emulated on a typical server computer, sending and receiving traffic totaling over 300,000 packets per second. We demonstrate the practical usability of CORE in a hybrid wired-wireless scenario composed of both physical and emulated nodes, carrying live audio and video streams.

Keywords: Network emulation, virtualization, routing, wireless, MANET

1. INTRODUCTION

The Common Open Research Emulator, or CORE, is a framework for emulating networks on one or more PCs. CORE emulates routers, PCs, and other hosts and simulates the network links between them. Because it is a live-running emulation, these networks can be connected in real time to physical networks and routers. The acronym stems from the initial use of this emulator to study open source routing protocols, but as we describe below, we have extended the capabilities of CORE to support wireless networks.

CORE is based on the open source Integrated Multi-protocol Network Emulator/Simulator (IMUNES) from the University of Zagreb [1]. IMUNES provides a patch to the FreeBSD 4.11 or 7.0 operating system kernel to allow multiple, lightweight virtual network stack instances [2][3][4]. These virtual stacks are interconnected using FreeBSD's Netgraph kernel subsystem. The emulation is controlled by an easy-to-use Tcl/Tk GUI. CORE forked from IMUNES in 2004. Certain pieces were contributed back in 2006, and the entire system will soon be released […]

The remainder of this paper is organized as follows: we present related work in Section 2. Then we provide an overview of CORE's features in Section 3, and highlight the implementation of wireless networking in Section 4 and distributed emulation in Section 5. We then examine the performance of the CORE emulator for both wired and wireless networks in Section 6, present a typical hybrid emulation scenario in Section 7, and end the paper with our conclusions.

2. RELATED WORK

In surveying the available software that allows users to run real applications over emulated networks, we believe that CORE stands out in the following areas: scalability, ease of use, application support, and network emulation features.

Simulation tools, such as ns-2 [6], ns-3 [7], OPNET [8], and QualNet [9], typically run on a single computer and abstract the operating system and protocols into a simulation model for producing statistical analysis of a network system. In contrast, network emulation tools, such as PlanetLab [10], NetBed [11], and MNE [12], often involve a dedicated testbed or connecting real systems under test to specialized hardware devices. CORE is a hybrid of the two types of tools, emulating the network stack of routers or hosts through virtualization, and simulating the links that connect them together. This way it can provide the realism of running live applications on an emulated network while requiring relatively inexpensive hardware.

Machine virtualization tools, such as VMware [13], Virtual PC [14], or Parallels [15], have become increasingly popular, mainly due to the availability of hardware that can drive multiple operating systems at the […]
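CORE's hybrid approach — real network stacks joined by simulated links — can be pictured with a toy model. The sketch below is our illustration only, not CORE or IMUNES code: each emulated link carries a configurable bandwidth and delay, and the cost of moving a frame between two virtual nodes is serialization time plus propagation delay.

```python
# Toy illustration (not CORE source code) of the hybrid emulation idea:
# real per-node state joined by links that simulate bandwidth and delay.
from dataclasses import dataclass, field

@dataclass
class Link:
    bandwidth_bps: float   # simulated link capacity
    delay_s: float         # simulated propagation delay

    def transit_time(self, frame_bytes: int) -> float:
        """Time for one frame to cross this link: serialization + propagation."""
        return frame_bytes * 8 / self.bandwidth_bps + self.delay_s

@dataclass
class Node:
    name: str
    links: dict = field(default_factory=dict)  # neighbor name -> Link

def connect(a: Node, b: Node, bandwidth_bps: float, delay_s: float) -> None:
    """Create a symmetric emulated link between two virtual nodes."""
    link = Link(bandwidth_bps, delay_s)
    a.links[b.name] = link
    b.links[a.name] = link

# Two virtual nodes joined by a 10 Mb/s link with 5 ms delay
n1, n2 = Node("n1"), Node("n2")
connect(n1, n2, bandwidth_bps=10e6, delay_s=0.005)
t = n1.links["n2"].transit_time(frame_bytes=1500)
print(f"1500-byte frame transit time: {t*1000:.2f} ms")
```

In CORE itself this role is played by Netgraph in the kernel rather than by user-space objects; the model above only conveys the division of labor between virtualized stacks and simulated links.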
978-1-4244-2677-5/08/$25.00 ©2008 IEEE
Figure 1. CORE Graphical User Interface
Here we consider a typical single-CPU Intel Xeon 3.0 GHz server with 2.5 GB RAM running CORE 3.1 for FreeBSD 4.11. We have found it reasonable to run 30-40 nodes, each running Quagga with OSPFv2 and OSPFv3 routing. On this hardware CORE can instantiate 100 or more nodes, but at that point it becomes critical as to what each of the nodes is doing.

[Figure 4. iperf Measured Throughput — throughput (Mbps) versus number of hops, for mss=1500, mss=1000, mss=500, and mss=50]
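The shape of Figure 4 — throughput falling with hop count and scaling with MSS — is what one would expect if the host has a roughly fixed budget of packet operations per second, each consumed once per hop. A back-of-the-envelope model (the budget constant here is an assumption for illustration, not a measured value from the paper):

```python
# Rough model: with a fixed budget of packet operations per second, and each
# packet handled once per hop, end-to-end throughput is proportional to MSS
# and inversely proportional to the number of hops.
def throughput_mbps(mss_bytes: int, hops: int, pps_budget: float = 300_000) -> float:
    """Estimated end-to-end throughput given a total packet-ops/sec budget."""
    packets_per_sec = pps_budget / hops   # every hop spends part of the budget
    return packets_per_sec * mss_bytes * 8 / 1e6

for mss in (1500, 1000, 500, 50):
    print(mss, [round(throughput_mbps(mss, h), 1) for h in (10, 50, 100)])
```

The model reproduces the qualitative trends of the figure (halving throughput when hops double, scaling linearly with MSS), not its absolute values.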
Because this software is primarily a network emulator, the more appropriate question is how much network traffic it […] where each packet traverses every hop. At each end of the chain of routers we connected CORE to a Linux machine using an RJ45 node. One of the Linux machines ran the iperf benchmarking utility in server mode, and the other […]
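A measurement run like this is easy to script around iperf's machine-readable report. The sketch below assumes classic iperf 2 with the `-y C` (CSV) output option; the field order shown is our assumption and should be checked against the iperf version in use:

```python
# Parse one CSV report line from iperf 2 run with "-y C".
# Assumed field order (verify against your iperf version):
# timestamp,local_ip,local_port,remote_ip,remote_port,id,interval,bytes,bits_per_sec
def parse_iperf_csv(line: str) -> dict:
    fields = line.strip().split(",")
    return {
        "interval": fields[6],          # e.g. "0.0-10.0" seconds
        "bytes": int(fields[7]),        # bytes transferred in the interval
        "mbps": float(fields[8]) / 1e6, # reported bits/sec, converted to Mb/s
    }

# A hypothetical report line for a 10-second, 100 Mb/s run
sample = "20080101120000,10.0.0.1,5001,10.0.0.2,51000,3,0.0-10.0,125000000,100000000"
result = parse_iperf_csv(sample)
print(f"{result['mbps']:.1f} Mbps over {result['interval']} s")
```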
system is bounded by the number of packet operations per second; other factors, such as the size of the packets and the number of emulated hops, are not the limiting performance factor, as send or receive operations are implemented in the kernel simply as reference transfers.

These tests consider only the performance of a single system. The FreeBSD 7.0 version of CORE supports symmetric multiprocessing (SMP) systems, and with CPU usage being the main bottleneck, a multiprocessor system should perform even better. The current version has somewhat limited SMP support, but development of the kernel virtualization continues with the focus on adopting virtual network stacks in the -CURRENT FreeBSD development branch, so a separate patch will not be required. As described in Section 5, we have also added support to distribute the emulation across multiple physical machines, allowing for greater performance; but this introduces a new performance bottleneck: the available resources of the physical networks that tunnel data between emulation hosts.

7. HYBRID SCENARIO

CORE has been used for demonstrations, research, and experimentation. One frequent use of CORE is to extend a network of physical nodes when a limited amount of hardware is available. In this section we show a typical use case of CORE. The scenario includes eight Linux routers communicating with 802.11a wireless radios, shown as rectangular black systems in Figure 6. Each Linux router also features an Ethernet port used by CORE as a control channel. CORE has been extended to remotely control these Linux routers, and can govern the actual connectivity of the wireless interfaces by inserting and removing iptables firewall rules in Linux as links are created and broken from the GUI. The identical Quagga OSPFv3-MANET [20] routing protocol code is run on the Linux routers and on the FreeBSD emulated routers. The network is expanded by including six emulated wired routers and ten additional emulated wireless routers, for a total of 24 routing nodes. The wired routers are shown near the top left of Figure 6, and the wireless routers appear in the bottom left of Figure 6. Span is used in this scenario to link together the two physical CORE servers (CORE 1 and CORE 2), each responsible for emulating portions of the network.

A laptop, labeled "Video Client" in Figure 6, is used to display a video stream transmitted by one of the Linux routers labeled "Video Server". The video stream first traverses the physical 802.11 network and then enters a Span tunnel that sends the data into one of the CORE machines. The packets are forwarded through the emulated network, first in an OSPFv2 wired network and then into an OSPFv3-MANET wireless network. Finally, the video stream enters another Span tunnel that connects to the virtual interface of the Windows laptop where the video client displays the video. This path is depicted with a green line in Figure 6.

Performance of the video stream can be viewed on the laptop screen as the wireless nodes are moved around, in
either the real wireless network or the emulated one. In this scenario we observed that the OSPFv3-MANET routing protocol behaves similarly between the real Linux systems and the emulated FreeBSD nodes, as we would expect from the same code running on both platforms.

8. CONCLUSION

The key features of CORE include scalability, ease of use, the potential for running real applications on a real TCP/IP network stack, and the ability to connect the live running emulation with physical systems.

Future work continues on the CORE tool to make it more modular. The wireless daemon is being improved with better support for pluggable wireless models. Experiments are being performed to merge CORE emulation with existing, validated simulation models for layers 1 and 2. Management of instantiating and running the emulation is being moved to a daemon, away from the monolithic Tcl/Tk GUI. Components of this daemon are being developed to take advantage of Linux virtualization techniques in addition to the existing FreeBSD vimages. The CORE system will be released as open source in the near future.

REFERENCES

[3] M. Zec and M. Mikuc, "Operating System Support for Integrated Network Emulation in IMUNES," ACM ASPLOS XI, October 2004.

[8] "OPNET Modeler: Scalable Network Simulation," http://www.opnet.com/solutions/network_rd/modeler.html

[9] "Scalable Network Technologies: QualNet Developer," http://www.scalable-networks.com/products/developer.php

[10] "PlanetLab: an open platform for deploying…," http://www.planet-lab.org/

[18] "Kernel Based Virtual Machine," http://kvm.qumranet.com/kvmwiki

[19] "OpenVZ Wiki," http://wiki.openvz.org/Main_Page

[20] P. Spagnolo and T. Henderson, "Comparison of Proposed OSPF MANET Extensions," in Proceedings of the IEEE Military Communications Conference (MILCOM), vol. 2, Oct. 2006.